\section{Introduction} Work stealing is an efficient and popular paradigm for scheduling multithreaded computations. While its practical benefits have been known for decades \cite{BurtonSl81,Halstead84} and several researchers have found applications of the paradigm \cite{AroraBlPl98,DinanLaSaKrNi09,KarpZh93,LeisersonScSu15}, Blumofe and Leiserson \cite{BlumofeLe99} were the first to give a theoretical analysis of work stealing. Their scheduler executes a fully strict (i.e., well-structured) multithreaded computation on $P$ processors within an expected time of $T_1/P+O(T_\infty)$, where $T_1$ is the minimum serial execution time of the multithreaded computation (the \textit{work} of the computation) and $T_\infty$ is the minimum execution time with an infinite number of processors (the \textit{span} of the computation). In multithreaded computations, it sometimes occurs that a processor performs some computations and stores the results in its cache. A work-stealing algorithm could therefore benefit from exploiting locality, i.e., having processors work on their own work as much as possible. Indeed, an experiment by Acar et al. \cite{AcarBlBl00} demonstrates that exploiting locality can improve the performance of the work-stealing algorithm by up to 80\%. Similarly, Guo et al. \cite{GuoZhCa10} found that locality-aware scheduling can achieve up to a 2.6$\times$ speedup over locality-oblivious scheduling. Accordingly, several work-stealing strategies that exploit locality have been proposed. Hierarchical work stealing, considered by Min et al. \cite{MinIaYe11} and Quintin and Wagner \cite{QuintinWa10}, contains mechanisms that find the nearest victim thread to preserve locality and determine the amount of work to steal based on the locality of the victim thread. More recently, Paudel et al. \cite{PaudelTaAm13} explored a selection of tasks based on application-level task locality rather than hardware memory topology. 
In this paper, we investigate a variant of the work-stealing algorithm that we call the \textit{localized work-stealing algorithm}. In the localized work-stealing algorithm, when a processor is free, it makes a steal attempt to get back its own work. We call this type of steal a \textit{steal-back}. We show that the expected running time of the algorithm is $T_1/P+O(T_\infty P)$, and that under the ``even distribution of free agents assumption'', the expected running time of the algorithm is $T_1/P+O(T_\infty\lg P)$. In addition, we obtain another running-time bound based on ratios between the sizes of serial tasks in the computation. If $M$ denotes the maximum ratio between the largest and the smallest serial tasks of a processor after removing a total of $O(P)$ serial tasks across all processors from consideration, then the expected running time of the algorithm is $T_1/P+O(T_\infty M)$. This paper is organized as follows. Section \ref{sec:localizedsetting} introduces the setting that we consider throughout the paper. Section \ref{sec:delayseq} analyzes the localized work-stealing algorithm using the delay-sequence argument. Section \ref{sec:amortization} analyzes the algorithm using amortization arguments. Section \ref{sec:localizedvariants} considers variants of the localized work-stealing algorithm. Finally, Section \ref{sec:conclusion} concludes and suggests directions for future work. \section{Localized Work-Stealing Algorithm} \label{sec:localizedsetting} Consider a setting with $P$ processors. Each processor owns some pieces of work, which we call \textit{serial tasks}. Each serial task takes a positive integer amount of time to complete, which we define as the \textit{size} of the serial task. We assume that different serial tasks can be done in parallel and model the work of each processor as a binary tree whose leaves are the serial tasks of that processor. 
The trees are balanced in terms of the number of serial tasks on each branch, but the order in which the tasks occur in the binary tree is assumed to be given to us. We then connect the $P$ roots as a binary tree of height $\lg P$, so that we obtain a larger binary tree whose leaves are the serial tasks of all processors. As usual, we define $T_1$ as the work of the computation, and $T_\infty$ as the span of the computation. The span $T_\infty$ corresponds to the height of the aforementioned larger binary tree plus the size of the largest serial task. In addition, we define $T_{\infty}'$ as the height of the tree not including the part connecting the $P$ processors of height $\lg P$ at the top or the serial tasks at the bottom. Since $T_{\infty}'$ corresponds to a smaller part of the tree than $T_{\infty}$, we have $T_{\infty}'<T_{\infty}$. The randomized work-stealing algorithm \cite{BlumofeLe99} suggests that whenever a processor is free, it should ``steal'' randomly from a processor that still has work left to do. In our model, stealing means taking away one of the two main branches of the tree corresponding to a particular processor, in particular, the branch that the processor is not working on. The randomized work-stealing algorithm performs $O(P(T_\infty+\lg(1/\epsilon)))$ steal attempts with probability at least $1-\epsilon$, and the execution time is $T_1/P+O(T_\infty+\lg P+\lg(1/\epsilon))$ with probability at least $1-\epsilon$. This paper investigates a localized variant of the work-stealing algorithm. In this variant, whenever a processor is free, it first checks whether some other processors are working on its work. If so, it ``steals back'' randomly only from these processors. Otherwise, it steals randomly as usual. We call the two types of steal a \textit{general steal} and a \textit{steal-back}. The intuition behind this variant is that sometimes a processor performs some computations and stores the results in its cache. 
Therefore, a work-stealing algorithm could potentially benefit from exploiting locality, i.e., having processors work on their own work as much as possible. We make a simplifying assumption that each processor maintains a list of the other processors that are working on its work. When a general steal occurs, the stealer adds its name to the list of the owner of the serial task that it has just stolen (not necessarily the same as the processor from which it has just stolen). For example, if processor $P_1$ steals a serial task owned by processor $P_2$ from processor $P_3$, then $P_1$ adds its name to $P_2$'s list (and not $P_3$'s list). When a steal-back is unsuccessful, the owner removes the name of the target processor from its list, since the target processor has finished the owner's work. An example of an execution of the localized work-stealing algorithm can be found in \cite{Suksompong14}. We assume that the overhead for maintaining the list and dealing with contention for steal-backs is constant. This assumption is reasonable because adding (and later removing) the name of a processor to a list is done when a general steal occurs, and hence can be amortized against general steals. Choosing a random processor from the list to steal back from takes constant time. When multiple processors attempt to steal back from the same processor simultaneously, we allow an arbitrary processor to succeed and the remaining processors to fail, and hence do not require extra processing time. \section{Delay-Sequence Argument} \label{sec:delayseq} In this section, we apply the delay-sequence argument to establish an upper bound on the running time of the localized work-stealing algorithm. The delay-sequence argument is used in \cite{BlumofeLe99} to show that the randomized work-stealing algorithm performs $O(P(T_\infty+\lg(1/\epsilon)))$ steal attempts with probability at least $1-\epsilon$. 
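Before turning to the analysis, the owner-list protocol of Section~\ref{sec:localizedsetting} can be illustrated with a short sketch. This is purely illustrative and not part of the algorithm's specification; the class and method names (\texttt{OwnerLists}, \texttt{workers\_on}, and so on) are our own.

```python
import random

class OwnerLists:
    """Illustrative sketch of the owner-list bookkeeping.

    workers_on[i] is the set of processors currently holding
    work owned by processor i.
    """

    def __init__(self, num_procs):
        self.workers_on = [set() for _ in range(num_procs)]

    def general_steal(self, stealer, task_owner):
        # The stealer records itself on the list of the *owner* of the
        # stolen task, which need not be the victim it stole from.
        self.workers_on[task_owner].add(stealer)

    def steal_back(self, owner, still_has_owners_work):
        """Attempt a steal-back; return the chosen target, or None if
        the list is empty (fall back to a general steal) or the
        attempt fails."""
        candidates = self.workers_on[owner]
        if not candidates:
            return None
        target = random.choice(sorted(candidates))
        if not still_has_owners_work(target):
            # Unsuccessful steal-back: the target has finished the
            # owner's work, so the owner removes it from the list.
            candidates.discard(target)
            return None
        return target
```

For instance, when $P_1$ steals a task owned by $P_2$ from $P_3$, the call \texttt{general\_steal(stealer=1, task\_owner=2)} adds $P_1$ to $P_2$'s list only, matching the example above.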
We show that under the ``even distribution of free agents assumption'', the expected running time of the algorithm is $T_1/P+O(T_\infty\lg P)$. We also show a weaker bound: without the assumption, the expected running time of the algorithm is $T_1/P+O(T_\infty P)$. Since the amount of work done in a computation is always given by $T_1$, independent of the sequence of steals, we focus on estimating the number of steals. We start with the following definition. \begin{definition} The \textit{even distribution of free agents assumption} is the assumption that when there are $k$ \textit{owners} left (and thus $P-k$ \textit{free agents}), the $P-k$ free agents are evenly distributed across the work of the $k$ owners. That is, each owner has $P/k$ processors working on its work. \end{definition} While this assumption might not hold in the localized work-stealing algorithm as presented here, it is intuitively more likely to hold under the hashing modification presented in Section \ref{sec:localizedvariants}. When the assumption does not hold, we obtain a weaker bound as given in Theorem \ref{thm:distweakbound}. Before we begin the proof of our theorem, we briefly summarize the delay-sequence argument as used by Blumofe and Leiserson \cite{BlumofeLe99}. The intuition behind the delay-sequence argument is that in a random process in which multiple paths of the process occur simultaneously, such as work stealing, there exists some path that finishes last. We call this path the \textit{critical path}. The goal of the delay-sequence argument is to show that it is unlikely that the process takes a long time to finish by showing that it is unlikely that the critical path takes a long time to finish. To this end, we break down the process into \textit{rounds}. We define a round so that in each round, there is a constant probability that the critical path is shortened. (In the case of work stealing, this means there exists a steal on the critical path.) 
This will allow us to conclude that there are not too many rounds, and consequently not too many steals in the process. \begin{theorem} \label{thm:evendist} With the even distribution of free agents assumption, the number of steal attempts is $O(P\lg P(T_\infty+\lg(P/\epsilon)))$ with probability at least $1-\epsilon$, and the expected number of steal attempts is $O(P\lg PT_\infty)$. \end{theorem} \begin{proof} Consider any processor. At timestep $t$, let $S^t$ denote the number of general steals occurring at that timestep, and let $X^t$ be the random variable \[ X^t= \begin{cases} 1/P^t,& \text{if the processor can steal back from } P^t \text{ other processors};\\ 0, & \text{if the processor is working.} \end{cases} \] We define a \textit{round} to be a consecutive number of timesteps $t$ such that \[\sum_t\left(S^t+PX^t\right)\geq P,\] and such that this inequality is not satisfied if we remove the last timestep from the round. Note that this condition is analogous to the condition of a round in \cite{BlumofeLe99}, where the number of steals is between $3P$ and $4P$. Here we have the term $S^t$ corresponding to general steals and the term $PX^t$ corresponding to steal-backs. We define the \textit{critical path} of the processor to be the path from the top of its binary tree to the serial task of the processor whose execution finishes last. We show that any round has a probability of at least $1-1/e$ of reducing the length of the critical path. We compute the probability that a round does not reduce the length of the critical path. Each general steal has a probability of at least $1/P$ of stealing off the critical path and thus reducing its length. Each steal-back by the processor has a probability of $1/P^t$ of reducing the length of the critical path. 
At timestep $t$, the probability of not reducing the length of the critical path is therefore \[ \left(1-\frac{1}{P}\right)^{S^t}\left(1-X^t\right)\leq e^{-\frac{S^t}{P}-X^t}, \] where we used the inequality $1+x\leq e^x$ for all real numbers $x$. Therefore, the probability of not reducing the length of the critical path during the whole round is at most \[ \prod_t e^{-\frac{S^t}{P}-X^t} = e^{-\sum_t\left(\frac{S^t}{P}+X^t\right)} \leq e^{-1}. \] Note that this bound remains true even when there are concurrent thieves, since we are concerned with the probability that in a given round the length of the critical path is not reduced. If there are concurrent thieves trying to make a steal on the critical path, one of them will be successful, and the other unsuccessful thieves do not play a role in our analysis. With this definition of a round, we can now apply the delay-sequence argument as in \cite{BlumofeLe99}. Note that in a single timestep $t$, we have $S^t\leq P$ and $PX^t\leq P$. Consequently, in every round, we have $P\leq\sum_t\left(S^t+PX^t\right)\leq 3P.$ Suppose that over the course of the whole execution, we have $\sum_t\left(S^t+PX^t\right)\geq 3PR$, where $R=cT_\infty + \lg(1/\epsilon)$ for some sufficiently large constant $c$. Then there must be at least $R$ rounds. Since each round has a probability of at most $e^{-1}$ of not reducing the length of the critical path, the delay-sequence argument yields that the probability that $\sum_t\left(S^t+PX^t\right)\geq 3PR=\Theta(P(T_\infty+\lg(1/\epsilon)))$ is at most $\epsilon$. We apply the same argument to every processor. Suppose without loss of generality that processor 1's work is completed first, then processor 2's work, and so on, up to processor $P$'s work. Let $S_i$ denote the number of general steals up to the timestep when processor $i$'s work is completed, and let $X_i^t$ denote the value of the random variable $X^t$ corresponding to processor $i$. 
In particular, $S_P$ is the total number of general steals during the execution, which we also denote by $S$. We have \[\text{Pr}\left[S_i+\sum_tPX_i^t\geq \Theta(P(T_\infty+\lg(1/\epsilon)))\right]\leq \epsilon.\] Now we use our even distribution of free agents assumption. This means that when processor $i$ steals back, there are at most $(i-1)/(P-i+1)$ processors working on its work. Hence $X_i^t\geq (P-i+1)/(i-1)$ whenever $X_i^t\neq 0$. Letting $W_i$ be the number of steal-backs performed by processor $i$, we have \[\text{Pr}\left[S_i+\frac{P(P-i+1)}{i-1}W_i\geq \Theta(P(T_\infty+\lg(1/\epsilon)))\right]\leq \epsilon.\] For processor $2\leq i\leq P-1$, this says \[\text{Pr}\left[\dfrac{i-1}{P(P-i+1)}S_i+W_i\geq \Theta\left(\dfrac{i-1}{P-i+1}(T_\infty+\lg(1/\epsilon))\right)\right]\leq \epsilon.\] In particular, we have \[\text{Pr}\left[W_i\geq \Theta\left(\dfrac{i-1}{P-i+1}(T_\infty+\lg(1/\epsilon))\right)\right]\leq \epsilon.\] For processor $P$, we have \[\text{Pr}\left[S+\frac{P}{P-1}W_P\geq \Theta(P(T_\infty+\lg(1/\epsilon)))\right]\leq \epsilon.\] Since $P\geq P-1$, we have \[\text{Pr}\left[S+W_P\geq \Theta(P(T_\infty+\lg(1/\epsilon)))\right]\leq \epsilon.\] Since $\sum_{i=2}^{P-1}\frac{i-1}{P-i+1}$ grows as $P\lg P$, adding up the estimates for each of the $P$ processors and using the union bound, we have \[\text{Pr}\left[S+\sum_{i=1}^PW_i\geq \Theta(P\lg P(T_\infty+\lg(1/\epsilon)))\right]\leq P\epsilon.\] Substituting $\epsilon$ with $\epsilon/P$ yields the desired bound. Since the tail of the distribution decreases exponentially, the expectation bound follows. \end{proof} The bound on the execution time follows from Theorem \ref{thm:evendist}. \begin{theorem} With the even distribution of free agents assumption, the expected running time, including scheduling overhead, is $T_1/P+O(T_\infty\lg P)$. Moreover, for any $\epsilon>0$, with probability at least $1-\epsilon$, the execution time on $P$ processors is $T_1/P+O(\lg P(T_\infty+\lg(P/\epsilon)))$. 
\end{theorem} \begin{proof} The amount of work is $T_1$, and Theorem \ref{thm:evendist} gives a bound on the number of steal attempts. We add up the two quantities and divide by $P$ to complete the proof. \end{proof} Without the even distribution of free agents assumption, we obtain a weaker bound, as the following theorem shows. \begin{theorem} \label{thm:distweakbound} The number of steal attempts is $O(P^2(T_\infty+\lg(P/\epsilon)))$ with probability at least $1-\epsilon$. \end{theorem} \begin{proof} We apply a similar analysis using the delay-sequence argument as in Theorem \ref{thm:evendist}. The difference is that here we have $X_i^t\geq 1/P$ instead of $X_i^t\geq (P-i+1)/(i-1)$. Hence, instead of \[\text{Pr}\left[S_i+\frac{P(P-i+1)}{i-1}W_i\geq \Theta(P(T_\infty+\lg(1/\epsilon)))\right]\leq \epsilon,\] we have \[\text{Pr}\left[S_i+W_i\geq \Theta(P(T_\infty+\lg(1/\epsilon)))\right]\leq \epsilon.\] The rest of the analysis proceeds using the union bound as in Theorem \ref{thm:evendist}. \end{proof} Again, the bound on the execution time follows from Theorem \ref{thm:distweakbound}. \begin{theorem} The expected running time of the localized work-stealing algorithm, including scheduling overhead, is $T_1/P+O(T_\infty P)$. Moreover, for any $\epsilon>0$, with probability at least $1-\epsilon$, the execution time on $P$ processors is $T_1/P+O(P(T_\infty+\lg(P/\epsilon)))$. \end{theorem} \begin{proof} The amount of work is $T_1$, and Theorem \ref{thm:distweakbound} gives a bound on the number of steal attempts. We add up the two quantities and divide by $P$ to complete the proof. \end{proof} \begin{remark} In the delay-sequence argument, it is not sufficient to consider the critical path of only one processor (e.g., the processor that finishes last.) For example, suppose that there are 3 processors, $P_1,P_2$, and $P_3$. 
$P_1$ owns 50 serial tasks of size 1 and 1 serial task of size 100, $P_2$ owns 1 serial task of size 1 and 1 serial task of size 1000, and $P_3$ owns no serial task. At the beginning of the execution, $P_3$ has a probability of 1/2 of stealing from $P_1$. If it steals from $P_1$ and gets stuck with the serial task of size 100, $P_1$ will perform several steal-backs from $P_3$, while the critical path is on $P_2$'s subtree. Hence, the steal-backs by $P_1$ do not contribute toward reducing the length of the critical path. \end{remark} We briefly discuss the scalability of our localized work-stealing strategy. The bound $T_P \leq T_1/P + O(T_{\infty})$ provided by Blumofe and Leiserson \cite{BlumofeLe99} means that when $P \ll T_1/T_{\infty}$, we achieve linear speedup, i.e., $T_P \approx T_1/P$. Indeed, when $P \ll T_1/T_{\infty}$, we have that $T_{\infty}\ll T_1/P$, which implies that the term $T_1/P$ is the dominant term in the sum $T_1/P + O(T_{\infty})$. On the other hand, for our bound of $T_P \leq T_1/P + O(T_{\infty} P)$, when $P \ll \sqrt{T_1/T_{\infty}}$, we have that $T_{\infty}P\ll T_1/P$, and hence the term $T_1/P$ dominates in the sum $T_1/P + O(T_{\infty} P)$. As a result, we achieve linear speedup in localized work stealing when $P \ll \sqrt{T_1/T_{\infty}}$. In other words, we have square-rooted the effective parallelism. Thus the application scales, but not as readily as in vanilla randomized work stealing. \section{Amortization Analysis} \label{sec:amortization} In this section, we apply amortization arguments to obtain bounds on the running time of the localized work-stealing algorithm. We show that if $M$ denotes the maximum ratio between the largest and the smallest serial tasks of a processor after removing a total of $O(P)$ serial tasks across all processors from consideration, then the expected running time of the algorithm is $T_1/P+O(T_\infty M)$. We begin with a simple bound on the number of steal-backs. 
\begin{theorem} The number of steal-backs is at most $T_1+O(PT_\infty)$ with high probability. \end{theorem} \begin{proof} Every successful steal-back can be amortized by the work done by the stealer in the timestep following the steal-back. Every unsuccessful steal-back can be amortized by a general steal. Indeed, recall our assumption that after each unsuccessful steal-back, the target processor is removed from the owner's list. Hence each general steal can generate at most one unsuccessful steal-back. Since there are at most $O(PT_\infty)$ general steals with high probability, we obtain the desired bound. \end{proof} The next theorem amortizes each steal-back against general steals, using the height of the tree to bound the number of steal-backs generated by each general steal. \begin{theorem} \label{steal-back} Let $N$ denote the number of general steals in the computation, and let $T_\infty'$ denote the height of the tree not including the part connecting the $P$ processors of height $\lg P$ at the top or the serial tasks at the bottom. (In particular, $T_\infty'<T_\infty.$) Then there are at most $T_\infty'N$ steal-back attempts. \end{theorem} \begin{proof} Suppose that a processor $P_i$ steals back from another processor $P_j$. This means that earlier, $P_j$ performed a general steal on $P_i$, which resulted in this steal-back. We amortize the steal-back against the general steal. Each general steal generates at most $T_\infty'$ steal-backs (or $T_\infty'+1$, to be more precise, since there can be an unsuccessful steal-back after $P_j$ completed all of $P_i$'s work and $P_i$ erased $P_j$'s name from its list). Since there are $N$ general steals in our computation, there are at most $T_\infty'N$ steal-back attempts. After $P_j$ performed the general steal on $P_i$, it is possible that some other processor $P_k$ makes a general steal on $P_j$. This does not hurt our analysis. 
When $P_i$ steals back from $P_k$, we amortize the steal-back against the general steal that $P_k$ makes on $P_j$, not the general steal that $P_j$ makes on $P_i$. \end{proof} Since there are at most $O(PT_\infty)$ general steals with high probability, Theorem \ref{steal-back} shows that there are at most $O(T_\infty'PT_\infty)$ steals in total with high probability. The next theorem again amortizes each steal-back against general steals, but this time also using the sizes of the serial tasks to bound the number of steal-backs generated by each general steal. \begin{theorem} \label{ratio} Define $N$ and $T_\infty'$ as in Theorem \ref{steal-back}, and let $X$ be any positive integer. Remove a total of at most $X$ serial tasks from consideration. (For example, it is a good idea to exclude the largest or the smallest serial tasks.) For each processor $i$, let $M_i$ denote the ratio between its largest and smallest serial tasks after the removal. Let $M=\max_i M_i$. Then the total number of steal-back attempts is $O(N\min(M,T_\infty'))+T_\infty'X$. \end{theorem} \begin{proof} There can be at most $T_\infty'X$ steal-backs performed on subtrees that include one of the $X$ serial tasks, since each subtree has height at most $T_\infty'$. Consider any other steal-back that processor $P_i$ performs on processor $P_j$. It is performed against a subtree that does not include one of the $X$ serial tasks. Therefore, it obtains at least $1/(M+1)$ of the total work in that subtree, leaving at most $M/(M+1)$ of the total work in $P_j$'s subtree. We amortize the steal-back against the general steal that $P_j$ performed on $P_i$ earlier. How many steal-backs can that general steal generate? We first assume that there are no general steals performed on $P_i$ or $P_j$ during the steal-backs. Then, $P_i$ can only steal back at most half of $P_j$'s work (since $P_j$ is working all the time, and thus will finish half of its work by the time $P_i$ steals half of its work). 
To obtain the estimate, we solve for $K$ such that \[\left(\dfrac{M}{M+1}\right)^K=\dfrac12,\] and we obtain \[K=\dfrac{\lg 2}{\lg(M+1)-\lg(M)}.\] By integration, we have \[\int_M^{M+1}\dfrac{1}{M+1} dx < \int_M^{M+1}\dfrac{1}{x}dx < \int_M^{M+1}\dfrac{1}{M}dx,\] so that \[\dfrac{1}{M+1}<\ln(M+1)-\ln(M)<\dfrac{1}{M},\] or \[M<\dfrac{1}{\ln(M+1)-\ln(M)}<M+1.\] Since $\lg$ and $\ln$ differ by only a constant factor, $K$ grows as $O(M)$. This means that each general steal is charged at most $O(M)$ steal-backs. Combined with the estimate involving $T_\infty'$ from Theorem \ref{steal-back}, we have the desired bound, assuming that there are no general steals performed on $P_i$ or $P_j$ during these steal-backs. Now we show that this last assumption is in fact unnecessary. That is, if there are general steals performed on $P_i$ or $P_j$ during these steal-backs, our estimate still holds. If a general steal is performed on $P_i$ after $P_i$ steals back from $P_j$, we amortize this steal-back against this general steal instead of against the general steal that $P_j$ made on $P_i$. Since each general steal can be amortized against in this way by at most one steal-back, our estimate holds. On the other hand, if a general steal is performed on $P_j$, then the steal-backs that $P_i$ has performed on $P_j$ become an even higher proportion of $P_j$'s work, and the remaining steal-backs proceed as usual. So our estimate also holds in this case. \end{proof} Applying Theorem \ref{ratio}, we may choose $O(P)$ serial tasks to exclude from the computation of $M$ without paying any extra ``penalty'', since the penalty $O(PT_\infty')$ is of the same order as the bound on the number of general steals. After we have excluded these serial tasks, if $M$ turns out to be constant, we obtain the desired $O(PT_\infty)$ bound on the number of steal-backs. The next theorem formalizes this fact. 
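Before stating it, the estimate $K=O(M)$ derived in the proof above admits a quick numerical sanity check (a sketch for illustration only; the function name is ours). The integral bound pins $K$ between $M\ln 2$ and $(M+1)\ln 2$:

```python
import math

def stealbacks_to_halve(M):
    """K such that (M/(M+1))**K = 1/2: the number of steal-backs, each
    leaving at most a fraction M/(M+1) of the subtree's work, needed
    before the victim's work is halved."""
    return math.log(2) / (math.log(M + 1) - math.log(M))

# The integral bound 1/(M+1) < ln(M+1) - ln(M) < 1/M gives
#     M * ln(2) < K < (M+1) * ln(2),
# so K grows linearly in M, i.e., K = O(M).
```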
\begin{theorem} Define $N$ and $T_\infty'$ as in Theorem \ref{steal-back}, and remove any $O(P)$ serial tasks from consideration. For each processor $i$, let $M_i$ denote the ratio between its largest and the smallest serial tasks after the removal. Let $M=\max_i M_i$. Then the expected execution time on $P$ processors is $T_1/P+O(T_\infty\min(M,T_\infty'))$. \end{theorem} \begin{proof} The amount of work is $T_1$, and Theorem \ref{ratio} gives a bound on the number of steal-back attempts in terms of the number of steal attempts. Since we know that the expected number of steal attempts is $O(PT_\infty)$, the expected number of steal-back attempts is $O(PT_\infty\min(M,T_\infty'))$. We add this to the amount of work and divide by $P$ to complete the proof. \end{proof} \begin{remark} In the general case, it is not sufficient to amortize the steal-backs against the general steals. That is, there can be (asymptotically) more steal-backs than general steals, as is shown by the following example. Suppose that the adversary has control over the general steals. When there are $k$ owners left, the adversary picks one of them, say $P_i$. The other $k-1$ owners are stuck on a large serial task while $P_i$'s task is being completed. The $P-k$ free agents perform general steals so that $P_i$'s tree is split evenly (in terms of the number of serial tasks, not the actual amount of work) among the $P-k+1$ processors. Then $P_i$ finishes its work, while the other $P-k$ processors are stuck on a large serial task. $P_i$ performs repeated steal-backs on the $P-k$ processors until each of them is only down to its large serial task. Then they finish, and we are down to $k-1$ owners. In this case, $O(P^2T_\infty)$ steal-backs are performed, but only $O(P^2)$ general steals. In particular, it is not sufficient to use the bound on the number of general steals as a ``black box'' to bound the number of steal-backs. We still need to use the fact that the general steals are random. 
\end{remark} \section{Other Strategies} \label{sec:localizedvariants} In this section, we consider two variants of the localized work-stealing algorithm. The first variant, hashing, is designed to alleviate the problem of pile-up in the localized work-stealing algorithm. It gives each owner that still has work left an equal probability of being the target of a general steal. In the second variant, mugging, a steal-back takes all or almost all of the work of the processor being stolen from. A simple amortization argument yields an expected number of steals of $O(PT_\infty)$. \subsection*{Hashing} Intuitively, the way in which the general steals are set up in the localized work-stealing algorithm encourages pile-up on certain processors' work. Indeed, if there are several processors working on processor $P_1$'s work, the next general steal is more likely to get $P_1$'s work, in turn further increasing the number of processors working on $P_1$'s work. A possible modification of the general steal, which we call \textit{hashing}, operates as follows: first choose an owner uniformly at random among the owners who still have work left, then choose a processor that is working on that owner's work uniformly at random. Loosely speaking, this modification helps in the critical path analysis both with regard to the general steals and to the steal-backs. Previously, if there are $k$ owners left, a general steal has a $\dfrac{k}{P}$ probability of hitting one of the $k$ remaining critical paths. Now, suppose there are $P_1,P_2,\ldots,P_k$ processors working on the $k$ owners' work, where $P_1+\ldots+P_k=P$. The probability of hitting one of the critical paths is \[\dfrac{1}{k}\left(\dfrac{1}{P_1}+\ldots+\dfrac{1}{P_k}\right)\geq \dfrac{k}{P}\] by the arithmetic-harmonic mean inequality \cite{Gwanyama04}. Also, the modified algorithm chooses the owner randomly, giving each owner an equal probability of being stolen from. 
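The inequality above is easy to verify numerically. The following sketch (exact rational arithmetic; the function names are ours) computes the hit probability of a hashed general steal for an arbitrary split of the $P$ processors into $k$ groups:

```python
from fractions import Fraction

def hashed_hit_probability(group_sizes):
    """Probability that a hashed general steal hits one of the k
    critical paths: choose an owner uniformly at random (1/k), then a
    processor working on that owner's work uniformly at random
    (1/P_i)."""
    k = len(group_sizes)
    return Fraction(1, k) * sum(Fraction(1, p) for p in group_sizes)

def uniform_hit_probability(group_sizes):
    """Baseline k/P hit probability of an unhashed general steal."""
    return Fraction(len(group_sizes), sum(group_sizes))
```

Equality holds exactly when the groups are equal: for the split \texttt{[2, 2, 2, 2]} both probabilities are $1/2$, while a skewed split such as \texttt{[1, 1, 6]} gives $13/18 > 3/8$, illustrating that hashing can only help.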
\subsection*{Mugging} A possible modification of the steal-back, which we call \textit{mugging}, operates as follows: instead of $P_i$ taking only the top thread from $P_j$'s deque during a steal-back (i.e., half the tree), $P_i$ takes either (1) the whole deque, except for the thread that $P_j$ is working on; or (2) the whole deque, including the thread that $P_j$ is working on (in effect preempting $P_j$). Figure \ref{fig:mugging} shows the deque of $P_j$ in each of the cases. Figure \ref{fig:mugging}(a) corresponds to the unmodified case, Figure \ref{fig:mugging}(b) to case (1), and Figure \ref{fig:mugging}(c) to case (2). The yellow threads are the ones that $P_i$ steals from $P_j$, while the white threads are the ones that $P_j$ is working on. In Figure \ref{fig:mugging}(c), the bottom thread is preempted by $P_i$'s steal. In both modifications here, each general steal can generate at most one steal-back. Therefore, the expected number of steal-backs is $O(PT_\infty)$, and the expected number of total steals is also $O(PT_\infty)$. 
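The three flavours of steal differ only in how much of the victim's deque they take. The following sketch models this (the deque orientation, bottom thread first, and the mode names are our own modelling choices, not part of the algorithm's specification):

```python
from collections import deque

def steal_from(victim, mode):
    """Remove threads from the victim's deque and return them.

    The deque holds threads from bottom (index 0, the thread the
    victim is working on) to top.  Modes:
      "top"     -- ordinary work stealing: take only the topmost thread;
      "mug"     -- mugging variant (1): take everything except the
                   bottom thread;
      "preempt" -- mugging variant (2): take the whole deque,
                   preempting the victim's current thread as well.
    """
    if mode == "top":
        return [victim.pop()] if len(victim) > 1 else []
    if mode == "mug":
        stolen = []
        while len(victim) > 1:
            stolen.append(victim.pop())
        return stolen
    if mode == "preempt":
        stolen = list(victim)
        victim.clear()
        return stolen
    raise ValueError(f"unknown steal mode: {mode}")
```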
\pgfdeclarepatternformonly{stripes}
{\pgfpointorigin}{\pgfpoint{1cm}{1cm}}
{\pgfpoint{1cm}{1cm}}
{
  \pgfpathmoveto{\pgfpoint{0cm}{0cm}}
  \pgfpathlineto{\pgfpoint{1cm}{1cm}}
  \pgfpathlineto{\pgfpoint{1cm}{0.5cm}}
  \pgfpathlineto{\pgfpoint{0.5cm}{0cm}}
  \pgfpathclose
  \pgfusepath{fill}
  \pgfpathmoveto{\pgfpoint{0cm}{0.5cm}}
  \pgfpathlineto{\pgfpoint{0cm}{1cm}}
  \pgfpathlineto{\pgfpoint{0.5cm}{1cm}}
  \pgfpathclose
  \pgfusepath{fill}
}
\begin{figure} \centering \begin{tabular}{c} \begin{tikzpicture}[scale=0.7] \draw [fill=white,very thick] (0,0) rectangle (2,1); \draw [fill=lightgray,very thick] (0,1) rectangle (2,2); \draw [fill=lightgray,very thick] (0,2) rectangle (2,5); \draw [fill=lightgray,very thick] (0,5) rectangle (2,6); \draw [fill=yellow,very thick] (0,6) rectangle (2,7); \fill (1,2.75) circle (2pt); \fill (1,3.5) circle (2pt); \fill (1,4.25) circle (2pt); \end{tikzpicture} \\[\abovecaptionskip] \small (a) Work stealing \end{tabular} \begin{tabular}{c} \begin{tikzpicture}[scale=0.7] \draw [fill=white,very thick] (0,0) rectangle (2,1); \draw [fill=yellow,very thick] (0,1) rectangle (2,2); \draw [fill=yellow,very thick] (0,2) rectangle (2,5); \draw [fill=yellow,very thick] (0,5) rectangle (2,6); \draw [fill=yellow,very thick] (0,6) rectangle (2,7); \fill (1,2.75) circle (2pt); \fill (1,3.5) circle (2pt); \fill (1,4.25) circle (2pt); \end{tikzpicture} \\[\abovecaptionskip] \small (b) Variant (1) of mugging \end{tabular} \begin{tabular}{c} \begin{tikzpicture}[scale=0.7] \draw [fill=white,very thick,pattern=stripes, pattern color=yellow] (0,0) rectangle (2,1); \draw [fill=yellow,very thick] (0,1) rectangle (2,2); \draw [fill=yellow,very thick] (0,2) rectangle (2,5); \draw [fill=yellow,very thick] (0,5) rectangle (2,6); \draw [fill=yellow,very thick] (0,6) rectangle (2,7); \fill (1,2.75) circle (2pt); \fill (1,3.5) circle (2pt); \fill (1,4.25) circle (2pt); \end{tikzpicture} \\[\abovecaptionskip] \small (c) Variant (2) of mugging \end{tabular} \caption{Deque of processor in 
variants of mugging} \label{fig:mugging} \end{figure} \section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we have established running-time bounds on the localized work-stealing algorithm based on the delay-sequence argument and on amortized analysis. Here we suggest two possible directions for future work: \begin{itemize} \item This paper focuses on the setting in which the computation is modeled by binary trees. Can we achieve similar bounds for more general computational settings, e.g., ones in which the computation is modeled by a directed acyclic graph (DAG)? \item The hashing variant of the localized work-stealing algorithm (Section \ref{sec:localizedvariants}) is designed to counter the effect of pile-up on certain processors' work. What guarantees can we prove on its running time or number of steals? \end{itemize}
\section{SUPPLEMENTAL MATERIAL} \label{sec:supplement} Here, we present and discuss the posterior distribution resulting from our mapping of the self-interacting sterile neutrino DM model to a thermal-relic WDM model that was constrained by the MW satellite population in Ref.~\cite{DES:2020fxi}. Fig.~\ref{fig:gh_snudm} shows the posterior distribution over the eight galaxy--halo connection parameters, which are described in Ref.~\cite{DES:2020fxi} and summarized in Ref.~\cite{DES:2019ltu}, and our three self-interacting sterile neutrino DM parameters. The degeneracies between sterile neutrino DM parameters were discussed in the main text; here, we focus on degeneracies between sterile neutrino DM and galaxy--halo connection parameters. The galaxy--halo connection parameters include: \begin{enumerate} \item $\alpha$, the faint-end slope of the galaxy luminosity function. More negative values of $\alpha$ correspond to steeper luminosity functions, resulting in a larger number of predicted dwarf galaxies and thus a shallower stellar mass--halo mass relation in the context of abundance matching; \item $\sigma_M$, the scatter in satellite galaxy luminosity at fixed subhalo peak maximum circular velocity. Larger values of $\sigma_M$ cause more faint galaxies to up-scatter to observable luminosities and thus decrease the masses of the smallest halos inferred to host observed MW satellite galaxies; \item $\mathcal{M}_{50}$, the peak virial halo mass at which $50\%$ of halos host potentially observable satellite galaxies.
In Ref.~\cite{DES:2020fxi}, the galaxy occupation fraction is assumed to be a monotonic function of peak halo mass; thus, smaller values of $\mathcal{M}_{50}$ combined with small values of $\sigma_M$ ensure that the most massive subhalos host observable MW satellite galaxies (note that $\mathcal{M}_{50}$ and $\sigma_M$ are significantly anti-correlated in the Fig.~\ref{fig:gh_snudm} posterior); \item $\mathcal{B}$, the efficiency of subhalo disruption due to the MW disk, defined relative to predictions from hydrodynamic simulations \cite{Garrison-Kimmel:2017zes}, such that $\mathcal{B}=1$ corresponds to hydrodynamic predictions and smaller (larger) $\mathcal{B}$ corresponds to less efficient (more efficient) disruption \cite{Nadler:2017dxq,Nadler:2018iux}. Variations in $\mathcal{B}$ mostly affect the radial distribution and overall abundance of MW subhalos, rather than altering the subhalo population in a mass-dependent fashion; \item $\sigma_{\mathrm{gal}}$, the width of the galaxy occupation fraction. At fixed $\mathcal{M}_{50}$, increasing $\sigma_{\mathrm{gal}}$ results in a shallower transition from galaxy-less to galaxy-hosting halos, and thus allows lower-mass halos to contribute more significantly to the observed MW satellite population; \item $\mathcal{A}$, the amplitude of the relation between galaxy size (specifically, azimuthally averaged, projected half-light radius) and halo size (specifically, subhalo virial radius at the time of accretion onto the MW). For $\mathcal{A}\gtrsim 30~\mathrm{pc}$, increasing this parameter decreases the surface brightnesses of a predicted satellite population (assuming their luminosities are fixed), which makes bright satellites more difficult to detect---because they are generally closer to the surface brightness detectability boundary than dim satellites \cite{DES:2019vzn}---and forces lower-mass halos to host the observed systems.
However, for sufficiently small values of $\mathcal{A}$, dim satellites are pushed below the size that Refs.~\cite{DES:2019ltu,DES:2020fxi} assume to distinguish dwarf galaxies from star clusters (i.e., half-light radii of $10~\mathrm{pc}$), thus forcing more high-mass halos to contribute to the observed satellite population in the inference; \item $\sigma_{\log R}$, the scatter in satellite size at fixed halo size. Increasing $\sigma_{\log R}$ causes more small satellites to up-scatter to non-observable surface brightnesses, thus increasing the average mass of halos inferred to host MW satellites; \item $n$, the power-law index of the galaxy--halo size relation. Increasing $n$ yields a steeper size relation, meaning that satellites hosted by low-mass halos decrease in size more quickly than those hosted by high-mass halos. This slightly pushes the halo population predicted to host observed MW satellites towards lower masses, because smaller satellites are generally easier to detect. \end{enumerate} Based on these effects, the degeneracies between sterile neutrino DM and galaxy--halo connection parameters are readily understood. In particular, any change to a galaxy--halo connection parameter that \emph{decreases} the average mass of halos inferred to host observed MW satellite galaxies results in a more stringent lower limit on $m_4$, because smaller values of $m_4$ preferentially suppress the abundance of low-mass halos, which are required to host observable galaxies in these regions of parameter space. Conversely, any change to a galaxy--halo connection parameter that \emph{increases} the average mass of halos inferred to host observed MW satellite galaxies allows for smaller values of $m_4$, because smaller values of $m_4$ decrease the abundance of low-mass halos and thus do not manifest in observable changes to the predicted satellite population in these regions of parameter space.
This reasoning explains all of the most prominent degeneracies between sterile neutrino DM and galaxy--halo connection parameters in Fig.~\ref{fig:gh_snudm}; for example, $m_4$ is noticeably (and positively) correlated with $\sigma_M$, $\mathcal{M}_{50}$, and $\sigma_{\mathrm{gal}}$ because increasing any of these parameters decreases the average mass of halos that host observed MW satellites based on the parameter descriptions above. Because the free-streaming scale is largely set by $m_4$ in our self-interacting sterile neutrino DM model, degeneracies between the remaining sterile neutrino DM and galaxy--halo connection parameters are much weaker. Note that $m_4$ is anti-correlated with both $\sin^2 2\theta$ and $\lambda_{\phi}$ in the posterior because self-interactions enable more frequent active neutrino scattering, allowing the DM relic density to be saturated at smaller mixing angles. Thus, these parameters all exhibit similar degeneracies with the galaxy--halo connection parameters. \begin{figure*}[t!] \includegraphics[width=0.98\textwidth]{figures/gh_SNuDM.pdf} \caption{Posterior distribution from the thermal-relic WDM fit to Dark Energy Survey and Pan-STARRS1 MW satellites, presented in Ref.~\cite{DES:2020fxi}, cast into the parameter space of our self-interacting sterile neutrino DM model (i.e., $m_4$, $\sin^2 2\theta$, and $\lambda_{\phi}$). Dark and light shaded contours represent 68\% and 95\% confidence intervals, respectively. Marginalized posteriors are shown in the top panels of each column. Note that $\sigma_M$, $\sigma_{\rm{gal}}$, and $\sigma_{\log R}$ are reported in $\rm{dex}$, $\mathcal{M}_{50}$ is reported as $\log_{10}(\mathcal{M}_{50}/\mathcal{M}_{\odot})$, $m_4$ is reported as $\log_{10}(m_4/\mathrm{keV})$, $\mathcal{A}$ is reported in $\rm{pc}$, and $\alpha$, $\mathcal{B}$, $n$, $\sin^2 2\theta$, and $\lambda_{\phi}$ are dimensionless. 
For details on the prior distributions assumed for galaxy--halo connection parameters when forward-modeling the MW satellite population, see Ref.~\cite{DES:2020fxi}. }\label{fig:gh_snudm} \end{figure*}
\section{Introduction} The rapidly burgeoning developments in nano-science and semiconductors, such as the nano-wired FET at the 3nm node \cite{DeyJenaMohapatraDashDasMaiti2020}, as well as those in high energy density physics \cite{GrazianiBauerMurillo2014}, quantum tomography \cite{RundleDaviesDwyerToddEveritt2020} and quantum optics \cite{TianWangEberly2017,HanGeFangYuGuoMaDengGongLiu2019}, urgently demand efficient and highly accurate simulations of high-dimensional quantum models. Specifically, the Wigner equation \cite{Wigner1932} under the Coulomb interaction is of great importance in describing the non-equilibrium electron dynamics in the quantum regime, including the electron-proton couplings in hot density matter \cite{GrazianiBauerMurillo2014}, the quantum entanglement in nano-wires \cite{BenamBallicchiaWeinbubSelberherrNedjalkov2021}, the quantum tunneling effects in nanodevices \cite{bk:NedjalkovQuerliozDollfusKosina2011}, strong-field atomic ionization processes \cite{TianWangEberly2017,HanGeFangYuGuoMaDengGongLiu2019} and visualization of quantum states \cite{KurtsieferPfauMlynek1997,DaviesRundleDwyerToddEveritt2019}, owing to its huge advantage in calculating quantum statistics and experimental observability \cite{bk:CurtrightFairlieZachos2013}. However, an investigation of realistic quantum systems in 3-D physical space requires solving the Wigner equation in 6-D phase space, so that the curse of dimensionality (CoD) poses a tremendous obstacle to its numerical resolution. Indeed, it has already taken over thirty years to develop efficient Wigner solvers, including both deterministic and stochastic algorithms.
In contrast to the relatively newer branch of particle-based stochastic methods \cite{KosinaNedjalkovSelberherr2003,MuscatoWagner2016,ShaoXiong2019}, which usually exhibit a slower convergence rate, grid-based deterministic solvers allow highly accurate numerical resolutions owing to their concise principles and solid mathematical foundations, ranging from the finite difference scheme \cite{Frensley1989} and the spectral collocation method combined with the operator splitting \cite{Ringhofer1990,ArnoldRinghofer1995} to recent advanced techniques such as the spectral element method \cite{ShaoLuCai2011,XiongChenShao2016,ChenShaoCai2019}, the spectral decomposition \cite{VandePutSoreeMagnus2017} and the Hermite spectral method \cite{FurtmaierSucciMendoza2015,ZhanCaiHu2021}, as well as those for advection such as the discontinuous Galerkin method \cite{GambaGualdaniSharp}, the WENO scheme \cite{DordaSchurrer2015} and exponential integrators \cite{FurtmaierSucciMendoza2015}. Unfortunately, there still remains a huge gap in terms of the applicability of even the state-of-the-art deterministic schemes to full 6-D problems, and the foremost problem is definitely the storage of the 6-D grid mesh. On one hand, the memory required to store a fine 6-D tensor is still prohibitive for a single computer, e.g., storing a uniform grid mesh of size $81^3 \times 64^3$ in single precision requires about $81^3\times64^3\times 4/1000^3 \approx 557$GB. On the other hand, the highly oscillatory structure of the Wigner function poses a severe restriction on the sampling frequency \cite{Frensley1989}, which is further complicated by singular potentials like the Coulomb interaction. As a consequence, the problem strongly calls for an efficient algorithm that is accurate enough to capture the fine structure of the solutions and suitable for modern high-performance computing platforms.
This paper makes the first attempt to simulate the 6-D Wigner equation via a massively parallel deterministic solver. The proposed CHAracteristic-Spectral-Mixed (CHASM) scheme takes advantage of both the parallel semi-Lagrangian scheme \cite{MalevskyThomas1997,KormannReuterRampp2019} and the spectral method, under the same guiding principle as our preceding advective-spectral-mixed (ASM) scheme \cite{XiongChenShao2016}. Specifically, it exploits two distinct features of the Wigner equation: locality in spatial advection and nonlocality in quantum interaction. The local cubic B-spline, as a kind of wavelet basis, is applied for interpolating the local advection, while the Fourier basis is adopted to tackle the nonlocal pseudodifferential operator (${\rm \Psi} \textup{DO}$) due to its intrinsic global and oscillatory nature. There are two major difficulties to be resolved. The first is how to distribute a global cubic spline into several patches, because solving the spline coefficients indeed requires the information from all patches. Owing to a key observation of the rapid decay property of the wavelet basis in the dual space \cite{bk:Chui1992,MalevskyThomas1997}, we introduce a perfectly matched boundary condition (PMBC) for patched splines to give a closure of the spline coefficients, which allows the local splines to recover the global one as accurately as possible. Domain decomposition is only performed in the spatial direction so that communications can be restricted to adjacent processors. The second is how to tackle ${\rm \Psi} \textup{DO}$ with a singular Riesz kernel (see Eq.~\eqref{def.pdo}), as the singularity causes trouble for the convergence of the commonly used Fourier spectral method \cite{Ringhofer1990,Goudon2002}.
Motivated by recent progress in fast algorithms for singular convolution \cite{VicoGreengardFerrando2016,GreengardJiangZhang2018,LiuZhangZhang2022}, we utilize the truncated kernel method (TKM) to derive a highly efficient approximation to ${\rm \Psi} \textup{DO}$. With these endeavors, we succeed in simulating the 6-D Wigner-Coulomb dynamics of an electron wavepacket attracted by one or two protons. The solutions may help reveal the presence of electron-proton coupling \cite{GrazianiBauerMurillo2014,BenamBallicchiaWeinbubSelberherrNedjalkov2021}, the uncertainty principle and quantum tunneling \cite{PakHammesSchiffer2004} in phase space. The rest of this paper is organized as follows. In Section \ref{sec.back}, we briefly review the background of the Wigner equation and the characteristic method. In Section \ref{sec.characteristic}, we mainly illustrate the construction of local splines to interpolate the spatial advection. Section \ref{spectral} discusses TKM for ${\rm \Psi} \textup{DO}$ with a weakly singular symbol. Several typical numerical experiments are performed in Section \ref{sec.num} to verify the accuracy of CHASM, including a first attempt at simulating quantum Coulomb dynamics in 6-D phase space. Finally, the conclusion is drawn in Section \ref{sec.discussion}. \section{Background} \label{sec.back} As a preliminary, we make a brief review of the single-body Wigner equation and outline the framework of the characteristic method. \subsection{The Wigner equation} Quantum mechanics in phase space is rendered by the Wigner function, the Weyl-Wigner transform of a density matrix $\rho(\bm{x}_1, \bm{x}_2, t)$, \begin{equation}\label{def.Wigner_function} f(\bm{x}, \bm{k}, t) = \int_{\mathbb{R}^3} \rho(\bm{x} - \frac{\bm{y}}{2}, \bm{x} + \frac{\bm{y}}{2}, t) \mathrm{e}^{-\mathrm{i} \bm{k} \cdot \bm{y}} \D \bm{y}, \end{equation} where $\bm{x}$ is the spatial variable and $\bm{k}$ the Fourier-conjugate wave vector.
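To make the transform concrete, the following one-dimensional sketch (illustrative Python, our own construction, not part of CHASM) evaluates the analogue of Eq.~\eqref{def.Wigner_function} by simple quadrature for a pure Gaussian ground state, whose Wigner function has the nonnegative closed form $2\mathrm{e}^{-x^2-k^2}$:

```python
import numpy as np

# 1-D sketch of the Weyl-Wigner transform for the Gaussian ground
# state psi(x) = pi^{-1/4} e^{-x^2/2} (hbar = m = 1); its Wigner
# function is known in closed form: f(x,k) = 2 exp(-x^2 - k^2).
psi = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2)

y = np.linspace(-10.0, 10.0, 2001)   # quadrature grid for the y-integral
dy = y[1] - y[0]

def wigner(x, k):
    """f(x,k) = int rho(x - y/2, x + y/2) e^{-i k y} dy with
    rho(x1, x2) = psi(x1) psi*(x2), by equispaced quadrature (the
    Gaussian decay makes the truncation error negligible)."""
    integrand = psi(x - y / 2) * np.conj(psi(x + y / 2)) * np.exp(-1j * k * y)
    return float(np.sum(integrand).real * dy)
```

The transform is real-valued, consistent with the symmetry argument below Eq.~\eqref{def.pdo}, and for this coherent state it is everywhere nonnegative; negative regions only appear for genuinely non-classical states.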
The Wigner function plays a role similar to that of a probability density function, but allows negative values due to Heisenberg's uncertainty principle. The governing equation, known as the Wigner equation, is a partial integro-differential equation, \begin{equation}\label{eq.Wigner} \frac{\partial }{\partial t}f(\bm{x}, \bm{k}, t)+ \frac{\hbar \bm{k}}{m} \cdot \nabla_{\bm{x}} f(\bm{x},\bm{k}, t) = \Theta_V[f](\bm{x}, \bm{k}, t), \end{equation} where $m$ is the mass, $\hbar$ is the reduced Planck constant and ${\rm \Psi} \textup{DO}$ reads as \begin{equation}\label{def.pdo_convolution} \Theta_V[f](\bm{x}, \bm{k}, t) = \frac{1}{\mathrm{i} \hbar (2\pi)^3} \iint_{\mathbb{R}^{6}} \mathrm{e}^{-\mathrm{i} (\bm{k} - \bm{k}^{\prime}) \cdot \bm{y} }D_V(\bm{x}, \bm{y}, t) f(\bm{x}, \bm{k}^{\prime}, t) \D \bm{y} \D \bm{k}^{\prime} \end{equation} with $D_V(\bm{x}, \bm{y}, t) = V(\bm{x} + \frac{\bm{y}}{2}) - V(\bm{x} - \frac{\bm{y}}{2})$. The Coulomb interaction in $\bm{x} \in \mathbb{R}^3$ is of great importance in realistic applications. When atomic units $ m = \hbar = e = 1$ are adopted and the attractive Coulomb potential $V(\bm{x}) = -{1}/{|\bm{x} - \bm{x}_A |}$ is considered, ${\rm \Psi} \textup{DO}$ is equivalent to \begin{equation}\label{def.pdo} \Theta_{V}[f](\bm{x}, \bm{k}, t) = \frac{2}{c_{3, 1}\mathrm{i}} \int_{\mathbb{R}^3} \mathrm{e}^{2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}^{\prime}} \frac{1}{|\bm{k}^{\prime}|^2} ( f(\bm{x}, \bm{k} - \bm{k}^{\prime}, t) - f(\bm{x}, \bm{k}+\bm{k}^{\prime}, t) )\D \bm{k}^{\prime} \end{equation} with $c_{n, \alpha} = \pi^{n/2} 2^\alpha {\Gamma(\frac{\alpha}{2})}/{\Gamma(\frac{n-\alpha}{2})}$. It is a twisted convolution involving both a singular kernel and an oscillatory phase factor. As the interacting body moves away from the atom, i.e., as $|\bm{x} - \bm{x}_A|$ increases, ${\rm \Psi} \textup{DO}$ decays because the phase factor becomes more oscillatory.
Since ${\rm \Psi} \textup{DO}$ is real-valued due to the symmetry $\bm{k} \to - \bm{k}$ and \begin{equation}\label{mass_conserve} \int_{\mathbb{R}^3} \Theta_V[f](\bm{x}, \bm{k}, t) \D \bm{k} = 0 \iff \frac{\D}{\D t} \iint_{\mathbb{R}^3 \times \mathbb{R}^3} f(\bm{x}, \bm{k}, t) \D \bm{x} \D \bm{k} = 0, \end{equation} the total mass is conserved. The Wigner equation with ${\rm \Psi} \textup{DO}$ \eqref{def.pdo} has many stationary solutions given by the Weyl-Wigner transform of $\rho(\bm{x}, \bm{y}) =\phi(\bm{x}) \phi^\ast(\bm{y})$, with $\phi(\bm{x})$ being an eigenfunction of the corresponding Schr{\"o}dinger equation. \subsection{The Lawson scheme and the characteristic methods} A typical numerical scheme for solving Eq.~\eqref{eq.Wigner} is the characteristic method. Its derivation starts from the variation-of-constants formula of \eqref{eq.Wigner}, \begin{equation}\label{variation_of_constant} f(\bm{x}, \bm{k}, t) = \mathrm{e}^{-\frac{\hbar t}{m} \bm{k} \cdot \nabla_{\bm{x}}} f(\bm{x}, \bm{k}, 0) + \int_0^t \mathrm{e}^{-\frac{\hbar \tau}{m} \bm{k} \cdot \nabla_{\bm{x}}} \Theta_V[f](\bm{x}, \bm{k}, t - \tau) \D \tau, \end{equation} where the semigroup $\mathrm{e}^{-\frac{\hbar \tau}{m} \bm{k} \cdot \nabla_{\bm{x}}}$ corresponds to the advection along the characteristic line, say, $\mathrm{e}^{-\frac{\hbar \tau}{m} \bm{k} \cdot \nabla_{\bm{x}}} f(\bm{x}, \bm{k}, t) = f(\mathcal{A}_\tau(\bm{x}, \bm{k}), t - \tau)$ with $\mathcal{A}_\tau(\bm{x}, \bm{k}) = (\bm{x} - \frac{\hbar \bm{k}}{m} \tau, \bm{k})$. The characteristic method approximates the integral on the right hand side of Eq.~\eqref{variation_of_constant} by polynomial interpolation in the light of the Lawson scheme, \begin{equation} f^n(\bm{x}, \bm{k}) = f^{n-1}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \tau \sum_{j=0}^{q} \beta_j \Theta_V[f^{n-j}](\mathcal{A}_{j\tau}(\bm{x}, \bm{k})).
\end{equation} We adopt the one-stage Lawson predictor-corrector scheme (LPC1): \begin{equation*} \begin{split} \textup{Predictor}:\widetilde{f}^{n+1}(\bm{x}, \bm{k}) &= f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \tau \Theta_{V}[f^{n}](\mathcal{A}_\tau(\bm{x}, \bm{k})), \\ \textup{Corrector}: f^{n+1}(\bm{x}, \bm{k}) &= f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \frac{\tau}{2} \Theta_V[\widetilde{f}^{n+1}](\bm{x}, \bm{k}) + \frac{\tau}{2} \Theta_{V}[f^{n}](\mathcal{A}_\tau(\bm{x}, \bm{k})). \end{split} \end{equation*} The Strang splitting is also an efficient strategy for temporal integration, and its success in solving the 6-D Boltzmann equation was reported in \cite{DimarcoLoubereNarskiRey2018}. However, the non-splitting Lawson scheme is believed to be more advantageous in terms of numerical stability \cite{CrouseillesEinkemmerMassot2020}. The remaining problem is how to evaluate the exact flow $ f^{n}(\mathcal{A}_{\tau}(\bm{x}, \bm{k}))$ and $ \Theta_V[f^{n}](\mathcal{A}_{\tau}(\bm{x}, \bm{k}))$ on the shifted grid. In general, they can be interpolated via a specified basis expansion of $f^n$ within the framework of the semi-Lagrangian method, such as spline wavelets \cite{CrouseillesLatuSonnendrucker2009,Kormann2015}, the Fourier basis and the Chebyshev polynomials \cite{ChenShaoCai2019}. Since the spatial advection is essentially local, we adopt the cubic B-spline, a kind of wavelet basis with low numerical dissipation whose cost scales as $\mathcal{O}(N_x^d)$ ($d$ is the dimensionality) \cite{CrouseillesLatuSonnendrucker2009}. Here we focus on the unidimensional uniform setting, while the multidimensional spline can be constructed by tensor products (see Section \ref{sec:6d_parallel} below). Suppose the computational domain is $[x_0, x_{N}]$ containing $N+1$ grid points with uniform spacing $h = {(x_{N} - x_0)}/{N}$.
The projection of $\varphi(x)$ onto the cubic spline basis is given by \begin{equation}\label{interpolation} \varphi(x) \approx s(x) = \sum_{\nu = -1}^{N+1} \eta_{\nu} B_{\nu}(x) \quad \textup{subject to} \quad \varphi(x_i) = s(x_i), \quad i = 0, \dots, N, \end{equation} where $B_\nu$ is the cubic B-spline with compact support over four grid points, \begin{equation} B_{\nu}(x) = \left\{ \begin{split} &\frac{(x - x_{\nu-2})^3}{6h^3}, \quad x \in [x_{\nu-2}, x_{\nu-1}],\\ &-\frac{(x - x_{\nu-1})^3}{2h^3} + \frac{(x - x_{\nu-1})^2}{2h^2} + \frac{(x - x_{\nu-1})}{2h} + \frac{1}{6}, \quad x \in [x_{\nu-1}, x_{\nu}],\\ &-\frac{(x_{\nu+1} - x)^3}{2h^3} +\frac{(x_{\nu+1} - x)^2}{2h^2} + \frac{(x_{\nu+1} - x)}{2h} + \frac{1}{6}, \quad x \in [x_{\nu}, x_{\nu+1}],\\ &\frac{(x_{\nu+2} - x)^3}{6h^3}, \quad x \in [x_{\nu+1}, x_{\nu+2}],\\ &0, \quad \textup{otherwise}, \end{split} \right. \end{equation} implying that $B_{\nu - 1}, B_{\nu}, B_{\nu+1}, B_{\nu+2}$ overlap a grid interval $(x_{\nu}, x_{\nu+1})$ \cite{MalevskyThomas1997}. Denote by $\bm{\eta} = (\eta_{-1}, \dots, \eta_{N+1})$. By taking derivatives of $B_{\nu}(x)$, it reads that \begin{equation} s^{\prime}(x_i) = - \frac{1}{2h} \eta_{i-1} + \frac{1}{2h} \eta_{i+1}, \quad s^{\prime\prime}(x_i) = \frac{1}{h^2} \eta_{i-1} - \frac{2}{h^2} \eta_i + \frac{1}{h^2} \eta_{i+1}. \end{equation} Since $B_{i \pm 1}(x_i) = \frac{1}{6}$ and $B_i(x_i) = \frac{2}{3}$, it yields $N+1$ equations for $N+3$ variables, \begin{equation}\label{three_term_relation} \varphi(x_i) = \frac{1}{6} \eta_{i-1} + \frac{2}{3} \eta_{i} + \frac{1}{6} \eta_{i+1}, \quad 0 \le i \le N. \end{equation} Two additional equations are needed to solve a unique $\bm{\eta}$ and can be completed by specified boundary conditions at both ends. 
For instance, consider the Hermite boundary condition (also termed the clamped spline) \cite{CrouseillesLatuSonnendrucker2009}, $s^{\prime}(x_0) = \phi_L, s^{\prime}(x_{N}) = \phi_R$, where $\phi_L$ and $\phi_R$ are parameters to be determined; it is equivalent to adding two constraints, \begin{equation}\label{Hermite_boundary} \phi_L = -\frac{1}{2h} \eta_{-1} + \frac{1}{2h} \eta_1,\quad \phi_R = -\frac{1}{2h} \eta_{N-1} + \frac{1}{2h} \eta_{N+1}. \end{equation} In particular, when $\phi_L = \phi_R = 0$, it reduces to the Neumann boundary condition on both ends. An alternative choice is the natural boundary condition for the cubic spline, which requires $s^{\prime \prime}(x_0) = 0, s^{\prime \prime}(x_N) = 0$, or equivalently, \begin{equation}\label{natural_boundary} \frac{1}{h^2} \eta_{-1} - \frac{2}{h^2} \eta_0 + \frac{1}{h^2} \eta_{1} = 0, \quad \frac{1}{h^2} \eta_{N-1} - \frac{2}{h^2} \eta_N + \frac{1}{h^2} \eta_{N+1} = 0. \end{equation} Combining Eqs.~\eqref{three_term_relation} and \eqref{Hermite_boundary} (or \eqref{natural_boundary}) yields a linear system \begin{equation}\label{dual_equation} {A} \bm{\eta}^{T} = (\phi_L, \varphi(x_0), \dots, \varphi(x_{N}), \phi_R)^T, \end{equation} with a tridiagonal matrix $A$, which can be solved by the sweeping method \cite{CrouseillesLatuSonnendrucker2009}. \begin{remark} In our preceding ASM scheme, we suggested using a three-stage characteristic method and investigated its convergence and mass conservation properties \cite{XiongChenShao2016}. However, after a thorough comparison among various integrators as well as the Strang splitting scheme, we have found that LPC1 outperforms the others in both numerical accuracy and stability, as it avoids both multi-stage interpolations and splitting errors. In particular, LPC1 requires spatial interpolation once and calculations of ${\rm \Psi} \textup{DO}$ twice per step, so that its complexity is lower than that of multi-stage schemes.
For details, the readers can refer to Section 4 of our supplementary material \cite{XiongZhangShao2022}. \end{remark} \section{Local spatial advection and local spline interpolation} \label{sec.characteristic} When we shift to a full 6-D simulation, the foremost problem encountered is to represent the Wigner function on a $N_x^3 \times N_k^3$ grid mesh, which is usually prohibitive for a single machine and has to be distributed across multiple ones. This may cause trouble in solving Eq.~\eqref{dual_equation}, as it requires the information of all interpolated points, so that its efficiency in a distributed-memory environment is dramatically hindered by high communication costs. Fortunately, the cubic B-spline can be constructed in an essentially localized manner, laying the foundation for the parallel semi-Lagrangian scheme \cite{MalevskyThomas1997,CrouseillesLatuSonnendrucker2009,KormannReuterRampp2019}. The local cubic spline basis is well suited to tackling the local advection for two main reasons. First, it is possible for local splines to recover the global one as accurately as possible by imposing some effective boundary conditions on local pieces, which may potentially avoid global communications. Second, the constant advection on a 3-D equidistributed grid mesh can be interpolated by a convolution with a $4\times 4\times 4$ window function with a relatively small computational cost of about $4^3 N_x^3 N_k^3$. In particular, when $\hbar k_{\max} \tau/m \le h$, it can avoid non-adjacent communications.
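Before turning to the distributed construction, the serial building block can be sketched as follows (illustrative Python, our own minimal implementation, assuming natural boundary conditions): assemble the tridiagonal system \eqref{dual_equation} from the three-term relation \eqref{three_term_relation} and \eqref{natural_boundary}, solve for $\bm{\eta}$, and evaluate the interpolant.

```python
import numpy as np

# Natural cubic-spline interpolation on a uniform grid, following
# Eqs. (three_term_relation), (natural_boundary) and (dual_equation).
x0, xN, N = 0.0, 1.0, 16
h = (xN - x0) / N
xg = x0 + h * np.arange(N + 1)
phi = np.sin(2 * np.pi * xg)              # sample function to interpolate

# Assemble the (N+3) x (N+3) tridiagonal system for eta_{-1..N+1};
# column nu+1 stores the unknown eta_nu.
A = np.zeros((N + 3, N + 3))
A[0, 0:3] = [1/h**2, -2/h**2, 1/h**2]     # natural BC: s''(x_0) = 0
for i in range(N + 1):
    A[i + 1, i:i + 3] = [1/6, 2/3, 1/6]   # phi(x_i) = (eta_{i-1} + 4 eta_i + eta_{i+1})/6
A[N + 2, N:N + 3] = [1/h**2, -2/h**2, 1/h**2]  # natural BC: s''(x_N) = 0
eta = np.linalg.solve(A, np.concatenate(([0.0], phi, [0.0])))

def B(nu, x):
    """Cubic B-spline centred at x_nu, compactly supported on 4 cells
    (equivalent to the piecewise formula in the text)."""
    a = abs((x - (x0 + nu * h)) / h)
    if a >= 2:
        return 0.0
    return (2 - a)**3 / 6 if a >= 1 else 2/3 - a**2 + a**3 / 2

def s(x):
    return sum(eta[nu + 1] * B(nu, x) for nu in range(-1, N + 2))
```

The spline reproduces the samples exactly and interpolates off-grid points with fourth-order accuracy for smooth data; the PMBC construction below replaces the single global solve by independent local solves.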
\subsection{Perfectly matched boundary condition for local spline} Without loss of generality, we divide $N+1$ grid points on a line into $p$ uniform parts, with $M = N/p$, \begin{align} \underbracket{x_0 < x_1 < \cdots < x_{M-1}}_{\textup{the first processor}} < \underbracket{x_{M}}_{\textup{shared}} < \cdots < \underbracket{ x_{(p-1)M}}_{\textup{shared}} < \underbracket{x_{(p-1)M+1} < \cdots < x_{pM}}_{\textup{$p$-th processor}}, \end{align} where the $l$-th processor manipulates $M+1$ grid points $\mathcal{X}_l = (x_{(l-1)M}, \dots, x_{l M})$, $l = 1, \dots, p$, and $x_{M}, x_{2M}, \dots, x_{(p-1)M}$ are shared by adjacent patches. Denote by $\bm{\eta}^{(l)} = (\eta_{-1}^{(l)}, \dots, \eta_{M+1}^{(l)})$ the local spline coefficients for the $l$-th piece. The target is to approximate the global spline coefficients $(\eta_{-1 +(l-1)M}, \dots, \eta_{M+1+ (l-1)M})$ by $\bm{\eta}^{(l)}$. There are two approaches to solving $\bm{\eta}^{(l)}$ without global communications. One is based on the key observation that the off-diagonal elements of the inverse spline matrix $A^{-1}$ decay exponentially away from the main diagonal \cite{MalevskyThomas1997}, so that the coefficients shared by adjacent patches can be calculated by merging the left and right truncated sequences with only local communications. The other is to impose effective Hermite boundary conditions on local pieces and to approximate the unknown first derivatives on the shared grid points by finite difference stencils \cite{CrouseillesLatuSonnendrucker2009}. The former is preferable in terms of accuracy, and a benchmark can be found in Section 2.3 of our supplementary note \cite{XiongZhangShao2022}, while the latter is easier to implement. Our PMBC combines the advantages of both approaches and provides a unified framework for different boundary conditions imposed on the global spline.
\subsubsection{Truncation of off-diagonal elements} Denote $A^{-1} = (b_{ij})$, $-1 \le i, j \le pM+1$. The solutions of the global set of equations \eqref{dual_equation} are represented as \begin{equation}\label{exact_solution_A} \eta_i = b_{ii} \varphi(x_i) + \sum_{j=-1}^{i-1} b_{ij} \varphi(x_j) + \sum_{j = i+1}^{pM+1} b_{i j} \varphi(x_{j}), \quad i = -1,\dots, p M +1, \end{equation} with the convention $\varphi(x_{-1}) = \phi_L$, $\varphi(x_{pM+1}) = \phi_R$. Although the inverse spline matrix $A^{-1}$ is a full matrix, its off-diagonal elements exhibit a rapid and monotone decay away from the diagonal element \cite{MalevskyThomas1997} (see Figure \ref{plot_inverse_matrix_element}), which is a well-known fact in wavelet theory \cite{bk:Chui1992}. One can see in Figure \ref{plot_inverse_matrix_element_slice} that the elements $b_{ij}$ decay exponentially as $|i-j|$ increases. \begin{figure}[h] \centering \subfigure[Distribution of $\log_{10}(|b_{ij}|)$ for $N = 33$.\label{plot_inverse_matrix_element}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./plot_inv_element.pdf}} \subfigure[Rapid decay of off-diagonal elements. \label{plot_inverse_matrix_element_slice}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./plot_inv_element_slice.pdf}} \caption{\small The distribution of elements in the inverse spline transform matrix $A^{-1}$: The off-diagonal elements exhibit a rapid and monotone decay away from the main diagonal. } \end{figure} This fact allows us to truncate Eq.~\eqref{exact_solution_A} and discard the terms with $|i-j| \ge n_{nb}$, \begin{equation}\label{truncation} \eta_i \approx b_{ii} \varphi(x_i) + \sum_{j= i-n_{nb}+1}^{i-1} b_{ij} \varphi(x_j) + \sum_{j = i+1}^{i+n_{nb}-1} b_{ij} \varphi(x_j), \quad i = -1, \dots, pM+1.
\end{equation} In particular, when $n_{nb} \le M$, the coefficients $\bm{\eta}^{(l)} = (\eta_{-1}^{(l)}, \dots, \eta_{M+1}^{(l)})$ can be well approximated when $\mathcal{X}_{l-1}$ and $\mathcal{X}_{l+1}$ are known, without any information from $\mathcal{X}_1, \dots, \mathcal{X}_{l-2}$ and $\mathcal{X}_{l+2}, \dots, \mathcal{X}_p$ \cite{MalevskyThomas1997}. Thus the spline transform is localized, as data exchanges are only needed between adjacent processors and global communications are completely avoided. \subsubsection{Construction of PMBC} Essentially, the role of the spline boundary conditions is to give a closure of the coefficients $\bm{\eta}$. Therefore, for the $l$-th patch, it is equivalent to impose effective Hermite boundary conditions on both ends of the local spline, \begin{equation} \begin{split} -\frac{1}{2h} \eta_{-1}^{(l)} + \frac{1}{2h} \eta_{1}^{(l)} &= \phi_{L}^{(l)}(\varphi(x_0), \dots, \varphi(x_{pM+1})), \quad l = 2, \dots, p, \\ -\frac{1}{2h} \eta_{M-1}^{(l)} + \frac{1}{2h} \eta_{M+1}^{(l)} &= \phi_{R}^{(l)}(\varphi(x_0), \dots, \varphi(x_{pM+1})), \quad l = 1, \dots, p-1, \end{split} \end{equation} where $-\frac{1}{2h} \eta_{-1}^{(l+1)} + \frac{1}{2h} \eta_{1}^{(l+1)} = -\frac{1}{2h} \eta_{M-1}^{(l)} + \frac{1}{2h} \eta_{M+1}^{(l)}$, implying that $\phi_{R}^{(l)} = \phi_{L}^{(l+1)}$, $1\le l \le p-1$.
Using the truncated stencils \eqref{truncation}, this yields the formulation of PMBC \begin{equation*} \begin{split} \phi_{R}^{(l)} = \phi_{L}^{(l+1)} \approx & \underbracket{\frac{1}{2}c_{0,l} \varphi(x_{lM}) + \sum_{j = 1}^{n_{nb}} c_{j, l}^{-} \varphi(x_{lM-j})}_{\textup{stored in left processor}} + \underbracket{\frac{1}{2}c_{0,l} \varphi(x_{lM}) + \sum_{j = 1}^{n_{nb}} c_{j, l}^{+} \varphi(x_{lM+j})}_{\textup{stored in right processor}}, \end{split} \end{equation*} where $c_{0, l} = -\frac{b_{lM-1, lM}}{2h} + \frac{b_{lM+1, lM}}{2h}$ and \begin{equation}\label{PMBC_coeffcients} \begin{split} &c_{j, l}^- = -\frac{b_{lM-1, lM-j}}{2h} + \frac{b_{lM+1, lM-j}}{2h}, \quad c_{j, l}^+ = -\frac{b_{lM-1, lM+j}}{2h} + \frac{b_{lM+1, lM+j}}{2h}. \end{split} \end{equation} Following the same idea, one can represent all kinds of spline boundary conditions by PMBC. For instance, when the natural boundary conditions \eqref{natural_boundary} are adopted, denoting by $\widetilde{A}$ the corresponding coefficient matrix with $(\widetilde{b}_{ij}) = \widetilde{A}^{-1}, -1\le i, j \le N+1$, the equation $\widetilde A\bm{\eta}^T= (0, \varphi(x_0), \dots, \varphi(x_N), 0)^T$ can be transformed into $A\bm{\eta}^T= (\phi_{L}^{(1)}, \varphi(x_0), \dots, \varphi(x_N), \phi_{R}^{(p)})^T$ with \begin{equation}\label{true_boundary_truncate} \phi_{L}^{(1)} = \frac{\eta_1-\eta_{-1}}{2h} \approx \underbracket{\sum_{j=0}^{n_{nb}} c_{j, 0}^{+} \varphi(x_j),}_{\textup{stored in first processor}} ~ \phi_{R}^{(p)}= \frac{\eta_{N+1}-\eta_{N-1}}{2h} \approx \underbracket{\sum_{j=0}^{n_{nb}} c_{j, p}^{-}\varphi(x_{N-j}),}_{\textup{stored in last processor}} \end{equation} where $c_{j, 0}^{+} = \frac{1}{2h}(-\widetilde{b}_{-1, j} + \widetilde{b}_{1, j})$ and $c_{j, p}^{-} = \frac{1}{2h}(-\widetilde{b}_{pM-1, pM-j} + \widetilde{b}_{pM+1, pM-j})$.
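The two facts PMBC relies on can be checked directly (illustrative Python, our own construction; the decay rate of roughly $2-\sqrt{3}\approx 0.27$ per step is the Toeplitz-limit value for the $(\frac{1}{6},\frac{2}{3},\frac{1}{6})$ stencil and is stated here as an assumption): the rows of $A^{-1}$ decay geometrically away from the diagonal, so the truncated stencil \eqref{truncation} reproduces an interior coefficient from nearby samples only.

```python
import numpy as np

# Spline matrix with natural BCs, as in Eq. (dual_equation).
N = 32
h = 1.0 / N
A = np.zeros((N + 3, N + 3))
A[0, 0:3] = A[N + 2, N:N + 3] = [1/h**2, -2/h**2, 1/h**2]
for i in range(N + 1):
    A[i + 1, i:i + 3] = [1/6, 2/3, 1/6]
Ainv = np.linalg.inv(A)

# Geometric decay of an interior row of A^{-1}: successive
# off-diagonal elements shrink by roughly a factor 0.27.
mid = (N + 3) // 2
row = np.abs(Ainv[mid])
ratios = row[mid + 1:mid + 6] / row[mid:mid + 5]

# Truncated reconstruction of an interior coefficient, Eq. (truncation),
# keeping only the terms with |i - j| < n_nb.
phi = np.sin(2 * np.pi * h * np.arange(N + 1))
rhs = np.concatenate(([0.0], phi, [0.0]))
eta = Ainv @ rhs
n_nb = 8
lo, hi = mid - n_nb + 1, mid + n_nb
eta_trunc = Ainv[mid, lo:hi] @ rhs[lo:hi]
```

The truncation error decreases geometrically with $n_{nb}$, which is why a modest neighborhood width already recovers the global coefficients to near machine accuracy.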
\begin{figure}[!h] \centering \includegraphics[width=1\textwidth,height=0.30\textwidth]{./spline_graph.pdf} \caption{An illustration of the cubic spline coefficients in the distributed setting: Seven grid points are distributed evenly among three processors. For each processor, PMBCs are assembled by exchanging and merging the stencils in the adjacent neighborhood. The boundary condition for the global spline can also be realized by imposing effective Hermite boundary conditions on the first and last processors. \label{fig_spline_illustration}} \end{figure} Figure \ref{fig_spline_illustration} illustrates the construction of three local splines by seven grid points $\mathcal{X} = (x_0, \dots, x_6)$, with $\mathcal{X}_1 = (x_0, x_1, x_2)$, $\mathcal{X}_2 = (x_2, x_3, x_4)$ and $\mathcal{X}_3 = (x_4, x_5, x_6)$. \begin{itemize} \item[(1)] The left boundary $\phi_L^{(1)}$ for the first processor (LB-p1) and the right boundary $\phi_R^{(p)}$ for the last processor (RB-p3) are calculated by Eq.~\eqref{true_boundary_truncate}. \item[(2)] The $l$-th processor calculates the following quantities, \begin{equation*} \begin{split} \textup{L-PMBC}: \quad &\xi_L^{(l)} = \frac{1}{2}c_{0,l} \varphi(x_{(l-1)M}) + \sum_{j = 1}^{n_{nb}} c_{j, l}^{+} \varphi(x_{(l-1)M+j}), \\ \textup{R-PMBC}: \quad &\xi_{R}^{(l)} = \frac{1}{2}c_{0,l} \varphi(x_{l M}) + \sum_{j = 1}^{n_{nb}} c_{j, l}^{-} \varphi(x_{l M-j}). \end{split} \end{equation*} \item[(3)] The $l$-th processor transfers $\xi_L^{(l)}$ to its left neighbor, the $(l$-$1)$-th processor $(l > 1)$, and transfers $\xi_R^{(l)}$ to its right neighbor, the $(l$+$1)$-th processor $(l < p)$. \item[(4)] For the $l$-th processor, $\phi_L^{(l)} = \xi_L^{(l)} + \xi_R^{(l-1)} ~(l > 1)$ and $\phi_R^{(l)} =\xi_L^{(l+1)} + \xi_R^{(l)} ~(l < p)$. 
\item[(5)] Each processor solves for the spline coefficients $\bm{\eta}^{(l)}$ via the exact LU decomposition of the $(M+3)\times(M+3)$ tridiagonal matrix $A^{(l)}$, \begin{equation*} A^{(l)} (\bm{\eta}^{(l)})^T = LU (\bm{\eta}^{(l)})^T = (\phi_{L}^{(l)}, \varphi(x_{(l-1)M}), \dots, \varphi(x_{lM}), \phi_{R}^{(l)})^T. \end{equation*} \end{itemize} \subsubsection{Interpolation and correction for constant advection} Once the spline coefficients $\bm{\eta}^{(l)}$ are determined, interpolating $\varphi(x - \alpha h)$ with a constant shift $\alpha h$ can be realized by taking a weighted summation of $B_{\nu}(x - \alpha h)$ over indices $\nu$, with the whole cost being $\mathcal{O}(4N)$. Suppose all grid points are shifted by $\alpha h$, \begin{equation} \varphi(x_j - \alpha h) = \sum_{\nu = -1}^{N +1} \eta_{\nu} B_{\nu}(x_j - \alpha h), \quad 0 \le j \le N, \end{equation} where $B_{\nu}(x_j - \alpha h)$ takes only five possible values $b_1, b_2, b_3, b_4$ and $0$, and \begin{equation} \begin{split} &b_1 = \frac{(1 - \alpha)^3}{6}, \quad b_2 = - \frac{(1- \alpha)^3}{2} + \frac{(1-\alpha)^2}{2} + \frac{1- \alpha}{2} + \frac{1}{6}, \\ &b_3 = - \frac{\alpha^3}{2} + \frac{\alpha^2}{2} + \frac{\alpha}{2} + \frac{1}{6}, \quad b_4 = \frac{\alpha^3}{6}. \end{split} \end{equation} As the shifted grid points may move outside the domain $[x_0, x_{N}]$, one shall add ghost splines $B_{-2}(x)$ and $B_{N+2}(x)$ with coefficients $\eta_{-2} = \eta_{N+2} = 0$. When $0< \alpha < 1$, $x_{j} - \alpha h \in [x_{j-1}, x_j]$, a simple calculation yields that \begin{equation}\label{interpolation_left} \begin{split} \varphi(x_j - \alpha h) = &\eta_{j-2} B_{j-2}(x_j - \alpha h) + \eta_{j-1} B_{j-1}(x_j - \alpha h) \\ &+ \eta_{j} B_{j}(x_j - \alpha h) + \eta_{j+1} B_{j+1}(x_j - \alpha h). 
\end{split} \end{equation} Similarly, one can tackle the case $-1 < \alpha < 0$, $x_j - \alpha h \in [x_{j}, x_{j+1}]$, yielding that \begin{equation}\label{interpolation_vector} \varphi(x_j - \alpha h) = \left\{ \begin{split} &(\eta_{j-2}, \eta_{j-1}, \eta_{j}, \eta_{j+1}) \cdot (b_4, b_3, b_2, b_1), ~ &0< \alpha< 1,\\ & (\eta_{j-1}, \eta_{j}, \eta_{j+1}, \eta_{j+2}) \cdot (b_1, b_2, b_3, b_4), ~&-1 < \alpha < 0. \end{split} \right. \end{equation} The interpolation procedure under the parallel setting is almost the same except for a correction step. Since the ghost splines with $\eta^{(l)}_{-2} = \eta^{(l)}_{M+2} = 0$ have to be added on both sides of every local spline, the shifted grid points outside the subdomain might not be interpolated properly. Therefore, the correct interpolated values need to be transferred from the adjacent processor. Figure \ref{fig_interpolation_illustration} illustrates the interpolation of the constant advection under the distributed environment. Again, seven grid points are distributed into three clusters, with $p = 3$ and $N = 6$. \begin{figure}[!h] \centering \includegraphics[width=1\textwidth,height=0.22\textwidth]{./correction_graph.pdf} \caption{Illustration of the local cubic spline interpolation of the constant advection. The shifted grid points are first interpolated within each processor independently. Then the boundary nodes that shift into other local pieces are corrected using values from the adjacent neighborhood. Ghost regions are added on the first and last processors for imposing the specified boundary condition on the global spline. \label{fig_interpolation_illustration}} \end{figure} \begin{itemize} \item[(1)] When $\alpha > 0$, $(x_0 - \alpha h) < x_0$, the interpolation of $\varphi(x_0 - \alpha h)$ uses the left ghost spline. Similarly, when $ \alpha < 0$, $(x_N - \alpha h) > x_N$, the interpolation of $\varphi(x_N - \alpha h)$ uses the right ghost spline. 
\item[(2)] For the shared grid points $x_{2l}$, e.g., $l = 1, 2$, when $\alpha > 0$, $(x_{2l} - \alpha h) < x_{2l}$, the left processor interpolates $\varphi(x_{2l} - \alpha h)$ correctly and sends the value to its right neighbor. Similarly, when $\alpha < 0$, $(x_{2l} - \alpha h) > x_{2l}$, the right processor interpolates $\varphi(x_{2l} - \alpha h)$ correctly and sends the value to its left neighbor. \end{itemize} \subsection{Parallel implementation in 6-D phase space} \label{sec:6d_parallel} For a 6-D problem, the Wigner function is expanded as the tensor product of cubic splines in three directions, \begin{equation} f(\bm{x}, \bm{k}, t) \approx \sum_{\nu_1 = -1}^{N_x+1} \sum_{\nu_2 = -1}^{N_x+1} \sum_{\nu_3 = -1}^{N_x+1} \eta_{\nu_1, \nu_2, \nu_3} (\bm{k}, t) \prod_{j=1}^3 B_{\nu_j}(x_j). \end{equation} Hereafter we take a $(N_x+1)^3 \times N_k^3$ uniform grid mesh for the 6-D phase space. Because the $\bm{k}$-domain involves nonlocal interactions, the domain decomposition is only performed in $\bm{x}$-space to split the whole domain into $p^3$ mutually disjoint rectangular patches, where $p$ divides $N_x$. Each processor manipulates $(\frac{N_x}{p} + 1)^3 \times N_k^3$ grid points. The 3-D cubic splines can be constructed in each direction successively, but each `grid point' to be interpolated is a long vector of length $N_k^3$, and each PMBC turns out to be a $(\frac{N_x}{p}+1)^2 N_k^3$ tensor. Thus for each processor, the cost of constructing the cubic spline is $\mathcal{O}((\frac{N_x}{p}+1)^3 N_k^3)$ and that of exchanging six PMBCs is about $6 (\frac{N_x}{p}+1)^2 N_k^3$. 
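As a 1-D sanity check of the shift interpolation \eqref{interpolation_vector} that underlies each direction of the tensor-product construction, the following Python sketch (a serial illustration with natural boundary conditions; the test function $\sin x$, the shift $\alpha = 0.3$ and the grid sizes are arbitrary choices) solves the collocation system for the coefficients $\eta_\nu$, applies the four-point stencil $(b_4, b_3, b_2, b_1)$, and recovers the expected fourth-order convergence:

```python
import numpy as np

# Serial 1-D illustration of the cubic-spline shift interpolation with
# natural BCs; test function, shift and grid sizes are arbitrary choices.
def spline_coeffs(phi, h):
    # solve the (N+3)x(N+3) collocation system for eta_{-1}, ..., eta_{N+1}
    N = len(phi) - 1
    A = np.zeros((N + 3, N + 3))
    rhs = np.zeros(N + 3)
    A[0, :3] = [1.0, -2.0, 1.0]              # s''(x_0) = 0
    A[-1, -3:] = [1.0, -2.0, 1.0]            # s''(x_N) = 0
    for j in range(N + 1):
        A[j + 1, j:j + 3] = [1/6, 2/3, 1/6]  # s(x_j) = phi(x_j)
        rhs[j + 1] = phi[j]
    return np.linalg.solve(A, rhs)           # eta[i] stores eta_{i-1}

def shift_weights(alpha):
    # the four B-spline values b_1, ..., b_4 for 0 < alpha < 1
    b1 = (1 - alpha)**3 / 6
    b2 = -(1 - alpha)**3 / 2 + (1 - alpha)**2 / 2 + (1 - alpha) / 2 + 1/6
    b3 = -alpha**3 / 2 + alpha**2 / 2 + alpha / 2 + 1/6
    b4 = alpha**3 / 6
    return b1, b2, b3, b4

def advect(phi, h, alpha):
    # evaluate phi(x_j - alpha*h) at interior nodes (ghost splines avoided)
    eta = spline_coeffs(phi, h)
    b1, b2, b3, b4 = shift_weights(alpha)
    N = len(phi) - 1
    # (eta_{j-2}, eta_{j-1}, eta_j, eta_{j+1}) . (b4, b3, b2, b1)
    return np.array([eta[j - 1] * b4 + eta[j] * b3 + eta[j + 1] * b2
                     + eta[j + 2] * b1 for j in range(2, N)])

errs = []
for N in (32, 64):
    h = 2 * np.pi / N
    x = h * np.arange(N + 1)
    approx = advect(np.sin(x), h, 0.3)
    errs.append(np.max(np.abs(approx - np.sin(x[2:N] - 0.3 * h))))
print(errs, errs[0] / errs[1])
```

Halving the spacing reduces the error by roughly a factor of $16$, consistent with the fourth-order accuracy of cubic spline interpolation; the weights also sum to one (partition of unity of the B-splines).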
For the constant advection $\bm{\alpha}h = (\alpha_1 h, \alpha_2 h, \alpha_3 h)$, interpolating $f(\bm{x}_j - \bm{\alpha}h , \bm{k}, t)$ is a convolution of $64$ grid points with a $4\times 4 \times 4$ window function since \begin{equation}\label{3d_advection} f(\bm{x} - \bm{\alpha}h, \bm{k}, t) \approx \sum_{\nu_1 = -1}^{N_x+1} \sum_{\nu_2 = -1}^{N_x+1} \sum_{\nu_3 = -1}^{N_x+1} \eta_{\nu_1, \nu_2, \nu_3}(\bm{k}, t) \prod_{j=1}^3 B_{\nu_j}(x_j - \alpha_j h ) \end{equation} has only $4^3$ nonzero terms $B_{\nu_j}(x_j - \alpha_j h )$ obtained by Eqs.~\eqref{interpolation_left} and \eqref{interpolation_vector}. Thus interpolating one point involves 64 multiplications and 64 summations, and the computational and communication costs are $64 (\frac{N_x+1}{p})^3 N_k^3$ and $(\frac{N_x+1}{p})^2 N_k^3$, respectively. \section{Nonlocal quantum interaction and truncated kernel method} \label{spectral} Once CoD is alleviated via the local cubic spline construction, the remaining challenge is to seek a highly efficient approximation to ${\rm \Psi} \textup{DO}$ with a weakly singular symbol, as it has to be calculated twice per LPC1 evolution. To this end, we borrow the idea of TKM \cite{VicoGreengardFerrando2016,GreengardJiangZhang2018,LiuZhangZhang2022} to derive a spectrally accurate approximation for smooth and rapidly decreasing Wigner functions, with its implementation greatly accelerated by FFTs. \subsection{Truncated kernel method} Here we omit the time variable for brevity. 
By a change of variables, we can rewrite \eqref{def.pdo} as follows \begin{equation*} \begin{split} \label{twistedConv} \Theta_{V}[f](\bm{x}, \bm{k}) &= \frac{2}{c_{3, 1} \mathrm{i}} \int_{\mathbb{R}^3} \frac{ \mathrm{e}^{2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}^{\prime}}-\mathrm{e}^{-2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}^{\prime}} }{ |\bm{k}^{\prime}|^2} f(\bm{x}, \bm{k} - \bm{k}^{\prime}) \D \bm{k}^{\prime} \coloneqq (I^{+} - I^{-}), \\ I^{\pm}(\bm{x}, \bm{k}) &= \frac{2}{c_{3, 1} \mathrm{i}} \int_{\mathbb{R}^3} \frac{ \mathrm{e}^{\pm 2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}^{\prime}} }{ |\bm{k}^{\prime}|^2} f(\bm{x}, \bm{k} - \bm{k}^{\prime}) \D \bm{k}^{\prime}. \end{split} \end{equation*} Note that $I^+ - I^- = 2\Re(I^+ )$ for a real-valued function $f(\bm{x}, \bm{k})$; therefore, the above integral can be reduced to the computation of $I^{+}$. Observe that \begin{equation} \begin{split} \label{i1inte} I^{+}(\bm{x},\bm{k}) &= \frac{2}{c_{3, 1} \mathrm{i} } \mathrm{e}^{2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}} \int_{\mathbb{R}^3} \frac{1}{{ |\bm{k}^{\prime}|^2}} \mathrm{e}^{-2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot (\bm{k}-\bm{k}^{\prime})} f(\bm{x}, \bm{k} - \bm{k}^{\prime}) \D \bm{k}^{\prime} \\ &= \frac{2}{c_{3, 1} \mathrm{i} } \mathrm{e}^{2 \mathrm{i} (\bm{x} - \bm{x}_A)\cdot \bm{k} } \left( |\bm{k}|^{-2} \ast f^{s} \right)(\bm{x},\bm{k}), \end{split} \end{equation} where $f^s(\bm{x},\bm{k}):= f(\bm{x},\bm{k})\mathrm{e}^{-2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}}$ is a smooth and fast-decaying complex-valued function. The twisted convolution evaluation boils down to the standard convolution of the singular kernel $|\bm{k}|^{-2}$ with the smooth fast-decaying function $f^s(\bm{x},\bm{k})$. 
For brevity, we shall omit $\bm{x}$ and focus on the following convolution \begin{eqnarray*} \Phi(\bm{k}) = ( U \ast f^s ) (\bm{k}) :=\int_{\mathbb R^{3}} U(\bm{k}-\bm{k}^{\prime}) f^s(\bm{k}^{\prime}) {\rm d} {\bm{k}}^{\prime}, \end{eqnarray*} where the kernel $U(\bm{k}) =|\bm{k}|^{-2}$ is singular and the modulated Wigner function $f^s(\bm{k})$ is assumed to be smooth and fast-decaying. It is reasonable to assume the density to be {\sl numerically} supported on a bounded domain, for example, a rectangular domain $\Omega :=[-L_k,L_k]^3 \subset \mathbb R^{3}$, and to utilize the Fourier spectral method. To compute $\Phi$ on the same domain $\Omega$, we choose to apply TKM \cite{GreengardJiangZhang2018,VicoGreengardFerrando2016}, which is an $O(N\log N )$ fast algorithm, implemented with FFT, that achieves spectral accuracy. The basic idea is to screen the unnecessary interaction and apply trapezoidal quadrature to the smooth-integrand Fourier transform, i.e., for $\bm{k} \in \Omega$, it holds that \begin{eqnarray*} \Phi(\bm{k}) &= &\int_{\mathbb R^{3}} U(\bm{k}-\bm{k}^{\prime}) f^s(\bm{k}^{\prime}) {\rm d} {\bm{k}}^{\prime} \approx \int_{\Omega} U(\bm{k}-\bm{k}^{\prime}) f^s(\bm{k}^{\prime}) {\rm d} {\bm{k}}^{\prime} = \int_{\mathbb R^{3}} U_{D}(\bm{k}-\bm{k}^{\prime}) f^s(\bm{k}^{\prime}) {\rm d} {\bm{k}}^{\prime}, \end{eqnarray*} where the truncated kernel $U_{D}(\bm{k})$ is defined as \begin{equation} U_{D}(\bm{k}):=\left\{ \begin{array}{ll} U(\bm{k}), & |\bm{k}| \leq D,\\ 0, & |\bm{k}| > D, \end{array} \right. \end{equation} with $D= \text{diam }{\Omega} := \max_{\bm{k},\bm{k}^{\prime}\in \Omega}|\bm{k}-\bm{k}^{\prime}|$. The second equality holds because $U_D(\bm{k}-\bm{k}^{\prime})= 0, ~\forall~ \bm{k}\in \Omega, ~\bm{k}^{\prime} \in \Omega^{c}$. 
By the Paley-Wiener Theorem \cite{bk:Rudin1991}, we know that the Fourier transform of $U_{D}$ is smooth, therefore, it is convenient to compute the convolution's Fourier transform as follows \begin{equation}\label{ktmFourier} \Phi(\bm{k}) = \frac{1}{(2\pi)^{3}}\int_{\mathbb R^{3}} \widehat U_{D}(\bm{\xi}) \widehat f^s(\bm{\xi}) ~\mathrm{e}^{\mathrm{i}\bm{k} \cdot \bm{\xi} }~{\rm d} {\bm{\xi}},\quad \bm{k} \in \Omega, \end{equation} with $\widehat f^s(\bm{\xi})= \mathcal{F}_{\bm{k} \to \bm{\xi}} f^s(\bm{k}) = \int_{\mathbb R^{3}} f^s(\bm{k}) ~\mathrm{e}^{-\mathrm{i}\bm{k} \cdot \bm{\xi} }~{\rm d} {\bm{k}}$ with its inverse denoted by $\mathcal{F}_{\bm{\xi} \to \bm{k}}^{-1}$ and \begin{eqnarray} \nonumber \widehat U_{D}(\bm{\xi}) &= &\int_{\mathbb R^{3}} U_{D}(\bm{k}) ~\mathrm{e}^{-\mathrm{i}\bm{k} \cdot \bm{\xi} }~{\rm d} {\bm{k}} = 4\pi \int_{0}^{D} U(\bm{k}) k^{2} \frac{\sin( k |\bm{\xi}|)}{k |\bm{\xi}|} {\rm d} k \\ &=& \frac{4\pi}{|\bm{\xi}|} \int_{0}^{|\bm{\xi}| D} \frac{\sin t}{t} {\rm d} t = \frac{4\pi}{|\bm{\xi}|} {\rm Si}(|\bm{\xi}| D), \end{eqnarray} with ${\rm Si}(x) := \int_{0}^{x} \sin t /t ~{\rm d } t$ being the sine integral function. The asymptotic is $\widehat U_{D}(\bm{\xi}) \approx 4 D\pi - \frac{2}{9} (D^{3}\pi )|\bm{\xi}|^{2} + O(|\bm{\xi}|^{4})$ as $|\bm{\xi}| \to 0$. As is seen, there is {\sl no} singularity in $\widehat U_D(\bm{\xi})$. However, the kernel truncation brings in extra oscillations ${\rm Si}(|\bm{\xi}| D)$ to the integrand. To resolve such oscillations, we need a fine mesh in the frequency space $\bm{\xi}$, which, by the duality argument, corresponds to a large computational domain in the physical space $\bm{k}$. Recently, Liu {\sl et al} proved that a {\bf threefold}, instead of fourfold, zero-padding of $f^s(\cdot, \bm{k})$ is sufficient to resolve such extra oscillations in \eqref{ktmFourier}, and we refer the readers to \cite{LiuZhangZhang2022} for more details. 
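The whole procedure — threefold zero-padding, multiplication by $\widehat U_D(\bm{\xi}) = 4\pi\,{\rm Si}(|\bm{\xi}| D)/|\bm{\xi}|$ in frequency space, and a backward transform — can be prototyped in a few lines. The following Python sketch is an illustration only: the Gaussian source, the domain half-width $L$ and the grid size $N$ are arbitrary choices, and the closed-form reference involving the Dawson function anticipates the Gaussian example discussed later.

```python
import numpy as np
from scipy.special import sici, dawsn

# Illustration of TKM with threefold zero-padding for Phi = |k|^{-2} * f,
# with f(k) = exp(-|k|^2); L and N are arbitrary choices for this sketch.
L, N = 6.4, 32
M = 3 * N                                # threefold zero-padded grid
h = 2 * L / N
D = 2 * np.sqrt(3) * L                   # D = diam(Omega)

x = (np.arange(M) - M // 2) * h          # big centered grid covering [-3L, 3L)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
R2 = X**2 + Y**2 + Z**2
inside = np.maximum(np.maximum(abs(X), abs(Y)), abs(Z)) <= L
f_big = np.where(inside, np.exp(-R2), 0.0)

# Fourier transform of the truncated kernel: 4*pi*Si(|xi| D)/|xi|
xi = 2 * np.pi * np.fft.fftfreq(M, d=h)
XI, YI, ZI = np.meshgrid(xi, xi, xi, indexing="ij")
s = np.sqrt(XI**2 + YI**2 + ZI**2)
s_safe = np.where(s > 0, s, 1.0)
U_hat = np.where(s > 0, 4 * np.pi * sici(s * D)[0] / s_safe, 4 * np.pi * D)

# trapezoidal quadrature of the Fourier integral; all scale factors cancel
Phi = np.fft.fftshift(
    np.fft.ifftn(U_hat * np.fft.fftn(np.fft.ifftshift(f_big)))).real

# compare the central N^3 block with the closed-form Dawson-function answer
c = slice(M // 2 - N // 2, M // 2 + N // 2)
k = np.sqrt(R2[c, c, c])
Phi_ref = 2 * np.pi**1.5 * np.where(k > 0, dawsn(k) / np.maximum(k, 1e-300), 1.0)
err = np.max(np.abs(Phi[c, c, c] - Phi_ref))
print(err)
```

Even this coarse resolution reproduces the potential to a few percent, and refining $N$ drives the error to machine precision at the spectral rate.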
To sum up, we derived a discretized approximation $ \Theta_V^{T}[f]$ to $\Theta_V[f]$ as follows \begin{equation}\label{discrete_quantization} \begin{split} \Theta_V^{T}[f](\bm{x}, \bm{k}_{\bm{p}}) =& \frac{2}{c_{3,1}\mathrm{i}} \mathrm{e}^{2\mathrm{i} \widetilde \bm{x} \cdot \bm{k}_{\bm{p}}} \mathscr{F}^{-1}_{\bm{\xi}_{\bm{n}} \to \bm{k}_{\bm{p}}}\left[ \widehat U_D(\bm{\xi}_{\bm{n}}) \mathscr{F}_{\bm{k}_{\bm{p}} \to \bm{\xi}_{\bm{n}}}\left(\mathrm{e}^{-2\mathrm{i}\widetilde \bm{x}\cdot \bm{k}_{\bm{p}}} f(\bm{x}, \bm{k}_{\bm{p}})\right)\right] \\ &- \frac{2}{c_{3,1}\mathrm{i}} \mathrm{e}^{-2\mathrm{i}\widetilde \bm{x}\cdot \bm{k}_{\bm{p}}} \mathscr{F}^{-1}_{\bm{\xi}_{\bm{n}} \to \bm{k}_{\bm{p}}} \left[\widehat U_D(\bm{\xi}_{\bm{n}}) \mathscr{F}_{\bm{k}_{\bm{p}} \to \bm{\xi}_{\bm{n}}}\left(\mathrm{e}^{2\mathrm{i}\widetilde \bm{x}\cdot \bm{k}_{\bm{p}}} f(\bm{x}, \bm{k}_{\bm{p}})\right)\right], \end{split} \end{equation} where $\widetilde \bm{x} = \bm{x}-\bm{x}_{A}$, $\bm{k}_{\bm{p}} = \bm{k}_{ijl}$ is the discrete grid point evenly spaced in each spatial direction of $\Omega$, and $\mathscr{F}_{\bm{k}_{\bm{p}} \to \bm{\xi}_{\bm{n}}}$ and $\mathscr{F}^{-1}_{\bm{\xi}_{\bm{n}} \to \bm{k}_{\bm{p}}}$ denote the forward and backward discrete Fourier transform of size $(3N_k)^3$ with threefold zero-padding of $f(\cdot, \bm{k}_{\bm{p}})$, respectively. \begin{remark} Before moving to the detailed implementation, let us make a comparison between TKM and the commonly used pseudo-spectral method \cite{Ringhofer1990,Goudon2002}. 
In fact, $\Theta_V^T[f](\bm{x}, \bm{k}_{\bm{p}})$ can be rewritten as \begin{equation}\label{lattice_pdo} \Theta_V^T[f](\bm{x}, \bm{k}_{\bm{p}}) = \mathscr{F}^{-1}_{\bm{\xi}_{\bm{n}} \to \bm{k}_{\bm{p}}} \left(\sigma_D(\bm{x}, \bm{\xi}_{\bm{n}}) \mathscr{F}_{\bm{k}_{\bm{p}} \to \bm{\xi}_{\bm{n}}} f(\bm{x}, \bm{k}_{\bm{p}})\right), \end{equation} with a {\bf non-singular} symbol $\sigma_D(\bm{x}, \bm{\xi})$ given by \begin{equation*} \sigma_D(\bm{x}, \bm{\xi}) = \frac{2}{c_{3,1} \mathrm{i}} \left(\mathcal{S}_{2 \widetilde\bm{x}} ~ \widehat{U}_D(\bm{\xi}) ~\mathcal{S}_{-2 \widetilde\bm{x}} - \mathcal{S}_{-2\widetilde\bm{x}} ~\widehat{U}_D(\bm{\xi})~ \mathcal{S}_{2\widetilde\bm{x}}\right), \quad\widetilde\bm{x} = \bm{x} - \bm{x}_{A}, \end{equation*} and $\mathcal{S}_{\bm{\alpha}} g(\bm{\xi}) = g(\bm{\xi} - \bm{\alpha})$ is the shift operator, while ${\rm \Psi} \textup{DO}$ \eqref{def.pdo} in $\mathbb{R}^3 \times \mathbb{R}^3$ reads that \begin{equation}\label{pdo_definition_2} \Theta_V[f](\bm{x}, \bm{k}) = \mathcal{F}^{-1}_{\bm{\bm{\xi}} \to \bm{k}} (\sigma(\bm{x}, \bm{\xi})\widehat f(\bm{x}, \bm{\xi}) ), \end{equation} with a {\bf singular} symbol $ \sigma(\bm{x}, \bm{\xi}) = \frac{2}{c_{3,1} \mathrm{i}} (\mathcal{S}_{2\widetilde\bm{x}} ~\widehat U(\bm{\xi}) ~\mathcal{S}_{-2\widetilde\bm{x}} - \mathcal{S}_{-2\widetilde\bm{x}} ~ \widehat U(\bm{\xi}) ~\mathcal{S}_{2\widetilde\bm{x}})$. When $f$ is approximated by a truncated Fourier series in $\bm{k}$-space, the formula \eqref{lattice_pdo} is almost the same as the pseudo-spectral approach except for the difference between $\sigma_D(\bm{x}, \bm{\xi})$ and $\sigma(\bm{x}, \bm{\xi})$, as well as the zero-padding. In other words, the difficulty induced by the singular symbol is resolved by exploiting an elegant fact: the Fourier transform of the truncated kernel $U_{D}$ removes the singularity at the origin. 
By contrast, the widely used pseudo-spectral method suffers from large errors near the singularity and numerical instability, both of which are alleviated by TKM. We refer to Section 3 of our supplementary note \cite{XiongZhangShao2022} for details. \end{remark} In practice, with a precomputation technique, the above quadrature can be implemented with only {\sl twofold} zero-padding of the source function $f^s(\cdot, \bm{k}_{\bm{p}})$. As pointed out in \cite{VicoGreengardFerrando2016}, after plugging the finite Fourier series approximation of size $(3N_k)^3$ into \eqref{ktmFourier}, reducing zero-padding terms and utilizing the symmetry of $\widehat U_D$, we can reformulate the above quadrature \eqref{discrete_quantization} into the following discrete convolution \begin{equation}\label{DisConvPhi} \Phi(\bm{k}_{ijl}) \approx \Phi_{ijl} = \sum_{i^{\prime}=1}^{N_k}\sum_{j^{\prime}=1}^{N_k}\sum_{l^{\prime}=1}^{N_k} T_{i-i^{\prime},j-j^{\prime},l-l^{\prime} } f^s_{i^{\prime}j^{\prime}l^{\prime}}, \end{equation} where $f^s_{ijl}$ is the numerical approximation of the function $f^s(\cdot,\bm{k}_{\bm{p}}), \bm{p} \in \Lambda$ with index set $\Lambda:=\left\{(i,j,l)\in \mathbb Z^{3}\big| 1\leq i,j,l \leq N_{k} \right\}$. The convolution tensor $T_{i,j,l}$ is symmetric in each direction, e.g., $T_{i,j,l} = T_{-i,j,l}$, and is given explicitly as follows \begin{equation}\label{convolution_tensor} T_{\bm{p}} \coloneqq \frac{1}{(3N_k)^{3}} \sum_{\bm{n}\in {\mathcal I }} \widehat{U}_{D}(\xi_{\bm{n}}) \mathrm{e}^{\frac{2\pi\mathrm{i} \bm{p} \cdot \bm{n}}{3N_k}}, \quad \bm{p} \in \Lambda, \end{equation} where $\xi_{\bm{n}} = \frac{2\pi}{6L_k} \bm{n}, ~\bm{n} \in {\mathcal I }$ is the Fourier mode and the dual index set ${\mathcal I}$ is defined as \begin{equation} \label{InSetI} {\mathcal I }:= \left\{(n_{1},n_{2},n_{3})\in \mathbb Z^{3} \big | n_{j} = -3 N_{k}/2,\ldots, 3N_{k}/2 \!-\!1\right\}. 
\end{equation} It is clear that the tensor \eqref{convolution_tensor} can be calculated with a backward FFT of length $(3N_k)^{3}=27N_k^{3}$, which inevitably requires quite a large amount of memory. Fortunately, compared with the original fourfold zero-padding TKM \cite{VicoGreengardFerrando2016,GreengardJiangZhang2018}, the minimal memory requirement of our algorithm is further reduced by a factor of $(\frac{4}{3})^{3}=\frac{64}{27} \approx 2.37$, which brings about a significant improvement in real simulations, especially when the mesh size is large. Therefore, our algorithm grants much easier access even on a personal computer. More importantly, the tensor is of size $(2N_k)^{3}$ and independent of the position variable $\bm{x}$ and time variable $t$; therefore, it can be precomputed only once for the whole simulation. That is, the convolution \eqref{DisConvPhi} can be accelerated within $O(8N_k^{3} \log( 8N_k^{3}))$ flops with FFT as long as the tensor \eqref{convolution_tensor} is available. \subsection{Error estimates} Our error estimates focus on the TKM approximation to the nonlocal convolution potential $\Phi = U\ast f^{s}$ with the singular kernel $U(\bm{x}) = |\bm{x}|^{-2}$ and the effective density function $f^{s}(\bm{x},\bm{k}) =f(\bm{x},\bm{k})\mathrm{e}^{-2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}}$. 
\begin{theorem} \label{thm_1} Suppose that the Wigner function $f(\bm{x},\bm{k})$ is a smooth and fast-decaying function of $\bm{k}$ and has an $\bm{x}$-independent common compact support, i.e., ${\rm supp}(f(\bm{x},\cdot))\subsetneq \Omega = [-L_k, L_k]^3$. Then for any integer $m\in \mathbb Z^{+}$, \begin{eqnarray} \Vert \Theta_V[f] - \Theta^T_V[f] \Vert_{\infty} &\lesssim& ~ C~ |\bm{x}-\bm{x}_{A}|^{m}N_{k}^{-(m-\frac{3}{2})} \|f(\bm{x},\cdot) \|_m, \quad m \geq 2,\\ \Vert \Theta_V[f] - \Theta^T_V[f] \Vert_{2}& \lesssim & ~C ~ |\bm{x}-\bm{x}_{A}|^{m} N_{k}^{-m} \|f(\bm{x},\cdot) \|_m , \quad m \geq 1, \end{eqnarray} where the constant $C = C(L_{k},m)$ is independent of $\bm{k}$ and $\|f(\bm{x},\cdot)\|_m$ is the standard Sobolev norm with respect to $\bm{k}$. \end{theorem} The proof is based on the recent error estimates of TKM given by Liu {\sl et al} \cite{LiuZhangZhang2022}. For brevity, we choose not to repeat the lengthy and technical proof but to quote the results directly, and refer the readers to \cite{LiuZhangZhang2022} for more details. Here $H_p^m(\Omega)$ denotes the subspace of $H^m(\Omega)$ with derivatives of order up to $m-1$ being $\Omega$-periodic. \begin{lemma}[\cite{LiuZhangZhang2022}] Suppose $\rho(\bm{x}) \in H_p^m(\Omega)$ associated with the semi-norm \begin{equation} |\rho|_m = \left(\sum_{k_1= -\infty}^{\infty} \sum_{k_2= -\infty}^{\infty} \sum_{k_3= -\infty}^{\infty} |\bm{k}|^{2m} |\widehat \rho_{\bm{k}}|^2\right)^{1/2} \end{equation} and $\Phi_{N}$ is the numerical approximation to Eq.~\eqref{ktmFourier} with $N^{3}$ uniform grid points; then it holds that \begin{eqnarray} \label{infyNormEsti} \|\Phi_{N}-\Phi\|_{{\infty}} &\leq &C~ N^{-(m-\frac{3}{2})} |\rho|_m, \quad m \geq 2, \\ \label{2NormEsti} \|\Phi_{N}-\Phi\|_{2}&\leq&C~ N^{-m} | \rho|_m,\quad m \geq 1, \end{eqnarray} where $C$ depends only on the domain size $L_k$ and $m$. 
\end{lemma} \begin{proof}[Proof of Theorem \ref{thm_1}] The nonlocal potential is given by a similar convolution $\Phi = U\ast \rho$ where the density function $\rho$ is also smooth and fast decaying with a compact support and the kernel $U$ is singular. Since the Wigner function is smooth and fast decaying in $\bm{k}$ and shares a common compact support, substituting $f^{s}(\bm{x},\bm{k})$ for $\rho$ in \eqref{infyNormEsti}-\eqref{2NormEsti} and computing its $m$-th semi-norm, we have \begin{equation} | f^{s}(\bm{x},\cdot) |_{m} \lesssim C~ |\bm{x}-\bm{x}_{A}|^{m} \|f(\bm{x},\cdot)\|_{m}, \quad \forall ~m \in \mathbb Z^{+}. \end{equation} Plugging back into \eqref{i1inte}, we have \begin{eqnarray*} \|I^{+}- I^{+}_{N_k}\|_{\infty} &\lesssim &~C~ |\bm{x}-\bm{x}_{A}|^{m}N_{k}^{-(m-\frac{3}{2})} \|f(\bm{x},\cdot) \|_m,\quad m \geq 2,\\ \|I^{+}- I^{+}_{N_k}\|_{2} &\lesssim &~ C~ |\bm{x}-\bm{x}_{A}|^{m}N_{k}^{-m} \|f(\bm{x},\cdot) \|_m, \quad m \geq 1, \end{eqnarray*} where $I^{+}_{N_k}$ denotes the numerical approximation of $I^{+}$ using TKM. From \eqref{i1inte}, the desired twisted convolution \eqref{def.pdo} is effectively reduced to the real part of $I^{+}$, which immediately completes the proof. \end{proof} Next we present the numerical errors and computational time (in seconds) in Table \ref{TKM_convergence_data} to confirm the spectral convergence and efficiency of TKM with a localized Gaussian function $f(\bm{k})$, from which we can see clearly that our algorithm converges spectrally fast and the errors approach the machine precision as $N_{k}$ increases. 
\begin{example}\label{tkmGaussian} \textup{ For a symmetric Gaussian function $f(\bm{k})= e^{- |\bm{k}|^2}, \bm{k} \in \mathbb R^3$, the convolution potential $\Phi$ is symmetric and reads explicitly as follows \begin{equation}\label{test} \Phi(\bm{k}) = \left(\frac{1}{|\bm{k}|^2} \ast f\right)(\bm{k})= 2 \pi^{\frac{3}{2}} \frac{ \textrm{DawsonF}( k)}{k}, \quad k = |\bm{k}|, \end{equation} with $\textrm{DawsonF}(k) :=\frac{1}{2}\int_0^\infty \sin (k r) ~ e^{-\frac{r^2}{4}} {\rm d} r$ \cite{ZaghloulAli2011}. Then, for a scaled and shifted Gaussian function $f_\alpha(\bm{k}) = f(\alpha (\bm{k}-\bm{k}_0)), ~\bm{k}_0 \in \mathbb R^3,\alpha >0 $, we have $\Phi_\alpha (\bm{k}) = {\alpha}^{-1} \Phi(\alpha (\bm{k}-\bm{k}_0))$. } \end{example} \begin{table}[!h] \centering \caption{\small {Numerical errors and computational time of TKM in Example~\ref{tkmGaussian}.}} \label{TKM_convergence_data} \begin{tabular}{c|c|c|c|c} \hline\hline Convergence & $N_k$ & $l^\infty$-error & $l^2$-error & Time(s)\\ \hline \multirow{6}{*}{\includegraphics[scale=0.18]{./TKM_convergence.pdf}} &8 & 9.380 & 34.209 & 8.300$\times10^{-5}$ \\ &16 & 2.044 & 2.784 & 8.500$\times10^{-4}$ \\ &32 & 5.575$\times10^{-2}$ & 2.423$\times10^{-2}$ & 8.424$\times10^{-3}$ \\ &64 & 3.434$\times10^{-6}$ & 1.556$\times10^{-6}$ & 8.624$\times10^{-2}$\\ &80 & 5.918$\times10^{-9}$ & 2.879$\times10^{-9}$ & 1.960$\times10^{-1}$ \\ &128 & 3.197$\times10^{-14}$ & 4.205$\times10^{-13}$ & 8.142$\times10^{-1}$\\ \hline\hline \end{tabular} \end{table} \section{Numerical experiments} \label{sec.num} From this section on, we perform a series of benchmark tests and make a thorough performance evaluation of CHASM. The scalability of our scheme up to 16000 cores is also presented, with details of parallel implementations and computational environments given in Section \ref{sec:parallel}. As the first step, we need to investigate the convergence, stability and mass conservation property of CHASM. 
To this end, we test the quantum harmonic oscillator in 2-D phase space, where the Wigner dynamics reduces to the classical Liouville system and the exact solutions are obtained by solving the Hamiltonian trajectories. We will show that the nonlocal closure PMBC brings in only very small errors and has only a slight influence on the mass conservation when the stencil length $n_{nb}\ge 15$. After that, we turn to evaluate the performance of TKM. The stationary Hydrogen Wigner function of the 1s state, which can be well approximated by FFTs, will be adopted as the initial and reference solution for the Wigner-Coulomb dynamics. Once the numerical accuracy is tested, we are able to study some typical quantum systems, such as the electron dynamics interacting with one or two protons, and reveal the presence of electron-proton coupling, quantum tunneling and the uncertainty principle. The maximal error $\varepsilon_{\infty}(t) =\max_{(\bm{x},\bm{k})\in\mathcal{X}\times \mathcal{K}}\big |f^{\textup{ref}}\left(\bm{x},\bm{k},t\right)-f^{\textup{num}}\left(\bm{x},\bm{k},t\right) \big |$, the $L^2$-error $\varepsilon_{2}(t)= [\iint_{\mathcal{X}\times \mathcal{K}} \left(f^{\textup{ref}}\left(\bm{x},\bm{k},t\right)-f^{\textup{num}}\left(\bm{x},\bm{k},t\right)\right)^{2}\textup{d}\bm{x}\textup{d} \bm{k}]^{\frac{1}{2}}$, and the deviation of total mass $\varepsilon_{\textup{mass}}(t)= |\iint_{\mathcal{X}\times \mathcal{K}} (f^{\textup{num}}\left(\bm{x},\bm{k},t\right)- f^{\textup{ref}}\left(\bm{x},\bm{k},t=0\right))\textup{d}\bm{x}\textup{d} \bm{k}|$ are adopted as the performance metrics, with $f^{\textup{ref}}$ and $f^{\textup{num}}$ the reference and numerical solutions, respectively, and $\mathcal{X}\times \mathcal{K}$ denoting the computational domain. In practice, the integrals are replaced by their discrete counterparts over all grid points. 
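For reference, the three metrics admit a direct grid-based evaluation; the sketch below (an illustration with hypothetical 2-D arrays) replaces the integrals by sums weighted with the cell volume:

```python
import numpy as np

def error_metrics(f_ref, f_num, dx, dk):
    # eps_inf, eps_2 and the mass deviation on a uniform 2-D phase-space grid;
    # the integrals are replaced by sums weighted with the cell volume dx*dk
    diff = f_num - f_ref
    eps_inf = np.max(np.abs(diff))
    eps_2 = np.sqrt(np.sum(diff**2) * dx * dk)
    eps_mass = np.abs(np.sum(diff) * dx * dk)
    return eps_inf, eps_2, eps_mass

# hypothetical data: identical fields must give vanishing errors
x = np.linspace(-1.0, 1.0, 65)
k = np.linspace(-1.0, 1.0, 65)
X, K = np.meshgrid(x, k, indexing="ij")
f = np.exp(-X**2 - K**2)
print(error_metrics(f, f, x[1] - x[0], k[1] - k[0]))
```

For $\varepsilon_{\textup{mass}}(t)$, the first argument would be the reference solution at $t=0$ and the second the numerical solution at time $t$, matching the definition above.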
For a 6-D problem, we adopt the reduced Wigner function projected onto the ($x_j$-$k_j$) plane, namely, $W_j(x, k, t) = \iint_{\mathbb{R}^2 \times \mathbb{R}^2} f(\bm{x}, \bm{k}, t) \D \bm{x}_{\{1,2,3\}\setminus\{j\}} \D \bm{k}_{\{1,2,3\}\setminus\{j\}}$, and the spatial marginal distribution $P(x_1, x_2, t) = \iint_{\mathbb{R} \times \mathbb{R}^3} f(\bm{x}, \bm{k}, t) \D x_3 \D \bm{k}$ for visualizations. \subsection{2-D Quantum harmonic oscillator} The first example is the quantum harmonic oscillator $V(x) = {m \omega x^2}/{2}$, whose ${\rm \Psi} \textup{DO}$ reduces to a first-order derivative, \begin{equation}\label{Wigner_harmonic} \frac{\partial }{\partial t} f(x, k, t) + \frac{\hbar k}{m} \nabla_{x} f(x, k, t) - \frac{1}{\hbar}\nabla_{x} V(x) \nabla_{k} f(x, k, t) = 0. \end{equation} The exact solution is given by $f(x, k, t) = f(x(t), k(t), 0)$, where $(x(t), k(t))$ obeys the (reverse-time) Hamiltonian system ${\partial x}/{\partial t} = -{\hbar k}/{m}, {\partial k}/{\partial t} = {m\omega x}/{\hbar}$ and reads \begin{equation} \begin{split} &x(t) = \cos \left(\sqrt{\omega} t\right) x(0) - \frac{\hbar}{m \sqrt{\omega}} \sin \left(\sqrt{ \omega}t\right) k(0), \\ &k(t) =\frac{m\sqrt{\omega}}{\hbar}\sin \left(\sqrt{\omega} t\right) x(0) + \cos \left(\sqrt{ \omega}t\right) k(0). \end{split} \end{equation} \begin{example} \textup{ Consider a quantum harmonic oscillator $V(x) = m \omega x^2/2$ and an initial Gaussian wavepacket $f_0(x, k) = \pi^{-1} \mathrm{e}^{-\frac{1}{2}(x-1)^2 - 2k^2}$. We choose $\omega = (\pi/5)^2$ so that the wavepacket returns to the initial state at the final time $T = 10$. } \end{example} The computational domain is $\mathcal{X} \times \mathcal{K} = [-12, 12] \times [-6.4, 6.4]$, which is evenly decomposed into 4 patches for the MPI implementation. 
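The choice $\omega = (\pi/5)^2$ makes the test self-checking, since $\sqrt{\omega}\, T = 2\pi$ at $T = 10$ and every characteristic returns to its starting point. A minimal Python sketch of the exact flow (an illustration with $\hbar = m = 1$ and an arbitrary sample point) reads:

```python
import numpy as np

# Exact (reverse-time) characteristics of the harmonic-oscillator Wigner
# equation, quoted from the text with hbar = m = 1 for illustration;
# omega = (pi/5)^2 makes the flow periodic with period T = 10.
omega = (np.pi / 5)**2
sw = np.sqrt(omega)

def flow(x0, k0, t):
    x = np.cos(sw * t) * x0 - np.sin(sw * t) / sw * k0
    k = sw * np.sin(sw * t) * x0 + np.cos(sw * t) * k0
    return x, k

f0 = lambda x, k: np.exp(-0.5 * (x - 1)**2 - 2 * k**2) / np.pi
x0, k0 = 0.7, -0.4                 # an arbitrary phase-space point
xT, kT = flow(x0, k0, 10.0)        # sqrt(omega)*10 = 2*pi: a full revolution
print(abs(xT - x0), abs(kT - k0))
```

Since $f(x, k, t) = f_0(x(t), k(t))$, the wavepacket value is transported unchanged along these trajectories, which is exactly the property exploited by the semi-Lagrangian update.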
The natural boundary condition is adopted at both ends, incurring only a slight loss of mass (about $10^{-13}$) up to $T=10$, while the Neumann boundary condition may lead to artificial wave reflection and exhibits a rapid growth of errors when the wavepacket moves close to the boundary (see Section 2.4 of our supplementary material \cite{XiongZhangShao2022}). Since we mainly focus on the convergence with respect to $\Delta x$ and $n_{nb}$, several groups of simulations under $\Delta x = 0.025, 0.05, 0.1, 0.2,0.3$ and $n_{nb}=10, 15,20,30$ are performed, where the other parameters are set as follows: the time step is $\tau = 0.00002$ and $\Delta k = 0.025$ to achieve a spectrally accurate approximation to ${\rm \Psi} \textup{DO}$. The convergence with respect to $\Delta x$ and the mass conservation under different $n_{nb}$ are given in Figure \ref{harmonic_convergence_LPC1}. From the results, we can make the following observations. \begin{figure}[!h] \centering \subfigure[$\varepsilon_{\infty}(t)$ under $n_{nb}=10$ and different $\Delta x$.\label{harmonic_comp_dx_nb10}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_maxerr_LPC1_nb10_dx.pdf}}} \subfigure[$\varepsilon_{\infty}(t)$ under $n_{nb}=20$ and different $\Delta x$.\label{harmonic_comp_dx_nb20}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_maxerr_LPC1_nb20_dx.pdf}} \\ \centering \subfigure[$\varepsilon_{\infty}(t)$ under $\Delta x= 0.1$ and different $n_{nb}$.\label{1s_maxerr}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_maxerr_LPC1_nb.pdf}}} \subfigure[$\varepsilon_{2}(t)$ under $\Delta x= 0.1$ and different $n_{nb}$.\label{1s_L2err}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_L2err_LPC1_nb.pdf}} \\ \centering \subfigure[Convergence with respect to $\Delta x$.\label{convergence_LPC1}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./convergence_LPC1.pdf}} \subfigure[Evolution of 
$\varepsilon_{\textup{mass}}(t)$.\label{mass_LPC1}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./LPC1_mass_nb.pdf}} \caption{\small 2-D quantum harmonic oscillator: The convergence and mass conservation of LPC1. LPC1 can achieve fourth-order convergence in $\Delta x$. PMBC brings in additional errors and causes a slight loss of mass, but fortunately both are almost eliminated when $n_{nb} \ge 20$.\label{harmonic_convergence_LPC1}} \end{figure} {\bf Convergence with respect to $\Delta x$:} The convergence rate is plotted in Figure \ref{convergence_LPC1}. LPC1 achieves fourth-order convergence in space when $n_{nb} \ge 15$, in accordance with the theoretical order of the cubic spline interpolation, while a reduction in convergence is observed when $n_{nb} = 10$ because of the truncated stencils in Eq.~\eqref{truncation}. {\bf Influence of PMBCs:} From Figures \ref{harmonic_comp_dx_nb10} and \ref{harmonic_comp_dx_nb20}, one can see that $n_{nb} = 10$ brings in additional errors of only about $10^{-5}$. Such errors seem to be negligible when $n_{nb} \ge 15$, which coincides with the observations made in \cite{MalevskyThomas1997}. However, the truncation of stencils indeed has a great influence on the mass conservation, as seen in Figure \ref{mass_LPC1}, where $\varepsilon_{\textup{mass}}$ is about $10^{-6}$ when $n_{nb}=10$ or $10^{-9}$ when $n_{nb}=15$. Fortunately, its influence on the total mass can be nearly eliminated when $n_{nb} \ge 20$. {\bf Numerical stability:} The first-order derivative in Eq.~\eqref{Wigner_harmonic} brings in strong numerical stiffness and puts a severe restriction on the time step $\tau$ in CHASM. Nevertheless, we have observed in \cite{XiongZhangShao2022} that LPC1 is more stable than both the splitting scheme, as has also been pointed out in \cite{CrouseillesEinkemmerMassot2020}, and the multi-stage non-splitting scheme. 
Actually, LPC1 turns out to be stable up to $T=20$ under a much larger time step $\tau =0.0005$, while the Strang operator splitting becomes unstable under the same setting (see Section 4.1 of our supplementary material \cite{XiongZhangShao2022}). \subsection{Hydrogen Wigner function: 1s state} We now evaluate the performance of CHASM in 6-D problems. The Hydrogen Wigner function is very useful for dynamical testing as it is a stationary solution of the Wigner equation. For the 1s orbital, $\phi_{\textup{1s}}(\bm{x}) = \frac{1}{2\sqrt{2} \pi^2} \exp( - |\bm{x}|)$, the Wigner function is given by Eq.~\eqref{def.Wigner_function} with $\rho(\bm{x}_1, \bm{x}_2) = \phi_{\textup{1s}}(\bm{x}_1) \phi_{\textup{1s}}^\ast(\bm{x}_2)$. Although an explicit formula is too complicated to obtain, the 1s Hydrogen Wigner function can be approximated with high accuracy by the discrete Fourier transform of Eq.~\eqref{def.Wigner_function}: For $\bm{k}_{\bm{\zeta}} = \bm{\zeta} \Delta k$, \begin{equation*} f_{\textup{1s}}(\bm{x}, \bm{k}_{\bm{\zeta}}) \approx \sum_{\eta_1 = -\frac{N_{y}}{2}}^{\frac{N_{y}}{2}-1} \sum_{\eta_2 = -\frac{N_{y}}{2}}^{\frac{N_{y}}{2}-1} \sum_{\eta_3 = -\frac{N_{y}}{2}}^{\frac{N_{y}}{2}-1} \phi_{\textup{1s}}(\bm{x} - \frac{\bm{\eta} \Delta y}{2}) \phi_{\textup{1s}}^\ast(\bm{x} + \frac{\bm{\eta} \Delta y}{2}) \mathrm{e}^{- \mathrm{i} (\bm{\zeta} \cdot \bm{\eta} ) \Delta k \Delta y } (\Delta y)^3. \end{equation*} By taking $\Delta y = \frac{2\pi}{N_k \Delta k}$, it can be realized by FFT (we use $N_y =128$). The spatial density of the 1s orbital on the $(x_1$-$x_2$) plane and the reduced Wigner function $W_1(x, k)$ projected on the $(x_1$-$k_1$) plane are visualized in Figures \ref{1s_xdist} and \ref{1s_wigner}, respectively.
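As a sanity check of this discrete Fourier realization, the same quadrature can be written down in one dimension, where the Wigner transform of a unit-width Gaussian wavepacket is known in closed form, $W(x,k)=\pi^{-1}\mathrm{e}^{-x^2-k^2}$. The sketch below is illustrative only (it uses the conventional $(2\pi)^{-1}$ prefactor and an explicit phase matrix instead of FFT for clarity; the grid sizes are not those of the paper):

```python
import numpy as np

def wigner_1d(psi, x, n_eta, dy):
    """1-D analog of the quadrature above: a discrete Fourier sum of the
    correlation psi(x - y/2) * conj(psi(x + y/2)) over offsets y = eta * dy."""
    eta = np.arange(-n_eta // 2, n_eta // 2)
    y = eta * dy
    dk = 2.0 * np.pi / (n_eta * dy)            # conjugate spacing Delta k
    k = eta * dk
    corr = psi(x[:, None] - 0.5 * y[None, :]) * np.conj(psi(x[:, None] + 0.5 * y[None, :]))
    phase = np.exp(-1j * np.outer(y, k))       # e^{-i k y}; FFT-realizable
    return k, (corr @ phase).real * dy / (2.0 * np.pi)

# Gaussian wavepacket: the exact Wigner function is exp(-x^2 - k^2) / pi
psi = lambda x: np.pi ** -0.25 * np.exp(-0.5 * x ** 2)
xg = np.linspace(-3.0, 3.0, 13)
k, W = wigner_1d(psi, xg, 128, 0.25)
W_exact = np.exp(-xg[:, None] ** 2 - k[None, :] ** 2) / np.pi
```

Replacing the explicit phase matrix by an FFT, with the appropriate shift of the $\eta$ index, gives the $\mathcal{O}(N\log N)$ version that is used in practice.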
\begin{figure}[!h] \centering \subfigure[$P(x_1, x_2)$ for 1s orbital.\label{1s_xdist}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_orbit.pdf}}} \subfigure[ $W_1(x, k)$ for 1s orbital.\label{1s_wigner}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_Wigner_init.pdf}}} \\ \centering \subfigure[The heavy tail in momentum space.\label{1s_tail}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_kdist.pdf}}} \subfigure[$W_1^{\textup{num}} - W_1^{\textup{ref}}$ at $t = 5$a.u. ($N_k = 64$).\label{1s_error_visualization_Nk64}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_err_LPC_Nk64.pdf}} \caption{\small The Hydrogen 1s Wigner function: A visualization of the Hydrogen 1s orbital, the reduced Hydrogen 1s Wigner function $W_1(x, k)$ and the numerical errors $W_1^{\textup{num}} - W_1^{\textup{ref}}$ at $t = 5$a.u. Small errors are observed near the $\bm{k}$-boundary as $f_{\textup{1s}}(\bm{x}, \bm{k})$ has a heavy tail in $\bm{k}$-space, which influences the convergence rate of TKM and the mass conservation. \label{1s_visualization}} \end{figure} The storage of a 6-D grid mesh requires a tremendous amount of computer memory and hinders benchmarks on very fine grids. To alleviate this problem, we adopt SINGLE precision to halve the memory, which is adequate for cubic spline interpolations, but still adopt DOUBLE precision for TKM. The computational domain is $\mathcal{X} \times \mathcal{K} = [-9, 9]^3 \times [-6.4, 6.4]^3$ with a fixed spatial spacing $\Delta x = 0.3$ ($N_{x} = 61$), where the accuracy of the spline interpolation has already been tested in the 2-D example. The natural boundary condition is again adopted at the two ends. We mainly investigate the convergence of TKM with respect to $N_k$ in five groups: $N_k = 8, 16, 32, 64,80$ ($ \Delta k = 1.6, 0.8, 0.4, 0.2,0.16$).
The domain is evenly divided into $4\times 4 \times 4$ patches and distributed over $64$ processors, and each processor provides 4 threads for shared-memory parallelization using the OpenMP library. Other parameters are set as: the stencil length in PMBC is $n_{nb} = 15$ and the time step is $\tau = 0.025$. The numerical convergence and the deviation in total mass of LPC1 are presented in Figure \ref{1s_convergence_mass_LPC}, and the numerical errors of the reduced Wigner function $W_{1}^{\textup{num}} - W_{1}^{\textup{ref}}$ under $N_k = 64$ are visualized in Figure \ref{1s_error_visualization_Nk64}. From the results, we can make the following observations. \begin{figure}[!h] \centering \subfigure[Evolution of $\varepsilon_{\infty}(t)$ under different $N_k$.] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_maxerr_LPC.pdf}} \subfigure[Evolution of $\varepsilon_{2}(t)$ under different $N_k$.] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_L2err_LPC.pdf}} \\ \centering \subfigure[Convergence with respect to $N_k$. \label{1s_convergence}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_LPC_convergence.pdf}} \subfigure[Evolution of $\varepsilon_{\textup{mass}}(t)$.\label{1s_mass}] {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_mass_LPC.pdf}} \caption{\small The Hydrogen 1s Wigner function: The performance of TKM under different $\Delta k$, with $\Delta x = 0.3$. The convergence of TKM is verified, albeit with a lower convergence rate due to errors caused by the spatial spline interpolation and the heavy tail of $f_{\textup{1s}}(\bm{x}, \bm{k})$ in $\bm{k}$-space.\label{1s_convergence_mass_LPC}} \end{figure} {\bf Convergence with respect to $\Delta k$:} The convergence of TKM is clearly verified in Figure \ref{1s_convergence}, albeit the convergence rate is slower than expected due to the mixture of various error terms.
Nonetheless, CHASM can still achieve $\varepsilon_{\infty}(5) = 1.11\times10^{-3}$ and $\varepsilon_{2}(5) = 4.706\times 10^{-3}$ under the $61^3 \times 64^3$ grid mesh, where $\max(|f_{\textup{1s}}(\bm{x}, \bm{k})|) =1/\pi^3\approx 3.23\times 10^{-2}$. These metrics further reduce to $\varepsilon_{\infty}(5) = 9.48\times 10^{-4}$ and $\varepsilon_{2}(5) = 4.02\times 10^{-3}$ when $N_k = 80$. We have also tested the Strang splitting scheme for $N_k = 64$ and obtained $\varepsilon_{\infty}(5) = 2.0\times 10^{-3}$, $\varepsilon_{2}(5) = 7.0\times 10^{-3}$, which are significantly larger than the results of LPC1 (see Section 4.2 of our supplementary material \cite{XiongZhangShao2022}). {\bf Deviation of total mass}: A slight deviation of the total mass is observed due to the breaking of Eq.~\eqref{mass_conserve}. From Figure \ref{1s_mass}, one can see that $\varepsilon_{\textup{mass}}(5)$ of LPC1 is $0.66\%$, while that of the Strang splitting is $1.35\%$ (see Section 4.2 of \cite{XiongZhangShao2022}). Two reasons may explain the above observations. On the one hand, $f_{\textup{1s}}(\bm{x}, \bm{k})$ exhibits a heavy tail in $\bm{k}$-space. In Figure \ref{1s_tail}, the reduced Wigner function $W_1(\bm{x}, \bm{k})$ is about $10^{-3}$ near the $\bm{k}$-boundary, indicating that $f_{\textup{1s}}(\bm{x}, \bm{k})$ is not truly compactly supported in $[-6.4, 6.4]^3$. Thus the overlap with the periodic image may produce small oscillations near the $\bm{k}$-boundary, as also visualized in Figure \ref{1s_error_visualization_Nk64}. On the other hand, the solution might also be contaminated by the spatial interpolation errors, which are about $10^{-3}$ for $\Delta x = 0.3$ and $T = 5$a.u. as presented in Figure~\ref{harmonic_comp_dx_nb20}.
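For concreteness, metrics of this kind can be mirrored in a few lines. Their precise definitions are given earlier in the paper, so the discrete forms below (sup-norm and volume-weighted $\ell^2$-norm of the deviation, plus a relative mass deviation) are an assumption of this sketch, not the authors' exact formulas:

```python
import numpy as np

def error_metrics(W_num, W_ref, dvol):
    """Assumed discrete forms: eps_inf = max|W_num - W_ref| and
    eps_2 = sqrt(sum (W_num - W_ref)^2 * dvol) over the phase-space grid."""
    diff = np.asarray(W_num) - np.asarray(W_ref)
    return np.max(np.abs(diff)), np.sqrt(np.sum(diff ** 2) * dvol)

def mass_deviation(W_num, W_ref, dvol):
    """Relative deviation of the total mass, sum(W) * dvol, from the reference."""
    m_num = np.sum(W_num) * dvol
    m_ref = np.sum(W_ref) * dvol
    return abs(m_num - m_ref) / abs(m_ref)

# small synthetic check on a 4 x 4 grid with cell volume 0.1
W_ref = np.ones((4, 4))
W_num = W_ref + 0.5 * np.eye(4)
eps_inf, eps_2 = error_metrics(W_num, W_ref, dvol=0.1)
mdev = mass_deviation(W_num, W_ref, dvol=0.1)
```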
\subsection{Electron dynamics interacting with one proton} \begin{figure}[!h] \centering \subfigure[$W_1(x, k, t)$ (left) and $W_2(x, k, t)$ (right) at $t=1$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_x_0010.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_y_0010.pdf}}} \subfigure[$P(x_1, x_2, t)$ at $t=0.5$a.u.\label{xdist_t005}]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_QCS_xy_0005.pdf}}} \centering \subfigure[$W_1(x, k, t)$ (left) and $W_2(x, k, t)$ (right) at $t=2$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_x_0020.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_y_0020.pdf}}} \subfigure[$P(x_1, x_2, t)$ at $t=1$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_QCS_xy_0010.pdf}}} \\ \centering \subfigure[$W_1(x, k, t)$ (left) and $W_2(x, k, t)$ (right) at $t=4$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_x_0040.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_y_0040.pdf}}} \subfigure[$P(x_1, x_2, t)$ at $t=2$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_QCS_xy_0020.pdf}}} \\ \centering \subfigure[$W_1(x, k, t)$ (left) and $W_2(x, k, t)$ (right) at $t=8$a.u.\label{xdist_t050}]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_x_0080.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_y_0080.pdf}}} \subfigure[$P(x_1, x_2, t)$ at $t=5$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_QCS_xy_0050.pdf}}} \\ \centering \subfigure[$W_1(x, k, t)$ (left) and $W_2(x, k, t)$ (right) at $t=12$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_x_0120.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_QCS_y_0120.pdf}}} \subfigure[$\langle 
x_1(t)\rangle$ and $\langle k_1(t)\rangle$.\label{averaged_pos_wvn}]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./avg_position_momentum.pdf}}} \caption{\small Electron-proton interaction: Snapshots of the reduced Wigner functions on the $(x_1$-$k_1)$ plane (left) and on the $(x_2$-$k_2)$ plane (middle), the spatial marginal distribution (right), and the averaged position and momentum. \label{qcs_time_evolution}} \end{figure} With the above preparations, we can simulate several typical quantum systems and try to reveal the proton-electron coupling and the uncertainty principle under the Wigner function representation. The following example is motivated by the strong-field ionization process studied in \cite{TianWangEberly2017,HanGeFangYuGuoMaDengGongLiu2019}. The computational domain $[-9, 9]^3 \times [-4.8, 4.8]^3$ under a $81^3 \times 64^3$ uniform grid is decomposed into $4^3$ patches with $n_{nb} = 15$. The time step is $\tau = 0.025$. \begin{example}\label{example_one_proton} \textup{ Consider an electron interacting with a proton fixed at $(0, 0, 0)$. The initial condition is $f_0(\bm{x}, \bm{k}) = \pi^{-3} \mathrm{e}^{-\frac{1}{2} ((x_1-1)^2 + x_2^2 + x_3^2) - 2(k_1^2 + k_2^2 + k_3^2)}$, where the Gaussian wavepacket describes the coherent state. } \end{example} {\bf Spatial anharmonic oscillation:} As presented in the third column of Figure~\ref{qcs_time_evolution}, the electron wavepacket is soon attracted by the proton and then oscillates near the origin, presenting an evident anharmonic oscillation pattern in position space under the Coulomb interaction. We record the average position $\langle x_1(t) \rangle$ and momentum $\langle k_1(t) \rangle$ in Figure \ref{averaged_pos_wvn} and indeed observe that the amplitude of the oscillations damps away in time, which is distinct from the harmonic trajectories.
{\bf Uncertainty principle:} The time evolutions of $W_1(x, k, t)$ and $W_2(x, k, t)$ are plotted in the first two columns of Figure \ref{qcs_time_evolution}. Since the electron initially deviates from the origin in the $x_1$-direction, $W_1(x, k, t)$ exhibits a highly asymmetric pattern and becomes increasingly oscillatory. The uncertainty principle is visualized by the negative parts of the Wigner function, which seem to be concentrated on the region opposite to the moving direction. By contrast, $W_2(x, k, t)$ is always symmetric, and only small negative components are observed. \subsection{$H^+_2$ system: Electron dynamics interacting with two protons} A more challenging problem is to put an electron in the delocalized potential produced by two protons, motivated by the Hydrogen tunneling phenomenon \cite{PakHammesSchiffer2004}. The computational domain is $[-9, 9]^3 \times [-4.8, 4.8]^3$ with a $61^3 \times 64^3$ uniform grid mesh, which is decomposed into $4\times 4\times 4$ patches with $n_{nb} = 15$. \begin{example} \textup{ Suppose there are two protons with fixed positions $\bm{x}_A^- = (-R, 0, 0)$ and $\bm{x}_A^+ = (R, 0, 0)$, $R = 0.614161$a.u. (0.325 Angstrom), so that the potential is $V(\bm{x}) = -\frac{1}{|\bm{x} - \bm{x}_A^-|} - \frac{1}{|\bm{x} - \bm{x}_A^+|}$. The initial Gaussian wavepacket is set as $f_0(\bm{x}, \bm{k}) = \pi^{-3} \mathrm{e}^{-\frac{1}{2} \left(x_1^2 + x_2^2 + x_3^2\right) - 2 \left(k_1^2 + k_2^2 + k_3^2\right)}$.
} \end{example} \begin{figure}[!h] \centering \subfigure[$W_1(x, k, t)$ (left), $W_2(x, k, t)$ (middle) and $P(x_1, x_2, t)$ (right) at $t=1$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_x_010.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_y_010.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./H2_xdist_x_0010.pdf}}} \centering \subfigure[$W_1(x, k, t)$ (left), $W_2(x, k, t)$ (middle) and $P(x_1, x_2, t)$ (right) at $t=2$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_x_020.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_y_020.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./H2_xdist_x_0020.pdf}}} \\ \centering \subfigure[$W_1(x, k, t)$ (left), $W_2(x, k, t)$ (middle) and $P(x_1, x_2, t)$ (right) at $t=4$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_x_040.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_y_040.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./H2_xdist_x_0040.pdf}}} \\ \centering \subfigure[$W_1(x, k, t)$ (left), $W_2(x, k, t)$ (middle) and $P(x_1, x_2, t)$ (right) at $t=8$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_x_080.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_y_080.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./H2_xdist_x_0080.pdf}}} \\ \centering \subfigure[$W_1(x, k)$ (left) and $W_2(x, k)$ (right) at $t=12$a.u.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_x_120.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./redist_H2_y_120.pdf}}} \subfigure[Projection on $x_1$ direction.\label{xdist_marginal}]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_evolution.pdf}}} \caption{\small $H^+_2$ system: Snapshots of the reduced 
Wigner functions on the $(x_1$-$k_1)$ plane (left) and on the $(x_2$-$k_2)$ plane (middle), and the spatial marginal distribution (right). \label{H2_time_evolution} } \end{figure} {\bf Spatial concentration}: The time evolutions of $P(x_1, x_2, t)$ are plotted in Figure \ref{H2_time_evolution}. In particular, Figure \ref{xdist_marginal} gives the projection of $P(x_1, x_2, t)$ onto the $x_1$-direction, i.e., $\int_{\mathbb{R}} P(x_1, x_2, t) \D x_2$. It is seen that the electron is almost trapped in the field produced by the two delocalized protons, and the wavepacket at $t = 1$a.u. is evidently more concentrated near the origin than the initial Gaussian. The peak of the spatial marginal distribution reaches its maximum at $t = 2$a.u. Afterward, it gradually descends until $8$a.u., and begins to oscillate around a stable level. Clearly, the spatial marginal distribution has a fatter tail compared with the initial Gaussian profile. {\bf Quantum tunneling}: In fact, the spatial concentration seems to be an outcome of the quantum uncertainty and tunneling. From the reduced Wigner functions in Figure \ref{H2_time_evolution}, one can see that (1) the electron has a certain probability to escape from the attractive potentials of the two protons; (2) the quantum Coulomb interactions produce some negative regions, indicating that the electron with certain momenta is forbidden to escape; (3) the concentration of $P(x_1, x_2, t)$ seems to be related to the negative parts of the Wigner function as they ``squeeze'' the Gaussian wavepacket inside and force the electron to occupy the centre region with larger probability, while the heavy tail corresponds to the wavepacket that escapes from the attractive potentials. \subsection{Implementation and parallelization} \label{sec:parallel} Finally, we provide details of the parallel implementation in Table \ref{cpu_time}, including the memory requirement for storing a 6-D tensor in single precision, the computational time, and the corresponding platform.
All the simulations are performed via our own Fortran implementation, with a mixture of the MPI and OpenMP libraries to realize distributed and shared-memory parallelization, respectively, and the domain is decomposed into $4^3$ patches ($2^3$ patches for the group with mesh size $41^3 \times 32^3$). Note that the simulations under the mesh size $41^3 \times 32^3$ or $61^3 \times 32^3$ can be performed on a single computer without any difficulty in data storage, while the other groups have to be run on multiple computers due to the severe limitation of memory. \begin{table}[!h] \centering \caption{\small The memory requirement of storing a 6-D tensor of size $N_x^3 \times N_k^3$ in single precision, the computational time of the LPC1 scheme up to $T = 5$a.u. ($\tau = 0.025$a.u., 200 steps) and the corresponding running platform. \label{cpu_time}} \begin{lrbox}{\tablebox} \begin{tabular}{c|c|c|c|c} \hline\hline $N_x^3 \times N_k^3$ & Memory & High-performance Computing Platform & Cores &Time (h)\\ \hline $41^3 \times 32^3$ & $8.41$GB & AMD 5950X (3.40GHz, 16C32T), 128GB Memory &32 &13.27\\ $61^3 \times 32^3$ & $27.71$GB & AMD 2990WX (3.00GHz, 32C64T), 256GB Memory &64 &66.16\\ $61^3 \times 64^3$ & $274.88$GB & E5-2697A v4 (2.60GHz,16C32T), 256GB Memory $\times 8$ &256 &66.79\\ $61^3 \times 80^3$ & $432.93$GB & E5-2697A v4 (2.60GHz,16C32T), 256GB Memory $\times 8$ &256 &88.67\\ $81^3 \times 64^3$ & $557.26$GB & E5-2680 v4 (2.40GHz,14C28T), 256GB Memory $\times 16$ &448 &66.13\\ \hline\hline \end{tabular} \end{lrbox} \scalebox{0.82}{\usebox{\tablebox}} \end{table} We have also tested the scalability of CHASM up to 1000 nodes and 16 threads per task (16000 cores in total) by simulating a one-step Euler integration under the grid mesh $61^3 \times 16^3$. The speedup ratio is presented in Figure \ref{fig_speed_ratio}.
CHASM achieves a speedup ratio of at least $53.84\%$ under the $10\times 10 \times 10$ decomposition, where the calculation of ${\rm \Psi} \textup{DO}$ occupies most of the computational time. Since the nonlocal calculation turns out to be the bottleneck in complexity, which scales as $\mathcal{O}(N_k^3\log N_k)$ according to Table \ref{TKM_convergence_data}, it is expected that CHASM can achieve a higher speedup ratio as $N_k$ increases. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./speedup_ratio.pdf} \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./time_component.pdf} \caption{\small Parallelization: CHASM achieves a speedup ratio of at least $53.84\%$ with the grid mesh $61^3\times 16^3$ distributed over $1000$ nodes, which is further boosted when a larger $N_k$ is used. \label{fig_speed_ratio}} \end{figure} \section{Conclusion and discussion} \label{sec.discussion} Numerical algorithms for the high-dimensional Wigner equation have drawn growing attention, but the lack of reliable reference solutions poses a major bottleneck to their design and evaluation. For 6-D Wigner-Coulomb dynamics, we propose a massively parallel scheme, termed CHAracteristic-Spectral-Mixed (CHASM). It exploits local spline interpolation and the truncated kernel method to tackle the local spatial advection and the nonlocal pseudodifferential operator with weakly singular symbol, respectively. CHASM may provide accurate references for a relatively new branch of particle-based stochastic Wigner simulations \cite{KosinaNedjalkovSelberherr2003,MuscatoWagner2016,ShaoXiong2019}, which may potentially be extended to realistic many-body quantum systems ($D = 12$) and further overcome the curse of dimensionality.
It is worth mentioning that the proposed scheme can be straightforwardly applied to other 6-D problems, including the Vlasov equation \cite{CrouseillesLatuSonnendrucker2009,Kormann2015,KormannReuterRampp2019} and the Boltzmann equation \cite{DimarcoLoubereNarskiRey2018}, due to their strong similarities. In addition, several issues, including the generalization of CHASM to the fully nonlinear Wigner-Poisson-Boltzmann equation and the GPU implementation, will be discussed in our future work. \section*{Acknowledgement} This research was supported by the National Natural Science Foundation of China (No.~1210010642), the Projects funded by the China Postdoctoral Science Foundation (No.~2020TQ0011, 2021M690227) and the High-performance Computing Platform of Peking University. SS is partially supported by the Beijing Academy of Artificial Intelligence (BAAI). The authors are sincerely grateful to Haoyang Liu and Shuyi Zhang at Peking University for their technical support on the computing environment, which has greatly facilitated our numerical simulations.
\section{Introduction} Swimmers are abundant from microscopic to macroscopic length scales in nature, such as algae\cite{Ringo543,Polin487}, bacteria\cite{Blair1995,Berg1973}, spermatozoa\cite{GRAY775,SHACK1974555,Woolley01082003}, {\it C. elegans}, fishes, etc. Self-propulsion helps them obtain food, avoid various chemical toxins, reach the female reproductive egg, and carry out several other biological processes in complex environments. Typically, motile organisms live near surfaces, which makes them vulnerable to external perturbations, especially to a fluid flow\cite{Tung:softmatt2014,lauga2009hydrodynamics}. The interplay between the propulsive force and flow-induced motility is referred to as rheotaxis in the literature\cite{uspal2015rheotaxis,kantsler2014rheotaxis,rosengarten1988rheotactic}. Understanding the behavior of self-propelled organisms near a surface is crucial both for various bio-applications and as a fundamental quest\cite{gao2012cargo}. Natural microswimmers play a significant role in various biological processes; therefore, their dynamics and structure are subjects of immense research interest\cite{Berke:prl:2008,sabass:prl:2010,Schaar:PRL:2015,elgeti2015run,tournus2015flexibility,das2018confined,uspal2015rheotaxis,daddi2018swimming,ledesma2012circle,Elgeti2016,potomkin2017focusing,omori2016upward,tao2010swimming,de2016understanding,pagonabarraga2013structure,zhang:acs:2010,Hill:PRL2007,KAYABiophysical:2012,Yuan:PNAS2015,kantsler2014rheotaxis}. On the other hand, artificially designed microswimmers can be used as potential model systems for targeted delivery in pharmaceutical applications\cite{Howseprl2007,Paxton2006,palacci2015artificial}. Microswimmers' physical behavior can be influenced substantially by external perturbations\cite{Nili:rsc:2017,Chilukuri:jpcm:2014,ezhilan_saintillan_2015}.
This can lead them to move against the stream near surfaces\cite{Nili:rsc:2017,Chilukuri:jpcm:2014,ezhilan_saintillan_2015,Bretherton490,Zhang:srep:2016,katuri2018cross,rosengarten1988rheotactic,son2015live,rusconi2014bacterial,kaya2009characterization,meng2005upstream}. In living matter, swimming against the flow is common in nature, specifically for fishes\cite{Montgomery:nat:1997}, {\it C. elegans}\cite{Yuan:PNAS2015}, {\it E. Coli}, sperms\cite{kantsler2014rheotaxis}, etc. The main difference between the mechanisms of swimming at macroscopic and microscopic length scales is the intervention of visual and tactile sensory cues\cite{Montgomery:nat:1997,ARNOLD:1974} in the former case, while in the latter case the motion is driven by the interplay of various physical interactions\cite{Marcos:NAS2012}. The physical reason behind the upstream motility is attributed to shear-induced orientation, active stresses, a reduction in local viscosity, and inhomogeneous hydrodynamic drag\cite{Berke:prl:2008,Guanglai:PRL2009,lin2000direct}. In the literature, simple yet effective models have been proposed to unravel the dynamics of microswimmers near surfaces\cite{Nili:rsc:2017,Chilukuri:jpcm:2014,ezhilan_saintillan_2015,son2015live,rusconi2014bacterial,kaya2009characterization,meng2005upstream,de2016understanding,najafi2004simple,pande2015forces,babel2016dynamics}. Despite their simplicity, they are able to capture various complex behaviors of microswimmers, such as surface accumulation\cite{Elgeti:2013,Elgeti:2009,Elgeti2016}, upstream swimming\cite{ezhilan_saintillan_2015,Chilukuri:jpcm:2014,Nili:rsc:2017}, flow-induced angular alignment\cite{Tung:PRL2015,martin2018active}, etc. The population splitting from a unimodal to a bimodal phase has also been reported in terms of chirality and angular speed\cite{Nili2018}. Further, the upstream motion can be regulated using a viscoelastic fluid\cite{Mathijssen:PRL2016}.
The propulsion mechanism changes the accumulation and angular alignment near walls; more specifically, puller swimmers point orthogonally towards the wall, whereas pusher or neutral swimmers tend to align along the wall\cite{Malgarettijcp:2017}. An external flow has a tendency to suppress the excess adsorption of dimer-like swimmers on the surfaces\cite{Chilukuri:jpcm:2014}. In this article, we attempt to provide a thorough study of slender motile objects near a solid interface subjected to flow. The influence of flow on the weakening of the accumulation and the angular distribution near the surfaces is addressed in an elaborate manner. We incorporate hydrodynamic interactions in our model, which is crucial in the study of active matter systems. The hydrodynamic interactions can induce an effective attraction between the wall and an elongated swimmer\cite{Elgeti:2009,pedley1992hydrodynamic,berke2008hydrodynamic}. In previous studies, long-range correlations among the solvent, swimmers, and solid boundaries were not taken into account\cite{Nili:rsc:2017,ezhilan_saintillan_2015}. We consider a simulation model that incorporates an explicit solvent based mesoscale approach known as multi-particle collision dynamics (MPC)\cite{Malevanets:jcp:1999,Kapral:2008,Gompper2009} combined with molecular dynamics (MD). The dilute suspension of active filaments exhibits an enhancement in average density near a solid boundary with an increase in P{\'e}clet number, while fluid flow leads to desorption of the swimmers. The density variation of the swimmers is quantified in terms of short-time diffusion, the alignment of a rod-like swimmer across the channel, and the residence time near the wall and in the bulk. The swimmers align (anti-align) parallel to the flow at the top (bottom) wall. The shear-induced orientational alignment exhibits a non-monotonic behavior on increasing the flow rate, which is attributed to the blindness of polarity at higher flow rates.
The majority and minority populations at the walls display orientation switching. In the shear-dominant regime, the population splits from a unimodal to a bimodal phase. The orientational moment near the surfaces shows a power-law variation at low flow strength with an exponent slightly smaller than that found in the bulk; it also exhibits a power-law scaling with the P{\'e}clet number in the large-$Pe$ regime. The organisation of the paper is as follows: In Section 2, the simulation methodology of the self-propelled filaments and the fluid is discussed. Results are presented in Section 3, with a discussion of the competition among the flow, confinement, and active forces. We summarise our study in Section 4. \section{Model} In this section, we present the simulation method adopted for active filaments in solution. First, the modelling of an active filament is presented; subsequently, the implementation of a coarse-grained model for the solvent is introduced. A schematic display of the system, confined along the y-direction, is shown in Fig.~\ref{Fig:model}. In the other two spatial directions (x and z), periodic boundary conditions are applied. \begin{figure} \includegraphics[width=\linewidth]{model12} \caption{A schematic picture of active filaments confined between walls. The bottom wall is in grey and the top wall is shown in light blue for better visibility. A red bead marks the head of a swimmer, and thus its direction of polarity. The arrow in the right diagram displays the direction of flow.} \label{Fig:model} \end{figure} \subsection{Active Filament} We consider $N_p$ filaments, where each filament consists of a linear sequence of $N$ monomeric units connected via spring and bending potentials. Thus a total of $N_t=N \times N_p$ monomers are present in the solution. An excluded volume interaction among the monomers and with the walls is also taken into account. The total potential energy of a filament is written as $U = U_{sp} + U_{b} + U_{LJ}+U_{w}$.
Here $U_{sp}$, $U_b$, $U_{w}$, and $U_{LJ}$ are the harmonic, bending, wall, and repulsive Lennard-Jones (LJ) potentials, respectively. The harmonic and bending potentials for the $j^{th}$ filament are given as, \begin{equation} U^{j}_{sp} + U^{j}_{b}= \frac{k_{s}}{2} \sum_{i=1}^{N-1}(|{\bf R}_{i}|-l_{0})^2 + \frac{\kappa}{2} \sum_{i=1}^{N-2}({\bf R}_{i+1}- {\bf R}_{i})^2, \label{bond_bend} \end{equation} where $l_0$ is the equilibrium bond length, $R_i$ is the length of the $i^{th}$ bond vector, $R_i= |{\bf r}_{i+1}-{\bf r}_{i}|$ with ${\bf r}_i$ being the position vector of the $i^{th}$ monomer, and $k_s$ and $\kappa$ are the spring constant and bending rigidity of the filament, respectively. The excluded volume potential $U_{LJ}$ is modelled as the repulsive part of the LJ potential for short distances, i.e., $R_{ij} < 2^{1/6}\sigma$, among all monomers, \begin{equation} U_{LJ} = \sum_{i=1}^{N_t-1} \sum_{j=i+1}^{N_t} 4 \epsilon \left[\left(\frac{\sigma}{R_{ij}}\right)^{12}- \left( \frac{\sigma}{R_{ij}}\right)^{6} + \frac{1}{4} \right], \label{Eq:LJ} \end{equation} and for $R_{ij} \geq 2^{1/6}\sigma$, $U_{LJ}$ is taken to be zero. Here, $\epsilon$ and $\sigma$ are the LJ interaction energy and the diameter, respectively. Interactions between the walls and monomers ($U_{w}$) are treated in the same manner as in Eq.~\ref{Eq:LJ}, to constrain the filaments between the walls. A monomer feels the repulsive force from a boundary wall when it reaches within a distance $2^{1/6}\sigma/2$ from the wall. The self-propulsion is achieved by imposing a tangential force along each bond vector of the filament; thus the force on the $i^{th}$ filament can be written as $F_a^{i}= \sum_{j=1}^{N-1}f_a {\hat t}({\bf R}_j) $\cite{isele2015self,anand2018structure}, where $f_a$ and ${\hat t}({\bf R}_j)$ denote the strength of the active force and the $j^{th}$ unit tangent vector, respectively.
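The bonded energies and the tangential active force described above can be sketched directly for a single filament. The snippet below is a minimal NumPy transcription for illustration (parameter values are placeholders; it is not the production simulation code):

```python
import numpy as np

def filament_energy_force(r, k_s, kappa, l0, f_a):
    """Spring/bending energies and the tangential active force
    F_a = f_a * sum_j t(R_j) for one filament with positions r of shape (N, 3)."""
    bonds = r[1:] - r[:-1]                     # bond vectors R_i = r_{i+1} - r_i
    blen = np.linalg.norm(bonds, axis=1)
    u_sp = 0.5 * k_s * np.sum((blen - l0) ** 2)            # harmonic part
    u_b = 0.5 * kappa * np.sum((bonds[1:] - bonds[:-1]) ** 2)  # bending part
    tangents = bonds / blen[:, None]           # unit tangents t(R_j)
    f_active = f_a * tangents.sum(axis=0)      # net propulsion along the polarity
    return u_sp, u_b, f_active

# straight filament along x with bond length l0: both energies vanish,
# and the net active force has magnitude (N - 1) * f_a along the axis
r = np.arange(10.0)[:, None] * np.array([1.0, 0.0, 0.0])
u_sp, u_b, f_act = filament_energy_force(r, k_s=1000.0, kappa=5000.0, l0=1.0, f_a=1.0)
```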
\subsection{MPC fluid} The MPC method is also known as the stochastic rotation dynamics approach\cite{Malevanets:jcp:1999,Kapral:jcp2000,Kapral:2008,Gompper2009,Ihle:PRE:2001}, where solvent molecules are treated as point particles of mass $m$. Their dynamics consists of streaming followed by collision in alternating steps. In the streaming step, solvent particles move ballistically with their respective velocities, and their positions are updated according to the rule ${\bf r}_{i}(t+h) = {\bf r}_{i}(t) + h {\bf v}_{i}(t)$, where $h$ is the MPC collision time-step and $i$ is the index of a solvent molecule. In the collision step, solvent molecules are sorted into cubic cells of side $a$, and their velocities relative to the centre-of-mass velocity of the cell are rotated around a randomly oriented axis by an angle $\alpha$. The particles' velocities are updated as \begin{equation} {\bf v}_{i}(t+h) = {\bf v}_{cm}(t) + \Omega(\alpha)({\bf v}_{i}(t) - {\bf v}_{cm}), \label{collision} \end{equation} where ${\bf v}_{cm}$ is the centre-of-mass velocity of the cell containing the $i^{th}$ particle, and $\Omega(\alpha)$ stands for the rotation operator. During the collision, all solvent molecules within a cell interact with each other in a coarse-grained fashion by colliding at the same time, which ensures momentum conservation. This builds up the long-range spatial and temporal correlations among the solvent molecules that give rise to hydrodynamic interactions. {\color{black} A solvent molecule's velocity is reversed by the bounce-back rule, ${\bf v}_i=-{\bf v}_i$\cite{lamura2001multi,lamura2002numerical,Singh_2014_JCP}, when it collides with a wall during the streaming step. This imposes no-slip boundary conditions on both walls}. The interaction of the MPC fluid and filament monomers is incorporated during the collision step.
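A minimal sketch of the collision rule of Eq.~\eqref{collision} is given below. Scalar cell labels and an independent random rotation axis per cell are assumptions of this illustration, and the thermostat, the random grid shift, and the monomer coupling are omitted:

```python
import numpy as np

def srd_collision(v, cell, alpha=np.radians(130.0), rng=None):
    """One MPC/SRD collision: rotate each particle's velocity relative to its
    cell's centre-of-mass velocity by alpha about a random axis (one per cell)."""
    rng = np.random.default_rng() if rng is None else rng
    v_new = v.copy()
    ca, sa = np.cos(alpha), np.sin(alpha)
    for c in np.unique(cell):
        m = cell == c
        v_cm = v[m].mean(axis=0)               # centre-of-mass velocity of the cell
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)                 # random rotation axis
        dv = v[m] - v_cm
        # Rodrigues rotation of the relative velocities
        dv_rot = dv * ca + np.cross(n, dv) * sa + np.outer(dv @ n, n) * (1.0 - ca)
        v_new[m] = v_cm + dv_rot
    return v_new

rng = np.random.default_rng(1)
v = rng.normal(size=(60, 3))
cell = rng.integers(0, 6, size=60)
v_new = srd_collision(v, cell, rng=rng)
```

Since the relative velocities sum to zero in each cell and the rotation is orthogonal, both the momentum of every cell and the total kinetic energy are conserved exactly by this step.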
The momenta of the monomers are included in the calculation of the centre-of-mass velocity of a cell during the collision step\cite{Malevanets:EPL:2000,Ripoll:EPL:2004}. Furthermore, the active force on a filament adds momentum along the filament's polarity direction, which destroys local momentum conservation. To ensure local momentum conservation, the same force is imposed on the solvent particles in the opposite direction, in only those cells where monomers are present, during each collision step\cite{Elgeti:2009}. The presence of propulsive forces and flow continuously increases the energy of the system, which may raise the temperature of the fluid. A cell-level canonical thermostat, known as Maxwell-Boltzmann scaling\cite{CCHuang2010,CCHuang2015}, is incorporated to remove the excess energy and maintain the desired temperature of the system. A random shift of the collision grid is also performed at every step to avoid the violation of Galilean invariance\cite{Ihle:PRE:2001,kroll:pre2003}. A linear fluid-velocity profile ($v_{x} = \dot{\gamma} y$) along the x-axis is generated by moving the wall at $y=L_y$ (top wall in Fig.~\ref{Fig:model}) at constant speed ($v_{x} = \dot{\gamma} L_y$). {\color{black} This gives a flow profile as shown in Fig.~\ref{Fig:model}, with a net flow along the x-direction. } The equations of motion of the solvent molecules are modified in the vicinity of the walls\cite{Winkler:jcp:2009,Whitmer:2010,Singh_2014_JCP}: the velocity of a solvent particle relative to the surface is reversed with the bounce-back rule in the streaming step when it touches a wall, which again ensures no-slip boundary conditions on the walls. Through this solvent-wall interaction, the moving wall transfers momentum to the solvent molecules, which drives a linear velocity profile on average along the x-direction.
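The moving-wall bounce-back rule amounts to reversing the velocity relative to the surface; a minimal sketch (the function name is ours):

```python
import numpy as np

def bounce_back(v, u_wall):
    """No-slip bounce-back at a wall moving with velocity u_wall:
    the velocity relative to the surface is reversed,
    v' - u_wall = -(v - u_wall), i.e. v' = 2*u_wall - v."""
    return 2.0 * np.asarray(u_wall) - np.asarray(v)
```

For a resting wall this reduces to ${\bf v}_i=-{\bf v}_i$; for the top wall moving with $v_x=\dot{\gamma} L_y$ it injects the momentum that drives the linear profile.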
In addition, virtual particles, with velocities drawn from a Maxwell-Boltzmann distribution with mean equal to the wall velocity, are added in the partially filled cells\cite{CCHuang2010,CCHuang2015}. \subsection{Simulation Parameters} All physical parameters are scaled in terms of the MPC cell length $a$, the mass of a fluid particle $m$, the thermal energy $k_{B}T$, and the time $\tau=\sqrt{ma^2/k_BT}$. The size of the simulation box is taken as ($L_x=80, L_y= 50, L_z= 25$), with periodic boundary conditions in the x and z directions and solid walls in the y-direction at $0$ and $L_y$. Other parameters are chosen as spring constant $k_{s}=1000k_{B}T/l_{0}^{2}$, stiffness parameter $\kappa=5000k_{B}T/l_{0}^{2}$, $l_{0}/a =\sigma/a=1$, and $\epsilon/k_{B}T=1$. Unless explicitly mentioned, $N=10$ (number of monomers in a filament) and $N_p=50$ (number of filaments), which results in a dilute monomer concentration $\rho_m=0.005a^{-3}$ and a rod number density $\rho_p=0.0005a^{-3}$. This is well below the isotropic-nematic transition\cite{prost1995physics}. The velocity-Verlet algorithm is used for the integration of the equations of motion of the active polymers, with an integration time-step of $h_m=0.01\tau$. The strength of the active force is measured in terms of a dimensionless quantity called the P{\'e}clet number, defined as $Pe = \frac{f_{a} \sigma}{k_{B} T}$. For the MPC fluid, the collision time-step is taken as $h = 0.05\tau$, the rotation angle $\alpha=130^{\circ}$, and the average number of fluid particles per cell $\langle N_{c}\rangle=10$. These parameters correspond to a zero-shear fluid viscosity $\eta_s \approx 17 \sqrt{m k_{B}T}/a^{2}$. The strength of the flow is expressed in terms of the dimensionless Weissenberg number $Wi$, defined as $Wi = \dot{\gamma} \tau_r$, where $\tau_r$ is the polymer's relaxation time.
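The quoted densities follow directly from the box size and particle counts; as a quick arithmetic check:

```python
# Number densities implied by the quoted box size and filament counts
Lx, Ly, Lz = 80, 50, 25      # box dimensions in units of the cell size a
N, Np = 10, 50               # monomers per filament, number of filaments
V = Lx * Ly * Lz             # box volume, 100000 a^3
rho_m = N * Np / V           # monomer number density -> 0.005 a^-3
rho_p = Np / V               # filament number density -> 0.0005 a^-3
```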
Here, $Wi$ is a measure of the flow strength relative to thermal fluctuations: for $Wi< 1$, thermal fluctuations dominate, whereas for $Wi> 1$, the flow plays a significant role. All simulations are performed in the range $0 \le Wi \le 150$ and $0\le Pe \le 5$. For each data set, 50 independent runs have been generated for better statistics. All simulations correspond to Reynolds numbers $Re<0.1$. \section{Results} In our model, an active filament moves along one of its ends; thus it is a polar filament. It can align along the surfaces, which leads to adsorption on the surfaces and depletion in the bulk\cite{Elgeti:2013,Elgeti2016}. A polar active filament may preferentially align parallel or anti-parallel to the flow direction. Here, we unravel the influence of linear shear-flow on a dilute suspension of active filaments in a confined channel, in particular on the surface adsorption, the average local orientation profile, the population of upstream swimmers near the wall, and the importance of hydrodynamic interactions for the swimmers. \subsection{Surface accumulation} The distribution of passive filaments at $Wi=0$ is shown in Fig.~\ref{Fig:accumulation}-a as a function of the distance from the bottom wall $y'=y/L_y$, where the walls are at $y'=0$ (bottom) and $y'=1$ (top). The probability distribution $P(y')$ is uniform due to translational entropy, which favours homogeneity. Near the surfaces ($|L_y-y|\le l/2$), it reflects depletion due to steric repulsion of the filaments from the wall. As expected, $P(y')$ of the active filaments increases near the surfaces with the speed of the swimmers (see Fig.~\ref{Fig:accumulation}-a). The normalised distribution of swimmers near the surfaces reflects a large inhomogeneity, especially in the limit of large $Pe$. This is consistent with previous findings\cite{Elgeti:2013,Elgeti2016,Chilukuri:jpcm:2014}.
The distribution function has two identical peaks near the walls in the limit of large P{\'e}clet number, which reflects the adsorption of swimmers on the solid boundaries. \begin{figure} \includegraphics*[width=\linewidth]{accum_wi0} \includegraphics*[width=\linewidth]{accum_dist} \caption{The probability distribution of active filaments as a function of channel height ($y'=y/L_y$): a) for $Wi=0$ at various $Pe$, and b) for various flow-rates at $Pe=1$. } \label{Fig:accumulation} \end{figure} The motility-induced adsorption is understood in terms of the combination of active force, steric repulsion, and viscous drag. A large active force results in longer persistent motion and rapid movement throughout the channel, which causes filaments to reach the surfaces within a shorter time. Once near a surface, they align and move along it. {\color{black} The relatively larger drag perpendicular to the filament's axis causes it to reside near the wall for a longer time than in the bulk, which results in a higher probability density with increasing $Pe$. This will be elucidated in terms of the short-time diffusion across the channel. However, inhomogeneous drag is not the sole reason for surface adsorption: the width of the channel, the length of the filament, and its rotational diffusion coefficient also affect the adsorption significantly\cite{li2009accumulation,elgeti2013wall}. } \begin{figure} \includegraphics*[width=\linewidth]{surface_excess} \caption{Surface excess as a function of flow-rate $Wi$ for various $Pe$ as indicated.} \label{Fig:surface_excess} \end{figure} The flow disrupts the accumulation of filaments on the surfaces, which is reflected in the probability density in Fig.~\ref{Fig:accumulation}-b for a range of $Wi$ at a fixed $Pe=1$. The surface adsorption is maximal for $Wi=0$ (without flow) at a given $Pe$.
The peak height slowly diminishes with increasing fluid-flow for $Wi>1$ and disappears at nearly $Wi\sim 30$. In the limit of $Wi>100$, the density of rods near the surface is very small compared to the bulk density. Interestingly, this is quite similar to the equilibrium distribution (see Fig.~\ref{Fig:accumulation}-a for $Pe=0$ at $Wi=0$). The flow-driven desorption can be justified in terms of the orientational alignment of the polar filaments, their tumbling motion, and the suppression of diffusion across the channel. An active filament aligns along the flow direction, which breaks the rotational symmetry of the swimmers, so that fewer filaments point towards a wall. Consequently, the probability of a filament residing near the wall decreases. Thus, the accumulation close to the wall is strongly influenced by the shear-induced orientational alignment, and the flow can act as a control variable for the adsorption of polar filaments at the wall. To quantify the depletion, we estimate the surface excess from the probability distribution of the filaments as\cite{Elgeti:2009} \begin{equation} s = \int_{0}^{l/2}[P(y)-P_{b}]dy, \label{Eq:surf_ex} \end{equation} where $s$ is a measure of the excess surface density near the wall relative to the bulk, and $P_b$ is the bulk value of the distribution. The definition of $s$ is constructed such that it becomes unity for full adsorption, whereas for a uniform distribution it is zero. A smaller density relative to the bulk results in a negative surface excess, as for the passive filaments. For slow swimming speeds $Pe<1$, the probability density at the wall is smaller than in the bulk; thus the surface excess is negative. It increases with $Pe$ and becomes positive for $Pe>1$ (see Fig.~\ref{Fig:surface_excess}). The variation of $s$ with flow strength $Wi$ is negligible in the limit of $Pe<1$, whereas there is a significant change in $s$ for larger P{\'e}clet numbers $Pe>1$, as Fig.~\ref{Fig:surface_excess} illustrates.
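In practice $s$ is obtained from a binned density profile; a discrete sketch of Eq.~\ref{Eq:surf_ex} (rectangle rule on a uniform grid; the names are ours):

```python
import numpy as np

def surface_excess(P, y, P_b, l):
    """Discrete Eq. (surf_ex): s = integral_0^{l/2} [P(y) - P_b] dy,
    evaluated with a rectangle rule on a uniform grid of bin edges y."""
    dy = y[1] - y[0]                 # uniform bin width assumed
    near_wall = y < l / 2.0
    return np.sum(P[near_wall] - P_b) * dy
```

For a uniform profile $P(y)=P_b$ the excess vanishes, while an enhancement near the wall gives $s>0$.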
The change in $s$ as a function of the {\color{black} Weissenberg} number grows with $Pe$. At a sufficiently high value of $Wi$, the surface excess becomes negative, suggesting the weakening of the localisation of filaments at the walls. The desorption of filaments at large $Wi$ is attributed to the dominance of shear over active forces. The qualitative behavior of the surface adsorption can also be assessed from scaling arguments. In this approach, the trajectory of a swimmer is treated as a semi-flexible polymer\cite{Elgeti:2009} with a persistence length of the trajectory defined as $\xi_{p} = v/D_{r}$, where $v$ and $D_{r}$ are the average speed of the filament and its rotational diffusion coefficient, respectively. Furthermore, under linear shear-flow, the shear-enhanced rotational diffusion can be approximated from the tumbling time as\cite{winkler2006semiflexible,Huang:Macromol2010} \begin{equation} D_{r} \approx \frac{k_{B}T}{\eta_s l^{3}}(1+c Wi^{2/3}), \end{equation} where $c$ is a numerical constant. We present the analysis in two extreme limits. In the limit of $Wi<<1$, $D_{r}$ is unperturbed by the fluid-flow, whereas for $Wi>>1$, $D_{r}$ varies as $Wi^{2/3}$. The probability of finding a filament within a distance $l/2$ of a wall is given as $p = \tau_{w}/(\tau_{w}+\tau_{b}) = 1/(1+\tau_{b}/\tau_{w})$, with $\tau_{w}$ the residence time of a filament at the wall and $\tau_{b}$ the residence time in the bulk. In order to estimate $\tau_w$, let $x_f=vt$ be the distance travelled along the surface and $y$ the normal distance of the filament from the closest wall; then $y=x_f^{3/2}/\xi_{p}^{1/2}$, and the time at which $y=l/2$ defines $t=\tau_{w}$. Following the same approach as derived in Ref.~\cite{Elgeti:2009} for active filaments, we extend the scaling analysis to the case of linear shear-flow.
Therefore, the residence time $\tau_{w}$ can be expressed as, \begin{equation} \tau_{w} \sim \frac{1}{v} \Big(\frac{l^{2}\xi_{p}}{4}\Big)^{1/3} = \Big(\frac{l^{2}v^{-2}}{4D_{r}}\Big)^{1/3} . \label{Eq:tau_w} \end{equation} Here, the persistence length $\xi_{p}>L_y/2$ for all $Pe$; thus we approximate $\tau_{b}$ in the diffusive regime as\cite{Elgeti:2009}, \begin{equation} \tau_{b} \sim \frac{L_{y}}{v_{b}} \Big(\frac{l}{\xi_{p}}\Big)^{1/3} = \frac{L_{y}}{v_{b}} \Big(\frac{lD_{r}}{v}\Big)^{1/3}, \label{Eq:tau_b} \end{equation} with $v=Pe/\gamma+a_{2}Wi$ and $v_{b}=Pe/\gamma$, where $\gamma$ is the friction coefficient. Combining Eqs.~\ref{Eq:tau_w} and \ref{Eq:tau_b} gives \begin{equation} \tau_{w}/\tau_{b} \sim \frac{v_{b}}{L_{y}} \Big( \frac{lv^{-1}}{4D_{r}^{2}} \Big)^{1/3}. \label{Eq:ratio} \end{equation} In the limit of $Wi<<1$, the ratio varies as $\tau_{w}/\tau_{b} \sim v^{2/3}$, which recovers the result of {Elgeti} {\it et al.}\cite{Elgeti:2009}. In the limit of $Wi>>1$, $\tau_{w}/\tau_{b} \sim Wi^{-\beta}$ with $\beta=7/9$; hence, since $p \approx \tau_{w}/\tau_{b}$ for $\tau_{b} \gg \tau_{w}$, the probability of finding a filament near the wall also scales as $p \sim Wi^{-\beta}$. One can approximate the surface excess in the large-shear limit as, \begin{equation} s = \frac{p L_y - l}{L_y-l} \sim \frac{a_{0}Wi^{-\beta} - l}{L_{y}-l}. \label{Eq:surf_final} \end{equation} Furthermore, $Wi^{-\beta} \to 0$ for $Wi >>1$, which leads to the saturation of the surface excess at $s=-l/(L_{y}-l)$. In the intermediate regime $Wi>1$, $s$ decreases as a power law $s\sim Wi^{-\beta}$ with an exponent $\beta \sim 0.8$, as displayed in Fig.~\ref{Fig:surface_excess}. \subsection{Residence time and Diffusion } \begin{figure} \includegraphics*[width=\linewidth]{ratio_wall_bulk} \includegraphics*[width=\linewidth]{msd_pe2} \caption{a) Ratio of the residence time of the active filament near the surface to that in the bulk as a function of $Wi$ for various $Pe$. The inset shows the scaled curve $(\tau_b/\tau_w )Pe^{-0.8}$ in the range $Pe\ge 1$.
b) The MSD of the centre-of-mass of the filament for various $Wi$ at a given $Pe=2$. The inset shows the MSD as a function of time for various distances from the wall.} \label{Fig:msd} \end{figure} To clarify the effect of shear on the surface adsorption, we estimate the residence time of the filament near the wall and in the bulk. The residence time $\tau_w$ is defined as the time spent by a filament in the neighborhood of a wall once it has aligned along it. Similarly, the average time spent in the bulk is defined as twice the time taken by a filament to reach either of the surfaces from the centre. The ratio of the residence times is displayed in Fig.~\ref{Fig:msd}-a as a function of $Wi$ for various $Pe$. The qualitative behavior of $\tau_w/\tau_b$ is similar to that of the surface excess (see Fig.~\ref{Fig:surface_excess}): the ratio decreases with flow and increases with $Pe$. Our simulation results follow the relation $\tau_{w}/\tau_{b} \sim Pe^{0.8}$ for small $Wi$. The exponent is slightly larger than the scaling prediction obtained in Eq.~\ref{Eq:ratio}. The influence of shear also appears here as $\tau_{w}/\tau_{b} \sim Wi^{-\beta}$ with $\beta \approx 0.8$ in the intermediate range of $Wi$ for $Pe>1$; the same exponent is obtained from the scaling behavior in Eq.~\ref{Eq:ratio}. Weak flow has a negligible influence on $\tau_w/\tau_b$; on the other hand, stronger flow (for $Pe>1$) can lead to a large change in $\tau_w/\tau_b$ (see Fig.~\ref{Fig:msd}-a). The influence of the active forces is weak for $Pe<1$; therefore shear has a negligible influence on $\tau_w/\tau_b$, and the adsorption is independent of the shear flow. This also establishes the combined influence of activity and external flow on the accumulation of swimmers. Furthermore, in the limit of $Wi>>1$, $\tau_w/\tau_b$ saturates to a slightly smaller value than in the passive case. This is associated with the alignment of the filaments along the flow, so that transport occurs via transverse diffusion across the channel.
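For reference, the exponent $\beta=7/9$ quoted for Eq.~\ref{Eq:ratio} follows by inserting the large-$Wi$ scalings $v\sim Wi$ and the tumbling-enhanced rotational diffusion $D_r\sim Wi^{2/3}$ (a sketch of the arithmetic, with the $Wi$-independent prefactors dropped):

```latex
\frac{\tau_w}{\tau_b}
\;\propto\; v^{-1/3}\, D_r^{-2/3}
\;\sim\; Wi^{-1/3}\,\bigl(Wi^{2/3}\bigr)^{-2/3}
\;=\; Wi^{-1/3-4/9}
\;=\; Wi^{-7/9}.
```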
Hence, $\tau_w/\tau_b$ is constant in this regime, irrespective of flow and propulsive forces. We quantify the influence of surface and flow on the short-time diffusion across the channel. The mean-square displacement (MSD) of the centre-of-mass of the filament along the gradient direction at $Pe=2$ is displayed in Fig.~\ref{Fig:msd}-b for various $Wi$. For $Wi=0$, the MSD shows a super-diffusive regime at short times, followed by saturation in the long-time limit. The saturation occurs due to the presence of the walls, at a length scale of nearly half the channel width. A flow strength of $Wi>1$ aligns the filaments along the x-axis; thus the super-diffusive regime appears at relatively shorter times and over a narrower window. The diffusive behavior appears at shorter time scales, as shown in Fig.~\ref{Fig:msd}-b, and the diffusion across the channel becomes much slower with increasing $Wi$. The decrease in the short-time diffusion across the channel at sufficiently high shear-rates explains the lower density of the filaments on the surfaces: the flow aligns them and thus suppresses the ballistic motion of the filaments in the gradient direction, consequently leading to a reduction of the density on both surfaces. Hydrodynamic interactions influence the diffusion near the surfaces. This is assessed more closely in terms of the short-time MSD across the channel as a function of the separation from the surface. The inset of Fig.~\ref{Fig:msd}-b displays the MSD for various distances from the wall at $Pe=0$ and $Wi=0$. The short-time MSD is nearly independent of position in the bulk. Interestingly, it shows a strong variation in the vicinity of the surface compared to the bulk. The decrease in the short-time diffusion occurs due to alignment along the surface, which forces the filaments to diffuse perpendicular to their axes. This incurs a higher drag and leads to a smaller MSD. The inhomogeneous drag and hydrodynamic interactions contribute to slow translational and rotational diffusion.
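The gradient-direction MSD used here can be computed from the centre-of-mass trajectory as follows (a minimal sketch; the names are ours):

```python
import numpy as np

def msd_1d(y_cm, max_lag):
    """Time-averaged MSD of the centre of mass along the gradient (y)
    direction, <[y(t+dt) - y(t)]^2>, for lags dt = 1..max_lag frames."""
    return np.array([np.mean((y_cm[lag:] - y_cm[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])
```

Ballistic motion gives MSD $\propto \Delta t^2$, free diffusion $\propto \Delta t$, and confinement by the walls produces the long-time saturation seen in Fig.~\ref{Fig:msd}-b.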
Despite the decrease in the absolute value of the residence time\cite{Elgeti:2009}, the surface adsorption grows with the propulsive force. This can be understood as follows: if a filament's polarity in the bulk points upwards it may reach the top wall, and if it points downwards it may end up at the bottom wall. The time required to reach a surface decreases drastically with the propulsive force, which enhances the probability of collision with the walls. This is clearly demonstrated by the ratio $\tau_w/\tau_b$ in Fig.~\ref{Fig:msd}-a. Therefore, the adsorption on the surfaces increases even though the residence time $\tau_w$ decreases with $Pe$. The decrease in the surface excess with flow is likewise reflected in the MSD across the channel in Fig.~\ref{Fig:msd}-b. We conclude that the flow diminishes the persistent motion across the channel, which can be enhanced by a larger propulsive force. \begin{figure} \includegraphics[width=\columnwidth]{alignment_inset} \includegraphics[width=\linewidth]{orientpe05} \caption{a) Orientation moment $\chi_p$ in the vicinity of the wall; it follows a power law $Wi^{-1/5}$ (dashed line) for $Pe>1$, while the solid line shows $Wi^{-1/3}$ for $Pe=0$. The inset shows a universal curve for $\chi_p$ in the bulk for all $Pe$; the solid line shows a power-law variation with exponent $1/3$.
b) Average orientation profile of active filaments as a function of the channel height $(y'=y/L_y)$ from the bottom wall for $Pe=0.5$ and various $Wi$.} \label{Fig:orient} \end{figure} \begin{figure*} \includegraphics*[width=0.2\linewidth]{angle_dist_g0_pe3}% \includegraphics*[width=0.2\linewidth]{angle_dist_f30g001}% \includegraphics*[width=0.2\linewidth]{angle_dist_f30g0025}% \includegraphics*[width=0.2\linewidth]{angle_dist_f30g005}% \includegraphics*[width=0.2\linewidth]{angle_dist_f30g05}% \caption{The distribution of the angle $\phi$ made by the filaments as a function of channel width for a) $Wi=0$, b) $Wi=3$, c) $Wi=7.5$, d) $Wi=15$, and e) $Wi=150$ at $Pe=3$.} \label{Fig:orient:heat} \end{figure*} \subsection{Flow Induced Alignment} We now quantify the flow-induced orientation of active filaments, particularly near the surfaces, which enables a characterization of the swimming direction. For this, we compute two different angles: the first is the angle between the flow direction and the projection of the unit vector $\textbf{p}$ (tail-to-head, as shown in Fig.~\ref{Fig:model}) on the flow-gradient plane, and the second is the angle between $\textbf{p}$ and the flow direction. The former is the azimuthal angle used to characterise the shear-induced alignment of a polymer\cite{Huang:Macromol2010,Chien-Cheng:2012}, and the latter is used to identify upstream and downstream swimming behavior\cite{Nili:rsc:2017,Chilukuri:jpcm:2014}. The variation of the orientational moment along the flow direction, defined as $\chi_p=1-\langle { p}_x {p}_x\rangle$, as a function of $Wi$ and $Pe$ in the bulk (inset) and close to the wall is displayed in Fig.~\ref{Fig:orient}-a. Swimmers are oriented randomly in the bulk at low $Wi$ and align along the flow direction at higher $Wi$. The variation of the alignment in the bulk shows a power-law behavior $Wi^{-\delta}$ with $\delta=1/3$\cite{park2009inhomogeneous,rahnama1995effect,chen1996rheology,leal1971effect}.
Note that the scaling exponent is nearly independent of the propulsive force. A qualitative picture is indicated by the solid line in the inset of Fig.~\ref{Fig:orient}-a. In the neighborhood of the solid boundaries, swimmers are more aligned along the flow due to the propulsive and excluded-volume forces. The angle decreases with increasing $Pe$; thus $\chi_p$ shows a slower variation with $Wi$ for large $Pe\ge 1$. This is also reflected in the scaling exponent, $\delta\sim 1/5$ for all $Pe \ge 1$. Interestingly, the angular alignment near the wall also shows a power law in $Pe$: a universal curve for all $Pe \ge 1$ is obtained by scaling with $Pe^{1/3}$ (see Fig.~\ref{Fig:orient}-a). This suggests that the orientational moment decreases as $\chi_p \sim Pe^{-1/3}$ for all $Wi$. In the large-shear limit, it varies as $\chi_p \sim Wi^{-1/5}$ for $Pe\ge 1$. We now compute the angle between the vector $\bf p$ (tail-to-head) and the flow direction, as shown in Fig.~\ref{Fig:model}. In our convention, a filament whose orientation angle lies in the range $-\pi/2 \le \phi\le \pi/2$ is said to be a downstream swimmer; similarly, if the angle lies between $\pi/2 \le \phi \le 3\pi/2$, it is referred to as an upstream swimmer. The profile of $\langle\cos(\phi(y'))\rangle$ across the channel is shown for various $Wi$ at $Pe=0.5$ (see Fig.~\ref{Fig:orient}-b). In the absence of flow ($Wi=0$), a filament does not have any preferred direction; thus $\langle\cos(\phi(y'))\rangle\approx 0$ throughout the channel. This indicates that the swimmers move symmetrically in all directions for $Wi=0$. For non-zero $Wi$, the symmetry is broken and $\langle\cos(\phi(y'))\rangle$ displays a non-zero value. A positive value corresponds to downstream swimming in the upper half ($y'\geq 0.5$) and a negative value to upstream swimming in the lower half ($y'\leq 0.5$). The average alignment of the system grows with the flow, as Fig.~\ref{Fig:orient}-b exhibits a peak near the top wall and a dip near the bottom wall.
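The upstream/downstream convention above, together with the orientational moment, can be expressed compactly (a sketch; the function names are ours):

```python
import numpy as np

def is_downstream(phi):
    """Downstream swimmer if -pi/2 <= phi <= pi/2, i.e. cos(phi) >= 0;
    upstream otherwise (pi/2 < phi < 3*pi/2)."""
    return np.cos(phi) >= 0.0

def orientational_moment(p_x):
    """Orientational moment chi_p = 1 - <p_x p_x> along the flow:
    zero for perfect alignment with the flow axis."""
    return 1.0 - np.mean(np.asarray(p_x) ** 2)
```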
The peak height grows with $Wi$; after reaching a maximum value it starts to decline (see Fig.~\ref{Fig:orient}-b for $Wi>30$ at $Pe=0.5$). In addition to the decrease in the peak height, the width of the distribution also becomes narrower. This illustrates the localisation of the upstream and downstream swimming in the vicinity of the surfaces. {\color{black} On the bottom surface, filaments move against the flow; thus the average net filament motion there is upstream in the intermediate shear range. However, for $Wi>>1$, the average net motion becomes downstream. On the top wall, the net motion is always downstream.} The average preference of the swimmers diminishes away from the surfaces. This can be visualised in terms of the distribution of the angle $\phi$ in the shear-gradient plane. Figure~\ref{Fig:orient:heat} displays a colormap of the angle distribution $P(\phi)$ along the channel $y'=y/L_y$ for various values of $Wi$ at $Pe=3$. For $Wi=0$, it exhibits two symmetric peaks at $\phi=0$ and $\pi$. With flow, one of the peaks diminishes at each wall, leaving only a single dominant peak, as displayed in Fig.~\ref{Fig:orient:heat}-c and d. In the limit of large $Wi$, both halves exhibit a symmetric distribution around $0$ and $\pi$; thus nearly equal concentrations of upstream and downstream swimmers are present. \begin{figure} \includegraphics*[width=\linewidth]{bimodal_topwall_pe05} \includegraphics*[width=\linewidth]{draw_orient} \caption{a) The distribution of $\phi$ near the top wall $y'=1$ for $Pe=0.5$ and various $Wi=0,0.75,1.5,7.5,30,75$ and $150$. b) Schematics of the flow-induced alignment of filaments in the top and bottom halves of the channel.} \label{Fig:cosphi} \end{figure} We now turn our attention to the flow-driven angular distribution of the filaments in the close vicinity of the walls ($\Delta y=l/2$).
As expected, the angle distribution $P(\phi)$ near the top wall at $Wi=0$ is symmetric about $\pi/2$ and consists of two identical peaks centred at $\phi=0$ and $\phi=\pi$ (see Fig.~\ref{Fig:cosphi}-a). The figure displays the distribution of $\phi$ at a fixed $Pe=0.5$ for several values of $Wi$. The peak height at $\phi=0$ (major) increases and that at $\phi=\pi$ (minor) decreases with $Wi$. The peak height at $\pi$ shows a non-monotonic behavior: a decrease followed by an increase in the limit of large $Wi$. Summarising the above results: in the P{\'e}clet-number-dominated regime, the active filaments align along (against) the flow direction at the top (bottom) wall, assisted by the torque due to the shear force, as shown in Fig.~\ref{Fig:cosphi}-b. However, in the shear-dominated regime, i.e., $Wi >>1$, filaments also align against (along) the flow at the top (bottom) surface. \subsection{Population of filaments at surface} We address here the populations of upstream and downstream swimmers, especially near the surfaces. The coupling between the torque (due to flow) and the propulsive force makes downstream swimmers the majority population and upstream swimmers the minority population near the top wall (see Fig.~\ref{Fig:cosphi}-b). Figure~\ref{near_wall}-a displays the fraction of the majority population ($\rho_{mj}= N_{mj}/N_p^w$) with respect to the total population $N_p^w$ at $y=L_y$; here $N_{mj}$ is the number of filaments aligned along the flow. The fraction $\rho_{mj}$ grows in the small-shear limit (for all $Pe$), followed by a reduction for $Wi/Pe>6$, and it continues to decrease for $Wi/Pe>>6$. The flow assists the alignment; therefore the majority population increases in the limit of $Wi/Pe<6$. At large shear, the tumbling motion leads to an increase in the effective rotational diffusion; as a result, the alignment against the flow increases. This results in a decline of the majority population and an increase of the minority population.
\begin{figure} \includegraphics[width=\linewidth]{topwall_ratio} \includegraphics[width=1.1\linewidth]{phase_plot} \caption{a) Number density of the majority population parallel to the flow at the top wall. The inset shows the fraction of the minority population relative to the total number of rods at the top wall. A solid line shows the onset of the decrease in the majority population. b) Unimodal and bimodal phases in the parameter space of $Pe$ and $Wi$. The green shaded area shows the unimodal phase, and the white area the bimodal phase.} \label{near_wall} \end{figure} The magnitude of the shear force required to decrease the majority population is substantially larger for higher activity strengths, as seen from the location of the peak, which shifts towards higher $Wi$ with $Pe$ (see Fig.~\ref{near_wall}-a). The minority fraction, $\rho_{mn}= N_{mn}/N_p^w$, at the top wall displays the corresponding trend, as shown in the inset of Fig.~\ref{near_wall}-a: here the decrease in population is succeeded by an increase for all $Pe$. The behavior is identical on both surfaces. The difference in the preferred orientation at the top and bottom surfaces is due to the clockwise torque on the filament, as Fig.~\ref{Fig:cosphi}-b illustrates, which favours downstream swimming at the top wall and upstream swimming at the bottom wall. {\color{black} The critical value of the shear-rate at which the onset of the increase in the minority population at the wall occurs is indicated by a solid line in Fig.~\ref{near_wall}. This is known in the literature as the onset of population splitting.} {\color{black} Based on the above results, we have identified a phase-diagram in the $Pe$ and $Wi$ parameter space. It can be categorized into three regimes. i) The active-force-dominated regime, i.e., small $Wi$, where the swimming speed $Pe$ plays the sole role. Here the populations of upstream and downstream swimmers are nearly the same at the surfaces; thus we call it the bimodal phase.
It is displayed below the green shaded area in Fig.~\ref{near_wall}-b. ii) The intermediate regime, where shear and active forces compete with each other, leading to dominance of the majority population over the minority ($Pe>0.5$ and $Wi/Pe \ge 6$). We define the unimodal phase as that in which the majority population exceeds $80\%$ of the total population at the wall. The unimodal phase is displayed as the green shaded region in Fig.~\ref{near_wall}-b. Note that there is still a small minority population at the wall, but it is negligible, as Fig.~\ref{Fig:orient:heat}-c and d illustrate. iii) At sufficiently high Weissenberg number, the flow dominates over the activity and the effect of self-propulsion is negligible. Hence, the filaments behave like passive ones, which leads to equal populations of upstream and downstream filaments at the walls for $Wi/Pe>>6$. The transition from the unimodal to the bimodal phase then reappears, as shown in Fig.~\ref{near_wall}-b above the green shaded region. If the minority population exceeds $20\%$, we call it the bimodal phase. The color bar in Fig.~\ref{near_wall} shows the ratio of the minority to the majority population. In the diffusive regime $Pe<<1$, the system is always in the bimodal phase. } \begin{figure} \includegraphics*[width=\linewidth]{excess_comp} \caption{Comparison of the surface excess of the filaments with (squares) and without (bullets) HI at $Pe=1$. The inset shows the distribution of rods as a function of the distance from the wall at $Wi=0$ for $Pe=1$.} \label{Fig:HI-WHI} \end{figure} \subsection{Effect of hydrodynamics} In this section, we assess the role of hydrodynamic correlations in the accumulation of confined active filaments. We have performed simulations with random MPC, which provides a similar background solvent. In this method, the long-range correlations among the solvent particles are absent; however, it exhibits the same transport properties as the MPC fluid\cite{kikuchi2002polymer,kikuchi2003transport,ripoll2007hydrodynamic}.
The comparison of the surface adsorption with and without HI is shown in Fig.~\ref{Fig:HI-WHI} for the same $Pe$, which suggests that the adsorption is enhanced by hydrodynamics in the weak-shear limit. The inset illustrates the distribution of swimmers as a function of the distance from the wall, which clearly shows a substantial difference in the accumulation with and without HI. The role of the flow on the surface excess is also shown in Fig.~\ref{Fig:HI-WHI}; the surface excess is higher with hydrodynamics for small $Wi$. The enhancement of the adsorption with HI occurs due to the increase in the inhomogeneous drag force and the interaction of the solvent with the wall\cite{padding2010translational,lin2000direct}. Another important aspect is that the difference in surface excess diminishes at intermediate shear-rates. In this limit, transport across the channel is dominated by transverse diffusive motion; with HI, the transverse diffusivity of short filaments is smaller, so a swimmer moves across the channel relatively slowly, which gives a smaller surface excess. \section{Summary and Conclusion} In summary, we have presented the dynamics of a dilute suspension of active filaments confined between parallel walls under linear shear flow, using hybrid MD-MPC simulations. Active filaments display a strong tendency to accumulate at the walls, and shear weakens this accumulation. The adsorption and desorption are analysed in terms of the alignment across the channel, the anisotropic friction of the filaments, the diffusion across the channel, collisions with the walls, and the residence time. The filaments are aligned near the surfaces, and the anisotropic nature of the friction\cite{elgeti2015run,doi1988theory} causes slow short-time translational and rotational diffusion across the channel. These mechanisms result in a relatively larger residence time near the wall, and hence a higher adsorption without flow.
Similarly, shear-induced angular alignment \cite{Huang:Macromol2010,Singh_2014_JCP,singh2013dynamical} suppresses the ballistic motion across the channel, leading to a smaller concentration. The surface excess $s$ follows a power-law behavior, $s\sim Wi^{-\beta}$ with $\beta\approx 0.8$, in the intermediate flow regime. The simulation results are also validated by scaling arguments, which predict a nearly identical exponent, $\beta=7/9$. The adsorption is strongly suppressed by flow in the limit $Wi\gg 1$, and the density profile of the active filaments becomes similar to that of a passive system with weak adsorption at the surfaces. The adsorption and desorption are also quantified by the ratio of residence times near the wall and in the bulk, $\tau_w/\tau_b$, which exhibits a similar dependence. The suppression of angular fluctuations causes a slow decrease in the bulk residence time; thus the adsorption decreases with shear. The angular alignment of active filaments along the flow shows a non-monotonic distribution profile at the walls with $Wi$. The increase in the angular alignment of upstream (downstream) swimmers at the bottom (top) wall, at $\phi=0$ ($\phi=\pi$), is followed by the onset of a decrease beyond a critical shear rate. In addition, the fraction of the majority (upstream) population at the top wall also behaves non-monotonically with $Wi$. The local angular distributions near the surfaces lead to a power-law variation $\chi_{p} \sim Wi^{-\delta}$, with an exponent $\delta\sim 1/5$. The exponent, smaller than in the bulk, indicates a weaker relative variation at the surfaces. More importantly, the orientational moment decreases with the propulsive force near the wall, unlike in the bulk, where it is independent of the P{\'e}clet number. This dependence of $\chi_p$ is associated with the steep steric interactions with the walls.
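As an aside on how such an exponent can be read off from simulation data, a least-squares fit in log--log coordinates recovers $\beta$. The sketch below is purely illustrative and is not the paper's analysis code: the data are synthetic and noise-free, and the amplitude and grid of $Wi$ values are arbitrary (the scaling prediction $\beta=7/9\approx 0.78$ lies close to the fitted $0.8$).

```python
import numpy as np

# Illustrative only: extract beta from s ~ Wi^(-beta) by a linear fit in
# log-log coordinates. Synthetic, noise-free data; amplitude 2.0 and the
# Wi grid are arbitrary choices, not values from the paper.
Wi = np.logspace(0.5, 2.0, 12)              # intermediate flow regime
s = 2.0 * Wi ** (-0.8)                      # surface excess with beta = 0.8
slope, intercept = np.polyfit(np.log(Wi), np.log(s), 1)
beta = -slope                               # fitted power-law exponent
assert abs(beta - 0.8) < 1e-9
```

With real simulation data the points carry noise, so the fitted exponent would match only approximately.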
The onset of the decrease in the majority population illustrates the weakening of the effect of self-propulsion relative to strong shear forces, which eventually leads to equal populations at an angular separation of $\pi$\cite{Nili:rsc:2017}. The dynamics of the modelled microswimmers in complex active environments would be interesting to consider in future studies, as it may bring the model closer to more realistic systems. More specifically, curved and soft surfaces, shear gradients, chiral-shaped swimmers, and viscoelastic media may lead to several distinguishable motional phases. {\color{black} It would further be interesting to consider the conservation of angular momentum in such simulations, especially for chiral-shaped or spherically symmetric swimmers. It may influence the streamlines near the swimmers, as well as their motility and density profile\cite{yang2015effect}.} \section{Acknowledgements} The authors acknowledge the HPC facility at IISER Bhopal for computation time. We thank DST SERB Grant No. YSS/2015/000230 for financial support. SKA thanks IISER Bhopal for funding.
\section{Introduction} This paper deals with the following fundamental question: \begin{itemize} \item [] \textit{What information is sufficient for learning, and what guarantees can it bring that regular data cannot}? \end{itemize} By ``regular'', we mean the usual inputs provided to a learner. In our context of batch supervised learning, this is a training set of examples, each of which is an observation with a class, and learning means inducing, in reasonable time, an accurate function from observations to classes, a \textit{classifier}. It turns out that we do not need the details of the classes to learn a classifier (linear or kernelized): an aggregate, whose size is the dimension of the observation space, is minimally sufficient, the mean operator \cite{pnrcAN}. But do we need examples? This perhaps surprising and non-trivial question is becoming crucial now that the nature of stored and processed signals intelligence data is heavily debated in the public sphere \cite{lCU,seaBC}. In the context of machine learning (ML), the objective of being accurate is increasingly subsumed by more complex goals, sometimes involving challenging tradeoffs in which accuracy does not ultimately appear among the topmost requirements. Privacy is one such crucial goal \cite{djwPA,ecTE,gBP}. There are various models that capture the privacy requirement, such as secure multi-party computation and differential privacy (DP, \cite{drTA}). The former usually relies on cryptographic protocols, which can be heavy even for bare classification with simple algorithms \cite{bptgML}. The latter usually relies on the power of randomization to ensure that any ``local'' change cannot be spotted from the output delivered \cite{drvBA,drTA}. In an ML setting, randomization can be performed at various stages, from the examples to the output of a classifier.
We focus on the upstream stage of the process, \textit{i.e.} the input to the learner, which grants the benefit that \textit{all} subsequent stages also comply with differential privacy. Randomization has its power, but it also has its limits in this case, as it may significantly degrade the performance of learners. The way we address this problem starts from a surprising observation, whose relevance to supervised ML goes beyond learning with private data: learning a linear (or kernelized) classifier over examples through the minimization of the expected logistic loss is equivalent to learning \textit{the same classifier} by minimizing an exponential loss over a complete set of transformed data that we call \textit{Rademacher observations}, or rados. Each rado is a sum of \textit{edge vectors} over examples (edge = observation $\times$ label). We also show that efficient learning may be achieved when carried out over \textit{subsets} of all possible rados. This is our first contribution, and we expect it to be useful in several other areas of supervised learning. In the context of learning with private data, our other contributions can be summarized as showing how rados may yield new privacy guarantees --- not limited to differential privacy --- while authorising boosting-compliant rates for learning. More precisely, our second contribution is a rado-based learning algorithm with boosting-compliant convergence rates on the \textit{logistic loss computed over the examples}. Thus, we learn an accurate classifier over rados, and the same classifier is accurate over examples as well. The fact that efficient learning may be achieved through subsets of rados is interesting because it opens the problem of designing this particular subset to address domain-specific requirements that add to the ML accuracy requirement.
Among our other contributions, we provide one important design example, showing how to build differentially private mechanisms for rado delivery, such as protecting specific sensitive features in the data. Experiments confirm that, in this case, learning from differentially private rados may still be competitive with learning from examples. We provide another design, which pairs with our rado-based boosting algorithm, with the crucial property that when the examples have been DP-protected by the popular Gaussian mechanism \cite{drTA}, the joint pair (rado delivery design, boosting algorithm) may achieve convergence rates \textit{comparable to the noise-free} setting with high probability, even under strong DP protection regimes. Our last contribution is to show that rados may protect the privacy of the original examples not only in the DP framework, but also from several algebraic, geometric, and even computational-complexity-theoretic standpoints. The remainder of this paper is organized as follows. \textsection\ref{srasl} presents Rademacher observations, shows the equivalence between learning from examples and learning from rados, and shows how learning from subsets of rados may be sufficient for efficient learning; \textsection\ref{sbur} presents our rado-based boosting algorithm, and \textsection\ref{exp_boost_rado} presents experiments with this algorithm; \textsection\ref{sradp} presents our results in DP models, and \textsection\ref{exp_dp_rado} presents related experiments; \textsection\ref{sec_hardness} provides results on the hardness of reconstructing examples from rados from algebraic, geometric, and computational standpoints. To keep the paper readable, proofs and additional experiments are given in two separate appendices: Section \ref{app_proof_proofs} (proofs) and Section \ref{app_exp_expes} (experiments). \section{Rados and supervised learning}\label{srasl} Let $[n] = \{1, 2, ..., n\}$.
We are given a set of $m$ examples ${\mathcal{S}} \defeq \{(\ve{x}_i, y_i), i \in [m]\}$, where $\ve{x}_i \in {\mathcal{X}} \subseteq {\mathbb{R}}^d$ is an observation and $y_i \in \{-1,1\}$ is a label, or class. ${\mathcal{X}}$ is the domain. A linear classifier $\ve{\theta} \in {\Theta}$, for some fixed ${\Theta} \subseteq {\mathbb{R}}^d$, gives $\ve{x} \in {\mathcal{X}}$ a label equal to the sign of $\ve{\theta}^\top \ve{x} \in {\mathbb{R}}$. Our results can be lifted to kernels (at least with finite-dimensional feature maps) following standard arguments \cite{qsclEL}. We let $\Sigma_m \defeq \{-1,1\}^m$. \begin{definition} For any $\ve{\sigma} \in \Sigma_m$, the Rademacher observation $\ve{\rado}_{\ve{\sigma}}$ with signature $\ve{\sigma}$ is $\ve{\rado}_{\ve{\sigma}} \defeq (1/2) \cdot \sum_i (\sigma_i + y_i) \ve{x}_i$. \end{definition} The simplest way to randomly sample rados is to pick the $\sigma_i$ as i.i.d. Rademacher variables, hence the name. The reference to ${\mathcal{S}}$ is implicit in the definition of $\ve{\rado}_{\ve{\sigma}}$. A Rademacher observation sums \textit{edge vectors} (the terms $y_i\ve{x}_i$) over the subset of examples for which $y_i = \sigma_i$. When $\ve{\sigma} = \ve{y}$ is the vector of classes, $\ve{\rado}_{\ve{\sigma}} = m \ve{\mu}_{{\mathcal{S}}}$ is $m$ times the mean operator \cite{qsclEL,pnrcAN}. When $\ve{\sigma} = -\ve{y}$, we get the null vector $\ve{\rado}_{\ve{\sigma}} = \ve{0}$. A popular approach to learning $\ve{\theta}$ over ${\mathcal{S}}$ is to minimize the surrogate risk $\logloss\left({\mathcal{S}}, \ve{\theta}\right)$ built from the logistic loss (logloss): \begin{eqnarray} \logloss\left({\mathcal{S}}, \ve{\theta}\right) & \defeq & \frac{1}{m} \sum_{i} \log\left(1+\exp\left(-y_i \ve{\theta}^\top \ve{x}_i\right)\right)\:\:.
\label{deflogloss} \end{eqnarray} We define the \textit{exponential rado-risk} $\explossrado({\mathcal{S}}, \ve{\theta}, \mathcal{U})$, computed on any ${\mathcal{U}} \subseteq \Sigma_m$ with cardinality $|{\mathcal{U}}| = n$, as: \begin{eqnarray} \explossrado({\mathcal{S}}, \ve{\theta}, \mathcal{U}) & \defeq & \frac{1}{n} \sum_{\ve{\sigma} \in {\mathcal{U}}} \exp\left( - \ve{\theta}^\top\ve{\rado}_{\ve{\sigma}}\right)\label{defExp}\:\:. \end{eqnarray} It turns out that $\logloss = g(\explossrado)$ for some continuous, strictly increasing $g$; hence, minimizing one criterion is equivalent to minimizing the other. This is stated formally in the following Lemma. \begin{lemma}\label{lem_equivlogexp} The following holds true, for any $\ve{\theta}$ and ${\mathcal{S}}$: \begin{eqnarray} \logloss({\mathcal{S}}, \ve{\theta}) & = & \log(2) + \frac{1}{m} \log \explossrado({\mathcal{S}}, \ve{\theta}, \Sigma_m)\:\:. \label{eqq1} \end{eqnarray} \end{lemma} (Proof in the Appendix, Subsection \ref{proof_lem_equivlogexp}.) Lemma \ref{lem_equivlogexp} shows that learning with examples via the minimization of $\logloss\left({\mathcal{S}}, \ve{\theta}\right)$ and learning with all rados via the minimization of $\explossrado({\mathcal{S}}, \ve{\theta}, \Sigma_m)$ are essentially equivalent tasks. Since the cardinality $|\Sigma_m| = 2^m$ is exponential, it is unrealistic, even for moderate-size samples, to pick the latter option. This raises, however, a very interesting question: if we replace $\Sigma_m$ by a subset ${\mathcal{U}}$ of size $\ll 2^m$, what does the relationship between examples and rados in eq. (\ref{eqq1}) become? We answer this question under the following setting: \begin{itemize} \item [(i)] instead of $\Sigma_m$, we consider a predefined $\Sigma_r \subseteq \Sigma_m$; \item [(ii)] instead of considering ${\mathcal{U}} = \Sigma_r$, we sample ${\mathcal{U}} \sim \Sigma_r$ uniformly i.i.d. to obtain $n \geq 1$ rados.
\end{itemize} While (ii) directly targets reducing the number of rados, (i) is an upper-level strategic design to tackle additional constraints, such as differential privacy. We now need the following definition of the \textit{logistic rado-risk}: \begin{eqnarray} \loglossrado\left({\mathcal{S}}, \ve{\theta}, \mathcal{U}\right) & \defeq & \log(2) + \frac{1}{m} \log \explossrado({\mathcal{S}}, \ve{\theta}, \mathcal{U})\:\:, \label{deflogSU} \end{eqnarray} for any ${\mathcal{U}} \subseteq \Sigma_m$, so that $\logloss\left({\mathcal{S}}, \ve{\theta}\right) = \loglossrado\left({\mathcal{S}}, \ve{\theta}, \Sigma_m\right)$. We also define the open ball ${\mathcal{B}}(\ve{0},r) \defeq \{\ve{x} \in {\mathbb{R}}^d : \|\ve{x}\|_2 < r\}$. \begin{theorem}\label{thm_concentration} Assume $\Theta \subseteq {\mathcal{B}}(\ve{0},r_\theta)$, for some $r_\theta > 0$. Let: \begin{eqnarray*} \varrho & \defeq & \frac{ \sup_{\ve{\theta}' \in \Theta} \max_{\ve{\sigma} \in \Sigma_r} \exp(-\ve{\theta}'^\top \ve{\rado}_{\ve{\sigma}})}{\explossrado({\mathcal{S}}, \ve{\theta}, \Sigma_r)}\:\:,\\ \varrho' & \defeq & \frac{\explossrado({\mathcal{S}}, \ve{\theta}, \Sigma_r)}{\explossrado({\mathcal{S}}, \ve{\theta}, \Sigma_m)} \:\:, \end{eqnarray*} where $\Sigma_r$ follows (i) above.
Then $\forall \upeta > 0$, there is probability $\geq 1 - \upeta$ over the sampling of ${\mathcal{U}}$ in (ii) above that: \begin{eqnarray} \logloss\left({\mathcal{S}}, \ve{\theta}\right) & \leq & \loglossrado({\mathcal{S}}, \ve{\theta}, \mathcal{U}) + Q - \frac{1}{m} \cdot \log\left(1 - \frac{q}{\sqrt{n}}\right) \:\:,\label{thc11} \end{eqnarray} with \begin{eqnarray} q & = & \Omega\left( \varrho \cdot \sqrt{r_\theta \max_{\Sigma_r} \left\|\ve{\rado}_{\ve{\sigma}}\right\|_2 + d\log \frac{2en}{d} + \log \frac{1}{\upeta}} \right)\label{eq001} \end{eqnarray} and $Q \defeq - (1/m) \cdot \log \varrho'$ satisfies $Q = 0$ if $\Sigma_r = \Sigma_m$ and \begin{eqnarray} Q & \leq & r_\theta\left(\|\ve{\nabla}_{\ve{\theta}} \loglossrado\left({\mathcal{S}}, \ve{\theta}, \Sigma_m\right)\|_2 + \overline{\pi}_r\right)\label{bsupQ} \end{eqnarray} otherwise, letting $\overline{\pi}_r \defeq \left\|\expect_{\ve{\sigma}\sim \Sigma_r} (1/m) \cdot \ve{\rado}_{\ve{\sigma}}\right\|_2$. Furthermore, $\forall 0\leq \beta < 1/2$, if $m$ is sufficiently large, then letting $\pi_r^* \defeq \max_{\Sigma_r} \left\|(1/m) \cdot \ve{\rado}_{\ve{\sigma}}\right\|_2$, ineq. (\ref{thc11}) becomes: \begin{eqnarray} \logloss\left({\mathcal{S}}, \ve{\theta}\right) & \leq & \loglossrado({\mathcal{S}}, \ve{\theta}, \mathcal{U}) + Q\nonumber\\ & & + O\left( \frac{\varrho}{m^\beta} \cdot \sqrt{\frac{r_\theta \pi_r^*}{n} + \frac{d}{n m}\log \frac{2en}{d \upeta} } \right) \:\:.\label{thc22} \end{eqnarray} \end{theorem} (Proof in the Appendix, Subsection \ref{proof_thm_concentration}.) Theorem \ref{thm_concentration} does not depend on the algorithm that learns $\ve{\theta}$. The right-hand side of ineq. (\ref{thc11}) shows two penalties. $Q$ arises from the choice of $\Sigma_r$ and is therefore structural. Regardless of $\Sigma_r$, when the classifier is reasonably accurate over all rados and the expected example edges in $\Sigma_r$ average within a ball of reduced radius, the upperbound on $Q$ in ineq.
(\ref{bsupQ}) can be very small. The other penalty, which depends on $q$, is statistical and comes from the sampling in $\Sigma_r$. Theorem \ref{thm_concentration} shows that when $\Sigma_r = \Sigma_m$, even when $n\ll m$, the minimization of $\loglossrado\left({\mathcal{S}}, \ve{\theta}, \mathcal{U}\right)$ may still bring, with high probability, guarantees on the minimization of $\logloss\left({\mathcal{S}}, \ve{\theta}\right)$. Thus, a lightweight optimization procedure over a small number of rados may bring guarantees on the minimization of the expected logloss over \textit{examples} for the \textit{same} classifier. The following Section exhibits one such algorithm. \begin{algorithm}[t] \caption{Rado boosting ({\small \radoboost})}\label{algoRadoboost} \begin{algorithmic} \STATE \textbf{Input} set of rados ${\mathcal{S}}^r \defeq \{\ve{\rado}_{1},\ve{\rado}_{2}, ..., \ve{\rado}_{n}\}$; $T\in {\mathbb{N}}_*$; \STATE Step 1 : let $\ve{\theta}_0 \leftarrow \ve{0}$, $\ve{w}_0 \leftarrow (1/n)\ve{1}$ ; \STATE Step 2 : \textbf{for} $t = 1, 2, ..., T$ \STATE \hspace{1.1cm} Step 2.1 : $[d] \ni \iota(t) \leftarrow \textsc{\weak}({\mathcal{S}}^r, \ve{w}_t)$; \STATE \hspace{1.1cm} Step 2.2 : let \begin{eqnarray} r_t & \leftarrow & \frac{1}{\rado_{*\iota(t)}} \sum_{j=1}^{n} {w_{tj} \rado_{j \iota(t)}}\:\:;\label{defMu}\\ \alpha_{t} & \leftarrow & \frac{1}{2 \rado_{*\iota(t)}} \log \frac{1 + r_t}{1 - r_t}\:\:;\label{defalpha} \end{eqnarray} \STATE \hspace{1.1cm} Step 2.3 : \textbf{for} $j = 1, 2, ..., n$ \begin{eqnarray} w_{(t+1)j} & \leftarrow & w_{tj} \cdot \left(\frac{1-\frac{r_t \rado_{j \iota(t)}}{\rado_{*\iota(t)}}}{1-r^2_{t}}\right) \:\:;\label{defweights} \end{eqnarray} \STATE \textbf{Return} $\ve{\theta}_T$ defined by $\theta_{Tk} \defeq \sum_{t:\iota(t) = k} \alpha_{t} \:\:, \forall k \in [d]$; \end{algorithmic} \end{algorithm} \section{Boosting using rados}\label{sbur} \begin{table}[t] \begin{center} {\scriptsize \begin{tabular}{|crrc|r|rr|rr|c|c|} \hline \hline 
& & & & \multicolumn{1}{c|}{{\scriptsize \adaboostSS}} & \multicolumn{2}{c|}{{\scriptsize \adaboostSSS}} & \multicolumn{2}{c|}{{\scriptsize \radoboost}} & & \\ Domain & \multicolumn{1}{c}{$m$} & \multicolumn{1}{c}{$d$} & 100$\sigma$ & \multicolumn{1}{c|}{ err$\pm\sigma$} & \multicolumn{1}{c}{err$\pm\sigma$} & \multicolumn{1}{c|}{$\frac{n}{m}$} & \multicolumn{1}{c}{err$\pm\sigma$} & \multicolumn{1}{c|}{$\frac{n}{2^m}$} & $p$ & $p'$\\ \hline Fertility & 100 & 9 & -- & 47.00$\pm$18.99 & 44.00$\pm$16.47 & $0.50$ & 53.00$\pm$14.94 & {\tiny [$8$:$-28$]} & 0.23 & 0.09 \\ Haberman & 306 & 3 & -- & 25.72$\pm$10.62 & 33.01$\pm$9.58 & $0.50$ & 26.08$\pm$9.94 & {\tiny [$8$:$-90$]} & 0.70 & 0.02\\ Transfusion & 748 & 4 & -- & 39.42$\pm$6.13 & 37.83$\pm$4.94 & $0.50$ & 39.29$\pm$5.76 &{\tiny [$7$:$-223$]} & 0.81 & 0.36\\ Banknote & 1 372 & 4 & -- & 2.77$\pm$1.28 & 2.63$\pm$1.34 & $0.50$ & 14.21$\pm$3.22 & {\tiny [$9$:$-411$]} & $\varepsilon$ & $\varepsilon$\\ Breast wisc & 699 & 9 & -- & 3.00$\pm$1.42 & 3.43$\pm$2.25 & $0.50$ & 4.86$\pm$2.35 & {\tiny [$4$:$-208$]} & 0.03 & 0.13\\ Ionosphere & 351 & 33 & -- & 11.69$\pm$5.31 & 11.70$\pm$4.77 & $0.50$ & 15.40$\pm$9.93 & {\tiny [$2$:$-103$]} & 0.13 & 0.09\\ Sonar & 208 & 60 & -- & 26.88$\pm$9.36 & 25.43$\pm$6.61 & $0.50$ & 28.36$\pm$8.84 & {\tiny [$2$:$-60$]} & 0.76 & 0.42\\ Wine-red$^*$ & 1 599 & 11 & 1 & 26.14$\pm$3.10 & 26.39$\pm$3.15 & $0.50$ & 28.02$\pm$2.90 & {\tiny [$4$:$-479$]} & 0.05 & 0.03\\ Abalone$^*$ & 4 177 & 8 & -- & 22.96$\pm$1.44 & 23.20$\pm$1.44 & $0.24$ & 25.14$\pm$1.83 & {\tiny [$3$:$-$[$1$:$3$]]} & $\varepsilon$ & $\varepsilon$\\ Wine-white$^*$ & 4 898 & 11 & 1 & 30.93$\pm$3.42 & 30.44$\pm$3.25 & $0.20$ & 32.48$\pm$3.55 & {\tiny [$3$:$-$[$1$:$3$]]} & $\varepsilon$ & $\varepsilon$\\ Magic$^*$ & 19 020 & 10 & -- & 21.07$\pm$0.98 & 20.91$\pm$0.99 & $0.05$ & 22.75$\pm$1.51 & {\tiny [$3$:$-$[$5$:$3$]]} & $\varepsilon$ & 0.01\\ EEG & 14 980 & 14 & 14 & 46.04$\pm$1.38 & 44.36$\pm$1.99 & $0.07$ & 44.23$\pm$1.73 & {\tiny 
[$4$:$-$[$4$:$3$]]} & $\varepsilon$ & 0.86\\ Hardware$^*$ & 28 179 & 95 & -- & 16.82$\pm$0.72 & 16.76$\pm$0.73 & 0.04 & 7.61$\pm$3.24 & {\tiny [$2$:$-$[$8$:$3$]]} & $\varepsilon$ & $\varepsilon$ \\ Twitter$^*$ & 583 250 & 77 & 44 & 53.75$\pm$1.48 & 53.09$\pm$11.23 & {\tiny [$1$:$-3$]} & 6.00$\pm$0.77 & {\tiny [$1$:$-$[$1$:$5$]]} & $\varepsilon$ & $\varepsilon$ \\ SuSy & 5 000 000 & 17 & -- & 27.76$\pm$0.14 & 27.43$\pm$0.19 & {\tiny [$2$:$-4$]} & 27.26$\pm$0.55 & {\tiny [$1$:$-$[$1$:$6$]]} & 0.02 & 0.39 \\ Higgs & 11 000 000 & 28 & -- & 42.55$\pm$0.19 & 45.39$\pm$0.28 & {\tiny [$9$:$-5$]} & 47.86$\pm$0.06 & {\tiny [$1$:$-$[$1$:$7$]]} & $\varepsilon$ & $\varepsilon$\\ \hline\hline \end{tabular} } \end{center} \caption{Comparison of \radoboost~($n$ random rados), \adaboostSS~\cite{ssIBj} (full training fold) and \adaboostSSS~($n$ random examples in the training fold); domains are ranked in increasing $d\cdot m$ value. Column ``$n/m$'' (resp. ``$n/2^m$'') for \adaboostSSS~(resp. \radoboost) is the proportion of training data with respect to the fold size (resp. the full set of rados). The notation [$a$:$b$] is shorthand for $a \times 10^{b}$. Column ``$100\sigma$'' is the number of features with outlier values more than $100\sigma$ away from the mean in absolute value. Column $p$ (resp. $p'$) is the $p$-value of a two-tailed paired $t$-test of \adaboostSS~(resp. \adaboostSSS) vs \radoboost. $\varepsilon$ means $<0.01$.} \label{tc1_full} \end{table} Algorithm \ref{algoRadoboost} provides a boosting algorithm, \radoboost, that learns from a set of Rademacher observations ${\mathcal{S}}^r \defeq \{\ve{\rado}_{1},\ve{\rado}_{2}, ..., \ve{\rado}_{n}\}$. Their (unknown) Rademacher assignments are denoted ${\mathcal{U}} \defeq \{\ve{\sigma}_1, \ve{\sigma}_2, ..., \ve{\sigma}_n\} \subseteq \Sigma_m$. These rados have been computed from some sample ${\mathcal{S}}$, unknown to \radoboost.
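Before entering the algorithm's details, the example--rado equivalence of Lemma \ref{lem_equivlogexp} is easy to check numerically on a toy sample. The sketch below is illustrative (the data and variable names are ours, not the paper's): it builds all $2^m$ rados for a tiny $m$ and verifies eq. (\ref{eqq1}) up to floating-point precision.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, d = 6, 3
X = rng.normal(size=(m, d))                  # observations x_i
y = rng.choice([-1.0, 1.0], size=m)          # labels y_i
theta = rng.normal(size=d)                   # an arbitrary classifier

# Left-hand side of eq. (eqq1): logistic loss over the m examples.
margins = y * (X @ theta)
logloss = np.mean(np.log1p(np.exp(-margins)))

# Right-hand side: exponential rado-risk over all 2^m rados,
# with pi_sigma = (1/2) * sum_i (sigma_i + y_i) x_i.
rado_losses = []
for sigma in itertools.product([-1.0, 1.0], repeat=m):
    pi = 0.5 * ((np.asarray(sigma) + y)[:, None] * X).sum(axis=0)
    rado_losses.append(np.exp(-theta @ pi))
rhs = np.log(2.0) + np.log(np.mean(rado_losses)) / m

assert abs(logloss - rhs) < 1e-8             # the identity holds exactly
```

The sum over $\Sigma_m$ factorizes across examples, which is why the identity is exact rather than approximate.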
In the statement of the algorithm, $\rado_{jk}$ denotes coordinate $k$ of $\ve{\rado}_{j}$, and $\rado_{*k} \defeq \max_j |\rado_{jk}|$. More generally, the coordinates of some vector $\ve{z} \in {\mathbb{R}}^d$ are denoted $z_1, z_2, ..., z_d$. Step 2.1 gets a feature index $\iota(t)$ from a \textit{weak feature index oracle}, $\weak$. In its general form, \weak~returns a feature index maximizing $|r_t|$ in (\ref{defMu}). This weight update is preferred to AdaBoost's exponential update because rados can have large feature values, and the multiplicative update in (\ref{defweights}) prevents the numerical precision errors that could otherwise occur with an exponential update. We now prove a key Lemma on \radoboost, namely the fast convergence of the exponential rado-risk $\explossrado({\mathcal{S}}, \ve{\theta}, {\mathcal{U}})$ under a weak learning assumption (\textbf{WLA}). We shall then obtain the convergence of the logistic rado-risk (\ref{deflogSU}) and, via Theorem \ref{thm_concentration}, the convergence with high probability of $\logloss\left({\mathcal{S}}, \ve{\theta}\right)$. \begin{itemize} \item [(\textbf{WLA})] $\exists \upgamma > 0$ such that $\forall t\geq 1$, the feature returned by $\weak$ in Step 2.2 (\ref{defMu}) satisfies $|r_t| \geq \upgamma$. \end{itemize} \begin{lemma}\label{lem_radoboost} Suppose the (\textbf{WLA}) holds. Then after $T$ rounds of boosting in \radoboost, the following upperbound holds on the exponential rado-loss of $\ve{\theta}_T$: \begin{eqnarray} \explossrado({\mathcal{S}}, \ve{\theta}_T, \mathcal{U}) & \leq & \exp\left(-T\upgamma^2/2\right)\:\:.\label{explossbound} \end{eqnarray} \end{lemma} (Proof in the Appendix, Subsection \ref{proof_lem_radoboost}.) We now consider Theorem \ref{thm_concentration} with $\Sigma_r = \Sigma_m$, and therefore $Q=0$.
Blending Lemma \ref{lem_radoboost} and Theorem \ref{thm_concentration} using (\ref{deflogSU}) yields that, under the (\textbf{WLA}), we may observe with high probability (again, fixing $\Sigma_r = \Sigma_m$, so $Q=0$ in Theorem \ref{thm_concentration}): \begin{eqnarray} \logloss\left({\mathcal{S}}, \ve{\theta}_T\right) & \leq & \log(2) - \frac{T\upgamma^2}{2m} + Q'\:\:,\label{Qbound} \end{eqnarray} where $Q'$ is the rightmost term in ineq. (\ref{thc11}) or ineq. (\ref{thc22}). So provided $n \ll 2^m$ is sufficiently large, minimizing the exponential rado-risk over a \textit{subset of rados} brings a classifier whose average logloss on the \textit{whole set of examples} may decrease at rate $\Omega(\upgamma^2/m)$ under a weak learning assumption made over \textit{rados} only. This rate competes with those for direct approaches to boosting the logloss \cite{nnOT}, and we now show that our weak learning assumption is also essentially equivalent to the one made in boosting over examples \cite{ssIBj}. Let us rewrite $r_t(\ve{w})$ as the normalized edge in (\ref{defMu}), making explicit the dependence on the current rado weights. Let \begin{eqnarray} r_t^{ex}(\tilde{\ve{w}}) & \defeq & \frac{1}{x_{*\iota(t)}} \sum_{i=1}^{m} {\tilde{w}_{i} x_{i \iota(t)}} \label{exedge} \end{eqnarray} be the normalized edge for the same feature $\iota(t)$ as the one picked in step 2.1 of \radoboost, but computed over examples using some weight vector $\tilde{\ve{w}} \in {\mathbb{P}}^m$; here, ${\mathbb{P}}^m$ is the $m$-dimensional probability simplex and $x_{*\iota(t)} \defeq \max_i |x_{i\iota(t)}|$. \begin{lemma}\label{lem_wla} $\forall \ve{w}_t \in {\mathbb{P}}^n$, $\forall \upgamma > 0$, there exists $\tilde{\ve{w}} \in {\mathbb{P}}^m$ and $\upgamma^{ex} > 0$ such that $|r_t(\ve{w}_t)| \geq \upgamma$ iff $|r_t^{ex}(\tilde{\ve{w}})| \geq \upgamma^{ex}$.
\end{lemma} (Proof in the Appendix, Subsection \ref{proof_lem_wla}.) The proof of the Lemma gives clues as to why the presence of outlier feature values may favor \radoboost. \section{Basic experiments with \radoboost}\label{exp_boost_rado} We have compared \radoboost~to its main contender, \adaboostSS~\cite{ssIBj}, using the same weak learner, which in \adaboostSS~returns a feature maximizing $|r_t|$ as in eq. (\ref{exedge}). In these basic experiments, we have deliberately not optimized the set of rados from which we sample ${\mathcal{U}}$ for \radoboost; hence, we have $\Sigma_r = \Sigma_m$. We have performed comparisons with 10-fold stratified cross-validation (CV) on 16 domains of the UCI repository \cite{blUR} of varying size. Table \ref{tc1_full} presents the results. Each algorithm was run for a total number of $T = 1000$ iterations; furthermore, the classifier kept for testing is the one minimizing the empirical risk throughout the $T$ iterations; in doing so, we also assessed the early convergence of the algorithms. We fixed $n = \min\{1000, \mbox{train fold size}/2\}$. Table \ref{tc1_full} shows that \radoboost~compares favourably to \adaboostSS, and furthermore tends to do all the better as $m$ and $d$ increase. On some domains, like Hardware and Twitter, the difference is impressive and clearly in favor of \radoboost. As discussed for Lemma \ref{lem_wla}, we can interpret these comparatively very poor performances of \adaboostSS~as the consequence of outlier features, which can trick \adaboostSS~into picking the wrong sign for the leveraging coefficient $\alpha_t$ over a large number of iterations when real-valued classifiers are used (see column $100\sigma$ in Table \ref{tc1_full}). This drawback can easily be corrected (cf. Appendix, Subsection \ref{exp_tc1}) by enforcing minimal $|r_t|$ values. This significantly improves \adaboostSS~on Hardware and Twitter. The improvements observed on \radoboost~are even more favorable.
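To make Algorithm \ref{algoRadoboost} concrete, the sketch below is a direct, unofficial transcription on toy rados (the data and names are ours); the \weak~oracle is instantiated as the argmax of $|r_t|$, as in the text. The final assertion checks the qualitative claim of Lemma \ref{lem_radoboost}: the exponential rado-risk drops below its initial value of $1$ at $\ve{\theta} = \ve{0}$.

```python
import numpy as np

def radoboost(rados, T):
    """Sketch of RadoBoost: boosting over an (n, d) array of rados."""
    n, d = rados.shape
    scale = np.abs(rados).max(axis=0)        # pi_{*k} = max_j |pi_{jk}|
    w = np.full(n, 1.0 / n)                  # Step 1: w_0 = (1/n) 1
    theta = np.zeros(d)
    for _ in range(T):
        # Step 2.1: weak oracle, here the feature maximizing |r_t|.
        r_all = (w @ rados) / scale
        k = int(np.argmax(np.abs(r_all)))
        r = r_all[k]
        # Step 2.2: leveraging coefficient, accumulated into theta_k.
        theta[k] += np.log((1.0 + r) / (1.0 - r)) / (2.0 * scale[k])
        # Step 2.3: multiplicative weight update; note it keeps sum(w) = 1.
        w = w * (1.0 - r * rados[:, k] / scale[k]) / (1.0 - r * r)
    return theta

# Toy rados with a consistently positive first feature.
rados = np.array([[2.0, 1.0], [3.0, -1.0], [1.0, 0.5]])
theta = radoboost(rados, T=10)
exp_loss = np.mean(np.exp(-(rados @ theta)))  # exponential rado-risk
assert exp_loss < 1.0                         # decreased from 1 at theta = 0
```

One line worth noting: the update of Step 2.3 preserves the normalization of $\ve{w}$ exactly, since $\sum_j w_{tj}\rado_{j\iota(t)}/\rado_{*\iota(t)} = r_t$.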
\section{Rados and differential privacy}\label{sradp} \begin{figure}[t] \begin{center} \includegraphics[width=0.4\columnwidth]{FigRadoPrivacy_4} \caption{Summary of the DP-related contributions of Section \ref{sradp} (in color). (a): usual DP mechanism that protects examples (S) prior to delivery to the learner (L); (b): mechanism that crafts differentially private rados (R) from unprotected examples (\textsection \ref{sfwdp}); (c): mechanism crafting rados from DP-compliant examples with the objective of improving the performance of the rado-based learner L' (\textsection \ref{sbfdp}).} \label{dppic} \end{center} \end{figure} We now discuss the delivery of rados that comply with several DP constraints, and their impact on boosting. We thus address both levels (i+ii) of rado delivery in \textsection\ref{srasl}. Our general model is the standard DP model \cite{drTA}. Intuitively, an algorithm is DP-compliant if, for any two neighboring datasets, it assigns similar probability to any possible output $O$. In other words, any particular record has only limited influence on the probability of any given output of the algorithm, and therefore the output discloses very little information about any particular record in the input. Formally, a randomized algorithm $\mathcal{A}$ is $(\upepsilon, \updelta)$-differentially private \cite{dmnsCN}, for some $\upepsilon, \updelta >0$, iff: \begin{eqnarray} \mathbb{P}_{\mathcal{A}}[O|\mathcal{S}] & \leq & \exp(\upepsilon)\cdot\mathbb{P}_\mathcal{A}[O|\mathcal{S}'] + \updelta, \forall \mathcal{S}\approx \mathcal{S}', O,\label{dpreq} \end{eqnarray} where the probability is over the coin tosses of $\mathcal{A}$. This model is very strong, especially when $\updelta = 0$, and in the context of ML, maintaining high accuracy in strong DP regimes generally involves a tricky tradeoff \cite{djwPA}.
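For intuition on ineq. (\ref{dpreq}), the textbook randomized-response mechanism on a single private bit is a minimal instance where the $(\upepsilon, 0)$-DP ratio can be computed in closed form. It is not one of this paper's mechanisms; the sketch below is a generic illustration of the definition only.

```python
import math
import random

def randomized_response(bit, eps):
    """Return the true bit with probability e^eps / (1 + e^eps), else flip it."""
    keep = random.random() < math.exp(eps) / (1.0 + math.exp(eps))
    return bit if keep else 1 - bit

# Neighboring inputs b = 0 and b' = 1 differ in one record; for any output o,
# the probability ratio is exactly e^eps, so ineq. (dpreq) holds with delta = 0.
eps = 1.0
p_keep = math.exp(eps) / (1.0 + math.exp(eps))
ratio = p_keep / (1.0 - p_keep)   # P[O = 0 | b = 0] / P[O = 0 | b = 1]
assert abs(ratio - math.exp(eps)) < 1e-12
```

Smaller $\upepsilon$ drives the keep-probability toward $1/2$, i.e., toward pure noise: the accuracy/privacy tradeoff in its simplest form.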
\begin{algorithm}[t] \caption{Feature-wise DP-compliant rados (\dpfreal)}\label{algodpfeat} \begin{algorithmic} \STATE \textbf{Input} set of examples ${\mathcal{S}}$, sensitive feature $j_* \in [d]$, number of rados $n$, differential privacy parameter $\upepsilon > 0$; \STATE Step 1 : let $\beta \leftarrow 1/(1+\exp(\upepsilon/2)) \in [0,1/2)$; \STATE Step 2 : sample $\ve{\sigma}_1, \ve{\sigma}_2, ..., \ve{\sigma}_n$ i.i.d. (uniform) in $\sbj$; \STATE \textbf{Return} set of rados $\{\ve{\rado}_{\ve{\sigma}} : \ve{\sigma} \mbox{ sampled in Step 2}\}$; \end{algorithmic} \end{algorithm} Because rados are an intermediate step between the training sample ${\mathcal{S}}$ and a rado-based learner, there are two ways to design rados with respect to the DP framework: crafting DP-compliant rados from unprotected examples, or crafting rados from DP-compliant examples with the aim of improving the performance of the rado-based learner (Figure \ref{dppic}). These scenarios can be reduced to the design of $\Sigma_r$. \subsection{A feature-wise DP mechanism for rados}\label{sfwdp} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\columnwidth]{FigExplainDPFeat} \caption{How \dpfreal~works: neighbor samples ${\mathcal{S}}$ and ${\mathcal{S}}'$ differ by one value of feature $j_*$ (\textit{i.e.}, one edge coordinate, represented); the rado whose support relies only on the ``-1'' in ${\mathcal{S}}$ (dashed lines) yields an infinite ratio $\pr_{{\mathcal{A}}}[O | I]/\pr_{{\mathcal{A}}}[O | I']$ in (\ref{dpreq}). This rado would never be sampled by \dpfreal.
On the other hand, a rado that sums an equal number $s$ of ``+1'' and ``-1'' (dotted lines) may yield a ratio very close to 1 (such a rado can be sampled by \dpfreal).} \label{dpexpl} \vspace{-0.5cm} \end{center} \end{figure} In this Subsection, we consider a relaxation of differential privacy, namely \emph{feature-wise} differential privacy, where the differential privacy requirement applies to $j_*$-\textit{neighboring datasets}: we say that two samples $\mathcal{S}, \mathcal{S}'$ are \textit{$j_*$-neighbors}, denoted $\mathcal{S}\approx_{j_*} \mathcal{S}'$, if they are the same except for the value of the $j_*^{th} \in [d]$ observation feature of some example. We further assume that this feature is boolean. For example, we may have a medical database containing a column representing the HIV status of a doctor's patients (1 row = a patient), and we do not wish that changing a single patient's HIV status significantly changes the density of that feature's values in rados. This setting would also be very useful in genetic applications, to hide in rados gene disorders that affect one or a few genes. Feature-wise DP is analogous to the concept of $\alpha$-label privacy \cite{chSC}, where differential privacy is guaranteed with respect to the label. Algorithm ${{\mathcal{A}}}$ in ineq. (\ref{dpreq}) is given in Algorithm \ref{algodpfeat}. It relies on the following subset $\Sigma_r \defeq \sbj \subseteq \Sigma_m$: \begin{eqnarray} \sbj & \defeq & \left\{\ve{\sigma} \in \Sigma_m : \rado_{\ve{\sigma} j_*} \in \left[|\{i : y_i x_{ij_*} = +1\}| - \frac{m}{2} \pm \Delta_\beta \right]\right\} \label{defSrm}\:\:, \end{eqnarray} with $\Delta_\beta \defeq (m/2) - \beta(m+1)$. The key feature of this mechanism is that it does not alter the examples, in the sense that the DP-compliant rados belong to the set of cardinality $2^m$ that can be generated from ${\mathcal{S}}$. Usual data-centered DP mechanisms would rather alter the data, \textit{e.g.}, via noise injection \cite{gBP}.
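Step 2 of \dpfreal~can be implemented by simple rejection sampling: draw $\ve{\sigma}$ uniformly in $\Sigma_m$ and keep it iff the sensitive rado coordinate falls in the band of eq. (\ref{defSrm}). The sketch below is illustrative only: the synthetic data, the $\pm 1$ encoding of the boolean sensitive feature (so that the edge coordinates $y_i x_{ij_*}$ are $\pm 1$), and all names are ours, not the paper's.

```python
import numpy as np

def dpfreal(X, y, j_star, n, eps, rng):
    """Sketch of dpfreal: rejection-sample sigma in Sigma_m until the
    sensitive rado coordinate lies in the admissible band of eq. (defSrm)."""
    m = len(y)
    beta = 1.0 / (1.0 + np.exp(eps / 2.0))        # Step 1
    delta_beta = m / 2.0 - beta * (m + 1)         # band half-width
    center = np.sum(y * X[:, j_star] == 1.0) - m / 2.0
    rados = []
    while len(rados) < n:                         # Step 2: rejection sampling
        sigma = rng.choice([-1.0, 1.0], size=m)
        pi = 0.5 * ((sigma + y)[:, None] * X).sum(axis=0)
        if abs(pi[j_star] - center) <= delta_beta:
            rados.append(pi)
    return np.array(rados)

rng = np.random.default_rng(1)
m, d, j_star = 20, 4, 0
X = rng.normal(size=(m, d))
X[:, j_star] = rng.choice([-1.0, 1.0], size=m)    # boolean sensitive feature, +/-1 encoded
y = rng.choice([-1.0, 1.0], size=m)
R = dpfreal(X, y, j_star, n=15, eps=2.0, rng=rng)
assert R.shape == (15, d)
```

Since a uniform $\ve{\sigma}$ puts the expected sensitive coordinate exactly at the band's center, the acceptance rate stays high and the loop terminates quickly, consistent with the efficiency result below.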
Algorithm \ref{algodpfeat} exploits the fact that it is the tails of feature $j_*$ that leak sensitive information about the feature in rados (see Figure \ref{dpexpl}). The following Theorem is stated so that we can pick small $\updelta$, typically $\updelta \ll 1/m$. Other variants are possible that bring different tradeoffs between $\upepsilon$ and $\updelta$. \begin{theorem}\label{thm_dpfreal} Assume $\upepsilon$ is chosen so that $\upepsilon = o(1)$ but $\upepsilon = \Omega(1/m)$. In this case, \dpfreal~maintains $(n \cdot \upepsilon, n \cdot \updelta)$-differential privacy on feature $j_*$ for some $\updelta > 0$ such that $\upepsilon \cdot \updelta = O(m^{-5/2})$. \end{theorem} (Proof in the Appendix, Subsection \ref{proof_thm_dpfreal}) We have implemented Step 2 of Algorithm \dpfreal~in the simplest way, using simple Rademacher rejection sampling, where each $\ve{\sigma}_j$ is picked i.i.d. as $\ve{\sigma}_j \sim \Sigma_m$ until $\ve{\sigma}_j \in \sbj$. The following Theorem shows its algorithmic efficiency. \begin{theorem}\label{thm_rrs} For any $\upeta > 0$, let $n^*_\upeta \defeq \upeta (1-\exp(2\beta-1)) / (4\beta)$, and let $n_R$ denote the total number of rados sampled in $\Sigma_m$ until $n$ rados are found in $\sbj$. Then for any $\upeta > 0$, there is probability $\geq 1 - \upeta$ that \begin{eqnarray*} n_R & \leq & n \cdot \left\{ \begin{array}{ccl} 1 & \mbox{if} & n \leq n^*_\upeta\\ \left\lceil \frac{1}{m D_{BE}(1-\beta\| 1/2)} \log \frac{n}{n^*_\upeta} \right\rceil & \multicolumn{2}{l}{\mbox{ otherwise}} \end{array} \right. \:\:,\label{boundTRrs} \end{eqnarray*} where $D_{BE}$ is the bit-entropy divergence: $D_{BE}(p\|q) = p \log(p/q) + (1-p) \log((1-p)/(1-q))$, for $p, q \in (0,1)$. \end{theorem} (Proof in the Appendix, Subsection \ref{proof_thm_rrs}) Remark that replacing $\Sigma_m$ by $\Sigma_r = \sbj$ would not necessarily impair the boosting convergence of \radoboost~trained from rados sampled by \dpfreal~(Lemma \ref{lem_radoboost}).
The only systematic change would be in ineq. (\ref{Qbound}) where we would have to integrate the structural penalty $Q$ from Theorem \ref{thm_concentration} to further upperbound $\logloss\left({\mathcal{S}}, \ve{\theta}_T\right)$. In this case, the upperbound in (\ref{bsupQ}) reveals that at least when the mean operator in $\sbj$ has small norm --- which may be the case even when some examples in ${\mathcal{S}}$ have large norm --- and the gradient penalty is small, then $Q$ may be small as well. \begin{table}[t] \begin{center} \begin{tabular}{c||c|c}\hline\hline \hspace{-0.2cm} Section \ref{sfwdp} \hspace{-0.3cm} & \multicolumn{2}{c}{\hspace{-0.1cm} Section \ref{sbfdp}}\\ \hspace{-0.1cm} & \hspace{-0.1cm} \radoboost~vs \adaboostSS \hspace{-0.1cm} & \hspace{-0.1cm} \radoboost:~\textsection \ref{sbfdp} vs \textsection \ref{sfwdp}\\ \hline \hspace{-0.2cm} \includegraphics[width=0.31\columnwidth]{Plots/abalone} \hspace{-0.3cm} & \hspace{-0.1cm} \includegraphics[width=0.31\columnwidth]{Plots/Strong_Weak_Oracle/banknote_RadoBoostSupport-AdaBoost} & \hspace{-0.1cm} \includegraphics[width=0.31\columnwidth]{Plots/Median+SupportMinusStrong+Random/transfusion_Median+SupportMinusStrong+Random}\\ \hspace{-0.2cm} Abalone \hspace{-0.3cm} & \hspace{-0.3cm} Banknote \hspace{-0.3cm} & \hspace{-0.3cm} Transfusion\\ \hline \hspace{-0.2cm} \includegraphics[width=0.31\columnwidth]{Plots/ionosphere} \hspace{-0.3cm} & \hspace{-0.1cm} \includegraphics[width=0.31\columnwidth]{Plots/Strong_Weak_Oracle/eeg_RadoBoostSupport-AdaBoost} \hspace{-0.1cm} & \hspace{-0.1cm} \includegraphics[width=0.31\columnwidth]{Plots/Median+SupportMinusStrong+Random/magic_Median+SupportMinusStrong+Random}\\ \hspace{-0.2cm} Ionosphere \hspace{-0.3cm} & \hspace{-0.3cm} Eeg \hspace{-0.3cm} & \hspace{-0.3cm} Magic \\ \hline\hline \end{tabular} \caption{Left table: \radoboost~on feature-wise DP compliant rados (Subsection \ref{sfwdp}, showing standard deviations) vs \radoboost~on plain random rados baseline and 
\adaboostSS~baseline (trained with complete fold). Center: test error of \radoboost~\textit{minus} \adaboostSS's~(also showing \adaboostSS~error on right axis, dotted line), for rados with fixed support $s$ ($=m_*$, in green, red, blue) and plain random rados (dotted grey). Right: test error of \radoboost~using fixed support $s$ rados and a prudential learner, \textit{minus} \radoboost~using plain random rados and the ``strong'' learner of Section \ref{exp_boost_rado} (See Table \ref{t-s52_1} through Table \ref{t-s52_6}). \label{t-edpr}} \end{center} \end{table} We end with several important remarks, whose formal statements and proofs are left out due to space constraints. First, the tail truncation design exploited in \dpfreal~can be generalized fairly simply in two directions, to handle (a) real-valued features, and/or (b) several sensitive features instead of one. Second, we can design DP-compliant rado delivery beyond feature-wise privacy, \textit{e.g.} to protect ``rado-wide'' quantities like norms. \subsection{Boosting from DP-compliant examples via rados}\label{sbfdp} We now show how to craft rados from DP-compliant examples so as to approximately keep the convergence rates of \radoboost. More precisely, since edge vectors are sufficient to learn (eq. \ref{deflogloss}), we assume that edge vectors are DP-compliant (neighbor samples, $\mathcal{S}\approx \mathcal{S}'$, would differ on one edge vector). A gold standard to protect data in the DP framework is to convolve data with noise. One popular mechanism is the Gaussian mechanism \cite{drTA,hpTN}, which convolves data with independent Gaussian random variables ${\mathcal{N}}(\ve{0}, \varsigma^2 \mathrm{I})$, whose standard deviation $\varsigma$ depends on the DP requirement ($\upepsilon, \updelta$). Strong DP regimes are tricky to handle for learning algorithms.
For example, the approximation factor $\rho$ of the singular vectors of the noisy power method under DP noise roughly behaves as $\rho = \Omega(\varsigma / \Delta)$ \cite{hpTN} (Corollary 1.1), where $\Delta = O(d)$ is a difference between two singular values. When $\varsigma$ is small, this is a very good bound. When the DP requirement blows up, the bound remains relevant \textit{if} $d$ increases, which may be hard to achieve in practice --- it is easier in general to increase $m$ than $d$, since increasing $d$ requires computing new features for past examples. We consider ineq. (\ref{dpreq}) with neighbors $I$ and $I'$ being two sets of $m$ edge vectors differing by one edge vector, and $O$ being a noisified set of $m$ edge vectors generated through the Gaussian mechanism \cite{drTA} (Appendix A). We show the following non-trivial result: provided we design another particular $\Sigma_r$, the convergence rate of \radoboost, \textit{as measured over non-noisy rados}, essentially survives noise injection in the edge vectors through the Gaussian mechanism, even under strong noise regimes, as long as $m$ is large enough. The intuition is straightforward: we build rados summing a large number of edge vectors only (this is the design of $\Sigma_r$), so that the i.i.d. noise component gets sufficiently concentrated for the algorithm to be able to learn almost as fast as in the noise-free setting. We emphasize the non-trivial fact that the convergence rate is measured over the non-noisy rados, which of course \radoboost~does \textit{not} see. The result is of independent interest in the boosting framework, since it makes use of a particular weak learner ($\weak$), which we call \textit{prudential}, that picks features with $|r_t|$ (\ref{defMu}) upperbounded. We start by renormalizing coefficients $\alpha_t$ (eq.
(\ref{defalpha})) in \radoboost~by a parameter $\kappa \geq 1$ given as input, so that we now have $\alpha_{t} \leftarrow (1/(2 \kappa \rado_{*\iota(t)})) \log ((1 + r_t)/(1 - r_t))$ in Step 2.2. It is not hard to check that the convergence rate of \radoboost~now becomes, prior to applying the (\textbf{WLA}), \begin{eqnarray} \loglossrado({\mathcal{S}}, \ve{\theta}_T, \mathcal{U}) & \leq & \log(2) - \frac{1}{2\kappa m} \sum_t r_t^2\:\:.\label{Qbound2} \end{eqnarray} We say that $\weak$ is $\uplambda_p$-\textit{prudential} for $\uplambda_p > 0$ iff it selects at each iteration a feature such that $|r_t| \leq \uplambda_p$. Edge vectors have been DP-protected as $y_i (\ve{x}_i + \ve{x}_i^r)$, with $\ve{x}_i^r \sim {\mathcal{N}}(\ve{0}, \varsigma^2 \mathrm{I})$ (for $i\in [m]$). Let $m_{\ve{\sigma}} \defeq |\{ i : \sigma_{i} = y_i\}|$ denote the \textit{support} of a rado, and ($m_*>0$ fixed): \begin{eqnarray} \Sigma_r = \sbm & \defeq & \left\{\ve{\sigma} \in \Sigma_m : m_{\ve{\sigma}} = m_*\right\}\:\:.\label{defSrmm} \end{eqnarray} \begin{theorem}\label{thm_random_gau} $\forall {\mathcal{U}} \subseteq \Sigma_r, \forall \uptau > 0$, if $\sqrt{m_*} = \Omega \left(\varsigma \ln (1/\uptau)\right)$, then $\exists \uplambda_p>0$ such that \radoboost~having access to a $\uplambda_p$-prudential weak learner returns after $T$ iterations a classifier $\ve{\theta}_T$ which meets with probability $\geq 1 - \uptau$: \begin{eqnarray} \loglossrado({\mathcal{S}}, \ve{\theta}_T, \mathcal{U}) & \leq & \log(2) - \frac{1}{4\kappa m} \sum_t r_t^2\:\:. \label{llleqlast} \end{eqnarray} \end{theorem} The proof, in the Appendix (Subsection \ref{proof_thm_random_gau}), details parameters and dependencies hidden in the statement. The use of a prudential weak learner is rather intuitive in a noisy setting since $\alpha_t$ blows up when $|r_t|$ is close to 1.
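The concentration effect behind this design can be illustrated numerically: in a rado of support $m_*$, the summed i.i.d. Gaussian noise has norm $O(\varsigma\sqrt{m_*})$ while the signal part grows with $m_*$, so the relative perturbation decays like $1/\sqrt{m_*}$. A minimal sketch on synthetic edge vectors (the common mean direction of the edge vectors is an illustrative assumption, so that the signal grows linearly with the support):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, varsigma = 10, 2000, 2.0

# synthetic edge vectors y_i * x_i with a common mean direction (assumption)
E = rng.normal(loc=1.0, scale=0.5, size=(m, d))
noise = rng.normal(scale=varsigma, size=(m, d))   # Gaussian mechanism
E_noisy = E + noise

def rel_error(m_star):
    """Relative perturbation of a rado of support m_star."""
    idx = rng.choice(m, size=m_star, replace=False)
    clean = E[idx].sum(axis=0)
    noisy = E_noisy[idx].sum(axis=0)
    return np.linalg.norm(noisy - clean) / np.linalg.norm(clean)

small, large = rel_error(10), rel_error(1000)     # small vs large support
```

On such a draw, the large-support rado is perturbed far less, relatively, than the small-support one, consistent with the $1/\sqrt{m_*}$ decay.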
Theorem \ref{thm_random_gau} essentially yields that a sufficiently large support for rados is enough to keep, with high probability, the convergence rate of \radoboost~within the noise-free regime. Of course, the weak learner is prudential, which implies bounded $|r_t| < 1$, and furthermore the leveraging coefficients $\alpha_t$ are normalized, which implies smaller margins. Still, Theorem \ref{thm_random_gau} is a good theoretical argument to rely on rados when learning from DP-compliant edge vectors. \section{Experiments on differential privacy}\label{exp_dp_rado} Table \ref{t-edpr} presents a subset of the experiments carried out with \radoboost~and \adaboostSS~in the contexts of Subsections \ref{sfwdp} and \ref{sbfdp} (see Section \ref{app_exp_expes} for all additional experiments). Unless otherwise stated, experimental settings (cross validation, number of rados for learning, etc.) are the same as in Section \ref{exp_boost_rado}. In a first set of experiments, we have assessed the impact on learning of the feature-wise DP mechanism: on each tested domain, we have selected at random a binary feature, and then used Algorithm \dpfreal~to protect the feature for different values of the DP parameter $\upepsilon$, in a range that covers usual DP experiments \cite{hghknprDP} (Table 1). The main conclusion that can be drawn from the experiments is that learning from DP-compliant rados can compete with learning from random rados, and even with learning from examples (\adaboostSS), even for rather small $\upepsilon$. We then have assessed the impact on learning of examples that have been protected using the Gaussian mechanism \cite{drTA}, with or without rados, with or without a prudential weak learner for boosting, and with or without using a fixed support for rado computation. The Appendix provides extensive results for all domains but the largest ones (Twitter, SuSy, Higgs).
In the central column (and Tables \ref{t-s52_1} through \ref{t-s52_4} in the Appendix), computing the differences between \radoboost's error and \adaboostSS's reveals that, on domains where it is beaten by \adaboostSS~when there is no noise, \radoboost~almost always rapidly becomes competitive with \adaboostSS~as noise increases. Hence, \radoboost~is a good contender from the boosting family to learn from differentially private (or noisy) data. Second, using a prudential weak learner which picks the median feature (instead of the more efficient weak learner that picks the best, as in Section \ref{exp_boost_rado}) can have \radoboost~with fixed support rados compete with or beat \radoboost~with plain random rados, at least for small noise levels (see Transfusion and Magic in the right column of Table \ref{t-edpr}). Replacing the median-prudential weak learner by a strong learner can actually degrade \radoboost's results (see the Appendix, Tables \ref{t-s52_5} and \ref{t-s52_6}). These two observations advocate in favor of the theory developed in Subsection \ref{sbfdp}. Finally, using rados with fixed support instead of plain random rados (Section \ref{exp_boost_rado}) can significantly improve the performances of \radoboost~(see the Appendix, Tables \ref{t-s52_5} and \ref{t-s52_6}). \section{From rados to examples: hardness results}\label{sec_hardness} The problem we address here is how we can recover examples from rados, and when we \textit{cannot} recover examples from rados. This last setting is particularly useful from the privacy standpoint, as it may spare us costly obfuscation techniques that impede ML tasks \cite{bptgML}.
\subsection{Algebraic and geometric hardness} For any $m \in {\mathbb{N}}_*$, we define matrix $\matrice{G}_m \in \{0,1\}^{m\times 2^m}$ as: \begin{eqnarray} \matrice{G}_m & \defeq & \left[ \begin{array}{cc} \ve{0}^\top_{2^{m-1}} & \ve{1}^\top_{2^{m-1}}\\ \matrice{G}_{m-1} & \matrice{G}_{m-1} \end{array} \right] \end{eqnarray} if $m>1$, and $\matrice{G}_1 \defeq [0\:\: 1]$ otherwise ($\ve{z}_d$ denotes a vector in ${\mathbb{R}}^d$). Each column of $\matrice{G}_m$ is the binary indicator vector for the edge vectors considered in a rado. Hereafter, we let $\matrice{E} \in {\mathbb{R}}^{d\times m}$ denote the matrix of columnwise edge vectors from ${\mathcal{S}}$, $\matrice{$\Pi$} \in {\mathbb{R}}^{d\times n}$ the columnwise rado matrix, and $\matrice{U} \in \{0,1\}^{2^m \times n}$ the matrix in which each column gives the index of a rado computed in ${\mathcal{S}}^r$. By construction, we have: \begin{eqnarray} \matrice{$\Pi$} & = & \matrice{E} \matrice{G}_m \matrice{U} \:\:,\label{linkpim} \end{eqnarray} and so we have the following elementary results for the (non) reconstruction of $\matrice{E}$ (proof omitted). \begin{lemma}\label{lem_reco1} (a) when recoverable, edge-vectors satisfy: $\matrice{E} = \matrice{$\Pi$} \matrice{U}^\top \matrice{G}_m^\top (\matrice{G}_m \matrice{U} \matrice{U}^\top \matrice{G}_m^\top)^{-1}$; (b) when $\matrice{U}$, $\matrice{$\Pi$}$, $m$ are known but $n<m$, eq. (\ref{linkpim}) does not have a unique solution in general. \end{lemma} Lemma \ref{lem_reco1} states that even when $\matrice{U}$, $\matrice{$\Pi$}$ and $m$ are known, elementary constraints on rados can make the recovery of edge vectors hard --- notice that such constraints are met in our experiments with \radoboost~in Sections \ref{exp_boost_rado} and \ref{exp_dp_rado}. But this represents a lot of \textit{unnecessary} knowledge to learn from rados: \radoboost~just needs $\matrice{$\Pi$}$ to learn.
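Eq. (\ref{linkpim}) and the recovery formula of Lemma \ref{lem_reco1}(a) can be checked numerically on a toy instance; the sketch below builds $\matrice{G}_m$ recursively and recovers $\matrice{E}$ exactly when $\matrice{G}_m \matrice{U} \matrice{U}^\top \matrice{G}_m^\top$ is invertible (the rado indices are chosen for that purpose):

```python
import numpy as np

def G(m):
    """Recursive construction of the m x 2^m binary matrix G_m."""
    if m == 1:
        return np.array([[0.0, 1.0]])
    Gm1 = G(m - 1)
    half = 2 ** (m - 1)
    top = np.concatenate([np.zeros(half), np.ones(half)])
    return np.vstack([top, np.hstack([Gm1, Gm1])])

rng = np.random.default_rng(0)
d, m, n = 4, 3, 3
E = rng.normal(size=(d, m))              # columnwise edge vectors

# one-hot columns of U select columns of G_m (i.e. which edges each rado sums)
idx = [5, 3, 6]                          # chosen so that G_m U is invertible
U = np.zeros((2 ** m, n))
U[idx, range(n)] = 1.0

Pi = E @ G(m) @ U                        # eq. (linkpim)

# Lemma lem_reco1(a): recovery when G_m U U^T G_m^T is invertible
A = G(m) @ U
E_rec = Pi @ U.T @ G(m).T @ np.linalg.inv(A @ A.T)
```

With $n < m$, or with indices making $\matrice{G}_m\matrice{U}$ rank-deficient, the inverse does not exist and recovery fails, matching part (b) of the Lemma.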
We now explore the guarantees that providing this sole information brings in terms of (not) reconstructing $\matrice{E}$. $\forall \matrice{M} \in {\mathbb{R}}^{a \times b}$, we let ${\mathcal{C}}(\matrice{M})$ denote the set of its column vectors, and for any ${\mathcal{C}} \subseteq {\mathbb{R}}^d$, we let ${\mathcal{C}} \oplus \epsilon \defeq \cup_{\ve{z} \in {\mathcal{C}}} {\mathcal{B}}(\ve{z} , \epsilon)$. We define the Hausdorff distance, $D_{\mathrm{H}}({\matrice{E}}, {\matrice{E}}')$, between ${\matrice{E}}$ and ${\matrice{E}}'$: \begin{eqnarray*} \lefteqn{D_{\mathrm{H}}({\matrice{E}}, {\matrice{E}}')}\nonumber\\ & \defeq & \inf\{\epsilon: {\mathcal{C}}({\matrice{E}}) \subseteq {\mathcal{C}}({\matrice{E}}') \oplus \epsilon \wedge {\mathcal{C}}({\matrice{E}}') \subseteq {\mathcal{C}}({\matrice{E}}) \oplus \epsilon\}\:\:. \end{eqnarray*} The following Lemma shows that if the only information known is $\matrice{$\Pi$}$, then there exist samples that bring the same set of rados ${\mathcal{C}}(\matrice{$\Pi$})$ as the unknown $\matrice{E}$ \textit{but} that are at a distance proportional to the ``width'' of the domain at hand. \begin{lemma}\label{lem_algebraic} For any $\matrice{$\Pi$} \in {\mathbb{R}}^{d\times n}$, suppose eq. (\ref{linkpim}) holds, for some unknowns $m > 0$, $\matrice{E} \in {\mathbb{R}}^{d\times m}$, $\matrice{U}\in \{0,1\}^{2^m \times n}$. Suppose ${\mathcal{C}}(\matrice{E}) \subset {\mathcal{B}}(\ve{0} , R)$ for some $R>0$.
Then there exists $\matrice{E}' \in {\mathbb{R}}^{d\times {(m+1)}}$, $\matrice{U}' \in \{0,1\}^{2^{m+1} \times n}$ such that \begin{eqnarray} {\mathcal{C}}(\matrice{E}') \subset {\mathcal{B}}(\ve{0} , R) & \mbox{ and } & \matrice{$\Pi$} = \matrice{E}' \matrice{G}_{m+1} \matrice{U}' \:\:, \end{eqnarray} but \begin{eqnarray} D_{\mathrm{H}}(\matrice{E}, \matrice{E}') & = & \Omega\left( \frac{R \log d}{\sqrt{d} \log m} \right)\:\: \end{eqnarray} if $m\geq 2^d$, and $D_{\mathrm{H}}(\matrice{E}, \matrice{E}') = \Omega(R/\sqrt{d})$ otherwise. \end{lemma} (Proof in the Appendix, Subsection \ref{proof_lem_algebraic}) Hence, without any more knowledge, leaks, approximations or assumptions on the domain at hand, the recovery of $\matrice{E}$ pays in the worst case a price proportional to the radius of the smallest enclosing ${\mathcal{B}}(\ve{0},.)$ ball for the unknown set of examples. We emphasize that this inapproximability result does not rely on the computational power at hand. \subsection{Computational hardness} In this Subsection, we investigate two important problems in the recovery of examples. The first problem addresses whether we can \textit{approximately} recover \textit{sparse} examples from a given set of rados, that is, roughly, solve (\ref{linkpim}) with a sparsity constraint on examples. The first Lemma we give is related to the hardness of solving underdetermined linear systems for sparse solutions \cite{dtSN}. The sparsity constraint can be embedded in the compressed sensing framework \cite{doCS} to yield finer hardness \textit{and} approximability results, which is beyond the scope of our paper. 
We define problem ``Sparse-Approximation'' as: \begin{itemize} \item [] {\hspace{-0.7cm}(\textbf{Instance})} : set of rados ${\mathcal{S}}^r = \{\ve{\rado}_{1}, \ve{\rado}_{2}, ..., \ve{\rado}_{n}\}$, $m \in {\mathbb{N}}_*$, $r, \ell \in {\mathbb{R}}_+$, $\|.\|_p$, $L_p$-norm for $p \in {\mathbb{R}}_+$; \item [] {\hspace{-0.7cm}(\textbf{Question})} : Does there exist set ${\mathcal{S}} \defeq \{(\ve{x}_i,y_i), i \in [m]\}$ and set ${\mathcal{U}} \defeq \{\ve{\sigma}_1, \ve{\sigma}_2, ..., \ve{\sigma}_n \} \subseteq \{-1,1\}^m$ such that: \begin{eqnarray*} \|\ve{x}_i\|_p & \leq & \ell\:\:, \forall i\in [m]\:\:, \:\:(\mbox{Sparse examples}) \\ \|\ve{\rado}_{j} - \ve{\rado}_{\ve{\sigma}_j}\|_p & \leq & r\:\:, \forall j \in [n]\:\:.\:\:(\mbox{Rado approximation}) \end{eqnarray*} \end{itemize} \begin{lemma}\label{lem_comp1} Sparse-Approximation is NP-Hard. \end{lemma} (Proof in the Appendix, Subsection \ref{proof_lem_comp1}) In the context of rados, the second problem we address has far-reaching privacy applications. Suppose entity \textcircled{{\scriptsize A}} has a huge database of people (\textit{e.g.} clients), and obtains a set of rados emitted by another entity \textcircled{{\scriptsize B}}. An important question that \textcircled{{\scriptsize A}} may ask is whether the rados observed \textit{can} be \textit{approximately} constructed from its database, for example to figure out which of its clients are also its competitors'. We define this as problem ``Probe-Sample-Subsumption'': \begin{itemize} \item [] {\hspace{-0.7cm}(\textbf{Instance})} : set of examples ${\mathcal{S}}$, set of rados ${\mathcal{S}}^r = \{\ve{\rado}_{1}, \ve{\rado}_{2}, ..., \ve{\rado}_{n}\}$, $m \in {\mathbb{N}}_*$, $p, r \in {\mathbb{R}}_+$.
\item [] {\hspace{-0.7cm}(\textbf{Question})} : Does there exist ${\mathcal{S}}' \defeq \{(\ve{x}_i,y_i), i \in [m]\} \subseteq {\mathcal{S}}$ and set ${\mathcal{U}} \defeq \{\ve{\sigma}_1, \ve{\sigma}_2, ..., \ve{\sigma}_n\} \subseteq \{-1,1\}^m$ such that: \begin{eqnarray*} \|\ve{\rado}_{j} - \ve{\rado}_{\ve{\sigma}_j}\|_p & \leq & r\:\:, \forall j \in [n]\:\:.\:\:(\mbox{Rado approximation}) \end{eqnarray*} \end{itemize} \begin{lemma}\label{lem_comp2} Probe-Sample-Subsumption is NP-Hard. \end{lemma} (Proof in the Appendix, Subsection \ref{proof_lem_comp2}) This worst-case result calls for interesting domain-specific qualifications, such as in genetics where the privacy of raw data, \textit{i.e.} individual genomes, can be compromised by genome-wide statistics \cite{hsrdtmpsncRI,nslTB}. \section{Conclusion} We have introduced novel quantities that are sufficient for efficient learning, Rademacher observations. The fact that a subset of these can replace traditional examples for efficient learning opens interesting problems on how to craft these subsets to cope with additional constraints. We have illustrated these constraints in the field of efficient learning from privacy-compliant data, from various standpoints that include differential privacy as well as algebraic, geometric and computational considerations. In that last case, results rely on NP-Hardness, and thus go beyond the ``hardness'' of factoring integers on which some popular cryptographic techniques rely \cite{bptgML}. Finally, rados are cryptography-compliant: homomorphic encryption schemes can be used to compute rados in the encrypted domain from encrypted edge vectors or examples --- rado computation can thus be easily distributed in secure multiparty computation applications. \section{Acknowledgments} The authors are indebted to Tiberio Ca\'etano for early discussions that brought the idea of Rademacher observations and their use in privacy related applications.
Thanks are also due to Stephen Hardy and Hugh Durrant-Whyte for many stimulating discussions and feedback on the subject. NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Center of Excellence Program. \bibliographystyle{plain}
\section*{Notation} \begin{itemize} \item {$\mathbb{R}$ denotes the field of real numbers, $\mathbb{R}_{\geq 0}=\{x\in\mathbb{R}| \ x\geq 0\}$, and} $\mathbb{R}^n$ stands for the $n$-dimensional linear real vector space; \item $\mathbb{N}$ denotes the set of natural numbers; \item bold symbols $\boldsymbol{x} =(x_{1},\dots,x_{n})$ will denote elements of $\mathbb{R}^n$; \item $(\boldsymbol{x},\boldsymbol{y})=\sum_{k} x_{k} y_{k}$ is the inner product of $\boldsymbol{x}$ and $\boldsymbol{y}$, and $\|\boldsymbol{x}\|=\sqrt{(\boldsymbol{x},\boldsymbol{x})}$ is the standard Euclidean norm in $\mathbb{R}^n$; \item $\mathbb{B}_n$ denotes the unit ball in $\mathbb{R}^n$ centered at the origin: \[\mathbb{B}_n=\{\boldsymbol{x}\in\mathbb{R}^n| \ {\|\boldsymbol{x}\|\leq 1}\};\] \item $\mathbb{B}_n(r,\boldsymbol{y})$ stands for the ball in $\mathbb{R}^n$ of radius ${r> 0}$ centered at $\boldsymbol{y}$: \[\mathbb{B}_n(r,\boldsymbol{y})=\{\boldsymbol{x}\in\mathbb{R}^n| \ {\|\boldsymbol{x}-\boldsymbol{y}\|\leq r}\};\] \item $V_n$ is the $n$-dimensional Lebesgue measure, and $V_n(\mathbb{B}_n)$ is the volume of the unit $n$-ball; \end{itemize} \section{Introduction} Recent years have seen significant progress in the application of Artificial Intelligence (AI) and Machine Learning tools to a host of practically relevant tasks. Most importantly, we are witnessing major successes in the application of advanced large-scale models featuring millions of trainable parameters \cite{sandler2018mobilenetv2} to problems for which the volumes of available prior knowledge for training do not conform to the requirements of classical Vapnik-Chervonenkis theory \cite{vapnik1999overview} or other similar combinatorial bounds. A well-known example of the task in which this striking phenomenon can be observed is the MNIST digits dataset which, being reasonably small in size, can be learned remarkably well by modern large-scale deep neural networks.
This property is fascinating in its own right, especially in view of \cite{zhang2016understanding}, \cite{zhang2021understanding} reporting evidence that large-scale deep neural networks with identical architecture and training routines can both successfully generalise beyond training data and at the same time overfit or memorise random noise. However, what is particularly striking is that sometimes an appropriately trained model is capable of exhibiting an extreme behaviour: learning from merely a few presentations. To date, many different successful few-shot learning schemes have been reported in the literature. Matching \cite{vinyals2016matching} and prototypical \cite{snell2017prototypical} networks are examples of such learning machines. However, comprehensive theoretical justification of these schemes is yet to be seen. Recent work \cite{tyukin2021demystification}, \cite{gorban2021high} suggested a new framework offering a pathway for understanding of few-shot learning. Instead of focusing on classical ideas rooted in empirical risk minimisation coupled with distribution-agnostic bounds, it explores the interplay between the geometry of feature spaces and concentration of measure phenomena \cite{ledoux2001concentration}. This enables an escape from the apparent paradox of generalisation discovered in \cite{zhang2016understanding}, \cite{zhang2021understanding}. Instead of posing the question of generalisation for all possible data distributions, one can ask a related but different question: what properties of data distributions could be relevant or useful for few-shot learning? This refocusing may well be necessary in view of \cite{bartlett2020benign} showing that the spectrum of the data covariance matrix may hold the key to understanding benign overfitting.
In this work we adopt the theoretical framework proposed in \cite{tyukin2021demystification}, \cite{gorban2021high} and generalise it beyond the original setting whereby the problem of few-shot learning is analysed in models' native feature spaces. Here we explore how the problem of few-shot learning changes if one allows a nonlinear transformation of these features. Our motivation to study this question is two-fold. First, many existing few-shot learning tools \cite{vinyals2016matching}, \cite{snell2017prototypical} already assume some sort of kernel-based transformation. Second, using kernels may enable mappings from original finite- or low-dimensional feature spaces into infinite- or essentially high-dimensional spaces. The potential advantages of these transformations are illustrated in Fig. \ref{fig:kernel_separability_orthogonality}. \begin{figure} \centering \includegraphics[width=0.475\textwidth]{orthogonality_vs_dimensions.png} \includegraphics[width=0.475\textwidth]{separability_by_dimension.png} \caption{Empirical estimates of how easy it is to separate points using various nonlinear kernels. Top: separating points pairwise using kernel orthogonality. Bottom: separating a single point from a set of 20,000 other points using a linear separating surface in the kernel feature space. In both cases, the points are sampled from a uniform distribution in $[-1, 1]^n$. Here, $\phi_n(\boldsymbol{x})$ denotes the image of the point $\boldsymbol{x} \in \mathbb{R}^n$ under the kernel's associated feature mapping, and $\mu = |Y|^{-1} \sum_{\boldsymbol{y} \in Y} \phi_n(\boldsymbol{y})$.} \label{fig:kernel_separability_orthogonality} \end{figure} As these figures suggest, mapping vectors from their original spaces into their corresponding feature spaces induced by various kernels has a significant impact on data geometry in the mapped spaces. In particular, it affects the probability of the sample's quasi-orthogonality and linear separability.
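A miniature version of the pairwise-orthogonality estimate in Fig. \ref{fig:kernel_separability_orthogonality} (top) can be reproduced in a few lines: for the Gaussian kernel, $\kappa_n(\boldsymbol{x},\boldsymbol{x})=1$, so the feature-space cosine of two mapped points equals $\kappa_n(\boldsymbol{x},\boldsymbol{y})$, and for uniform samples in $[-1,1]^n$ it concentrates near zero as $n$ grows. A sketch (sample size and tolerance are illustrative choices):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """kappa_n(x, y) = exp(-||x - y||^2 / (2 sigma^2)), all pairs."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def frac_quasi_orthogonal(n, n_points=50, tol=0.05, seed=0):
    """Fraction of pairs whose feature-space cosine is below tol.
    For the Gaussian kernel k(x, x) = 1, so the cosine equals k(x, y)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_points, n))
    K = gaussian_kernel(X, X)
    iu = np.triu_indices(n_points, k=1)             # distinct pairs only
    return float(np.mean(K[iu] < tol))
```

Already at moderate $n$, almost all pairs of mapped points are quasi-orthogonal, whereas in low dimension they are not.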
As we show here, the latter properties may offer new perspectives and capabilities affecting probabilities of success of such schemes. These results are stated formally in Theorem \ref{thm:few_shot} which is the main theoretical contribution of our work. The paper is organised as follows. In Section \ref{sec:preliminaries} we introduce some relevant notation and formulate the problem of few-shot learning, in which nonlinear feature transformations mapping input data into new feature spaces become important parameters of the problem. Section \ref{sec:main_results} presents our main results including appropriate assumptions on the data distributions enabling the few-shot learning rules analysed in this work. These few-shot learning rules are very similar to those proposed and empirically studied in \cite{snell2017prototypical}. In this respect, Section \ref{sec:main_results} presents theoretical underpinnings for such rules. Section \ref{sec:conclusion} concludes the paper. \section{Preliminaries and problem formulation}\label{sec:preliminaries} In what follows we consider the problem of few-shot learning in the framework of a standard classification task. In this framework, we assume the existence of two sets of labels $\mathcal{L}$ and $\mathcal{L}_{new}$ \[ \mathcal{L}\cap\mathcal{L}_{new} = \emptyset, \] and two finite data sets, \[\mathcal{X}=\{(\boldsymbol{x},\ell) \ | \ \boldsymbol{x}\in\mathbb{R}^n, \ \ell\in \mathcal{L}\}, \ |\mathcal{X}|=N, \] and \[ \mathcal{Y}=\{(\boldsymbol{x},\ell) \ | \ \boldsymbol{x}\in\mathbb{R}^n, \ \ell \in \mathcal{L}_{new}\}, \ |\mathcal{Y}|=k \] in which the pairs $(\boldsymbol{x},\ell)\in\mathcal{X}$ are i.i.d. samples from some distribution $P_{\mathcal{X}}$, and the pairs $(\boldsymbol{x},\ell) \in \mathcal{Y}$ are i.i.d. samples from some other distribution $P_{\mathcal{Y}}$. 
Elements $\ell\in\mathcal{L}\cup \mathcal{L}_{new}$ in the definitions of $\mathcal{X}$ and $\mathcal{Y}$ are the labels associated with the data vectors $\boldsymbol{x}$. In addition to the distributions $P_{\mathcal{X}}$ and $P_\mathcal{Y}$ it is convenient to consider the marginal distributions $P_{X}$ and $P_{Y}$: \[ P_{X}(\boldsymbol{x})=\sum_{\ell\in\mathcal{L}} P_{\mathcal{X}}(\boldsymbol{x},\ell), \] \[ P_{Y}(\boldsymbol{x})=\sum_{\ell\in\mathcal{L}_{new}} P_{\mathcal{Y}}(\boldsymbol{x},\ell). \] We assume that there is a function $F$ \begin{equation}\label{eq:classifier_general} F: \ \mathbb{R}^n \rightarrow \mathcal{L} \end{equation} assigning an element from $\mathcal{L}$ to a vector from $\mathbb{R}^n$. The function $F$ models the expertise of the system, in relation to its capability to predict labels $\ell$ in the pairs $(\boldsymbol{x},\ell)$ drawn from $\mathcal{X}$ on the basis of the information that is contained in $\boldsymbol{x}$. In this respect, the set $\mathcal{X}$ represents {\it existing knowledge} about the environment. This set may be arbitrarily large or even infinite, but the learner has no access to the elements from the set $\mathcal{X}$. The set $\mathcal{Y}$ represents {\it new knowledge} which is available to the learner. This new knowledge, however, is assumed to be scarce in the sense that $k\ll N$, $k \ll n$. In addition to the data vectors $\boldsymbol{x}\in\mathbb{R}^n$ we consider a parameterised family of feature maps $\phi_n$: \begin{equation}\label{eq:phi_map} \phi_n \ : \ \mathbb{R}^n \rightarrow \mathbb{H} \end{equation} mapping elements of $\mathbb{R}^n$ into a Hilbert space $\mathbb{H}$, which may be either finite- or infinite-dimensional. The map $\phi_n$ can represent transformations of the input data into the corresponding latent spaces in deep neural networks; it can also model other relevant data transformations emerging e.g. through the application of kernel tricks etc.
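For instance, the kernels associated with such maps can be evaluated without constructing $\phi_n$ explicitly (the kernel trick); a minimal sketch of three standard choices, the polynomial, Gaussian and Laplacian kernels recalled in the Remark below:

```python
import numpy as np

def poly_kernel(x, y, m=2):
    """kappa_n(x, y) = ((x, y) + 1)^m."""
    return (np.dot(x, y) + 1.0) ** m

def gauss_kernel(x, y, sigma=1.0):
    """kappa_n(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.dot(x - y, x - y) / (2.0 * sigma ** 2))

def laplace_kernel(x, y, alpha=1.0):
    """kappa_n(x, y) = exp(-alpha ||x - y||)."""
    return np.exp(-alpha * np.linalg.norm(x - y))
```

Each function returns $(\phi_n(\boldsymbol{x}),\phi_n(\boldsymbol{y}))$ for the corresponding implicit feature map, even when $\mathbb{H}$ is infinite-dimensional.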
For every $\boldsymbol{x}\in\mathbb{R}^n$, the map $\phi_n$, in turn, induces a kernel map $\kappa_n(\boldsymbol{x},\cdot)$: \[ \kappa_n(\boldsymbol{x},\cdot): \ \mathbb{R}^n \rightarrow \mathbb{R}, \ \kappa_n(\boldsymbol{x},\cdot)=(\phi_n(\boldsymbol{x}),\phi_n(\cdot)). \] \begin{rem} Examples of functions $\phi_n$ include the identity map $\phi_n(\boldsymbol{x})=\boldsymbol{x}$ and feature maps of polynomial, $\kappa_n(\boldsymbol{x},\boldsymbol{y})=((\boldsymbol{x},\boldsymbol{y})+1)^m$, $m=1,2,\dots$, Gaussian $\kappa_n(\boldsymbol{x},\boldsymbol{y})=\exp(-\frac{\|\boldsymbol{x}-\boldsymbol{y}\|^2}{2\sigma^2})$, $\sigma\in\mathbb{R}_{>0}$ and Laplacian $\kappa_n(\boldsymbol{x},\boldsymbol{y})=\exp(-\alpha \|\boldsymbol{x}-\boldsymbol{y}\|)$, $\alpha\in\mathbb{R}_{>0}$ kernels. \end{rem} The task is to learn a rule enabling the learner to discriminate between samples drawn from $P_{\mathcal{X}}$ and $P_{\mathcal{Y}}$ by accessing only the values of $\boldsymbol{x}_i$ and using available {\it training data} $\mathcal{Y}$, possibly some additional generic knowledge about $\mathcal{X}$, and the map $\phi_n$. More formally, the task is stated as follows (cf \cite{tyukin2021demystification}): \begin{problem}[Few-shot learning]\label{prob:few_shot} Consider a classifier $F$ defined by (\ref{eq:classifier_general}), trained on a sample $\mathcal{X}$ drawn from some distribution $P_{\mathcal{X}}$. Let $\mathcal{Y}$ be a new sample that is drawn from another distribution $P_{\mathcal{Y}}$ and whose cardinality $|\mathcal{Y}|\ll n$. Let $p_e,p_n\in(0,1]$ be given positive numbers determining the quality of learning. 
Find an algorithm $\mathcal{A}(\mathcal{Y})$ producing a new classification map \[ F_{new}: \mathbb{R}^n\rightarrow \mathcal{L}\cup \mathcal{L}_{new} \] such that \begin{equation}\label{eq:learining_from_few_1} P\big(F_{new}(\boldsymbol{x}) \in \mathcal{L}_{new} \big) \geq p_n \end{equation} for $\boldsymbol{x}$ drawn from $P_{Y}$, and \begin{equation}\label{eq:learining_from_few_2} P\big(F_{new}(\boldsymbol{x}) = F(\boldsymbol{x})\big)\geq p_e \end{equation} for $\boldsymbol{x}$ drawn from the distribution $P_X$. \end{problem} \begin{rem} Note that the set $\mathcal{L}_{new}$ in Problem \ref{prob:few_shot} is not necessarily a singleton. It may, in principle, contain more than one element. This allows questions to be posed regarding learning to discriminate between more than a single class. The other point articulated in the statement of Problem \ref{prob:few_shot} is the requirement that $|\mathcal{Y}|\ll n$, which defines the context of what ``few'' refers to in the definition of few-shot learning problems. \end{rem} In the next section we describe sufficient conditions for the existence of algorithms $\mathcal{A}$ providing a solution to the class of few-shot learning problems formulated in Problem \ref{prob:few_shot}. \section{Main results}\label{sec:main_results} We begin with the introduction of several useful characterisations of the maps $\phi_n$ in (\ref{eq:phi_map}) which will enable us to formulate appropriate requirements on the distributions $P_X$ and $P_Y$. Consider \[ V_{\phi_n}(\boldsymbol{c},r,n)=\int_{\|\phi_n(\boldsymbol{x})-\boldsymbol{c}\|\leq r} 1 d\boldsymbol{x}. \] The symbol $n$ on the left-hand side of the above notation indicates that the vectors $\boldsymbol{x}$ are taken from $\mathbb{R}^n$.
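For the identity map $\phi_n(\boldsymbol{x})=\boldsymbol{x}$, the quantity $V_{\phi_n}(\boldsymbol{c},r,n)$ is simply the volume of a Euclidean ball of radius $r$ in $\mathbb{R}^n$. The following Python sketch (illustrative only; the helper names are ours) evaluates this volume in closed form, cross-checks it by Monte Carlo sampling, and verifies that the ratio of volumes of concentric balls scales as $(r_1/r_2)^n$, the power-law behaviour underpinning the assumptions introduced next.

```python
import math
import random

def ball_volume(r, n):
    # Closed-form volume of the Euclidean n-ball of radius r:
    # V_n(r) = pi^(n/2) / Gamma(n/2 + 1) * r^n.
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

def mc_ball_volume(r, n, samples=100000, seed=0):
    # Monte Carlo estimate of the same quantity: sample the enclosing
    # cube [-r, r]^n uniformly and count the points inside the ball.
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        if sum(rng.uniform(-r, r) ** 2 for _ in range(n)) <= r * r:
            inside += 1
    return (2 * r) ** n * inside / samples

# For phi_n = identity, the volume ratio of concentric balls obeys
# V(c, r1, n) / V(c, r2, n) = (r1 / r2)^n, i.e. the power-law
# volume-ratio bound used in the paper holds with C = 1 and alpha = n.
assert abs(ball_volume(1.0, 5) / ball_volume(2.0, 5) - 0.5 ** 5) < 1e-12

# The Monte Carlo estimate agrees with the closed form (here n = 3).
print(ball_volume(1.0, 3), mc_ball_volume(1.0, 3))
```

The exponent $n$ in this ratio is exactly the quantity that makes the volume of an inner ball negligible in high dimension, which is what the concentration arguments below exploit.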
\begin{assume}\label{assume:rates} There exists a function $\alpha_{\phi_n}: \mathbb{H}\times\mathbb{H}\times\mathbb{N}\rightarrow \mathbb{R}_{\geq 0}$ such that for any $\boldsymbol{c}_1,\boldsymbol{c}_2\in\mathbb{H}$, $r_1\leq r_2\in\mathbb{R}_{>0}$ the following holds true \begin{equation}\label{eq:volume_rates} \frac{V_{\phi_n}(\boldsymbol{c}_1,r_1,n)}{V_{\phi_n}(\boldsymbol{c}_2,r_2,n)} \leq C \left(\frac{r_1}{r_2}\right)^{\alpha_{\phi_n}(\boldsymbol{c}_1,\boldsymbol{c}_2,n)} \end{equation} whenever $V_{\phi_n}(\boldsymbol{c}_2,r_2,n)\neq 0$, where the constant $C>0$ may depend on $\boldsymbol{c}_1$, $\boldsymbol{c}_2$. \end{assume} \begin{rem} Note that the class of functions satisfying Assumption \ref{assume:rates} is not empty. It holds, for example, for $\phi_n(\boldsymbol{x})= \boldsymbol{x}$ with $C=1$ and $\alpha_{\phi_n}(\boldsymbol{c}_1,\boldsymbol{c}_2,n)=n$. In principle, for some combinations of $\boldsymbol{c}_1,\boldsymbol{c}_2$ the constant $C$ may be infinite, although $C$ is guaranteed to be finite for $\boldsymbol{c}_1 = \boldsymbol{c}_2$ by the monotonic nature of $V_{\phi_n}$ whenever $V_{\phi_n}$ is finite. In what follows we will require that this constant exists and is finite for $\boldsymbol{c}_1,\boldsymbol{c}_2$ in a vicinity of some characteristic points in $\mathbb{H}$ determining concentration properties of data distributions (namely, the points $\boldsymbol{c}_X$ and $\boldsymbol{c}_Y$ in Assumptions \ref{assume:x}, \ref{assume:y} below). We formalise this by supposing that \[ C^{\ast}(\boldsymbol{c},r)=\max_{\boldsymbol{\xi}: \ \|\boldsymbol{c}-\boldsymbol{\xi}\|\leq r }C(\boldsymbol{\xi},\boldsymbol{c}), \] is finite for certain combinations of $\boldsymbol{c}$ and $r$.
If the dependency of $C$ on $\boldsymbol{c}_1,\boldsymbol{c}_2$ is clear from the context then we will omit such explicit specifications in relevant expressions. \end{rem} For the functions $\alpha_{\phi_n}$ satisfying (\ref{eq:volume_rates}) we introduce \begin{equation}\label{eq:beta} \beta_{\phi_n}(\boldsymbol{c},r,n)=\min_{\boldsymbol{\xi}: \ \|\boldsymbol{c}-\boldsymbol{\xi}\|\leq r} \alpha_{\phi_n}(\boldsymbol{c},\boldsymbol{\xi},n). \end{equation} We are now ready to proceed with specifying the requirements on $P_X$ and $P_Y$. \begin{assume}\label{assume:x} For the distribution $P_X$, there is a corresponding probability density function $p_X$, positive numbers $A_X>0$, $r_X>0$, and $\boldsymbol{c}_X\in\mathbb{H}$, such that $p_X$ is supported on the set \[ \mathcal{S}_X=\{\boldsymbol{x}\in\mathbb{R}^n \ | \ \|\phi_n(\boldsymbol{x})-\boldsymbol{c}_X\|\leq r_X\}, \ V_n(\mathcal{S}_X)>0, \] and satisfies the following growth bound: \[ p_X(\boldsymbol{x}) \leq \frac{A_X}{V_{\phi_n}(\boldsymbol{c}_X,r_X,n)}. \] \end{assume} \begin{assume}\label{assume:y} For the distribution $P_Y$, there is a corresponding probability density function $p_Y$, positive numbers $A_Y>0$, $r_Y>0$, and $\boldsymbol{c}_Y\in\mathbb{H}$, such that $p_Y$ is supported on the set \[ \mathcal{S}_Y=\{\boldsymbol{x}\in\mathbb{R}^n \ | \ \|\phi_n(\boldsymbol{x})-\boldsymbol{c}_Y\|\leq r_Y\}, \ V_n(\mathcal{S}_Y)>0, \] and satisfies the following growth bound: \[ p_Y(\boldsymbol{x}) \leq \frac{A_Y}{V_{\phi_n}(\boldsymbol{c}_Y,r_Y,n)} . \] \end{assume} Observe that the functions $V_{\phi_n}$, $\beta_{\phi_n}$ in Assumptions \ref{assume:x}, \ref{assume:y} are determined exclusively by the feature maps $\phi_n$, whereas their arguments $\boldsymbol{c}_X$, $r_X$ and $\boldsymbol{c}_Y$, $r_Y$ capture relevant properties of $P_X$, $P_Y$. The rest of this Section is organised as follows. 
Our main result, Theorem \ref{thm:few_shot}, justifying solutions of the few-shot learning problem (Problem \ref{prob:few_shot}) with the help of auxiliary functions \[ \frac{1}{k}\sum_{i=1}^k \kappa_n(\boldsymbol{x}_i,\boldsymbol{x}) - \theta, \ \theta>0, \] where $\boldsymbol{x}_i$, $i=1,\dots,k$ are a part of the training sample, is stated and proved in Section \ref{sec:few_shot}. The proof of this theorem is based on two other results. The first is a generalised lemma on the typicality of quasi-orthogonality in high dimension (cf. \cite{Kurkova}, \cite{kainen2020quasiorthogonal}, \cite{GorTyu:2016}), which we present in Section \ref{sec:sec:quasi_orthogonality}. The second, which we call the law of high dimension, is presented in Section \ref{sec:sec:law_high_dimension}. Readers who wish first to explore the details of the conditions and guarantees presented in our main theorem (Theorem \ref{thm:few_shot}) can skip the next two sections and proceed directly to Section \ref{sec:few_shot}. \subsection{Quasi-orthogonality in Hilbert spaces}\label{sec:sec:quasi_orthogonality} \begin{lem}[Quasi-orthogonality]\label{lem:quasi_orthogonality} Let $\mathcal{Z}=\{\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_k\}$ be a set of $k$ i.i.d. random vectors drawn from a distribution satisfying Assumption \ref{assume:y}, let $\delta,\varepsilon\in(0,1)$, and let $\phi_n$ satisfy Assumption \ref{assume:rates}. Consider the event $A_1$: \begin{equation}\label{eq:event_1} A_1: \ | (\phi_n(\boldsymbol{x}_i)-\boldsymbol{c}_Y,\phi_n(\boldsymbol{x}_j)-\boldsymbol{c}_Y)| \leq {\delta r_Y}, \ \forall \ i\neq j \end{equation} and the event $A_2$: \begin{equation}\label{eq:event_2} A_2: \ \|\phi_n(\boldsymbol{x}_i)-\boldsymbol{c}_Y\|\geq (1-\varepsilon)r_Y \ \forall \ i.
\end{equation} Then \begin{equation}\label{eq:lem:orthogonality:statement:1} P( A_1 ) \geq 1 - k (k-1) C A_Y \left[ (1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}, \end{equation} and \begin{equation}\label{eq:lem:orthogonality:statement:2} \begin{split} &P\left( A_1 \wedge A_2 \right) \geq \\ & 1 - C A_Y k \left([1-\varepsilon]^{\beta_{\phi_n}(\boldsymbol{c}_Y,0,n)} \quad + \right.\\ & \quad \quad \ \left. (k-1)\left[(1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}\right). \end{split} \end{equation} \end{lem} {\it Proof of Lemma \ref{lem:quasi_orthogonality}}. Denote $\tilde{\phi}_i=\phi_n(\boldsymbol{x}_i)-\boldsymbol{c}_Y$ and consider the event \[ E_1(\tilde{\phi}_1,\tilde{\phi}_2): \ |(\tilde{\phi}_1/\|\tilde{\phi}_1\|,\tilde{\phi}_2)| > \delta. \] The probability that the event $E_1(\tilde{\phi}_1,\tilde{\phi}_2)$ occurs is equal to \[ \int P(E_1(\tilde{\phi}_1,\tilde{\phi}_2) | \tilde{\phi}_1) p(\tilde{\phi}_1) d\tilde{\phi}_1. \] The conditional probability $P(E_1(\tilde{\phi}_1,\tilde{\phi}_2) | \tilde{\phi}_1)$ is equal to the probability that the vector $\phi_n(\boldsymbol{x}_2)$ ends up in the union of the following sets: \[ {\mathcal{C}_+}(\tilde{\phi}_1,\boldsymbol{c}_Y)=\left\{ \boldsymbol{\xi} \in \mathbb{H} \left| \ \left(\frac{\tilde{\phi}_1}{\|\tilde{\phi}_1\|}, \boldsymbol{\xi} - \boldsymbol{c}_Y \right) > \delta \right. \right\} \] \[ {\mathcal{C}_-}(\tilde{\phi}_1,\boldsymbol{c}_Y)=\left\{ \boldsymbol{\xi} \in \mathbb{H} \left| \ \left(\frac{\tilde{\phi}_1}{\|\tilde{\phi}_1\|}, \boldsymbol{\xi} - \boldsymbol{c}_Y \right) < - \delta \right. \right\}.
\] Given that $\boldsymbol{x}_1,\dots,\boldsymbol{x}_k$ are drawn independently from the same distribution, this probability can be bounded from above as \[ \begin{split} &P(E_1(\tilde{\phi}_1,\tilde{\phi}_2)|\tilde{\phi}_1)=\int_{{\mathcal{C}_+}(\tilde{\phi}_1,\boldsymbol{c}_Y)} p_Y(\boldsymbol{x})d\boldsymbol{x}\\ &\quad \quad \quad +\int_{{\mathcal{C}_-}(\tilde{\phi}_1,\boldsymbol{c}_Y)} p_Y(\boldsymbol{x})d\boldsymbol{x}\\ &\leq \frac{A_Y}{V_{\phi_n}(\boldsymbol{c}_Y,r_Y,n)} \left(\int_{{\mathcal{C}_+}(\tilde{\phi}_1,\boldsymbol{c}_Y)} 1 d\boldsymbol{x} + \int_{{\mathcal{C}_-}(\tilde{\phi}_1,\boldsymbol{c}_Y)} 1 d\boldsymbol{x} \right). \end{split} \] Observe that \[ \int_{{\mathcal{C}_+}(\tilde{\phi}_1,\boldsymbol{c}_Y)} 1 d\boldsymbol{x} < V_{\phi_n}(\boldsymbol{c}_+,r_Y (1-\delta^2)^{1/2},n) \] and \[ \int_{{\mathcal{C}_-}(\tilde{\phi}_1,\boldsymbol{c}_Y)} 1 d\boldsymbol{x} < V_{\phi_n}(\boldsymbol{c}_-,r_Y (1-\delta^2)^{1/2},n) \] for some $\boldsymbol{c}_+,\boldsymbol{c}_-\in\mathbb{H}$ satisfying \[ \|\boldsymbol{c}_+-\boldsymbol{c}_Y\|\leq r_Y \delta, \ \|\boldsymbol{c}_- - \boldsymbol{c}_Y\|\leq r_Y\delta. \] Therefore, according to Assumption \ref{assume:rates} (eq. (\ref{eq:volume_rates})) \[ \begin{split} &P(E_1(\tilde{\phi}_1,\tilde{\phi}_2)|\tilde{\phi}_1)\leq C A_Y \left( \left[(1-\delta^2)^{1/2} \right]^{\alpha_{\phi_n}(\boldsymbol{c}_Y,\boldsymbol{c}_+,n)} \right.\\ &\left. + \left[(1-\delta^2)^{1/2} \right]^{\alpha_{\phi_n}(\boldsymbol{c}_Y,\boldsymbol{c}_-,n)} \right). \end{split} \] Taking (\ref{eq:beta}) into account, the above estimate results in \begin{equation}\label{eq:bound_pair} \begin{split} & P(E_1(\tilde{\phi}_1,\tilde{\phi}_2)|\tilde{\phi}_1)\leq \\ & \quad 2 C A_Y \left[(1-\delta^2)^{1/2} \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}.
\quad \end{split} \end{equation} Hence, the probability that the event $E_1(\tilde{\phi}_1,\tilde{\phi}_2)$ occurs admits the following upper bound: \[ \begin{split} &\int P(E_1(\tilde{\phi}_1,\tilde{\phi}_2)|\tilde{\phi}_1) p(\tilde{\phi}_1) d\tilde{\phi}_1 \leq \\ &2 C A_Y \left[(1-\delta^2)^{1/2} \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)} \int p(\tilde{\phi}_1) d\tilde{\phi}_1\\ &=2 C A_Y \left[(1-\delta^2)^{1/2} \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}. \end{split} \] Now consider the events \[ \begin{split} & E_{m}(\tilde{\phi}_1,\dots,\tilde{\phi}_m):\\ & \left[\left|\left(\frac{\tilde{\phi}_1}{\|\tilde{\phi}_1\|},\tilde{\phi}_{m}\right)\right| > \delta \right] \vee \cdots \vee \left[\left|\left(\frac{\tilde{\phi}_{m-1}}{\|\tilde{\phi}_{m-1}\|}, \tilde{\phi}_m\right)\right| > \delta\right] \end{split} \] for $m=2,\dots,k$. According to the union bound, \[ \begin{split} &P(E_{m}(\tilde{\phi}_1,\dots,\tilde{\phi}_m)|\tilde{\phi}_1,\dots,\tilde{\phi}_{m-1}) \leq\\ &\quad \quad \sum_{i=1}^{m-1} P\left(\left|\left(\frac{\tilde{\phi}_i}{\|\tilde{\phi}_i\|},\tilde{\phi}_{m}\right)\right| > \delta\left| \tilde{\phi}_i \right.\right) \end{split} \] Applying the same argument as has been used in the derivation of (\ref{eq:bound_pair}), we can conclude that the right-hand side of the above inequality does not exceed the value of \[ 2 (m-1) C A_Y \left[(1-\delta^2)^{1/2} \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}. \] Hence \begin{equation}\label{eq:bound_m_tuples} \begin{split} &P(E_{m}(\tilde{\phi}_1,\dots,\tilde{\phi}_m))\leq\\ &2 (m-1) C A_Y \left[(1-\delta^2)^{1/2} \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)} \end{split} \end{equation} for every $m=2,\dots,k$. Next, consider the events \[ B_m(\tilde{\phi}_m): \ \|\tilde{\phi}_m\| < (1-\varepsilon)r_Y, \ m=1,\dots,k.
\] The probability $P(B_m(\tilde{\phi}_m)|\tilde{\phi}_i, \ i\neq m)$ satisfies: \begin{equation}\label{eq:norms_bound} \begin{split} & P(B_m(\tilde{\phi}_m)|\tilde{\phi}_i, \ i\neq m)=\int_{\|\phi_n(\boldsymbol{x})-\boldsymbol{c}_Y\|\leq (1-\varepsilon)r_Y} p_Y(\boldsymbol{x}) d\boldsymbol{x}\\ & \leq \frac{A_Y }{V_{\phi_n}(\boldsymbol{c}_Y,r_Y,n)} \int_{\|\phi_n(\boldsymbol{x})-\boldsymbol{c}_Y\|\leq (1-\varepsilon)r_Y} 1 d\boldsymbol{x}\\ & = A_Y \frac{V_{\phi_n}(\boldsymbol{c}_Y,(1-\varepsilon)r_Y,n)}{V_{\phi_n}(\boldsymbol{c}_Y,r_Y,n)} \\ & \leq C A_Y \left[(1-\varepsilon)\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,0,n)}. \end{split} \end{equation} Recall that for any events $\Omega_1,\dots,\Omega_d$ the following holds true: \begin{equation}\label{eq:prob_bound} P(\Omega_1 \land \Omega_2 \land \cdots \land \Omega_d)\geq 1 - \sum_{i=1}^d P(\mbox{not} \ \Omega_i). \end{equation} Therefore, using (\ref{eq:bound_m_tuples}) and (\ref{eq:norms_bound}), one can conclude that \begin{equation}\label{eq:all_projections} \begin{split} & P((\mbox{not} \ E_2)\land \cdots \land(\mbox{not} \ E_k)) \geq 1 - \sum_{m=2}^{k} P(E_m)\\ & \geq 1 - k(k-1) C A_Y \left[(1-\delta^2)^{1/2} \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)} \end{split} \end{equation} and \begin{equation}\label{eq:all_norms} \begin{split} & P((\mbox{not} \ B_1)\land \cdots \land(\mbox{not} \ B_k)) \geq 1 - \sum_{m=1}^{k} P(B_m)\\ & \geq 1 - k C A_Y \left[(1-\varepsilon) \right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,0,n)}. \end{split} \end{equation} Finally, observe that $\|\tilde{\phi}_m\|$ is always bounded from above by $r_Y$.
Therefore any $\tilde{\phi}_1,\dots, \tilde{\phi}_k$ satisfying the conditions \[ \left[\left|\left(\frac{\tilde{\phi}_1}{\|\tilde{\phi}_1\|},\tilde{\phi}_{m}\right)\right| \leq \delta \right] \land \cdots \land \left[\left|\left(\frac{\tilde{\phi}_{m-1}}{\|\tilde{\phi}_{m-1}\|}, \tilde{\phi}_m\right)\right| \leq \delta\right] \] for $m=2,\dots,k$ must necessarily satisfy \[ \left[\left|\left(\tilde{\phi}_1,\tilde{\phi}_{m}\right)\right| \leq \delta r_Y \right] \land \cdots \land \left[\left|\left(\tilde{\phi}_{m-1}, \tilde{\phi}_m\right)\right| \leq \delta r_Y\right]. \] Hence, the event $[\mbox{not} \ E_2 \wedge \cdots \wedge \mbox{not} \ E_{k}]$ is contained in the event $A_1$ defined by (\ref{eq:event_1}), so that \[ P(A_1)\geq P(\mbox{not} \ E_2 \wedge\dots\wedge \mbox{not} \ E_{k}) \] and \[ \begin{split} & P(A_1 \wedge A_2)= P (A_1 \wedge \mbox{not} \ B_1 \wedge \cdots \wedge \mbox{not} \ B_k)\\ &\geq P(\mbox{not} \ E_2 \wedge\dots\wedge \mbox{not} \ E_{k} \wedge \mbox{not} \ B_1 \wedge \dots \wedge \mbox{not} \ B_k)\\ & \geq 1 - \sum_{m=2}^{k} P(E_m) - \sum_{m=1}^{k} P(B_m). \end{split} \] This together with (\ref{eq:all_norms}), (\ref{eq:all_projections}) concludes the proof. $\square$ \subsection{The Law of High Dimension in Hilbert Spaces}\label{sec:sec:law_high_dimension} \begin{thm}[The law of high dimension]\label{thm:law_of_high_dimension} Consider a set $\mathcal{Z}=\{\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_k\}$ of $k$ i.i.d. random vectors drawn from a distribution satisfying Assumption \ref{assume:y}, and let the function $\phi_n$ satisfy Assumption \ref{assume:rates}. Introduce the empirical mean of the sample in the feature space $\mathbb{H}$: \[ \bar{\phi}_n = \frac{1}{k} \sum_{i=1}^k \phi_{n}(\boldsymbol{x}_i).
\] Finally, define \[ \begin{split} U(k,\delta)=& k^{-1}(r_Y^2+(k-1)\delta r_Y), \\ L(k,\delta,\varepsilon) =& k^{-1}((1-\varepsilon)^2 r_Y^2 - (k-1)\delta r_Y), \end{split} \] where $\delta,\varepsilon$ are some real numbers from $(0,1)$. Then the following holds for any $\delta,\varepsilon\in(0,1)$: \begin{equation}\label{eq:lem:centering:2} \begin{split} &P\left(\|\bar{\phi}_n-\boldsymbol{c}_Y\|^2 \leq U(k,\delta) \right) \geq\\ & \quad \quad \quad \quad 1 - C A_Y k (k-1) \left[(1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}. \end{split} \end{equation} Moreover, \begin{equation}\label{eq:lem:centering:1} \begin{split} & P\left(L(k,\delta,\varepsilon) \leq \|\bar{\phi}_n-\boldsymbol{c}_Y\|^2 \leq U(k,\delta) \right)\geq 1 \\ & - \ C A_Y k[(1-\varepsilon)]^{\beta_{\phi_n}(\boldsymbol{c}_Y,0,n)} \\ & - \ C A_Y k (k-1) \left[(1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}. \end{split} \end{equation} \end{thm} {\it Proof of Theorem \ref{thm:law_of_high_dimension}.} The proof follows from the Quasi-orthogonality Lemma (Lemma \ref{lem:quasi_orthogonality}). Consider \[ \begin{split} & \|\bar{\phi}_n-\boldsymbol{c}_Y\|^2 = (\bar{\phi}_n-\boldsymbol{c}_Y, \bar{\phi}_n-\boldsymbol{c}_Y)\\ &= \left(\frac{1}{k}\sum_{i=1}^k \phi_n(\boldsymbol{x}_i) - \boldsymbol{c}_Y, \frac{1}{k}\sum_{i=1}^k \phi_n(\boldsymbol{x}_i) - \boldsymbol{c}_Y \right) \\ &=\frac{1}{k^2} \sum_{i=1}^k \|\phi_n(\boldsymbol{x}_i)-\boldsymbol{c}_Y\|^2 \\ & + \frac{1}{k^2} \sum_{i\neq j} (\phi_n(\boldsymbol{x}_i) - \boldsymbol{c}_Y,\phi_n(\boldsymbol{x}_j) - \boldsymbol{c}_Y ). 
\end{split} \] Lemma \ref{lem:quasi_orthogonality} (statement (\ref{eq:lem:orthogonality:statement:1})) states that the probability that the bound \[ \frac{1}{k^2} \sum_{i\neq j} \left|(\phi_n(\boldsymbol{x}_i) - \boldsymbol{c}_Y,\phi_n(\boldsymbol{x}_j) - \boldsymbol{c}_Y )\right| \leq \frac{k-1}{k} r_Y \delta \] holds true is at least \[ 1 - k (k-1) C A_Y \left[ (1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}. \] Noticing that $\|\phi_n(\boldsymbol{x}_i)-\boldsymbol{c}_Y\|\leq r_Y $ for all $i=1,\dots,k$ assures that statement (\ref{eq:lem:centering:2}) holds. Combining the union bound with (\ref{eq:lem:centering:2}) and invoking statement (\ref{eq:lem:orthogonality:statement:2}) of Lemma \ref{lem:quasi_orthogonality} results in bound (\ref{eq:lem:centering:1}). $\square$ \subsection{Few-shot learning with nonlinear feature maps}\label{sec:few_shot} \begin{thm}[Few-shot learning]\label{thm:few_shot} Let $F$ be a classifier defined by (\ref{eq:classifier_general}) and trained on a sample $\mathcal{X}$ drawn from some distribution $P_\mathcal{X}$ whose marginal distribution $P_X$ satisfies Assumption \ref{assume:x} with $\boldsymbol{c}_X=0$. Let $\mathcal{Z}=\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_k\}$ be an i.i.d. sample drawn from a distribution $P_Y$ satisfying Assumption \ref{assume:y}, and whose corresponding class labels are from the set $\mathcal{L}_{new}$. Finally, suppose that the function $\phi_n$ satisfies Assumption \ref{assume:rates}. Consider \[ D(\mathcal{Z})=\frac{1}{k} \left( \sum_{i=1}^{k} \sum_{j=1}^k \kappa_{n}(\boldsymbol{x}_i,\boldsymbol{x}_j) \right)^{1/2} \] and let $\delta\in(0,1)$ be such that \[ \Delta=D(\mathcal{Z}) -\left(\frac{r_Y^2}{k}+\frac{k-1}{k} r_Y \delta\right)^{1/2} >0
\] Then the map \begin{equation}\label{eq:learning_from_few_algorithm} F_{new}(\boldsymbol{x})=\left\{\begin{array}{ll} \ell_{new}, & \frac{1}{k}\sum_{i=1}^k \kappa_n(\boldsymbol{x}_i, \boldsymbol{x}) - \theta D(\mathcal{Z}) \geq 0\\ F(\boldsymbol{x}) , & \mbox{otherwise} \end{array}\right. \end{equation} with $\ell_{new}\in\mathcal{L}_{new}$, parameterised by \[ \theta\in \left[\max\{\Delta - r_Y, 0\}, \Delta \right], \] is a solution of Problem \ref{prob:few_shot} with \begin{equation}\label{eq:thm:learning_few:bound_p_n} \begin{split} &p_n= \\ &\left(1 - C^\ast(\boldsymbol{c}_Y,\Delta-\theta) A_Y \times \right.\\ &\quad \quad \quad \left. \left[ \left(r_Y^2-(\Delta-\theta)^2\right)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,\Delta-\theta,n)}\right) \times \\ &\left(1- C^\ast(\boldsymbol{c}_Y,r_Y\delta) A_Y k (k-1) \times \right. \\ & \quad \quad \quad \left. \left[ (1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,r_Y\delta,n)}\right), \end{split} \end{equation} \begin{equation}\label{eq:thm:learning_few:bound_p_e} p_e= 1 - C^\ast(0,\theta) A_X \left[\left(1-\frac{\theta^2}{r_X^2}\right)^{1/2}\right]^{\beta_{\phi_n}(0,\theta,n)}. \end{equation} \end{thm} {\it Proof of Theorem \ref{thm:few_shot}}. The proof of the theorem relies on the law of high dimension captured in Theorem \ref{thm:law_of_high_dimension}. According to this result, the probability that the point $\boldsymbol{c}_Y\in\mathbb{H}$ determining the concentration properties of the unknown distribution $P_Y$ is at most \[ U(k,\delta)^{1/2}=\left(\frac{r_Y^2}{k}+\frac{k-1}{k}r_Y\delta\right)^{1/2} \] away in the space $\mathbb{H}$ from the empirical mean \[ \bar{\phi}_n=\frac{1}{k}\sum_{i=1}^k \phi_n(\boldsymbol{x}_i) \] is at least \begin{equation}\label{eq:prebound_p_n} 1- C^\ast(\boldsymbol{c}_Y,\delta r_Y) A_Y k (k-1)\left[(1-\delta^2)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,\delta r_Y,n)}. \end{equation} Now, suppose that \[ \|\boldsymbol{c}_Y-\bar{\phi}_n\|\leq U(k,\delta)^{1/2} \] holds true. Pick $0< \theta < \Delta$ and consider two sets: \[ \mathcal{S}_1=\left\{\boldsymbol{\xi}\in\mathbb{H} \ | \ \left(\frac{\bar{\phi}_n}{\|\bar{\phi}_n\|},\boldsymbol{\xi} \right) - \theta = 0 \right\} \] and \[ \mathcal{S}_2=\left\{\boldsymbol{\xi}\in\mathbb{H} \ | \ \left(\frac{\bar{\phi}_n}{\|\bar{\phi}_n\|},\boldsymbol{\xi} \right) - \left(\frac{\bar{\phi}_n}{\|\bar{\phi}_n\|},\boldsymbol{c}_Y \right) = 0 \right\}. \] The sets $\mathcal{S}_1$ and $\mathcal{S}_2$ define hyperplanes in $\mathbb{H}$ which are parallel to each other, with $\mathcal{S}_2$ passing through the point $\boldsymbol{c}_Y$. We observe that \[ \min_{\boldsymbol{\xi}: \ \|\boldsymbol{\xi}-\bar{\phi}_n\|\leq U(k,\delta)^{1/2}} \left(\frac{\bar{\phi}_n}{\|\bar{\phi}_n\|},\boldsymbol{\xi}\right)=\|\bar{\phi}_n\|-U(k,\delta)^{1/2} = \Delta, \] since $\|\bar{\phi}_n\|=D(\mathcal{Z})$, and can therefore conclude that the set $\mathcal{S}_1$ is at least $D(\mathcal{Z})-U(k,\delta)^{1/2}-\theta=\Delta-\theta$ away from the set $\mathcal{S}_2$. Note that all points $\boldsymbol{x}\in\mathbb{R}^n$ for which \begin{equation}\label{eq:new_classifier_Y} \begin{split} & \left(\bar{\phi}_n, \phi_n(\boldsymbol{x}) \right) - \|\bar{\phi}_n\| \theta = \left(\bar{\phi}_n, \phi_n(\boldsymbol{x}) \right) - D(\mathcal{Z})\theta\\ & = \frac{1}{k}\sum_{i=1}^k \kappa_n(\boldsymbol{x}_i,\boldsymbol{x})-D(\mathcal{Z})\theta > 0 \end{split} \end{equation} will be assigned the label $\ell_{new}$ from $\mathcal{L}_{new}$ by the classifier $F_{new}$. Let $\boldsymbol{u}$ be the orthogonal projection of $\boldsymbol{c}_Y$ onto the set $\mathcal{S}_1$.
\] Then the probability that (\ref{eq:new_classifier_Y}) occurs for $\boldsymbol{x}$ drawn from $P_Y$ is \[ 1-\int_{\mathcal{C}(\boldsymbol{u},\|\boldsymbol{u}-\boldsymbol{c}_Y\|)} p_Y(\boldsymbol{x})d\boldsymbol{x}, \] where \[ {\mathcal{C}(\boldsymbol{u},d)}=\left\{ \boldsymbol{x}\in\mathbb{R}^n \ \left| \left(\frac{\boldsymbol{u}-\boldsymbol{c}_Y}{\|\boldsymbol{u}-\boldsymbol{c}_Y\|},\phi_n(\boldsymbol{x})-\boldsymbol{c}_Y\right)- d > 0 \right. \right\}. \] Noticing that $\|\boldsymbol{u}-\boldsymbol{c}_Y\|\geq \Delta-\theta$, since this is just the separation distance between $\mathcal{S}_1$ and $\mathcal{S}_2$, this probability is at least \[ 1-\int_{\mathcal{C}(\boldsymbol{u},\Delta-\theta)} p_Y(\boldsymbol{x})d\boldsymbol{x}. \] Taking Assumptions \ref{assume:rates} and \ref{assume:y} into account, the latter quantity can be bounded from below as \[ 1 - C^\ast(\boldsymbol{c}_Y,\Delta-\theta) A_Y \left[ \left(r_Y^2-(\Delta-\theta)^2\right)^{1/2}\right]^{\beta_{\phi_n}(\boldsymbol{c}_Y,\Delta-\theta,n)}. \] This together with (\ref{eq:prebound_p_n}) assures that (\ref{eq:thm:learning_few:bound_p_n}) holds. Let $\boldsymbol{x}$ be drawn from $P_X$. The probability that $F_{new}(\boldsymbol{x})\neq F(\boldsymbol{x})$ is \[ \int_{\left(\frac{\bar{\phi}_n}{\|\bar{\phi}_n\|},\phi_n(\boldsymbol{x})\right)- \theta > 0} p_X(\boldsymbol{x})d\boldsymbol{x}. \] Introducing $\boldsymbol{v} = \frac{\theta}{\| \bar{\phi}_n \|} \bar{\phi}_n$, this probability may be bounded from above by \[ \int_{\|\boldsymbol{v}-\phi_n(\boldsymbol{x})\|\leq (r_X^2 - \theta^2)^{1/2}} p_X(\boldsymbol{x})d\boldsymbol{x}.
\] \[ \begin{split} &\int_{\|\boldsymbol{v}-\phi_n(\boldsymbol{x})\|\leq (r_X^2 - \theta^2)^{1/2}} p_X(\boldsymbol{x})d\boldsymbol{x} \\ & \leq A_X \frac{1}{V_{\phi_n}(0,r_X,n)} \int_{\|\boldsymbol{v}-\phi_n(\boldsymbol{x})\|\leq (r_X^2 - \theta^2)^{1/2}} 1 d\boldsymbol{x}\\ & = A_X \frac{V_{\phi_n}(\boldsymbol{v},(r_X^2 - \theta^2)^{1/2},n)}{V_{\phi_n}(0,r_X,n)} \\ &\leq C A_X \left[\left(1-\frac{\theta^2}{r_X^2}\right)^{1/2}\right]^{\alpha_{\phi_n}(\boldsymbol{v},0,n)}\\ &\leq C^{\ast}(0, \theta) A_X \left[\left(1-\frac{\theta^2}{r_X^2}\right)^{1/2}\right]^{\beta_{\phi_n}(0,\theta,n)}, \end{split} \] and the bound (\ref{eq:thm:learning_few:bound_p_e}) follows. $\square$ \section{Conclusion}\label{sec:conclusion} This paper provides, for the first time, a very general treatment of the challenge of few-shot learning. The main thrust of the work is to explicitly include the influence of nonlinear feature transformations in the problem, assumptions, and solutions. The work determines key desired properties of these nonlinear transformations, captured by Assumption \ref{assume:rates}, as well as the properties of the data, specified by Assumptions \ref{assume:x}, \ref{assume:y}, which are important for successful few-shot learning. These assumptions relate the dimension of the original and latent feature spaces to the properties of nonlinear feature maps that are sufficient for efficient learning. Potentially, these assumptions could also serve as explicit high-level specifications for the task of shaping or learning these nonlinear transformations from data. A detailed analysis of these properties and their practical feasibility is beyond the scope of this theoretical study. As our numerical examples show (see Fig.
\ref{fig:kernel_separability_orthogonality}), exploration of the impact of nonlinear feature maps and their corresponding kernels on quasi-orthogonality, volume compression, and separability is a non-trivial and creative intellectual challenge which will be the focus of our future work. \bibliographystyle{IEEEtran}
\section{Proofs of Preliminary Lemmas} \label{appendix prelim} \textbf{Proof of Lemma \ref{lemma0}} \begin{proof} Because $P$, $Q$ and $\Phat$ are basis matrices, $P'P=I$, ${Q}'Q = I$ and $\Phat'\Phat=I$. \begin{enumerate} \item Using $P'P = I$ and $\|M\|_2^2 = \|MM'\|_2$, $\|(I-\Phat{\Phat}')P P'\|_2 =\|(I-\Phat{\Phat}')P\|_2$. Similarly, $\|(I-PP')\Phat {\Phat}'\|_2=\|(I-PP')\Phat\|_2$. Let $D_1 = (I-\Phat{\Phat}')P P'$ and let $D_2=(I-PP')\Phat {\Phat}'$. Notice that $\|D_1\|_2 = \sqrt{\lambda_{\max}(D_1'D_1)} = \sqrt{\|D_1'D_1\|_2}$ and $\|D_2\|_2 = \sqrt{\lambda_{\max}(D_2'D_2)} = \sqrt{\|D_2'D_2\|_2}$. So, in order to show $\|D_1\|_2 = \|D_2\|_2$, it suffices to show that $\|D_1' D_1\|_2 = \|D_2'D_2\|_2$. Let $P'\Phat\overset{SVD}{=} U\Sigma V'$. Then, $D_1'D_1 = P (I - P'\Phat{\Phat}'P)P' = P U (I-\Sigma^2) U' P' $ and $D_2'D_2 = \Phat ( I - {\Phat}'PP'\Phat){\Phat}' = \Phat V (I-\Sigma^2) V'{\Phat}'$ are the compact SVD's of $D_1' D_1$ and $D_2' D_2$ respectively. Therefore, $\|D_1' D_1\| = \|D_2'D_2\|_2 = \|I-\Sigma^2\|_2$ and hence $\|(I-\Phat{\Phat}')PP'\|_2 =\|(I - P{P}')\Phat{\Phat}'\|_2$. \item $\|P{P}' -\Phat {\Phat}'\|_2 = \| PP' - \Phat{\Phat}'PP' + \Phat{\Phat}'PP'-\Phat {\Phat}'\|_2 \leq \| (I- \Phat{\Phat}')PP'\|_2 + \|(I-PP')\Phat {\Phat}'\|_2 = 2 \zeta_*$. \item Since ${Q}'P = 0$, then $\|{Q}'\Phat\|_2 = \|{Q}'(I-P P')\Phat\|_2 \leq \|(I-P P')\Phat\|_2 = \zeta_*$. \item Let $M = (I-\Phat {\Phat}') Q$. Then $M'M = Q'(I-\Phat {\Phat}')Q$ and so $\sigma_i ((I-\Phat {\Phat}') Q) = \sqrt{\lambda_i (Q'(I-\Phat {\Phat}')Q)}$. Clearly, $\lambda_{\max} (Q'(I-\Phat {\Phat}')Q) \leq 1$. By Weyl's Theorem, $\lambda_{\min} (Q'(I-\Phat {\Phat}')Q) \geq 1 - \lambda_{\max} (Q'\Phat {\Phat}'Q) = 1- \|{Q}'\Phat\|_2^2 \geq 1-\zeta_*^2$. Therefore, $\sqrt{1-\zeta_*^2} \leq \sigma_{i}((I-\Phat {\Phat}') Q) \leq 1$.
\end{enumerate} For the case when $P$ and $\Phat$ are not the same size, the proof of 1 is used, but $\Sigma^2$ becomes $\Sigma\Sigma'$ for $D_1$ and $\Sigma'\Sigma$ for $D_2$. Since $\Sigma$ is of size $r_1 \times r_2$, $\Sigma\Sigma'$ will be of size $r_1\times r_1$ and $\Sigma'\Sigma$ will be of size $r_2\times r_2$. Because $r_1 \leq r_2$, every singular value of $D_1'D_1$ will be a singular value of $D_2'D_2$ (using the SVD as in the proof of 1 above). Using the characterization of the matrix 2-norm as the largest singular value, $\|D_1'D_1\|_2 \leq \|D_2'D_2\|_2$. \end{proof} \textbf{Proof of Lemma \ref{rem_prob}} \begin{proof} It is easy to see that $\mathbf{P}(\mathcal{B}^e, \mathcal{C}^e) = \mathbf{E}[\mathbb{I}_\mathcal{B}(X,Y) \mathbb{I}_\mathcal{C}(X)].$ If $\mathbf{E}[\mathbb{I}_{\mathcal{B}}(X,Y)|X] \ge p$ for all $X \in \mathcal{C}$, this means that $\mathbf{E}[\mathbb{I}_{\mathcal{B}}(X,Y)|X] \mathbb{I}_{\mathcal{C}}(X) \ge p \mathbb{I}_{\mathcal{C}}(X) $. This, in turn, implies that \begin{align*} \mathbf{P}(\mathcal{B}^e, \mathcal{C}^e) = \mathbf{E}[\mathbb{I}_\mathcal{B}(X,Y) \mathbb{I}_\mathcal{C}(X)] &= \mathbf{E}[ \mathbf{E}[\mathbb{I}_{\mathcal{B}}(X,Y)|X] \mathbb{I}_{\mathcal{C}}(X) ] \\ &\ge p \mathbf{E}[\mathbb{I}_{\mathcal{C}}(X) ]. \end{align*} Recall from Definition \ref{probdefs} that $\mathbf{P}(\mathcal{B}^e|X) = \mathbf{E}[\mathbb{I}_{\mathcal{B}}(X,Y)|X]$ and $\mathbf{P}(\mathcal{C}^e)= \mathbf{E}[\mathbb{I}_{\mathcal{C}}(X) ]$. Thus, we conclude that if $\mathbf{P}(\mathcal{B}^e|X) \ge p$ for all $X \in \mathcal{C}$, then $\mathbf{P}(\mathcal{B}^e, \mathcal{C}^e) \ge p \mathbf{P}(\mathcal{C}^e)$. Using the definition of $\mathbf{P}(\mathcal{B}^e|\mathcal{C}^e)$, the claim follows. \end{proof} \textbf{Proof of Corollary \ref{hoeffding_nonzero}} \begin{proof} \begin{enumerate} \item Since, for any $X \in {\cal C}$, conditioned on $X$, the $Z_t$'s are independent, the same is also true for $Z_t - g(X)$ for any function $g$ of $X$.
Let $Y_t := Z_t - \mathbf{E}(Z_t|X)$. Thus, for any $X \in {\cal C}$, conditioned on $X$, the $Y_t$'s are independent. Also, clearly $\mathbf{E}(Y_t|X) = 0$. Since, for all $X \in \mathcal{C}$, $\mathbf{P}(b_1 I \preceq Z_t \preceq b_2 I|X)=1$, and since $\lambda_{\max}(\cdot)$ is a convex function and $\lambda_{\min}(\cdot)$ is a concave function of a Hermitian matrix, it follows that $b_1 I \preceq \mathbf{E}(Z_t|X) \preceq b_2 I$ w.p. one for all $X \in \mathcal{C}$. Therefore, $\mathbf{P}(Y_t^2 \preceq (b_2 -b_1)^2 I|X) = 1$ for all $X \in \mathcal{C}$. Thus, for Theorem \ref{hoeffding}, $\sigma^2 = \|\sum_t (b_2 - b_1)^2I\|_2 = \alpha (b_2-b_1)^2$. For any $X \in \mathcal{C}$, applying Theorem \ref{hoeffding} for $\{Y_t\}$'s conditioned on $X$, we get that, for any $\epsilon > 0$, \begin{multline*} \mathbf{P}\left( \lambda_{\max} \left(\frac{1}{\alpha} \sum_t Y_t \right) \leq \epsilon \Big | X \right) > \\1- n\exp\left(\frac{-\alpha \epsilon^2}{8 (b_2 -b_1)^2}\right) \ \text{for all} \ X \in \mathcal{C} \end{multline*} By Weyl's theorem, $\lambda_{\max} (\frac{1}{\alpha} \sum_t Y_t) = \lambda_{\max} (\frac{1}{\alpha} \sum_t (Z_t - \mathbf{E}(Z_t|X))) \geq \lambda_{\max} (\frac{1}{\alpha} \sum_t Z_t) + \lambda_{\min} (\frac{1}{\alpha} \sum_t -\mathbf{E}(Z_t|X))$. Since $\lambda_{\min} (\frac{1}{\alpha} \sum_t -\mathbf{E}(Z_t|X)) = - \lambda_{\max} (\frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X))\geq -b_4$, thus $ \lambda_{\max} (\frac{1}{\alpha} \sum_t Y_t) \geq \lambda_{\max} (\frac{1}{\alpha} \sum_t Z_t) - b_4$. Therefore, \begin{multline*} \mathbf{P}\left( \lambda_{\max} \left(\frac{1}{\alpha} \sum_t Z_t \right) \leq b_4 + \epsilon\Big | X \right) > \\ 1- n\exp\left(\frac{-\alpha \epsilon^2}{8 (b_2 -b_1)^2}\right) \ \text{for all} \ X \in \mathcal{C} \end{multline*} \item Let $Y_t = \mathbf{E}(Z_t|X) - Z_t$. As before, $\mathbf{E}(Y_t|X) = 0$ and conditioned on any $X \in {\cal C}$, the $Y_t$'s are independent and $\mathbf{P}(Y_t^2 \preceq (b_2 -b_1)^2 I|X) = 1$.
As before, applying Theorem \ref{hoeffding}, we get that for any $\epsilon >0$, \begin{multline*} \mathbf{P}\left( \lambda_{\max} \left(\frac{1}{\alpha} \sum_t Y_t \right) \leq \epsilon \Big | X \right) > \\ 1- n\exp\left(\frac{-\alpha \epsilon^2}{8 (b_2 -b_1)^2} \right) \ \text{for all} \ X \in \mathcal{C} \end{multline*} By Weyl's theorem, $\lambda_{\max}(\frac{1}{\alpha}\sum_t Y_t) = \lambda_{\max}(\frac{1}{\alpha} \sum_t(\mathbf{E}(Z_t|X) - Z_t)) \geq \lambda_{\min} (\frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X)) + \lambda_{\max} (\frac{1}{\alpha} \sum_t -Z_t) = \lambda_{\min} (\frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X)) - \lambda_{\min} (\frac{1}{\alpha} \sum_t Z_t) \ge b_3 - \lambda_{\min} (\frac{1}{\alpha} \sum_t Z_t)$. Therefore, for any $\epsilon >0$, \begin{multline*} \mathbf{P} \left(\lambda_{\min}\left(\frac{1}{\alpha}\sum_t Z_t\right) \geq b_3 -\epsilon \Big| X \right) \\ \geq 1- n \exp\left(\frac{-\alpha \epsilon^2}{8(b_2-b_1)^2}\right) \ \text{for all} \ X \in \mathcal{C} \end{multline*} \end{enumerate} \end{proof} \textbf{Proof of Corollary \ref{hoeffding_rec}} \begin{proof} Define the dilation of an $n_1 \times n_2$ matrix $M$ as $\text{dilation} (M) := \left[\begin{array}{cc}0 & {M}' \\ M & 0 \\\end{array} \right]$. Notice that this is an $(n_1+n_2) \times (n_1 +n_2)$ Hermitian matrix \cite{tail_bound}. As shown in \cite[equation 2.12]{tail_bound}, \begin{eqnarray} \lambda_{\max}(\text{dilation}(M)) = \|\text{dilation} (M)\|_2 = \|M\|_2 \label{dilM} \end{eqnarray} Thus, the corollary assumptions imply that $\mathbf{P}(\|\text{dilation} (Z_t)\|_2 \leq b_1 |X) = 1$ for all $X \in \mathcal{C}$. Hence, $\mathbf{P}(-b_1 I \preceq \text{dilation} (Z_t) \preceq b_1 I | X) = 1$ for all $X \in \mathcal{C}$. Using (\ref{dilM}), the corollary assumptions also imply that $\frac{1}{\alpha}\sum_t \mathbf{E}( \text{dilation}(Z_t) |X) = \text{dilation} ( \frac{1}{\alpha}\sum_t \mathbf{E}(Z_t|X)) \preceq b_2 I$ for all $X \in \mathcal{C}$.
Finally, since the $Z_t$'s are conditionally independent given $X$, for any $X \in \mathcal{C}$, the same holds for the $\text{dilation} (Z_t)$'s. Thus, applying Corollary \ref{hoeffding_nonzero} for the sequence $\{\text{dilation} (Z_t)\}$, we get that \begin{multline*} \mathbf{P} \left(\lambda_{\max}\left(\frac{1}{\alpha}\sum_t \text{dilation}(Z_t)\right) \leq b_2 + \epsilon \Big| X \right) \geq \\ 1- (n_1+n_2) \exp\left(\frac{-\alpha \epsilon^2}{32 b_1^2}\right) \ \text{for all} \ X \in \mathcal{C} \end{multline*} Using (\ref{dilM}), $\lambda_{\max}(\frac{1}{\alpha}\sum_t \text{dilation}(Z_t)) = \lambda_{\max}(\text{dilation}(\frac{1}{\alpha}\sum_t Z_t)) = \|\frac{1}{\alpha}\sum_t Z_t\|_2$ and this gives the final result. \end{proof} \textbf{Proof of Lemma \ref{delta_kappa}} \begin{proof} Let $A = I - PP'$. By definition, $\delta_s(A) := \max\{ \max_{|T| \leq s}(\lambda_{\max}(A_T'A_T) -1),\max_{|T| \leq s} ( 1 - \lambda_{\min} (A_T' A_T)) \} $. Notice that $A_T'A_T = I - I_T' PP'I_T$. Since $I_T' PP'I_T$ is p.s.d., by Weyl's theorem, $\lambda_{\max}(A_T'A_T) \leq 1$. Since $\lambda_{\max}(A_T'A_T) - 1 \leq 0$ while $1 - \lambda_{\min}(A_T'A_T) \geq 0$, we get \begin{equation} \delta_s(I - PP') = \max_{|T| \leq s}\Big(1 - \lambda_{\min} ( I - I_T' PP'I_T)\Big) \label{defn_kappa_1} \end{equation} By definition, $\kappa_s(P) = \max_{|T| \leq s} \frac{\|I_T' P\|_2}{\|P\|_2} =\max_{|T| \leq s} \|I_T' P\|_2$. Notice that $\|I_T' P\|_2^2 = \lambda_{\max} (I_T' PP'I_T) = 1-\lambda_{\min} (I - I_T'PP'I_T)$ \footnote{This follows because $B=I_T'PP'I_T$ is a Hermitian matrix. Let $B = U \Sigma U'$ be its EVD.
Since $UU'=I$, $\lambda_{\min}(I-B) = \lambda_{\min}(U(I - \Sigma)U') =\lambda_{\min}(I - \Sigma) = 1 - \lambda_{\max}(\Sigma) = 1-\lambda_{\max}(B)$.}, and so \begin{equation} \kappa_s^2(P) =\max_{|T| \leq s} \Big(1 - \lambda_{\min} (I - I_T'PP'I_T)\Big)\label{defn_kappa_2} \end{equation} From (\ref{defn_kappa_1}) and (\ref{defn_kappa_2}), we get $ \delta_s(I-PP') = \kappa_s^2 (P) $. \end{proof} \section{The Need for Projection PCA} \label{projpca} \subsection{Projection-PCA vs Standard PCA} The reason we cannot use standard PCA for the subspace update in our work is that, in our case, the error $e_t= L_t - \Lhat_t$ in the observed data vector $\Lhat_t$ is correlated with the true data vector $L_t$, and the condition number of $\operatorname{Cov}[L_t]$ is large (see Remark \ref{large_f}). In other works that study finite-sample PCA, e.g. \cite{nadler} and references therein, the large condition number does not cause a problem because they assume that the error/noise ($e_t$) is uncorrelated with the true data vector ($L_t$). Moreover, $e_t$ or $L_t$ or both are zero mean (which holds in our case too). Thus, the dominant term in the perturbation of the estimated covariance matrix, $(1/\alpha) \sum_t \Lhat_t \Lhat_t'$, w.r.t. the true one is $(1/\alpha) \sum_t e_t e_t'$. For $\alpha$ large enough, the other two terms, $(1/\alpha) \sum_t L_t e_t'$ and its transpose, are close to zero w.h.p. due to the law of large numbers. Thus, the subspace error bound obtained using the $\sin \theta$ theorem and the matrix Hoeffding inequality will depend, w.h.p., only on the ratio of the maximum eigenvalue of $\operatorname{Cov}[e_t]$ to the smallest eigenvalue of $\operatorname{Cov}[L_t]$. The probability with which this bound holds depends on $f$; however, it can be made large by increasing the number of data points $\alpha$. However, in our case, because $e_t$ and $L_t$ are correlated, this strategy does not work. We explain this below.
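Before giving the argument, the following toy numerical sketch illustrates the core issue. It is only an illustration under assumed parameters (Gaussian data, an error equal to a $0.3$ fraction of $L_t$, $n=50$, $\alpha=5000$; none of these come from our model): when $e_t$ is correlated with $L_t$, the cross term $(1/\alpha)\sum_t L_t e_t'$ does not average to zero as $\alpha$ grows, whereas for error independent of $L_t$ it does.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 50, 5000  # toy dimensions, chosen only for illustration

# Toy data: zero-mean Gaussian L_t's (rows). In the "correlated" case the
# error is a fixed fraction of L_t itself; in the "independent" case it is
# independent noise of comparable size.
L = rng.standard_normal((alpha, n))
e_corr = 0.3 * L                                # error correlated with L_t
e_ind = 0.3 * rng.standard_normal((alpha, n))   # error independent of L_t

def cross_term_norm(L, e):
    """2-norm of the cross term (1/alpha) * sum_t L_t e_t'."""
    return np.linalg.norm(L.T @ e / len(L), 2)

# The correlated cross term concentrates around 0.3 * Cov[L_t], so its norm
# stays bounded away from zero; the independent one is O(1/sqrt(alpha)).
print(cross_term_norm(L, e_corr))
print(cross_term_norm(L, e_ind))
```

In the correlated case the cross term behaves like a constant fraction of $\operatorname{Cov}[L_t]$ no matter how large $\alpha$ is, which is exactly why the law-of-large-numbers argument that works for uncorrelated noise fails here.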
In this discussion, we remove the subscript $j$. Also, let $P_*:= P_{j-1}$, $\Phat_*:= \Phat_{j-1}$, $r_* = \operatorname{rank}(P_*)$. Consider $t=t_j + k\alpha-1$ when the $k^{th}$ projection PCA or PCA is done. Since the error $e_t = L_t - \Lhat_t$ is correlated with $L_t$, the dominant terms in the perturbation matrix seen by PCA are $(1/ (t_j+ k\alpha)) \sum_{t=1}^{t_j+k\alpha-1} L_t e_t'$ and its transpose, while for projection PCA, they are $(1/ \alpha) \Phi_0 \sum_{t \in \mathcal{I}_{j,k}} L_t e_t' \Phi_0$ and its transpose. The magnitude of $L_t$ can be large. The magnitude of $e_t$ is smaller than a constant times that of $L_t$. The constant is less than one but, at $t=t_j+\alpha-1$, it is not negligible. Thus, the norm of the perturbation seen by PCA at this time may not be small. As a result, the bound on the subspace error, $\SE_{(t)}$, obtained by applying the $\sin \theta$ theorem may be more than one (and hence meaningless since by definition $\SE_{(t)} \le 1$). For projection PCA, because of $\Phi_0$, the perturbation is much smaller and hence so is the bound on $\SE_{(t)}$. Let $\SE_k := \SE_{(t_j+k\alpha-1)}= \SE_{(t)}$ denote the subspace error for $t \in \mathcal{I}_{j,k}$. Consider $k=1$ first. For PCA, we can show that $\SE_1 \lesssim \check{C} \kappa_s^+ g^+ + \check{C}' f \zeta_*^+ $ for constants $\check{C}, \check{C}'$ that are more than one but not too large. Here $g^+$ is the upper bound on the condition number of $\text{Cov}(a_{t,\new})$ and it is valid to assume that $g^+$ is small so that $\check{C} \kappa_s^+ g^+ < 1$. However, $f$ is a bound on the maximum condition number of $\text{Cov}(a_t)=\text{Cov}(L_t)$ and this can be large. When it is, the second term may not be less than one. On the other hand, for projection PCA, we have $\SE_k \le \zeta_k + \zeta_* \le \zeta_k^+ + \zeta_*^+$ with $\zeta_*^+ = r \zeta$, and $\zeta_k^+ \approx \check{C} \kappa_s^+ g^+ \zeta_{k-1}^+ + \check{C}' f (\zeta_*^+)^2$ and $\zeta_0^+=1$.
Thus $\SE_1 \lesssim \check{C} \kappa_s^+ g^+ + \check{C}' f (\zeta_*^+)^2 + \zeta_*^+$. The first term in this bound is similar to that of PCA, but the second term is much smaller. The third term is negligibly small. Thus, in this case, it is easier to ensure that the bound is less than one. Moreover, our goal is to show that, within a finite delay after a subspace change time, the subspace error decays from one to a value proportional to $\zeta$. For projection PCA, this can be done because we can separately bound the subspace error of the existing subspace, $\zeta_*$, and of the newly added one, $\zeta_k$, and then bound the total subspace error, $\SE_{(t)}$, by $\zeta_* + \zeta_k$ for $t \in \mathcal{I}_{j,k}$. Assuming that, by $t=t_j$, $\zeta_*$ is small enough, i.e., $\zeta_* \le r_* \zeta$ with $\zeta < 0.00015/(r^2 f)$, we can show that within $K$ iterations, $\zeta_k$ also becomes small enough so that $\SE_{(t)} \le (r_*+c)\zeta$. However, for PCA, it is not possible to separate the subspace error in this fashion. For $k > 1$, all we can claim is that $\SE_k \lesssim \check{C} \kappa_s^+ f \ \SE_{k-1}$. Since $f$ can be large (larger than $1/\kappa_s^+$), this cannot be used to show that $\SE_k$ decreases with $k$. \subsection{Why not use all $k \alpha$ frames at $t=t_j+ k \alpha-1$} Another possible way to implement projection PCA is to use the past $k \alpha$ estimates $\Lhat_t$ at the $k^{th}$ projection PCA time, $t=t_j+ k \alpha-1$. This may actually result in an improved algorithm. We believe that it can also be analyzed using the approaches developed in this paper. However, the analysis will be more complicated. We briefly try to explain why. The perturbation seen at $t=t_j+ k \alpha-1$, $\mathcal{H}_k$, will now satisfy $\mathcal{H}_k \approx (1/ (k \alpha) )\sum_{k'=1}^k \sum_{t \in \mathcal{I}_{j,k'}} \Phi_0 (- L_t e_t' - e_t L_t' + e_t e_t' ) \Phi_0$ instead of just being approximately equal to the last ($k'=k$) term.
Bounds on each of these terms will hold with a different probability. Thus, proving a lemma similar to Lemma \ref{termbnds} will be more complicated. \section{Proof of Lemma \ref{termbnds}} \label{appendix termbnds} For convenience, we will use $\frac{1}{\alpha}\sum_t$ to denote $\frac{1}{\alpha} \sum_{t \in \mathcal{I}_{j,k}}$. The proof follows using the following key facts and the Hoeffding corollaries. \begin{fact}\label{keyfacts} Under the assumptions of Theorem \ref{thm1}, the following are true. \begin{enumerate} \item The matrices $D_{\new}$, $R_{\new}$, $E_{\new}$, $D_{*}, D_{\new,k-1}$, $\Phi_{k-1}$ are functions of the r.v. $X_{j,k-1}$. Since $X_{j,k-1}$ is independent of any $a_{t}$ for $t \in \mathcal{I}_{j,k}$, the same is true for the matrices $D_{\new}$, $R_{\new}$, $E_{\new}$, $D_{*}, D_{\new,k-1}$, $\Phi_{k-1}$. \\ All terms that we bound for the first two claims of the lemma are of the form $\frac{1}{\alpha} \sum_{t \in \mathcal{I}_{j,k}} Z_t$ where $Z_t= f_1(X_{j,k-1}) Y_t f_2(X_{j,k-1})$, $Y_t$ is a sub-matrix of $a_t a_t'$, and $f_1(\cdot)$ and $f_2(\cdot)$ are functions of $X_{j,k-1}$. Thus, conditioned on $X_{j,k-1}$, the $Z_t$'s are mutually independent. (Recall that we assume independence of the $a_t$'s.) \label{X_at_indep} \label{zt_indep} \\ All the terms that we bound for the third claim contain $e_t$. Using Lemma \ref{cslem}, conditioned on $X_{j,k-1}$, $e_t$ satisfies (\ref{etdef0}) w.p. one whenever $X_{j,k-1} \in \Gamma_{j,k-1}$. Using (\ref{etdef0}), it is easy to see that all these terms are also of the above form whenever $X_{j,k-1} \in \Gamma_{j,k-1}$. \\ Thus, conditioned on $X_{j,k-1}$, the $Z_t$'s for all the above terms are mutually independent, whenever $X_{j,k-1} \in \Gamma_{j,k-1}$.
\item It is easy to see that $\|\Phi_{k-1} P_*\|_2 \le \zeta_*$, $\zeta_0 = \|D_\new\|_2 \le 1$, $\Phi_0 D_\new = \Phi_0'D_\new = D_\new$, $\|R_\new\| \le 1$, $\|(R_\new)^{-1}\| \le 1/\sqrt{1 - \zeta_*^2}$, ${E_{\new,\perp}}' D_\new = 0$, and $\|{E_{\new}}' \Phi_0 e_t\| =\|(R_\new')^{-1} D_\new' \Phi_0 e_t\| = \|(R_\new)^{-1} D_\new' e_t\| \le \|(R_\new')^{-1} D_\new' I_{T_t}\| \|e_t\| \le \frac{\kappa_s(D_\new)}{\sqrt{1 - \zeta_*^2}}\|e_t\|$. The bounds on $\|R_\new\|$ and $\|(R_\new)^{-1}\|$ follow using Lemma \ref{lemma0} and the fact that $\sigma_{i}(R_\new) = \sigma_{i}(D_\new)$. \label{rnew} \item $X_{j,k-1} \in \Gamma_{j,k-1}$ implies that \begin{enumerate} \item $\zeta_{j,*} \le \zeta_*^+$ (By definition of $\Gamma_{j,k-1}$ (Definition \ref{Gamma_def})) \item $\zeta_{k-1}\leq \zeta_{k-1}^+ \leq 0.6^{k-1} + 0.4c\zeta$ (This follows by the definition of $\Gamma_{j,k-1}$ and Lemma \ref{expzeta}.) \end{enumerate} \label{leqzeta} \item Item \ref{leqzeta} implies that conditioned on $X_{j,k-1} \in \Gamma_{j,k-1}$ \begin{enumerate} \item $\kappa_s(D_\new) \le \kappa_s^+$ (follows by Lemma \ref{Dnew0_lem}), \item $\lambda_{\min}(R_{\new}{R_{\new}}') \geq 1-(\zeta_*^+)^2$ (follows from Lemma \ref{lemma0} and the fact that $\sigma_{\min}(R_\new) = \sigma_{\min}(D_\new)$), \item $\|{I_{T_t}}' \Phi_{k-1} P_* \|_2 \leq \|\Phi_{k-1} P_* \|_2 \leq \zeta_{j,*} \leq \zeta_{j,*}^+$, \item $\|{I_{T_t}}'D_{\new,k-1}\|_2 \leq \kappa_{s}(D_{\new,k-1}) \zeta_{k-1} \leq \kappa_s^+ \zeta_{k-1}^+$. \end{enumerate} \label{X_in_Gamma} \item By Weyl's theorem (Theorem \ref{weyl}), for a sequence of matrices $B_t$, $\lambda_{\min}(\sum_t B_t) \ge \sum_t \lambda_{\min}(B_t)$ and $\lambda_{\max}(\sum_t B_t) \le \sum_t \lambda_{\max}(B_t)$. \end{enumerate} \end{fact} \begin{proof} Consider $A_k := \frac{1}{\alpha} \sum_t {E_{\new}}' \Phi_{0} L_t {L_t}' \Phi_{0} E_{\new}$. Notice that ${E_{\new}}' \Phi_{0} L_t = R_{\new} a_{t,\new} + {E_{\new}}' D_* a_{t,*}$. 
Let $Z_t = R_{\new} a_{t,\new} {a_{t,\new}}' {R_{\new}}'$ and let $Y_t = R_{\new} a_{t,\new}{a_{t,*}}' {D_*}' {E_{\new}} + {E_{\new}}' D_* a_{t,*}{a_{t,\new}}' {R_{\new}}'$, then \begin{equation} A_k \succeq \frac{1}{\alpha} \sum_t Z_t + \frac{1}{\alpha} \sum_t Y_t \label{lemmabound_1} \end{equation} Consider $\sum_t Z_t = \sum_t R_{\new} a_{t,\new} {a_{t,\new}}' R_{\new}'$. \begin{enumerate} \item Using item \ref{zt_indep} of Fact \ref{keyfacts}, the $Z_t$'s are conditionally independent given $X_{j,k-1}$. \item Using item \ref{X_at_indep}, Ostrowski's theorem (Theorem \ref{ost}), and item \ref{X_in_Gamma}, for all $X_{j,k-1} \in \Gamma_{j,k-1}$, $\lambda_{\min}\left( \mathbf{E}(\frac{1}{\alpha}\sum_t Z_t|X_{j,k-1})\right) = \lambda_{\min}\left( R_{\new} \frac{1}{\alpha}\sum_t \mathbf{E}(a_{t,\new}{a_{t,\new}}') {R_{\new}}'\right) \ge \lambda_{\min} \left(R_{\new} {R_{\new}}'\right)\lambda_{\min} \left(\frac{1}{\alpha}\sum_t \mathbf{E}(a_{t,\new}{a_{t,\new}}')\right) \geq (1-(\zeta_{j,*}^+)^2)\lambda_{\new,k}^-$. \item Finally, using item \ref{rnew} and the bound on $\| a_t \|_{\infty}$ from the model, conditioned on $X_{j,k-1}$, $0 \preceq Z_t \preceq c \gamma_{\new,k}^2 I \preceq c \min\left((1.2)^{2k} \gamma_{\new}^2, \gamma_*^2 \right) I$ holds w.p. one for all $X_{j,k-1} \in \Gamma_{j,k-1}$. \end{enumerate} Thus, applying Corollary \ref{hoeffding_nonzero} with $\epsilon = \frac{c\zeta\lambda^-}{24}$, we get \begin{multline} \mathbf{P}\left(\lambda_{\min} \left(\frac{1}{\alpha} \sum_t Z_t\right) \geq (1-(\zeta_*^+)^2)\lambda_{\new,k}^- \right.\\ \left. - \frac{c\zeta\lambda^-}{24} \bigg| X_{j,k-1}\right) \geq \\ 1- c \exp \left(\frac{-\alpha \zeta^2 (\lambda^-)^2}{8 \cdot 24^2 \cdot \min(1.2^{4k} \gamma_{\new}^4, \gamma_*^4)}\right) \label{lemma_add_A1} \end{multline} for all $X_{j,k-1} \in \Gamma_{j,k-1}$. Consider $Y_t = R_{\new} a_{t,\new}{a_{t,*}}' {D_*}' {E_{\new}} + {E_{\new}}' D_* a_{t,*}{a_{t,\new}}' {R_{\new}}'$.
\begin{enumerate} \item Using item \ref{zt_indep}, the $Y_t$'s are conditionally independent given $X_{j,k-1}$. \item Using item \ref{X_at_indep} and the fact that $a_{t,\new}$ and $a_{t,*}$ are mutually uncorrelated, $\mathbf{E}\left(\frac{1}{\alpha}\sum_t Y_t|X_{j,k-1}\right) = 0$ for all $X_{j,k-1} \in \Gamma_{j,k-1}$. \item Using the bound on $\| a_t \|_{\infty}$, items \ref{rnew}, \ref{X_in_Gamma}, and Fact \ref{constants}, conditioned on $X_{j,k-1}$, $\|Y_t\| \le 2\sqrt{c r} \zeta_*^+ \gamma_* \gamma_{\new,k} \leq 2\sqrt{c r} \zeta_*^+ \gamma_*^2 \le 2$ holds w.p. one for all $X_{j,k-1} \in \Gamma_{j,k-1}$. \end{enumerate} Thus, under the same conditioning, $-b I \preceq Y_t \preceq b I$ with $b = 2$ w.p. one. Thus, applying Corollary \ref{hoeffding_nonzero} with $\epsilon = \frac{c\zeta\lambda^-}{24}$, we get \begin{multline} \mathbf{P}\left(\lambda_{\min} \left(\frac{1}{\alpha} \sum_t Y_t \right) \geq \frac{-c\zeta\lambda^-}{24} \Big| X_{j,k-1} \right) \geq \\ 1- c \exp \left( \frac{-\alpha c^2 \zeta^2(\lambda^-)^2} {8 \cdot 24^2 \cdot (2b)^2}\right) \ \text{for all $X_{j,k-1} \in \Gamma_{j,k-1}$} \label{lemma_add_A2} \end{multline} Combining (\ref{lemmabound_1}), (\ref{lemma_add_A1}) and (\ref{lemma_add_A2}) and using the union bound, $\mathbf{P} (\lambda_{\min}(A_k) \geq \lambda_{\new,k}^-(1 - (\zeta_*^+)^2) - \frac{c\zeta\lambda^-}{12}| X_{j,k-1}) \geq 1-p_a(\alpha,\zeta) \ \text{for all $X_{j,k-1} \in \Gamma_{j,k-1}$}$. The first claim of the lemma follows by using $\lambda_{\new,k}^- \ge \lambda^-$ and then applying Lemma \ref{rem_prob} with $X \equiv X_{j,k-1}$ and $\mathcal{C} \equiv \Gamma_{j,k-1}$. Now consider $A_{k,\perp} := \frac{1}{\alpha} \sum_t {E_{\new,\perp}}' \Phi_{0} L_t {L_t}' \Phi_{0} E_{\new,\perp}$. Using item \ref{rnew}, ${E_{\new,\perp}}' \Phi_{0} L_t = {E_{\new,\perp}}' D_* a_{t,*}$. 
Thus, $A_{k,\perp} = \frac{1}{\alpha} \sum_t Z_t$ with $Z_t={E_{\new,\perp}}' D_* a_{t,*} {a_{t,*}}' {D_*}' E_{\new,\perp}$ which is of size $(n-c)\times (n-c)$. Using the same ideas as above, we can show that $0 \preceq Z_t \preceq r (\zeta_*^+)^2 \gamma_*^2 I \preceq \zeta I$ and $\mathbf{E}\left(\frac{1}{\alpha}\sum_t Z_t|X_{j,k-1}\right) \preceq (\zeta_*^+)^2 \lambda^+ I$. Thus, by Corollary \ref{hoeffding_nonzero} with $\epsilon = \frac{c\zeta\lambda^-}{24}$ and Lemma \ref{rem_prob}, the second claim follows. Using the expression for $\mathcal{H}_k$ given in Definition \ref{defHk}, it is easy to see that \begin{align} \|\mathcal{H}_k \|_2 &\leq \max\{ \|H_k\|_2, \|H_{k,\perp}\|_2 \} + \|B_k\|_2 \nonumber \\ &\leq \Big\|\frac{1}{\alpha} \sum_t e_t {e_t}'\Big\|_2 + \max(\|T2\|_2, \|T4\|_2) + \|B_k\|_2 \label{add_calH1} \end{align} where $T2:= \frac{1}{\alpha} \sum_t {E_{\new}}' \Phi_{0}( L_t {e_t}' + e_t {L_t}')\Phi_{0} E_{\new}$ and $T4 :=\frac{1}{\alpha} \sum_t {E_{\new,\perp}}'\Phi_{0} (L_t {e_t}' + e_t {L_t}')\Phi_{0} E_{\new,\perp}$. The second inequality follows by using the facts that (i) $H_k = T1 - T2$ where $T1 := \frac{1}{\alpha} \sum_t {E_{\new}}' \Phi_{0} e_t {e_t}'\Phi_{0} E_{\new}$, (ii) $H_{k,\perp} = T3 - T4$ where $T3 := \frac{1}{\alpha} \sum_t {E_{\new,\perp}}'\Phi_0 e_t {e_t}'\Phi_0 E_{\new,\perp}$, and (iii) $\max(\|T1\|_2, \|T3\|_2) \le \|\frac{1}{\alpha} \sum_t e_t {e_t}'\|_2$. Next, we obtain high probability bounds on each of the terms on the RHS of (\ref{add_calH1}) using the Hoeffding corollaries. Consider $\|\frac{1}{\alpha} \sum_t e_t {e_t}'\|_2$. Let $Z_t = e_t {e_t}'$. \begin{enumerate} \item Using item \ref{zt_indep}, conditioned on $X_{j,k-1}$, the various $Z_t$'s in the summation are independent, for all $X_{j,k-1} \in \Gamma_{j,k-1}$. \item Using item \ref{X_in_Gamma}, and the bound on $\| a_t \|_{\infty}$, conditioned on $X_{j,k-1}$, $0 \preceq Z_t \preceq b_1 I$ w.p. one for all $X_{j,k-1} \in \Gamma_{j,k-1}$.
Here $b_1:=(\kappa_s^+ \zeta_{k-1}^+ \phi^+ \sqrt{c} \gamma_{\new,k} + \zeta_*^+ \phi^+ \sqrt{r} \gamma_*)^2$. \item Also using item \ref{X_in_Gamma}, $0 \preceq \frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X_{j,k-1}) \preceq b_2I$, with $b_2:= (\kappa_s^+)^2 (\zeta_{k-1}^+)^2 (\phi^+)^2 \lambda_{\new,k}^+ + (\zeta_*^+)^2 (\phi^+)^2 \lambda^+$ for all $X_{j,k-1} \in \Gamma_{j,k-1}$. \end{enumerate} Thus, applying Corollary \ref{hoeffding_nonzero} with $\epsilon = \frac{c\zeta\lambda^-}{24}$, \begin{multline} \mathbf{P} \left( \Big\|\frac{1}{\alpha} \sum_t e_t {e_t}' \Big\|_2 \leq b_2 + \frac{c\zeta\lambda^-}{24} \Big| X_{j,k-1} \right) \geq \\ 1- n \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{ 8 \cdot 24^2 b_1^2}\right) \ \text{for all $X_{j,k-1} \in \Gamma_{j,k-1}$} \label{add_etet} \end{multline} Consider $T2$. Let $Z_t: = {E_{\new}}' \Phi_{0} (L_t {e_t}' + e_t{L_t}')\Phi_{0} E_{\new}$ which is of size $c \times c$. Then $T2 = \frac{1}{\alpha} \sum_t Z_t$. \begin{enumerate} \item Using item \ref{zt_indep}, conditioned on $X_{j,k-1}$, the various $Z_t$'s used in the summation are mutually independent, for all $X_{j,k-1} \in \Gamma_{j,k-1}$. Using item \ref{rnew}, ${E_{\new}}'\Phi_{0} L_t = R_{\new} a_{t,\new} + {E_\new}' D_* a_{t,*}$ and ${E_{\new}}'\Phi_{0} e_t = ({R_{\new}}')^{-1} {D_{\new}}' e_t$. \item Thus, using items \ref{rnew}, \ref{X_in_Gamma}, and the bound on $\| a_t \|_{\infty}$, it follows that conditioned on $X_{j,k-1}$, $\|Z_t\|_2 \leq 2 \tilde{b}_3 \leq 2 b_3$ w.p. one for all $X_{j,k-1} \in \Gamma_{j,k-1}$. 
Here, $\tilde{b}_3:= \frac{\kappa_s^+}{\sqrt{1-(\zeta_*^+)^2}} \phi^+( \kappa_s^+ \zeta_{k-1}^+ \sqrt{c} \gamma_{\new,k} + \sqrt{r} \zeta_*^+ \gamma_*)(\sqrt{c} \gamma_{\new,k} + \sqrt{r} \zeta_*^+ \gamma_*)$ and $b_3:= \frac{1}{\sqrt{1-(\zeta_*^+)^2}}( \phi^+ c {\kappa_s^+}^2 \zeta_{k-1}^+ \gamma_{\new,k}^2 + \phi^+ \sqrt{rc} {\kappa_s^+}^2 \zeta_{k-1}^+ \zeta_*^+ \gamma_{\new,k} \gamma_* + \phi^+ \sqrt{rc} \kappa_s^+ \zeta_*^+ \gamma_* \gamma_{\new,k} + \phi^+ r {\zeta_*^+}^2 \gamma_*^2)$. \item Also, $\|\frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X_{j,k-1})\|_2 \leq 2 \tilde{b}_4 \leq 2 b_4$ where $\tilde{b}_4: = \frac{\kappa_s^+}{\sqrt{1-(\zeta_*^+)^2}}\phi^+ \kappa_s^+ \zeta_{k-1}^+ \lambda_{\new,k}^+ + \frac{\kappa_s^+}{\sqrt{1-(\zeta_*^+)^2}} \phi^+ (\zeta_*^+)^2 \lambda^+$ and $b_4:=\frac{\kappa_s^+}{\sqrt{1-(\zeta_*^+)^2}}\phi^+ \kappa_s^+ \zeta_{k-1}^+ \lambda_{\new,k}^+ + \frac{1}{\sqrt{1-(\zeta_*^+)^2}} \phi^+ (\zeta_*^+)^2 \lambda^+$. \end{enumerate} Thus, applying Corollary \ref{hoeffding_rec} with $\epsilon = \frac{c\zeta\lambda^-}{24}$, \begin{multline} \mathbf{P}\left( \|T2\|_2 \leq 2 b_4 + \frac{c\zeta\lambda^-}{24}\Big|X_{j,k-1}\right) \\ \geq 1- c \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{32 \cdot 24^2 \cdot 4 b_3^2}\right) \ \text{for all $X_{j,k-1} \in \Gamma_{j,k-1}$} \nonumber \end{multline} Consider $T4$. Let $Z_t: = {E_{\new,\perp}}'\Phi_{0} (L_t {e_t}' + e_t{L_t}')\Phi_{0} E_{\new,\perp}$ which is of size $(n-c)\times (n-c)$. Then $T4 = \frac{1}{\alpha} \sum_t Z_t$. \begin{enumerate} \item Using item \ref{zt_indep}, conditioned on $X_{j,k-1}$, the various $Z_t$'s used in the summation are mutually independent, for all $X_{j,k-1} \in \Gamma_{j,k-1}$. Using item \ref{rnew}, ${E_{\new,\perp}}'\Phi_{0} L_t ={E_{\new,\perp}}' D_* a_{t,*}$. \item Thus, conditioned on $X_{j,k-1}$, $\|Z_t\|_2 \leq 2b_5$ w.p. one for all $X_{j,k-1} \in \Gamma_{j,k-1}$. 
Here $b_5:= \phi^+ r(\zeta_*^+)^2 \gamma_*^2 + \phi^+ \sqrt{rc} \kappa_s^+ \zeta_*^+ \zeta_{k-1}^+ \gamma_* \gamma_{\new,k}$. This follows using item \ref{X_in_Gamma} and the bound on $\| a_t \|_{\infty}$. \item Also, $\|\frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X_{j,k-1})\|_2 \leq 2 b_6, \ b_6:= \phi^+ (\zeta_*^+)^2 \lambda^+$. \end{enumerate} Applying Corollary \ref{hoeffding_rec} with $\epsilon = \frac{c\zeta\lambda^-}{24}$, \begin{multline} \mathbf{P} \left( \|T4 \|_2 \leq 2b_6 + \frac{c\zeta\lambda^-}{24} \Big| X_{j,k-1}\right) \geq \\ 1- (n-c) \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{32 \cdot 24^2 \cdot 4 b_5^2}\right) \ \text{for all $X_{j,k-1} \in \Gamma_{j,k-1}$} \nonumber \end{multline} Consider $\max(\|T2 \|_2,\|T4 \|_2)$. Since $b_3 > b_5$ (this follows because $\zeta_{k-1}^+ \le 1$) and $b_4 > b_6$, we have $2b_6 + \frac{c\zeta\lambda^-}{24} < 2b_4 + \frac{c\zeta\lambda^-}{24}$ and $1- (n-c) \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{32\cdot 24^2 \cdot 4 b_5^2}\right) > 1- (n-c) \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{32 \cdot 24^2 \cdot 4 b_3^2}\right)$. Therefore, for all $X_{j,k-1} \in \Gamma_{j,k-1}$, $\mathbf{P}\left( \|T4 \|_2 \leq 2 b_4 + \frac{c\zeta\lambda^-}{24} \Big| X_{j,k-1} \right) \geq 1- (n-c) \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{32 \cdot 24^2 \cdot 4 b_3^2}\right)$. By the union bound, for all $X_{j,k-1} \in \Gamma_{j,k-1}$, \begin{multline} \mathbf{P} \left( \max(\|T2 \|_2,\|T4 \|_2)\leq 2b_4 + \frac{c\zeta\lambda^-}{24} \Big|X_{j,k-1}\right) \geq \\ 1- n \exp\left(\frac{-\alpha c^2\zeta^2 (\lambda^-)^2}{32 \cdot 24^2 \cdot 4b_3^2}\right) \label{add_maxT} \end{multline} Consider $\|B_k\|_2$. Let $Z_t := {E_{\new,\perp}}'\Phi_{0} (L_t-e_t)({L_t}'-{e_t}')\Phi_{0} E_{\new}$ which is of size $(n-c)\times c$. Then $B_k = \frac{1}{\alpha} \sum_t Z_t$.
Using item \ref{rnew}, ${E_{\new,\perp}}'\Phi_{0} (L_t-e_t) = {E_{\new,\perp}}'( D_{*} a_{t,*} - \Phi_{0} e_t)$, ${E_{\new}}' \Phi_{0} (L_t - e_t) = R_{\new} a_{t,\new}+ {E_{\new}}' D_* a_{t,*} + (R_\new')^{-1} D_\new' e_t$. Also, $\|Z_t\|_2 \leq b_7$ w.p. one for all $X_{j,k-1} \in \Gamma_{j,k-1}$ and $\|\frac{1}{\alpha} \sum_t \mathbf{E}(Z_t|X_{j,k-1})\|_2 \leq b_8$ for all $X_{j,k-1} \in \Gamma_{j,k-1}$. Here \begin{align*} b_7 := &(\sqrt{r} \zeta_*^+ (1+ \phi^+)\gamma_* + (\kappa_s^+) \zeta_{k-1}^+ \phi^+ \sqrt{c} \gamma_{\new,k})\cdot \\ &\left( \sqrt{c} \gamma_{\new,k} + \sqrt{r} \zeta_*^+ \left(1+\frac{1}{\sqrt{1-(\zeta_*^+)^2}} \kappa_s^+ \phi^+\right) \gamma_* + \right.\\ & \left. \frac{1}{\sqrt{1-(\zeta_*^+)^2}} {\kappa_s^+}^2 \zeta_{k-1}^+ \phi^+ \sqrt{c} \gamma_{\new,k}\right) \end{align*} and \begin{align*} b_8 := &\left(\kappa_s^+ \zeta_{k-1}^+ \phi^+ + \frac{1}{\sqrt{1-(\zeta_*^+)^2}}(\kappa_s^+)^3 (\zeta_{k-1}^+)^2 (\phi^+)^2\right) \lambda_{\new,k}^+ \\ &+ (\zeta_*^+)^2 \left(1 + \phi^+ + \frac{1}{\sqrt{1-(\zeta_*^+)^2}}\kappa_s^+ \phi^+ + \right. \\ &\hspace{1.7in}\left. 
\frac{1}{\sqrt{1-(\zeta_*^+)^2}}\kappa_s^+(\phi^+)^2 \right) \lambda^+ \end{align*} Thus, applying Corollary \ref{hoeffding_rec} with $\epsilon=\frac{c\zeta\lambda^-}{24}$, \begin{multline} \mathbf{P} \left(\|B_k\|_2 \leq b_8 + \frac{c\zeta\lambda^-}{24} \Big| X_{j,k-1}\right) \geq \\ 1 - n \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{32 \cdot 24^2 b_7^2}\right) \ \text{for all $X_{j,k-1} \in \Gamma_{j,k-1}$} \label{Bk} \end{multline} Using (\ref{add_calH1}), (\ref{add_etet}), (\ref{add_maxT}) and (\ref{Bk}) and the union bound, for any $X_{j,k-1} \in \Gamma_{j,k-1}$, \begin{align*} &\mathbf{P} \left(\|\mathcal{H}_k\|_2 \leq b_9 + \frac{c\zeta\lambda^-}{8} \Big|X_{j,k-1}\right)\geq \\ & 1-n \exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2}{8 \cdot 24^2 b_1^2}\right)- n \exp\left(\frac{-\alpha c^2\zeta^2 (\lambda^-)^2}{32\cdot 24^2 \cdot 4 b_3^2}\right) \\ &\hspace{1.7in} - n\exp\left(\frac{-\alpha c^2 \zeta^2 (\lambda^-)^2 }{32 \cdot 24^2 b_7^2}\right) \end{align*} where \begin{align*} b_9 &:= b_2 +2b_4+ b_8 \\ &= \left( (\frac{2(\kappa_s^+)^2 \phi^+}{\sqrt{1-(\zeta_*^+)^2}} + \kappa_s^+ \phi^+ )\zeta_{k-1}^+ + \right.\\ &\hspace{.8in}\left. ( (\kappa_s^+)^2 (\phi^+)^2 + \frac{(\kappa_s^+)^3 (\phi^+)^2 }{\sqrt{1-(\zeta_*^+)^2}}) (\zeta_{k-1}^+)^2 \right) \lambda_{\new,k}^+ \ \\ & + \left((\phi^+)^2 + \frac{2\phi^+ }{\sqrt{1-(\zeta_*^+)^2}} + 1 + \phi^+ + \right.\\ &\hspace{1in}\left.\frac{\kappa_s^+ \phi^+}{\sqrt{1-(\zeta_*^+)^2}} + \frac{\kappa_s^+(\phi^+)^2 }{\sqrt{1-(\zeta_*^+)^2}} \right) (\zeta_*^+)^2 \lambda^+ \end{align*} Using $\lambda_{\new,k}^- \ge \lambda^-$ and $f := \lambda^+/\lambda^-$, $b_9 + \frac{c\zeta\lambda^-}{8} \le \lambda_{\new,k}^- (b + 0.125c\zeta)$ where $b$ is defined in Definition \ref{zetakplus}. Using Fact \ref{constants} and substituting $\kappa_s^+ = 0.15$, $\phi^+=1.2$, one can upper bound $b_1$, $b_3$ and $b_7$ and show that the above probability is lower bounded by $1- p_c(\alpha,\zeta)$. 
Finally, applying Lemma \ref{rem_prob}, the third claim of the lemma follows. \end{proof} \section{Proof of Lemma \ref{lem_add}} \section{Proof of Lemma \ref{bound_R}}\label{proof_lem_bound_R} \begin{proof} [Proof of Lemma \ref{bound_R}] \begin{enumerate} \item The first claim follows because $\|D_{\text{det},k}\|_2 = \|\Psi_{k-1} G_{\text{det},k}\|_2 = \| \Psi_{k-1} [G_1 G_2 \cdots G_{k-1}]\|_2 \leq \sum_{k_1=1}^{k-1}\|\Psi_{k-1} G_{k_1}\|_2 \leq \sum_{k_1=1}^{k-1} \|\Psi_{k_1} G_{k_1}\|_2 = \sum_{k_1=1}^{k-1} \tilde{\zeta}_{k_1} \leq \sum_{k_1=1}^{k-1} \tilde{c}_{k_1} \zeta \leq r\zeta$. The first inequality follows by triangle inequality. The second one follows because $\hat{G}_1,\cdots,\hat{G}_{k-1}$ are mutually orthonormal and so $\Psi_{k-1} = \prod_{k_2=1}^{k-1}(I - \hat{G}_{k_2}{\hat{G}_{k_2}}')$. \item By the first claim, $\|(I - \hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}') G_{\text{det},k} \|_2 =\|\Psi_{k-1} G_{\text{det},k}\|_2 \leq r\zeta$. By item 2) of Lemma \ref{lemma0} with $P = G_{\text{det},k}$ and $\hat{P} = \hat{G}_{\text{det},k}$, the result $\|G_{\text{det},k} {G_{\text{det},k}}' - \hat{G}_{\text{det},k}{\hat{G}_{\text{det},k}}'\|_2 \leq 2 r\zeta$ follows. \item Recall that $D_k \overset{QR}{=} E_k R_k$ is a QR decomposition where $E_k$ is orthonormal and $R_k$ is upper triangular. Therefore, $\sigma_i(D_k) = \sigma_i (R_k)$. Since $\|(I - \hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}')G_{\text{det},k}\|_2 =\|\Psi_{k-1} G_{\text{det},k}\|_2 \leq r\zeta$ and $G_k' G_{\text{det},k} = 0$, by item 4) of Lemma \ref{lemma0} with $P=G_{\text{det},k}$, $\Phat=\hat{G}_{\text{det},k}$ and $Q=G_k$, we have $\sqrt{1-r^2\zeta^2} \leq \sigma_i((I - \hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}')G_k)=\sigma_i(D_k)\leq 1$. 
\item Since $D_k \overset{QR}{=} E_k R_k$, we have $\|{D_{\text{undet},k}}'E_k \|_2 =\|{D_{\text{undet},k}}'D_k R_k^{-1} \|_2 = \|{G_{\text{undet},k}}'\Psi_{k-1}' \Psi_{k-1} G_k R_k^{-1} \|_2 = \|{G_{\text{undet},k}}'\Psi_{k-1} G_k R_k^{-1} \|_2 = \|{G_{\text{undet},k}}'D_k R_k^{-1} \|_2 = \|{G_{\text{undet},k}}'E_k\|_2$. Since $E_k = D_k R_k^{-1} = ( I -\hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}') G_k R_k^{-1}$, \begin{eqnarray} \|{G_{\text{undet},k}}'E_k \|_2 &=& \| {G_{\text{undet},k}}' ( I -\hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}') G_k R_k^{-1}\|_2 \nonumber \\ &\leq& \frac{\| {G_{\text{undet},k}}' ( I -\hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}') G_k \|_2}{ \sqrt{1-r^2 \zeta^2}} \nonumber \\ &=& \frac{\| {G_{\text{undet},k}}' \hat{G}_{\text{det},k} {\hat{G}_{\text{det},k}}' G_k \|_2}{\sqrt{1-r^2 \zeta^2}} \nonumber \end{eqnarray} By item 3) of Lemma \ref{lemma0} with $P = {G}_{\text{det},k}$, $\Phat = \hat{G}_{\text{det},k}$ and $Q= G_{\text{undet},k}$, we get $\| {G_{\text{undet},k}}' \hat{G}_{\text{det},k}\|_2 \leq r\zeta$. By item 3) of Lemma \ref{lemma0} with $P = {G}_{\text{det},k}$, $\Phat = \hat{G}_{\text{det},k}$ and $Q= G_k$, we get $ \| {\hat{G}_{\text{det},k}}'G_{k} \|_2 \leq r\zeta$. Therefore, $\|{G_{\text{undet},k}}'E_k \|_2 = \|{E_k}' G_{\text{undet},k}\|_2 \leq \frac{r^2 \zeta^2}{\sqrt{1-r^2\zeta^2}}$. \end{enumerate} \end{proof} \section{Proof of Lemma \ref{lem_bound_terms}} \label{proof_lem_bound_terms} \begin{proof} We use $\frac{1}{\tilde{\alpha}}\sum_t$ to denote $\frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{\mathcal{I}}_{j,k}}$. For $ t \in \tilde{\mathcal{I}}_{j,k}$, let $a_{t,k} := {G_{j,k}}'L_t$, $a_{t,\text{det}} := {G_{\text{det},k}}'L_t = [G_{j,1},\cdots, G_{j,k-1}]'L_t$ and $a_{t,\text{undet}}:= {G_{\text{undet},k}}'L_t = [G_{j,k+1},\cdots, G_{j,\vartheta_j}]'L_t$. Then $a_t:= P_j'L_t$ can be split as $a_t = [ a_{t,\text{det}}' \ a_{t,k}' \ a_{t,\text{undet}}']'$.
This lemma follows using the following facts and the Hoeffding corollaries, Corollaries \ref{hoeffding_nonzero} and \ref{hoeffding_rec}. \begin{enumerate} \item The matrices $D_k$, $R_k$, $E_k$, $D_{\text{det},k}, D_{\text{undet},k}$, $\Psi_{k-1}$, $\Phi_K$ are functions of the r.v. $\tilde X_{j,k-1}$. All terms that we bound for the first two claims of the lemma are of the form $\frac{1}{\tilde{\alpha}} \sum_{t \in \mathcal{\tilde{I}}_{j,k}} Z_t$ where $Z_t= f_1(\tilde X_{j,k-1}) Y_t f_2(\tilde X_{j,k-1})$, $Y_t$ is a sub-matrix of $a_t a_t'$, and $f_1(\cdot)$ and $f_2(\cdot)$ are functions of $\tilde X_{j,k-1}$. For instance, one of the terms while bounding $\lambda_{\min}(\mathcal{A}_k)$ is $\frac{1}{\tilde{\alpha}} \sum_t R_k a_{t,k} {a_{t,k}}'{R_k}'$. $\tilde X_{j,k-1}$ is independent of any $a_{t}$ for $t \in \mathcal{\tilde{I}}_{j,k}$, and hence the same is true for the matrices $D_k$, $R_k$, $E_k$, $D_{\text{det},k}, D_{\text{undet},k}$, $\Psi_{k-1}$, $\Phi_K$. Also, $a_{t}$'s for different $t \in \mathcal{\tilde{I}}_{j,k}$ are mutually independent. Thus, conditioned on $\tilde X_{j,k-1}$, the $Z_t$'s defined above are mutually independent. \label{X_at_indep} \item All the terms that we bound for the third claim contain $e_t$. Using Lemma \ref{cslem}, conditioned on $\tilde X_{j,k-1}$, $e_t$ satisfies (\ref{etdef0}) w.p. one whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Conditioned on $\tilde X_{j,k-1}$, all these terms are also of the form $\frac{1}{\tilde{\alpha}} \sum_{t \in \mathcal{\tilde{I}}_{j,k}} Z_t$ with $Z_t$ as defined above, whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Thus, conditioned on $\tilde X_{j,k-1}$, the $Z_t$'s for these terms are mutually independent, whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$.
\item By Remark \ref{Gamma_rem} and the definition of $\tilde\Gamma_{j,k-1}$, $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$ implies that (i) $\zeta_{*} \le r \zeta$; (ii) $\tilde\zeta_{k'} \le c_{k'} \zeta$ for all $k'=1,2,\dots k-1$; (iii) $\zeta_K \le \zeta_K^+ \le c \zeta$; (iv) $\phi_K \le \phi^+$ (by Lemma \ref{cslem}); (v) $\|\Phi_K P_j\|_2 \le (r+c)\zeta$; and (vi) all conclusions of Lemma \ref{bound_R} hold. \item By the clustering assumption, $ \lambda_k^- \le \lambda_{\min}(\mathbf{E}(a_{t,k}{a_{t,k}}')) \le \lambda_{\max}(\mathbf{E}(a_{t,k}{a_{t,k}}')) \le \lambda_k^+$; $\lambda_{\max}(\mathbf{E}(a_{t,\text{det}}{a_{t,\text{det}}}')) \le \lambda_1^+ = \lambda^+$; and $\lambda_{\max}(\mathbf{E}(a_{t,\text{undet}}{a_{t,\text{undet}}}')) \le \lambda_{k+1}^+$. Also, $\lambda_{\max}(\mathbf{E}(a_t a_t')) \le \lambda^+$. \item By Weyl's theorem, for a sequence of Hermitian matrices $B_t$, $\lambda_{\min}(\sum_t B_t) \ge \sum_t \lambda_{\min}(B_t)$ and $\lambda_{\max}(\sum_t B_t) \le \sum_t \lambda_{\max}(B_t)$. \end{enumerate} Consider $\tilde{A}_k = \frac{1}{\tilde{\alpha}} \sum_t {E_k}' \Psi_{k-1} L_t{L_t}' \Psi_{k-1} E_k$. Notice that ${E_k}' \Psi_{k-1} L_t = R_k a_{t,k} + {E_k}'(D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}})$. Let $Z_t = R_k a_{t,k} {a_{t,k}}'{R_k}'$ and let $Y_t = R_k a_{t,k} ({a_{t,\text{det}}}'{D_{\text{det},k}}' + {a_{t,\text{undet}}}'{D_{\text{undet},k}}')E_k + E_k'(D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}}) {a_{t,k}}'{R_k}'$. Then \begin{equation} \tilde{A}_k \succeq \frac{1}{\tilde{\alpha}} \sum_t Z_t + \frac{1}{\tilde{\alpha}} \sum_t Y_t\label{lemmabound_1_pt2} \end{equation} Consider $\frac{1}{\tilde{\alpha}} \sum_t Z_t = \frac{1}{\tilde{\alpha}} \sum_t R_k a_{t,k} {a_{t,k}}'{R_k}'$. (a) As explained above, the $Z_t$'s are conditionally independent given $\tilde X_{j,k-1}$.
(b) Using Ostrowski's theorem and Lemma \ref{bound_R}, for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, $\lambda_{\min}( \mathbf{E}(\frac{1}{\tilde{\alpha}}\sum_t Z_t|\tilde X_{j,k-1})) = \lambda_{\min}( R_{k} \frac{1}{\tilde{\alpha}}\sum_t \mathbf{E}(a_{t,k}{a_{t,k}}') {R_{k}}') \ge \lambda_{\min} (R_{k} {R_{k}}')\lambda_{\min} (\frac{1}{\tilde{\alpha}}\sum_t \mathbf{E}(a_{t,k}{a_{t,k}}')) \geq (1- r^2 \zeta^2)\lambda_{k}^-$. (c) Finally, using $\|R_k\|_2 \leq 1$ and $\|a_{t,k}\|_2 \leq \sqrt{\tilde{c}_k} \gamma_* $, conditioned on $\tilde X_{j,k-1}$, $0 \preceq Z_t \preceq \tilde{c}_k \gamma_{*}^2 I $ holds w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Thus, applying Corollary \ref{hoeffding_nonzero} with $\epsilon = 0.1 \zeta \lambda^-$, and using $\tilde{c}_k \leq r$, for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, \begin{multline} \mathbf{P}\left(\lambda_{\min} \Big(\frac{1}{\tilde{\alpha}} \sum_t Z_t\Big) \geq (1- r^2\zeta^2)\lambda_{k}^- - 0.1 \zeta \lambda^- \Big| \tilde X_{j,k-1}\right) \geq \\ 1- \tilde{c}_k \exp \left(\frac{-\tilde{\alpha} \epsilon^2 }{8 (\tilde{c}_k \gamma_{*}^2)^2}\right) \geq 1 - r \exp \left(\frac{-\tilde{\alpha} \cdot (0.1 \zeta \lambda^-)^2 }{8 r^2 \gamma_{*}^4}\right) \label{lemma_add_A1_pt2} \end{multline} Consider $Y_t = R_k a_{t,k} ({a_{t,\text{det}}}'{D_{\text{det},k}}' + {a_{t,\text{undet}}}'{D_{\text{undet},k}}')E_k + E_k'(D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}}) {a_{t,k}}'{R_k}'$. (a) As before, the $Y_t$'s are conditionally independent given $\tilde X_{j,k-1}$. (b) Since $\mathbf{E}[a_t]=0$ and $\text{Cov}[a_t]=\Lambda_t$ is diagonal, $\mathbf{E}(\frac{1}{\tilde{\alpha}}\sum_t Y_t|\tilde X_{j,k-1}) = 0$ whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$.
(c) Conditioned on $\tilde X_{j,k-1}$, $\|Y_t\|_2 \le 2 \sqrt{\tilde{c}_k r} \gamma_*^2 r\zeta(1+ \frac{r\zeta}{\sqrt{1-r^2\zeta^2}}) \leq 2 r^2 \zeta \gamma_*^2 ( 1+ \frac{10^{-4}}{\sqrt{1-10^{-4}}}) \leq \frac{2}{r} ( 1+ \frac{10^{-4}}{\sqrt{1-10^{-4}}}) < 2.1$ holds w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. This follows because $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$ implies that $\|D_{\text{det},k}\|_2 \leq r\zeta$, $\|{E_k}' D_{\text{undet},k}\|_2 = \| {E_k}' G_{\text{undet},k}\|_2 \leq \frac{r^2 \zeta^2}{\sqrt{1-r^2\zeta^2}}$. Thus, under the same conditioning, $-b I \preceq Y_t \preceq b I$ with $b =2.1$ w.p. one. Thus, applying Corollary \ref{hoeffding_nonzero} with $\epsilon = 0.1 \zeta \lambda^-$, we get \begin{multline} \mathbf{P}\left(\lambda_{\min} \Big(\frac{1}{\tilde{\alpha}} \sum_t Y_t\Big) \geq - 0.1 \zeta \lambda^- \Big| \tilde X_{j,k-1} \right) \geq \\ 1- r \exp \left( \frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2} {8 \cdot (4.2)^2 }\right) \ \text{for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$} \label{lemma_add_A2_pt2} \end{multline} Combining (\ref{lemmabound_1_pt2}), (\ref{lemma_add_A1_pt2}) and (\ref{lemma_add_A2_pt2}) and using the union bound, $\mathbf{P} (\lambda_{\min}(\tilde{A}_k) \geq \lambda_{k}^-(1 - r^2\zeta^2) - 0.2 \zeta \lambda^-| \tilde X_{j,k-1}) \geq 1-\tilde{p}_1(\tilde{\alpha},\zeta) \ \text{for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$}$ where \begin{equation} \tilde{p}_1 (\tilde{\alpha},\zeta) := r \exp \left(\frac{-\tilde{\alpha} \cdot (0.1 \zeta \lambda^-)^2 }{8 r^2 \gamma_{*}^4}\right) + r \exp \left( \frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2} {8 \cdot (4.2)^2}\right) \label{prob1} \end{equation} The first claim of the lemma follows by using $\lambda_{k}^- \ge \lambda^-$ and applying Lemma \ref{rem_prob} with $X \equiv \tilde X_{j,k-1}$ and $\mathcal{C} \equiv \tilde\Gamma_{j,k-1}$.
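The matrix-Hoeffding concentration invoked for this claim can be sanity-checked numerically. The following minimal Python sketch is ours and is purely illustrative (it is not the proof's exact setting): it draws bounded, zero-mean i.i.d. two-dimensional vectors with a diagonal covariance, playing the role of the $a_{t,k}$'s, and checks that $\lambda_{\min}$ of the empirical covariance stays close to $\lambda_{\min}$ of the true covariance.

```python
import random

random.seed(0)
alpha = 20000                 # number of i.i.d. samples a_t
g1, g2 = 2.0, 1.0             # (a_t)_i ~ Uniform[-g_i, g_i], so the true
                              # covariance is diag(g_i^2 / 3)
s11 = s22 = s12 = 0.0
for _ in range(alpha):
    a1 = random.uniform(-g1, g1)
    a2 = random.uniform(-g2, g2)
    s11 += a1 * a1
    s22 += a2 * a2
    s12 += a1 * a2
s11 /= alpha; s22 /= alpha; s12 /= alpha

# lambda_min of the 2x2 empirical covariance [[s11, s12], [s12, s22]]
tr = s11 + s22
det = s11 * s22 - s12 * s12
lam_min_emp = tr / 2.0 - ((tr / 2.0) ** 2 - det) ** 0.5
lam_min_true = min(g1, g2) ** 2 / 3.0

# Hoeffding-style concentration: the deviation shrinks as alpha grows
assert abs(lam_min_emp - lam_min_true) < 0.1 * lam_min_true
```

With a sample size in the tens of thousands, the relative deviation of $\lambda_{\min}$ is well below $10\%$, consistent with the $\exp(-\tilde{\alpha}\epsilon^2/\cdot)$ tails above.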
Consider $\tilde{A}_{k,\perp} := \frac{1}{\tilde{\alpha}} \sum_t {E_{k,\perp}}' \Psi_{k-1} L_t {L_t}' \Psi_{k-1} E_{k,\perp}$. Notice that ${E_{k,\perp}}' \Psi_{k-1} L_t = {E_{k,\perp}}' (D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}})$. Thus, $\tilde{A}_{k,\perp} = \frac{1}{\tilde{\alpha}} \sum_t Z_t$ with $Z_t={E_{k,\perp}}' (D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}})(D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}})'E_{k,\perp}$ which is of size $(n-\tilde{c}_k)\times (n-\tilde{c}_k)$. (a) As before, given $\tilde X_{j,k-1}$, the $Z_t$'s are independent. (b) Conditioned on $\tilde X_{j,k-1}$, $0 \preceq Z_t \preceq r \gamma_*^2 I$ w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. (c) $\mathbf{E}(\frac{1}{\tilde{\alpha}}\sum_t Z_t|\tilde X_{j,k-1}) \preceq (\lambda_{k+1}^+ + r^2\zeta^2 \lambda^+)I$ for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Thus applying Corollary \ref{hoeffding_nonzero} with $\epsilon =0.1 \zeta \lambda^-$ and using $\tilde{c}_k \geq \tilde{c}_{\min}$, we get \begin{multline} \mathbf{P}(\lambda_{\max}(\tilde{A}_{k,\perp}) \leq \lambda_{k+1}^+ + r^2 \zeta^2 \lambda^+ + 0.1 \zeta \lambda^- | \tilde X_{j,k-1}) \geq\\ 1- \tilde{p}_2(\tilde{\alpha},\zeta) \ \text{for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$} \end{multline} where \begin{equation} \tilde{p}_2(\tilde{\alpha},\zeta) := (n-\tilde{c}_{\min}) \exp \left(\frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2}{8 r^2 \gamma_*^4 }\right) \label{prob2} \end{equation} The second claim follows using $\lambda_{k}^- \ge \lambda^-$, $f:=\lambda^+/\lambda^-$, $\tilde{h}_k := {\lambda_{k+1}}^+ / {\lambda_{k}}^-$ in the above expression and applying Lemma \ref{rem_prob}. Consider the third claim.
Using the expression for $\tilde{\mathcal{H}}_k$ given in Definition \ref{defHk_del}, it is easy to see that \begin{align} \|\tilde{\mathcal{H}}_k \|_2 &\leq \max\{ \|\tilde{H}_k\|_2, \|\tilde{H}_{k,\perp}\|_2 \} + \|\tilde{B}_k\|_2 \nonumber \\ &\leq \Big\|\frac{1}{\tilde{\alpha}} \sum_t e_t {e_t}' \Big\|_2 + \max(\|T2\|_2, \|T4\|_2) + \|\tilde{B}_k\|_2 \label{add2_calH1} \end{align} where $T2:= \frac{1}{\tilde{\alpha}} \sum_t {E_{k}}' \Psi_{k-1}( L_t {e_t}' + e_t {L_t}')\Psi_{k-1} E_{k}$ and $T4 :=\frac{1}{\tilde{\alpha}} \sum_t {E_{k,\perp}}'\Psi_{k-1} (L_t {e_t}' + e_t {L_t}')\Psi_{k-1} E_{k,\perp}$. The second inequality follows by using the facts that (i) $\tilde{H}_k = T1 - T2$ where $T1 := \frac{1}{\tilde{\alpha}} \sum_t {E_{k}}' \Psi_{k-1} e_t {e_t}'\Psi_{k-1} E_{k}$, (ii) $\tilde{H}_{k,\perp} = T3 - T4$ where $T3 := \frac{1}{\tilde{\alpha}} \sum_t {E_{k,\perp}}'\Psi_{k-1} e_t {e_t}'\Psi_{k-1} E_{k,\perp}$, and (iii) $\max(\|T1\|_2, \|T3\|_2) \le \|\frac{1}{\tilde{\alpha}} \sum_t e_t {e_t}'\|_2$. Next, we obtain high probability bounds on each of the terms on the RHS of (\ref{add2_calH1}) using the Hoeffding corollaries. Consider $\|\frac{1}{\tilde{\alpha}} \sum_t e_t {e_t}'\|_2$. Let $Z_t = e_t {e_t}'$. (a) As explained in the beginning of the proof, conditioned on $\tilde X_{j,k-1}$, the various $Z_t$'s in the summation are independent whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Also, by Lemma \ref{cslem}, under this conditioning, $\That_t=T_t$ for all $t \in \tilde{\mathcal{I}}_{j,k}$ and hence $e_t$ satisfies (\ref{etdef0}) in this interval. Recall also that in this interval, $\Phi_{(t)} = \Phi_K$. Thus, using $\|\Phi_K P_j\|_2 \leq (r+c) \zeta$, $$\|e_t\|_2 \le \phi^+ \sqrt{\zeta}$$ (b) Conditioned on $\tilde X_{j,k-1}$, $0 \preceq Z_t \preceq b_1 I$ w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Here $b_1:={\phi^+}^2 \zeta$.
(c) Using $\|\Phi_K P_j\|_2 \leq (r+c) \zeta$, $0 \preceq \frac{1}{\tilde{\alpha}} \sum_t \mathbf{E}(Z_t|\tilde X_{j,k-1}) \preceq b_2I, \ b_2:= (r+c)^2 \zeta^2 {\phi^+}^2 \lambda^+$ for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Thus, applying Corollary \ref{hoeffding_nonzero} with $\epsilon = 0.1 \zeta \lambda^-$, \begin{multline} \mathbf{P} \left( \Big\|\frac{1}{\tilde{\alpha}} \sum_t e_t {e_t}' \Big\|_2 \leq b_2 + 0.1 \zeta \lambda^- \Big| \tilde X_{j,k-1} \right) \\ \geq 1- n \exp\left(\frac{-\tilde{\alpha}( 0.1 \zeta \lambda^-)^2}{ 8 \cdot b_1^2}\right) \ \text{for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$} \label{add2_etet} \end{multline} Consider $T2$. Let $Z_t: = {E_{k}}' \Psi_{k-1} (L_t {e_t}' + e_t{L_t}')\Psi_{k-1} E_{k}$ which is of size $\tilde{c}_k \times \tilde{c}_k$. Then $T2 = \frac{1}{\tilde{\alpha}} \sum_t Z_t$. (a) Conditioned on $\tilde X_{j,k-1}$, the various $Z_t$'s used in the summation are mutually independent whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. (b) Notice that ${E_{k}}'\Psi_{k-1} L_t = R_{k} a_{t,k} + {E_k}' (D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k}a_{t,\text{undet}})$ and ${E_{k}}'\Psi_{k-1} e_t = (R_{k}^{-1})' D_k' e_t = (R_{k}^{-1})' D_k' I_{T_t} [(\Phi_K)_{T_t}' (\Phi_K)_{T_t}]^{-1} {I_{T_t}}' \Phi_K P_j a_{t}$. Thus conditioned on $\tilde X_{j,k-1}$, $\|Z_t\|_2 \leq 2 b_3$ w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Here, $b_3:= \frac{\sqrt{r\zeta}}{\sqrt{1-r^2\zeta^2}} \phi^+ \gamma_*$. This follows using $\|(R_{k}^{-1})'\|_2\leq 1/\sqrt{1-r^2\zeta^2}$, $\|e_t\|_2\leq \phi^+ \sqrt{\zeta}$ and $\|E_k'\Psi_{k-1}L_t\|_2 \le \|L_t\|_2 \leq \sqrt{r} \gamma_*$. (c) Also, $\|\frac{1}{\tilde{\alpha}} \sum_t \mathbf{E}(Z_t|\tilde X_{j,k-1})\|_2 \leq 2 b_4$ where $b_4:= \kappa_{s,D}^+ \kappa_{s,e}^+ (r+c) \zeta \phi^+ ( \lambda_k^+ + r \zeta \lambda^+ + \frac{r^2\zeta^2}{\sqrt{1-r^2\zeta^2}} \lambda_{k+1}^+)$.
Here $\kappa_{s,D}^+ = \kappa_{s,*}^+ + r\zeta$, defined in Remark \ref{rem_kappa_D}, is the bound on $\max_j \max_k \kappa_s( D_{j,k})$. Thus, applying Corollary \ref{hoeffding_rec} with $\epsilon = 0.1 \zeta \lambda^-$, for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, \begin{multline*} \mathbf{P}( \|T2\|_2 \leq 2 b_4 + 0.1 \zeta \lambda^- | \tilde X_{j,k-1}) \\ \geq 1- \tilde{c}_k\exp\left(\frac{-\tilde{\alpha}(0.1 \zeta \lambda^-)^2}{32 \cdot 4 b_3^2}\right) \nonumber \end{multline*} Consider $T4$. Let $Z_t: = {E_{k,\perp}}'\Psi_{k-1} (L_t {e_t}' + e_t{L_t}')\Psi_{k-1} E_{k,\perp}$ which is of size $(n-\tilde{c}_k)\times (n-\tilde{c}_k)$. Then $T4 = \frac{1}{\tilde{\alpha}} \sum_t Z_t$. (a) Conditioned on $\tilde X_{j,k-1}$, the various $Z_t$'s used in the summation are mutually independent whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. (b) Notice that ${E_{k,\perp}}'\Psi_{k-1} L_t ={E_{k,\perp}}' (D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k}a_{t,\text{undet}})$. Thus, conditioned on $\tilde X_{j,k-1}$, $\|Z_t\|_2 \leq 2b_5$ w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Here $b_5:= \sqrt{r\zeta} \phi^+ \gamma_*$. (c) Also, for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, $\|\frac{1}{\tilde{\alpha}} \sum_t \mathbf{E}(Z_t|\tilde X_{j,k-1})\|_2 \leq 2 b_6, \ b_6:= \kappa_{s,e}^+ (r+c) \zeta \phi^+ (\lambda_{k+1}^+ + r \zeta \lambda^+)$. Applying Corollary \ref{hoeffding_rec} with $\epsilon = 0.1 \zeta \lambda^-$, for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, \begin{align*} \mathbf{P}( \|T4\|_2 \leq 2b_6 + & 0.1 \zeta \lambda^-| \tilde X_{j,k-1}) \geq\\ & 1- (n-\tilde{c}_k) \exp\left(\frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2}{32 \cdot4 b_5^2}\right) \\ \geq & 1- (n-\tilde{c}_{\min}) \exp\left(\frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2}{32 \cdot4 b_5^2}\right).\nonumber \end{align*} Consider $\max(\|T2\|_2,\|T4\|_2)$.
By the union bound and using $b_3 > b_5$, for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, \begin{multline} \mathbf{P}( \max(\|T2 \|_2,\|T4 \|_2)\leq 2\max(b_4,b_6) + 0.1 \zeta \lambda^- |\tilde X_{j,k-1}) \geq \\ 1- n \exp\left(\frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2}{32 \cdot 4b_3^2}\right) \label{add2_maxT} \end{multline} Consider $\|\tilde{B}_k\|_2$. Let $Z_t := {E_{k,\perp}}'\Psi_{k-1} (L_t-e_t)({L_t}'-{e_t}')\Psi_{k-1} E_{k}$ which is of size $(n-\tilde{c}_k)\times \tilde{c}_k$. Then $\tilde{B}_k = \frac{1}{\tilde{\alpha}} \sum_t Z_t$. (a) Conditioned on $\tilde X_{j,k-1}$, the various $Z_t$'s used in the summation are mutually independent whenever $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. (b) Notice that ${E_{k,\perp}}'\Psi_{k-1} (L_t-e_t) = {E_{k,\perp}}'( D_{\text{det},k}a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}} - \Psi_{k-1} e_t)$ and ${E_{k}}' \Psi_{k-1} (L_t - e_t) = R_{k} a_{t,k}+ {E_{k}}'( D_{\text{det},k} a_{t,\text{det}} + D_{\text{undet},k} a_{t,\text{undet}} - \Psi_{k-1} e_t)$. Thus, conditioned on $\tilde X_{j,k-1}$, $ \|Z_t\|_2 \leq b_7$ w.p. one for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$. Here $b_7 := (\sqrt{r}\gamma_* + \phi^+ \sqrt{\zeta})^2$.
(c) $\|\frac{1}{\tilde{\alpha}} \sum_t \mathbf{E}(Z_t|\tilde X_{j,k-1})\|_2 \leq b_8$ for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$ where \begin{align*} b_8 := & (r+c) \zeta \kappa_{s,e}^+ \phi^+ \lambda_k^+ \\ & + \left[(r+c)\zeta \kappa_{s,e}^+ \phi^+ + (r+c)\zeta \kappa_{s,e}^+ \frac{r^2\zeta^2}{\sqrt{1-r^2\zeta^2}}\right]\lambda_{k+1}^+ \\ & + [r^2\zeta^2 + 2(r+c)r \zeta^2 \kappa_{s,e}^+ \phi^+ + (r+c)^2 \zeta^2 {\kappa_{s,e}^+}^2 {\phi^+}^2] \lambda^+ \nonumber \end{align*} Thus, applying Corollary \ref{hoeffding_rec} with $\epsilon=0.1 \zeta \lambda^-$, \begin{multline}\label{Bktil} \mathbf{P} (\|\tilde{B}_k\|_2 \leq b_8 + 0.1 \zeta \lambda^- | \tilde X_{j,k-1}) \geq \\ 1 - n \exp\left(\frac{-\tilde{\alpha} (0.1 \zeta \lambda^-)^2}{32 \cdot b_7^2}\right) \ \text{for all $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$} \end{multline} Using (\ref{add2_calH1}), (\ref{add2_etet}), (\ref{add2_maxT}) and (\ref{Bktil}) and the union bound, for any $\tilde X_{j,k-1} \in \tilde\Gamma_{j,k-1}$, $$\mathbf{P} (\|\tilde{\mathcal{H}}_k\|_2 \leq b_9 + 0.2 \zeta \lambda^-|\tilde X_{j,k-1}) \geq 1- \tilde{p}_3(\tilde{\alpha},\zeta)$$ where $b_9 := b_2 +2b_4+ b_8$ and \begin{multline} \tilde{p}_3(\tilde{\alpha},\zeta):= n \exp\left(\frac{-\tilde{\alpha} \epsilon^2}{8 \cdot b_1^2}\right) + n \exp\left(\frac{-\tilde{\alpha} \epsilon^2}{32\cdot 4 b_3^2}\right) \\ + n \exp\left(\frac{-\tilde{\alpha} \epsilon^2}{32 \cdot b_7^2}\right) \label{prob3} \end{multline} with $b_1 = {\phi^+}^2 \zeta$, $b_3 := \sqrt{r\zeta} \phi^+ \gamma_*$, $b_7 := (\sqrt{r} \gamma_* + \phi^+ \sqrt{\zeta})^2$. Using $\lambda_{k}^- \ge \lambda^-$, $f:= \lambda^+/ \lambda^-$, $\tilde{g}_k := \lambda_k^+/\lambda_k^-$ and $\tilde{h}_k := \lambda_{k+1}^+ / \lambda_k^-$, and then applying Lemma \ref{rem_prob}, the third claim of the lemma follows. \end{proof} \subsection{Compressive Sensing result} The error bound for noisy compressive sensing (CS) based on the RIC is as follows \cite{candes_rip}. 
\begin{theorem}[\cite{candes_rip}] \label{candes_csbound} Suppose we observe \begin{equation} y := \Psi x + z \nonumber \end{equation} where $z$ is the noise. Let $\hat{x}$ be the solution to the following problem \begin{equation} \min_{x} \|x\|_1 \ \text{subject to} \ \|y - \Psi x\|_2 \leq \xi \label{*} \end{equation} Assume that $x$ is $s$-sparse, $\|z\|_2 \leq \xi$, and $\delta_{2s}(\Psi) < b (\sqrt{2}-1)$ for some $0 \le b < 1$. Then the solution of (\ref{*}) obeys $$\|\hat{x} - x\|_2 \leq C_1 \xi$$ with $\displaystyle C_1 = \frac{4\sqrt{1+\delta_{2s}(\Psi)}}{1-(\sqrt{2}+1)\delta_{2s}(\Psi)} \le \frac{4\sqrt{1+b (\sqrt{2}-1)}}{1-b}$. \end{theorem} \begin{remark} Notice that if $b$ is small enough, $C_1$ is a small constant but $C_1 >1$. For example, if $\delta_{2s}(\Psi) \le 0.15$, then $C_1 \le 7$. If $C_1 \xi > \|x\|_2$, the normalized reconstruction error bound would be greater than $1$, making the result useless. Hence, (\ref{*}) gives a small reconstruction error bound only for the small noise case, i.e., the case where $\|z\|_2 \leq \xi \ll \|x\|_2$. \end{remark} \subsection{Results from linear algebra} Davis and Kahan's $\sin \theta$ theorem \cite{davis_kahan} studies the rotation of eigenvectors by perturbation. \begin{theorem}[$\sin \theta$ theorem \cite{davis_kahan}] \label{sin_theta} Given two Hermitian matrices $\mathcal{A}$ and $\mathcal{H}$ satisfying \begin{align} \label{sindecomp} \mathcal{A} &= \left[ \begin{array}{cc} E & E_{\perp} \\ \end{array} \right] \left[\begin{array}{cc} A\ & 0\ \\ 0 \ & A_{\perp} \\ \end{array} \right] \left[ \begin{array}{c} E' \\ {E_{\perp}}' \\ \end{array} \right]\nonumber, \\ \mathcal{H} &= \left[ \begin{array}{cc} E & E_{\perp} \\ \end{array} \right] \left[\begin{array}{cc} H \ & B'\ \\ B \ & H_{\perp} \\ \end{array} \right] \left[ \begin{array}{c} E' \\ {E_{\perp}}' \\ \end{array} \right] \end{align} where $[E \ E_{\perp}]$ is an orthonormal matrix.
The two ways of representing $\mathcal{A}+\mathcal{H}$ are \begin{align*} \mathcal{A} + \mathcal{H} &= \left[ \begin{array}{cc} E & E_{\perp} \\ \end{array} \right] \left[\begin{array}{cc} A + H \ & B'\ \\ B \ & A_{\perp} + H_{\perp} \\ \end{array} \right] \left[ \begin{array}{c} E' \\ {E_{\perp}}' \\ \end{array} \right] \\ &= \left[ \begin{array}{cc} F & F_{\perp} \\ \end{array} \right] \left[\begin{array}{cc} \Lambda\ & 0\ \\ 0 \ & \Lambda_{\perp} \\ \end{array} \right] \left[ \begin{array}{c} F' \\ {F_{\perp}}' \\ \end{array} \right] \nonumber \end{align*} where $[F\ F_{\perp}]$ is another orthonormal matrix. Let $\mathcal{R} := (\mathcal{A}+\mathcal{H}) E - \mathcal{A}E = \mathcal{H} E $. If $ \lambda_{\min}(A) >\lambda_{\max}(\Lambda_{\perp})$, then \begin{equation} \|(I-F F')E \|_2 \leq \frac{\|\mathcal{R}\|_2}{\lambda_{\min}(A) - \lambda_{\max}(\Lambda_{\perp})} \nonumber \end{equation} \end{theorem} The above result bounds the amount by which the two subspaces $\Span(E)$ and $\Span(F)$ differ as a function of the norm of the perturbation $\|\mathcal{R}\|_2$ and of the gap between the minimum eigenvalue of $A$ and the maximum eigenvalue of $\Lambda_{\perp}$. Next, we state Weyl's theorem which bounds the eigenvalues of a perturbed Hermitian matrix, followed by Ostrowski's theorem. \begin{theorem}[Weyl \cite{hornjohnson}]\label{weyl} Let $\mathcal{A}$ and $\mathcal{H}$ be two $n \times n$ Hermitian matrices. For each $i = 1,2,\dots,n$ we have $$\lambda_i(\mathcal{A}) + \lambda_{\min}(\mathcal{H}) \leq \lambda_i(\mathcal{A}+\mathcal{H}) \leq \lambda_i(\mathcal{A}) + \lambda_{\max}(\mathcal{H})$$ \end{theorem} \begin{theorem}[Ostrowski \cite{hornjohnson}]\label{ost} Let $H$ and $W$ be $n \times n$ matrices, with $H$ Hermitian and $W$ nonsingular. For each $i=1,2 \dots n$, there exists a positive real number $\theta_i$ such that $\lambda_{\min} (WW') \leq \theta_i \leq \lambda_{\max}(W{W}')$ and $\lambda_i(W H {W}') = \theta_i \lambda_i(H)$. 
Therefore, $$\lambda_{\min}(W H {W}') \geq \lambda_{\min} (W{W}') \lambda_{\min} (H)$$ \end{theorem} The following lemma proves some simple linear algebra facts. \begin{lem} \label{lemma0}\label{hatswitch} Suppose that $P$, $\Phat$ and $Q$ are three basis matrices. Also, $P$ and $\Phat$ are of the same size, ${Q}'P = 0$ and $\|(I-\Phat{\Phat}')P\|_2 = \zeta_*$. Then, \begin{enumerate} \item $\|(I-\Phat{\Phat}')PP'\|_2 =\|(I - P{P}')\Phat{\Phat}'\|_2 = \|(I - P P')\Phat\|_2 = \|(I - \Phat \Phat')P\|_2 = \zeta_*$ \item $\|P{P}' - \Phat {\Phat}'\|_2 \leq 2 \|(I-\Phat{\Phat}')P\|_2 = 2 \zeta_*$ \item $\|{\Phat}' Q\|_2 \leq \zeta_*$ \label{lem_cross} \item $ \sqrt{1-\zeta_*^2} \leq \sigma_i((I-\Phat \Phat')Q)\leq 1 $ \end{enumerate} Further, if $P$ is an $n \times r_1$ basis matrix and $\Phat$ is an $n \times r_2$ basis matrix with $r_2 \geq r_1$, then $\|(I-\Phat{\Phat}')PP'\|_2 \leq \|(I - P{P}')\Phat{\Phat}'\|_2$ \end{lem} The proof is in the Appendix. \subsection{High probability tail bounds for sums of independent random matrices} The following lemma follows easily using Definition \ref{probdefs}. We will use this at various places in the paper. \begin{lem} Suppose that $\mathcal{B}$ is the set of values that the r.v.s $X,Y$ can take. Suppose that $\mathcal{C}$ is a set of values that the r.v. $X$ can take. For a $0 \le p \le 1$, if $\mathbf{P}(\mathcal{B}^e|X) \ge p$ for all $X \in \mathcal{C}$, then $\mathbf{P}(\mathcal{B}^e|\mathcal{C}^e) \ge p$ as long as $\mathbf{P}(\mathcal{C}^e)> 0$. \label{rem_prob} \end{lem} The proof is in the Appendix. The following lemma is an easy consequence of the chain rule of probability applied to a contracting sequence of events. 
\begin{lem} \label{subset_lem} For a sequence of events $E_0^e, E_1^e, \dots E_m^e$ that satisfy $E_0^e \supseteq E_1^e \supseteq E_2^e \dots \supseteq E_m^e$, the following holds $$\mathbf{P}(E_m^e|E_0^e) = \prod_{k=1}^{m} \mathbf{P}(E_k^e | E_{k-1}^e).$$ \end{lem} \begin{proof} $\mathbf{P}(E_m^e|E_0^e) = \mathbf{P}(E_m^e, E_{m-1}^e, \dots E_0^e | E_0^e) = \prod_{k=1}^{m} \mathbf{P}(E_k^e | E_{k-1}^e, E_{k-2}^e, \dots E_0^e) = \prod_{k=1}^{m} \mathbf{P}(E_k^e | E_{k-1}^e)$. \end{proof} Next, we state the matrix Hoeffding inequality \cite[Theorem 1.3]{tail_bound} which gives tail bounds for sums of independent random matrices. \begin{theorem}[Matrix Hoeffding for a zero mean Hermitian matrix \cite{tail_bound}]\label{hoeffding} Consider a finite sequence $\{Z_t\}$ of independent, random, Hermitian matrices of size $n\times n$, and let $\{A_t\}$ be a sequence of fixed Hermitian matrices. Assume that each random matrix satisfies (i) $\mathbf{P}(Z_t^2 \preceq A_t^2)=1$ and (ii) $\mathbf{E}(Z_t) = 0$. Then, for all $\epsilon >0$, \[ \mathbf{P} \left(\lambda_{\max}\left(\sum_t Z_t \right) \leq \epsilon \right) \geq 1 - n \exp\left(\frac{-\epsilon^2}{8 \sigma^2}\right), \] where $\sigma^2 = \Big\| \sum_t A_t^2 \Big\|_2$. \end{theorem} The following two corollaries of Theorem \ref{hoeffding} are easy to prove. The proofs are given in Appendix \ref{appendix prelim}. \begin{corollary}[Matrix Hoeffding conditioned on another random variable for a nonzero mean Hermitian matrix]\label{hoeffding_nonzero} Given an $\alpha$-length sequence $\{Z_t\}$ of random Hermitian matrices of size $n\times n$, a r.v. $X$, and a set ${\cal C}$ of values that $X$ can take. Assume that, for all $X \in \mathcal{C}$, (i) $Z_t$'s are conditionally independent given $X$; (ii) $\mathbf{P}(b_1 I \preceq Z_t \preceq b_2 I|X) = 1$ and (iii) $b_3 I \preceq \frac{1}{\alpha}\sum_t \mathbf{E}(Z_t|X) \preceq b_4 I $. 
Then for all $\epsilon > 0$, \begin{multline} \mathbf{P} \left( \lambda_{\max}\left(\frac{1}{\alpha}\sum_t Z_t \right) \leq b_4 + \epsilon \Big | X \right) \\ \geq 1- n \exp\left(\frac{-\alpha \epsilon^2}{8(b_2-b_1)^2}\right) \ \text{for all} \ X \in \mathcal{C} \nonumber \end{multline} \begin{multline} \mathbf{P} \left(\lambda_{\min}\left(\frac{1}{\alpha}\sum_t Z_t \right) \geq b_3 -\epsilon \Big| X \right) \\ \geq 1- n \exp\left(\frac{-\alpha \epsilon^2}{8(b_2-b_1)^2} \right) \ \text{for all} \ X \in \mathcal{C} \nonumber \end{multline} \end{corollary} The proof is in Appendix \ref{appendix prelim}. \begin{corollary}[Matrix Hoeffding conditioned on another random variable for an arbitrary nonzero mean matrix]\label{hoeffding_rec} Given an $\alpha$-length sequence $\{Z_t\}$ of random matrices of size $n_1 \times n_2$, a r.v. $X$, and a set ${\cal C}$ of values that $X$ can take. Assume that, for all $X \in \mathcal{C}$, (i) $Z_t$'s are conditionally independent given $X$; (ii) $\mathbf{P}(\|Z_t\|_2 \le b_1|X) = 1$ and (iii) $\|\frac{1}{\alpha}\sum_t \mathbf{E}( Z_t|X)\|_2 \le b_2$. Then, for all $\epsilon >0$, \begin{multline*} \mathbf{P} \left(\bigg\|\frac{1}{\alpha}\sum_t Z_t \bigg\|_2 \leq b_2 + \epsilon \Big| X \right) \\ \geq 1-(n_1+n_2) \exp\left(\frac{-\alpha \epsilon^2}{32 b_1^2}\right) \ \text{for all} \ X \in \mathcal{C} \end{multline*} \end{corollary} The proof is in Appendix \ref{appendix prelim}. \section{Conclusions and Future Work} \label{conc} In this work, we studied the recursive (online) robust PCA problem, which can also be interpreted as a problem of recursive sparse recovery in the presence of large but structured noise (noise that is dense and lies in a ``slowly changing" low dimensional subspace). We analyzed a novel solution approach called Recursive Projected CS or ReProCS that was introduced in our earlier work \cite{rrpcp_allerton,rrpcp_allerton11,han_tsp}. 
The ReProCS algorithm that we analyze assumes knowledge of the subspace change model on the $L_t$'s. We showed that, under mild assumptions and a denseness assumption on the currently unestimated subspace, $\Span(D_{j,\new,k})$ (this assumption depends on algorithm estimates), w.h.p., ReProCS can exactly recover the support set of $S_t$ at all times; the reconstruction errors of both $S_t$ and $L_t$ are upper bounded by a time-invariant and small value; and after every subspace change time, w.h.p., the subspace recovery error decays to a small enough value within a finite delay. The most important open question, which is being addressed in ongoing work, is how to make our result a correctness result, i.e. how to remove the denseness assumption on $D_{j,\new,k}$ (see a forthcoming paper). Two other issues being studied are (i) how to get a result for the correlated $L_t$'s case \cite{zhan_ISIT}, and (ii) how to analyze the ReProCS algorithm when subspace change times are not known. Finally, an open question is how to bound the sparse recovery error even when the support set is not exactly recovered. The case of undersampled measurements is also being studied \cite{rrpcp_globalsip}. \subsection{Discussion} \label{discuss_del} Notice from Definition \ref{defn_alpha} that $K = K(\zeta)$ is larger if $\zeta$ is smaller. Also, both $\alpha_\add(\zeta)$ and $\alpha_\del(\zeta)$ are inversely proportional to $\zeta$. Thus, if we want to achieve a smaller lowest error level $\zeta$, we need to compute both the addition proj-PCA and cluster-PCA steps over larger durations, $\alpha$ and $\tilde\alpha$ respectively, and we will need a larger number of addition proj-PCA steps $K$. This means that we also require a larger delay between subspace change times, i.e. a larger $t_{j+1}-t_j$. Let us first compare the above result with that for ReProCS for the same subspace change model, i.e. the result from Corollary \ref{cor_rep}.
The most important difference is that ReProCS requires $\kappa_{2s}([P_0, P_{1,\new}, \dots P_{J,\new}]) \le 0.3$ whereas ReProCS-cPCA only requires $\max_j \kappa_{2s}(P_j) \le 0.3$. Moreover, in the case of ReProCS, the denominator in the bound on $\zeta$ also depends on $J$, whereas in the case of ReProCS-cPCA, it only depends on $r_{\max} + c_{\max}$. Because of this, in Theorem \ref{thm2} for ReProCS-cPCA, the only place where $J$ appears is in the definitions of $\alpha_\add$ and $\alpha_\del$. These govern the delay between subspace change times, $t_{j+1}-t_j$. Thus, with ReProCS-cPCA, $J$ can keep increasing, as long as $\min_j (t_{j+1}-t_j)$ also increases accordingly. Moreover, notice that the dependence of $\alpha_\add$ and $\alpha_\del$ on $J$ is only logarithmic and thus $\min_j (t_{j+1}-t_j)$ needs to increase only in proportion to $\log J$. The main extra assumptions that ReProCS-cPCA needs are the clustering assumption; a longer delay between subspace change times; and a denseness assumption similar to that on $D_{j,\new,k}$. We verify the clustering assumption in Sec. \ref{model_verify}. The ReProCS-cPCA algorithm also needs to know the cluster sizes of the eigenvalues. These can, however, be estimated by computing the eigenvalues of the estimated covariance matrix at $t= \tilde{t}_j + \tilde\alpha$ and clustering them. {\em Comparison with the PCP result from \cite{rpca}.} Our results need many more assumptions compared with the PCP result \cite{rpca}, which only assumes independent support change of the sparse part and a denseness assumption on the low-rank part. The most important limitation of our work is that both our results need an assumption on the algorithm estimates; thus neither can be called a correctness result. Moreover, both results assume that the algorithms know the model parameters, while the result for PCP does not. The key limiting aspect here is the knowledge of the subspace change times. The advantages of our results w.r.t.
that for PCP are as follows. (a) Both results are for online algorithms; and (b) both need weaker denseness assumptions on the singular vectors of ${\cal L}_t$ as compared to PCP. PCP \cite{rpca} requires denseness of both the left and right singular vectors of ${\cal L}_t$ and it requires a bound on $\|UV'\|_{\infty}$, where $U$ and $V$ denote the left and right singular vectors. Denseness of only the left singular vectors is needed in our case (notice that $U=[P_{j-1}, P_{j,\new}]$). (c) Finally, the most important advantage of the ReProCS-cPCA result is that it does not need a bound on $J$ (the number of subspace change times) as long as $\min_j (t_{j+1}-t_j)$ increases in proportion to $\log J$, and equivalently, does not need a bound on the rank of ${\cal L}_t$. However, PCP needs a tight bound on the rank of ${\cal L}_t$. \section{Experimental Results} \subsection{Simulation Experiments} \label{sims} The simulated data is generated as follows. The measurement matrix $\mathcal{M}_t := [M_1, M_2,\cdots, M_t]$ is of size $2048 \times 4200$. It can be decomposed as a sparse matrix $\mathcal{S}_t:= [S_1, S_2,\cdots, S_t]$ plus a low rank matrix $\mathcal{L}_t:= [L_1, L_2,\cdots, L_t]$. The sparse matrix $\mathcal{S}_t$ is generated as follows. \begin{enumerate} \item For $1 \leq t \leq t_{\train} = 200$, $S_t = 0$. \item For $t_{\train} < t \leq 5200$, $S_t$ has $s$ nonzero elements. The initial support is $T_0 = \{1,2,\dots, s\}$. Every $\Delta$ time instants, we increment the support indices by 1. For example, for $t \in [t_\train +1, t_\train + \Delta-1]$, $T_t = T_0$; for $t \in [t_\train + \Delta, t_\train + 2\Delta-1]$, $T_t = \{2,3,\dots, s+1\}$; and so on. Thus, the support set changes in a highly correlated fashion over time and this results in the matrix ${\cal S}_t$ being low rank. The larger the value of $\Delta$, the smaller the rank of ${\cal S}_t$ (for $t > t_\train+\Delta$).
\item The signs of the nonzero elements of $S_t$ are $\pm 1$ with equal probability and the magnitudes are uniformly distributed between $2$ and $3$. Thus, $S_{\min} = 2$. \end{enumerate} The low rank matrix $\mathcal{L}_t := [L_1,L_2,\cdots,L_t]$ where $L_t := P_{(t)} a_t$ is generated as follows: \begin{enumerate} \item There are a total of $J=2$ subspace change times, $t_1=301$ and $t_2= 2701$. Let $U$ be a $2048 \times (r_0 + c_{1,\new} + c_{2,\new})$ orthonormalized random Gaussian matrix. \begin{enumerate} \item For $1\leq t \leq t_1 -1$, $P_{(t)} = P_0$ has rank $r_0$ with $P_0 = U_{[1,2,\cdots,r_0]}$. \item For $t_1 \leq t \leq t_2-1$, $P_{(t)} = P_1 = [P_0 \ P_{1,\new}]$ has rank $r_1 = r_0 + c_{1,\new}$ with $ P_{1,\new} = U_{[r_0+1,\cdots,r_0+c_{1,\new}]}$. \item For $t\geq t_2$, $P_{(t)} = P_2 = [P_1 \ P_{2,\new}]$ has rank $r_2 = r_1 + c_{2,\new}$ with $ P_{2,\new} = U_{[r_0+c_{1,\new}+1,\cdots,r_0+c_{1,\new}+c_{2,\new}]}$. \end{enumerate} \item $a_t$ is independent over $t$. The various $(a_t)_i$'s are also mutually independent for different $i$. \begin{enumerate} \item For $1\leq t < t_1$, we let $(a_t)_i$ be uniformly distributed between $-\gamma_{i,t}$ and $\gamma_{i,t}$, where \begin{multline*} \gamma_{i,t}=\\ \begin{cases} 400 & \text{if $i=1,2,\cdots,r_0/4, \forall t$,} \\ 30 &\text{if $i=r_0/4+1,r_0/4+2,\cdots,r_0/2, \forall t$,} \\ 2 &\text{if $i=r_0/2+1,r_0/2+2,\cdots,3r_0/4, \forall t$,} \\ 1 &\text{if $i=3r_0/4+1,3r_0/4+2,\cdots,r_0, \forall t$.} \end{cases} \end{multline*} \item For $t_1 \leq t < t_2$, $a_{t,*}$ is an $r_0$ length vector, $a_{t,\new}$ is a $c_{1,\new}$ length vector and $L_t := P_{(t)} a_t = P_1 a_t = P_0 a_{t,*} + P_{1,\new} a_{t,\new}$.
$(a_{t,*})_i$ is uniformly distributed between $-\gamma_{i,t}$ and $\gamma_{i,t}$ and $a_{t,\new}$ is uniformly distributed between $-\gamma_{r_1,t}$ and $\gamma_{r_1,t}$, where \[ \gamma_{r_1,t}= \begin{cases} 1.1^{k-1} & \text{if } t_1 + (k-1) \alpha \leq t \leq t_1 + k\alpha-1, \ k=1,2,3,4, \\ 1.1^{4-1} = 1.331 & \text{if } t \geq t_1 + 4\alpha. \end{cases} \] \item For $t\geq t_2$, $a_{t,*}$ is an $r_1=r_0 + c_{1,\new}$ length vector, $a_{t,\new}$ is a $c_{2,\new}$ length vector and $L_t := P_{(t)} a_t = P_2 a_t = [P_0 \ P_{1,\new}] a_{t,*} + P_{2,\new} a_{t,\new}$. Also, $(a_{t,*})_i$ is uniformly distributed between $-\gamma_{i,t}$ and $\gamma_{i,t}$ for $i=1,2,\cdots,r_0$, and is uniformly distributed between $-\gamma_{r_1,t}$ and $\gamma_{r_1,t}$ for $i=r_0+1, \dots r_1$. $a_{t,\new}$ is uniformly distributed between $-\gamma_{r_2,t}$ and $\gamma_{r_2,t}$, where \[ \gamma_{r_2,t}= \begin{cases} 1.1^{k-1} & \text{if } t_2 + (k-1) \alpha \leq t \leq t_2 + k\alpha-1, \ k=1,2,\cdots,7, \\ 1.1^{7-1} = 1.7716 & \text{if } t \geq t_2 + 7\alpha. \end{cases} \] \end{enumerate} \end{enumerate} Thus for the above model, $\gamma_* = 400$, $\gamma_{\new} =1$, $\lambda^+ = 53333$, $\lambda^- = 0.3333$ and $f:=\frac{\lambda^+}{\lambda^-} = 1.6 \times 10^{5}$. Also, $S_{\min}=2$. We used $\mathcal{L}_{t_{\train}} + \mathcal{N}_{t_{\train}}$ as the training sequence to estimate $\Phat_0$. Here $\mathcal{N}_{t_{\train}} = [N_1, N_2 ,\cdots, N_{t_{\train}}]$ is i.i.d. random noise with each $(N_t)_i$ uniformly distributed between $-10^{-3}$ and $10^{-3}$. This is done to ensure that $\Span(\Phat_0) \neq \Span(P_0)$ but only approximates it.
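For concreteness, the data generation just described can be sketched in a few lines. The script below is our own illustration, not the code used for the reported experiments: it uses scaled-down dimensions, a single subspace change time, and omits the staged $\gamma_{r_1,t}$ ramp-up, but it reproduces the sliding correlated support of $S_t$ and the structure $L_t = P_{(t)} a_t$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Scaled-down sizes (the paper uses n = 2048 and t up to 5200).
n, t_train, t_max = 256, 200, 1200
s, Delta = 20, 10           # support size and support-change period
r0, c_new = 36, 1           # initial rank and number of new directions
t1 = 301                    # first subspace change time, as in the paper

# Sparse part: the support slides by one index every Delta frames,
# which makes the support changes highly correlated over time.
S = np.zeros((n, t_max))
for t in range(t_train, t_max):
    shift = (t - t_train) // Delta
    T_t = (np.arange(s) + shift) % n
    signs = rng.choice([-1.0, 1.0], size=s)
    S[T_t, t] = signs * rng.uniform(2.0, 3.0, size=s)   # so S_min = 2

# Low-rank part: L_t = P_(t) a_t, with one new direction added at t1.
U, _ = np.linalg.qr(rng.standard_normal((n, r0 + c_new)))
P0, P1_new = U[:, :r0], U[:, r0:r0 + c_new]
gamma = np.concatenate([np.full(r0 // 4, 400.0), np.full(r0 // 4, 30.0),
                        np.full(r0 // 4, 2.0), np.full(r0 // 4, 1.0)])
L = np.zeros((n, t_max))
for t in range(t_max):
    a_star = rng.uniform(-gamma, gamma)                 # bounds vary per index
    L[:, t] = P0 @ a_star
    if t >= t1:   # new-direction coefficients, bounded by gamma_new = 1
        L[:, t] += (P1_new * rng.uniform(-1.0, 1.0, size=c_new)).sum(axis=1)

M = L + S   # measurement matrix
```

With a small $\Delta$ the support cycles through many index sets and $[S_1,\dots,S_t]$ has higher rank; with a large $\Delta$ it stays low rank, matching the behavior reported above.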
\begin{figure*} \subfigure[$\Delta = 2$] {\includegraphics[width = 8 cm, height = 4 cm]{Delta2Big}\label{D2}} \subfigure[$\Delta = 10$] {\includegraphics[width = 8 cm, height = 4cm]{Delta10Big}\label{D10}}\\ \subfigure[$\Delta = 50$] {\includegraphics[width = 8 cm, height = 4 cm]{Delta50Big}\label{D50}} \subfigure[$\Delta = 100$] {\includegraphics[width = 8 cm, height = 4 cm]{Delta100Big}\label{D100}} \caption{Plots of $d_t$, $SE$ and $e_t$ for simulated data with $r_0 =36$, $s = \max_t |T_t| = 20$. \label{add sim}} \end{figure*} \begin{figure} \centerline{ {\label{del sim a} \includegraphics[width =\columnwidth]{r36_s20_Delta10_reconS_del} } } \caption{Reconstruction errors of $S_t$ with $r_0 =36$, $s = \max_t |T_t| = 20$ and $\Delta = 10$, comparing PCP with ReProCS and ReProCS-cPCA. The times at which PCP is done are marked by red triangles. \label{del sim}} \end{figure} Figure \ref{add sim} shows the results of applying Algorithm \ref{reprocs} (ReProCS) to data generated according to the above model. The model parameters used were $s=20$, $r_0=36$ and $c_{1,\new}= c_{2,\new} = 1$, and each subfigure corresponds to a different value of $\Delta$. Because of the correlated support change, the $2048 \times t$ sparse matrix $\mathcal{S}_t = [S_1 ,S_2,\cdots,S_t]$ is rank deficient in all cases; e.g., for Fig. \ref{D2}, $\mathcal{S}_t$ has rank $69,119,169,1219$ at $t=300,400,500,2600$, and for Fig. \ref{D10}, $\mathcal{S}_t$ has rank $29,39,49,259$ at $t=300,400,500,2600$. We plot the subspace error $\SE_{(t)}$ and the normalized error for $S_t$, $\frac{\|\hat{S}_t-S_t\|_2}{\|S_t\|_2}$, averaged over 100 Monte Carlo simulations. We also plot the ratio $d_t :=\frac{\|{I_{T_t}}' D_{j,\new,k}\|_2}{\|D_{j,\new,k}\|_2}$. This serves as a proxy for $\kappa_s(D_{j,\new,k})$ (whose exact computation has exponential complexity); in fact, in our proofs, we only need this ratio to be small. As can be seen from Figs.
\ref{D2} and \ref{D10}, the subspace error $\SE_{(t)}$ of ReProCS decreased exponentially and stabilized after about $4$ projection PCA update steps. The averaged normalized error for $S_t$ followed a similar trend. In Fig. \ref{D10}, where $\Delta=10$, the subspace error $\SE_{(t)}$ also decreased, but the decrease was a bit slower than in Fig. \ref{D2}, where $\Delta =2$. In Fig. \ref{D100} we set $\Delta = 100$. In this case $\mathcal{S}_t$ is very low rank: its rank at $t=300,1000,2600$ is $20,27,43$. We can see here that the subspace error decays rather slowly and does not return all the way to $0.01$ within the $K\alpha$ frames. Finally, if we set $\Delta = \infty$, the ratio $\frac{\|{I_{T_t}}' D_{j,\new,k}\|_2}{\|D_{j,\new,k}\|_2}$ was always $1$. As a result, the subspace error, and hence the reconstruction error of ReProCS, did not decrease from its initial value at the subspace change time. We also did one experiment in which we generated $T_t$ of size $s=100$ uniformly at random from all possible $s$-size subsets of $\{1,2,\dots n\}$; $T_t$ at different times $t$ was also generated independently. In this case, the reconstruction error of ReProCS was $\frac{1}{5000} \sum_{t=201}^{5200} \frac{\|\hat{S}_t - S_t\|_2}{\|S_t\|_2}=2.8472\times 10^{-4}$. The error for PCP was $3.5 \times 10^{-3}$, which is also quite small. The data for Figure \ref{del sim} was generated in the same way as above, except that we use the more general subspace model that allows for deletion of directions. Here, for $1\leq t \leq t_1 -1$, $P_{(t)} = P_0$ has rank $r_0$ with $P_0 = U_{[1,2,\cdots,36]}$. For $t_1 \leq t \leq t_2-1$, $P_{(t)} = P_1 = [P_0\setminus P_{1,\old} \ P_{1,\new}]$ has rank $r_1 = r_0 + c_{1,\new} - c_{1,\old} = 34$ with $ P_{1,\new} = U_{[37]}$ and $P_{1,\old} = U _{[9,18,36]}$.
For $t\geq t_2$, $P_{(t)} = P_2 = [P_1\setminus P_{2,\old} \ P_{2,\new}]$ has rank $r_2 = r_1 + c_{2,\new} - c_{2,\old} = 32$ with $ P_{2,\new} = U_{[38]}$ and $P_{2,\old} = U_{[8,17,35]}$. Again, we average over 100 Monte Carlo simulations. As can be seen from Figure \ref{del sim}, the normalized sparse recovery error of ReProCS and ReProCS-cPCA decreased exponentially and stabilized. Furthermore, ReProCS-cPCA significantly outperforms ReProCS once deletion steps are done. We also compared against PCP \cite{rpca}. At every $t = t_j + 4 k\alpha$, we solved (\ref{pcp_prob}) with $\lambda = 1/\sqrt{\max(n,t)}$, as suggested in \cite{rpca}, to recover ${\cal S}_t$ and ${\cal L}_t$. We used the estimates of $S_t$ for the last $4 \alpha$ frames as the final estimates of $\Shat_t$. So, the $\Shat_t$ for $t=t_j+1, \dots t_j + 4 \alpha$ is obtained from PCP done at $t=t_j + 4 \alpha$, the $\Shat_t$ for $t=t_j+4\alpha + 1, \dots t_j + 8 \alpha$ is obtained from PCP done at $t=t_j + 8 \alpha$, and so on. Because of the correlated support change, the error of PCP was larger in both cases. \section{Introduction} \label{intro} Principal Components Analysis (PCA) is a widely used dimension reduction technique that finds a small number of orthogonal basis vectors, called principal components (PCs), along which most of the variability of the dataset lies. It is well known that PCA is sensitive to outliers. Accurately computing the PCs in the presence of outliers is called robust PCA \cite{Roweis98emalgorithms,Torre03aframework,rpca,rpca2}. Often, for time series data, the PCs space changes gradually over time. Updating it on-the-fly (recursively) in the presence of outliers, as more data comes in, is referred to as online or recursive robust PCA \cite{sequentialSVD,ipca_weightedand,Li03anintegrated}. ``Outlier" is a loosely defined term that refers to any corruption that is not small compared to the true data vector and that occurs occasionally.
As suggested in \cite{error_correction_PCP_l1,rpca}, an outlier can be nicely modeled as a sparse vector whose nonzero values can have any magnitude. A key application where the robust PCA problem occurs is in video analysis, where the goal is to separate a slowly changing background from moving foreground objects \cite{Torre03aframework,rpca}. If we stack each frame as a column vector, the background is well modeled as being dense and lying in a low dimensional subspace that may gradually change over time, while the moving foreground objects constitute the sparse outliers \cite{error_correction_PCP_l1,rpca}. Other applications include detection of brain activation patterns from functional MRI (fMRI) sequences (the ``active" part of the brain can be interpreted as a sparse outlier), detection of anomalous behavior in dynamic social networks, and sensor-network-based detection and tracking of abnormal events such as forest fires or oil spills. Clearly, in all these applications, an online solution is desirable. The moving objects or the active regions of the brain or the oil spill region may be ``outliers" for the PCA problem, but in most cases, these are actually the signals-of-interest, whereas the background image is the noise. Also, all the above signals-of-interest are sparse vectors. Thus, this problem can also be interpreted as one of recursively recovering a time sequence of sparse signals, $S_t$, from measurements $M_t: = S_t + L_t$ that are corrupted by (potentially) large magnitude but dense and structured noise, $L_t$. The structure that we require is that $L_t$ be dense and lie in a low dimensional subspace that is either fixed or changes ``slowly enough" in the sense quantified in Sec \ref{slowss}. \subsection{Related Work} There has been a large amount of work on robust PCA, e.g. \cite{Torre03aframework,rpca,rpca2,Roweis98emalgorithms,novel_m_estimator,outlier_pursuit, mccoy_tropp11}, and recursive robust PCA, e.g.
\cite{sequentialSVD,ipca_weightedand,Li03anintegrated}. In most of these works, either the locations of the missing/corrupted data points are assumed known \cite{sequentialSVD} (not a practical assumption); or they first detect the corrupted data points and then replace their values using nearby values \cite{ipca_weightedand}; or they weight each data point in proportion to its reliability (thus soft-detecting and down-weighting the likely outliers) \cite{Torre03aframework,Li03anintegrated}; or they just remove the entire outlier vector \cite{outlier_pursuit, mccoy_tropp11}. Detecting or soft-detecting outliers ($S_t$) as in \cite{ipca_weightedand,Torre03aframework,Li03anintegrated} is easy when the outlier magnitude is large, but not otherwise. When the signal of interest is $S_t$, the most difficult situation is when the nonzero elements of $S_t$ have small magnitude compared to those of $L_t$, and in this case, these approaches do not work. In recent works \cite{rpca,rpca2}, a new and elegant solution to robust PCA called Principal Components' Pursuit (PCP) has been proposed that does not require a two-step outlier location detection/correction process and also does not throw out the entire vector. It redefines batch robust PCA as a problem of separating a low rank matrix, ${\cal L}_t := [L_1,\dots,L_t]$, from a sparse matrix, ${\cal S}_t := [S_1,\dots,S_t]$, using the measurement matrix, ${\cal M}_t := [M_1,\dots,M_t] = {\cal L}_t+ {\cal S}_t$. Other recent works that also study batch algorithms for recovering a sparse ${\cal S}_t$ and a low-rank ${\cal L}_t$ from ${\cal M}_t := {\cal L}_t+ {\cal S}_t$ or from undersampled measurements include \cite{rpca_tropp, linear_inverse_prob, rpca_hu,SpaRCS,rpca_vayatis,rpca_zhang, rpca_Giannakis,compressivePCP,rpca_reduced,noisy_undersampled_yuan}. Let $\|A\|_*$ be the nuclear norm of $A$ (sum of singular values of $A$) and let $\|A\|_1$ be the $\ell_1$ norm of $A$ seen as a long vector.
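As a quick aside, both norms are directly computable; the snippet below (ours, purely illustrative) evaluates them for a small matrix.

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
nuclear = np.linalg.norm(A, ord='nuc')   # ||A||_*: sum of singular values
ell1 = np.abs(A).sum()                   # ||A||_1: l1 norm of A as a long vector
```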
It was shown in \cite{rpca} that, with high probability (w.h.p.), one can recover ${\cal L}_t$ and ${\cal S}_t$ exactly by solving PCP: \begin{equation} \underset{{\cal L},{\cal S}} {\min}\|{\cal L}\|_* + \lambda\|{\cal S}\|_1 \ \text{subject to} \ \ {\cal L} + {\cal S} = {\cal M}_t \label{pcp_prob} \vspace{-2mm} \end{equation} provided that (a) the left and right singular vectors of ${\cal L}_t$ are dense; (b) any element of the matrix ${\cal S}_t$ is nonzero w.p. $\varrho$, and zero w.p. $1-\varrho$, independent of all others; and (c) the rank of ${\cal L}_t$ is bounded by a small enough value. As described earlier, many applications where robust PCA is required, such as video surveillance, require an online (recursive) solution. Even for offline applications, a recursive solution is typically faster than a batch one. In recent work \cite{rrpcp_allerton,rrpcp_allerton11,han_tsp}, we introduced a novel solution approach, called Recursive Projected Compressive Sensing (ReProCS), that recursively recovered $S_t$ and $L_t$ at each time $t$. In simulation and real data experiments (see \cite{han_tsp} and \url{http://www.ece.iastate.edu/~chenlu/ReProCS/ReProCS_main.htm}), it was faster than batch methods such as PCP and also significantly outperformed them in situations where the support changes were correlated over time (as long as there was some support change every few frames) or when the background subspace dimension was large (for a given support size). In this work we develop a simple modification of the original ReProCS idea and analyze it. This modification assumes knowledge of the subspace change model on the $L_t$'s. 
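Problem (\ref{pcp_prob}) is typically solved by an augmented Lagrangian (ADMM-style) scheme that alternates singular value thresholding on ${\cal L}$ with entrywise soft thresholding on ${\cal S}$. The sketch below is our own minimal implementation of this standard scheme; it is not the solver used in \cite{rpca}'s experiments, and the step-size parameters are common defaults rather than prescriptions.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, lam=None, max_iter=500, tol=1e-7):
    """min ||L||_* + lam ||S||_1  s.t.  L + S = M  (inexact augmented Lagrangian)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))       # the choice suggested in the text
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)            # common initialization
    mu_max, rho = mu * 1e7, 1.5
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)   # dual variable
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)    # sparse update
        R = M - L - S                           # constraint residual
        Y = Y + mu * R
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(R) / norm_M < tol:
            break
    return L, S
```

On small synthetic low-rank plus sparse matrices with random support, this iteration typically recovers both components accurately, consistent with the PCP guarantee stated above.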
\subsection{Our Contributions} We show that (i) if an estimate of the subspace of $L_t$ at the initial time is available; (ii) if $L_t$ lies in a slowly changing low dimensional subspace as defined in Sec \ref{slowss}; (iii) if this subspace is dense; (iv) if the unestimated part of the changed subspace is dense at all times; and (v) if the subspace change model is known to the algorithm, then, w.h.p., ReProCS can exactly recover the support set of $S_t$ at all times, and the reconstruction errors of both $S_t$ and $L_t$ are upper bounded by a time invariant and small value. Moreover, after every subspace change time, w.h.p., the subspace error decays to a small enough value within a finite delay. Because (iv) depends on an algorithm estimate, our result, in its current form, cannot be interpreted as a correctness result but only as a useful step towards it. From simulation experiments, we have observed that (iv) holds for correlated support changes as long as the support changes every few frames. This connection is being quantified in ongoing work. Assumption (v) is also restrictive, and we explain in Sec \ref{discuss_add} how it can possibly be removed in future work. We also develop and analyze a generalization of ReProCS called ReProCS with cluster-PCA (ReProCS-cPCA) that is designed for a more general subspace change model and that needs an extra clustering assumption. Its main advantage is that it does not require a bound on the number of subspace changes, $J$, as long as the separation between the change points is allowed to grow logarithmically with $J$. Equivalently, it does not need a bound on the rank of ${\cal L}_t$. If $L_t$ is the signal of interest, then ReProCS is a solution to recursive robust PCA in the presence of sparse outliers. To the best of our knowledge, this is the first analysis of any recursive (online) robust PCA approach.
If $S_t$ is the signal of interest, then ReProCS is a solution to recursive sparse recovery in large but low-dimensional noise. To our knowledge, this work is also the first to analyze any recursive (online) sparse plus low-rank recovery algorithm. Another online algorithm that addresses this problem is given in \cite{grass_undersampled}; however, it does not contain any performance analysis. Our results directly apply to the recursive version of the matrix completion problem \cite{matrix_com_candes,Bresler_matrix} as well, since it is a simpler special case of the current problem (the support set of $S_t$ is the set of indices of the missing entries and is thus known) \cite{rpca}. The proof techniques used in our work are very different from those used to analyze other recent batch robust PCA works \cite{rpca,rpca2, novel_m_estimator,mccoy_tropp11, outlier_pursuit,rpca_tropp, linear_inverse_prob, rpca_reduced, rpca_Giannakis,rpca_zhang,compressivePCP,noisy_undersampled_yuan}. The works of \cite{mccoy_tropp11,outlier_pursuit} also study a different case: that where an entire vector is either an outlier or an inlier. Our proof utilizes (a) sparse recovery results \cite{candes_rip}; (b) results from matrix perturbation theory that bound the estimation error in computing the eigenvectors of a perturbed Hermitian matrix with respect to the eigenvectors of the original Hermitian matrix (the famous $\sin \theta$ theorem of Davis and Kahan \cite{davis_kahan}); and (c) high probability bounds on eigenvalues of sums of independent random matrices (the matrix Hoeffding inequality \cite{tail_bound}). A key difference of our approach to analyzing the subspace estimation step, compared with most existing work analyzing finite sample PCA, e.g. \cite{nadler} and references therein, is that it needs to provably work in the presence of error/noise that is correlated with $L_t$.
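The $\sin \theta$ style bound can be illustrated numerically: for a Hermitian $A$ with an eigen-gap after its top $r$ eigenvalues and a small Hermitian perturbation $H$, the sine of the largest principal angle between the top-$r$ eigenvector subspaces of $A$ and $A+H$ is at most $\|H\|_2$ divided by the gap minus $\|H\|_2$. The check below is our own illustration with synthetic matrices, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 3

# Hermitian A with a clear eigen-gap after the top r eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.array([10.0, 9.0, 8.0]), rng.uniform(0.0, 1.0, n - r)])
A = Q @ np.diag(eigs) @ Q.T

# Small Hermitian perturbation H.
G = rng.standard_normal((n, n))
H = 0.05 * (G + G.T) / 2

def top_subspace(M, r):
    """Orthonormal basis for the span of the top-r eigenvectors of Hermitian M."""
    w, V = np.linalg.eigh(M)
    return V[:, np.argsort(w)[::-1][:r]]

U = top_subspace(A, r)
Uhat = top_subspace(A + H, r)

# sin of the largest principal angle between span(U) and span(Uhat)
sin_theta = np.linalg.norm((np.eye(n) - U @ U.T) @ Uhat, 2)

gap = eigs[r - 1] - np.max(eigs[r:])    # lambda_r(A) - lambda_{r+1}(A)
bound = np.linalg.norm(H, 2) / (gap - np.linalg.norm(H, 2))
```

By Weyl's inequality, $\lambda_{r+1}(A+H) \le \lambda_{r+1}(A) + \|H\|_2$, so this `bound` is a valid relaxation of the Davis--Kahan bound $\|H\|_2 / (\lambda_r(A) - \lambda_{r+1}(A+H))$.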
Most existing works, including \cite{nadler} and the references it discusses, assume that the noise is independent of (or at least uncorrelated with) the data. However, in our case, because of how the estimate $\Lhat_t$ is computed, the error $e_t := L_t - \Lhat_t$ is correlated with $L_t$. As a result, the tools developed in these earlier works cannot be used for our problem. This is also the reason why simple PCA cannot be used and we need to develop and analyze projection-PCA based approaches for subspace estimation (see Appendix \ref{projpca} for details). The ReProCS approach is related to that of \cite{decodinglp,rpca_regression, rpca_regression_sparse} in that all of these first try to nullify the low dimensional signal by projecting the measurement vector into a subspace perpendicular to that of the low dimensional signal, and then solve for the sparse ``error" vector (outlier). However, the big difference is that in all of these works the basis for the subspace of the low dimensional signal is {\em perfectly known.} Our work studies {\em the case where the subspace is not known}. We have an initial approximate estimate of the subspace, but over time it can change significantly. In this work, to keep things simple, we use $\ell_1$ minimization done separately for each time instant (also referred to as basis pursuit denoising (BPDN)) \cite{candes_rip,bpdn}. However, this can be replaced by any other sparse recovery algorithm, either recursive or batch, as long as the batch algorithm is applied to $\alpha$ frames at a time, e.g. one can replace BPDN by modified-CS or support-predicted modified-CS \cite{rrpcp_isit}. \subsection{Paper Organization} The rest of the paper is organized as follows. We give the notation and background required for the rest of the paper in Sec \ref{bgnd}. The problem definition and the model assumptions are given in Sec \ref{probdef}. 
We explain the ReProCS algorithm and give its performance guarantees (Theorem \ref{thm1}) in Sec \ref{reprocs_sec}. The terms used in the proof are defined in Sec \ref{detailed}. The proof is given in Sec \ref{mainlemmas}. A more general subspace change model and ReProCS-cPCA, which is designed to handle this model, are given in Sec. \ref{Del_section}. We also give the main result for ReProCS-cPCA in this section and discuss it. A discussion with respect to the result for PCP \cite{rpca} is also provided here. Section \ref{thmproof} contains the proof of this theorem. In Sec \ref{model_verify}, we show that our slow subspace change model indeed holds for real videos. In Sec \ref{sims}, we show numerical experiments demonstrating Theorem \ref{thm1}, as well as comparisons of ReProCS with PCP. Conclusions and future work are given in Sec \ref{conc}. \subsection{Two Main Lemmas} In this and the following subsections, we remove the subscript $j$ at most places. Also recall from earlier that $P_{*} = P_{j-1}$. The theorem is a direct consequence of Lemmas \ref{lem_add} and \ref{lem_del} given below. Lemma \ref{lem_add} is a restatement of Lemmas \ref{expzeta} and \ref{mainlem} using the new definition of $\zeta_*^+$ and the new bound on $\zeta$ from Theorem \ref{thm2}. It summarizes the final conclusions of the addition step for ReProCS-cPCA. \begin{lem}[Final lemma for addition step]\label{lem_add} Assume that all the conditions in Theorem \ref{thm2} hold. Also assume that $\mathbf{P}(\Gamma_{j,k-1}^e ) > 0$. Then \begin{enumerate} \item $\zeta_0^+=1$, $\zeta_k^+ \leq 0.6^{k} + 0.4 c\zeta$ for all $k=1,2,\dots K$; \item $\mathbf{P}(\Gamma_{j,k}^e \ | \Gamma_{j,k-1}^e ) \geq p_k(\alpha,\zeta) \ge p_K(\alpha,\zeta)$ for all $k=1,2,\dots K$; \end{enumerate} where $\zeta_k^+$ is defined in Definition \ref{zetakplus} and $p_k(\alpha,\zeta)$ is defined in equation \eqref{pk}. \end{lem} The lemma below summarizes the final conclusions for the cluster-PCA step.
\begin{lem}[Final lemma for deletion (cluster-PCA) step]\label{lem_del} Assume that all the conditions in Theorem \ref{thm2} hold. Also assume that $\mathbf{P}(\tilde{\Gamma}_{j,k-1}^e ) > 0$. Then, \begin{enumerate} \item for all $k=1,2,\dots \vartheta_j$, $\mathbf{P} ( \tilde\Gamma_{j,k}^e \ | \ \tilde\Gamma_{j,k-1}^e) \geq \tilde{p}(\tilde{\alpha},\zeta)$ where $\tilde{p}(\tilde{\alpha},\zeta)$ is defined in Lemma \ref{lem_bound_terms}. \item $\mathbf{P} ( \Gamma_{j+1,0}^e \ | \ \tilde\Gamma_{j,\vartheta_j}^e) = 1$. \end{enumerate} \end{lem} \begin{proof} Notice that $\mathbf{P} ( \tilde\Gamma_{j,k}^e \ | \ \tilde\Gamma_{j,k-1}^e) = \mathbf{P} (\tilde{\zeta}_k \leq \tilde{c}_k \zeta \ \text{and} \ \That_t = T_t \ \text{for all} \ t \in \tilde{\mathcal{I}}_{j,k} \ | \ \tilde\Gamma_{j,k-1}^e)$ and $\mathbf{P} ( \Gamma_{j+1,0}^e \ | \ \tilde\Gamma_{j,\vartheta_j}^e) = \mathbf{P} ( \hat{T}_t = T_t \ \text{for all} \ t \in \mathcal{I}_{j,\vartheta_j+1} )$. The first claim of the lemma follows by combining Lemma \ref{tilde_zeta} and the last claim of Lemma \ref{cslem}. The second claim follows using the last claim of Lemma \ref{cslem}. \end{proof} \begin{remark}\label{Gamma_rem2_del} Under the assumptions of Theorem \ref{thm2}, \[ \Gamma_{j,0} \cap (\cap_{k=1}^{K} \check{\Gamma}_{j,k}) \cap (\cap_{k=1}^{\vartheta_j} \tilde{\check{\Gamma}}_{j,k}) \subseteq \Gamma_{j+1,0} \] This follows easily using Remark \ref{SE_rem} and the fact that $\sum_k \tilde{c}_k = r_j \le r$. \end{remark} \begin{remark}\label{Gamma_rem} Under the assumptions of Theorem \ref{thm2}, the following hold. \begin{enumerate} \item For any $k=1,2 \dots \vartheta_j+1$, $\tilde{\Gamma}_{j,k}^e$ implies (i) $\zeta_{j,K} \le c \zeta$, (ii) $\|\Phi_{j,K} P_j\|_2 \le (r+c) \zeta$. 
\begin{itemize} \item (i) follows from the first claim of Lemma \ref{lem_add} and the definition of $K$, (ii) follows using $\|\Phi_{j,K} P_j\|_2 \le \|\Phi_{j,K} [P_{*}, P_{\new}]\|_2 \le \zeta_{*} + \zeta_{K} \leq \zeta_*^+ +\zeta_{K}^+ \leq (r + c)\zeta$. \end{itemize} \item $\Gamma_{J+1,0}^e$ implies (i) $\zeta_{j,*} \le \zeta_{*}^+$ for all $j$, (ii) $\zeta_{j,k} \le 0.6^{k} + 0.4 c \zeta$ for all $k=1,\cdots,K$ and all $j$, (iii) $\zeta_{j,K} \le c \zeta$ for all $j$ \end{enumerate} \end{remark} \subsection{Proof of Theorem \ref{thm2}} \begin{proof} From Remark \ref{Gamma_rem2_del}, \begin{align*} \mathbf{P}(\Gamma_{j+1,0}^e | \Gamma_{j,0}^e) &\geq \mathbf{P}(\check{\Gamma}_{j,1}^e,\dots,\check{\Gamma}_{j,K}^e, \tilde{\check{\Gamma}}^e_{j,1},\dots,\tilde{\check{\Gamma}}^e_{j,\vartheta_j} | \Gamma_{j,0}) \\ & = \prod_{k=1}^{K}\mathbf{P}(\check{\Gamma}_{j,k}^e | {\Gamma}_{j,k-1}^e)\prod_{k=1}^{\vartheta_j}\mathbf{P}(\tilde{\check{\Gamma}}_{j,k}^e | \tilde{\Gamma}_{j,k-1}^e) \end{align*} Also, since $\Gamma_{j+1,0} \subseteq \Gamma_{j,0}$ using Lemma \ref{subset_lem}, $\mathbf{P}(\Gamma_{J+1,0}^e|\Gamma_{1,0}^e) = \prod_{j=1}^{J} \mathbf{P}(\Gamma_{j+1,0}^e|\Gamma_{j,0}^e)$. Thus \begin{align*} \mathbf{P}(\Gamma_{J+1,0}^e&|\Gamma_{1,0}^e)\geq \\ & \prod_{j=1}^J \left[ \prod_{k=1}^{K}\mathbf{P}(\check{\Gamma}_{j,k}^e | {\Gamma}_{j,k-1}^e)\prod_{k=1}^{\vartheta_j}\mathbf{P}(\tilde{\check{\Gamma}}_{j,k}^e | \tilde{\Gamma}_{j,k-1}^e) \right] \end{align*} Using Lemmas \ref{lem_add} and \ref{lem_del}, and the fact that $p_k(\alpha,\zeta) \ge p_K(\alpha,\zeta)$, we get $\mathbf{P}(\Gamma_{J+1,0}^e| \Gamma_{1,0}) \ge {p}_K(\alpha,\zeta)^{KJ} \tilde{p}(\tilde{\alpha},\zeta)^{\vartheta_{\max} J}$. Also, $\mathbf{P}(\Gamma_{1,0}^e)=1$. This follows by the assumption on $\hat{P}_0$ and Lemma \ref{cslem}. Thus, $\mathbf{P}(\Gamma_{J+1,0}^e) \ge {p}_K(\alpha,\zeta)^{KJ} \tilde{p}(\tilde{\alpha},\zeta)^{\vartheta_{\max} J}$. 
Using the definitions of $\alpha_\add(\zeta)$ and $\alpha_\del(\zeta)$, and the facts that $\alpha \ge \alpha_\add$ and $\tilde{\alpha} \ge \alpha_\del$, \begin{align*} \mathbf{P}(\Gamma_{J+1,0}^e) &\ge {p}_K(\alpha,\zeta)^{KJ} \tilde{p}(\tilde{\alpha},\zeta)^{\vartheta_{\max} J} \\ & \ge (1-n^{-10})^2 \ge 1- 2n^{-10} \end{align*} The event $\Gamma_{J+1,0}^e$ implies that $\That_t=T_t$ for all $t < t_{J+1}$. Using Remark \ref{SE_rem} and the last claim of Remark \ref{Gamma_rem}, $\Gamma_{J+1,0}^e$ implies that all the bounds on the subspace error hold. Using these, Remark \ref{etdef_rem}, $\|a_{t,\new}\|_2 \le \sqrt{c} \gamma_{\new,k}$ and $\|a_t\|_2 \le \sqrt{r} \gamma_*$, $\Gamma_{J+1,0}^e$ implies that all the bounds on $\|e_t\|_2$ hold (the bounds are obtained in Lemma \ref{cslem}). Thus, all conclusions of the result hold w.p. at least $1- 2n^{-10}$. \end{proof} \subsection{A lemma needed for getting high probability bounds on the subspace error} The following lemma is needed for bounding the subspace error, $\tilde\zeta_k$. \begin{lem}\label{bound_R} Assume that $\tilde{\zeta}_{k'} \leq \tilde{c}_{k'} \zeta$ for $k'=1,\cdots, k-1$. Then \begin{enumerate} \item $\|D_{\text{det},k}\|_2 = \|\Psi_{k-1} G_{\text{det},k}\|_2 \leq r \zeta$. \item $\|G_{\text{det},k} {G_{\text{det},k}}' - \hat{G}_{\text{det},k}{\hat{G}_{\text{det},k}}'\|_2 \leq 2 r\zeta$. \item $0< \sqrt{1-r^2 \zeta^2} \leq \sigma_i(D_k) = \sigma_i(R_k) \leq 1$. Thus, $\|D_k\|_2 = \|R_k\|_2 \le 1$ and $\|D_k^{-1}\|_2 = \|R_k^{-1}\|_2 \le 1/\sqrt{1-r^2 \zeta^2} $. \item $\|{D_{\text{undet},k}}'E_k \|_2 = \|{G_{\text{undet},k}}'E_k \|_2 \leq \frac{r^2 \zeta^2}{\sqrt{1-r^2\zeta^2}}$. \end{enumerate} \end{lem} \begin{proof} The proof is given in Appendix \ref{proof_lem_bound_R}. \end{proof} \subsection{Bounding the subspace error, $\tilde\zeta_k$} \begin{lem}[High probability bound on $\tilde{\zeta_k}$]\label{tilde_zeta} Assume that the conditions of Theorem \ref{thm2} hold.
Then, \begin{equation} \mathbf{P} (\tilde{\zeta}_k \leq \tilde{c}_k \zeta \ | \tilde\Gamma_{j,k-1}^e) \geq \tilde{p}(\tilde{\alpha},\zeta) \nonumber \end{equation} where $\tilde{p}(.)$ is defined in Lemma \ref{lem_bound_terms}. \end{lem} \begin{proof} This follows by combining Lemma \ref{bnd_tzetakp} and the last claim of Lemma \ref{lem_bound_terms}, both of which are given below. \end{proof} \begin{lem}[Bounding $\tilde{\zeta_k}^+$] \label{bnd_tzetakp} If \begin{align*} f_{dec}(\tilde{g}_{\max},\tilde{h}_{\max}, &\kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta) - \\ &\frac{f_{inc}(\tilde{g}_{\max},\tilde{h}_{\max},\kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta)}{\tilde{c}_{\min} \zeta} > 0 \label{Func_del} \end{align*} then $f_{dec}(\tilde{g}_k,\tilde{h}_k, \kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta) >0$ and $\tilde\zeta_k^+ \le \tilde{c}_k \zeta$. \end{lem} \begin{proof} Recall from Definition \ref{def_zeta} that $\tilde{\zeta_k}^+ := \frac{f_{inc}(\tilde{g}_k,\tilde{h}_k,\kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta)}{f_{dec}(\tilde{g}_k,\tilde{h}_k,\kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta)}$. Notice that $f_{inc}(.)$ is an increasing function of $\tilde{g},\tilde{h}$, and $f_{dec}(.)$ is a decreasing function. Using the definition of $\tilde{g}_{\max},\tilde{h}_{\max}, \tilde{c}_{\min}$ given in Assumption \ref{clusterass}, the result follows. \end{proof} \begin{lem}[Bounding $\tilde{\zeta_k}$] \label{defnPCA} If $\lambda_{\min}(\tilde{A}_k) - \lambda_{\max}(\tilde{A}_{k,\perp}) - \|\tilde{\mathcal{H}}_k\|_2 >0$, then \begin{equation} \tilde{\zeta_k} \leq \frac{\|\tilde{\mathcal{H}}_k\|_2}{\lambda_{\min} (\tilde{A}_k) - \lambda_{\max} (\tilde{A}_{k,\perp}) - \|\tilde{\mathcal{H}}_k\|_2} \label{zetakbnd_del} \end{equation} \end{lem} \begin{proof} The proof is the same as that of Lemma \ref{zetakbnd}. 
\end{proof} \begin{lem}[High probability bounds for each of the terms in the $\tilde{\zeta}_k$ bound and for $\tilde\zeta_k$]\label{lem_bound_terms} Assume that the conditions of Theorem \ref{thm2} hold. Also, assume that $\mathbf{P}(\tilde\Gamma_{j,k-1}^e)>0$. Then, for all $1 \leq k \leq \vartheta_j$, \begin{enumerate} \item $\mathbf{P}(\lambda_{\min}(\tilde{A}_{k}) \geq \lambda_{k}^-(1-r^2 \zeta^2 - 0.1 \zeta) | \tilde\Gamma_{j,k-1}^e) >1- \tilde{p}_1(\tilde{\alpha},\zeta)$ with $\tilde{p}_1(\tilde{\alpha},\zeta)$ given in (\ref{prob1}). \item $\mathbf{P}(\lambda_{\max}(\tilde{A}_{k,\perp}) \leq \lambda_k^- (\tilde{h}_k+r^2 \zeta^2 f+0.1 \zeta) | \tilde\Gamma_{j,k-1}^e) > 1-\tilde{p}_2(\tilde{\alpha},\zeta)$ with $\tilde{p}_2(\tilde{\alpha},\zeta)$ given in (\ref{prob2}). \item $\mathbf{P}(\|\tilde{\mathcal{H}}_{k}\|_2 \leq \lambda_k^- f_{inc}(\tilde{g}_k,\tilde{h}_k,\kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta) \ |\tilde\Gamma_{j,k-1}^e) \geq 1 - \tilde{p}_3(\tilde{\alpha},\zeta)$ with $\tilde{p}_3(\tilde{\alpha},\zeta)$ given in (\ref{prob3}). \item $\mathbf{P}( \lambda_{\min} (\tilde{A}_k) - \lambda_{\max} (\tilde{A}_{k,\perp}) - \|\tilde{\mathcal{H}}_k\|_2 \ge \lambda_k^- f_{dec}(\tilde{g}_k,\tilde{h}_k,\kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta) \ |\tilde\Gamma_{j,k-1}^e) \geq \tilde{p}(\tilde{\alpha},\zeta) :=1- \tilde{p}_{1}(\tilde{\alpha},\zeta) - \tilde{p}_{2}(\tilde{\alpha},\zeta) - \tilde{p}_{3}(\tilde{\alpha},\zeta)$. \item If $f_{dec}(\tilde{g}_k,\tilde{h}_k) >0$, then $\mathbf{P}(\tilde{\zeta}_k \leq \tilde\zeta_k^+ \ | \tilde\Gamma_{j,k-1}^e) \geq \tilde{p} (\tilde{\alpha},\zeta)$. \end{enumerate} \end{lem} \begin{proof} Recall that $f_{inc}(.)$, $f_{dec}(.)$ and $\tilde\zeta_k^+$ are defined in Definition \ref{def_zeta}. The proof of the first three claims is given in Appendix \ref{proof_lem_bound_terms}. This proof uses Lemmas \ref{bound_R} and \ref{cslem}, Remark \ref{rem_kappa_D}, and the Hoeffding corollaries.
The fourth claim follows directly from the first three using the union bound on probabilities. The fifth claim follows from the fourth using Lemma \ref{defnPCA}. \end{proof} \section{Proof of Theorem \ref{thm1}} \label{mainlemmas} \subsection{Two Main Lemmas and Proof of Theorem \ref{thm1}} The proof of Theorem \ref{thm1} essentially follows from two main lemmas that we state below. Lemma \ref{expzeta} gives an exponentially decaying upper bound on $\zeta_k^+$, defined in Definition \ref{zetakplus}; $\zeta_k^+$ will be shown to be a high probability upper bound for $\zeta_k$ under the assumptions of the theorem. Lemma \ref{mainlem} says that conditioned on $X_{j,k-1}\in\Gamma_{j,k-1}$, $X_{j,k}$ will be in $\Gamma_{j,k}$ w.h.p. In words, this says that if, during the time interval $\mathcal{I}_{j,k-1}$, the algorithm has worked well (recovered the support of $S_t$ exactly and recovered the background subspace with subspace recovery error below $\zeta_{k-1}^+ + \zeta_*^+$), then it will also work well in $\mathcal{I}_{j,k}$ w.h.p. \begin{lem}[Exponential decay of $\zeta_k^+$] \label{expzeta} Assume that the bounds on $\zeta$ from Theorem \ref{thm1} hold. Define the sequence $\zeta_k^+$ as in Definition \ref{zetakplus}. Then \begin{enumerate} \item $\zeta_0^+ = 1$ and $\zeta_k^+ \leq 0.6^k + 0.4c\zeta$ for all $k = 1, 2, \dots, K,$ \item the denominator of $\zeta_k^+$ is positive for all $k = 1, 2, \dots, K$. \end{enumerate} \end{lem} We prove this lemma in Section \ref{pfoflem1}. \begin{lem}\label{mainlem} Assume that all the conditions of Theorem \ref{thm1} hold. Also assume that $\mathbf{P}(\Gamma^e_{j,k-1})>0.$ Then \[ \mathbf{P}(\Gamma^e_{j,k}|\Gamma^e_{j,k-1}) \geq p_k(\alpha,\zeta) \geq p_K(\alpha,\zeta) \] for all $ k = 1, 2, \ldots, K$, and \[ \mathbf{P}(\Gamma^e_{j,K+1}|\Gamma^e_{j,K}) = 1 \] where $p_k(\alpha,\zeta)$ is defined in equation \eqref{pk}. \end{lem} We prove this lemma in Section \ref{pfoflem2}.
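The probability bookkeeping in these lemmas ultimately reduces to chaining conditional success probabilities across the $KJ$ update steps and lower bounding the product via Bernoulli's inequality, $(1-\epsilon)^m \ge 1 - m\epsilon$. The following is a quick numerical sanity check of that reduction, with hypothetical values of $K$, $J$ and per-step failure probability $\epsilon$ (these numbers are ours, not from the theorem).

```python
K, J = 8, 10        # hypothetical numbers of PCA update steps and change times
eps = 1e-12         # hypothetical per-step failure probability, e.g. O(n^{-10})

m = K * J
# If each of the m steps succeeds w.p. >= 1 - eps given the previous ones,
# chaining gives a product lower bound, which Bernoulli's inequality relaxes.
product_bound = (1 - eps) ** m
union_bound = 1 - m * eps
```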
\begin{remark}\label{Gamma_rem2} Using Lemma \ref{expzeta} and Remark \ref{zetastar} and the value of $K$ given in the theorem, it is easy to see that, under the assumptions of Theorem \ref{thm1}, $$\Gamma_{j,0} \cap (\cap_{k=1}^{K+1} \check\Gamma_{j,k}) \subseteq \Gamma_{j+1,0}.$$ Thus $\mathbf{P}(\Gamma_{j+1,0}^e|\Gamma^e_{j,0}) \ge \mathbf{P}(\check\Gamma^e_{j,1}, \dots \check\Gamma^e_{j,K+1} | \Gamma^e_{j,0})$. \end{remark} \vspace{.4 cm} \begin{proof}[\textbf{Proof of Theorem \ref{thm1}}] The theorem is a direct consequence of Lemmas \ref{expzeta}, \ref{mainlem}, and \ref{subset_lem}. From Remark \ref{Gamma_rem2}, $\mathbf{P}(\Gamma_{j+1,0}^e|\Gamma_{j,0}^e) \ge \mathbf{P}(\check\Gamma_{j,1}^e, \dots \check\Gamma_{j,K+1}^e | \Gamma_{j,0}^e) = \prod_{k=1}^{K+1} \mathbf{P}(\check\Gamma_{j,k}^e|\Gamma_{j,k-1}^e)$. Also, since $\Gamma_{j+1,0} \subseteq \Gamma_{j,0}$, using Lemma \ref{subset_lem}, $\mathbf{P}(\Gamma_{J+1,0}^e | \Gamma_{1,0}^e) = \prod_{j=1}^{J} \mathbf{P}(\Gamma_{j+1,0}^e | \Gamma_{j,0}^e)$. Thus, \[ \mathbf{P}(\Gamma_{J+1,0}^e | \Gamma_{1,0}^e) \ge \prod_{j=1}^{J} \prod_{k=1}^{K+1} \mathbf{P}(\check\Gamma_{j,k}^e|\Gamma_{j,k-1}^e) \] Using Lemma \ref{mainlem}, and the fact that $p_k(\alpha,\zeta) \geq p_K(\alpha,\zeta)$ (see their respective definitions in Lemma \ref{termbnds} and equation \eqref{pk} and observe that $p_k(\alpha,\zeta)$ is decreasing in $k$), we get $$\mathbf{P}(\Gamma_{J+1,0}^e| \Gamma_{1,0}^e) \geq {p}_K(\alpha,\zeta)^{KJ}.$$ Also, $\mathbf{P}(\Gamma_{1,0}^e)=1$. This follows by the assumption on $\hat{P}_0$ and Lemma \ref{cslem}. Thus, $\mathbf{P}(\Gamma_{J+1,0}^e) \geq {p}_K(\alpha,\zeta)^{KJ}$. Using the definition of $\alpha_\add$, and $\alpha \geq \alpha_{\add}$, we get that $$\mathbf{P}(\Gamma_{J+1,0}^e) \geq {p}_K(\alpha,\zeta)^{KJ} \geq 1- n^{-10}.$$ The event $\Gamma_{J+1,0}^e$ implies that $\That_t=T_t$ and $e_t$ satisfies (\ref{etdef0}) for all $t < t_{J+1}$.
Using Remarks \ref{zetastar} and \ref{Gamma_rem2}, $\Gamma_{J+1,0}^e$ implies that all the bounds on the subspace error hold. Using these, $\|a_{t,\new}\|_2 \le \sqrt{c} \gamma_{\new,k}$, and $\|a_t\|_2 \le \sqrt{r} \gamma_*$, $\Gamma_{J+1,0}^e$ implies that all the bounds on $\|e_t\|_2$ hold (the bounds are obtained in Lemma \ref{cslem}). Thus, all conclusions of the result hold w.p. at least $1- n^{-10}$. \end{proof} \subsection{Proof of Lemma \ref{expzeta} } \label{pfoflem1} \begin{proof} First recall the definition of $\zeta_k^+$ (Definition \ref{zetakplus}). Recall from Definition \ref{kappaplus} that $\kappa_s^+ := 0.15$, $\phi^+ := 1.1735$, and $g^+ := \sqrt{2}$. So we can make these substitutions directly. Notice that $\zeta_k^+$ is an increasing function of $\zeta_*^+, \zeta, c$, and $f$. Therefore we can use upper bounds on each of these quantities to get an upper bound on $\zeta_k^+$. From the definition of $\zeta$ in Theorem \ref{thm1} and $\zeta_{j,*}^+ := (r_0 + (j-1)c)\zeta$ we get \begin{itemize} \item $\zeta_{j,*}^+ \leq 10^{-4} $ \item $\zeta_{j,*}^+ f \leq 1.5 \times 10^{-4}$ \item $c\zeta \leq 10^{-4}$ \item $\displaystyle \frac{\zeta_{j,*}^+}{c\zeta} = \frac{(r_0+(j-1)c)\zeta}{c\zeta} \leq \frac{r_0 +(J-1)c}{c} = \frac{r}{c}\leq r$ (Without loss of generality we can assume that $c = c_{\max} \geq 1$ because if $c=0$ then there is no subspace estimation problem to be solved. $c=0$ is the trivial case where all conclusions of Theorem \ref{thm1} will hold just using Lemma \ref{cslem}.) \item $\zeta_{j,*}^+ f r \leq r^2 f \zeta \leq 1.5 \times 10^{-4}$ \end{itemize} First we prove by induction that $\zeta_k^+ \leq \zeta_{k-1}^+$ for all $k\geq 1$; in particular, $\zeta_k^+ \leq \zeta_1^+ \leq 0.6$ for all $k \geq 1$. Notice that $\zeta_0^+ =1$ by definition. \begin{itemize} \item Base case ($k=1$): Using the above bounds we get that $\zeta_1^+ < 0.5985 < 1 = \zeta_0^+$. \item For the induction step, assume that $\zeta_{k-1}^+ \leq \zeta_{k-2}^+$.
Then because $\zeta_k^+$ is increasing in $\zeta_{k-1}^+$ (denote the increasing function by $f_{inc}$) we get that $\zeta_{k}^+ = f_{inc}(\zeta_{k-1}^+) \leq f_{inc}(\zeta_{k-2}^+) = \zeta_{k-1}^+$. \end{itemize} \begin{enumerate} \item To prove the first claim, first rewrite $\zeta_k^+$ as \begin{multline*} \zeta_k^+ = \zeta_{k-1}^+ \frac{ C \kappa_s^+ g^+ + \tilde{C} (\kappa_s^+)^2 g^+ (\zeta_{k-1}^+) }{1 - (\zeta_*^+)^2 - (\zeta_*^+)^2 f - 0.125 c \zeta - b} + \\ c\zeta\frac{ C(\zeta_*^+ f)\frac{ (\zeta_*^+)}{c\zeta} + .125}{1 - (\zeta_*^+)^2 - (\zeta_*^+)^2 f - 0.125 c \zeta - b} \end{multline*} where $C, \tilde{C},$ and $b$ are as in Definition \ref{zetakplus}. Using the above bounds including $\zeta_{k-1}^+ \leq 0.6$ we get that \begin{align*} \zeta_k^+ &\leq \zeta_{k-1}^+(0.6) + c\zeta(0.16) \\ &\leq \zeta_0^+ (0.6)^{k} + \sum_{i = 0}^{k-1}(0.6)^{i}(0.16) c\zeta \\ &\leq \zeta_0^+ (0.6)^{k} + \sum_{i = 0}^{\infty}(0.6)^{i}(0.16) c\zeta \\ &\leq 0.6^k + 0.4 c\zeta \end{align*} \item To see that the denominator is positive, observe that the denominator is decreasing in all of its arguments: $\zeta_{j,*}^+, \zeta_{j,*}^+ f, c\zeta$, and $b$. Using the same upper bounds as before, we get that the denominator is greater than or equal to $0.78 > 0$. \end{enumerate} \end{proof} \subsection{Proof of Lemma \ref{mainlem} }\label{pfoflem2} The proof of Lemma \ref{mainlem} follows from two lemmas. The first, Lemma \ref{cslem}, is the final conclusion for the projected CS step for $t\in \mathcal{I}_{j,k}$. Its proof follows using Lemmas \ref{expzeta}, \ref{delta_kappa}, \ref{hatswitch}, the CS error bound (Theorem \ref{candes_csbound}) and some straightforward steps. The second, Lemma \ref{zetak}, is the final conclusion for one projection PCA step, i.e. for $t\in \mathcal{I}_{j,k}$. Its proof is much longer. It first uses a lemma based on the $\sin\theta$ and Weyl theorems (Theorems \ref{sin_theta} and \ref{weyl}) to get a bound on $\zeta_k$. This is Lemma \ref{zetakbnd}.
Next we bound $\kappa_s(D_\new)$ in Lemma \ref{Dnew0_lem}. Finally in Lemma \ref{termbnds}, we use the expression for $e_t$ from Lemma \ref{cslem}, the matrix Hoeffding inequalities (Corollaries \ref{hoeffding_nonzero} and \ref{hoeffding_rec}) and the bound from Lemma \ref{Dnew0_lem} to bound each of the terms in the bound on $\zeta_k$ to finally show that, conditioned on $\Gamma_{j,k-1}^e$, $\zeta_k \le \zeta_k^+$ w.h.p.. We state the two lemmas first and then proceed to prove them in order. \begin{lem}[Projected CS Lemma]\label{cslem} Assume that all conditions of Theorem \ref{thm1} hold. \begin{enumerate} \item For all $t \in \mathcal{I}_{j,k}$, for any $k=1,2,\dots K$, if $X_{j,k-1} \in \Gamma_{j,k-1}$, \begin{enumerate} \item the projection noise $\beta_t$ satisfies $\|\beta_t\|_2 \leq \zeta_{k-1}^+ \sqrt{c} \gamma_{\new,k} + \zeta_{*}^+ \sqrt{r} \gamma_* \le \sqrt{c} 0.72^{k-1} \gamma_{\new} + 1.06 \sqrt{\zeta} \le \xi_0$. \item the CS error satisfies $\|\hat{S}_{t,\cs} - S_t\|_2 \le7 \xi_0$. \item $\hat{T}_t = T_t$ \item $e_t$ satisfies \eqref{etdef0} and $\|e_t\|_2 \leq \phi^+ [\kappa_s^+ \zeta_{k-1}^+ \sqrt{c} \gamma_{\new,k} + \zeta_{*}^+ \sqrt{r} \gamma_*] \le 0.18 \cdot 0.72^{k-1} \sqrt{c}\gamma_{\new} + 1.17 \cdot 1.06 \sqrt{\zeta}$. Recall that \eqref{etdef0} is \[ I_{T_t} {(\Phi_{(t)})_{T_t}}^{\dag} \beta_t = I_{T_t} [ (\Phi_{(t)})_{T_t}'(\Phi_{(t)})_{T_t}]^{-1} {I_{T_t}}' \Phi_{(t)} L_t \] \end{enumerate} \item For all $k=1,2,\dots K$, $\mathbf{P}(\That_t = T_t \ \text{and} \ e_t \ \text{satisfies (\ref{etdef0})} \text{ for all } t \in \mathcal{{I}}_{j,k} | X_{j,k-1} ) = 1$ for all $X_{j,k-1} \in \Gamma_{j,k-1}$. \end{enumerate} \end{lem} \begin{lem}[Projection PCA Lemma] \label{zetak} Assume that all the conditions of Theorem \ref{thm1} hold. 
Then, for all $k=1,2, \dots K$, \[ \mathbf{P}(\zeta_{k} \le \zeta_k^+|\egam_{j,k-1}) \ge p_k(\alpha,\zeta) \] where $\zeta_k^+$ is defined in Definition \ref{zetakplus} and $p_k(\alpha,\zeta)$ is defined in \eqref{pk}. \end{lem} \vspace{.5 cm} \begin{proof}[Proof of Lemma \ref{mainlem}] Observe that $\mathbf{P}(\Gamma_{j,k}|\Gamma_{j,k-1}) = \mathbf{P}( \check{\Gamma}_{j,k} | \Gamma_{j,k-1})$. The lemma then follows by combining Lemma \ref{zetak} and item 2 of Lemma \ref{cslem} and Lemma \ref{rem_prob}. \end{proof} \vspace{.4 cm} \subsection{Proof of Lemma \ref{cslem}} We begin by first bounding the RIC of the CS matrix $\Phi_k.$ \begin{lem}[Bounding the RIC of $\Phi_k$] \label{RIC_bnd} Recall that $\zeta_*:= \|(I-\Phat_*{\Phat_*}')P_*\|_2$. The following hold. \begin{enumerate} \item Suppose that a basis matrix $P$ can be split as $P = [P_1, P_2]$ where $P_1$ and $P_2$ are also basis matrices. Then $\kappa_s^2 (P) = \max_{T: |T| \le s} \|I_T'P\|_2^2 \le \kappa_s^2 (P_1) + \kappa_s^2 (P_2)$. \item $\kappa_s^2(\Phat_*) \leq \kappa_{s,*}^2 + 2\zeta_*$ \item $\kappa_s (\Phat_{\new,k}) \leq \kappa_{s,\new} + \tilde{\kappa}_{s,k} \zeta_k + \zeta_*$ \item $\delta_{s} (\Phi_0) = \kappa_s^2 (\Phat_*) \leq \kappa_{s,*}^2 + 2 \zeta_*$ \item $\delta_{s}(\Phi_k) = \kappa_s^2 ([\Phat_* \ \Phat_{\new,k}]) \leq \kappa_s^2 (\Phat_*) + \kappa_s^2 (\Phat_{\new,k})\leq \kappa_{s,*}^2 + 2\zeta_* + (\kappa_{s,\new} + \tilde{\kappa}_{s,k} \zeta_k + \zeta_*)^2$ for $k \ge 1$ \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item Since $P$ is a basis matrix, $\kappa_s^2 (P) = \max_{|T| \leq s} \|{I_T}' P\|_2^2$. Also, $\|{I_T}' P\|_2^2 = \|{I_T}' [P_1, P_2] [P_1, P_2]' I_T \|_2 = \|{I_T}' (P_1P_1' + P_2 P_2') I_T \|_2 \le \|{I_T}' P_1 P_1' I_T\|_2 + \|{I_T}' P_2 P_2' I_T\|_2$. Thus, the inequality follows. 
\item For any set $T$ with $|T| \le s$, $\|{I_T}' \Phat_*\|_2^2 = \|{I_T}' \Phat_* {\Phat_*}'I_T\|_2 =\|{I_T}'( \Phat_* {\Phat_*}' -P_* {P_*}' + P_*{P_*}')I_T\|_2 \leq \|{I_T}'( \Phat_* {\Phat_*}' -P_* {P_*}')I_T\|_2 + \|{I_T}'P_* {P_*}' I_T\|_2 \leq 2\zeta_* + \kappa_{s,*}^2$. The last inequality follows using Lemma \ref{lemma0} with $P=P_*$ and $\hat{P} = \hat{P}_*$. \item By Lemma \ref{lemma0} with $P = P_*$, $\Phat = \Phat_*$ and $Q = P_\new$, $\|{P_{\new}}' \Phat_*\|_2 \leq \zeta_*$. By Lemma \ref{lemma0} with $P = P_\new$ and $\hat{P} = \hat{P}_{\new,k}$, $\|(I-P_\new P_\new')\Phat_{\new,k}\|_2 = \|(I-\Phat_{\new,k}{\Phat_{\new,k}}')P_{\new}\|_2$. For any set $T$ with $|T| \leq s$, $\|{I_T}'\Phat_{\new,k}\|_2 \leq \|{I_T}'(I-P_{\new}P_{\new}') \Phat_{\new,k}\|_2 + \|{I_T}'P_{\new}P_{\new}' \Phat_{\new,k}\|_2 \leq \tilde{\kappa}_{s,k} \|(I- P_{\new}{P_{\new}}')\Phat_{\new,k}\|_2 + \|{I_T}'P_{\new}\|_2 = \tilde{\kappa}_{s,k} \|(I-\Phat_{\new,k}{\Phat_{\new,k}}')P_{\new}\|_2 + \|{I_T}'P_{\new}\|_2 \leq \tilde{\kappa}_{s,k} \|D_{\new,k}\|_2 + \tilde{\kappa}_{s,k}\| \Phat_* {\Phat_*}'P_{\new}\|_2 + \|{I_T}'P_{\new}\|_2 \le \tilde{\kappa}_{s,k}\zeta_{k} + \tilde{\kappa}_{s,k} \zeta_* + \kappa_{s,\new} \leq \tilde{\kappa}_{s,k}\zeta_{k} + \zeta_* + \kappa_{s,\new}$. Taking $\max$ over $|T| \le s$ the claim follows. \item This follows using Lemma \ref{delta_kappa} and the second claim of this lemma. \item This follows using Lemma \ref{delta_kappa} and the first three claims of this lemma. 
\end{enumerate} \end{proof} \begin{corollary}\label{RICnumbnd} If the conditions of Theorem \ref{thm1} are satisfied, and $X_{j,k-1}\in \Gamma_{j,k-1}$, then \begin{enumerate} \item $\delta_s(\Phi_0) \leq \delta_{2s}(\Phi_0) \leq {\kappa_{2s,*}^+}^2 + 2\zeta_*^+ <0.1 < 0.1479$ \item $\delta_s(\Phi_{k-1}) \leq \delta_{2s}(\Phi_{k-1}) \leq {\kappa_{2s,*}^+}^2 + 2\zeta_*^+ +(\kappa_{2s,\new}^+ + \tilde{\kappa}_{2s,k-1}^+ \zeta_{k-1}^+ + \zeta_*^+)^2 <0.1479$ \item $\phi_{k-1} \le \frac{1}{1-\delta_s(\Phi_{k-1})} < \phi^+$ \end{enumerate} \end{corollary} \begin{proof} This follows using Lemma \ref{RIC_bnd}, the definition of $\Gamma_{j,k-1}$, and the bound on $\zeta_{k-1}^+$ from Lemma \ref{expzeta}. \end{proof} The following are straightforward bounds that will be useful for the proof of Lemma \ref{cslem} and later. \begin{fact}\label{constants} Under the assumptions of Theorem \ref{thm1}: \begin{enumerate} \item $\zeta \gamma_* \le \frac{\sqrt{\zeta}}{(r_0 + (J-1) c)^{3/2}} \le \sqrt{\zeta}$ \item $\zeta_{j,*}^+ \leq \frac{10^{-4}}{(r_0 + (J-1) c)} \le 10^{-4}$ \item $ \zeta_{j,*}^+ \gamma_*^2 \leq \frac{1}{(r_0 + (J-1)c)^2} \le 1$ \item $ \zeta_{j,*}^+ \gamma_* \le \frac{\sqrt{\zeta}}{\sqrt{r_0 + (J-1)c}} \le \sqrt{\zeta}$ \item $\zeta_{j,*}^+ f \leq \frac{1.5 \times 10^{-4}}{r_0 + (J-1)c} \le 1.5 \times 10^{-4}$ \item $\zeta_{k-1}^+ \leq 0.6^{k-1} + 0.4 c\zeta$ (from Lemma \ref{expzeta}) \item $\zeta_{k-1}^+ \gamma_{\new,k} \leq (0.6 \cdot 1.2)^{k-1} \gamma_{\new} + 0.4c\zeta \gamma_* \le 0.72^{k-1}\gamma_{\new} + \frac{0.4\sqrt{\zeta}}{\sqrt{r_0 + (J-1)c}} \le 0.72^{k-1}\gamma_{\new} + 0.4\sqrt{\zeta}$ \item $\zeta_{k-1}^+ \gamma_{\new,k}^2 \leq (0.6 \cdot 1.2^2)^{k-1} \gamma_{\new}^2 + 0.4 c\zeta \gamma_*^2 \le 0.864^{k-1}\gamma_{\new}^2 + \frac{0.4}{{(r_0 + (J-1)c})^2} \le 0.864^{k-1}\gamma_{\new}^2 + 0.4$ \end{enumerate} \end{fact} \begin{proof}[Proof of Lemma \ref{cslem}] Recall that $X_{j,k-1} \in \Gamma_{j,k-1}$ implies that $\zeta_{j,*} \leq \zeta_{j,*}^+$ 
and $\zeta_{k-1}\leq \zeta_{k-1}^+$. \begin{enumerate} \item \begin{enumerate} \item For $t \in \mathcal{I}_{j,k}$, $\beta_t := (I-\Phat_{(t-1)} {\Phat_{(t-1)} }') L_t = D_{*,k-1} a_{t,*} + D_{\new,k-1} a_{t,\new} $. Thus, using Fact \ref{constants} \begin{align*} \|\beta_t\|_2 & \leq \zeta_{j,*} \sqrt{r} \gamma_* + \zeta_{k-1} \sqrt{c} \gamma_{\new,k} \\ & \leq \sqrt{\zeta}\sqrt{r} + (0.72^{k-1}\gamma_{\new} + .4\sqrt{\zeta})\sqrt{c} \\ & = \sqrt{c} 0.72^{k-1} \gamma_{\new} + \sqrt{\zeta} (\sqrt{r} + 0.4\sqrt{c}) \leq \xi_0. \end{align*} \item By Corollary \ref{RICnumbnd}, $\delta_{2s} (\Phi_{k-1}) < 0.15< \sqrt{2}-1$. Given $|T_t| \leq s$, $\|\beta_t\|_2 \leq \xi_0 = \xi$, by Theorem \ref{candes_csbound}, the CS error satisfies \[ \|\hat{S}_{t,\cs} - S_t\|_2 \leq \frac{4\sqrt{1+\delta_{2s}(\Phi_{k-1})}}{1-(\sqrt{2}+1)\delta_{2s}(\Phi_{k-1})} \xi_0 < 7 \xi_0. \] \item Using the above, $\|\hat{S}_{t,\cs} - S_t\|_{\infty} \leq 7 \xi_0$. Since $\min_{i\in T_t} |(S_t)_{i}| \geq S_{\min}$ and $(S_t)_{T_t^c} = 0$, $\min_{i\in T_t} |(\hat{S}_{t,cs})_i| \geq S_{\min} - 7 \xi_0$ and $\max_{i\in T_t^c} |(\hat{S}_{t,\cs})_i| \leq 7 \xi_0$. If $\omega < S_{\min} - 7 \xi_0$, then $\hat{T}_t \supseteq T_t$. On the other hand, if $\omega > 7 \xi_0$, then $\hat{T}_t \subseteq T_t$. Since $S_{\min} > 14 \xi_0$ (condition 3 of the theorem) and $\omega$ satisfies $7 \xi_0 \leq \omega \leq S_{\min} -7 \xi_0$ (condition 1 of the theorem), the support of $S_t$ is exactly recovered, i.e. $\hat{T}_t = T_t$. \item Given $\hat{T}_t = T_t$, the LS estimate of $S_t$ satisfies $(\hat{S}_t)_{T_t} = [(\Phi_{k-1})_{T_t}]^{\dag} y_t =[(\Phi_{k-1})_{T_t}]^{\dag} (\Phi_{k-1} S_t + \Phi_{k-1}L_t)$ and $(\hat{S}_t)_{T_t^c} = 0$ for $t \in \mathcal{I}_{j,k}$. Also, ${(\Phi_{k-1})_{T_t}}' \Phi_{k-1} = {I_{T_t}}' \Phi_{k-1}$ (this follows since $(\Phi_{k-1})_{T_t} = \Phi_{k-1} I_{T_t}$ and $\Phi_{k-1}'\Phi_{k-1} = \Phi_{k-1}$).
Using this, the LS error $e_t := \hat{S}_t - S_t$ satisfies (\ref{etdef0}). Thus, using Fact \ref{constants} and condition 2 of the theorem, \begin{align*} \|e_t\|_2 & \le \phi^+ (\zeta_{j,*}^+ \sqrt{r}\gamma_* + \kappa_{s,k-1} \zeta_{k-1}^+ \sqrt{c}\gamma_{\new,k}) \\ & \le 1.2 \left(\sqrt{r}\sqrt{\zeta}+ \sqrt{c} 0.15 (0.72)^{k-1} \gamma_{\new} + \right. \\ & \hspace{1.5in} \left. \sqrt{c} 0.06\sqrt{\zeta}\right) \\ & = 0.18 \sqrt{c}0.72^{k-1}\gamma_{\new} + \\ &\hspace{.8in} 1.2 \sqrt{\zeta}(\sqrt{r} + 0.06 \sqrt{c}). \end{align*} \end{enumerate} \item The second claim is just a restatement of the first. \end{enumerate} \end{proof} \subsection{Proof of Lemma \ref{zetak}} The proof of Lemma \ref{zetak} will use the next three lemmas. \begin{lem}\label{zetakbnd} If $\lambda_{\min}(A_k) - \|A_{k,\perp}\|_2 - \|\mathcal{H}_k\|_2 >0$, then \begin{align*} \label{zetakbound} \zeta_k &\leq \frac{\|\mathcal{R}_k\|_2}{\lambda_{\min} (A_k) - \|A_{k,\perp}\|_2 - \|\mathcal{H}_k\|_2} \\ & \leq \frac{\|\mathcal{H}_k\|_2}{\lambda_{\min} (A_k) - \|A_{k,\perp}\|_2 - \|\mathcal{H}_k\|_2} \end{align*} where $\mathcal{R}_k := \mathcal{H}_k E_\new$ and $A_k$, $A_{k,\perp}$, $\mathcal{H}_k$ are defined in Definition \ref{defHk}. \end{lem} \begin{proof} Since $\lambda_{\min}(A_k) - \|A_{k,\perp}\|_2 - \|\mathcal{H}_k\|_2 >0$, we have $\lambda_{\min}(A_k) > \|A_{k,\perp}\|_2$. Since $A_k$ is of size $c_\new \times c_\new$ and $\lambda_{\min}(A_k) > \|A_{k,\perp}\|_2$, $\lambda_{c_\new+1} (\mathcal{A}_k) = \|A_{k,\perp}\|_2$. By definition of EVD, and since $\Lambda_k$ is a $c_\new \times c_\new$ matrix, $\lambda_{\max}(\Lambda_{k,\perp}) = \lambda_{c_\new+1}(\mathcal{A}_k + \mathcal{H}_k)$. By Weyl's theorem (Theorem \ref{weyl}), $\lambda_{c_\new+1}(\mathcal{A}_k + \mathcal{H}_k) \leq \lambda_{c_\new+1} (\mathcal{A}_k) + \|\mathcal{H}_k\|_2 = \|A_{k,\perp}\|_2 + \|\mathcal{H}_k\|_2$.
Therefore, $\lambda_{\max}(\Lambda_{k,\perp})\leq \|A_{k,\perp}\|_2 + \|\mathcal{H}_k\|_2$ and hence $\lambda_{\min}(A_k) - \lambda_{\max}(\Lambda_{k,\perp})\geq \lambda_{\min}(A_k) - \|A_{k,\perp}\|_2 - \|\mathcal{H}_k\|_2 > 0$. Applying the $\sin \theta$ theorem (Theorem \ref{sin_theta}) with $\lambda_{\min}(A_k) - \lambda_{\max}(\Lambda_{k,\perp})> 0$, we get \begin{align*} \|(I- \Phat_{\new,k} {\Phat_{\new,k}}') E_{\new} \|_2 &\leq \frac{\|\mathcal{R}_k\|_2}{ \lambda_{\min}(A_k) - \lambda_{\max}(\Lambda_{k,\perp})}\\ & \leq \frac{\|\mathcal{H}_k\|_2}{\lambda_{\min}(A_k) - \|A_{k,\perp}\|_2 - \|\mathcal{H}_k\|_2} \end{align*} Since $\zeta_k = \|(I- \Phat_{\new,k} {\Phat_{\new,k}}') D_{\new}\|_2 = \|(I- \Phat_{\new,k} {\Phat_{\new,k}}') E_{\new} R_{\new} \|_2 \leq \|(I- \Phat_{\new,k} {\Phat_{\new,k}}') E_{\new}\|_2$, the result follows. The last inequality follows because $\|R_\new\|_2 = \|E_\new' D_\new\|_2 \le 1$. \end{proof} \begin{lem} \label{Dnew0_lem} Assume that the assumptions of Theorem \ref{thm1} hold. Conditioned on $\Gamma_{j,k-1}^e$, \begin{align*} \kappa_s(D_{\new}) &\le \frac{\kappa_s(P_{\new}) + \zeta_*^+}{\sqrt{1-\zeta_*^+}} \\ & \le \frac{\kappa_{2s,\new}^+ + 0.0015}{\sqrt{1 - 0.0015}} \approx 0.1516 \le \kappa_s^+. \end{align*} \end{lem} \begin{proof} Recall that $D_\new = D_{\new,0} = (I - \Phat_{j-1}\Phat_{j-1}') P_\new$. Also $D_{\new} \overset{\mathrm{QR}}{=} E_{\new}R_{\new}$. By Lemma \ref{hatswitch}, $\|{R_{\new}}^{-1}\|_2 \leq \frac{1}{\sqrt{1-\zeta_*^+}}$. Then $\kappa_s(D_{\new}) = \kappa_s(E_{\new}) = \max_{|T|\leq s} \|I_T'D_{\new}{R_{\new}}^{-1}\|_2 \leq \max_{|T|\leq s} \|I_T'D_{\new}\|_2\|{R_{\new}}^{-1}\|_2 \leq \frac{\kappa_s(P_{\new}) + \zeta_*}{\sqrt{1-\zeta_*^+}}$. The event $\Gamma_{j,k-1}^e$ implies that $\zeta_* \le \zeta_*^+ \le 0.0015$. Thus, the lemma follows.
\end{proof} \begin{lem}[High probability bounds for each of the terms in the $\zeta_k$ bound (\ref{zetakbnd})]\label{termbnds} Assume the conditions of Theorem \ref{thm1} hold. Also assume that $\mathbf{P}(\Gamma_{j,k-1}^e)>0$ for all $1\leq k \leq K+1$. Then, for all $1 \le k \le K$ \begin{enumerate} \item $\mathbf{P} \left(\lambda_{\min} (A_{k}) \geq \lambda_{\new,k}^- \left(1 -(\zeta_{j,*}^+)^2 - \frac{c \zeta}{12}\right) \big|\egam_{j,k-1}\right) > 1- p_{a,k}(\alpha,\zeta)$ where \begin{align*} p_{a,k}(\alpha,\zeta) &:= c \exp \left(\frac{-\alpha \zeta^2 (\lambda^-)^2}{8 \cdot 24^2 \cdot \min(1.2^{4k} \gamma_{\new}^4,\gamma_*^4)}\right) \\ &\quad + c \exp \left( \frac{-\alpha c^2 \zeta^2(\lambda^-)^2} {8 \cdot 24^2 \cdot 4^2}\right) \end{align*} \item $\mathbf{P}\left(\lambda_{\max}(A_{k,\perp}) \leq \lambda_{\new,k}^- \left( (\zeta_{j,*}^+)^2 f + \frac{c \zeta}{24}\right) \big| \egam_{j,k-1}\right) > 1- p_b(\alpha,\zeta)$ where \[ p_b (\alpha,\zeta) := (n-c) \exp \left(\frac{-\alpha c^2 \zeta (\lambda^-)^2}{8 \cdot 24^2}\right) \] \item $\mathbf{P}\left(\|\mathcal{H}_{k}\|_2 \leq \lambda_{\new,k}^- (b + 0.125c\zeta) \ \big|\egam_{j,k-1}\right) \geq 1 - p_c(\alpha,\zeta)$ where $b$ is as defined in Definition \ref{zetakplus} and \begin{align*} & p_c(\alpha,\zeta) := \\ & n \exp\left(\frac{-\alpha \zeta^2 (\lambda^-)^2}{8 \cdot 24^2 (.0324 \gamma_\new^2 + .0072 \gamma_\new + .0004)^2}\right) + \\ & n \exp\left(\frac{-\alpha \zeta^2 (\lambda^-)^2}{32\cdot 24^2 (.06 \gamma_\new^2 + .0006 \gamma_\new + .4)^2}\right)+ \\ & n \exp\left(\frac{-\alpha \zeta^2 (\lambda^-)^2 \epsilon^2}{32 \cdot 24^2 ( .186 \gamma_\new^2 + .00034 \gamma_\new + 2.3)^2}\right). \end{align*} \end{enumerate} \end{lem} \begin{proof} The proof is quite long and hence is given in Appendix \ref{appendix termbnds}. The first two claims are obtained by simplifying the terms and then appropriately applying the Hoeffding corollaries. 
The third claim first uses Lemma \ref{cslem} to argue that conditioned on $X_{j,k-1} \in \Gamma_{j,k-1}$, $e_t$ satisfies (\ref{etdef0}). It then simplifies the resulting expressions and eventually uses the Hoeffding corollaries. The simplification also uses the bound on $\kappa_s(D_\new)$ from Lemma \ref{Dnew0_lem}. \end{proof} \begin{proof}[Proof of Lemma \ref{zetak}] Lemma \ref{zetak} now follows by combining Lemmas \ref{zetakbnd} and \ref{termbnds} and defining \begin{equation} p_k(\alpha,\zeta) := 1 - p_{a,k}(\alpha,\zeta) - p_b(\alpha,\zeta) - p_c(\alpha,\zeta). \label{pk} \end{equation} \end{proof} \section{Problem Definition and Model Assumptions} \label{probdef} We give the problem definition below followed by the model and then describe the two key assumptions. \subsection{Problem Definition} \label{model} The measurement vector at time $t$, $M_t$, is an $n$ dimensional vector which can be decomposed as \begin{equation} M_t = L_t + S_t \label{problem_defn} \end{equation} Here $S_t$ is a sparse vector with support set size at most $s$ and minimum magnitude of nonzero values at least $S_{\min}$. $L_t$ is a dense but low dimensional vector, i.e. $L_t = P_{(t)} a_t$ where $P_{(t)}$ is an $n \times r_{(t)}$ basis matrix with $r_{(t)} < n$, that changes every so often according to the model given below. We are given an accurate estimate of the subspace in which the initial $t_\train$ $L_t$'s lie, i.e. we are given a basis matrix $\Phat_0$ so that $\|(I-\Phat_0 \Phat_0')P_0 \|_2$ is small. Here $P_0$ is a basis matrix for $\Span({\cal L}_{t_{\train}})$, i.e. $\Span(P_0) = \Span({\cal L}_{t_{\train}})$. Also, for the first $t_{\train}$ time instants, $S_t$ is zero. The goal is \begin{enumerate} \item to estimate both $S_t$ and $L_t$ at each time $t > t_\train$, and \item to estimate $\Span({\cal L}_t)$ every so often, i.e. compute $\Phat_{(t)}$ so that the subspace estimation error, $\SE_{(t)}:=\|(I- \hat{P}_{(t)} \hat{P}_{(t)}')P_{(t)}\|_2$ is small. 
\label{item2} \end{enumerate} We assume a subspace change model that allows the subspace to change at certain change times $t_j$ rather than continuously at each time. It should be noted that this is only a model for reality. In practice there will typically be some changes at every time $t$; however this is difficult to model in a simple fashion. Moreover the analysis for such a model will be a lot more complicated. However, we do allow the variance of the projection of $L_t$ along the subspace directions to change continuously. The projection along the new directions is assumed to be small initially and allowed to gradually increase to a large value (see Sec \ref{slowss}). \begin{sigmodel}[Model on $L_t$] \label{Ltmodel} \ \begin{enumerate} \item We assume that $L_t = P_{(t)} a_t$ with $P_{(t)} = P_j$ for all $t_j \leq t <t_{j+1}$, $j=0,1,2 \cdots J$. Here $P_j$ is an $n \times r_j$ basis matrix with $r_j < \min(n,(t_{j+1} - t_j))$ that changes as $$P_j = [P_{j-1} \ P_{j,\new}]$$ where $P_{j,\new}$ is a $n \times c_{j,\new}$ basis matrix with $P_{j,\new}'P_{j-1} = 0$. Thus $$r_j = \operatorname{rank}(P_j) = r_{j-1} + c_{j,\new}.$$ We let $t_0=0$. Also $t_{J+1}$ can be the length of the sequence or $t_{J+1} = \infty$. This model is illustrated in Figure \ref{model_fig}. \item The vector of coefficients, $a_t:={P_{(t)}}'L_t$, is a zero mean random variable (r.v.) with mutually uncorrelated entries, i.e. $\mathbf{E}[a_t]=0$ and $\mathbf{E}[(a_t)_i (a_t)_j]=0$ for $i \neq j$. \end{enumerate} \end{sigmodel} \begin{figure} \centerline{ \epsfig{file = model_fig.pdf, width = \columnwidth} } \caption{\small{The subspace change model explained in Sec \ref{model}. 
Here $t_0=0$ and $0 < t_\train < t_1$.}} \label{model_fig} \end{figure} \begin{definition} Define the covariance matrix of $a_t$ to be the diagonal matrix $$\Lambda_t: = \mathrm{Cov}[a_t] = \mathbf{E}(a_ta_t').$$ For $t_j \le t < t_{j+1}$, $a_t$ is an $r_j$-length vector which can be split as $$a_t ={P_j}'L_t = \vect{a_{t,*}}{a_{t,\new}}$$ where $a_{t,*}: = {P_{j-1}}'L_t$ and $a_{t,\new}: = {P_{j,\new}}'L_t$. Thus, for this interval, $L_t$ can be rewritten as \[ L_t = \left[ P_{j-1} \ P_{j,\new}\right] \vect{a_{t,*}}{a_{t,\new}} = P_{j-1} a_{t,*} + P_{j,\new} a_{t,\new} \] Also, $\Lambda_t$ can be split as $$\Lambda_t = \left[ \begin{array}{cc} (\Lambda_t)_* & 0 \\ 0 & (\Lambda_t)_\new \\ \end{array} \right] $$ where $(\Lambda_t)_* = \mathrm{Cov}[a_{t,*}] $ and $(\Lambda_t)_\new = \operatorname{Cov}[a_{t,\new}]$ are diagonal matrices. Define \begin{align*} \lambda^- &: = \inf_t \lambda_{\min}(\Lambda_t), \quad \lambda^+:=\sup_t \lambda_{\max} (\Lambda_t),\\ &\text{and}\\ \lambda_\new^- &: = \inf_t \lambda_{\min}((\Lambda_t)_\new), \quad \lambda_\new^+ :=\sup_t \lambda_{\max} ( (\Lambda_t)_\new). \end{align*} Also let \[ f : = \frac{\lambda^+}{\lambda^-} \] and \[ g : = \frac{\lambda_\new^+}{\lambda_\new^-}. \] \end{definition} The above simple model only allows new additions to the subspace and hence the rank of $P_j$ can only grow over time. The ReProCS algorithm designed for this model can be interpreted as a recursive algorithm for solving the robust PCA problem studied in \cite{rpca} and other batch robust PCA works. At time $t$ we estimate the subspace spanned by $L_1,L_2, \dots L_t$. For the above model, the subspace dimension is bounded by $r_0+J c_{\max}$. Thus a bound on $J$ is needed to keep the subspace dimension small at all times. We remove this limitation in Sec \ref{Del_section} where we also allow for subspace deletions and correspondingly design a ReProCS algorithm that does the same thing.
For that algorithm, as we will see, we will not need a bound on the number of changes, $J$, as long as the separation between the subspace change times is allowed to grow logarithmically with $J$ and a clustering assumption holds. Define the following quantities for the sparse part. \begin{definition} Let $T_t :=\{i: \ (S_t)_i \neq 0 \}$ denote the support of $S_t$. Define \[ S_{\min}: = \min_{t> t_\train} \min_{i \in T_t} |(S_t)_i |, \ \text{and} \ s: = \max_t |T_t| \] \end{definition} \subsection{Slow subspace change} \label{slowss} By slow subspace change we mean all of the following. First, the delay between consecutive subspace change times is large enough, i.e., for a $d$ large enough, \begin{eqnarray} t_{j+1} - t_j \ge d \label{delay} \end{eqnarray} Second, the magnitude of the projection of $L_t$ along the newly added directions, $a_{t,\new}$, is initially small but can increase gradually. We model this as follows. Assume that for an $\alpha > 0$ \footnote{As we will see in the algorithm $\alpha$ is the number of previous frames used to get a new estimate of $P_{j,\new}$.} the following holds \begin{eqnarray} \|a_{t,\new}\|_{\infty} \le \min\Big(v^{\frac{t-t_j}{\alpha}-1} \gamma_\new,\gamma_*\Big) \label{atnew_inc} \end{eqnarray} when $ t \in [t_j, t_{j+1}-1]$ for a $v>1$ but not too large and with $\gamma_\new < \gamma_* \ \text{and} \ \gamma_\new < S_{\min}$. Clearly, the above assumption implies that \[ \|a_{t,\new}\|_{\infty} \le \gamma_{\new,k}:= \min(v^{k-1} \gamma_\new,\gamma_*) \] for all $t \in [t_j + (k-1) \alpha, t_{j}+k\alpha-1]$. This assumption is verified for real video data in Sec. \ref{model_verify}. Third, the number of newly added directions is small, i.e. $c_{j,\new} \le c_{\max} \ll r_{0}$. This is also verified in Sec. 
\ref{model_verify}. \begin{remark}[Large $f$] \label{large_f} Since our problem definition allows large noise, $L_t$, but assumes slow subspace change, the maximum condition number of $\operatorname{Cov}[L_t]$, which is bounded by $f$, cannot be bounded by a small value. The reason is as follows. Slow subspace change implies that the projection of $L_t$ along the new directions is initially small, i.e. $\gamma_\new$ is small. Since $\lambda^- \le \gamma_\new$, this means that $\lambda^-$ is small. Since $\mathbf{E}[\|L_t\|^2] \le r_{\max} \lambda^+ $ and $r_{\max}$ is small (low-dimensional), large $L_t$ means that $\lambda^+$ needs to be large. As a result $f=\lambda^+/\lambda^-$ cannot be upper bounded by a small value. \end{remark} \subsection{Measuring denseness of a matrix and its relation with RIC} \label{denseness} Before we can state the denseness assumption, we need to define the denseness coefficient. \begin{definition}[denseness coefficient]\label{subspace_kappa} For a matrix or a vector $B$, define \begin{equation} \kappa_s(B)=\kappa_s(\Span(B)) : = \max_{|T| \le s} \|{I_T}' \mathrm{basis}(B)\|_2 \end{equation} where $\|.\|_2$ is the vector or matrix $\ell_2$-norm. \end{definition} Clearly, $\kappa_s(B) \le 1$. First consider an $n$-length vector $B$. Then $\kappa_s$ measures the denseness (non-compressibility) of $B$. A small value indicates that the entries in $B$ are spread out, i.e. it is a dense vector. A large value indicates that it is compressible (approximately or exactly sparse). The worst case (largest possible value) is $\kappa_s(B)=1$ which indicates that $B$ is an $s$-sparse vector. The best case is $\kappa_s(B) = \sqrt{s/n}$ and this will occur if each entry of $B$ has the same magnitude. Similarly, for an $n \times r$ matrix $B$, a small $\kappa_s$ means that most (or all) of its columns are dense vectors.
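For intuition, the denseness coefficient can be computed by brute force for small examples. The sketch below is our own illustrative code (the function name and the tiny dimensions are hypothetical); it verifies the two extreme cases noted above: $\kappa_s = \sqrt{s/n}$ for a vector with equal-magnitude entries, and $\kappa_s = 1$ for an $s$-sparse vector.

```python
import numpy as np
from itertools import combinations

def kappa_s(B, s):
    """Brute-force denseness coefficient of span(B):
    max over all supports |T| <= s of || I_T' basis(B) ||_2.
    Only feasible for small n, since it enumerates all size-s supports."""
    Q, _ = np.linalg.qr(B)                 # orthonormal basis for span(B)
    n = Q.shape[0]
    return max(np.linalg.norm(Q[list(T), :], 2)
               for T in combinations(range(n), s))

n, s = 8, 2
# Maximally dense vector (equal-magnitude entries): best case sqrt(s/n).
b = np.ones((n, 1)) / np.sqrt(n)
assert np.isclose(kappa_s(b, s), np.sqrt(s / n))
# s-sparse vector: worst case kappa_s = 1.
e = np.zeros((n, 1)); e[0] = 1.0
assert np.isclose(kappa_s(e, s), 1.0)
```

The QR step implements the $\mathrm{basis}(B)$ operation in the definition; the enumeration over supports $T$ realizes the $\max_{|T| \le s}$ exactly rather than via a bound.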
\begin{remark}\label{kapparemark} The following facts should be noted about $\kappa_s(.)$: \begin{enumerate} \item For a given matrix $B$, $\kappa_s(B)$ is a non-decreasing function of $s$. \item $\kappa_s([B_1]) \leq \kappa_s([B_1 \ B_2])$, i.e., adding columns cannot decrease $\kappa_s$. \item A bound on $\kappa_s(B)$ is $\kappa_s(B) \le \sqrt{s} \kappa_1(B)$. This follows because $\|B\|_2 \leq \big\| \big[\|b_1\|_2 \dots \|b_r\|_2 \big] \big\|_2$ where $b_i$ is the $i^{\text{th}}$ column of $B$. \end{enumerate} \label{dense_remark} \end{remark} The lemma below relates the denseness coefficient of a basis matrix $P$ to the RIC of $I-PP'$. The proof is in the Appendix. \begin{lem}\label{delta_kappa} For an $n \times r$ basis matrix $P$ (i.e. $P$ satisfying $P'P=I$), $$\delta_s(I-PP') = \kappa_s^2 (P).$$ \end{lem} In other words, if $P$ is dense enough (small $\kappa_s$), then the RIC of $I-PP'$ is small. In this work, we assume an upper bound on $\kappa_{2s}(P_{j})$ for all $j$, and a tighter upper bound on $\kappa_{2s}(P_{j,\new})$, i.e., there exist $\kappa_{2s,*}^+<1$ and a $\kappa_{2s,\new}^+ < \kappa_{2s,*}^+$ such that \begin{align} \max_{j} \kappa_{2s}(P_{j-1}) \leq \kappa_{2s,*}^+ \label{kappa plus}\\ \max_{j} \kappa_{2s}(P_{j,\new}) \leq \kappa_{2s,\new}^+ \label{kappa new plus} \end{align} Additionally, we also assume denseness of another matrix, $D_{j,\new,k}$, whose columns span the currently unestimated part of $\Span(P_{j,\new})$ (see Theorem \ref{thm1}). The denseness coefficient $\kappa_s(B)$ is related to the denseness assumption required by PCP \cite{rpca}. That work uses $\kappa_1(B)$ to quantify denseness. \section{Recursive Projected CS (ReProCS) and its Performance Guarantees} \label{reprocs_sec} In this section we introduce the ReProCS algorithm and state the performance guarantee for it. We begin by stating the result in Section \ref{result}, and then describe and explain the algorithm in Section \ref{basic_rep}.
In Section \ref{proj PCA} we describe the projection-PCA algorithm that is used in the ReProCS algorithm. The assumptions used by the result are discussed in Section \ref{discuss_add}. \input{theorem} The above result says the following. Consider Algorithm \ref{reprocs}. Assume that the initial subspace error is small enough. If the algorithm parameters are appropriately set, if slow subspace change holds, if the subspaces are dense, if the condition number of $\mathrm{Cov}[a_{t,\new}]$ is small enough, and if the currently unestimated part of the newly added subspace is dense enough (this is an assumption on the algorithm estimates), then, w.h.p., we will get exact support recovery at all times. Moreover, the sparse recovery error will always be bounded by $0.18\sqrt{c} \gamma_\new$ plus a constant times $\sqrt{\zeta}$. Since $\zeta$ is very small, $\gamma_\new < S_{\min}$, and $c$ is also small, the normalized reconstruction error for recovering $S_t$ will be small at all times. In the second conclusion, we bound the subspace estimation error, $\SE_{(t)}$. When a subspace change occurs, this error is initially bounded by one. The above result shows that, w.h.p., with each projection PCA step, this error decays exponentially and falls below $0.01\sqrt{\zeta}$ within $K$ projection PCA steps. The third conclusion shows that, with each projection PCA step, w.h.p., the sparse recovery error as well as the error in recovering $L_t$ also decay in a similar fashion. As we explain in Section \ref{discuss_add}, the most important limitation of our result is that it requires an assumption on $D_{\new,k}$ and $Q_{\new,k}$ which depend on algorithm estimates. Moreover, it studies an algorithm that requires knowledge of model parameters. 
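The exact-support-recovery mechanism behind the first conclusion can be illustrated with toy numbers (all values below are hypothetical, chosen only to satisfy the theorem's conditions): if every entry of the CS estimate is within $7\xi_0$ of $S_t$, and the threshold satisfies $7\xi_0 \le \omega \le S_{\min} - 7\xi_0$, then thresholding recovers the support exactly.

```python
import numpy as np

# Toy illustration of the thresholding argument from the projected CS step.
n, xi0, S_min = 50, 0.01, 0.5          # chosen so that S_min > 14 * xi0
rng = np.random.default_rng(1)
S = np.zeros(n)
T = [3, 17, 40]                        # true support T_t
S[T] = S_min
# CS estimate with per-entry error strictly below 7 * xi0
S_cs = S + rng.uniform(-7 * xi0, 7 * xi0, n)
omega = 0.5 * S_min                    # any omega in [7*xi0, S_min - 7*xi0]
T_hat = np.flatnonzero(np.abs(S_cs) > omega)
assert set(T_hat) == set(T)            # exact support recovery
```

Off-support entries stay below $7\xi_0 < \omega$, while on-support entries stay above $S_{\min} - 7\xi_0 \ge \omega$, so the thresholded support matches $T_t$ for every admissible $\omega$.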
\subsection{Projection-PCA algorithm for ReProCS} \label{proj PCA} Given a data matrix $\mathcal{D}$, a basis matrix $P$ and an integer $r$, projection-PCA (proj-PCA) applies PCA on $\mathcal{D}_{\text{proj}}:=(I-PP')\mathcal{D}$, i.e., it computes the top $r$ eigenvectors (the eigenvectors with the largest $r$ eigenvalues) of $\frac{1}{\alpha} \mathcal{D}_{\text{proj}} {\mathcal{D}_{\text{proj}}}'$. Here $\alpha$ is the number of column vectors in $\mathcal{D}$. This is summarized in Algorithm \ref{algo_pPCA}. If $P =[.]$, then projection-PCA reduces to standard PCA, i.e. it computes the top $r$ eigenvectors of $\frac{1}{\alpha} \mathcal{D} {\mathcal{D}}'$. The reason we need the projection-PCA algorithm in step 3 of Algorithm \ref{reprocs} is that the error $e_t = \Lhat_t - L_t = S_t - \Shat_t$ is correlated with $L_t$, and the maximum condition number of $\operatorname{Cov}(L_t)$, which is bounded by $f$, cannot be bounded by a small value (see Remark \ref{large_f}). This issue is explained in detail in Appendix \ref{projpca}. Most other works that analyze standard PCA, e.g. \cite{nadler} and references therein, do not face this issue because they assume that the noise/error is uncorrelated with the true data vector. With this assumption, one only needs to increase the PCA data length $\alpha$ to deal with the larger condition number. We should mention that the idea of projecting perpendicular to a partly estimated subspace has been used in other contexts in past work \cite{PP_PCA_Li_Chen, mccoy_tropp11}.
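For concreteness, a minimal numpy sketch of the proj-PCA subroutine (the function name and interface are ours; Algorithm \ref{algo_pPCA} is the authoritative specification):

```python
import numpy as np

def proj_pca(D, P, r):
    """Q <- proj-PCA(D, P, r): the top-r eigenvectors of
    (1/alpha) D_proj D_proj' where D_proj = (I - P P') D.
    If P has no columns, this reduces to standard PCA on D."""
    alpha = D.shape[1]
    D_proj = D - P @ (P.T @ D) if P.size else D  # (I - P P') D
    # eigh returns eigenvalues in ascending order, so keep the last r
    # eigenvectors, reordered so the largest eigenvalue comes first.
    _, V = np.linalg.eigh(D_proj @ D_proj.T / alpha)
    return V[:, ::-1][:, :r]
```

As a sanity check, calling `proj_pca` with an empty $P$ on data lying exactly in an $r$-dimensional subspace recovers a basis for that subspace; with $P$ equal to part of the true basis, it recovers the remaining directions.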
\begin{algorithm} \caption{projection-PCA: $Q \leftarrow \text{proj-PCA}(\mathcal{D},P,r)$}\label{algo_pPCA} \begin{enumerate} \item Projection: compute $\mathcal{D}_{\text{proj}} \leftarrow (I - P P') \mathcal{D}$ \item PCA: compute $\frac{1}{\alpha} \mathcal{D}_{\text{proj}}{\mathcal{D}_{\text{proj}}}' \overset{EVD}{=} \left[ \begin{array}{cc}Q & Q_{\perp} \\\end{array}\right] \left[ \begin{array}{cc} \Lambda & 0 \\0 & \Lambda_{\perp} \\\end{array}\right] \left[ \begin{array}{c} Q' \\ {Q_{\perp}}'\\\end{array}\right]$ where $Q$ is an $n \times r$ basis matrix and $\alpha$ is the number of columns in $\mathcal{D}$. \end{enumerate} \end{algorithm} \subsection{Recursive Projected CS (ReProCS)} \label{basic_rep} \begin{algorithm*}[ht] \caption{Recursive Projected CS (ReProCS)}\label{reprocs} {\em Parameters: } algorithm parameters: $\xi$, $\omega$, $\alpha$, $K$, model parameters: $t_j$, $c_{j,\new}$ \\ (set as in Theorem \ref{thm1} ) \\ {\em Input: } $M_t$, {\em Output: } $\Shat_t$, $\Lhat_t$, $\Phat_{(t)}$ \\ Initialization: Compute $\Phat_0 \leftarrow$ proj-PCA$\left( [L_{1},L_{2},\cdots,L_{t_{\train}}], [.], r_0 \right)$ where $r_0 = \operatorname{rank}([L_{1},L_{2},\cdots,L_{t_{\train}}])$. \\ Set $\Phat_{(t)} \leftarrow \Phat_0$, $j \leftarrow 1$, $k\leftarrow 1$. For $t > t_{\train}$, do the following: \begin{enumerate} \item Estimate $T_t$ and $S_t$ via Projected CS: \begin{enumerate} \item \label{othoproj} Nullify most of $L_t$: compute $\Phi_{(t)} \leftarrow I-\Phat_{(t-1)} {\Phat_{(t-1)}}'$, compute $y_t \leftarrow \Phi_{(t)} M_t$ \item \label{Shatcs} Sparse Recovery: compute $\hat{S}_{t,\cs}$ as the solution of $\min_{x} \|x\|_1 \ s.t. 
\ \|y_t - \Phi_{(t)} x\|_2 \leq \xi$ \item \label{That} Support Estimate: compute $\hat{T}_t = \{i: \ |(\hat{S}_{t,\cs})_i| > \omega\}$ \item \label{LS} LS Estimate of $S_t$: compute $(\hat{S}_t)_{\hat{T}_t}= ((\Phi_t)_{\hat{T}_t})^{\dag} y_t, \ (\hat{S}_t)_{\hat{T}_t^{c}} = 0$ \end{enumerate} \item Estimate $L_t$: $\hat{L}_t = M_t - \hat{S}_t$. \item \label{PCA} Update $\Phat_{(t)}$: $K$ projection-PCA steps. \begin{enumerate} \item If $t = t_j + k\alpha-1$, \begin{enumerate} \item $\Phat_{j,\new,k} \leftarrow$ proj-PCA$\left(\left[\hat{L}_{t_j+(k-1)\alpha}, \dots, \hat{L}_{t_j+k\alpha-1}\right],\Phat_{j-1},c_{j,\new}\right)$. \item set $\Phat_{(t)} \leftarrow [\Phat_{j-1} \ \Phat_{j,\new,k}]$; increment $k \leftarrow k+1$. \end{enumerate} Else \begin{enumerate} \item set $\Phat_{(t)} \leftarrow \Phat_{(t-1)}$. \end{enumerate} \item If $t = t_j + K \alpha - 1$, then set $\Phat_{j} \leftarrow [\Phat_{j-1} \ \Phat_{j,\new,K}]$. Increment $j \leftarrow j + 1$. Reset $k \leftarrow 1$. \end{enumerate} \item Increment $t \leftarrow t + 1$ and go to step 1. \end{enumerate} \end{algorithm*} \begin{figure*}[!t] \centerline{ \includegraphics[width =15cm]{algo_fig.pdf} } \caption{\small{The $K$ projection-PCA steps. }} \label{algo_fig} \end{figure*} \section{ReProCS with Cluster PCA} \label{Del_section} The ReProCS approach studied so far is designed under the assumption that the subspace in which $L_t$ lies can only grow over time. In practice, the dimension of this subspace typically remains roughly constant. A simple way to model this is to assume that, at every change time $t_j$, some new directions can get added and some directions from the existing subspace can get deleted, and to assume an upper bound on the difference between the total number of added and deleted directions. We specify this model next.
\begin{sigmodel} \label{del model} Assume that $L_t =P_{(t)} a_t$ where $P_{(t)} = P_j$ for all $t_j \leq t <t_{j+1}$, $j=0,1,2 \cdots J$, $P_j$ is an $n \times r_j$ basis matrix with $r_j \ll \min(n,(t_{j+1} - t_j))$. We let $t_0=0$ and $t_{J+1}$ equal the sequence length. This can be infinity also. \begin{enumerate} \item At the change times, $t_j$, $P_j$ changes as \[ P_j = [(P_{j-1}R_j\setminus P_{j,\old}) \quad P_{j,\new}] \] Here, $R_j$ is a rotation matrix, $P_{j,\new}$ is an $n \times c_{j,\new}$ basis matrix with $P_{j,\new}'P_{j-1} = 0$ and $P_{j,\old}$ contains $c_{j,\old}$ columns of $P_{j-1}R_j$. Thus $r_j = r_{j-1} + c_{j,\new} - c_{j,\old}$. Also, $0 < t_{\train} \le t_1$. This model is illustrated in Figure \ref{add_del_model}. \item There exist constants $c_{\max}$ and $c_{\text{dif}}$ such that $0 \le c_{j,\new} \leq c_{\max}$ and $\sum_{i=1}^{j} (c_{i,\new} - c_{i,\old}) \leq c_{\text{dif}}$ for all $j$. Thus, $r_j = r_0+\sum_{i=1}^{j} (c_{i,\new} - c_{i,\old}) \leq r_{\max}: = r_0 + c_{\text{dif}}$, i.e., the rank of $P_j$ is upper bounded by $r_{\max}$. \end{enumerate} \end{sigmodel} \begin{figure}[t!] \centerline{ \epsfig{file = add_del_model_R, width =\columnwidth} } \caption{\small{The subspace change model given in Signal Model \ref{del model}. Here $t_0=0$. }} \label{add_del_model} \end{figure} The ReProCS algorithm (Algorithm \ref{reprocs}) still applies for the above more general model. We can conclude the following for it. \begin{corollary} \label{cor_rep} Consider Algorithm \ref{reprocs} for the model given above. The result of Theorem \ref{thm1} applies with the following change: we also need $\kappa_{2s}([P_0, P_{1,\new}, \dots , P_{J-1,\new}]) \le 0.3$. \end{corollary} Because Algorithm \ref{reprocs} never deletes directions, the rank of $\Phat_{(t)}$ keeps increasing with every subspace change time (even though the rank of $P_{(t)}$ is now bounded by $r_0 + c_{\text{dif}}$). 
As a result, the performance guarantee above still requires a bound on $J$ that is imposed by the denseness assumption. In this section, we address this limitation by re-estimating the current subspace after the newly added directions have been accurately estimated. This helps to ``delete'' $\Span(P_\old)$ from the subspace estimate. For the resulting algorithm, as we will see, we do not need a bound on the number of changes, $J$, as long as the separation between the subspace change times is allowed to grow logarithmically with $J$. One simple way to re-estimate the current subspace would be by standard PCA: at $t=\tilde{t}_j + \tilde{\alpha}-1$, compute $\Phat_j \leftarrow \text{proj-PCA}([\Lhat_t : t \in \tilde{\mathcal{I}}_{j,1}], [.], r_j)$ and let $\Phat_{(t)} \leftarrow \Phat_j$. Using the $\sin \theta$ theorem \cite{davis_kahan} and the matrix Hoeffding inequality \cite{tail_bound}, and using the procedure used earlier to analyze projection PCA, it can be shown that, as long as $f$, a bound on the maximum condition number of $\operatorname{Cov}[L_t]$, is small enough, doing this is guaranteed to give an accurate estimate of $\Span(P_j)$. However, as explained in Remark \ref{large_f}, $f$ cannot be small because our problem definition allows large noise, $L_t$, but assumes slow subspace change. In other works that analyze standard PCA, e.g. \cite{nadler} and references therein, the large condition number does not cause a problem because they assume that the error ($e_t$ in our case) in the observed data vector ($\Lhat_t$) is uncorrelated with the true data vector ($L_t$). Under this assumption, one only needs to increase the PCA data length $\alpha$ to deal with larger condition numbers. However, in our case, because $e_t$ is correlated with $L_t$, this strategy does not work. This issue is explained in detail in Appendix \ref{projpca}.
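The issue can be seen in a toy simulation (ours, not part of the analysis): when the error added to $L_t$ is a fixed linear function of $L_t$, the top eigenvector of the sample covariance stays biased no matter how much data is averaged, whereas with independent noise the error keeps shrinking as the data length grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n, b = 50, 0.3          # ambient dimension, error magnitude
u = np.eye(n)[:, 0]     # true principal direction of L_t
v = np.eye(n)[:, 1]     # direction of the error, orthogonal to u

def top_evec_error(alpha, correlated):
    a = rng.standard_normal(alpha)      # coefficients of L_t along u
    L = np.outer(u, a)
    if correlated:
        e = b * np.outer(v, a)          # e_t is a fixed multiple of L_t
    else:
        e = b * np.outer(v, rng.standard_normal(alpha))  # independent noise
    Lhat = L + e
    _, V = np.linalg.eigh(Lhat @ Lhat.T / alpha)
    uhat = V[:, -1]                     # top eigenvector estimate of u
    return np.linalg.norm(u - np.sign(u @ uhat) * uhat)

for alpha in (100, 10_000):
    print(alpha, top_evec_error(alpha, correlated=True),
          top_evec_error(alpha, correlated=False))
# correlated error: stays near |u - (u + b v)/sqrt(1 + b^2)| ~ 0.29
# for every alpha; independent noise: keeps shrinking with alpha
```

In the correlated case the sample covariance converges to $(u+bv)(u+bv)'$, so no amount of averaging removes the bias; this is why the analysis cannot simply increase $\alpha$.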
In this section, we introduce a generalization of the above strategy called cluster-PCA that removes the requirement that $f$ be small, but instead only requires that the eigenvalues of $\text{Cov}(L_t)$ be clustered for the times when the changed subspace has stabilized. Under this assumption, cluster-PCA recovers one cluster of entries of $P_j$ at a time by using an approach that generalizes the projection PCA step developed earlier. We first explain the clustering assumption in Sec \ref{eigencluster} below and then give the cluster-PCA algorithm. \subsection{Clustering assumption}\label{eigencluster} For positive integers $K$ and $\alpha$, let $\tilde{t}_j := t_j + K \alpha$. We set their values in Theorem \ref{thm2}. Recall from the model on $L_t$ and the slow subspace change assumption that new directions, $P_{j,\new}$, get added at $t=t_j$ and initially, for the first $\alpha$ frames, the projection of $L_t$ along these directions is small (and thus their variances are small), but can increase gradually. It is fair to assume that within $K \alpha$ frames, i.e. by $t=\tilde{t}_j$, the variances along these new directions have stabilized and do not change much for $t \in [\tilde{t}_j, t_{j+1}-1]$. It is also fair to assume that the same is true for the variances along the existing directions, $P_{j-1}$. In other words, we assume that the matrix $\Lambda_t$ is either constant or does not change much during this period. Under this assumption, its eigenvalues (diagonal entries) can be clustered into a few clusters such that the distance between consecutive clusters is large and the distance between the smallest and largest element of each cluster is small. We make this precise below. \begin{ass} \label{clusterass} Assume the following.
\begin{enumerate} \item Either $\Lambda_t = \Lambda_{\tilde{t}_j}$ for all ${t \in [\tilde{t}_j, t_{j+1}-1]}$ or $\Lambda_t$ changes very little during this period so that for each $i=1,2,\cdots,r_j$, $\min_{t \in [\tilde{t}_j, t_{j+1}-1]} \lambda_i(\Lambda_t) \ge \max_{t \in [\tilde{t}_j, t_{j+1}-1]} \lambda_{i+1}(\Lambda_t)$. \item Let $ \mathcal{G}_{j,(1)}, \mathcal{G}_{j,(2)}, \cdots, \mathcal{G}_{j,(\vartheta_j)} $ be a partition of the index set $\{1,2, \dots r_j\}$ so that $\min_{i \in \mathcal{G}_{j,(k)}} \min_{t \in [\tilde{t}_j, t_{j+1}-1]} \lambda_i(\Lambda_t) > \max_{i \in \mathcal{G}_{j,(k+1)}} \max_{t \in [\tilde{t}_j, t_{j+1}-1]} \lambda_i(\Lambda_t)$, i.e., the first group/cluster contains the largest eigenvalues, the second one the next largest set, and so on (see Figure \ref{clustering_diag}). Let \begin{enumerate} \item $G_{j,k} := (P_j)_{ \mathcal{G}_{j,(k)} }$ be the corresponding cluster of eigenvectors, then $\operatorname{span}(P_j) = \operatorname{span}([G_{j,1},G_{j,2},\cdots,G_{j,\vartheta_j}])$; \item $\tilde{c}_{j,k} := |\mathcal{G}_{j,(k)}|$ be the number of elements in $\mathcal{G}_{j,(k)}$, then $\sum_{k=1}^{\vartheta_j} \tilde{c}_{j,k} = r_j$; \\ $\tilde{c}_{\min} := \min_j \min_{k = 1,2,\cdots,\vartheta_j} \tilde{c}_{j,k}$ \item ${\lambda_{j,k}}^- := \min_{i\in \mathcal{G}_{j,(k)} } \min_{t\in [\tilde{t}_j, t_{j+1}-1]} \lambda_i (\Lambda_t)$, ${\lambda_{j,k}}^+ := \max_{i \in \mathcal{G}_{j,(k)} } \max_{t\in [\tilde{t}_j, t_{j+1}-1]} \lambda_i (\Lambda_t)$ and ${\lambda_{j,\vartheta_j+1} }^+:= 0$; \item $\tilde{g}_{j,k} := {\lambda_{j,k}}^+ / {\lambda_{j,k}}^- $ (notice that $\tilde{g}_{j,k} \ge 1$); \item $\tilde{h}_{j,k} := {\lambda_{j,k+1}}^+ / {\lambda_{j,k}}^-$ (notice that $\tilde{h}_{j,k} < 1$); \item $\tilde{g}_{\max} := \max_j \max_{k = 1,2,\cdots,\vartheta_j} \tilde{g}_{j,k}$, \\ $\tilde{h}_{\max} := \max_j \max_{k = 1,2,\cdots,\vartheta_j} \tilde{h}_{j,k}$, \item $\vartheta_{\max} := \max_j \vartheta_j$
\end{enumerate} We assume that $\tilde{g}_{\max}$ is small enough (the distance between the smallest and largest eigenvalues of a cluster is small) and $\tilde{h}_{\max}$ is small enough (the distance between consecutive clusters is large). We quantify this in Theorem \ref{thm2}. \end{enumerate} \end{ass} \begin{remark} We clarify the following point. The above assumption still allows the newly added eigenvalues to become large and hence still allows the subspace of $L_t$ to change significantly over time. The above requires the covariance matrix of $L_t$ to be constant or nearly constant only for the time between $\tilde{t}_j:= t_j + K \alpha$ and the next change time, $t_{j+1}$, and not for the first $K \alpha$ frames. Slow subspace change assumes that the projection of $L_t$ along the new directions is initially small for the first $\alpha$ frames but then can increase gradually over the next $K-1$ intervals of duration $\alpha$. The variance along the new directions can increase by as much as $1.2^{2K}$ times the initial variance. Thus by $t=\tilde{t}_j=t_j+K \alpha$, the variances along the new directions can have already increased to large enough values. \\ We can allow the variances to increase for even longer with the following simple change: re-define $\tilde{t}_j$ as $\tilde{t}_j:= t_{j+1} - \vartheta_j \tilde\alpha$ in both the clustering assumption and the algorithm. With this redefinition, we will be doing cluster-PCA at the very end of the current subspace interval. \\ Lastly, note that the projection along the new directions can also increase further in later subspace change periods. \end{remark} \begin{figure}[t!] \centerline{ \epsfig{file = clustering_diag_2.pdf, width = \columnwidth} } \caption{\small{We illustrate the clustering assumption. Assume $\Lambda_t = \Lambda_{\tilde{t}_j}$.
}} \label{clustering_diag} \end{figure} \subsection{The ReProCS with Cluster PCA Algorithm} ReProCS-cPCA is summarized in Algorithm \ref{ReProCS_del}. It uses the following definition. \begin{definition}\label{defn_intervals} Let $\tilde{t}_j := t_j + K\alpha$. Define the following time intervals \begin{enumerate} \item $\mathcal{I}_{j,k}:= [t_j + (k-1)\alpha, t_j + k\alpha-1]$ for $k=1,2,\cdots,K$. \item $\tilde{\mathcal{I}}_{j,k} := [\tilde{t}_j + (k-1) \tilde{\alpha}, \tilde{t}_j + k \tilde{\alpha}-1]$ for $k = 1,2,\cdots, \vartheta_j$. \item $\tilde{\mathcal{I}}_{j,\vartheta_j+1} := [\tilde{t}_j + \vartheta_j \tilde{\alpha}, t_{j+1}-1]$. \end{enumerate} \end{definition} \begin{algorithm*}[t] \caption{Recursive Projected CS with cluster-PCA (ReProCS-cPCA)}\label{ReProCS_del} {\bf Parameters: } algorithm parameters: $\xi$, $\omega$, $\alpha$, $\tilde{\alpha}$, $K$, model parameters: $t_j$, $c_{j,\new}$, $\vartheta_j$ and $\tilde{c}_{j,i}$ \\ {\bf Input: } $n \times 1$ vector, $M_t$, and $n \times r_0$ basis matrix $\hat{P}_0$. {\bf Output: } $n \times 1$ vectors $\Shat_t$ and $\Lhat_t$, and $n \times r_{(t)}$ basis matrix $\Phat_{(t)}$. \\ {\bf Initialization: } Let $\Phat_{(t_\train)} \leftarrow \Phat_0$. Let $j \leftarrow 1$, $k\leftarrow 1$. For $t > t_{\train}$, do the following: \begin{enumerate} \item {\bf Estimate $T_t$ and $S_t$ via Projected CS: } \begin{enumerate} \item \label{othoproj} Nullify most of $L_t$: compute $\Phi_{(t)} \leftarrow I-\Phat_{(t-1)} {\Phat_{(t-1)}}'$, $y_t \leftarrow \Phi_{(t)} M_t$ \item \label{Shatcs} Sparse Recovery: compute $\hat{S}_{t,\cs}$ as the solution of $\min_{x} \|x\|_1 \ s.t. \ \|y_t - \Phi_{(t)} x\|_2 \leq \xi$ \item \label{That} Support Estimate: compute $\hat{T}_t = \{i: \ |(\hat{S}_{t,\cs})_i| > \omega\}$ \item \label{LS} LS Estimate of $S_t$: compute $(\hat{S}_t)_{\hat{T}_t}= ((\Phi_t)_{\hat{T}_t})^{\dag} y_t, \ (\hat{S}_t)_{\hat{T}_t^{c}} = 0$ \end{enumerate} \item {\bf Estimate $L_t$. 
} $\hat{L}_t = M_t - \hat{S}_t$. \item \label{PCA} {\bf Update $\Phat_{(t)}$}: \begin{enumerate} \item If $t \neq t_j + q\alpha-1$ for any $q=1,2, \dots K$ and $t \neq t_j + K \alpha + \vartheta_j \tilde{\alpha} -1$, \begin{enumerate} \item set $\Phat_{(t)} \leftarrow \Phat_{(t-1)}$ \end{enumerate} \item {\bf Addition: Estimate $\Span(P_{j,\new})$ iteratively using proj-PCA: } If $t = t_j + k\alpha-1$, \begin{enumerate} \item $\Phat_{j,\new,k} \leftarrow \text{proj-PCA} ([\hat{L}_{t_j+(k-1)\alpha}, \dots , \hat{L}_{t_j + k\alpha - 1}], \Phat_{j-1}, c_{j,\new})$ \item set $\Phat_{(t)} \leftarrow [\Phat_{j-1} \ \Phat_{j,\new,k}]$. \item If $k=K$, reset $k \leftarrow 1$; else increment $k \leftarrow k+1$. \end{enumerate} \item {\bf Deletion: Estimate $\Span(P_j)$ by cluster-PCA:} If $t= t_j + K \alpha + \vartheta_j \tilde{\alpha} -1$, \begin{enumerate} \item set $\hat{G}_{j,0} \leftarrow [.]$ \item For $i = 1,2,\cdots, \vartheta_j$, \begin{itemize} \item $\hat{G}_{j,i} \leftarrow \text{proj-PCA}( [\hat{L}_{\tilde{t}_j+(i-1)\tilde\alpha}, \dots , \hat{L}_{\tilde{t}_j+i\tilde\alpha -1}], [\hat{G}_{j,1},\hat{G}_{j,2}, \dots, \hat{G}_{j,i-1}], \tilde{c}_{j,i})$ \end{itemize} End for \item set $\Phat_j \leftarrow [\hat{G}_{j,1},\cdots, \hat{G}_{j,\vartheta_j}]$ and set $\Phat_{(t)} \leftarrow \Phat_{j}$. \item increment $ j \leftarrow j+1$. \end{enumerate} \end{enumerate} \end{enumerate} \end{algorithm*} Steps 1, 2, 3a and 3b of ReProCS-cPCA are the same as in Algorithm \ref{reprocs}. As shown earlier, within $K$ proj-PCA updates ($K$ chosen as given in Theorem \ref{thm2}), $\|e_t\|_2$ and the subspace error, $\SE_{(t)}$, drop down to a constant times ${\zeta}$. In particular, if at $t=t_{j}-1$, $\SE_{(t)} \le r \zeta$, then at $t= \tilde{t}_j:=t_j + K \alpha$, we can show that $\SE_{(t)} \le (r + c_{\max}) \zeta$. Here $r:=r_{\max} = r_0 + c_{\text{dif}}$. To bring $\SE_{(t)}$ down to $r \zeta$ before $t_{j+1}$, we proceed as follows.
The main idea is to recover one cluster of entries of $P_j$ at a time. For each batch we use a new set of $\tilde\alpha$ frames. The entire procedure is done at $t=\tilde{t}_j + \vartheta_j \tilde\alpha -1$ (since we cannot update $\Phat_{(t)}$ until all clusters are recovered). We proceed as follows. In the first iteration, we use standard PCA to estimate the first cluster, $\Span(G_{j,1})$. In the $k^{th}$ iteration, we apply proj-PCA on $[\hat{L}_{\tilde{t}_j+(k-1)\tilde\alpha}, \dots , \hat{L}_{\tilde{t}_j+k\tilde\alpha -1}]$ with $P \leftarrow [\hat{G}_{j,1}, \hat{G}_{j,2}, \dots \hat{G}_{j,k-1}]$ to estimate $\Span(G_{j,k})$. By modifying the approach used to prove Theorem \ref{thm1}, we can show that since $\tilde{g}_{j,k}$ and $\tilde{h}_{j,k}$ are small enough, $\Span(G_{j,k})$ will be accurately recovered, i.e. $\|(I - \sum_{i=1}^{k} \hat{G}_{j,i} \hat{G}_{j,i}')G_{j,k}\|_2 \le \tilde{c}_{j,k} \zeta$. We do this $\vartheta_j$ times and finally we set $\Phat_j \leftarrow [\hat{G}_{j,1}, \hat{G}_{j,2} \dots \hat{G}_{j,\vartheta_j}]$ and $\Phat_{(t)} \leftarrow \Phat_j$. Thus, at $t=\tilde{t}_j + \vartheta_j \tilde{\alpha}-1$, $\SE_{(t)} \le \sum_{k=1}^{\vartheta_j} \|(I - \sum_{i=1}^{k} \hat{G}_{j,i} \hat{G}_{j,i}') G_{j,k} \|_2 \le \sum_{k=1}^{\vartheta_j} \tilde{c}_{j,k} \zeta = r_j \zeta \le r \zeta$. Under the assumption that $t_{j+1} - t_j \ge K \alpha + \vartheta_{\max} \tilde{\alpha}$, this means that before the next subspace change time, $t_{j+1}$, $\SE_{(t)}$ is below $r \zeta$. \begin{figure*}[t!] \centerline{ \epsfig{file = add_del_proj_pca_diag_2.pdf, width =16cm, height = 5cm} } \caption{\small{A diagram illustrating subspace estimation by ReProCS-cPCA }} \label{add_del_proj_pca_diag2} \end{figure*} We illustrate the ideas of subspace estimation by addition proj-PCA and cluster-PCA in Fig. \ref{add_del_proj_pca_diag2}. 
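The cluster-by-cluster loop described above can be sketched as follows (our own illustration, not the analyzed algorithm, which additionally interleaves this with the sparse recovery steps); the helper operates on given batches of $\Lhat_t$ frames and assumed cluster sizes $\tilde{c}_{j,i}$:

```python
import numpy as np

def cluster_pca(batches, cluster_sizes):
    """Estimate span(P_j) one eigenvalue-cluster at a time.
    batches[i]: the alpha-tilde frames of L_t-hat used for cluster i+1;
    cluster_sizes[i]: the assumed cluster size c-tilde_{j,i+1}."""
    G_hat = []                              # clusters recovered so far
    for D, c in zip(batches, cluster_sizes):
        for G in G_hat:                     # project perp. to recovered part
            D = D - G @ (G.T @ D)
        _, V = np.linalg.eigh(D @ D.T / D.shape[1])
        G_hat.append(V[:, ::-1][:, :c])     # top-c eigenvectors
    return np.hstack(G_hat)                 # [G_hat_1 ... G_hat_vartheta]
```

With two well-separated eigenvalue clusters (for instance standard deviations $10, 9$ and $1, 0.9$ along the four directions), the concatenated estimate spans $\Span(P_j)$ accurately even though the overall condition number $f \approx 123$ is large, since each proj-PCA step only sees the small within-cluster condition number.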
The connection between the proj-PCA done in the addition step and that done in the cluster-PCA (deletion) step is given in Table \ref{tab_diff}. \subsection{Performance Guarantees} \label{perf_g_del} \begin{definition} \label{def alpha del} We need the following definitions for stating the main result. \begin{enumerate} \item We define $\alpha_{\del}(\zeta)$ as \begin{multline*} \alpha_\del(\zeta) := \left\lceil (\log 6 \vartheta_{\max} J + 11 \log n)\cdot \right.\\ \left.\frac{8 \cdot 10^2}{( \zeta \lambda^-)^2} \max( 4.2^2, 4 b_7^2 ) \right\rceil \nonumber \end{multline*} where $b_7 := (\sqrt{r} \gamma_* + \phi^+ \sqrt{\zeta})^2$ and $\phi^+=1.1732$. We choose $\alpha_{\text{del}}$ so that if $\tilde\alpha \geq \alpha_{\text{del}}$, then the conclusions of the theorem hold with probability at least $(1 - 2n^{-10})$. \item Define \begin{align*} &f_{inc}(\tilde{g},\tilde{h},\kappa_{s,e}^+,\kappa_{s,D}^+) := \\ &(r+c) \zeta \Bigg[ \max( 3\kappa_{s,e}^+ \kappa_{s,D}^+ \phi^+ \tilde{g}, \kappa_{s,e}^+ \phi^+ \tilde{h}) \\ & + \big[\kappa_{s,e}^+ \phi^+ + \kappa_{s,e}^+ (1+ 2\phi^+)\frac{r^2\zeta^2}{\sqrt{1-r^2\zeta^2}} \big] \tilde{h} \\ & + \big [\frac{r^2}{r+c}\zeta + 4r \zeta \kappa_{s,e}^+ \phi^+ + 2(r+c) \zeta(1+ {\kappa_{s,e}^+}^2) {\phi^+}^2\big] f \\ & \hspace{2in}+ 0.2 \frac{1}{r+c} \Bigg], \end{align*} \begin{align*} f_{dec}(\tilde{g},\tilde{h},\kappa_{s,e}^+,\kappa_{s,D}^+) & := 1- \tilde{h} - 0.2 \zeta - r^2 \zeta^2 f - r^2 \zeta^2 \\ &\hspace{.5in} - f_{inc}(\tilde{g},\tilde{h},\kappa_{s,e}^+,\kappa_{s,D}^+) \end{align*} Notice that $f_{inc}(.)$ is an increasing function of $\tilde{g},\tilde{h}$ and $f_{dec}(.)$ is a decreasing function of $\tilde{g},\tilde{h}$. \end{enumerate} \end{definition} \begin{theorem} \label{thm2} Consider Algorithm \ref{ReProCS_del}. Let $c:= c_{\max}$ and $r:= r_{\max} = r_0 + c_{\text{dif}}$.
Pick a $\zeta$ that satisfies \[ \zeta \leq \min\left(\frac{10^{-4}}{r^2},\frac{1.5 \times 10^{-4}}{r^2 f},\frac{1}{r^{3}\gamma_*^2}\right) \] Assume that the initial subspace estimate is accurate enough, i.e. $\|(I - \Phat_0 \Phat_0') P_0\| \le r_0 \zeta$. If the following conditions hold: \begin{enumerate} \item All of the conditions of Theorem \ref{thm1} hold with $L_t$ satisfying Signal model \ref{del model}, \item $\tilde{\alpha} \ge \alpha_{\del}(\zeta)$, \item $\min_{j} (t_{j+1} -t_j) > K \alpha + \vartheta_{\max} \tilde{\alpha}$ \item algorithm estimates $\Phat_{j-1}$ and $\Phat_{j,\new,K}$ satisfy \[ \max_j \kappa_s ((I-\hat{P}_{j-1} {\hat{P}_{j-1}}' - \hat{P}_{j,\new,K} {\hat{P}_{j,\new,K}}')P_j) \leq \kappa_{s,e}^+ \] \item {\em (clustered eigenvalues) } Assumption \ref{clusterass} holds with $\tilde{g}_{\max},\tilde{h}_{\max}, \tilde{c}_{\min}$ satisfying $f_{dec}(\tilde{g}_{\max},\tilde{h}_{\max}, \kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta) - \frac{f_{inc}(\tilde{g}_{\max},\tilde{h}_{\max}, \kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta)}{\tilde{c}_{\min} \zeta} > 0$. \end{enumerate} then, with probability at least $1 - 2 n^{-10}$, at all times, $t$, \begin{enumerate} \item $ \That_t = T_t \ \text{and} \ \|e_t\|_2 = \|L_t - \hat{L}_t\|_2 = \|\hat{S}_t - S_t\|_2 \le 0.18 \sqrt{c} \gamma_{\new} + 1.24 \sqrt{\zeta}. $ \item the subspace error, $\SE_{(t)}$ satisfies \begin{eqnarray} &&\SE_{(t)} \leq \nonumber \\ &&\left\{ \begin{array}{ll} 0.6^{k-1} + r \zeta + 0.4 c \zeta & \ \text{if} \ t \in \mathcal{I}_{j,k}, \ k=1,2,\cdots,K\\ (r+c) \zeta & \ \text{if} \ t \in \tilde{\mathcal{I}}_{j,k}, \ k=1,2,\cdots,\vartheta_j \\ r \zeta & \ \text{if} \ t \in \tilde{\mathcal{I}}_{j,\vartheta_j+1} \end{array} \right. 
\nonumber \end{eqnarray} \item the error $e_t = \hat{S}_t - S_t = L_t - \hat{L}_t$ satisfies the following at various times \[ \|e_t\|_2 \le \begin{cases} 1.17 [ 0.15 \cdot 0.72^{k-1} \sqrt{c}\gamma_{\new} + \\ \hspace{.5in} 0.15 \cdot 0.4 c \zeta \sqrt{c} \gamma_* + r \zeta \sqrt{r} \gamma_*] \\ \hspace{.5in} \text{if} \ t \in \mathcal{I}_{j,k}, \ k=1,2,\cdots,K \\ 1.17(r+c) \zeta \sqrt{r} \gamma_* \\ \hspace{.5in} \text{if} \ \ t \in \tilde{\mathcal{I}}_{j,k}, \ k=1,2,\cdots,\vartheta_j \\ 1.17 r\zeta \sqrt{r} \gamma_* \ \ \text{if} \ \ t \in \tilde{\mathcal{I}}_{j,\vartheta_j+1} \end{cases} \nonumber \] \end{enumerate} \end{theorem} \subsection{Special Case when $f$ is small} \label{f_small_sec} If, in a problem, $L_t$ has small magnitude for all times $t$ or if its subspace does not change, then $f$ can be small. In this case, the clustering assumption is not needed; in fact, it trivially holds with $\vartheta_j=1$, $\tilde{c}_{j,1} = r_j$, $\tilde{g}_{\max} =\tilde{g}_{j,1} = f$ and $\tilde{h}_{\max}=\tilde{h}_{j,1} = 0$. Thus, $\vartheta_{\max} =1$. With this, the following corollary holds. \begin{corollary} \label{f_small} Assume that all conditions of Theorem \ref{thm2} hold except the last one (clustering assumption). If $f$ is small enough so that $f_{inc}(f,0,\kappa_{s,e}^+,\kappa_{s,*}^+ + r\zeta) \le f_{dec}(f,0,\kappa_{s,e}^+,\kappa_{s,*}^+ + r\zeta) r_j \zeta$, then, all conclusions of Theorem \ref{thm2} hold.
\end{corollary} \begin{table*} \caption{Comparing and contrasting the addition proj-PCA step and proj-PCA used in the deletion step (cluster-PCA)} \begin{center} \begin{tabular}{|l||l|} \hline {\bf $k^\text{th}$ iteration of addition proj-PCA} & {\bf $k^\text{th}$ iteration of cluster-PCA in the deletion step} \\ \hline done at $t= t_j+k \alpha-1$ & done at $t=t_j + K \alpha + \vartheta_j \tilde\alpha-1$ \\ \hline goal: keep improving estimates of $\Span(P_{j,\new})$ & goal: re-estimate $\Span(P_{j})$ and thus ``delete'' $\Span(P_{j,\old})$ \\ \hline compute $\Phat_{j,\new,k}$ by proj-PCA on $[\hat{L}_t: t\in \mathcal{I}_{j,k}]$ & compute $\hat{G}_{j,k}$ by proj-PCA on $[\hat{L}_t: t\in \tilde{\mathcal{I}}_{j,k}]$ \\ with $P = \hat{P}_{j-1}$ & with $P = \hat{G}_{j,\text{det},k} = [\hat{G}_{j,1}, \cdots, \hat{G}_{j,k-1}]$ \\ \hline start with $\|(I - \Phat_{j-1} {\Phat_{j-1}}')P_{j-1}\|_2 \leq r\zeta$ and $\zeta_{j,k-1} \leq \zeta_{k-1}^+ \le 0.6^{k-1} + 0.4 c \zeta $ & start with $\|(I - \hat{G}_{j,\text{det},k}{\hat{G}_{j,\text{det},k}}')G_{j,\text{det},k}\|_2 \leq r\zeta$ and $\zeta_{j,K} \leq c \zeta$ \\ \hline need small $g$ which is the & need small $\tilde{g}_{\max}$ which is the \\ maximum condition number of $\text{Cov}(P_{j,\new}'L_t)$ & maximum of the maximum condition number of $\text{Cov}(G_{j,k}'L_t)$ \\ \hline no undetected subspace & extra issue: ensure perturbation due to $\Span(G_{j,\text{undet},k})$ is small; \\ & need small $\tilde{h}_{j,k}$ to ensure the above \\ \hline $\zeta_{j,k}$ is the subspace error in estimating $\text{span}(P_{j,\new})$ after the $k^{th}$ step & $\tilde{\zeta}_{j,k}$ is the subspace error in estimating $\text{span}(G_{j,k})$ after the $k^{th}$ step \\ \hline end with $\zeta_{j,k} \leq \zeta_k^+ \leq 0.6^k + 0.4 c\zeta$ w.h.p. & end with $\tilde{\zeta}_{j,k} \leq \tilde{c}_{j,k} \zeta$ w.h.p. \\ \hline stop when $k=K$ with $K$ chosen so that $\zeta_{j,K} \leq c\zeta$ & stop when $k = \vartheta_j$ and $\tilde{\zeta}_{j,k} \leq \tilde{c}_{j,k}\zeta$ for all $k=1,2,\cdots,\vartheta_j$ \\ \hline after $K^{th}$ iteration: $\Phat_{(t)} \leftarrow [\Phat_{j-1} \ \Phat_{j,\new,K}]$ and $SE_{(t)} \leq (r+c)\zeta$ & after $\vartheta_j^{th}$ iteration: $\Phat_{(t)} \leftarrow [\hat{G}_{j,1},\cdots, \hat{G}_{j,\vartheta_j}]$ and $SE_{(t)} \leq r\zeta$ \\ \hline \end{tabular} \end{center} \label{tab_diff} \end{table*} \section{Model Verification and Simulation Experiments} \label{model_expts} We first discuss model verification for real data in Sec \ref{model_verify}. We then describe simulation experiments in Sec \ref{sims}. \subsection{Model Verification for real data} \label{model_verify} We experimented with two background image sequence datasets. The first was a video of lake water motion. The second was a video of window curtains moving due to the wind. The curtain sequence is available at \url{http://home.engineering.iastate.edu/~chenlu/ReProCS/Fig2.mp4}. For this sequence, the image size was $n=5120$ and the number of images, $t_{\max}=1755$. The lake sequence is available at \url{http://home.engineering.iastate.edu/~chenlu/ReProCS/ReProCS.htm} (sequence 3). For this sequence, $n=6480$ and the number of images, $t_{\max}=1500$. Any given background image sequence will never be exactly low rank, but only approximately so. Let the data matrix with its empirical mean subtracted be ${\cal L}_{full}$. Thus ${\cal L}_{full}$ is an $n \times t_{\max}$ matrix. We first ``low-rankified'' this dataset by computing the EVD of $(1/t_{\max}) {\cal L}_{full} {\cal L}_{full}'$; retaining the 90\% eigenvectors' set (i.e. sorting eigenvalues in non-increasing order and retaining all eigenvectors until the sum of the corresponding eigenvalues exceeded 90\% of the sum of all eigenvalues); and projecting the dataset into this subspace.
To be precise, we computed $P_{full}$ as the matrix containing these eigenvectors and we computed the low-rank matrix ${\cal L} = P_{full} P_{full}' {\cal L}_{full}$. Thus ${\cal L}$ is an $n \times t_{\max}$ matrix with $\operatorname{rank}({\cal L}) < \min(n, t_{\max})$. The curtains dataset is of size $5120 \times 1755$, but 90\% of the energy is contained in only $34$ directions, i.e. $\operatorname{rank}({\cal L})=34$. The lake dataset is of size $6480 \times 1500$ but 90\% of the energy is contained in only $14$ directions, i.e. $\operatorname{rank}({\cal L})=14$. This indicates that both datasets are indeed approximately low rank. In practical data, the subspace does not just change as simply as in the model given in Sec. \ref{model}. There are also rotations of the new and existing eigen-directions at each time which have not been modeled there. Moreover, with just one training sequence of a given type, it is not possible to compute $\text{Cov}(L_t)$ at each time $t$. Thus it is not possible to compute the delay between subspace change times. The only thing we can do is to assume that there may be a change every $d$ frames, and that during these $d$ frames the data is stationary and ergodic, and then estimate $\text{Cov}(L_t)$ for this period using a time average. We proceeded as follows. We took the first set of $d$ frames, ${\cal L}_{1:d} := [L_1, L_2 \dots L_d]$, estimated its covariance matrix as $(1/d) {\cal L}_{1:d} {\cal L}_{1:d}'$ and computed $P_0$ as the 99.99\% eigenvectors' set. Also, we stored the lowest retained eigenvalue and called it $\lambda^-$. It is assumed that all directions with eigenvalues below $\lambda^-$ are due to noise. Next, we picked the next set of $d$ frames, ${\cal L}_{d+1:2d} := [L_{d+1}, L_{d+2}, \dots L_{2d}]$; projected them perpendicular to $P_0$, i.e.
computed ${\cal L}_{1,p}=(I - P_0 P_0'){\cal L}_{d+1:2d}$; and computed $P_{1,\new}$ as the eigenvectors of $(1/d) {\cal L}_{1,p} {\cal L}_{1,p}'$ with eigenvalues equal to or above $\lambda^-$. Then, $P_1 = [P_0, P_{1,\new}]$. For the third set of $d$ frames, we repeated the above procedure, but with $P_0$ replaced by $P_1$ and obtained $P_2$. A similar approach was repeated for each batch. We used $d=150$ for both the datasets. In each case, we computed $r_0 := \operatorname{rank}(P_0)$, and $c_{\max} := \max_j \operatorname{rank}(P_{j,\new})$. For each batch of $d$ frames, we also computed $a_{t,\new} := P_{j,\new}' L_t$, $a_{t,*} := P_{j-1}' L_t$ and $\gamma_* := \max_t \|a_{t}\|_\infty$. We got $c_{\max}=3$ and $r_0=8$ for the lake sequence and $c_{\max}=5$ and $r_0=29$ for the curtain sequence. Thus the ratio $c_{\max}/r_0$ is sufficiently small in both cases. In Fig. \ref{model_verification}, we plot $\|a_{t,\new}\|_{\infty}/\gamma_*$ for one 150-frame period of the curtain sequence and for three 150-frame change periods of the lake sequence. If we take $\alpha=40$, we observe that $\gamma_\new := \max_j \max_{t_j \le t < t_j+ \alpha} \|a_{t,\new}\|_\infty = 0.125 \gamma_*$ for the curtain sequence and $\gamma_\new = 0.06 \gamma_*$ for the lake sequence, i.e. the projection along the new directions is small for the initial $\alpha$ frames. Also, clearly, it increases slowly. In fact $\|a_{t,\new}\|_{\infty} \le \min(v^{k-1} \gamma_\new,\gamma_*)$ for all $t \in \mathcal{I}_{j,k}$ also holds with $v=1.5$ for the curtain sequence and $v=1.8$ for the lake sequence. {\em Verifying the clustering assumption. } We verified the clustering assumption for the lake video as follows. We first ``low-rankified'' it to 90\% energy as explained above. Note that, with one sequence, it is not possible to estimate $\Lambda_t$ (this would require an ensemble of sequences) and thus it is not possible to check if all $\Lambda_t$'s in $[\tilde{t}_j, t_{j+1}-1]$ are similar enough.
However, by assuming that $\Lambda_t$ is the same for a long enough sequence, one can estimate it using a time average and then verify if its eigenvalues are sufficiently clustered. When this was done, we observed that the clustering assumption holds with $\tilde{g}_{\max} = 7.2$, $\tilde{h}_{\max} = 0.34$ and $\vartheta_{\max} = 7$. \begin{figure} \centerline{ \includegraphics[height=5cm]{infinity_norm_div_gamma_star} } \caption{Verification of slow subspace change. The figure is discussed in Sec \ref{model_verify}.} \label{model_verification} \end{figure} \section{Notation and Background} \label{bgnd} \subsection{Notation} For a set $T \subset \{1,2,\dots, n\}$, we use $|T|$ to denote its cardinality, i.e., the number of elements in $T$. We use $T^c$ to denote its complement w.r.t. $\{1,2,\dots n\}$, i.e. $T^c:= \{i \in \{1,2,\dots n\}: i \notin T \}$. We use the interval notation, $[t_1, t_2]$, to denote the set of all integers between and including $t_1$ and $t_2$, i.e. $[t_1, t_2]:=\{t_1, t_1+1, \dots, t_2\}$. For a vector $v$, $v_i$ denotes the $i$th entry of $v$ and $v_T$ denotes a vector consisting of the entries of $v$ indexed by $T$. We use $\|v\|_p$ to denote the $\ell_p$ norm of $v$. The support of $v$, $\text{supp}(v)$, is the set of indices at which $v$ is nonzero, $\text{supp}(v) := \{i : v_i\neq 0\}$. We say that $v$ is $s$-sparse if $|\text{supp}(v)| \leq s$. For a matrix $B$, $B'$ denotes its transpose, and $B^{\dag}$ its pseudo-inverse. For a matrix with linearly independent columns, $B^{\dag} = (B'B)^{-1}B'$. We use $\|B\|_2:= \max_{x \neq 0} \|Bx\|_2/\|x\|_2$ to denote the induced 2-norm of the matrix. Also, $\|B\|_*$ is the nuclear norm (sum of singular values) and $\|B\|_{\max}$ denotes the maximum over the absolute values of all its entries. We let $\sigma_i(B)$ denote the $i$th largest singular value of $B$. For a Hermitian matrix, $B$, we use the notation $B \overset{EVD}{=} U \Lambda U'$ to denote the eigenvalue decomposition of $B$.
Here $U$ is an orthonormal matrix and $\Lambda$ is a diagonal matrix with entries arranged in decreasing order. Also, we use $\lambda_i(B)$ to denote the $i$th largest eigenvalue of a Hermitian matrix $B$ and we use $\lambda_{\max}(B)$ and $\lambda_{\min}(B)$ to denote its maximum and minimum eigenvalues. If $B$ is Hermitian positive semi-definite (p.s.d.), then $\lambda_i(B) = \sigma_i(B)$. For Hermitian matrices $B_1$ and $B_2$, the notation $B_1 \preceq B_2$ means that $B_2-B_1$ is p.s.d. Similarly, $B_1 \succeq B_2$ means that $B_1-B_2$ is p.s.d. For a Hermitian matrix $B$, $\|B\|_2 = \sqrt{\max( \lambda_{\max}^2(B), \lambda_{\min}^2(B) )}$ and thus, $\|B\|_2 \le b$ implies that $-b \le \lambda_{\min}(B) \le \lambda_{\max}(B) \le b$. We use $I$ to denote an identity matrix of appropriate size. For an index set $T$ and a matrix $B$, $B_T$ is the sub-matrix of $B$ containing columns with indices in the set $T$. Notice that $B_T = B I_T$. Given a matrix $B$ of size $m \times n$ and a matrix $B_2$ of size $m \times n_2$, $[B \ B_2]$ constructs a new matrix by concatenating matrices $B$ and $B_2$ in the horizontal direction. Let $B_{\text{rem}}$ be a matrix containing some columns of $B$. Then $B \setminus B_{\text{rem}}$ is the matrix $B$ with columns in $B_{\text{rem}}$ removed. For a tall matrix $P$, $\Span(P)$ denotes the subspace spanned by the column vectors of $P$. The notation $[.]$ denotes an empty matrix. \section{Definitions needed for proving Theorem \ref{thm1}} \label{detailed} A few quantities are already defined in the model (Section \ref{model}), Definition \ref{Ijk}, Algorithm \ref{reprocs}, and Theorem \ref{thm1}. Here we define more quantities needed for the proofs.
\begin{definition} \label{kappaplus} In the sequel, we let \begin{enumerate} \item $r := r_{\max}= r_0 + Jc_{\max}$ and $c: = c_{\max} = \max_j c_{j,\new}$, \item $\kappa_{s,*} := \max_j \kappa_s(P_{j-1})$, $\kappa_{s,\new} := \max_j \kappa_s(P_{j,\new})$, $\kappa_{s,k}:= \max_j \kappa_s (D_{j,\new,k})$, $\tilde{\kappa}_{s,k} := \max_j \kappa_s((I-P_{j,\new}{P_{j,\new}}') \Phat_{j,\new,k})$, \item $\kappa_{2s,*}^+ := 0.3$, $\kappa_{2s,\new}^+ := 0.15$, ${\kappa}_{s}^+ := 0.152$, $\tilde{\kappa}_{2s}^+ := 0.15$ and $g^+ := \sqrt{2}$ are the upper bounds assumed in Theorem \ref{thm1} on $\max_j \kappa_{2s}(P_j)$, $\max_j \kappa_{2s}(P_{j,\new})$, $\max_j \max_k \kappa_{s}(D_{j,\new,k})$, $\max_j \max_k \kappa_{2s}(Q_{j,\new,k})$ and $g$ respectively. \item $\phi^+ := 1.1735$ \item $\gamma_{\new,k} := \min (1.2^{k-1} \gamma_{\new}, \gamma_*)$ (recall that this is defined in Sec \ref{slowss}). \end{enumerate} \end{definition} \begin{definition} \label{zetakplus} Define the following: \begin{enumerate} \item $\zeta_{j,*}^+ := (r_0 + (j-1)c)\zeta$ \item Define the sequence $\{{\zeta_{j,k}}^+\}_{k=0,1,2,\dots, K}$ recursively as follows: \begin{align} \zeta_{j,0}^+ & := 1 \nonumber \\ \zeta_{j,k}^+ & :=\frac{b + 0.125 c \zeta}{1 - (\zeta_{j,*}^+)^2 - (\zeta_{j,*}^+)^2 f - 0.125 c \zeta - b} \; \text{ for} \ k \geq 1, \end{align} \end{enumerate} where \begin{align*} & b := C \kappa_s^+ g^+ \zeta_{j,k-1}^+ + \tilde{C} (\kappa_s^+)^2 g^+ (\zeta_{j,k-1}^+)^2 + C' f (\zeta_{j,*}^+)^2 \\ & C := \frac{2\kappa_s^+ \phi^+}{\sqrt{1-(\zeta_{j,*}^+)^2}} + \phi^+ , \\ & C' := (\phi^+)^2 + \frac{2\phi^+ }{\sqrt{1-(\zeta_{j,*}^+)^2}} + 1 + \\ &\hspace{.8in} \phi^+ + \frac{\kappa_s^+ \phi^+}{\sqrt{1-(\zeta_{j,*}^+)^2}} + \frac{\kappa_s^+(\phi^+)^2 }{\sqrt{1-(\zeta_{j,*}^+)^2}} , \\ & \tilde{C} := (\phi^+)^2 + \frac{\kappa_s^+ (\phi^+)^2 }{\sqrt{1-(\zeta_{j,*}^+)^2}} .
\end{align*} As we will see, $\zeta_{j,*}^+$ and $\zeta_{j,k}^+$ are the high probability upper bounds on $\zeta_{j,*}$ and $\zeta_{j,k}$ (defined in Definition \ref{def_SEt}) under the assumptions of Theorem \ref{thm1}. \end{definition} \begin{definition} We define the noise seen by the sparse recovery step at time $t$ as $$\beta_t: = (I - \Phat_{(t-1)} \Phat_{(t-1)}') L_t.$$ Also define the reconstruction error of $S_t$ as $$e_t:= \Shat_t - S_t.$$ Here $\Shat_t$ is the final estimate of $S_t$ after the LS step in Algorithm \ref{reprocs}. Notice that $e_t$ also satisfies $e_t = L_t - \Lhat_t$. \end{definition} \begin{definition} \label{def_SEt} We define the subspace estimation errors as follows. Recall that $\Phat_{j,\new,0}=[.]$ (empty matrix).% \begin{eqnarray} && \SE_{(t)} := \|(I - \Phat_{(t)} \Phat_{(t)}') P_{(t)} \|_2, \nonumber \\ && \zeta_{j,*} := \|(I - \Phat_{j-1} \Phat_{j-1}') P_{j-1}\|_2 \nonumber \\ && \zeta_{j,k} := \|(I - \Phat_{j-1} \Phat_{j-1}' - \Phat_{j,\new,k} \Phat_{j,\new,k}') P_{j,\new}\|_2 \nonumber \end{eqnarray} \end{definition} \begin{remark}\label{zetastar} Recall from the model given in Sec \ref{model} and from Algorithm \ref{reprocs} that \begin{enumerate} \item $\Phat_{j,\new,k}$ is orthogonal to $\Phat_{j-1}$, i.e. $\Phat_{j,\new,k}'\Phat_{j-1}=0$ \item $\Phat_{j-1} := [\Phat_{0}, \Phat_{1,\new,K}, \dots \Phat_{j-1,\new,K}]$ and $P_{j-1}: = [P_0, P_{1,\new}, \dots P_{j-1,\new}]$ \item for $t \in \mathcal{I}_{j,k+1}$, $\Phat_{(t)} = [\Phat_{j-1}, \Phat_{j,\new,k}]$ and $P_{(t)} = P_j = [P_{j-1}, P_{j,\new}]$. \item $\Phi_{(t)} := I - \Phat_{(t-1)} \Phat_{(t-1)}'$ \end{enumerate} Then it is easy to see that \begin{enumerate} \item $\zeta_{j,*} \le \zeta_{j-1,*} + \zeta_{j,K} = \zeta_{1,*} + \sum_{j'=1}^{j-1} \zeta_{j',K}$ \item $\SE_{(t)} \le \zeta_{j,*} + \zeta_{j,k} \le \zeta_{1,*} + \sum_{j'=1}^{j-1} \zeta_{j',K} + \zeta_{j,k}$ \; for \ $t \in \mathcal{I}_{j,k+1}$. 
\end{enumerate} \end{remark} \begin{definition}\label{defn_Phi} Define the following \begin{enumerate} \item $\Phi_{j,k}$, $\Phi_{j,0}$ and $\phi_k$ \begin{enumerate} \item $\Phi_{j,k} := I-\Phat_{j-1} {\Phat_{j-1}}' - \Phat_{j,\new,k} {\Phat_{j,\new,k}}'$ is the CS matrix for $t \in \mathcal{I}_{j,k+1}$, i.e. $\Phi_{(t)} = \Phi_{j,k}$ for this duration. \item $\Phi_{j,0} := I-\Phat_{j-1} {\Phat_{j-1}}'$ is the CS matrix for $t \in \mathcal{I}_{j,1}$, i.e. $\Phi_{(t)} = \Phi_{j,0}$ for this duration. $\Phi_{j,0}$ is also the projection matrix used in all of the projection PCA steps for $t \in [t_j, t_{j+1}-1]$. \item $\phi_k := \max_j \max_{T:|T|\leq s}\|({(\Phi_{j,k})_T}'(\Phi_{j,k})_T)^{-1}\|_2$. It is easy to see that $\phi_k \le \frac{1}{1-\max_j \delta_s(\Phi_{j,k})}$ \cite{decodinglp}. \end{enumerate} \item $D_{j,\new,k}$, $D_{j,\new}$, $D_{j,*,k}$ and $D_{j,*}$ \begin{enumerate} \item $D_{j,\new,k} := \Phi_{j,k} P_{j,\new}$. $\Span(D_{j,\new,k})$ is the unestimated part of the newly added subspace for any $t \in \mathcal{I}_{j,k+1}$. \item $D_{j,\new} := D_{j,\new,0} = \Phi_{j,0} P_{j,\new}$. $\Span(D_{j,\new})$ is interpreted similarly for any $t \in \mathcal{I}_{j,1}$. \item $D_{j,*,k} := \Phi_{j,k} P_{j-1}$. $\Span(D_{j,*,k})$ is the unestimated part of the existing subspace for any $t \in \mathcal{I}_{j,k+1}$. \item $D_{j,*} := D_{j,*,0} = \Phi_{j,0} P_{j-1}$. $\Span(D_{j,*})$ is interpreted similarly for any $t \in \mathcal{I}_{j,1}$. \item Notice that $\zeta_{j,0} = \|D_{j,\new}\|_2$, $\zeta_{j,k} = \|D_{j,\new,k}\|_2$, $\zeta_{j,*} = \|D_{j,*}\|_2$. Also, clearly, $\|D_{j,*,k}\|_2 \le \zeta_{j,*}$. \end{enumerate} \end{enumerate} \end{definition} \begin{definition} \label{defHk}\ \begin{enumerate} \item Let $D_{j,\new} \overset{QR}{=} E_{j,\new} R_{j,\new}$ denote its reduced QR decomposition, i.e. let $E_{j,\new}$ be a basis matrix for $\Span(D_{j,\new})$ and let $R_{j,\new} = E_{j,\new}'D_{j,\new}$.
\item Let $E_{j,\new,\perp}$ be a basis matrix for the orthogonal complement of $\Span(E_{j,\new})=\Span(D_{j,\new})$. To be precise, $E_{j,\new,\perp}$ is an $n \times (n-c_{j,\new})$ basis matrix that satisfies $E_{j,\new,\perp}'E_{j,\new}=0$. \item Using $E_{j,\new}$ and $E_{j,\new,\perp}$, define $A_{j,k}$, $A_{j,k,\perp}$, $H_{j,k}$, $H_{j,k,\perp}$ and $B_{j,k}$ as \begin{eqnarray} A_{j,k} &:=& \frac{1}{\alpha} \sum_{t \in \mathcal{I}_{j,k}} {E_{j,\new}}' \Phi_{j,0} L_t {L_t}' \Phi_{j,0} E_{j,\new} \nonumber \\ A_{j,k,\perp} &:=& \frac{1}{\alpha} \sum_{t \in \mathcal{I}_{j,k}} {E_{j,\new,\perp}}' \Phi_{j,0} L_t {L_t}' \Phi_{j,0} E_{j,\new,\perp} \nonumber \\ H_{j,k} &:=& \frac{1}{\alpha}\sum_{t \in \mathcal{I}_{j,k}} {E_{j,\new}}' \Phi_{j,0} \nonumber \\ &&\hspace{.5in}(e_t {e_t}' -L_t {e_t}' - e_t {L_t}') \Phi_{j,0} E_{j,\new} \nonumber\\ H_{j,k,\perp} &:=& \frac{1}{\alpha} \sum_{t \in \mathcal{I}_{j,k}} {E_{j,\new,\perp}}'\Phi_{j,0} \nonumber \\ &&\hspace{.4in} (e_t {e_t}' - L_t {e_t}' - e_t {L_t}') \Phi_{j,0} E_{j,\new,\perp} \nonumber \\ B_{j,k} &:=& \frac{1}{\alpha}\sum_{t \in \mathcal{I}_{j,k}} {E_{j,\new,\perp}}'\Phi_{j,0} \Lhat_t \Lhat_t' \Phi_{j,0} E_{j,\new}\nonumber \\ &=& \frac{1}{\alpha}\sum_{t \in \mathcal{I}_{j,k}} {E_{j,\new,\perp}}'\Phi_{j,0} (L_t-e_t) \nonumber \\ &&\hspace{1.2in}({L_t}'-{e_t}')\Phi_{j,0} E_{j,\new} \nonumber \end{eqnarray} \item Define \begin{eqnarray} &&\mathcal{A}_{j,k} := \left[ \begin{array}{cc} E_{j,\new} & E_{j,\new,\perp} \\ \end{array} \right] \left[\begin{array}{cc} A_{j,k} \ & 0 \ \\ 0 \ & A_{j,k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} {E_{j,\new}}' \\ {E_{j,\new,\perp}}' \\ \end{array} \right]\nonumber\\ &&\mathcal{H}_{j,k} := \left[ \begin{array}{cc} E_{j,\new} & E_{j,\new,\perp} \\ \end{array} \right] \left[\begin{array}{cc} H_{j,k} \ & {B_{j,k}}' \ \\ B_{j,k} \ & H_{j,k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} {E_{j,\new}}' \\ {E_{j,\new,\perp}}' \\ \end{array} \right] \nonumber
\end{eqnarray} \end{enumerate} \end{definition} \begin{remark} \begin{enumerate} \item From the above, it is easy to see that $$\mathcal{A}_{j,k} + \mathcal{H}_{j,k} =\frac{1}{\alpha} \sum_{t \in \mathcal{I}_{j,k}} \Phi_{j,0} \hat{L}_t {\hat{L}_t}' \Phi_{j,0}.$$ \item Recall from Algorithm \ref{reprocs} that \begin{align*} \mathcal{A}_{j,k} +& \mathcal{H}_{j,k} \overset{EVD}{=}\\ &\left[ \begin{array}{cc} \Phat_{j,\new,k} & \Phat_{j,\new,k,\perp} \\ \end{array} \right] \left[\begin{array}{cc} \Lambda_k \ & 0 \ \\ 0 \ & \ \Lambda_{k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} \Phat_{j,\new,k}' \\ \Phat_{j,\new,k,\perp}' \\ \end{array} \right] \end{align*} is the EVD of $\mathcal{A}_{j,k} + \mathcal{H}_{j,k}$. \item Using the above, $\mathcal{A}_{j,k} + \mathcal{H}_{j,k}$ can be decomposed in two ways as follows: \begin{align*} &\mathcal{A}_{j,k} + \mathcal{H}_{j,k} \\ &= \left[ \begin{array}{cc} \Phat_{j,\new,k} & \Phat_{j,\new,k,\perp} \\ \end{array} \right] \left[\begin{array}{cc} \Lambda_k \ & 0 \ \\ 0 \ & \ \Lambda_{k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} \Phat_{j,\new,k}' \\ \Phat_{j,\new,k,\perp}' \\ \end{array} \right] \\ &= \left[ \begin{array}{cc} E_{j,\new} & E_{j,\new,\perp} \\ \end{array} \right] \\ &\hspace{.4in} \left[\begin{array}{cc} A_{j,k} + H_{j,k} \ & B_{j,k}' \ \\ B_{j,k} \ & A_{j,k,\perp} + H_{j,k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} {E_{j,\new}}' \\ {E_{j,\new,\perp}}' \\ \end{array} \right] \end{align*} \end{enumerate} \end{remark} \begin{definition} Define the random variable $X_{j,k} := \{a_1,a_2,\cdots,a_{t_j+k\alpha-1}\}$. \end{definition} Recall that the $a_t$'s are mutually independent over $t$, hence $X_{j,k}$ and $\{ a_{t_j+k\alpha}, \dots,a_{t_j+(k+1)\alpha -1} \}$ are mutually independent. 
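The first item of this remark is easy to verify numerically. The sketch below uses random stand-ins for $\Phi_{j,0}$, $P_{j,\new}$, the $L_t$'s and the $e_t$'s (so it checks only the algebraic identity from Definition \ref{defHk}, not any property of the algorithm's actual estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, alpha = 12, 2, 5                       # toy sizes (illustrative)

Phat, _ = np.linalg.qr(rng.standard_normal((n, 4)))
Phi = np.eye(n) - Phat @ Phat.T              # stand-in for Phi_{j,0}
P_new, _ = np.linalg.qr(rng.standard_normal((n, c)))

E, _ = np.linalg.qr(Phi @ P_new)             # E_new: basis of span(D_new)
Q, _ = np.linalg.qr(np.hstack([E, rng.standard_normal((n, n - c))]))
E_perp = Q[:, c:]                            # orthogonal complement of E_new

L = rng.standard_normal((n, alpha))          # stand-ins for the L_t's
e = 0.1 * rng.standard_normal((n, alpha))    # stand-ins for the e_t's
Lhat = L - e                                 # since e_t = L_t - Lhat_t

PL, PLh = Phi @ L, Phi @ Lhat
S = e @ e.T - L @ e.T - e @ L.T              # e e' - L e' - e L'

A      = E.T @ PL @ PL.T @ E / alpha
A_perp = E_perp.T @ PL @ PL.T @ E_perp / alpha
H      = E.T @ Phi @ S @ Phi @ E / alpha
H_perp = E_perp.T @ Phi @ S @ Phi @ E_perp / alpha
B      = E_perp.T @ PLh @ PLh.T @ E / alpha

EE   = np.hstack([E, E_perp])
calA = EE @ np.block([[A, np.zeros((c, n - c))],
                      [np.zeros((n - c, c)), A_perp]]) @ EE.T
calH = EE @ np.block([[H, B.T], [B, H_perp]]) @ EE.T

print(np.allclose(calA + calH, PLh @ PLh.T / alpha))   # prints True
```

The check succeeds for any choice of the stand-ins because $[E_{j,\new} \ E_{j,\new,\perp}]$ is an orthogonal matrix, so sandwiching the averaged outer product between it and its transpose changes nothing.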
\begin{definition} Define the set $\check{\Gamma}_{j,k}$ as follows: \begin{align*} \check{\Gamma}_{j,k} &:= \{ X_{j,k} : \zeta_{j,k} \leq \zeta_{k}^+ \text{ and } \hat{T}_t = T_t \text{ for all } t \in \mathcal{I}_{j,k} \} \\ \check{\Gamma}_{j,K+1} &:= \{ X_{j+1,0} : \hat{T}_t = T_t \text{ for all } t \in \mathcal{I}_{j,K+1} \} \end{align*} \end{definition} \begin{definition}\label{Gamma_def} Recursively define the sets $\Gamma_{j,k}$ as follows: \begin{align*} \Gamma_{1,0} & := \{ X_{1,0}: \zeta_{1,*} \leq r\zeta \\ &\hspace{.5in} \text{and} \ \hat{T}_t = T_t \ \text{for all} \ t\in [t_{\mathrm{train}}+1, t_1 -1]\} \\ \Gamma_{j,0} & := \{X_{j,0}: \zeta_{j',*} \le \zeta_{j',*}^+ \ \text{for all} \ j' = 1, 2, \dots, j \\ &\hspace{1.2in} \text{and} \ \hat{T}_t = T_t \ \text{for all} \ t \le t_{j-1} \}\\ \Gamma_{j,k} &:=\Gamma_{j,k-1} \cap \check{\Gamma}_{j,k}, \ k=1,2,\dots, K+1 \end{align*} \end{definition} \begin{remark} \label{etdef_rem} Whenever $\hat{T}_t = T_t$ we have an exact expression for $e_t$: \begin{equation}\label{et expression} e_t = I_{T_t} [ (\Phi_{(t)})_{T_t}'(\Phi_{(t)})_{T_t}]^{-1} {I_{T_t}}' \Phi_{(t)} L_t \end{equation} Recall that $L_t = P_j a_t = P_{j-1} a_{t,*} + P_{j,\new} a_{t,\new}$. \end{remark} \begin{definition} Define $P_{j,*}: = P_{j-1}$ and $\Phat_{j,*}: = \Phat_{j-1}$. \end{definition} \begin{remark} Notice that the subscript $j$ always appears as the first subscript, while $k$ is the last one. At many places in the rest of the paper, we remove the subscript $j$ for simplicity, e.g., $\Phi_0$ refers to $\Phi_{j,0}$, $\Phat_{\new,k}$ refers to $\Phat_{j,\new,k}$, $P_*$ refers to $P_{j,*}: = P_{j-1}$ and so on. \label{remove_j} \end{remark} \section{Proof of Theorem \ref{thm2}} \label{thmproof} We first give some new definitions next. We then give the key lemmas leading to the proof of the theorem and the proof itself. Finally we prove these lemmas.
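As a quick sanity check before the proof, the exact expression for $e_t$ in Remark \ref{etdef_rem} can be verified numerically. The sketch below uses a random projection $\Phi_{(t)}$, a random support, and toy dimensions (all illustrative), and confirms that the LS step on the correct support produces exactly that error:

```python
import numpy as np

rng = np.random.default_rng(2)
n, s, r = 20, 3, 4                           # toy sizes (illustrative)

Phat, _ = np.linalg.qr(rng.standard_normal((n, r)))
Phi = np.eye(n) - Phat @ Phat.T              # Phi_(t) = I - Phat Phat'

T = np.array([1, 5, 9])                      # true support T_t, |T_t| = s
S = np.zeros(n)
S[T] = 3.0 + rng.standard_normal(s)          # sparse vector S_t
# L_t lies only approximately in span(Phat), so Phi L_t is nonzero
L = Phat @ rng.standard_normal(r) + 0.05 * rng.standard_normal(n)
y = Phi @ (S + L)                            # projected measurement

# LS step on the (correctly detected) support: S_hat = I_T (Phi_T)^dag y
PhiT = Phi[:, T]                             # Phi_T = Phi I_T
S_hat = np.zeros(n)
S_hat[T] = np.linalg.lstsq(PhiT, y, rcond=None)[0]

# Exact expression: e_t = I_T [Phi_T' Phi_T]^{-1} I_T' Phi L_t
e_expr = np.zeros(n)
e_expr[T] = np.linalg.solve(PhiT.T @ PhiT, (Phi @ L)[T])

print(np.allclose(S_hat - S, e_expr))        # prints True
```

The key step in the derivation is that $\Phi_{(t)}$ is a projection matrix, so $\Phi_{(t)}'\Phi_{(t)} = \Phi_{(t)}$ and the $S_t$ component is recovered exactly, leaving only the leaked $L_t$ term.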
\subsection{Some New Definitions} \label{defs} Unless redefined here, all previous definitions still apply. \begin{definition} \label{def_zeta} Define the following: \begin{enumerate} \item $r = r_{\max} = r_0 + c_{\text{dif}}$ (Note that this is a redefinition from Definition \ref{kappaplus}) \item $\zeta_{j,*}^+ := r \zeta$ (Note that this is a redefinition from Definition \ref{zetakplus}) \item Define the sequence $\{{\tilde\zeta_{k}}^+\}_{k=1,2,\cdots,\vartheta_j}$ as follows \begin{eqnarray} {\tilde\zeta_{k}}^+: = \frac{f_{inc}(\tilde{g}_k,\tilde{h}_k, \kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta)}{f_{dec}(\tilde{g}_k,\tilde{h}_k, \kappa_{s,e}^+,\kappa_{s,*}^+ + r \zeta)} \nonumber \end{eqnarray} where $f_{inc}(.)$ and $f_{dec}(.)$ are defined in Definition \ref{def alpha del}. \end{enumerate} \end{definition} \begin{definition} Define \begin{enumerate} \item $\Psi_{j,k} : = I - \sum_{i=0}^{k} \hat{G}_{j,i} \hat{G}_{j,i}'$. \item $G_{j,\text{det},k} := [G_{j,1}, \cdots, G_{j,k-1}]$ and $\hat{G}_{j,\text{det},k} := [\hat{G}_{j,1}, \cdots, \hat{G}_{j,k-1}]$. Notice that $\Psi_{j,k} = I - \hat{G}_{j,\text{det},k+1}\hat{G}_{j,\text{det},k+1}'$. \item $G_{j,\text{undet},k} := [G_{j,k+1}, \cdots, G_{j,\vartheta_j}]$. \item $D_{j,k} := \Psi_{j,k-1} G_{j,k}$, $D_{j,\text{det},k} := \Psi_{j,k-1} G_{j,\text{det},k}$ and $D_{j,\text{undet},k} := \Psi_{j,k-1} G_{j,\text{undet},k}$. \end{enumerate} \end{definition} \begin{definition}\label{defHk_del} \ \begin{enumerate} \item Let $D_{j,k} \overset{QR}{=} E_{j,k} R_{j,k}$ denote its reduced QR decomposition, i.e. let $E_{j,k}$ be a basis matrix for $\Span(D_{j,k})$ and let $R_{j,k}:=E_{j,k}'D_{j,k}$. \item Let $E_{j,k,\perp}$ be a basis matrix for the orthogonal complement of $\Span(E_{j,k}) = \Span(D_{j,k})$. To be precise, $E_{j,k,\perp}$ is an $n \times (n-\tilde{c}_{j,k})$ basis matrix that satisfies ${E_{j,k,\perp}}' E_{j,k} = 0$.
\item Using $E_{j,k}$ and $E_{j,k,\perp}$, define $\tilde{A}_{j,k}$, $\tilde{A}_{j,k,\perp}$, $\tilde{H}_{j,k}$, $\tilde{H}_{j,k,\perp}$ and $\tilde{B}_{j,k}$ as \begin{eqnarray} \tilde{A}_{j,k} &:=& \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{I}_{j,k}} {E_{j,k}}' \Psi_{j,k-1} L_t{L_t}' \Psi_{j,k-1} E_{j,k} \nonumber \\ \tilde{A}_{j,k,\perp} &:=& \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{I}_{j,k}} {E_{j,k,\perp}}' \Psi_{j,k-1} L_t{L_t}' \Psi_{j,k-1} E_{j,k,\perp} \nonumber \\ \tilde{H}_{j,k} &:=& \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{I}_{j,k}} {E_{j,k}}'\Psi_{j,k-1} (e_t{e_t}' - L_t{e_t}' - e_t {L_t}') \Psi_{j,k-1} E_{j,k} \nonumber \\ \tilde{H}_{j,k,\perp} &:=& \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{I}_{j,k}} {E_{j,k,\perp}}' \Psi_{j,k-1} \nonumber \\ && \hspace{1in} (e_t{e_t}' - L_t{e_t}' - e_t {L_t}') \Psi_{j,k-1} E_{j,k,\perp} \nonumber \\ \tilde{B}_{j,k} &:=& \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{I}_{j,k}} {E_{j,k,\perp}}' \Psi_{j,k-1} \hat{L}_t{\hat{L}_t}' \Psi_{j,k-1} E_{j,k} \nonumber \\ &=& \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{I}_{j,k}} {E_{j,k,\perp}}'\Psi_{j,k-1} (L_t-e_t)({L_t}' - {e_t}') \Psi_{j,k-1} E_{j,k} \nonumber \end{eqnarray} \item Define \begin{eqnarray} &&\tilde{\mathcal{A}}_{j,k} := \left[ \begin{array}{cc} E_{j,k} & E_{j,k,\perp} \\ \end{array} \right] \left[\begin{array}{cc}\tilde{A}_{j,k} \ & 0 \ \\ 0 \ & \tilde{A}_{j,k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} {E_{j,k}}' \\ {E_{j,k,\perp}}' \\ \end{array} \right]\nonumber\\ &&\tilde{\mathcal{H}}_{j,k} := \left[ \begin{array}{cc} E_{j,k} & E_{j,k,\perp} \\ \end{array} \right] \left[\begin{array}{cc} \tilde{H}_{j,k} \ & {\tilde{B}_{j,k}}' \ \\ \tilde{B}_{j,k} \ & \tilde{H}_{j,k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} {E_{j,k}}' \\ {E_{j,k,\perp}}' \\ \end{array} \right]\nonumber \label{defn_tilde_Hk} \end{eqnarray} \item From the above, it is easy to see that $$\tilde{\mathcal{A}}_{j,k} + \tilde{\mathcal{H}}_{j,k} =\frac{1}{\tilde{\alpha}} 
\sum_{t \in \tilde{\mathcal{I}}_{j,k}} \Psi_{j,k-1} \hat{L}_t {\hat{L}_t}' \Psi_{j,k-1}.$$ \item Recall from Algorithm \ref{ReProCS_del} that \begin{align*} \tilde{\mathcal{A}}_{j,k} +& \tilde{\mathcal{H}}_{j,k} = \frac{1}{\tilde{\alpha}} \sum_{t \in \tilde{\mathcal{I}}_{j,k}} \Psi_{j,k-1} \hat{L}_t {\hat{L}_t}' \Psi_{j,k-1} \\ &\overset{EVD}{=} \left[ \begin{array}{cc} \hat{G}_{j,k} & \hat{G}_{j,k,\perp} \\ \end{array} \right] \left[\begin{array}{cc} \Lambda_{j,k} \ & 0 \ \\ 0 \ & \ \Lambda_{j,k,\perp} \\ \end{array} \right] \left[ \begin{array}{c} \hat{G}_{j,k}' \\ \hat{G}_{j,k,\perp}' \\ \end{array} \right] \end{align*} is the EVD of $\tilde{\mathcal{A}}_{j,k} + \tilde{\mathcal{H}}_{j,k}$. Here $\Lambda_{j,k}$ is a $\tilde{c}_{j,k} \times \tilde{c}_{j,k}$ diagonal matrix. \end{enumerate} \end{definition} \begin{definition} For $k=1,2,\cdots,\vartheta_j$, define \[ \tilde{\zeta}_{j,k} : = \bigg\|\Big(I - \sum_{i=1}^{k} \hat{G}_{j,i} \hat{G}_{j,i}'\Big)G_{j,k}\bigg\|_2 \] This is the error in estimating $\Span(G_{j,k})$ after the $k^{th}$ iteration of the cluster-PCA step. \end{definition} \begin{remark}\label{SE_rem} \ \begin{enumerate} \item Notice that $\zeta_{j,0} = \|D_{j,\new}\|_2$, $\zeta_{j,k} = \|D_{j,\new,k}\|_2$ and $\tilde{\zeta}_{j,k} = \|(I - \hat{G}_k \hat{G}_k') D_{j,k}\|_2 = \|\Psi_{j,k}G_{j,k}\|_2$. \item Notice from the algorithm that (i) $\Phat_{j,\new,k}$ is perpendicular to $\Phat_{j,*}=\Phat_{j-1}$; and (ii) $\hat{G}_{j,k}$ is perpendicular to $[\hat{G}_{j,1},\hat{G}_{j,2},\dots \hat{G}_{j,k-1}]$.
\item For $t\in \mathcal{I}_{j,k}$, $P_{(t)} = P_j = [(P_{j-1}R_j \setminus P_{j,\old}), \ P_{j,\new}]$, $\hat{P}_{(t)} = [\hat{P}_{j-1} \ \hat{P}_{j,\new,k}]$ and \begin{align*} SE_{(t)} &= \|(I - \hat{P}_{j-1} {\hat{P}_{j-1}}' - \hat{P}_{j,\new,k}{\hat{P}_{j,\new,k}}')P_j\|_2\\ & \leq \|(I - \hat{P}_{j-1} {\hat{P}_{j-1}}' - \hat{P}_{j,\new,k}{\hat{P}_{j,\new,k}}')\\ &\hspace{1.6in} [ P_{j-1} \ P_{j,\new}]\|_2 \\ &\leq \zeta_{j,*} + \zeta_{j,k} \end{align*} for $k=1,2 \dots K$. The last inequality uses the first item of this remark. \item For $t \in \tilde{\mathcal{I}}_{j,k}$, $P_{(t)} = P_j$, $\hat{P}_{(t)} = [\hat{P}_{j-1} \ \hat{P}_{j,\new,K}]$ and $$SE_{(t)} = SE_{(t_j + K\alpha-1)} \leq \zeta_{j,*} + \zeta_{j,K}$$ \item For $t \in \tilde{\mathcal{I}}_{j,\vartheta_j+1}$, $P_{(t)} = P_j$, $\operatorname{span}(P_j) = \operatorname{span}([G_{j,1},\cdots,G_{j,\vartheta_j}])$, $\hat{P}_{(t)} = \hat{P}_j = [\hat{G}_{j,1},\cdots,\hat{G}_{j,\vartheta_j}]$, and $$SE_{(t)} = \zeta_{j+1,*} \leq \sum_{k=1}^{\vartheta_j} \tilde{\zeta}_{j,k}$$ The last inequality uses the first item of this remark. \end{enumerate} \end{remark} \begin{definition} Recall the definition of $\Phi_{j,k}$ from Definition \ref{defn_Phi}. Define $\Phi_{(t)}$ as \begin{eqnarray} \Phi_{(t)} := \left\{ \begin{array}{ll} \Phi_{j,k-1} \ & \ \ t \in \mathcal{I}_{j,k}, \ k=1,2 \dots K \\ \Phi_{j,K} \ & \ \ t \in \mathcal{\tilde{I}}_{j,k}, \ k=1,2 \dots \vartheta_j \\ \Phi_{j+1,0} \ & \ \ t \in \mathcal{\tilde{I}}_{j,\vartheta_j+1} \end{array} \right. 
\nonumber \end{eqnarray} \end{definition} \begin{definition} Define the random variable \begin{align*} \tilde{X}_{j,k} &:= \{ a_1,a_2,\cdots, a_{t_j + K \alpha +k\tilde{\alpha}- 1}\} \end{align*} \end{definition} \begin{definition} Define the sets \begin{align*} \tilde{\check{\Gamma}}_{j,k}& := \{ \tilde{X}_{j,k} :\tilde{\zeta}_{j,k} \leq \tilde{c}_{j,k}\zeta, \ \text{and} \ \hat{T}_t = T_t \ \text{for all} \ t \in \mathcal{\tilde{I}}_{j,k}\}, \\ &\hspace{.5in} k=1,2,\dots \vartheta_j, \ j=1,2,3,\dots J \\ \tilde{\check{\Gamma}}_{j, \vartheta_j+1}& := \{X_{j+1,0}: \hat{T}_t = T_t \ \text{for all} \ t \in \mathcal{\tilde{I}}_{j,\vartheta_j+1}\}, \\ &\hspace{1.2in} j=1,2,3,\dots J \nonumber \end{align*} Define the sets \begin{align*} \tilde{\Gamma}_{j,0} &:= \Gamma_{j,K} \\ \tilde{\Gamma}_{j,k}& := \tilde{\Gamma}_{j,k-1} \cap \tilde{\check{\Gamma}}_{j,k}, \ k=1,2,\dots \vartheta_j, \ j=1,2,3,\dots J \end{align*} \end{definition} \begin{definition} \label{def_kappa_D} Define $\kappa_{s,D}: = \max_j \max_k \kappa_s( D_{j,k})$. \end{definition} \begin{remark} \label{rem_kappa_D} Conditioned on $\tilde\Gamma_{j,k-1}^e$, it is easy to see that \begin{align*} \kappa_{s,D} & :=\max_j \max_k \kappa_s( D_{j,k}) \\ &\le \max_j \max_k (\kappa_s(G_{j,k}) + r \zeta) \\ & \le \max_j \kappa_s(P_j) + r \zeta \le \kappa_{s,D}^+: = \kappa_{s,*}^+ + r \zeta. \end{align*} In the above we have used $ \kappa_s(G_{j,k}) \le \kappa_s(P_j)$ and the same idea as in Lemma \ref{Dnew0_lem}. \end{remark} \subsection{Performance Guarantees} \label{result} We state the main result here and then discuss it in Section \ref{discuss_add}. Definitions needed for the proof are given in Section \ref{detailed} and the actual proof is given in Section \ref{mainlemmas}. \begin{definition}\label{defn_alpha} We define here the parameters that will be used in Theorem \ref{thm1}. \begin{enumerate} \item Let $c:= c_{\max}$ and $r:= r_0 + (J-1)c$.
\item Define $K=K(\zeta) := \left\lceil\frac{\log(0.6c\zeta)}{\log {0.6}} \right\rceil$ \item Define $\xi_0(\zeta) := \sqrt{c} \gamma_{\new} + \sqrt{\zeta}(\sqrt{r} + \sqrt{c})$ \item Define \begin{multline*} \alpha_\add(\zeta) := \left\lceil (\log 6KJ + 11 \log n) \frac{8 \cdot 24^2} {\zeta^2 (\lambda^-)^2}\cdot \right. \\ \max\left(\min(1.2^{4K} \gamma_{\new}^4, \gamma_*^4), \frac{16}{c^2}, \right. \\ 4(0.186 \gamma_\new^2 + 0.0034 \gamma_\new + 2.3)^2 \Big) \Bigg\rceil \end{multline*} We note that $\alpha_{\text{add}}$ is the number of data points, $\alpha$, used for one projection PCA step and is chosen to ensure that the conclusions of Theorem \ref{thm1} hold with probability at least $(1 - n^{-10})$. If $\gamma_*$ is large enough (${\gamma_*}^4>16$), a simpler but larger value for $\alpha_\add(\zeta)$ is $$\alpha_\add(\zeta) = \left\lceil (\log 6KJ + 11 \log n) \frac{ 8 \cdot 24^2 \gamma_*^4}{\zeta^2 (\lambda^-)^2} \right\rceil$$ \end{enumerate} \end{definition} \begin{theorem} \label{thm1} Consider Algorithm \ref{reprocs}. Pick a $\zeta$ that satisfies \[ \zeta \leq \min\left(\frac{10^{-4}}{r^2},\frac{1.5 \times 10^{-4}}{r^2 f},\frac{1}{r^{3}\gamma_*^2}\right) \] Assume that the initial subspace estimate is accurate enough, i.e. $\|(I - \Phat_0 \Phat_0') P_0\| \le r_0 \zeta$. 
If the following conditions hold: \begin{enumerate} \item The algorithm parameters are set as $\xi = \xi_0(\zeta), \ 7 \xi \leq \omega \leq S_{\min} - 7 \xi, \ K = K(\zeta), \ \alpha \ge \alpha_{\text{add}}(\zeta)$ \item $L_t$ satisfies Signal Model \ref{Ltmodel} with \begin{enumerate} \item $0 \le c_{j,\new} \leq c_{\max}$ for all $j$ (thus $r_j \le r_{\max}:=r_0 + J c_{\max}$), \item the $a_t$'s are mutually independent over $t$, \item $\|a_t\|_{\infty} \leq \gamma_*$ for all $t$ ($a_t$'s bounded), \item $0 < \lambda^- \le \lambda^+ < \infty$, and \item $g \le g^+ = \sqrt{2}$; \end{enumerate} \item \label{slow} slow subspace change holds: (\ref{delay}) holds with $d=K\alpha$; (\ref{atnew_inc}) holds with $v = 1.2$; and $c$ and $\gamma_\new$ are small enough so that $14 \xi_0 (\zeta) \le S_{\min}$. \item denseness holds: equation \eqref{kappa plus} holds with $\kappa_{2s,*}^+ = 0.3$ and equation \eqref{kappa new plus} holds with $\kappa_{2s,\new}^+ = 0.15$ \item the matrices \begin{align*} D_{j,\new,k} &:= (I - \Phat_{j-1} \Phat_{j-1}'-\Phat_{j,\new,k} \Phat_{j,\new,k}')P_{j,\new} \\ \text{and} \\ Q_{j,\new,k} &: = (I-P_{j,\new}{P_{j,\new}}')\Phat_{j,\new,k} \end{align*} satisfy \begin{align*} \max_j \max_{1 \le k \le K} \kappa_{s}(D_{j,\new,k}) & \le \kappa_{s}^+ := 0.152 \\ \max_j \max_{1 \le k \le K} \kappa_{2s}(Q_{j,\new,k}) & \leq \tilde{\kappa}_{2s}^+ := 0.15 \end{align*} \end{enumerate} then, with probability at least $(1 - n^{-10})$, all of the following hold: \begin{enumerate} \item at all times, $t$, $$\That_t = T_t \ \ \text{and}$$ \begin{multline*} \|e_t\|_2 = \|L_t - \hat{L}_t\|_2 = \|\hat{S}_t - S_t\|_2 \le \\0.18 \sqrt{c} \gamma_{\new} + 1.2\sqrt{\zeta}(\sqrt{r} + 0.06 \sqrt{c}).
\end{multline*} \item the subspace error $\SE_{(t)} := \|(I - \Phat_{(t)} \Phat_{(t)}') P_{(t)} \|_2$ satisfies \begin{align*} \SE_{(t)} &\le \left\{ \begin{array}{ll} (r_0 + (j-1)c) \zeta + 0.4 c \zeta + 0.6^{k-1} & \ \\ \hspace{1in} \text{if} \ \ t \in \mathcal{I}_{j,k}, \ k=1,2 \dots K \nonumber \\ (r_0 + jc) \zeta \qquad \text{if} \ \ t \in \mathcal{I}_{j,K+1} \end{array} \right. \nonumber \\ &\le \left\{ \begin{array}{ll} 10^{-2} \sqrt{\zeta} + 0.6^{k-1} \ \\ \hspace{.8in} \text{if} \ \ t \in \mathcal{I}_{j,k}, \ k=1,2 \dots K \nonumber \\ 10^{-2} \sqrt{\zeta} \qquad \text{if} \ \ t \in \mathcal{I}_{j,K+1} \end{array} \right. \end{align*} \item the error $e_t = \hat{S}_t - S_t = L_t - \hat{L}_t$ satisfies the following at various times \begin{align*} \|e_t\|_2 & \le \left\{ \begin{array}{ll} 0.18 \sqrt{c} \cdot 0.72^{k-1}\gamma_{\new} + \\ \qquad 1.2 (\sqrt{r} + 0.06 \sqrt{c}) (r_0+(j-1)c)\zeta \gamma_* \\ \hspace{1 in} \text{if} \ \ t \in \mathcal{I}_{j,k}, \ k=1,2 \dots K \nonumber \\ 1.2(r_0+ j c) \zeta \sqrt{r} \gamma_* \quad \ \text{if} \ \ t \in \mathcal{I}_{j,K+1} \end{array} \right. \nonumber \\ & \le \left\{ \begin{array}{ll} 0.18 \sqrt{c} \cdot 0.72^{k-1}\gamma_{\new} + 1.2(\sqrt{r} + 0.06 \sqrt{c}) \sqrt{\zeta} \\ \hspace{1in} \text{if} \ \ t \in \mathcal{I}_{j,k}, \ k=1,2 \dots K \nonumber \\ 1.2 \sqrt{r} \sqrt{\zeta} \quad \text{if} \ \ t \in \mathcal{I}_{j,K+1} \end{array} \right. \end{align*} \end{enumerate} \end{theorem} \begin{remark} \label{Dnew0_rem} Consider the last assumption. We actually also need a similar denseness assumption, i.e. a bound on $\kappa_s(D_{j,\new})$, where $D_{j,\new} =D_{j,\new,0}= (I - \Phat_{j-1} \Phat_{j-1}')P_{j,\new}$. Conditioned on the fact that $\Span(P_{j-1})$ has been accurately estimated, this follows easily from the denseness of $P_{j,\new}$ (see Lemma \ref{Dnew0_lem}). \end{remark} \subsection{Discussion} \label{discuss_add} First consider the choices of $\alpha$ and of $K$. Notice that $K = K(\zeta)$ is larger if $\zeta$ is smaller.
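These scalings are easy to tabulate numerically. In the sketch below, the parameter values are illustrative (not taken from our experiments), and the simpler form of $\alpha_\add(\zeta)$ valid for ${\gamma_*}^4 > 16$ is used:

```python
import math

def K_of_zeta(zeta, c):
    """K(zeta) = ceil( log(0.6 c zeta) / log 0.6 )."""
    return math.ceil(math.log(0.6 * c * zeta) / math.log(0.6))

def alpha_add(zeta, K, J, n, gamma_star, lam_minus):
    """Simpler (larger) alpha_add, valid when gamma_star**4 > 16."""
    return math.ceil((math.log(6 * K * J) + 11 * math.log(n))
                     * 8 * 24**2 * gamma_star**4
                     / (zeta**2 * lam_minus**2))

# Illustrative parameter values
c, J, n, gamma_star, lam_minus = 5, 10, 2048, 10.0, 1.0
for zeta in (1e-3, 1e-4):
    K = K_of_zeta(zeta, c)
    print(zeta, K, alpha_add(zeta, K, J, n, gamma_star, lam_minus))
```

The printed values show that $K$ grows only logarithmically in $1/\zeta$, while $\alpha_\add$ grows like $1/\zeta^2$ (up to a log factor).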
Also, $\alpha_\add$ is inversely proportional to $\zeta^2$. Thus, if we want to achieve a smaller lowest error level, $\zeta$, we need to compute projection PCA over larger durations $\alpha$ and we need a larger number of projection PCA steps $K$. This means that we also require a larger delay between subspace change times, i.e. larger $t_{j+1}-t_j$. Now consider the assumptions used in the result. We assume slow subspace change, i.e. the delay between change times is large enough, $\|a_{t,\new}\|_\infty$ is initially below $\gamma_\new$ and increases gradually, and $14 \xi_0 \le S_{\min}$ which holds if $c_{\max}$ and $\gamma_\new$ are small enough. Small $c_{\max}$, small initial $a_{t,\new}$ (i.e. small $\gamma_\new$) and its gradual increase are verified for real video data in Section \ref{model_verify}. As explained there, one cannot estimate the delay between change times unless one has access to an ensemble of videos of a given type and hence the first assumption cannot be verified. We also assume denseness of $P_{j-1}$ and $P_{j,\new}$. This is a subset of the denseness assumptions used in earlier work \cite{rpca}. As explained there, this is valid for the video application because typically the changes of the background sequence are global, e.g. due to illumination variation affecting the entire image or due to textural changes such as water motion or tree leaves' motion, etc. We quantify this denseness using the parameter $\kappa_s$. The way it is defined, bounds on $\kappa_s$ simultaneously place restrictions on denseness of $L_t$, $r = \operatorname{rank}(P_J)$, and $s$ (the maximum sparsity of any $S_t$). To compare our assumptions with those of Cand\`{e}s et al. in \cite{rpca}, we could assume $\kappa_1(P_{J}) \leq \sqrt{\frac{\mu r}{n}}$, where $\mu$ is any value between $1$ and $\frac{n}{r}$.
Using the bound $\kappa_s(P) \leq \sqrt{s}\kappa_1(P)$, we see that if $\frac{2 s r}{n}\leq \mu^{-1}(0.3)^2$, then our assumption of $\kappa_{2s}(P_{J}) \leq 0.3$ will be satisfied. Up to differences in the constants, this is the same requirement found in \cite{hsu2011robust}, even though \cite{hsu2011robust} studies a batch approach (PCP) while we study an online algorithm. From this we can see that if $s$ grows linearly with $n$, then $r$ must be constant. Similarly, if $r$ grows linearly with $n$, then $s$ must be constant. This is a stronger assumption than required by \cite{rpca} where $s$ is allowed to grow linearly with $n$, and $r$ is simultaneously allowed to grow as $\frac{n}{\log(n)^2}$. However, the comparison with \cite{rpca} is not direct because we do not need denseness of the right singular vectors or a bound on the vector infinity norm of $UV'$. The reason for the stronger requirement on the product $sr$ is because we study an online algorithm that recovers the sparse vector $S_t$ at each time $t$ rather than in a batch or a piecewise batch fashion. Because of this the sparse recovery step does not use the low dimensional structure of the new (and still unestimated) subspace. We assume the independence of $a_t$'s, and hence of $L_t$'s, over time. This is typically not valid in practice; however, it allows us to simplify the problem and hence the derivation of the performance guarantees. In particular it allows us to use the matrix Hoeffding inequality to bound the terms in the subspace error bound. In ongoing work by Zhan and Vaswani \cite{reprocs_cor}, we are seeing that, with some more work, this can be replaced by a more realistic assumption: an autoregressive model on the $a_t$'s, i.e. assume $a_t = b a_{t-1} + \nu_t$ where $\nu_t$'s are independent over time and $b<1$. We can work with this model in two ways. 
If we assume $b$ is known, then a simple change to the algorithm (in the subspace update step, replace $\Lhat_t$ by $\Lhat_t - b \Lhat_{t-1}$ everywhere) allows us to get a result that is almost the same as the current one using exactly the same approach. Alternatively, if $b$ is unknown, as long as $b$ is bounded by a $b_* <1$, we can use the matrix Azuma inequality to still get a result similar to the current one. It will require a larger $\alpha$, though, and some other changes. The most limiting assumption is the assumption on $D_{j,\new,k}$ and $Q_{j,\new,k}$, because these are functions of algorithm estimates. The denseness assumption on $Q_{j,\new,k}$ is actually not essential; it is possible to prove a slightly more complicated version of Theorem \ref{thm1} without it. We use this assumption only in Lemma \ref{RIC_bnd}. However, if we use tighter bounds on other quantities such as $g$ and $\kappa_s(P_{j,\new})$, and if we analyze the first projection-PCA step differently from the others, we can get a tighter bound on $\zeta_{j,1}$ (and hence $\zeta_{j,k}$ for $k \ge 1$) and then we will not need this assumption. Consider denseness of $D_{j,\new,k}$. Our proof actually only needs smallness of $\max_{t \in \mathcal{I}_{j,k+1}} d_t$ where $d_t = \|{I_{T_t}}' D_{j,\new,k}\|_2 / \|D_{j,\new,k}\|_2$ for $t \in \mathcal{I}_{j,k+1}$ and $k=1,2 \dots K$. Since this quantity is upper bounded by $\kappa_s(D_{j,\new,k})$, we have just assumed a bound on the latter for simplicity. Note also that denseness of $D_{j,\new,0}$ does not need to be assumed; it follows from denseness of $P_{j,\new}$ conditioned on the fact that $P_{j-1}$ has been accurately estimated. We attempted to verify the smallness of $d_t$ in simulations done with a dense $P_j$ and $P_{j,\new}$ and involving correlated support change of the $S_t$'s. We observed that, as long as there was a support change every few frames, this quantity was small.
For example, with $n=2048$, $s=20$, $r_0=36$, $c_\new=1$, support change by one index every 2 frames was sufficient to ensure a small $d_t$ at all times (see Sec \ref{sims}). Even one index change every 50 frames was enough to ensure that the errors decayed down to small enough values, although in this case $d_t$ was large at certain times and the decay of the subspace error was not exponential. It should be possible to modify our result to exploit such support change assumptions as well. The first thing to point out is that the max of $d_t$ can be replaced by its average over $t \in \mathcal{I}_{j,k}$ with a minor change to the proof of Lemma \ref{termbnds}. Moreover, if we try to show linear decay of the subspace error (instead of exponential decay), and if we analyze the first projection-PCA interval differently from the others, we will need a looser bound on the $d_t$'s, which will be easier to obtain under a certain support change assumption. In the first interval, the subspace error is large since $P_\new$ has not been estimated, but $D_{\new,0}$ is dense (see Remark \ref{Dnew0_rem}). In the later intervals, the subspace error is lower, but $D_{\new,k}$ may not be as dense. Finally, Algorithm \ref{reprocs} assumes knowledge of certain model parameters and these may not always be available. It needs to know $c_{j,\new}$, which is the number of new directions added at subspace change time $j$, and it needs knowledge of $\gamma_\new$ (in order to set $\xi$ and $\omega$), which is the bound on the infinity norm of the projection of $a_t$ along the new directions for the first $\alpha$ frames. It also needs to know the subspace change times $t_j$, and this is the most restrictive. A practical version of Algorithm \ref{reprocs} (that provides reasonable heuristics for setting its parameters without model knowledge) is given in \cite{han_tsp}.
As explained there, $\hat{t}_j + \alpha-1$ can be estimated by taking the last set of $\alpha$ estimates $\Lhat_t$, projecting them perpendicular to $\Phat_{j-1}$ and checking if any of the singular values of the resulting matrix is above $\sqrt{\hat\lambda^-}$. It should be possible to prove in future work that this happens only after an actual change and within a short delay of it. Lastly, note that, because the subspace change model only allows new additions to the subspace, the rank of the subspace basis matrix $P_j$ can only grow over time. The same is true for its ReProCS estimate. Thus, $\max_j \kappa_{2s}(P_j) = \kappa_{2s}(P_J)$ and a bound on this imposes a bound on the number of allowed subspace change times, $J$, or equivalently on the maximum rank of ${\cal L}_t$ for any $t$. A similar bound is also needed by PCP \cite{rpca} and all batch approaches. In Sec \ref{Del_section}, we explain how we can remove the bound on $J$ and hence on the rank of ${\cal L}_t$ if an extra clustering assumption holds.
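The change-detection heuristic just described can be sketched numerically. In the snippet below, the function name, the interface, and the exact normalization of the threshold are illustrative assumptions rather than the precise rule of the practical algorithm in \cite{han_tsp}; it only illustrates the idea of projecting the last $\alpha$ estimates perpendicular to the current subspace estimate and comparing singular values against the threshold $\sqrt{\hat\lambda^-}$.

```python
import numpy as np

def subspace_change_detected(L_hat, P_hat_prev, lambda_minus_hat):
    """Flag a subspace change from the last alpha estimates.

    L_hat: n x alpha matrix whose columns are the latest low-rank
        estimates; P_hat_prev: n x r matrix with orthonormal columns
        (the current subspace estimate); lambda_minus_hat: estimated
        smallest eigenvalue of the covariance along that subspace.
    Project the estimates perpendicular to span(P_hat_prev) and flag
    a change if any singular value of the residual, normalized by
    sqrt(alpha), exceeds sqrt(lambda_minus_hat); the 1/alpha
    normalization (so that squared singular values compare with
    covariance eigenvalues) is an assumption made here."""
    alpha = L_hat.shape[1]
    residual = L_hat - P_hat_prev @ (P_hat_prev.T @ L_hat)
    svals = np.linalg.svd(residual, compute_uv=False)
    return bool(np.any(svals ** 2 / alpha > lambda_minus_hat))
```

With data lying exactly in the span of the current estimate, the residual vanishes and no change is flagged; once the data acquire energy along a new direction, the top singular value of the residual grows like $\sqrt{\alpha}$ and the test fires.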
\section{Introduction} Let $(R, \mathfrak{m})$ be a local ring of positive characteristic $p$ and Krull dimension $d$. Assume that $R$ is $F$-finite and has perfect residue field. The classical Hilbert-Kunz function and multiplicity are defined for $R$-modules of finite length, and in particular for $\mathfrak{m}$-primary ideals. \begin{definition} Let $M$ be a finite length $R$-module. The Hilbert-Kunz function of $M$ is $$ f_{HK}^M(n)= l(F^n(M)) $$ where $F^n(M)=M\otimes_R {^{n}R}$ denotes the $n$-fold iteration of the Frobenius functor, and the Hilbert-Kunz multiplicity of $M$ is $$ e_{HK}(M)= \lim_{n \rightarrow \infty} \frac{f_{HK}^M(n)}{p^{nd}}. $$ In particular, if $I\subset R$ is an $\mathfrak{m}$-primary ideal, the Hilbert-Kunz function of $R/I$ is $$f_{HK}^{R/I}(n):= l\left(\frac{R}{I^{[p^n]}}\right).$$ \end{definition} These concepts were introduced by Kunz in \cite{Ku}. The fact that the limit in the definition of $e_{HK}(M)$ exists was proved by Monsky in \cite{Mo}. Hilbert-Kunz functions and multiplicities have connections to the theory of tight closure and the classification of singularities, see \cite{Hu2}. Epstein and Yao in \cite{EY} generalize the definition of Hilbert-Kunz functions to the case $\mathrm{dim}(M)>0$ by using $0^{\mathrm th}$ local cohomology. The following is a special case of the definition in \cite{EY}, where a relative version is considered. \begin{definition} Let $M$ be a finitely generated $R$-module. The generalized Hilbert-Kunz function of $M$ is $$ f_{gHK}^M(n)= l\left(H_{\mathfrak{m}}^0(F^n(M))\right). $$ \end{definition} One drawback of this notion is that the limit $\displaystyle \lim_{n\rightarrow \infty} f_{gHK}^M(n)/p^{nd}$ that one would naturally want to use as the definition of the generalized Hilbert-Kunz multiplicity is not known to exist in general. One is thus forced to define $e^+_{gHK}(M)$ and $e^-_{gHK}(M)$ as the limsup and the liminf respectively. 
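To fix ideas, for monomial ideals the lengths in these definitions can be computed by counting standard monomials (monomials outside the ideal). The brute-force helper below is purely illustrative (two variables only, not part of any cited work); it recovers the classical fact that for $R=k[[x,y]]$ and $I=\mathfrak{m}=(x,y)$ one has $f_{HK}^{R/I}(n)=l(R/(x^q,y^q))=q^2=p^{2n}$, so $e_{HK}=1$.

```python
def frobenius_power_length(gens, q, box):
    """k-dimension of k[x,y]/I^{[q]} for a monomial ideal I.

    gens: generators of I as exponent pairs (a, b), meaning x^a y^b;
    I^{[q]} is generated by the q-th powers of these generators.
    Counts monomials x^a y^b divisible by no generator of I^{[q]};
    the count is finite iff I is m-primary, and box must exceed the
    staircase of I^{[q]} (illustration only, no input validation)."""
    frob = [(a * q, b * q) for a, b in gens]
    return sum(
        1
        for a in range(box)
        for b in range(box)
        if not any(a >= ga and b >= gb for ga, gb in frob)
    )

# I = (x, y): length q^2, so e_HK = 1.
# I = (x^2, xy, y^3): length 4q^2, so e_HK = 4 = l(R/I) in this case.
```

For the generalized function, by contrast, the module $H_{\mathfrak{m}}^0(F^n(M))$ admits no such elementary count, and the existence of the corresponding limit is a genuine question.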
This existence of the limit is proved in \cite{DS} for modules that have finite projective dimension on the punctured spectrum, with certain assumptions on the ring. A related problem that involves the $0^{th}$ local cohomology of Frobenius powers is the following long-standing conjecture, which will be referred to as the (LC) property in this note (the conjecture being that all rings of characteristic $p$ satisfy (LC)). We will need to use a more general version, where sequences of ideals other than Frobenius powers of a fixed ideal are involved. \begin{definition} ({\bf 1.}) We say that a sequence of ideals $J_q$ indexed by $q=p^n$ satisfies the (LC) property if there exists $N$ such that $$\displaystyle \mathfrak{m}^{Nq}H_{\mathfrak{m}}^0\left(\frac{R}{J_q}\right)=0 \ \forall q=p^n.$$ ({\bf 2.}) We say that the ring $R$ satisfies (LC) if for every ideal $J$ in $R$, the sequence of Frobenius powers $J^{[q]}$ satisfies (LC). \end{definition} (LC) was previously studied in connection with the question of whether tight closure commutes with localization (which is now known to be false, see \cite{Br2}), and specifically whether a localization of a ring in which all ideals are tightly closed will continue to have this property (which is still an open question). See the discussion following Prop. 4.16 in \cite{HH1} for the connection between (LC) and the problem of localization of tight closure. See also \cite{Ab} for some work in this direction. (LC) is known to hold in certain cases, such as when $R$ is a graded ring and $J_q=J^{[q]}$ where $J$ is a homogeneous ideal with $\mathrm{dim}(R/J)=1$ (see \cite{Hu}, \cite{Vr}). In \cite{DS} it is observed that if $R$ satisfies (LC) and countable prime avoidance, then $e^+_{gHK}(M)$ is finite for any finitely generated module $M$. The main result of this paper is a significant improvement of this observation. 
More precisely, we show that if $R$ satisfies (LC) and countable prime avoidance, then generalized Hilbert-Kunz functions of ideals can be expressed as linear combinations of classical Hilbert-Kunz functions (in particular, the generalized Hilbert-Kunz multiplicity exists, and can be expressed in terms of classical Hilbert-Kunz multiplicities of related ideals). Countable prime avoidance is a mild assumption, known to hold if the ring is complete, or if the residue field is uncountable, see \cite{Bu}. By way of motivation for this work, we point out that Brenner's construction of an irrational Hilbert-Kunz multiplicity in \cite{Br} involved generalized Hilbert-Kunz multiplicities in an essential way. Namely, Brenner first constructs a (non $\mathfrak{m}$-primary) ideal $I$ for which $e_{gHK}(R/I)$ is irrational, then he uses that to obtain (in a non-explicit manner) a finite length module with irrational Hilbert-Kunz multiplicity, and as a final step he obtains an $\mathfrak{m}$-primary ideal with irrational Hilbert-Kunz multiplicity. If one could check that (LC) holds for the Frobenius powers of the ideal $I$ obtained in the first step of Brenner's construction, our result could then be used to obtain a more direct and explicit route to $\mathfrak{m}$-primary ideals with irrational Hilbert-Kunz multiplicities. We recall some basic facts and definitions. \begin{definition} Let $(R, \mathfrak{m})$ be a local ring, and let $I \subset R$ be an ideal. Then $\displaystyle I^{sat}= \bigcup_n (I : \mathfrak{m}^n)$, and $\displaystyle H_{\mathfrak{m}}^0(\frac{R}{I})=\frac{I^{sat}}{I}$. \end{definition} The following facts are well-known. We include a proof for the reader's convenience. \begin{fact}\label{facts} {\rm a.} Let $I=Q_1 \cap Q_2 \cdots \cap Q_t$ be a primary decomposition of $I$. 
Assume that $Q_t$ is the $\mathfrak{m}$-primary component (possibly $Q_t=R$ if $\mathfrak{m}$ is not an associated prime), and $Q_1, \ldots, Q_{t-1}$ are primary to non-maximal prime ideals. Then $I^{sat} = Q_1 \cap Q_2 \cap \cdots \cap Q_{t-1}$. {\rm b.} If $s\in R$ does not belong to any associated prime ideal of $R/I$ except for the maximal ideal, then we have $(I:s) \subseteq I^{sat}$. If moreover $s\in \mathfrak{m}^n$ and $\displaystyle \mathfrak{m}^n H_{\mathfrak{m}}^0(\frac{R}{I})=0$, then $(I:s)= I^{sat}$. {\rm c.} If $R$ satisfies countable prime avoidance, and $J_q$ is a sequence of ideals that satisfies (LC), then there exists $s \in \mathfrak{m}$ such that $$ H_{\mathfrak{m}}^0(\frac{R}{J_q})= \frac{(J_q:s^q)}{J_q} \ \forall q=p^e.$$ \end{fact} \begin{proof} Let $P_i$ denote the unique associated prime of $R/Q_i$ for $1 \le i \le t-1$. a. If $x \in I^{sat}$, then $\mathfrak{m}^n x \subseteq Q_i$ for some $n\ge 1$, and all $1 \le i \le t-1$. Since $\mathfrak{m}^n \not\subseteq P_i$, and $Q_i$ is $P_i$-primary, it follows that $x \in Q_i$. Conversely, if $x \in Q_1 \cap Q_2 \cap \ldots \cap Q_{t-1}$, it follows that $Q_t x \subseteq I$. Since $Q_t$ is $\mathfrak{m}$-primary, we have $\mathfrak{m}^n \subseteq Q_t$ for some $n$, and therefore $\mathfrak{m}^n x \subseteq I$, which proves $x \in I^{sat}$. b. If $x \in (I:s)$, we have $sx \in Q_i$ for all $1 \le i \le t-1$. Since $s \notin P_i$, it follows that $x \in Q_i$, and thus we have $(I:s) \subseteq Q_1 \cap Q_2 \cap \ldots \cap Q_{t-1}= I^{sat}$. If $\displaystyle \mathfrak{m}^n H_{\mathfrak{m}}^0(\frac{R}{I})=0$, we have $\mathfrak{m}^n I^{sat}\subseteq I$, and therefore $s\in \mathfrak{m}^n$ implies $s I^{sat}\subseteq I$, or equivalently $(I:s) \supseteq I^{sat}$. c. By countable prime avoidance, we can pick $s_0$ outside of all associated primes of all the $J_q$, with the exception of the maximal ideal.
If $N$ is such that $\displaystyle \mathfrak{m}^{Nq}H_{\mathfrak{m}}^0(\frac{R}{J_q})=0$, then we let $s=s_0^N$ and observe that the conditions in (b) are satisfied when we use $s^q$ for $s$ and $J_q$ for $I$. \end{proof} \section{Main Result} \begin{theorem}\label{main_theorem} Assume that $R$ satisfies (LC) and countable prime avoidance. Let $I$ be an ideal with $\mathrm{dim}(R/I)=c$. Then the generalized Hilbert-Kunz function of $R/I$, $f_{gHK}^{R/I}$, can be expressed as a linear combination, with integer coefficients that do not depend on $I$, of Hilbert-Kunz functions of $\mathfrak{m}$-primary ideals. \end{theorem} \begin{lemma}\label{isomorphism} Assume that $(I:s)=(I:s^2)$ (note that this holds in particular when $I^{\mathrm{sat}}=(I:s)$). Then $$ \frac{(I:s)}{I}\cong \frac{(I:s)+(s)}{I + (s)} $$ \end{lemma} \begin{proof} Let $f$ denote the composition $\displaystyle \frac{(I:s)}{I} \hookrightarrow \frac{R}{I} \twoheadrightarrow \frac{R}{I+(s)}$ where the first map is inclusion and the second map is projection. Note that the assumption that $(I:s)=(I:s^2)$ implies that $f$ is injective. Indeed, if $x \in (I:s) \cap (I +(s))$, we can write $x=i+ ys$ with $i \in I$ and $y \in R$, and we have $ys \in (I:s)$, or $y \in (I:s^2) = (I:s)$, which implies $x \in I$. Therefore $\displaystyle \frac{(I:s)}{I} \cong \mathrm{im}(f)= \frac{(I:s) + (s)}{I+(s)}$. \end{proof} \begin{lemma}\label{length} Assume that $I^{\mathrm{sat}}=(I:s)$. Then \begin{equation}\label{length_eq} l\left( H_{\mathfrak{m}}^0\left(\frac{R}{I}\right)\right) = l\left(H_{\mathfrak{m}}^0\left(\frac{R}{I+(s)}\right)\right)-l\left(H_{\mathfrak{m}}^0\left(\frac{R}{(I:s)+(s)}\right)\right) \end{equation} \end{lemma} \begin{proof} Consider the short exact sequence $$ 0 \rightarrow \frac{(I:s)}{I} \stackrel{f}{\longrightarrow} \frac{R}{I+(s)} \longrightarrow \frac{R}{(I:s)+(s)} \rightarrow 0 $$ given by the proof of Lemma~\ref{isomorphism}.
Since $\displaystyle \frac{(I:s)}{I}=H_{\mathfrak{m}}^0(\frac{R}{I})$, it is a zero-dimensional module, and therefore $\displaystyle H_{\mathfrak{m}}^1(\frac{(I:s)}{I})=0$. This ensures that exactness is preserved when we apply $H_{\mathfrak{m}}^0 ( {\underline{\ \ } } ) $ to the above short exact sequence, and the conclusion follows from the additivity of length on short exact sequences. \end{proof} Before embarking on the proof of the main theorem, we illustrate it for low-dimensional cases, where we give explicit formulas for the linear combinations mentioned in the statement of the main theorem. \begin{prop}\label{dim1} Assume that $R$ satisfies (LC) and countable prime avoidance. Let $I \subset R$ be an ideal with $\mathrm{dim}(R/I)=1$. Then there exists $s \in \mathfrak{m}$ such that $I+(s)$, $I+(s^2)$ are $\mathfrak{m}$-primary, and $$f_{gHK}^{R/I}=2 f_{HK}^{R/I+(s)}-f_{HK}^{R/I+(s^2)}.$$ \end{prop} \begin{proof} Countable prime avoidance allows us to pick $s \in \mathfrak{m}$ which is not in any associated prime of any Frobenius power $I^{[q]}$ except for $\mathfrak{m}$. According to Fact~\ref{facts}, we can pick such an $s$ which satisfies $\displaystyle H_{\mathfrak{m}}^0(\frac{R}{I^{[q]}})=\frac{(I^{[q]}:s^q)}{I^{[q]}}$. Note that this choice of $s$ also implies that $I+(s)$ and $I+(s^2)$ are $\mathfrak{m}$-primary. Lemma~\ref{length} now gives \begin{equation}\label{lincomb} l\left(H_{\mathfrak{m}}^0\left( \frac{R}{I^{[q]}}\right)\right)= l\left(\frac{R}{I^{[q]}+(s^q)}\right)- l\left(\frac{R}{(I^{[q]}:s^q) +(s^q)}\right) \end{equation} The first term on the right hand side of equation (\ref{lincomb}) is $f_{HK}^{R/I+(s)}$. In order to deal with the second term, consider the map \begin{equation}\label{amap} \frac{R}{(I^{[q]}:s^q)+(s^q)} \rightarrow \frac{R}{I^{[q]}+(s^{2q})} \end{equation} given by multiplication by $s^q$. It is easy to check that this map is injective, and the cokernel is $\displaystyle \frac{R}{I^{[q]}+(s^q)}$.
The conclusion now follows using additivity of length on short exact sequences together with (\ref{lincomb}). \end{proof} \begin{prop}\label{dimension2} Assume $R$ satisfies (LC) and countable prime avoidance. Let $I\subset R$ be an ideal with $\mathrm{dim}(R/I)=2$. Then there exist $s, t \in \mathfrak{m}$ such that $$ f_{gHK}^{R/I}= 2f_{gHK}^{R/I+(s)}-2f_{gHK}^{R/I+(s^2, st)} + f_{gHK}^{R/I+(s^2, st^2)} $$ where the terms on the right hand side are generalized Hilbert-Kunz functions of one-dimensional ideals. \end{prop} \begin{proof} Pick $s \in \mathfrak{m}$ as in the proof of Proposition~\ref{dim1}, such that $\displaystyle H_{\mathfrak{m}}^0(R/I^{[q]})=\frac{ (I^{[q]}:s^q)}{I^{[q]}}$. Apply Lemma~\ref{length} to $I^{[q]}$. Note that the first term in equation (\ref{length_eq}) is $f_{gHK}^{R/I+(s)}$, a generalized Hilbert-Kunz function of a one-dimensional ideal. Now we calculate the second term. Observe that the family of ideals $(I^{[q]}:s^q)+(s^q)$ satisfies (LC), since applying $H_{\mathfrak{m}}^0( \underline{\ \ })$ to the map (\ref{amap}) shows that $H_{\mathfrak{m}}^0(R/(I^{[q]}:s^q)+(s^q))$ is a submodule of $H_{\mathfrak{m}}^0(R/I^{[q]}+(s^{2q}))$, and the latter local cohomology module is annihilated by $\mathfrak{m}^{Nq}$ for some $N$ that does not depend on $q$, by the assumption that the ring satisfies (LC). Thus, we can pick $t$ such that $$ H_{\mathfrak{m}}^0\left(\frac{R}{(I^{[q]}:s^q)+(s^q)}\right) = \frac{((I^{[q]}:s^q)+(s^q)):t^q}{(I^{[q]}:s^q) +(s^q)}, $$ and Lemma~\ref{length} gives \begin{equation}\label{second_term} l\left(H_{\mathfrak{m}}^0\left(\frac{R}{(I^{[q]}:s^q)+(s^q)}\right)\right)= l\left(\frac{R}{(I^{[q]}:s^q)+(s^q, t^q)}\right)- l\left(\frac{R}{((I^{[q]}:s^q)+(s^q)):t^q +(t^q)}\right) \end{equation} (note that the two quotients on the right hand side are zero-dimensional).
In order to express the two terms on the right hand side of equation (\ref{second_term}) in terms of Hilbert-Kunz multiplicities of zero-dimensional ideals, we consider the following two short exact sequences, where the leftmost maps are multiplication by $s^q$ and $t^q$ respectively: \begin{equation}\label{ses1} 0 \rightarrow \frac{R}{(I^{[q]}:s^q)+(s^q, t^q)} \rightarrow \frac{R}{I^{[q]}+(s^{2q}, s^qt^q)}\rightarrow \frac{R}{I^{[q]}+(s^q)}\rightarrow 0 \end{equation} \begin{equation}\label{ses2} 0 \rightarrow \frac{R}{((I^{[q]}:s^q)+(s^q)):t^q+(t^q)} \rightarrow \frac{R}{(I^{[q]}:s^q)+(s^q, t^{2q})}\rightarrow \frac{R}{(I^{[q]}:s^q)+(s^q, t^q)}\rightarrow 0 \end{equation} Since the leftmost terms in (\ref{ses1}) and (\ref{ses2}) are zero-dimensional, exactness is preserved when applying $H_{\mathfrak{m}}^0(\underline{\ \ })$. The two rightmost terms of the short exact sequence (\ref{ses1}) become $f_{gHK}^{R/I+(s^2, st)}$ and $f_{gHK}^{R/I+(s)}$ respectively, which are generalized Hilbert-Kunz functions of one-dimensional ideals. The two rightmost terms of the short exact sequence (\ref{ses2}) are zero-dimensional, and in order to evaluate their lengths we use the short exact sequences \begin{equation}\label{ses3} 0 \rightarrow \frac{R}{(I^{[q]}:s^q)+(s^q, t^{2q})} \rightarrow \frac{R}{I^{[q]}+(s^{2q}, s^q t^{2q})}\rightarrow \frac{R}{I^{[q]}+(s^q)}\rightarrow 0, \end{equation} and \begin{equation}\label{ses4} 0\rightarrow \frac{R}{(I^{[q]}:s^q)+(s^q, t^q)} \rightarrow \frac{R}{I^{[q]}+(s^{2q}, s^qt^q)}\rightarrow \frac{R}{I^{[q]}+(s^q)}\rightarrow 0 \end{equation} Exactness is preserved in (\ref{ses3}) and (\ref{ses4}) when applying $H_{\mathfrak{m}}^0( \underline{\ \ })$, and now the two rightmost terms become $f_{gHK}^{R/I+(s^2, st^2)}$ and $f_{gHK}^{R/I+(s)}$ in (\ref{ses3}), and $f_{gHK}^{R/I+(s^2, st)}$ and $f_{gHK}^{R/I+(s)}$ in (\ref{ses4}). The conclusion follows from the additivity of length on short exact sequences.
\end{proof} The proof of the main theorem will use the following notation: \begin{definition}\label{notation} Let $J, K\subset R$ and $x\in R$. We define $J*_0 xK:= J+xK$, and $J*_1 x:= (J: x) + (x)$. We will frequently write $J*_{\epsilon} xK$ with $\epsilon\in \{0, 1\}$. It will be understood that if $\epsilon=1$ then $K=R$, and $J*_1 xR$ just means $J*_1x$. When we write $J*_{\epsilon_1}x_1K_1*_{\epsilon_2}\cdots *_{\epsilon_s}x_sK_s$ it will be understood that the order of operations is from left to right. We will make the assumption that $K$ and $(x)$ are not contained in any associated prime ideal of $J$ except for $\mathfrak{m}$, and $\mathrm{dim}(R/J)>0$. Then we have $\mathrm{dim}(R/J*_{\epsilon} xK)=\mathrm{dim}(R/J)-1$. When we write $J*_{\epsilon_1}x_1K_1*_{\epsilon_2}\cdots *_{\epsilon_s}x_sK_s$, we will make the assumption that for all $i \le s$, $x_i$ and $K_i$ are not contained in any associated prime ideal of $J*_{\epsilon_1}x_1K_1*_{\epsilon_2}\cdots *_{\epsilon_{i-1}}x_{i-1}K_{i-1}$. \end{definition} \begin{lemma}\label{recursive} For all $c>0$ and all $(\epsilon_1, \ldots, \epsilon_c) \in \{0, 1\}^c$, there exist integers $d_{\epsilon_1, \ldots, \epsilon_c}$ such that if $J\subset R$ is an ideal with $\mathrm{dim}(R/J)=c$, the following hold: {\rm a.} We can pick elements $x_1, \ldots, x_c$ such that $$ l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J}\right)\right) = \sum d_{\epsilon_1, \ldots, \epsilon_c} l\left(\frac{R}{J*_{\epsilon_1}x_1*_{\epsilon_2} \ldots *_{\epsilon_c}x_c}\right) $$ where the summation is over all $(\epsilon_1, \ldots, \epsilon_c) \in \{0, 1\}^c$ and all the $J*_{\epsilon_1}x_1*_{\epsilon_2} \ldots *_{\epsilon_c} x_c$ are $\mathfrak{m}$-primary ideals. Assume moreover that $R$ satisfies countable prime avoidance and has the (LC) property.
Then: {\rm b.} We can pick $y_1, \ldots, y_c$ independent of $q$, such that \begin{equation}\label{linearcomb} l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J^{[q]}}\right)\right)=\sum d_{\epsilon_1, \ldots, \epsilon_c} l\left(\frac{R}{J^{[q]}*_{\epsilon_1}y_1^q*_{\epsilon_2} \ldots *_{\epsilon_c}y_c^q}\right) \ \forall q=p^e \end{equation} where the summation is over all $(\epsilon_1, \ldots, \epsilon_c) \in \{0, 1\}^c$ and all the $J^{[q]}*_{\epsilon_1}y_1^q*_{\epsilon_2} \ldots *_{\epsilon_c} y_c^q$ are $\mathfrak{m}$-primary ideals. {\rm c.} For every $s<c$, $\epsilon'_1, \ldots, \epsilon'_s\in \{0, 1\}$, $y_1, \ldots, y_s \in R$ and $K_1, \ldots, K_s$ ideals in $R$ satisfying the assumption in (\ref{notation}), we can pick $z_{s+1}, \ldots, z_c$ such that $$ l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J^{[q]}*_{\epsilon_1'}y_1^qK_1^{[q]}*_{\epsilon_2'} \ldots *_{\epsilon_s'}y_s^qK_s^{[q]}}\right)\right) = $$ $$ \sum d_{\epsilon_{s+1}, \ldots, \epsilon_c} l\left(\frac{R}{J^{[q]}*_{\epsilon'_1}y_1^qK_1^{[q]}*_{\epsilon'_2} \ldots *_{\epsilon'_s}y_s^qK_s^{[q]}*_{\epsilon_{s+1}}z_{s+1}^q *_{\epsilon_{s+2}} \ldots *_{\epsilon_c} z_c^q}\right) $$ where the summation is over all the choices of $(\epsilon_{s+1}, \ldots, \epsilon_c) \in \{0, 1\}^{c-s}$ and all the ideals appearing in the quotients on the right hand side are $\mathfrak{m}$-primary. \end{lemma} \begin{proof} (a). We pick $x_i$ recursively to be outside of all associated primes of $J*_{\epsilon_1}x_1 *_{\epsilon_2}\cdots *_{\epsilon_{i-1}}x_{i-1}$ except for the maximal ideal, for all $\epsilon_1, \ldots, \epsilon_{i-1} \in \{0, 1\}$. Note that the ideals $J':=J*_{\epsilon_1}x_1 *_{\epsilon_2}\cdots*_{\epsilon_{i-1}}x_{i-1}$ constructed this way have dimension $c-i+1$, and we may choose $x_i$ such that $\displaystyle H_{\mathfrak{m}}^0\left(\frac{R}{J'}\right)= \frac{(J':x_i)}{J'}$.
Lemma~\ref{length} shows that the length of this local cohomology module can be computed as $$ l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J'*_0x_i}\right)\right) - l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J'*_1x_i}\right)\right) $$ The claim now follows by induction on $c$. (b). and (c). Induct on $c$. Note that (LC) for the ideal $J$ gives that we can pick $x_1 = y_1^q$. Then the ideal $J^{[q]}*_0x_1= (J, y_1)^{[q]}$ is a Frobenius power of an ideal of dimension $c-1$, and by the inductive hypothesis we can pick $x_i = y_i^q$ ($i=2, \ldots, c$) that satisfy the desired conclusion for $J':=J^{[q]}*_0x_1$. As in the proof of Proposition (\ref{dimension2}), we note that the family of ideals $J_q'':=J^{[q]}*_1y_1^{q}$ satisfies (LC), since $H_{\mathfrak{m}}^0(R/J_q'')$ is a submodule of $H_{\mathfrak{m}}^0(R/J^{[q]}+(y_1^{2q}))$. Therefore we can pick $x_2=y_2^q$ such that $$ l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J_q''}\right)\right) = l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J''_q*_0x_2}\right)\right) - l\left(H_{\mathfrak{m}}^0\left( \frac{R}{J''_q*_1x_2}\right)\right) $$ (via Lemma (\ref{length})). We claim that we can pick $y_i$ recursively such that for all $i<c$ and all $\epsilon_1, \ldots, \epsilon_i \in \{0, 1\}$ we have $$ H_{\mathfrak{m}}^0\left(\frac{R}{J_{q, (\epsilon_1, \ldots, \epsilon_i)}}\right)= \frac{J_{q, (\epsilon_1, \ldots, \epsilon_i)}:y_{i+1}^q}{J_{q, (\epsilon_1, \ldots, \epsilon_i)}}, $$ where $J_{q, (\epsilon_1, \ldots, \epsilon_i)}:= J^{[q]}*_{\epsilon_1}y_1^q *_{\epsilon_2} \cdots *_{\epsilon_i}y_i^q$. Assume that all $y_j$ for $j\le i$ are picked. In order to be able to pick $y_{i+1}$, we need to show that the family of ideals $J_{q, (\epsilon_1, \ldots, \epsilon_i)}$ satisfies (LC) for every $\epsilon_1, \ldots, \epsilon_i \in \{0, 1\}$.
We claim more generally that for any elements $f_1, \ldots, f_N$, ideals $K_1, \ldots, K_N$, and $\epsilon_1, \ldots, \epsilon_N \in \{0, 1\}$, the family of ideals $J_{q, (\epsilon_1, \ldots, \epsilon_N)}:=J^{[q]}*_{\epsilon_1}f_1^qK_1^{[q]}*_{\epsilon_2}f_2^qK_2^{[q]}*_{\epsilon_3} \ldots *_{\epsilon_N}f_N^qK_N^{[q]}$ satisfies (LC). This will also justify the statement in (c). Let $j$ be the largest index with $\epsilon_j=1$ (we consider $j$ to be zero if all $\epsilon_j=0$). We prove the claim by induction on $j$. The case $j=0$ is clear from the assumption that $R$ satisfies (LC), since the ideals under consideration are Frobenius powers of a fixed ideal. Let $J'_q:= J^{[q]}*_{\epsilon_1} f_1^qK_1^{[q]}*_{\epsilon_2} \ldots *_{\epsilon_{j-1}}f_{j-1}^qK_{j-1}^{[q]}$, so that $$J_{q, (\epsilon_1, \ldots, \epsilon_N)}= (J'_q :f_j^q) + (f_j^q)+ f_{j+1}^qK_{j+1}^{[q]}+\ldots + f_N^qK_N^{[q]}.$$ Multiplication by $f_j^q$ gives an injection $$ \frac{R}{J_{q, (\epsilon_1, \ldots, \epsilon_N)} } \hookrightarrow \frac{R}{J'_q +(f_j^{2q})+ f_j^qf_{j+1}^qK_{j+1}^{[q]}+\ldots + f_j^qf_N^qK_N^{[q]}}. $$ The denominator of the second quotient can be described as $$J^{[q]}*_{\epsilon'_1}f_1'^q K_1^{[q]}*_{\epsilon'_2}f_2'^qK_2^{[q]}*_{\epsilon'_3} \ldots *_{\epsilon'_N}f_N'^qK_N^{[q]},$$ where $\epsilon_i'=\epsilon_i$ for all $i\ne j$, $\epsilon_j'=0$, $f_i'=f_i$ for $i<j$, and $f_i'=f_jf_i$ for $i \ge j$. The inductive hypothesis tells us that this family of ideals satisfies (LC), and therefore, since $H_{\mathfrak{m}}^0(\underline{ \ })$ is left exact, the family of ideals $J_{q, (\epsilon_1, \ldots, \epsilon_N)}$ does as well. \end{proof} Now we are ready to give the proof of the main theorem. \begin{proof} Let $\mathrm{dim}(R/J)=c$, and let $y_1, \ldots, y_c$ be such that equation (\ref{linearcomb}) from Lemma (\ref{recursive}) holds.
We claim that for all $(\epsilon_1, \ldots, \epsilon_c) \in \{0, 1\}^c$, the length $$ l\left(\frac{R}{J^{[q]}*_{\epsilon_1}y^q_1*_{\epsilon_2} y^q_2*_{\epsilon_3} \ldots *_{\epsilon_c}y^q_c}\right) $$ can be expressed as a linear combination (with integer coefficients that do not depend on $J$ and $q$) of generalized Hilbert-Kunz functions of ideals of dimension $\le c-1$. Due to Lemma (\ref{recursive}), this will show that $f_{gHK}^{R/J}$ is a linear combination of generalized Hilbert-Kunz functions of ideals of dimension $\le c-1$. Repeated applications of this fact will then allow us to reduce to Hilbert-Kunz functions of zero-dimensional ideals, as desired. More generally, we claim that if $y_1, \ldots, y_c$, $K_1, \ldots, K_c$ are such that the assumption in Definition (\ref{notation}) is satisfied, then $$ l\left(\frac{R}{J^{[q]}*_{\epsilon_1}y_1^qK_1^{[q]}*_{\epsilon_2} y^q_2K_2^{[q]} *_{\epsilon_3} \ldots *_{\epsilon_c}y^q_cK_c^{[q]}}\right) $$ can be expressed as a linear combination (with integer coefficients independent of $q$) of generalized Hilbert-Kunz functions of ideals of dimension $\le c-1$. We induct on $N(\epsilon_1, \ldots, \epsilon_c):=\sum_{i=1}^c \epsilon_i 2^{c-i}$. The base case is $\epsilon_1 = \ldots = \epsilon_c =0$, in which case the function under consideration is itself a Hilbert-Kunz function of a zero-dimensional ideal. For the inductive step, let $j$ be the largest index for which $\epsilon_j=1$. Let $J'_q= J^{[q]}*_{\epsilon_1}y_1^qK_1^{[q]} *_{\epsilon_2} \ldots *_{\epsilon_{j-1}}y_{j-1}^qK_{j-1}^{[q]}$. Note that $\mathcal{J}:= J^{[q]}*_{\epsilon_1}y_1^qK_1^{[q]}*_{\epsilon_2} y^q_2K_2^{[q]}*_{\epsilon_3} \ldots *_{\epsilon_c}y^q_cK_c^{[q]}$ can be written as $(J'_q:y_j^q) + (y_j^q)+ y_{j+1}^qK_{j+1}^{[q]} + \ldots + y_c^q K_c^{[q]}$.
Multiplication by $y_j^q$ gives an injection \begin{equation}\label{injection} \frac{R}{\mathcal{J}} \hookrightarrow \frac{R}{J'_q+(y_j^{2q})+y_j^q(y_{j+1}^qK_{j+1}^{[q]}+ \cdots + y_c^q K_c^{[q]})} \end{equation} The denominator of the second quotient above can be written as $J'_q*_0y_j^qK'^{[q]}$ where $K'=(y_j)+y_{j+1}K_{j+1}+ \ldots + y_cK_c$, so that $K'^{[q]}=(y_j^q)+y_{j+1}^qK_{j+1}^{[q]}+ \ldots + y_c^qK_c^{[q]}$. The cokernel of the map (\ref{injection}) is $\displaystyle \frac{R}{J'_q+(y_j^q)}$. Since $\displaystyle \frac{R}{\mathcal{J}}$ is zero-dimensional, applying $H_{\mathfrak{m}}^0$ to the resulting short exact sequence preserves exactness. Therefore, the length of $\displaystyle H_{\mathfrak{m} }^0(\frac{R}{\mathcal{J}})$ is \begin{equation}\label{conclusion} l\left(H_{\mathfrak{m}}^0\left( \frac{R}{J'_q *_0 y_j^q K'^{[q]}}\right)\right) - l\left(H_{\mathfrak{m}}^0\left(\frac{R}{J'_q*_0y_j^q}\right)\right) \end{equation} Now apply the last part of Lemma~\ref{recursive} to find $z_{j+1}, \ldots, z_c$ such that the two terms in (\ref{conclusion}) are linear combinations of $\displaystyle l \left(\frac{R}{J'_q*_0y_j^qK'^{[q]}*_{\epsilon'_{j+1}}z_{j+1}^q *_{\epsilon'_{j+2}}\ldots *_{\epsilon'_c}z_c^q}\right)$, and $\displaystyle l \left(\frac{R}{J'_q*_0y_j^q*_{\epsilon'_{j+1}}z_{j+1}^q *_{\epsilon'_{j+2}}\ldots *_{\epsilon'_c}z_c^q}\right)$ respectively, where $(\epsilon'_{j+1}, \ldots, \epsilon'_c)$ range through $\{0, 1\}^{c-j}$. We can think of the ideals in the denominators as $J^{[q]}*_{\epsilon_1'} z_1^q{K'}_1^{[q]}*_{\epsilon_2'} \cdots *_{\epsilon_c'}z_c^q{K'}_c^{[q]}$ where for $i<j$ we have $\epsilon_i'=\epsilon_i$, $z_i=y_i$, $K_i'=K_i$, for $i>j$ we have $K_i'=R$ (and $\epsilon_i=0$ by the choice of $j$), and for $i=j$ we have $\epsilon_j'=0$, $z_j=y_j$, $K_j'=K'$ (in the case of the first quotient), or $K_j'=R$ (in the case of the second quotient).
We have $$N(\epsilon_1', \ldots, \epsilon_c')=\sum_{i=1}^{j-1}\epsilon_i2^{c-i}+ \sum_{i=j+1}^c \epsilon'_i2^{c-i}\le $$ $$ \sum_{i=1}^{j-1}\epsilon_i2^{c-i}+\sum_{i=j+1}^c2^{c-i}= \sum_{i=1}^{j-1}\epsilon_i2^{c-i}+2^{c-j}-1= N(\epsilon_1, \ldots, \epsilon_c)-1$$ and the desired conclusion follows from the inductive hypothesis. \end{proof} \begin{obs} The proof of Theorem (\ref{main_theorem}) only uses the (LC) assumption for Frobenius powers of ideals of the form $J+y_1K_1 + \cdots + y_jK_j$, where $y_1, \ldots, y_c$ are such that equation (\ref{linearcomb}) in Lemma (\ref{recursive}) holds, and $K_1, \ldots, K_c$ are ideals generated by certain products of $y_1, \ldots, y_c$. In particular, we only need to assume (LC) for Frobenius powers of ideals containing $J$. Furthermore, if $R$ is graded and $J$ is a homogeneous ideal, then we can pick $y_1, \ldots, y_c$ to be homogeneous elements, and thus it suffices to assume (LC) for Frobenius powers of homogeneous ideals containing $J$. \end{obs}
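As a concrete sanity check on the one-dimensional formula of Proposition (\ref{dim1}), one can verify it by brute force for a monomial example. The sketch below is illustrative only (two variables, monomial ideals encoded as exponent pairs, helper names hypothetical): it takes $I=(x^2,xy)$ in $k[x,y]$ and $s=y$, which avoids the unique non-maximal associated prime $(x)$; here $(I^{[q]})^{sat}=(x^q)$, and a direct monomial count gives $l(H_{\mathfrak{m}}^0(R/I^{[q]}))=q^2$.

```python
def length_quotient(gens, box):
    """k-dimension of k[x,y]/J for a monomial ideal J given by
    exponent pairs (a, b); counts monomials divisible by no
    generator (box must dominate the staircase; illustration only)."""
    return sum(
        1
        for a in range(box)
        for b in range(box)
        if not any(a >= ga and b >= gb for ga, gb in gens)
    )

def dim1_formula_holds(q):
    """Check the dimension-one formula for I = (x^2, xy), s = y:
    l(H^0_m(R/I^{[q]})) = 2 l(R/(I^{[q]} + (s^q)))
                            - l(R/(I^{[q]} + (s^{2q}))).
    Here I^{[q]} = (x^{2q}, x^q y^q), its saturation is (x^q), and the
    left-hand side equals q^2 (monomials x^a y^b with q <= a < 2q and
    b < q)."""
    box = 8 * q
    Iq = [(2 * q, 0), (q, q)]
    lhs = q * q
    rhs = 2 * length_quotient(Iq + [(0, q)], box) \
        - length_quotient(Iq + [(0, 2 * q)], box)
    return lhs == rhs
```

In this example the two $\mathfrak{m}$-primary colengths are $2q^2$ and $3q^2$, so the right-hand side is $4q^2-3q^2=q^2$, as predicted.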
\section{Introduction} \label{sec:intro} Clouds are the largest source of uncertainty in weather prediction and climate science, and they remain a weak link in the modeling of the atmospheric circulation. This is rooted in the fact that clouds depend on physical and chemical processes over a huge range of scales, from the collisions of micron-sized droplets and particles to the airflow dynamics on scales of thousands of meters (\textcolor{black}{...}). Since ambiguities remain in the representation of clouds in climate models, more observations are needed. However, not all cloud types are equally relevant when we discuss important issues such as global warming. Due to their reflectivity, clouds cool the earth by around 12 °C, an effect largely caused by \textit{stratocumulus (warm)} clouds.
However, at the same time, they heat the earth by around 7 °C by trapping emitted radiation, an effect largely caused by cirrus clouds. This averages out to a net loss of 5 °C \cite{onln:cloud_climatology}. To understand why a cloud evolves as observed, and why heavy rainfall cannot be predicted precisely by sub-grid parametrization, one must know the evolution of the internal fluctuations and the sources of fluctuations (forces) from direct measurements. In order to determine the fluctuations and forces relevant to cloud dynamics, we need to measure quantities such as temperature, pressure, moisture (humidity), accelerations and velocity fields inside clouds. This can be achieved with a collection of simultaneous point-to-point direct measurements in different parts of the cloud, which provide precise point measurements and allow a general view of the cloud flow to be retrieved. Such observations are not directly available with current instrumentation and measurement techniques. Remote sensing platforms, either ground based (\textcolor{black}{...}), airborne (\textcolor{black}{...}), or spaceborne (\textcolor{black}{...}), are not particularly useful owing to the relatively long times required to complete a scan and the inability to penetrate clouds and precipitation \cite{markowski2017drifter}. Nowadays radars are the main source of observational information about clouds. They can provide exact information about the morphology of clouds, precipitation levels and liquid water content. Dual-Doppler observations can also provide information on velocity and rotation, and thus on the vorticity field. Direct numerical simulations can also provide insights for understanding the internal fluctuations and intermittency of clouds. However, they alone cannot provide the full picture, since numerical simulations can resolve only a small portion of a cloud ($\sim$1-10 meters, (\textcolor{black}{...})).
Resolving all physical and chemical processes inside clouds over a large portion of the cloud ($\sim$1-10 km, (\textcolor{black}{...})) is also computationally expensive. It is important to study small-scale fluctuations inside clouds and their clear-air counterparts within the sub-grid scale (usually the grid size is 10 km, but the state of the art is 2 km with $\sim$ 90 million core hours \cite{onln:cl_sim_prace_2020, hentgen_2019_cl_sim}) of existing climate simulations. This is because convective parameterization schemes in global climate model (GCM) simulations may under- or overestimate cloud effects \cite{zang2022_gcm}. That is why we need more realistic numerical simulations and in-field experiments: i) numerical simulations that can resolve the small-scale dynamics of clouds and their interaction with the surrounding clear air; ii) in-field experiments that can provide small-scale variations of the physical quantities (velocity, acceleration, pressure, humidity, temperature, etc.) from direct measurements. There are two different approaches to representing clouds and cloudiness in climate simulation models: \textbf{convection-parameterizing} and \textbf{convection-resolving}. Convection-parameterizing simulations overestimate the humidity (cloudiness) at low levels (close to the ground, 2 m above the ground, 850 hPa) and underestimate the humidity at mid-levels (700 hPa, 500 hPa). At upper levels (200 hPa), the two simulation approaches usually behave in the same way.
The high level of cloudiness at mid-level altitudes is due to strong and frequent updrafts, strong vertical mixing and favorable dynamical and microphysical conditions for the formation of mixed-phase clouds \cite{hentgen_2019_cl_sim}. This paper is about a notable gap in the ability to \textit{directly} observe \textit{fluctuations} of physical and chemical quantities inside mid-level (\textit{warm}) clouds. Studying the turbulent characteristics of clouds (intermittency, high energy levels in different parts of the cloud) by means of numerical simulations (...) and in-field measurements (...) has long been an important topic for the research community. In summary, we need new instrumentation and/or measurement techniques to provide reliable, real-time in-field observations inside clouds. Similar in-field experiments are difficult to find in the current application context, although some comparable instrumentation setups exist, such as \citet{swenson2019} and others (\textcolor{black}{...}). The advantage of such an in-field measurement system is threefold: (i) direct quantification of Lagrangian turbulent dispersion and diffusion from real, in-field measurements; (ii) tracking of small variations of physical quantities inside real clouds; (iii) a general understanding of the cloud dynamics from simultaneous measurements in different parts of the cloud. For this reason the COMPLETE project was submitted to H2020. The project was inspired by the experimental method introduced by L. F. Richardson (1926) \cite{richardson1926atmospheric}, an in-field experiment that has not been carried out since. The project is interdisciplinary and aims to close knowledge gaps in the understanding of cloud dynamics by combining skills from different areas. Within the project, numerical simulations, laboratory experiments and in-field experiments are combined; here we present our work on the in-field measurements.
For the in-field measurements, mini biodegradable radiosondes have been designed and developed. The mini radiosonde measures only 5x5 cm and weighs 7 grams without battery; it is carried by a helium balloon with a radius of 20 cm (much smaller than traditional weather balloons). Recently, a second version of the mini radiosonde has been developed, which measures 3.5x4 cm and weighs only around 3 grams; thus, smaller balloons can be used for future experiments. Each radiosonde includes a set of sensors: pressure, humidity, temperature, an IMU (Inertial Measurement Unit) and a GNSS receiver. With the help of the radiosondes we aim to create a new Lagrangian-based cloud fluctuation dataset. The generated dataset is needed to reduce the fragmentation of results and knowledge in this field. By providing more accurate data obtained by the radiosondes, we try to reduce the ambiguities and limitations of numerical simulation and modeling. In the Lagrangian reference system, the fluid flow properties are determined by tracking the motion and properties of individual fluid particles as they move in time. For example, the temperature is measured along the trajectory of a fluid particle as time passes. In this way, if we can track many fluid particles, we can infer the fluid properties over the whole domain (\textcolor{black}{...}). Sensors usually introduce high- and low-frequency faults. High-frequency faults may arise when the GNSS signals undergo multi-path errors; these occur when the GNSS signal is reflected off one or more surfaces before it reaches the receiver antenna. Low-frequency faults can be introduced by the IMU sensor readings \cite{art:aHighIntegrity}. This is known as sensor bias: a measurement offset present even when the sensor is not measuring anything. This bias offset can be removed by calibrating the IMU sensor output. The removal of bias does not, however, provide a perfect solution, because of the white noise introduced by the sensors.
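The bias-calibration step described above can be sketched as follows; the data are synthetic, and the bias and noise values are illustrative (the residual white noise is deliberately left to subsequent filtering, e.g., Kalman):

```python
import numpy as np

# Sketch of IMU bias removal: estimate the constant offset from a
# stationary recording and subtract it from later in-flight readings.
# Only the bias is removed here; the white noise remains.
rng = np.random.default_rng(42)
true_bias = 0.15                                      # m/s^2, unknown in practice
stationary = true_bias + rng.normal(0.0, 0.05, 2000)  # sensor at rest

bias_hat = stationary.mean()                          # calibration estimate

flight = true_bias + rng.normal(0.0, 0.05, 1000)      # in-flight samples
corrected = flight - bias_hat                         # bias-compensated readings
```

After calibration, `corrected` is (statistically) centered on the true signal, but its sample-to-sample scatter is unchanged, which is why a filter stage is still needed downstream.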
Further application of subsequent filters (e.g., Kalman) helps remove the effects of these errors from our measurements. In Section \ref{sec:system_description} we describe the measurement system. Then, in Section \ref{sec:result_discussion}, we discuss the traceability of the system, the quality of the obtained dataset and the validation against reference systems. In Section \ref{sec:field_tests}, we provide the results from the preliminary in-field experiment campaigns, and we conclude our discussion in Section \ref{sec:conclusion}. \section{Measurement system description}\label{sec:system_description} Getting the probes to observe relevant parts of the cloud over different ranges of scales during the cloud lifetime is challenging in terms of instrumentation setup. To accomplish this challenging task, the following measurement system was designed for the in-field experiments, see Figure \ref{fig:scheme}. The measurement system consists of the following building blocks: a set of radiosondes, ground stations and a post-processing machine. We aim to place a set of radiosondes inside warm clouds (or any other atmospheric environment), where each radiosonde can passively follow the fluid flow. Thus it is possible to probe the real dynamics of the surrounding fluid: cloud, when the balloon is inside one, or clear air when it is outside. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.58\textwidth} \includegraphics[width=\linewidth]{figures/in_field_measurement_concept_v3.pdf} \caption{context} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}[b]{0.39\textwidth} \centering \includegraphics[height=0.65\linewidth]{figures/radisonde_balloon.pdf} \caption{radiosonde} \includegraphics[height=0.4\linewidth]{figures/radioprobe.pdf} \caption{electronic board components} \end{subfigure} \caption{In-field measurement context.} \label{fig:scheme} \end{figure} Each radiosonde transmits its sensor readings to the ground stations using the LoRa radio transmission protocol.
LoRa is a relatively new proprietary communication technology that allows long-range communication while consuming very little power. It utilizes the license-free Industrial, Scientific and Medical (ISM) frequency bands to exchange information at low data rates. Ground stations receive data from the radiosondes and are connected to a post-processing machine, where all data are stored. To reduce data losses, the same data transmitted by a radiosonde can be received by different ground stations. The design of the radiosonde electronic board, the tests inside an environmental chamber and the initial performance evaluation of the radiosonde in field experiments were described in previous work by Miryam et al.\cite{miryam_sensors2021}. In Figure \ref{fig:scheme}(b) we can see the radiosonde design: it includes a biodegradable balloon and a radioprobe (electronic board). The balloon is filled with helium and the radioprobe is connected to a battery. They are attached together and can float (stay in the air) for multiple hours. The embedded electronics (microprocessor, radio module and sensors) can measure velocity, acceleration, pressure, temperature and humidity fluctuations in the surrounding environment. This configuration was selected as a result of the in-field experiment tests (see Sections \ref{sec:result_discussion} and \ref{sec:field_tests} for discussion). \subsection{Radioprobe} Figure \ref{fig:radioprobeversions} shows the current (red) and newly optimized (green) prototypes of the radioprobe. The new version of the radioprobe prototype is currently under hardware and software tests; thus, the focus of the discussion will be the current working prototype. The building blocks of the radioprobe board are illustrated in Figure \ref{fig:scheme}(c): a microcontroller, a power module, a radio transmission module, sensors and antennas.
\begin{figure} \centering \includegraphics[width=0.6\linewidth]{figures/radioprobe_versions} \caption{Current (red) and 2nd (green) prototype of the radiosonde board.} \label{fig:radioprobeversions} \end{figure} The \textit{microcontroller} is the data-processing and control unit: it controls the other units, acquires the sensor readings and executes function calls in an automated way on the device. The \textit{radio transmission module} of the radioprobe enables one-way wireless communication with the ground stations using radio frequency signals. \textit{PHT} (Pressure, Humidity, Temperature), \textit{IMU} (Inertial Measurement Unit) and \textit{GNSS} (Global Navigation Satellite System) \textit{sensors} provide readings of the physical quantities \cite{miryam_sensors2021}. These quantities, together with the sensor operating ranges, sample times and the functional unit providing them, are listed in Table \ref{tbl:sensor_ranges}. \begin{table}[h!] \centering \caption{Sensor operating ranges.} \begin{tabular}{l l l l} \hline Physical quantity & Range & Sample time & Device \\ [0.5ex] \hline\hline Pressure & [300, 1100] mbar & 4 s & PHT \\ Humidity & [0, 100] \% & 4 s & PHT \\ Temperature & [-40, 85] $^{\circ}$C & 4 s & PHT \\ Longitude & degrees & 4 s & GNSS \\ Latitude & degrees & 4 s & GNSS \\ Altitude & m & 4 s & GNSS \\ Acceleration & [-16, 16] g & 4 s & IMU \\ Course & [-1, 1] quats & 4 s & IMU \\ [1ex] \hline \end{tabular} \label{tbl:sensor_ranges} \end{table} The sensors were chosen based on their compact size and low power consumption. Furthermore, they are configured to work in energy-efficient modes. For example, the GNSS receiver by U-blox has a compact size and can be configured to operate in e-mode. In other studies, researchers exploited high-precision GNSS sensors \cite{swenson2019}, (\textcolor{black}{...}); however, such sensors consume more power, which is crucial in our application context.
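Because packet size drives LoRa time-on-air, the per-sample payload must stay compact. As a purely hypothetical illustration (the actual COMPLETE packet layout is not specified here), a fixed-size binary encoding of one sample covering the fields of Table \ref{tbl:sensor_ranges} could look like this:

```python
import struct

# Hypothetical layout: uint32 timestamp followed by 13 little-endian
# float32 fields (position, PHT readings, acceleration, course quaternion).
# This layout is invented for illustration only.
FMT = "<I13f"

def pack_sample(ts, lat, lon, alt, p, h, t, ax, ay, az, qw, qx, qy, qz):
    """Serialize one radiosonde sample into a fixed-size byte string."""
    return struct.pack(FMT, ts, lat, lon, alt, p, h, t,
                       ax, ay, az, qw, qx, qy, qz)

payload = pack_sample(1623240000,            # timestamp
                      44.53, 7.62, 386.0,    # latitude, longitude, altitude
                      968.2, 55.3, 24.1,     # pressure, humidity, temperature
                      0.01, -0.02, 0.99,     # acceleration components [g]
                      1.0, 0.0, 0.0, 0.0)    # course quaternion
print(len(payload))   # 56 bytes
```

A 56-byte payload keeps each 4-second sample in a single short LoRa frame; float32 precision is a deliberate trade-off against airtime.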
Moreover, the GNSS sensor provides compact PVT (Position, Velocity and Time) navigation information through a proprietary protocol \cite{docs:ubx_rec_desc}, which cannot be provided in a single sensor reading using the traditional NMEA protocol \cite{onln:nmea}. \subsection{Biodegradable balloon and stable floating} The radiosonde system should float at an almost constant altitude during the experiments. To float at a fixed altitude, the balloon volume must remain relatively constant during flight \cite{basso2020,Yajima2009}. Therefore, the balloons used in our experiments were made from non-elastic materials. Furthermore, to minimize the environmental impact of the radiosonde system, the electronic components and balloon material used should be as biodegradable as possible. The material characteristics of the balloon, the processing methods and the polymer coatings were studied within the COMPLETE project by Basso et al. \cite{basso2020}. Green polymers, such as Mater-Bi and PLA, were studied and compared with materials used for traditional weather balloon production, such as latex and Mylar. The properties of the above-mentioned materials were examined in laboratory experiments in collaboration with IIT Genoa. The main properties of interest are the \textit{tensile strength, hydrophobicity, helium permeability, and resistance to variations} of the surrounding temperature and humidity. As a result of these experiments, it was concluded that Mater-Bi with applied coatings was the best fit for the above properties \cite{basso2020}. For the recent in-field experiments, spherical balloons (R = 20 cm) were made from store-bought Mater-Bi bags. The selected Mater-Bi material has a thickness of 20 $\mu$m and a density of 1.24 g cm$^{-3}$, thinner than in the previous studies (30 $\mu$m) carried out by Basso et al.\cite{basso2020}. Thus, the balloon mass was reduced by a factor of 1.5, which in turn reduced the overall payload budget (Eq. \ref{eq:volballoon}).
The balloon dimensions were identified based on the weight of the radiosonde electronic board with battery and on the atmospheric parameters at the target floating altitude (see Table \ref{tbl:air_density}). For stable floating at a fixed altitude, the volume of the balloon should satisfy \begin{equation} V_b = \frac{m_{total}}{\rho_a (1- M_g/M_a)} = \frac{m_r + m_b}{\rho_a (1- M_g/M_a)}, \label{eq:volballoon} \end{equation} where $m_r$ is the mass of the radioprobe with battery and connections, $m_b$ is the mass of the balloon, $\rho_a$ is the air density at a given altitude, and $M_a$ and $M_g$ are the molar masses of air and of the gas inside the balloon. Here $m_b = S\Delta d \rho_m = 4 \pi R^2\ \Delta d\, \rho_m$, where $S$ is the surface area of the spherical balloon with radius $R$, and $\Delta d$ and $\rho_m$ are the thickness and density of the Mater-Bi material. \begin{table}[h!] \centering \caption{Standard atmospheric parameters for the possible operating altitude range of the radiosonde. Here, altitude is given as height above sea level, T is the temperature, P is the pressure and $\rho_a$ is the air density.} \begin{tabular}{l l l l} \hline \rule{0pt}{2.6ex} Altitude [m] & T [K] & P [hPa] & $\rho_a$ [$kg/m^3$] \\ \hline 0 & 288 & 1013 & 1.22 \\ 500 & 285 & 950 & 1.17 \\ 1000 & 282 & 900 & 1.11 \\ 1500 & 278 & 850 & 1.06 \\ 2000 & 275 & 795 & 1.01 \\ 2500 & 272 & 748 & 0.95 \\ 3000 & 269 & 701 & 0.90 \\ \hline \end{tabular} \label{tbl:air_density} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.7\linewidth]{figures/mass_budget.png} \caption{Total payload budget with respect to the operating altitude. Each line represents the variation of the payload budget along altitude for given balloon dimensions: $R$ is the radius of the spherical balloon and $V$ is the volume in liters. As can be seen from Table \ref{tbl:air_density}, the air density changes with altitude, and thus so does the lifting force.
Horizontal lines show the weight of the radiosonde: the current version (17.5 g), the second optimized version (8.5 g) and an intermediate value (13 g). This weight includes the battery and all equipment needed to attach the radiosonde to the balloon. A detailed breakdown of the radiosonde weight is given in Table \ref{tbl:probeweight}.} \label{fig:weightvsvol} \end{figure} Figure \ref{fig:weightvsvol} shows the relationship between the total liftable payload (excluding the balloon weight, $m_b$) and the floating altitude for fixed balloon dimensions. In the current design, the weight of the radioprobe with battery and connections, $m_r$, is 17.5 grams (see Table \ref{tbl:probeweight} for details). It can be lifted up to 1725 meters above sea level with a 20 cm radius balloon, up to 2650 meters with a 21 cm radius balloon, and so on. The second version of the prototype (8.5 grams) allows even smaller balloons and less gas (e.g., helium) to reach the same floating altitude. \begin{table}[h!] \centering \caption{Radiosonde payload weight.} \begin{tabular}{l l l} \hline \rule{0pt}{2.6ex} Part & \multicolumn{2}{c}{Mass [grams]}\\ \rule{0pt}{2.6ex} & Current design & New design \\ \hline Radioprobe & 7 & 3 \\ Battery & 8 & 3 \\ Connections & 2.5 & 2.5 \\ Balloon & 12.5 (R=20 cm) & 9 (R=17 cm) \\ \hline \rule{0pt}{2.6ex} Total & 30 & 17.5 \\ \hline \end{tabular} \label{tbl:probeweight} \end{table} \subsection{Ground station and network architecture} A LoRa-based wireless sensor network (WSN) concept was adopted for the radiosonde network. A star architecture was used, where each radiosonde is connected to the ground receiver station with a point-to-point link. The feasibility analysis of the selected network architecture was carried out in different application scenarios \cite{bertoldo2018feasibility, bertoldo2018urbannoisy, paredes2019propagation}. The first in-field tests of the network architecture in the current application context were presented in \cite{miryam_sensors2021}.
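Equation (\ref{eq:volballoon}) can be checked numerically. The sketch below uses the values quoted above ($\Delta d$ = 20 $\mu$m, $\rho_m$ = 1.24 g cm$^{-3}$, $m_r$ = 17.5 g) together with the air densities of Table \ref{tbl:air_density}; the molar masses of helium (4.00 g/mol) and dry air (28.97 g/mol) are standard values assumed here:

```python
import math

M_G, M_A = 4.00, 28.97    # molar masses of helium and dry air [g/mol]
DELTA_D = 20e-6           # balloon film thickness [m]
RHO_M = 1240.0            # Mater-Bi density [kg/m^3] (1.24 g/cm^3)

def balloon_mass(R):
    """Film mass of a spherical balloon, m_b = 4*pi*R^2 * dd * rho_m [kg]."""
    return 4.0 * math.pi * R**2 * DELTA_D * RHO_M

def required_air_density(R, m_r):
    """Air density at which a balloon of radius R floats with payload m_r [kg]."""
    V_b = 4.0 / 3.0 * math.pi * R**3
    return (m_r + balloon_mass(R)) / (V_b * (1.0 - M_G / M_A))

# Standard-atmosphere air density vs altitude (Table of the paper)
ALT_RHO = [(0, 1.22), (500, 1.17), (1000, 1.11), (1500, 1.06),
           (2000, 1.01), (2500, 0.95), (3000, 0.90)]

def float_altitude(rho):
    """Linearly interpolate the altitude where air density equals rho."""
    for (a0, r0), (a1, r1) in zip(ALT_RHO, ALT_RHO[1:]):
        if r1 <= rho <= r0:
            return a0 + (a1 - a0) * (r0 - rho) / (r0 - r1)
    return None

m_b = balloon_mass(0.20)                          # balloon film mass [kg]
rho = required_air_density(0.20, 0.0175)          # 17.5 g payload
print(round(m_b * 1000, 1), round(float_altitude(rho)))  # 12.5 1725
```

The computed film mass ($\approx$12.5 g) and ceiling ($\approx$1725 m for the 17.5 g payload on a 20 cm balloon) reproduce the figures quoted in the text and in Table \ref{tbl:probeweight}.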
In most cases, LoRa-based WSNs are deployed within a LoRaWAN network. In this work, however, the LoRa protocol is used to create an ad hoc private network and to adapt the technology to the working scenario \cite{miryam_sensors2021}. Therefore, the commercial off-the-shelf LoRa-based transceiver module RFM95 from HopeRF was used. It is a module featuring long-range spread-spectrum communication links and high immunity to interference whilst optimizing power use \cite{miryam_sensors2021}. This module allows transmission power from 5 dBm (3.16 mW) to 20 dBm (100 mW), although according to the regulations released by the European Telecommunications Standards Institute (ETSI), the maximum power allowed in the European area is 14 dBm (25.12 mW) \cite{onln:ETSI_EN_regulation}. \subsection{Data acquisition and processing} The data-processing flow of the radioprobe can be seen in Figure \ref{fig:probeworkingprinciple}. The flow consists of steps performed by the radioprobe (transmitter) and by the ground station (receiver). Part of the processing is done directly on the transmitter side, while the more power- and time-consuming part is done on the receiver side with the help of the post-processing machine. \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{figures/probe_working_principle} \caption{Radiosonde system processing flow.} \label{fig:probeworkingprinciple} \end{figure} As can be observed from Figure \ref{fig:probeworkingprinciple}, the sensor data are processed by an AHRS (Attitude and Heading Reference System) filter before being sent to the ground station. The AHRS filter acquires readings from the 9-DOF IMU sensor (3x accelerometer, 3x gyroscope and 3x magnetometer) and provides the course (orientation) of the radioprobe as output. In order to remove possible errors introduced by the sensor readings, the AHRS filter also uses sensor calibration data \cite{madgwick2011}.
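The predict/update fusion of IMU accelerations with intermittent GNSS position fixes used in this processing flow can be sketched with a toy one-dimensional Kalman filter; all noise parameters and the simulated signal below are illustrative, not the radiosonde's actual tuning:

```python
import numpy as np

# Toy 1-D Kalman filter: IMU accelerations drive the predict step,
# sparse GNSS position fixes drive the update step.
dt = 0.1                                   # IMU sample period [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for (pos, vel)
B = np.array([[0.5 * dt**2], [dt]])        # acceleration input matrix
H = np.array([[1.0, 0.0]])                 # GNSS observes position only
Q = 1e-3 * np.eye(2)                       # process noise covariance
R = np.array([[4.0]])                      # GNSS position noise variance

x = np.zeros((2, 1))                       # state estimate [pos; vel]
P = np.eye(2)                              # estimate covariance

def predict(a):
    """Propagate the state using an IMU acceleration reading."""
    global x, P
    x = F @ x + B * a
    P = F @ P @ F.T + Q

def update(z):
    """Correct the state with a GNSS position fix z."""
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

# Simulated segment: constant 1 m/s^2 acceleration, one GNSS fix every
# 40 IMU samples (i.e. a 4 s GNSS "outage" bridged by dead reckoning).
rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 0.0
for k in range(400):
    true_vel += 1.0 * dt
    true_pos += true_vel * dt
    predict(1.0 + rng.normal(0.0, 0.05))             # noisy IMU reading
    if k % 40 == 39:
        update(true_pos + rng.normal(0.0, 2.0))      # noisy GNSS fix
```

Between fixes the filter dead-reckons on IMU data alone, which is exactly the power-saving behavior described below for the radiosonde.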
IMU sensor readings are provided with respect to the body frame (xyz) of the radioprobe. These readings can be translated into the local experiment frame (X$_e$Y$_e$Z$_e$) using the orientation data from the AHRS filter. Furthermore, acceleration data represented in the local experiment frame can be helpful to obtain positioning information during GNSS outages. The acceleration data are fused with the GNSS sensor data by means of a Kalman filter, which has two different operating modes: \textit{predict} and \textit{update} \cite{Kalman1960ANA}. In predict mode, we use the IMU data to provide position information with respect to the last reference position. As soon as GNSS data are available, we update the reference position. In this way, we can have position information during GNSS outages. Since the GNSS sensor consumes relatively more power than the IMU and the other sensors, this approach also helps to reduce power consumption. \section{Results and discussion}\label{sec:result_discussion} As stated in the previous sections, preliminary results were presented in the previous works by Basso et al.\cite{basso2020} and Miryam et al.\cite{miryam_sensors2021}. However, in those works, not all parts of the measurement system were tested and validated with extensive in-field campaigns. Furthermore, certain post-processing algorithms can only be applied once proper in-field experiments have been carried out. In the following subsections, the recently obtained results are described as a set of reached milestones. The measurement system was compared with and validated against traditional instrumentation. The types of experiments that can be carried out with the proposed measurement system are: (i) fixed-point measurements at ground level; (ii) vertical profiling measurements of the atmosphere; (iii) Lagrangian tracking measurements with a cluster of radiosondes. \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\linewidth]{figures/rs_conf_a.png} \caption{} \end{subfigure} \hspace{0.1\textwidth} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\linewidth]{figures/rs_conf_b.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{figures/conf_A_real.png} \caption{} \end{subfigure} \hspace{0.02\textwidth} \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\linewidth]{figures/radioprobe_with_battery} \caption{} \end{subfigure} \caption{Two different configurations of the radiosonde. (a) Configuration A: the radioprobe board is \textbf{outside} the balloon. (b) Configuration B: the radioprobe board is in a pocket \textbf{inside} the balloon. (c) Radiosonde in configuration A, attached to the ground with a thread. (d) Radioprobe board with battery.} \label{fig:inr1_conf} \end{figure} \subsection{Pre-launch calibration and fixed point measurements} The in-field tests started with testing various configurations of the radioprobe and validating the sensor measurements against fixed-point measurements at ground level. Figure \ref{fig:inr1_conf} illustrates the two configurations, which were tested and validated against a reference station at the INRIM campus. \begin{figure}[ht!]
\centering \begin{subfigure}{0.4\textwidth} \caption*{Calibration 1, without balloons} \includegraphics[width=\linewidth]{figures/inr1_rawcal1_pres} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \caption*{Calibration 2, with balloons} \includegraphics[width=\linewidth]{figures/inr1_rawcal2_pres} \caption{} \end{subfigure} \vspace{0.001\textheight} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal1_hum} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal2_hum} \caption{} \end{subfigure} \vspace{0.001\textheight} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal1_temp} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal2_temp} \caption{} \end{subfigure} \caption{Pressure, humidity and temperature readings from radiosondes in the two different configurations. (a,c,e) First sensor readings without balloons. (b,d,f) Sensor readings after attaching the balloons. Conf. A: radioprobe \textbf{outside} the balloon. Conf. B: radioprobe \textbf{inside} the balloon.} \label{fig:inr1_pht} \end{figure} Both configurations were tested before and after attaching the balloons. The second configuration (\textit{conf. B}: radioprobe inside the balloon) can introduce unintended biases into the sensor readings. In Figure \ref{fig:inr1_pht} we compare the pressure, humidity and temperature readings from the two configurations. The sensor readings are also compared with the fixed Vaisala station, which includes an RS41 probe. The reference Vaisala station is pre-calibrated and provides temperature, pressure and humidity readings all day long at 1-minute intervals.
In the first period of the experiment (panels a, c, e), it can be seen that the sensors take some time ($\sim$ 10 minutes, from 11.10 to 11.20) to warm up and catch up with the Vaisala station readings. This is common behavior of MEMS sensors, particularly atmospheric measurement sensors (\textcolor{black}{...}). After attaching the radioprobes to the balloons, the readings from configuration B started to show mismatches with the reference station (panels b, d, f), while the readings of configuration A aligned better with the reference station measurements, particularly for pressure and temperature. Some small fluctuations in the temperature and humidity readings with respect to the fixed station can, however, be due to the positioning and movement of our probes around the station. This experiment helped to validate the proper configuration of the radiosonde and to quantify the sensor biases and warm-up times. \subsection{Vertical atmospheric profiling measurements} The second dual-sonde launch experiment was held in collaboration with the Regional Agency for the Protection of the Environment (ARPA) of the Piedmont region, on June 9, 2021 at Levaldigi Airport, Cuneo, Italy. The experiment site is equipped with an automatic sounding system, where ARPA-Piemonte launches a radiosonde twice a day for atmospheric profiling measurements. In the first dual-sonde launch experiment, we had observed interference problems with the GNSS sensor \cite{miryam_sensors2021}. At that time, the radioprobe board was attached directly to the case of the Vaisala RS-41 probe. To resolve this issue, during the second launch the radioprobe was attached to the reference Vaisala probe with an 80 cm offset. \begin{figure}[bht!]
\centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/num_packet_time_l.png} \caption{} \end{subfigure} \hspace{0.001\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/num_packet_alt_l.png} \caption{} \end{subfigure} \caption{Packet transmission test for long distances at Levaldigi Airport with ARPA, Piedmont. (a) Number of packets received in each minute, together with a moving average. (b) Number of packets along altitude levels, bin size = 400 meters.} \label{fig:arpa_transmission} \end{figure} During the experiment, data transmission continued for around 1 hour, until the radioprobe reached an altitude of almost 9 km and \hl{13 km in distance}. We received packets every 3-4 seconds, and the performance at our target altitudes (2-3 km) is also promising. In Figure \ref{fig:arpa_transmission}, we can see the number of packets received in each minute (panel a) and at a given altitude (panel b) during the first 25 minutes of the launch. The original goal is to reach a 1 Hz transmission rate; however, with the current computational parameters of the radioprobe and the data packet size, this is difficult to achieve. Moreover, the current prototype of the receiver station does not support multi-channel LoRa connections. The design of the new multi-channel receiver station is expected to solve this issue. The new station, which is under development, can receive multiple connections in each channel, as also done in \cite{swenson2019}. In this way, the receiver station can simultaneously receive data packets from 10-20 radiosondes without packet losses due to collisions. \begin{figure}[bht!]
\centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_cmp_latlong_map.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_cmp_alt_time.png} \caption{} \end{subfigure} \caption{GNSS positioning measurements of the COMPLETE radioprobe and comparison with the Vaisala RS-41 probe. (a) Map plot of longitude and latitude readings. (b) Altitude readings; the inset is plotted for better comparison.} \label{fig:arpa2_gnss_lla} \end{figure} Figure \ref{fig:arpa2_gnss_lla} shows the comparison of the GNSS sensor measurements. It can be seen that the raw GNSS readings of longitude, latitude and altitude already provide quite accurate results with respect to the reference system, prior to applying any filters. \begin{figure}[bht!] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_vel_comp.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_wind_speed.png} \caption{} \end{subfigure} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_wind_speed_rt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_wind_speed_spectrum_v2.png} \caption{} \end{subfigure} \caption{Velocity measurements of the radioprobe. (a) Velocity components and their magnitude. (b) Wind speed comparison from both probes. For the wind speed computation, only the north and east components of the velocity were used. (c) Wind speed from the COMPLETE radioprobe: raw measurements (blue), dataset resampled at regular 4-second intervals (orange) and the reference dataset from the Vaisala probe. (d) Power spectrum of the wind speed dataset of the COMPLETE radioprobe.
Besides the raw spectrum (blue), the moving average of the spectrum values (orange) and two trend lines (yellow and violet) are provided for comparison. The frequency range was taken based on the Nyquist frequency, which is half of the sampling frequency, $f_s/2 = 0.125$ s$^{-1}$.}
\label{fig:arpa2_gnss_vel}
\end{figure}
The GNSS sensor can also provide velocity readings in the north, east and down directions, as plotted in Figure \ref{fig:arpa2_gnss_vel}a. The horizontal wind speed was computed from the north and east velocity components and was compared with the horizontal wind speed readings of the RS-41 probe, see Fig. \ref{fig:arpa2_gnss_vel}b. The wind speed was further analyzed with the FFT (Fast Fourier Transform) to obtain preliminary results on the power spectra of the fluctuations. To compute the spectra, a 30-minute wind speed dataset was resampled with a 4 s time step (see Fig. \ref{fig:arpa2_gnss_vel}c), which gives a frequency range starting from 5$\cdot 10^{-4}$ s$^{-1}$, a sampling frequency of 0.25 s$^{-1}$ and a Nyquist frequency of 0.125 s$^{-1}$ (2$\pi$/8 = $\pi$/4 rad/s), see Fig. \ref{fig:arpa2_gnss_vel}d. The same kind of analysis can be performed on the datasets of vertical velocity and temperature. The power spectrum of the vertical velocity can be used to identify a cut-off point at the Brunt-Vaisala frequency, while the vertical temperature profile (Fig. \ref{fig:arpa_pht}e) can be used to derive a complete profile of the Brunt-Vaisala frequency along the altitude\cite{nath2010turbulencecharacteristics, jaiswal2020gpsradiosonde}.
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_pres_time_alt.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_delta_temp.png} \caption{} \end{subfigure}
\vspace{0.001\textheight}
\begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_hum_time_alt.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_delta_hum.png} \caption{} \end{subfigure}
\vspace{0.001\textheight}
\begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_temp_time_alt.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_delta_temp.png} \caption{} \end{subfigure}
\caption{Pressure, humidity and temperature readings from the radiosondes. (a, c, e) Comparison between the COMPLETE and Vaisala probes. (b, d, f) Differences between the two measurements. Dashed lines highlight the absolute accuracy ranges of pressure ($\pm$1 hPa), humidity ($\pm$3 \%) and temperature ($\pm$1 $^o$C)\cite{docs:pht_datasheet}.}
\label{fig:arpa_pht}
\end{figure}
The plots in Figure \ref{fig:arpa_pht} show that the radioprobe experienced some biases during the launch with respect to the Vaisala RS-41 probe. The biases are evident especially for the humidity and temperature readings, mainly due to heating and solar radiation on a sunny day. Inside clouds, however, such heating-induced biases are unlikely to be noticeable. {Furthermore, the sensor datasheet suggests that the air-flow towards the vent-hole of the PHT sensor has to be engineered in such a way that a sufficient air exchange between inside and outside is possible. This aspect was already considered during the PCB board design, the tests in the environmental chamber and the field experiments} \cite{miryam_sensors2021}.
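The slow response of compact PHT sensors is commonly reasoned about with a first-order lag model, in which the output covers a fraction $1 - e^{-t/\tau}$ of a step change after time $t$. A minimal sketch, with illustrative humidity values and a 1 s time constant (not measured figures):

```python
import math

def first_order_response(t, start_value, final_value, tau=1.0):
    """First-order lag: the output covers a fraction (1 - exp(-t/tau))
    of the step from start_value to final_value after time t.  At
    t = tau, that fraction is 1 - 1/e, i.e. about 63%."""
    return start_value + (final_value - start_value) * (1.0 - math.exp(-t / tau))

# Illustrative step from 40% RH to 80% RH with a 1 s time constant:
rh_after_1s = first_order_response(1.0, 40.0, 80.0, tau=1.0)  # ~65.3 % RH
```

With this model, one time constant corresponds exactly to the "63\% of a step change" figure used in sensor datasheets.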
{However, it is believed that the second board design will further improve the humidity measurements. Due to the slow response time, it is also suggested to use low data rates for atmospheric observation applications. The response time of the device for humidity measurements is 1 second to reach 63\% of a step change, provided an air-flow velocity of approximately 1 m/s}\cite{docs:pht_datasheet}. {For this purpose, the sensor's response to environmental changes was tested and validated inside a Kambic KK190 CHLT climatic chamber at the Applied Thermodynamics Laboratory of the Italian National Metrology Institute (INRiM)}\cite{miryam_sensors2021}. It is worth mentioning that the PHT sensor, like the other sensor components, was selected because of its compact size, low-power and low-cost characteristics.
\section{In-field measurements with the cluster of radiosondes}\label{sec:field_tests}
In the previous section, tests and validation results were presented with respect to traditional fixed and vertical profiling radiosondes. Here, we present preliminary results of simultaneous measurements from a cluster of radiosondes.
\subsection{Multiple tethered radiosonde setup}
The tests started with experiments with 5 tethered radiosondes at INRIM, as shown in Figure \ref{fig:inr2_setup}.
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.75\textwidth} \includegraphics[width=\linewidth]{figures/inr2_photo1.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.75\textwidth} \includegraphics[width=\linewidth]{figures/inr2_photo2.png} \caption{} \end{subfigure}
\caption{Experiment with multiple tethered radiosondes at INRIM. 5 radiosondes were prepared with 2 ground stations for this experiment. 2 radiosondes are highlighted with red and black colors for tracking them with a video camera.}
\label{fig:inr2_setup}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_pres_cal.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_raw_pres.png} \caption{} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_hum_cal.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_raw_hum.png} \caption{} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_temp_cal.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_raw_temp.png} \caption{} \end{subfigure}
\caption{Pressure, humidity and temperature measurements in two phases of the experiment: calibration and fluctuation of the radiosondes, which are tethered to the ground. (a, c, e) PHT readings during calibration, where we removed the systematic bias offsets. (b, d, f) Sensor readings during the fluctuation phase of the radiosondes.}
\label{fig:inr2_pht}
\end{figure}
\subsection{Validation of position with the stereo vision}\label{sec:pos_validation}
The integration of gyroscope measurement errors leads to an accumulating error in the calculated orientation. Therefore, gyroscopes alone cannot provide an absolute measurement of orientation. An accelerometer and a magnetometer measure the earth's gravitational and magnetic fields, respectively, and thus provide an absolute reference of orientation. However, they are likely to be subject to high levels of noise; for example, accelerations due to motion corrupt the measured direction of gravity. The task of an orientation filter is to compute a single estimate of orientation through the optimal fusion of gyroscope, accelerometer and magnetometer measurements\cite{madgwick2011}.
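As a minimal illustration of such sensor fusion, a one-axis complementary filter blends the integrated gyroscope rate with the accelerometer-derived angle. This is a simplified stand-in for the quaternion-based AHRS fusion, not the actual filter implementation; the bias and gain values are illustrative:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis complementary filter: integrate the gyroscope for
    short-term accuracy and let the accelerometer-derived angle slowly
    pull the estimate back, bounding the drift.  alpha close to 1 trusts
    the gyroscope; (1 - alpha) weights the absolute reference."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: a gyroscope with a 0.5 deg/s bias on a stationary probe,
# sampled at 20 Hz (dt = 0.05 s) for 100 s.
angle, dt = 0.0, 0.05
for _ in range(2000):
    angle = complementary_filter(angle, gyro_rate=0.5, accel_angle=0.0, dt=dt)
# Pure integration would have drifted to 50 degrees; the filtered
# estimate settles near alpha * rate * dt / (1 - alpha) ~ 1.2 degrees.
```

The design choice is the same as in the full fusion problem: the gyroscope term dominates at short time scales, while the absolute reference removes the unbounded drift.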
Innovative aspects of the AHRS filter by Madgwick et al. (2011)\cite{madgwick2011} include: a single adjustable parameter defined by observable system characteristics; an analytically derived and optimized gradient descent algorithm enabling performance at low sampling rates; an on-line magnetic distortion compensation algorithm; and gyroscope bias drift compensation. The recommended sampling rate for the Madgwick filter to work and provide a proper output is at least 20 Hz.
\subsection{Free launch of radiosonde cluster}
To the best of our knowledge, this experiment is one of the first observations carried out using a cluster of radiosondes to track fluctuations of physical quantities inside clouds and the atmospheric flow field.
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth,angle=90,origin=c]{figures/radiosondes.jpg} \caption{radiosondes} \end{subfigure}
\begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth,angle=-90,origin=c]{figures/receiver_station.jpg} \caption{receiver station} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/filling_the_balloon.jpeg} \caption{filling balloon with helium} \end{subfigure}
\begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth,angle=-90,origin=c]{figures/camera2.jpg} \caption{camera for stereo vision analysis} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth,angle=-90,origin=c]{figures/calibration_instrumentation.jpg} \caption{calibration instrumentation} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{figures/radisondes_calibration.jpeg} \caption{PHT sensor calibration} \end{subfigure}
\caption{Experiment setup with (a) 10 radiosondes, (b) 2 ground stations, (d) 2 cameras and (e) calibration instrumentation.}
\label{fig:experiment_setup}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/map_pht_alt_legend.png} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=\textwidth]{figures/map_pht_alt.png} \end{subfigure}
\caption{Position of the radioprobes together with the corresponding pressure, humidity and temperature readings with respect to travel distance and altitude. The dashboard was generated with MATLAB.}
\label{fig:dashboard_alt}
\end{figure}
\subsection{Relative positioning and dispersion analysis}
It is expected that a combination of the knowledge gained from the numerical simulations and the in-field experiments will enable us to better understand relative dispersion and diffusion in the atmosphere. Indeed, the simulation results provided preliminary insights for the setup of the in-field measurements, such as selecting the initial launch point and the initial neighborhood size.
\begin{figure}[bht!]
\centering
\begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/diff_ll_map_time.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/diff_ned_time.png} \caption{} \end{subfigure}
\caption{Relative positioning of the radiosondes, given in the longitude-latitude frame (a) and the north-east frame (b) with respect to the experiment observation point (or initial launch point).}
\label{fig:ao2_rel_pos}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh20_L_time.png} \caption{Q, h=20 m} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh20_L_time_sm.png} \caption{Q, h=20 m, interpolation} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh50_L_time.png} \caption{Q, h=50 m} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh50_L_time_sm.png} \caption{Q, h=50 m, interpolation} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh100_L_time.png} \caption{Q, h=100 m} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh100_L_time_sm.png} \caption{Q, h=100 m, interpolation} \end{subfigure}
\caption{The distance-neighbor graph computed for different neighborhood sizes at selected time instances. The initial time instance is 14:15:00; the following time instances are given in seconds from the initial one.}
\label{fig:ao2_Q}
\end{figure}
\subsection{Basic spectral analysis from sensor readings}
\begin{figure}[bht!]
\centering
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_wind_speed_rt5.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_wind_speed_spectrum_v2.png} \caption{} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_acc_mag_rt5.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_acc_mag_spectrum_v2.png} \caption{} \end{subfigure}
\caption{(a, c) Wind speed and magnitude of the 3D acceleration measurements from 3 freely floating radioprobes. The datasets were resampled with 5 second regular intervals. (b, d) Power spectra of the wind speed and of the magnitude of the 3D acceleration of the 3 radioprobes; two trend lines (violet and green) are provided for comparison. The frequency range is taken based on the Nyquist frequency, $f_s/2$.}
\label{fig:ao2_vel_spectra}
\end{figure}
\begin{figure}[bht!]
\centering
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_temp_rt.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_temp_spectrum_v2.png} \caption{} \end{subfigure}
\vspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_hum_rt.png} \caption{} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_hum_spectrum_v2.png} \caption{} \end{subfigure}
\caption{(a, c) Temperature and humidity measurements from 3 freely floating radioprobes. The datasets were resampled with 5 second regular intervals. (b, d) Power spectra of the temperature and humidity datasets of the 3 radioprobes; two trend lines (violet and green) are provided for comparison.
The frequency range is taken based on the Nyquist frequency, $f_s/2$.}
\label{fig:ao2_temp_spectra}
\end{figure}
\section{Conclusions}\label{sec:conclusion}
This work describes a new balloon-borne instrumentation system together with a consideration of the new measurement technique. In the paper, we highlighted all the tests and in-field experiments which helped us to validate the measurement system and bring it towards realization. Currently, we are working on the optimization of the transmission and acquisition. The current data transmission rate is 1 packet every 3-4 seconds, which is acceptable for the current prototype, whose computational characteristics, weight and size were already optimized. At the same time, a new ground station is under development, which should allow users to receive data at higher rates and in parallel. For this reason, we are developing a custom gateway based on a custom LoRa architecture. Going forward, we would like to combine the results coming from the numerical simulations and the in-field measurements into a more comprehensive analysis of clouds, cloud microphysics and turbulent fluctuations. In the future, the radioprobe board sensors will be protected from radiation and precipitation sources with a lightweight shield case.
\section*{Acknowledgments}
This project has received funding from the Marie-Sklodowska Curie Actions (MSCA ITN ETN COMPLETE) under the European Union’s Horizon 2020 research and innovation program, grant agreement 675675, \href{http://www.complete-h2020network.eu}{\textit{COMPLETE ITN-ETN NETWORK}}. We would like to thank \href{https://www.oavda.it/la-fondazione}{\textit{L’Osservatorio Astronomico della Regione Autonoma Valle d’Aosta}}, Luca Tommassone and \href{http://www.arpa.piemonte.it/chi-siamo}{\textit{ARPA, Piemonte}} for hosting us during the in-field experiment campaigns.
\bibliographystyle{elsarticle-num-names-mod}
\section{Introduction}
\label{sec:intro}
Clouds are the largest source of uncertainty in weather prediction and climate science. They remain a weak link in the modeling of the atmospheric circulation. This is rooted in the fact that clouds depend on physical and chemical processes over a huge range of scales, from the collisions of micron-sized droplets and particles to the airflow dynamics on scales of thousands of meters (\textcolor{black}{...}). Since ambiguities exist in the representation of clouds in climate models, more observations are needed. However, not all types of clouds are relevant when we discuss important issues such as global warming. Due to their reflectivity, clouds cool the earth by around 12 °C, an effect largely caused by \textit{stratocumulus (warm)} clouds.
However, at the same time, they heat the earth by around 7 °C by trapping emitted radiation, an effect largely caused by cirrus clouds. This averages out to a net cooling of 5 °C\cite{onln:cloud_climatology}. To understand why a cloud evolves as observed, and why heavy rainfall cannot be predicted precisely with sub-grid parametrization, one must know the evolution of the internal fluctuations and the sources of these fluctuations (forces) from direct measurements. In order to determine the fluctuations and forces relevant to cloud dynamics, we need to measure quantities such as temperature, pressure, moisture (humidity), accelerations and velocity fields inside clouds. This can be achieved with a collection of simultaneous point-to-point direct measurements in different parts of the cloud, to obtain precise point measurements and to retrieve information about the general view of the cloud flow. Such observations are not directly available with current instrumentation and measurement techniques. Remote sensing platforms, either ground based (\textcolor{black}{...}), airborne (\textcolor{black}{...}), or spaceborne (\textcolor{black}{...}), are not particularly useful owing to the relatively long times required to complete a scan and the inability to penetrate clouds and precipitation\cite{markowski2017drifter}. Nowadays, radars are the main source of observational information on clouds. They can provide exact information about the morphology of clouds, precipitation levels and liquid water content. Dual-Doppler observations can also provide information on velocity and rotation, and thus on the vorticity field. Direct numerical simulations can also provide insights for understanding the internal fluctuations and intermittency of clouds. However, they alone cannot provide the full picture, since numerical simulations can resolve only a small portion of clouds ($\sim$1-10 meters, (\textcolor{black}{...})).
It is also computationally expensive to resolve all the physical and chemical processes inside clouds for a large portion of the cloud ($\sim$1-10 km, (\textcolor{black}{...})). It is important to study small-scale fluctuations inside clouds and in their counterparts (clear air) within the sub-grid scale of the existing climate simulations (usually the grid size is 10 km, but the state of the art is 2 km with $\sim$90 million core hours \cite{onln:cl_sim_prace_2020, hentgen_2019_cl_sim}). This is because convective parameterization schemes in global climate model (GCM) simulations may underestimate (or overestimate) these fluctuations \cite{zang2022_gcm}. That is why we need more realistic numerical simulations and in-field experiments: i) numerical simulations that can resolve the small-scale dynamics of clouds and their interaction with the surrounding clear air; ii) in-field experiments that can provide the small-scale variations of the physical quantities (velocity, acceleration, pressure, humidity, temperature, etc.) from direct measurements. There are two different approaches for representing clouds and cloudiness in climate simulation models: \textbf{convection-parameterizing} and \textbf{convection-resolving}. Convection-parameterizing simulations overestimate the humidity (cloudiness) at low levels (close to the ground, 2 m above the ground, 850 hPa) and underestimate the humidity at mid-levels (700 hPa, 500 hPa). At the upper level (200 hPa), both simulation approaches usually behave in the same way.
The high level of cloudiness at mid-level altitudes is due to strong and frequent updrafts, strong vertical mixing and favorable dynamical and microphysical conditions for the formation of mixed-phase clouds\cite{hentgen_2019_cl_sim}. {This paper is about a notable gap in the ability to \textit{directly} observe \textit{fluctuations} of physical and chemical quantities inside mid-level (\textit{warm}) clouds.} {Studying the turbulent characteristics of clouds (intermittency, high energy levels in different parts of the cloud) by means of numerical simulations (...) and in-field measurements (...) has been an important topic for the research community.} {In summary, we need new instrumentation and/or measurement techniques to provide reliable, real-time in-field observations inside clouds.} It is difficult to find similar in-field experiments in the current application context, but some similar instrumentation setups can be found, such as \citet{swenson2019}, (\textcolor{black}{...}), etc. The advantage of such an in-field measurement system is threefold: (i) direct quantification of Lagrangian turbulent dispersion and diffusion from real in-field measurements; (ii) tracking of small variations of physical quantities inside real clouds; (iii) a general understanding of the cloud dynamics with simultaneous measurements in different parts of the cloud. For this reason, the COMPLETE project was submitted to H2020. The project was inspired by the experimental method introduced by Richardson L. F. (1926)\cite{richardson1926atmospheric}, and such an in-field experiment had not been carried out since then. The project is interdisciplinary and aims to reduce knowledge gaps in the understanding of cloud dynamics by combining skills from different areas. Within the project, it is proposed to use numerical simulations, laboratory experiments and in-field experiments. Here, we present our work on the in-field measurements.
For the in-field measurements, mini biodegradable radiosondes were designed and developed. The mini radiosonde is only 5x5 cm and weighs 7 grams without battery; it is lifted by a helium balloon with a radius of 20 cm (much smaller than traditional weather balloons). Recently, the second version of the mini radiosonde has been developed, which measures 3.5x4 cm and weighs only around 3 grams. Thus, smaller balloons can be used for future experiments. Each radiosonde includes a set of sensors, such as pressure, humidity, temperature, IMU (Inertial Measurement Unit) and GNSS sensors. With the help of the radiosondes, we aim to create new Lagrangian-based cloud fluctuation datasets. The generated dataset is required to reduce the fragmentation of results and knowledge in this field. By providing more accurate data obtained by the radiosondes, we try to reduce the ambiguities and limitations of numerical simulation and modeling. In the Lagrangian reference system, the fluid flow properties are determined by tracking the motion and properties of the individual fluid particles as they move in time. For example, the temperature is measured along the trajectory of the fluid particle as time passes. In this way, if we can track many fluid particles, we can infer the fluid properties for the whole domain (\textcolor{black}{...}). Sensors usually introduce high- and low-frequency faults. High-frequency faults may arise when the GNSS signals undergo multi-path errors; these errors occur when the GNSS signal is reflected off one or more surfaces before it reaches the receiver antenna. Low-frequency faults can be introduced by IMU sensor readings \cite{art:aHighIntegrity}. This is known as sensor bias: an offset in the measurement while the sensor is not measuring anything. This bias offset can be removed by calibrating the IMU sensor output. The removal of bias in the sensors does not, however, provide a perfect solution, because of the white noise introduced by the sensors.
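The bias-calibration step described above can be sketched as follows. This is a minimal illustration with made-up numbers, not the actual calibration routine of the radioprobe firmware:

```python
def estimate_bias(stationary_samples):
    """Estimate the constant bias as the mean output recorded while the
    probe is known to be at rest (true signal is zero)."""
    return sum(stationary_samples) / len(stationary_samples)

def remove_bias(samples, bias):
    """Subtract the calibrated bias; residual white noise remains and is
    left to later filtering (e.g. Kalman)."""
    return [s - bias for s in samples]

# Illustrative numbers: readings at rest cluster around a 0.12 offset.
rest = [0.10, 0.13, 0.12, 0.11, 0.14]
bias = estimate_bias(rest)                           # 0.12
corrected = remove_bias([0.62, 0.12, -0.38], bias)   # ~[0.5, 0.0, -0.5]
```

Averaging over a rest period cancels the white noise in the bias estimate, but the noise in each individual corrected sample remains, which is why a filtering stage is still required afterwards.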
The further application of subsequent filters (e.g. Kalman) helps us to remove the effects of these errors in our measurements. In Section \ref{sec:system_description} we describe the measurement system. Then, in Section \ref{sec:result_discussion}, we discuss the traceability of the system, the quality of the obtained dataset and the validation against reference systems. In Section \ref{sec:field_tests}, we provide the results from the preliminary in-field experiment campaigns, and we conclude our discussion in Section \ref{sec:conclusion}.
\section{Measurement system description}\label{sec:system_description}
Getting the probes to observe the relevant parts of the cloud over different ranges of scales during the cloud lifetime is challenging in terms of instrumentation setup. To accomplish this challenging task, the following measurement system was designed for the in-field experiments, see Figure \ref{fig:scheme}. The measurement system consists of the following building blocks: a set of radiosondes, ground stations and a post-processing machine. We aim to place a set of radiosondes inside warm clouds (or any other atmospheric environment), where each radiosonde can passively follow the fluid flow. Thus, it is possible to infer the real dynamics of the surrounding fluid: cloud, when the balloon is inside it, or clear air, when it is outside.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.58\textwidth} \includegraphics[width=\linewidth]{figures/in_field_measurement_concept_v3.pdf} \caption{context} \end{subfigure}
\hspace{0.01\textwidth}
\begin{subfigure}[b]{0.39\textwidth} \centering \includegraphics[height=0.65\linewidth]{figures/radisonde_balloon.pdf} \caption{radiosonde} \includegraphics[height=0.4\linewidth]{figures/radioprobe.pdf} \caption{electronic board components} \end{subfigure}
\caption{In-field measurement context.}
\label{fig:scheme}
\end{figure}
Each radiosonde transmits its sensor readings to the ground stations with the LoRa radio transmission protocol.
LoRa is a relatively new proprietary communication technology that allows long-range communication while consuming very little power. It utilizes the license-free Industrial, Scientific and Medical (ISM) frequency bands to exchange information at low data rates. The ground stations receive data from the radiosondes and are connected to the post-processing machine, where all data are stored. In order to reduce data losses, the same data transmitted by a radiosonde can be received by different ground stations. The design of the radiosonde electronic board, the tests inside the environmental chamber and the initial performance evaluation of the radiosonde in field experiments were described in the previous work by Miryam et al.\cite{miryam_sensors2021}. In Figure \ref{fig:scheme}(b) we can see the radiosonde design: it includes a biodegradable balloon and the radioprobe (electronic board). The balloon is filled with helium and the radioprobe is connected to the battery. They are attached together and can float (stay in the air) for multiple hours. The embedded electronics (microprocessor, radio module and sensors) can measure velocity, acceleration, pressure, temperature and humidity fluctuations in the surrounding environment. This configuration was selected as a result of the in-field experiment tests (see Sections \ref{sec:result_discussion} and \ref{sec:field_tests} for discussion).
\subsection{Radioprobe}
Figure \ref{fig:radioprobeversions} shows the current (red) and the newly optimized (green) prototypes of the radioprobe. The new version of the radioprobe prototype is currently under hardware and software tests; thus, the focus of the discussion will be the current working prototype. The building blocks of the radioprobe board are illustrated in Figure \ref{fig:scheme}(c): it consists of a microcontroller, a power module, a radio transmission module, sensors and antennas.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/radioprobe_versions}
\caption{Current (red) and second (green) prototype of the radioprobe board.}
\label{fig:radioprobeversions}
\end{figure}
The \textit{microcontroller} is the data-processing and control unit, which controls the other units, acquires the sensor readings and executes function calls in an automated way inside the device. The \textit{radio transmission module} of the radioprobe enables one-way wireless communication with the ground stations using radio frequency signals. The \textit{PHT} (Pressure, Humidity, Temperature), \textit{IMU} (Inertial Measurement Unit) and \textit{GNSS} (Global Navigation Satellite System) \textit{sensors} provide readings of the physical quantities \cite{miryam_sensors2021}. Those quantities, together with the sensor operating ranges, sample times and providing functional units, are described in Table \ref{tbl:sensor_ranges}.
\begin{table}[h!]
\centering
\caption{Sensor operating ranges.}
\begin{tabular}{l l l l}
\hline
Physical quantity & Range & Sample time & Device \\ [0.5ex]
\hline\hline
Pressure & [300, 1100] mbar & 4 s & PHT \\
Humidity & [0, 100] \% & 4 s & PHT \\
Temperature & [-40, 85] $^{\circ}$C & 4 s & PHT \\
Longitude & degrees & 4 s & GNSS \\
Latitude & degrees & 4 s & GNSS \\
Altitude & m & 4 s & GNSS \\
Acceleration & [-16, 16] g & 4 s & IMU \\
Course & [-1, 1] quats & 4 s & IMU \\ [1ex]
\hline
\end{tabular}
\label{tbl:sensor_ranges}
\end{table}
The sensors were chosen based on their compact size and low power consumption. Furthermore, they are configured to work in energy-efficient modes. For example, the GNSS sensor by U-blox has a compact size and can be configured to operate in e-mode. In other studies, researchers exploited high-precision GNSS sensors \cite{swenson2019}, (\textcolor{black}{...}); however, such sensors consume more power, which is crucial in our application context.
Moreover, the GNSS sensor provides compact PVT (Position, Velocity and Time) navigation information using a proprietary protocol \cite{docs:ubx_rec_desc}, which cannot be provided in a single sensor reading using the traditional NMEA protocol\cite{onln:nmea}.
\subsection{Biodegradable balloon and stable floating}
The radiosonde system should float at an almost constant altitude during the experiments. To float at a fixed altitude, the balloon volume must remain relatively constant during the flight \cite{basso2020,Yajima2009}. Therefore, the balloons used in our experiments were made from non-elastic materials. Furthermore, to minimize the environmental impact of the radiosonde system, the electronic components and the balloon material should be as biodegradable as possible. The material characteristics of the balloon, the processing methods and the polymer coatings were studied within the COMPLETE project by Basso et al. \cite{basso2020}. Green polymers, such as Mater-Bi and PLA, were studied and compared with materials used for traditional weather balloon production, such as latex and Mylar. The properties of the above-mentioned materials were examined in laboratory experiments in collaboration with IIT Genoa. The main properties of interest are the \textit{tensile strength, hydrophobicity, helium permeability, and resistance to variations} of the surrounding temperature and humidity. As a result of the experiments, it was concluded that Mater-Bi with applied coatings was the best fit for satisfying the above properties\cite{basso2020}. For the recent in-field experiments, spherical balloons (R = 20 cm) were made from store-bought Mater-Bi bags. The selected Mater-Bi material has a thickness of 20 $\mu$m and a density of 1.24 g cm$^{-3}$, i.e. it is thinner than in the previous studies (30 $\mu$m) carried out by Basso et al.\cite{basso2020}. Thus, the balloon mass was reduced by a factor of 1.5, which in turn reduced the overall payload budget (eq. \ref{eq:volballoon}).
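As a quick numerical check of this mass reduction, the envelope mass of a spherical balloon, $4\pi R^2 \Delta d\, \rho_m$, can be evaluated for the two film thicknesses. This is only a sketch using the material values quoted in the text:

```python
import math

# Envelope mass of a spherical balloon: m_b = 4*pi*R^2 * thickness * density.
# Values quoted in the text: R = 0.20 m, rho_m = 1240 kg/m^3 (1.24 g/cm^3).
def balloon_mass_g(radius_m, thickness_m, density_kg_m3=1240.0):
    surface_m2 = 4.0 * math.pi * radius_m ** 2
    return surface_m2 * thickness_m * density_kg_m3 * 1000.0  # kg -> g

m_20um = balloon_mass_g(0.20, 20e-6)  # ~12.5 g with the new 20 um film
m_30um = balloon_mass_g(0.20, 30e-6)  # ~18.7 g with the earlier 30 um film
ratio = m_30um / m_20um               # exactly 1.5, the quoted reduction
```

Since the envelope mass scales linearly with the film thickness, the 30-to-20 $\mu$m change directly yields the factor of 1.5.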
The balloon dimensions were determined from the weight of the radiosonde electronic board with battery and from the atmospheric parameters at the target floating altitude (see Table \ref{tbl:air_density}). For stable floating at a fixed altitude, the volume of the balloon should satisfy the following equation: \begin{equation} V_b = \frac{m_{total}}{\rho_a (1- M_g/M_a)} = \frac{m_r + m_b}{\rho_a (1- M_g/M_a)}, \label{eq:volballoon} \end{equation} where $m_r$ is the mass of the radioprobe with battery and connections, $m_b$ is the mass of the balloon, $\rho_a$ is the air density at the given altitude, and $M_a$ and $M_g$ are the molar masses of air and of the gas inside the balloon. The balloon mass is $m_b = S\Delta d\, \rho_m = 4 \pi R^2\ \Delta d\, \rho_m$, where $S$ is the surface area of the spherical balloon with radius $R$, and $\Delta d$ and $\rho_m$ are the thickness and density of the Mater-Bi material. \begin{table}[h!] \centering \caption{Standard atmospheric parameters for the possible operating altitude range of the radiosonde. Here, altitude is given as height above sea level, T is the temperature, P is the pressure and $\rho_a$ is the air density.} \begin{tabular}{l l l l} \hline \rule{0pt}{2.6ex} Altitude [m] & T [K] & P [hPa] & $\rho_a$ [$kg/m^3$] \\ \hline 0 & 288 & 1013 & 1.22 \\ 500 & 285 & 950 & 1.17 \\ 1000 & 282 & 900 & 1.11 \\ 1500 & 278 & 850 & 1.06 \\ 2000 & 275 & 795 & 1.01 \\ 2500 & 272 & 748 & 0.95 \\ 3000 & 269 & 701 & 0.90 \\ \hline \end{tabular} \label{tbl:air_density} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.7\linewidth]{figures/mass_budget.png} \caption{Total payload budget with respect to the operating altitude. Each line represents the variation of the payload budget with altitude for given balloon dimensions: $R$ is the radius of the spherical balloon and $V$ is the volume in liters. As Table \ref{tbl:air_density} shows, the air density, and thus the lifting force, decreases with altitude.
Horizontal lines show the weight of the radiosonde: the current version (17.5 g), the second, optimized version (8.5 g) and an intermediate value (13 g). This weight includes the battery and all equipment needed to attach the radiosonde to the balloon. A detailed breakdown of the radiosonde weight is given in Table \ref{tbl:probeweight}.} \label{fig:weightvsvol} \end{figure} Figure \ref{fig:weightvsvol} shows the relationship between the total liftable payload (excluding the balloon weight, $m_b$) and the floating altitude for fixed balloon dimensions. In the current design, the weight of the radioprobe with battery and connections, $m_r$, is 17.5 grams (see Table \ref{tbl:probeweight} for details). It can be lifted up to 1725 meters above sea level with a 20 cm radius balloon, up to 2650 meters with a 21 cm radius balloon, and so on. The second version of the prototype (8.5 grams) allows us to use even smaller balloons and less gas (e.g., helium) to reach the same floating altitude. \begin{table}[h!] \centering \caption{Radiosonde payload weight.} \begin{tabular}{l l l} \hline \rule{0pt}{2.6ex} Part & \multicolumn{2}{c}{Mass [grams]}\\ \rule{0pt}{2.6ex} & Current design & New design \\ \hline Radioprobe & 7 & 3 \\ Battery & 8 & 3 \\ Connections & 2.5 & 2.5 \\ Balloon & 12.5 (R=20 cm) & 9 (R=17 cm) \\ \hline \rule{0pt}{2.6ex} Total & 30 & 17.5 \\ \hline \end{tabular} \label{tbl:probeweight} \end{table} \subsection{Ground station and network architecture} A LoRa-based wireless sensor network (WSN) concept was adopted for the radiosonde network. A star architecture was used, in which each radiosonde is connected to the ground receiver station through a point-to-point link. The feasibility of the selected network architecture was analyzed in different application scenarios \cite{bertoldo2018feasibility, bertoldo2018urbannoisy, paredes2019propagation}. The first in-field test of the network architecture in the current application context was presented in \cite{miryam_sensors2021}.
In most cases, LoRa-based WSNs are deployed within a LoRaWAN network. In this work, however, the LoRa protocol is used to create an ad hoc private network and adapt the technology to the working scenario \cite{miryam_sensors2021}. Therefore, the commercial off-the-shelf LoRa-based transceiver module RFM95 from HopeRF was used. This module features long-range spread-spectrum communication links and high immunity to interference while optimizing power use \cite{miryam_sensors2021}. It allows transmission powers from 5 dBm (3.16 mW) to 20 dBm (100 mW), although according to the regulations released by the European Telecommunications Standards Institute (ETSI), the maximum power allowed in the European area is 14 dBm (25.12 mW) \cite{onln:ETSI_EN_regulation}. \subsection{Data acquisition and processing} The data-processing flow of the radioprobe is shown in Figure \ref{fig:probeworkingprinciple}. The flow consists of steps performed by the radioprobe (transmitter) and by the ground station (receiver). Part of the processing is done directly on the transmitter side, while the more power- and time-consuming part is done on the receiver side with the help of a post-processing machine. \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{figures/probe_working_principle} \caption{Radiosonde system processing flow.} \label{fig:probeworkingprinciple} \end{figure} As can be observed in Figure \ref{fig:probeworkingprinciple}, sensor data are processed by an AHRS (Attitude and Heading Reference System) filter before being sent to the ground station. The AHRS filter acquires readings from the 9-DOF IMU sensor (3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer) and outputs the course (orientation) of the radioprobe. To remove possible errors introduced by the sensor readings, the AHRS filter also uses sensor calibration data \cite{madgwick2011}.
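The course output of the AHRS filter is a unit quaternion (cf. the [-1, 1] ``quats'' range in Table \ref{tbl:sensor_ranges}), which is what allows body-frame sensor readings to be rotated into a fixed reference frame. A minimal, illustrative Python sketch of such a rotation, not the actual on-board implementation:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z): v' = q v q*.
    Uses the expanded form v' = v + w*t + (q_vec x t), t = 2*(q_vec x v)."""
    w, x, y, z = q
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    return (v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx))

# A 90-degree rotation about the vertical axis maps the body x-axis
# onto the reference y-axis:
q90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(quat_rotate(q90, (1.0, 0.0, 0.0)))  # approximately (0, 1, 0)
```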
The IMU sensor readings are provided with respect to the body frame (xyz) of the radioprobe. These readings can be translated into the local experiment frame (X$_e$Y$_e$Z$_e$) using the orientation data from the AHRS filter. Furthermore, the acceleration data represented in the local experiment frame are useful for obtaining positioning information during GNSS outages. The acceleration data are fused with the GNSS sensor data by means of a Kalman filter, which has two operating modes: \textit{predict} and \textit{update} \cite{Kalman1960ANA}. In predict mode, we use the IMU data to provide position information with respect to the last reference position. As soon as GNSS data become available, we update the reference position. In this way, position information is available even during GNSS outages. Since the GNSS sensor consumes relatively more power than the IMU and the other sensors, this approach also helps to reduce power consumption. \section{Results and discussion}\label{sec:result_discussion} As stated in the previous sections, preliminary results were presented in the earlier works by Basso et al. \cite{basso2020} and Miryam et al. \cite{miryam_sensors2021}. However, in those works, not all parts of the measurement system were tested and validated with extensive in-field campaigns. Furthermore, certain post-processing algorithms can only be applied once proper in-field experiments have been carried out. In the following subsections, recently obtained results are described as a set of reached milestones. The measurement system was compared with, and validated against, traditional instrumentation. The types of experiments that can be carried out with the proposed measurement system are: (i) fixed-point measurements at ground level; (ii) vertical profiling measurements of the atmosphere; (iii) Lagrangian tracking measurements with a cluster of radiosondes. \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\linewidth]{figures/rs_conf_a.png} \caption{} \end{subfigure} \hspace{0.1\textwidth} \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\linewidth]{figures/rs_conf_b.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{figures/conf_A_real.png} \caption{} \end{subfigure} \hspace{0.02\textwidth} \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\linewidth]{figures/radioprobe_with_battery} \caption{} \end{subfigure} \caption{Two different configurations of the radiosonde. (a) Configuration A: the radioprobe board is \textbf{outside} the balloon. (b) Configuration B: the radioprobe board is in a pocket \textbf{inside} the balloon. (c) Radiosonde in configuration A, attached to the ground with a thread. (d) Radioprobe board with battery.} \label{fig:inr1_conf} \end{figure} \subsection{Pre-launch calibration and fixed point measurements} The in-field tests started with trying various configurations of the radioprobe and validating the sensor measurements against fixed-point measurements at ground level. Figure \ref{fig:inr1_conf} illustrates two different configurations, which were tested and validated against the reference station at the INRIM campus. \begin{figure}[ht!]
\centering \begin{subfigure}{0.4\textwidth} \caption*{Calibration 1, without balloons} \includegraphics[width=\linewidth]{figures/inr1_rawcal1_pres} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \caption*{Calibration 2, with balloons} \includegraphics[width=\linewidth]{figures/inr1_rawcal2_pres} \caption{} \end{subfigure} \vspace{0.001\textheight} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal1_hum} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal2_hum} \caption{} \end{subfigure} \vspace{0.001\textheight} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal1_temp} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/inr1_rawcal2_temp} \caption{} \end{subfigure} \caption{Pressure, humidity and temperature readings from the radiosondes in two different configurations. (a,c,e) First sensor readings without balloons. (b,d,f) Sensor readings after attaching the balloons. Conf. A: radioprobe \textbf{outside} the balloon. Conf. B: radioprobe \textbf{inside} the balloon.} \label{fig:inr1_pht} \end{figure} Both configurations were tested before and after attaching the balloons. The second configuration (\textit{conf. B}: radioprobe inside the balloon) can introduce unintended biases into the sensor readings. In Figure \ref{fig:inr1_pht} we compare the pressure, humidity and temperature readings from the two configurations. The sensor readings are also compared with those of the fixed Vaisala station, which includes an RS41 probe. The reference Vaisala station was pre-calibrated and provides temperature, pressure and humidity readings all day long at a 1-minute time interval.
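Since the radioprobes report every 4 s (Table \ref{tbl:sensor_ranges}) while the reference station reports once per minute, comparing the two series requires resampling them onto a common time grid. A minimal sketch of one option, averaging the 4 s samples into 1-minute bins (illustrative only; the function name is ours, not the project's post-processing code):

```python
def bin_average(times_s, values, bin_s=60):
    """Average (time [s], value) samples into fixed-width time bins, e.g.
    4 s radioprobe samples onto the reference station's 1-minute grid."""
    bins = {}
    for t, v in zip(times_s, values):
        bins.setdefault(int(t // bin_s), []).append(v)
    return {b * bin_s: sum(vs) / len(vs) for b, vs in sorted(bins.items())}

# Two minutes of synthetic 4 s pressure samples with a slow drift:
times = list(range(0, 120, 4))
pressure = [1013.0 + 0.01 * i for i in range(len(times))]
minute_means = bin_average(times, pressure)
print(list(minute_means.keys()))  # [0, 60] -> one averaged value per minute
```

The binned means can then be differenced directly against the reference readings to estimate the biases discussed next.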
In the first period of the experiment (Panels a,c,e), it can be seen that the sensors take some time ($\sim$ 10 minutes, from 11.10 to 11.20) to warm up and catch up with the Vaisala station readings. This is a common behavior of MEMS sensors, particularly atmospheric measurement sensors (\textcolor{black}{...}). After attaching the radioprobes to the balloons, the readings from configuration B started to show mismatches with the reference station (Panels b,d,f), while the readings of configuration A were better aligned with the reference measurements, particularly for pressure and temperature. Some small fluctuations in the temperature and humidity readings with respect to the fixed station may be due to the positioning and movement of our probes around the station. This experiment helped to validate the proper configuration of the radiosonde and to quantify the biases and the warm-up time of the sensors. \subsection{Vertical atmospheric profiling measurements} The second dual-sonde launch experiment was held in collaboration with the Regional Agency for the Protection of the Environment (ARPA) of the Piedmont region on June 9, 2021 at Levaldigi Airport, Cuneo, Italy. The experiment site is equipped with an automatic sounding system, from which ARPA Piemonte launches a radiosonde twice a day for atmospheric profiling measurements. In the first dual-sonde launch experiment, we had observed interference problems with the GNSS sensor \cite{miryam_sensors2021}; at that time, the radioprobe board was attached directly to the case of the Vaisala RS-41 probe. To resolve this issue, during the second launch the radioprobe was attached to the reference Vaisala probe with an 80 cm offset. \begin{figure}[bht!]
\centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/num_packet_time_l.png} \caption{} \end{subfigure} \hspace{0.001\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/num_packet_alt_l.png} \caption{} \end{subfigure} \caption{Packet transmission test over long distances at Levaldigi Airport with ARPA Piemonte. (a) Number of packets received in each minute, together with the moving average. (b) Number of packets per altitude level, bin size = 400 meters.} \label{fig:arpa_transmission} \end{figure} During the experiment, data transmission continued for around 1 hour, until the radioprobe reached $\sim$ 9 km in altitude and \hl{13 km in distance}. We received packets every 3--4 seconds, and the relative performance at our target altitudes (2--3 km) is also promising. In Figure \ref{fig:arpa_transmission}, we can see the number of packets received in each minute (panel a) and at a given altitude (panel b) during the first 25 minutes of the launch. {The original idea is to reach a 1 Hz transmission rate. However, with the current computational parameters of the radioprobe and the current data packet size, this is difficult to achieve. Moreover, the current prototype of the receiver station does not support multi-channel LoRa connections. The design of a new multi-channel receiver station can solve this issue. The new station, which is under development, can receive multiple connections in each channel, as also done in}\cite{swenson2019}. {In this way, the receiver station can receive data packets from 10--20 radiosondes simultaneously, without packet losses due to collisions.} \begin{figure}[bht!]
\centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_cmp_latlong_map.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_cmp_alt_time.png} \caption{} \end{subfigure} \caption{GNSS positioning measurements of the COMPLETE radioprobe and comparison with the Vaisala RS-41 probe. (a) Map plot of longitude and latitude readings. (b) Altitude readings; the inset is plotted for better comparison.} \label{fig:arpa2_gnss_lla} \end{figure} Figure \ref{fig:arpa2_gnss_lla} shows the comparison of the GNSS sensor measurements. The raw GNSS readings of longitude, latitude and altitude already provide quite accurate results with respect to the reference system, prior to applying any filters. \begin{figure}[bht!] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_vel_comp.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_wind_speed.png} \caption{} \end{subfigure} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_wind_speed_rt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_wind_speed_spectrum_v2.png} \caption{} \end{subfigure} \caption{Velocity measurements of the radioprobe. (a) Velocity components and their magnitude. (b) Wind speed comparison from both probes; only the north and east velocity components were used for the wind speed computation. (c) Wind speed from the COMPLETE radioprobe: raw measurements (blue), dataset resampled at regular 4-second intervals (orange) and the reference dataset from the Vaisala probe. (d) Power spectrum of the wind speed dataset of the COMPLETE radioprobe.
Besides the raw spectrum (blue), the moving average of the spectrum values (orange) and two trend lines (yellow and violet) are provided for comparison. The frequency range was taken based on the Nyquist frequency, which is half of the sampling frequency, $f_s/2 = 0.125 s^{-1}$.} \label{fig:arpa2_gnss_vel} \end{figure} The GNSS sensor can also provide velocity readings in the north, east and down directions, as plotted in Figure \ref{fig:arpa2_gnss_vel}a. The horizontal wind speed was computed from the north and east velocity components and compared with the horizontal wind speed readings of the RS41 probe; see Fig. \ref{fig:arpa2_gnss_vel}b. The wind speed was further analyzed with the FFT (Fast Fourier Transform) to obtain preliminary power spectra of the fluctuations. To compute the spectra, a 30-minute wind speed dataset was resampled with a 4 s time step (see Fig. \ref{fig:arpa2_gnss_vel}c), which gives a frequency range from 5$\cdot 10^{-4}$ s$^{-1}$ to 0.25 s$^{-1}$ and a Nyquist frequency of 0.125 s$^{-1}$ (2$\pi$/8 = $\pi$/4 rad/s); see Fig. \ref{fig:arpa2_gnss_vel}d. The same kind of analysis can be performed with the vertical velocity and temperature datasets. The power spectrum of the vertical velocity can be used to identify a cut-off at the Brunt-Vaisala frequency, while the vertical temperature profile (Fig. \ref{fig:arpa_pht}e) can be used to derive a complete profile of the Brunt-Vaisala frequency along the altitude \cite{nath2010turbulencecharacteristics, jaiswal2020gpsradiosonde}. \begin{figure}[ht!]
\centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_pres_time_alt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_delta_temp.png} \caption{} \end{subfigure} \vspace{0.001\textheight} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_hum_time_alt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_delta_hum.png} \caption{} \end{subfigure} \vspace{0.001\textheight} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_temp_time_alt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{figures/arpa2_delta_temp.png} \caption{} \end{subfigure} \caption{Pressure, humidity and temperature readings from the radiosondes. (a, c, e) Comparison between the COMPLETE and Vaisala probes. (b, d, f) Differences between the two measurements. Dashed lines highlight the absolute accuracy ranges of pressure ($\pm$1 hPa), humidity ($\pm$3 \%) and temperature ($\pm$1 $^o$C) \cite{docs:pht_datasheet}.} \label{fig:arpa_pht} \end{figure} The plots in Figure \ref{fig:arpa_pht} show that the radioprobe experienced some biases during the launch with respect to the Vaisala RS-41 probe. The biases are evident, especially in the humidity and temperature readings, mainly due to heating and radiation on a sunny day. However, biases due to heating are unlikely to be noticeable inside clouds. {Furthermore, the sensor datasheet suggests that the air flow toward the vent hole of the PHT sensor has to be engineered such that sufficient air exchange between the inside and the outside is possible. This aspect was already considered during the PCB board design, the tests in the environmental chamber and the field experiments} \cite{miryam_sensors2021}.
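The dashed accuracy bands in Figure \ref{fig:arpa_pht} suggest a simple quantitative summary: the fraction of paired samples whose deviation from the reference stays within the datasheet accuracy. A hedged sketch (the tolerances are the datasheet figures quoted in the caption; the data series below are placeholders, not measured values):

```python
def fraction_within(probe, reference, tolerance):
    """Fraction of paired samples whose absolute deviation from the
    reference lies within the given accuracy band."""
    diffs = [abs(p - r) for p, r in zip(probe, reference)]
    return sum(d <= tolerance for d in diffs) / len(diffs)

# Datasheet absolute accuracies quoted in the figure caption:
TOL = {"pressure_hPa": 1.0, "humidity_pct": 3.0, "temperature_C": 1.0}

# Placeholder temperature series standing in for probe/reference profiles:
probe_T = [15.2, 14.8, 13.9, 12.5, 11.0]
ref_T = [15.0, 15.0, 14.0, 12.0, 11.0]
print(fraction_within(probe_T, ref_T, TOL["temperature_C"]))  # -> 1.0
```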
{It is expected, however, that the second board design will improve the humidity measurements. Due to the slow response time, low data rates are also suggested for atmospheric observation applications. To achieve the stated response time of the device for humidity measurements, which is 1 second to reach 63\% of a step change, an air-flow velocity of approximately 1 m/s is needed} \cite{docs:pht_datasheet}. {For this purpose, the sensor's response to environmental changes was tested and validated inside a Kambic KK190 CHLT climatic chamber in the Applied Thermodynamics Laboratory of the Italian National Metrology Institute (INRiM)} \cite{miryam_sensors2021}. It is worth mentioning that the PHT sensor, like the other sensor components, was selected for its compact size, low power consumption and low cost. \section{In-field measurements with the cluster of radiosondes}\label{sec:field_tests} In the previous section, test and validation results were presented with respect to traditional fixed and vertical-profiling radiosondes. Here, we present preliminary results of simultaneous measurements from a cluster of radiosondes. \subsection{Multiple tethered radiosonde setup} The tests started with measurements from 5 tethered radiosondes at INRIM, as shown in Figure \ref{fig:inr2_setup}. \begin{figure}[ht!] \centering \begin{subfigure}{0.75\textwidth} \includegraphics[width=\linewidth]{figures/inr2_photo1.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.75\textwidth} \includegraphics[width=\linewidth]{figures/inr2_photo2.png} \caption{} \end{subfigure} \caption{Experiment with multiple tethered radiosondes at INRIM. Five radiosondes and 2 ground stations were prepared for this experiment. Two radiosondes are highlighted in red and black so that they can be tracked with a video camera.} \label{fig:inr2_setup} \end{figure} \begin{figure}[ht!]
\centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_pres_cal.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_raw_pres.png} \caption{} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_hum_cal.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_raw_hum.png} \caption{} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_temp_cal.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/inr2_raw_temp.png} \caption{} \end{subfigure} \caption{Pressure, humidity and temperature measurements in the two phases of the experiment: calibration and fluctuation of the radiosondes, which are tethered to the ground. (a, c, e) PHT readings during calibration, where we removed systematic bias offsets. (b, d, f) Sensor readings during the fluctuation phase of the radiosondes.} \label{fig:inr2_pht} \end{figure} \subsection{Validation of position with the stereo vision}\label{sec:pos_validation} The integration of gyroscope measurement errors leads to an accumulating error in the calculated orientation; therefore, gyroscopes alone cannot provide an absolute measurement of orientation. An accelerometer and a magnetometer measure the earth's gravitational and magnetic fields, respectively, and so provide an absolute reference of orientation. However, they are subject to high levels of noise; for example, accelerations due to motion corrupt the measured direction of gravity. The task of an orientation filter is to compute a single estimate of orientation through the optimal fusion of gyroscope, accelerometer and magnetometer measurements \cite{madgwick2011}.
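The fusion logic described above (a smooth but drifting gyroscope corrected by an absolute but noisy accelerometer reference) can be illustrated with a single-axis complementary filter. This is a deliberately simplified stand-in for the Madgwick AHRS filter actually used, not its implementation:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Single-axis orientation estimate: integrate the gyro rate (smooth,
    but drifting) while pulling toward the accelerometer angle (absolute,
    but noisy). alpha sets the trust placed in the gyro."""
    angle = accel_angles[0]
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# A gyro with a constant 0.5 deg/s bias while the true angle stays at 10 deg:
n = 500
est = complementary_filter([0.5] * n, [10.0] * n, dt=0.05)
# Pure gyro integration would reach 22.5 deg after 25 s; the filter instead
# settles near 11.2 deg, bounding the drift close to the true angle.
print(round(est[-1], 1))  # -> 11.2
```

The Madgwick filter replaces this fixed blend with a gradient-descent correction and adds magnetometer fusion and gyroscope bias compensation.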
Innovative aspects of the AHRS filter by Madgwick et al. (2011) \cite{madgwick2011} include: a single adjustable parameter defined by observable system characteristics; an analytically derived and optimized gradient-descent algorithm enabling performance at low sampling rates; an on-line magnetic distortion compensation algorithm; and gyroscope bias-drift compensation. A sampling rate of at least 20 Hz is recommended for the Madgwick filter to work and provide proper output. \subsection{Free launch of radiosonde cluster} To our knowledge, this experiment is one of the first observations carried out with a cluster of radiosondes to track fluctuations of quantities inside clouds and in the atmospheric flow field. \begin{figure}[ht!] \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth,angle=90,origin=c]{figures/radiosondes.jpg} \caption{radiosondes} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth,angle=-90,origin=c]{figures/receiver_station.jpg} \caption{receiver station} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/filling_the_balloon.jpeg} \caption{filling balloon with helium} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth,angle=-90,origin=c]{figures/camera2.jpg} \caption{camera for stereo vision analysis} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth,angle=-90,origin=c]{figures/calibration_instrumentation.jpg} \caption{calibration instrumentation} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{figures/radisondes_calibration.jpeg} \caption{PHT sensor calibration} \end{subfigure} \caption{Experiment setup with (a) 10 radiosondes, (b) 2 ground stations, (d) 2 cameras and (e) calibration instrumentation.}
\label{fig:experiment_setup} \end{figure} \begin{figure}[ht!] \centering \begin{subfigure}{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/map_pht_alt_legend.png} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=\textwidth]{figures/map_pht_alt.png} \end{subfigure} \caption{Positions of the radioprobes together with the corresponding pressure, humidity and temperature readings with respect to the travel distance and altitude. The dashboard was generated with MATLAB.} \label{fig:dashboard_alt} \end{figure} \subsection{Relative positioning and dispersion analysis} It is expected that combining the knowledge gained from the numerical simulations and the in-field experiments will enable us to better understand relative dispersion and diffusion in the atmosphere. Indeed, the simulation results provided preliminary insights for the setup of the in-field measurements, such as the selection of the initial launch point and the initial neighborhood size. \begin{figure}[bht!] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/diff_ll_map_time.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\linewidth]{figures/diff_ned_time.png} \caption{} \end{subfigure} \caption{Relative positions of the radiosondes in the longitude-latitude frame (a) and the north-east frame (b), with respect to the experiment observation point (or initial launch point).} \label{fig:ao2_rel_pos} \end{figure} \begin{figure}[ht!]
\centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh20_L_time.png} \caption{Q, h=20 m} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh20_L_time_sm.png} \caption{Q, h=20 m, interpolation} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh50_L_time.png} \caption{Q, h=50 m} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh50_L_time_sm.png} \caption{Q, h=50 m, interpolation} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh100_L_time.png} \caption{Q, h=100 m} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figures/Qh100_L_time_sm.png} \caption{Q, h=100 m, interpolation} \end{subfigure} \caption{The distance-neighbor graph computed for different neighborhood sizes at selected time instances. The initial time instance is 14:15:00; the following time instances are given in seconds from the initial one.} \label{fig:ao2_Q} \end{figure} \subsection{Basic spectral analysis from sensor readings} \begin{figure}[bht!]
\centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_wind_speed_rt5.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_wind_speed_spectrum_v2.png} \caption{} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_acc_mag_rt5.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_acc_mag_spectrum_v2.png} \caption{} \end{subfigure} \caption{(a, c) Wind speed and magnitude of the 3D acceleration measurements from 3 freely floating radioprobes. The datasets were resampled at regular 5-second intervals. (b, d) Power spectra of the wind speed and 3D-acceleration-magnitude datasets of the 3 radioprobes; two trend lines (violet and green) are provided for comparison. The frequency range is taken based on the Nyquist frequency, $f_s/2$.} \label{fig:ao2_vel_spectra} \end{figure} \begin{figure}[bht!] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_temp_rt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_temp_spectrum_v2.png} \caption{} \end{subfigure} \vspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_hum_rt.png} \caption{} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/ao2_hum_spectrum_v2.png} \caption{} \end{subfigure} \caption{(a, c) Temperature and humidity measurements from 3 freely floating radioprobes. The datasets were resampled at regular 5-second intervals. (b, d) Power spectra of the temperature and humidity datasets of the 3 radioprobes; two trend lines (violet and green) are provided for comparison.
The frequency range is taken based on the Nyquist frequency, $f_s/2$.} \label{fig:ao2_temp_spectra} \end{figure} \section{Conclusions}\label{sec:conclusion} This work describes a new balloon-borne instrumentation system together with a discussion of the new measurement technique. We have highlighted all the tests and in-field experiments that helped us validate the measurement system and bring it towards realization. Currently, we are working on the optimization of transmission and acquisition. The current data transmission rate is 1 packet per 3--4 seconds, which is acceptable for the current prototype, which was already designed with optimized computational characteristics, weight and size. At the same time, a new ground station is under development, which should allow users to receive data at higher rates and in parallel; for this reason, we are developing a custom gateway based on a custom LoRa architecture. Going forward, we would like to combine the results coming from numerical simulations and in-field measurements into a more comprehensive analysis of clouds, cloud microphysics and turbulent fluctuations. In the future, the radioprobe board sensors will be protected from radiation and precipitation with a lightweight shield case. \section*{Acknowledgments} This project has received funding from the Marie Sk{\l}odowska-Curie Actions (MSCA ITN ETN COMPLETE) under the European Union’s Horizon 2020 research and innovation program, grant agreement 675675, \href{http://www.complete-h2020network.eu}{\textit{COMPLETE ITN-ETN NETWORK}}. We would like to thank \href{https://www.oavda.it/la-fondazione}{\textit{L’Osservatorio Astronomico della Regione Autonoma Valle d’Aosta}}, Luca Tommassone and \href{http://www.arpa.piemonte.it/chi-siamo}{\textit{ARPA Piemonte}} for hosting us during the in-field experiment campaigns. \bibliographystyle{elsarticle-num-names-mod}
\section{Introduction and general conventions} Let $M$ be any closed, orientable, hyperbolic $3$-manifold. The volume of $M$ is known to be an extremely powerful topological invariant, but its relationship to more classical topological invariants remains elusive. The main geometrical result of this paper, Theorem \ref{geom 11}, asserts that if $\vol M \le 3.08$ then $H_1(M,\Z_2)$ has rank at most $6$. The Weeks-Hodgson census of closed hyperbolic $3$-manifolds \cite{snappea} contains two examples, {\tt m135(-1,3)} and {\tt m135(1,3)}, for which the volume is $<3.08$ and the rank of the first homology with $\Z_2$ coefficients is $3$. (They are both of volume $2.666745\ldots$, and they have integer first homology isomorphic to $\Z_2 \oplus \Z_2 \oplus \Z_4$ and $\Z_2 \oplus \Z_4 \oplus \Z_4$ respectively.) There are no examples in that census for which the volume is $<3.08$ and the rank of the first homology with $\Z_2$ coefficients is $\ge4$. Thus there is still a substantial gap between our results and the known examples. However, the bound on the rank of $H_1(M;\Z_2)$ given in this paper seems to be better by orders of magnitude than what could be readily deduced by previously available methods. The proof of Theorem \ref{geom 11} relies on a purely topological result, Theorem \ref{top 11}, which states that if $M$ is a closed $3$-manifold which is simple (see \ref{simple def}), if $\pi_1(M)$ has a subgroup isomorphic to a genus-$g$ surface group for a given integer $g$, and if the rank of $H_1(M;\Z_2)$ is at least $4g-1$, then $M$ contains a connected incompressible closed surface of genus $g$. This may be regarded as a partial analogue of Dehn's lemma for $\pi_1$-injective genus-$g$ surfaces. Theorem \ref{geom 11} will be proved in Section \ref{geometric section} by combining Theorem \ref{top 11} with a number of deep geometric results. 
These include the Marden tameness conjecture, recently established by Agol \cite{agol} and by Calegari-Gabai \cite{cg}; a co-volume estimate for $3$-tame, $3$-free Kleinian groups due to Anderson, Canary, Culler and Shalen \cite[Proposition 8.1]{accs}; and a volume estimate for hyperbolic Haken manifolds recently proved by Agol, Storm and Thurston \cite{ast}. The results of \cite{ast} depend in turn on estimates developed by Perelman in his work \cite{perelman} on geometrization of $3$-manifolds. By refining the methods of this paper one can obtain improvements of Theorems \ref{top 11} and \ref{geom 11}. In particular, in the case $g=2$, the lower bound of $7$ for the rank of $H_1(M;\Z_2)$ in the hypothesis of Theorem \ref{top 11} can be replaced by $6$, and the upper bound of $6$ in the conclusion of Theorem \ref{geom 11} can be replaced by $5$. The relevant refinements will be explored systematically in \cite{second}. Our strategy for proving Theorem \ref{top 11} is based on the method of two-sheeted coverings used by Shapiro and Whitehead in their proof \cite{ShapiroWhitehead} of Dehn's Lemma. (This method was inspired by Papakyriakopoulos's tower construction \cite{Papa}, and was systematized by Stallings \cite{Stallingsloop}.) We consider a $\pi_1$-injective genus-$g$ singular surface in the $3$-manifold $M$, i.e. a map $\phi:K\to M$, where $K$ is a closed orientable genus-$g$ surface, and $\phi_\sharp$ is injective.
One can construct a ``tower'' \goodbreak $$ \xymatrix{ &&N_n\hskip8pt\ar@{^{(}->}[r]&M_n\hskip10pt\ar@{->}[dl]^{p_n}\\ &&N_{n-1}\ar@{^{(}->}[r]&M_{n-1}\ar@{->}[dl]^{\genfrac{}{}{0pt}{1}{p_{n-1}}{\vdots}}\\ &&\hbox to 36pt{\strut\hfill}&\hbox to 36pt{\strut\hfill}\ar@{->}[dl]^{p_{2}}\\ &&N_1\hskip8pt\ar@{^{(}->}[r]&M_{1}\hskip10pt\ar@{->}[dl]^{p_{1}}\\ K\ar@/_2pc/[rrr]^\phi\ar@{->}[rruuuu]^{\tilde\phi}&&N_0\hskip8pt\ar@{^{(}->}[r]&M_0\hskip10pt\\ \\} $$ where the $M_j$ are simple (\ref{simple def}) $3$-manifolds, $N_j$ is a simple $3$-dimensional submanifold of $M_j$ for $j=0,\ldots,n$, the $p_j:M_j\to N_{j-1}$ are two-sheeted covering maps, $\tilde\phi_*:H_1(K;\Z_2)\to H_1(N_n;\Z_2)$ is surjective, and the diagram commutes up to homotopy. In general this diagram may contain both closed and bounded manifolds, but we use ideas from \cite{shalenwagreich} to construct the tower in such a way that if $H_1(M;\Z_2)$ has rank $\ge 4g-1$, then $H_1(M_j;\Z_2)$ has rank $\ge 4g-2$ whenever $M_j$ is closed. We also use ideas developed in \cite{peripheralstructure} based on Simon's results \cite{simon} on compactification of covering spaces, to construct the tower in such a way that the (possibly empty and possibly disconnected) surface $\partial N_j$ is incompressible in $M_j$ for each $j\le n$. The manifold $N_n$ always has non-empty boundary. This is obvious if $\partial M_n\ne\emptyset$. If $M_n$ is closed then $H_1(M_n;\Z_2)$ has rank at least $4g-2$, whereas the surjectivity of $\tilde\phi_*:H_1(K;\Z_2)\to H_1(N_n;\Z_2)$ implies that the rank of $H_1(N_n;\Z_2)$ is at most $2g$. Since $M$ is simple, $\pi_1(M)$ contains no subgroup isomorphic to $\Z\times\Z$; hence $g\ge2$ and $2g<4g-2$. It follows that in this case $N_n$ is a proper submanifold of $M_n$, and hence $\partial N_n\ne\emptyset$. We in fact show, using elementary arguments based on Poincar\'e-Lefschetz duality, that if the map $\tilde\phi_*:H_2(K;\Z)\to H_2(N_n;\Z)$ is trivial, then $\partial N_n$ has a component $F$ of genus at most $g$.
In the case where $\tilde\phi_*:H_2(K;\Z)\to H_2(N_n;\Z)$ is non-trivial, we use Gabai's results from \cite{gabai} to show that $N_n$ contains a non-separating incompressible closed surface $F$ of genus at most $g$. The rest of the proof consists of showing that if a given $M_j$, with $0<j\le n$, contains a closed incompressible surface of genus at most $g$, then $N_{j-1}$ also contains such a surface. The surface in $N_{j-1}$ will be incompressible in $M_{j-1}$, as well as in $N_{j-1}$, because $\partial N_{j-1}$ is incompressible in $M_{j-1}$. It is at this step that we need to know that closed manifolds in the tower have first homology with $\Z_2$-coefficients of rank at least $4g-2$. Indeed, Proposition \ref{new improved old prop 3} implies that the existence of a closed incompressible surface of genus at most $g$ in a $2$-sheeted covering of a simple compact $3$-manifold $N$ implies the existence of such a surface in $N$ itself unless $N$ is closed and $H_1(N;\Z_2)$ has rank at most $4g-3$. Proposition \ref{new improved old prop 3} involves the notion of a ``book of $I$-bundles'' which we define formally in \ref{true book def}. Books of $I$-bundles in PL $3$-manifolds arise naturally as neighborhoods of ``books of surfaces'' (\ref{bosdef}). We may think of a book of surfaces as being constructed from a $2$-manifold with boundary $\hat\Pi$, whose components have Euler characteristic $\le0$, and a closed $1$-manifold $\Psi$, by attaching $\partial\hat\Pi$ to $\Psi$ by a covering map. The components of $\Psi$ and $\Pi=\inter\hat\Pi$ are respectively ``bindings'' and ``pages.'' A book of $I$-bundles comes equipped with a corresponding decomposition into ``pages'' which are $I$-bundles over surfaces, and ``bindings'' which are solid tori. (In the informal discussion that we give in this introduction, the extra structure defined by the decomposition will be suppressed from the notation.) 
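One elementary property of these objects will be used repeatedly in our Euler characteristic estimates. The bindings of a book of $I$-bundles are solid tori, and the pages meet the bindings in annuli; since solid tori and annuli have Euler characteristic zero, the Euler characteristic of a book of $I$-bundles is carried entirely by its pages. Thus if $W$ is a book of $I$-bundles with pages $P_1,\ldots,P_p$, then
$$\chi(W)=\sum_{i=1}^p\chi(P_i).$$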
With these notions as background we shall now sketch the proof of Proposition \ref{new improved old prop 3}. An incompressible surface $F$ in a two-sheeted covering space of $N$, if it is in general position, projects to $N$ via a map which has only double-curve singularities. After routine modifications one obtains a map from $F$ to $N$ with the additional property that its double curves are homotopically non-trivial. In particular, the image of such a map is a book of surfaces $X$. A regular neighborhood $W$ of $X$ in $N$ is then a book of $I$-bundles, which has Euler characteristic $\ge2-2g$ if $F$ has genus at most $g$. Using the simplicity of $N$ one can then produce a book of $I$-bundles $V$ with $W\subset V\subset N$ and $\chi(V)\ge2-2g$, such that each page of $V$ has strictly negative Euler characteristic. (This step is handled by Lemma \ref{make book}.) We now distinguish two cases. In the case where some page $P_0$ of $V$ has the property that $P_0\cap\partial V$ is contained in a single component of $\partial V$, we show that by splitting bindings of the book of surfaces $X$, one can produce an embedded (possibly disconnected) closed, orientable surface $S$ which is homologically non-trivial in $N$. Ambient surgery on $S$ in $N$ then produces a non-empty incompressible surface whose components have genus at most $g$. In the case where no such page $P_0$ exists, an Euler characteristic calculation shows that the boundary components of $V$ have genus at most $g$. In this case, ambient surgery on $\partial V$ produces an incompressible surface whose components have genus at most $g$. We show that this surface is non-empty unless $V$ carries $\pi_1(N)$.
But for a book of $I$-bundles $V$ whose Euler characteristic is at least $2-2g$, and whose pages are all of negative Euler characteristic, one can show that $H_1(V;\Z_2)$ has rank at most $4g-3$ (this is included in Lemma \ref{moosday}); so in the case where $V$ carries $\pi_1(N)$, the rank of $H_1(N;\Z_2)$ is at most $4g-3$. The details and background needed for the proof of Proposition \ref{new improved old prop 3} occupy Sections \ref{boib section 1}--\ref{down}. Section \ref{singularity section} provides the combinatorial background needed to construct the tower, while Sections \ref{homology section} and \ref{new section} provide the homological background. The application of Gabai's results mentioned above appears in Section \ref{new section}. The material on towers proper, and the proof of the main topological theorem and its corollary, are given in Section \ref{tower section}, and the geometric applications are given in Section \ref{geometric section}. The rest of this introduction will be devoted to indicating some conventions that will be used in the rest of the paper. \Number In general, if $X$ and $Y$ are subsets of a set, we denote by $X\setminus Y$ the set of elements of $X$ that do not belong to $Y$. In the case where we know that $Y\subset X$ and wish to emphasize this we will write $X-Y$ for $X\setminus Y$. \EndNumber \Number A {\it manifold} may have a boundary. If $M$ is a manifold, we shall denote the boundary of $M$ by $\partial M$ and its interior $M-\partial M$ by $\inter M$. In many of our results about manifolds of dimension $\le3$ we do not specify a category. These results may be interpreted in the category in which manifolds are topological, PL or smooth, and submanifolds are respectively locally flat, PL or smooth; these three categories are equivalent in these low dimensions as far as classification is concerned.
In much of the paper the proofs are done in the PL category, but the applications to hyperbolic manifolds in Section \ref{geometric section} are carried out in the smooth category. \EndNumber \Number\label{separating def} A (possibly disconnected) codimension-$1$ submanifold $S$ of a manifold $M$ is said to be {\it separating} if $M$ can be written as the union of two $3$-dimensional submanifolds $M_1$ and $M_2$ such that $M_1\cap M_2 = S$. \EndNumber \Number\label{abcd goldfish} We shall say that a map of topological spaces $f:\calx\to \caly$ is {\it $\pi_1$-injective} if for every path component $X$ of $\calx$, the map $f|X$ induces an injection from $\pi_1(X)$ to $\pi_1(Y)$, where $Y$ is the path component of $\caly$ containing $f(X)$. We shall say that a subset $A$ of a topological space $X$ is {\it $\pi_1$-injective} in $X$ if the inclusion map $A\to X$ is $\pi_1$-injective. \EndNumber \Number\label{oiler} If $X$ is a space having the homotopy type of a finite CW complex, the Euler characteristic of $X$ will be denoted by $\chi(X)$. We have $\chi(X)=\sum_{j\in\Z}(-1)^j\dim_FH_j(X;F)$ for {\it any} field $F$: the sum is independent of $F$ by virtue of the standard observation that it is equal to $\sum_{j\in\Z}(-1)^jc_j$, where $c_j$ denotes the number of $j$-cells in a finite CW complex homotopy equivalent to $X$. We shall often write $\barchi(X)$ as shorthand for $-\chi(X)$. \EndNumber \Number If $x$ is a point of a compact PL space $X$, there exist a finite simplicial complex $K$ and a PL homeomorphism $h:X\to|K|$ such that $v=h(x)$ is a vertex of $K$. If $L$ denotes the link of $v$ in $K$ then the PL homeomorphism type of the space $|L|$ depends only on $X$ and $x$, not on the choice of $K$ and $h$. We shall refer to $L$ as the {\it link of $x$ in $X$}, with the understanding that it is defined only up to PL homeomorphism. \EndNumber \Number\label{Pi notation} Suppose that $x$ is a point of a compact PL space $X$ and that $n\ge0$ is an integer.
The link of $x$ is PL homeomorphic to $S^{n-1}$ if and only if $x$ is an $n$-manifold point of $X$, i.e. some neighborhood of $x$ is piecewise-linearly homeomorphic to ${\bf R}^n$. If $X$ is a compact PL space of dimension at most $2$, we shall denote by $\Pi(X)$ the set of all $2$-manifold points of $X$. Note that $\Pi(X)$ is an open subset of $X$, and with its induced PL structure it is a PL $2$-manifold. Furthermore, $X-\Pi(X)$ is a compact PL subspace of $X$. \EndNumber \Number Let $F$ be a properly embedded orientable surface in an orientable $3$-manifold $M$. We define a {\it compressing disk} for $F$ in $M$ to be a disk $D\subset M$ such that $D\cap F=\partial D$, and such that $\partial D$ is not the boundary of a disk in $F$. It is a standard consequence of the loop theorem that $F$ is $\pi_1$-injective in $M$ if and only if there is no compressing disk for $F$ in $M$. A closed orientable surface $S$ contained in the interior of an orientable $3$-manifold $M$ will be termed {\it incompressible} if $S$ is $\pi_1$-injective in $M$ and no component of $S$ is a $2$-sphere. (We have avoided using the term ``incompressible'' for surfaces that are not closed.) \EndNumber \Number An {\it essential arc} in a $2$-manifold $F$ is a properly embedded arc in $F$ which is not the frontier of a disk. \EndNumber \Definitions\label{simple def} A $3$-manifold $M$ will be termed {\it irreducible} if every $2$-sphere in $M$ bounds a ball in $M$. We shall say that $M$ is {\it boundary-irreducible} if $\partial M$ is $\pi_1$-injective in $M$, or equivalently if, for every properly embedded disk $D\subset M$, the simple closed curve $\partial D$ bounds a disk in $\partial M$. We shall say that a $3$-manifold $M$ is {\it simple} if (i) $M$ is compact, connected, orientable, irreducible and boundary-irreducible; (ii) no subgroup of $\pi_1(M)$ is isomorphic to $\Z\times\Z$; and (iii) $M$ is not a closed manifold with finite fundamental group. 
\EndDefinitions \Number\label{simple covering} It is a theorem due to Meeks, Simon and Yau \cite{MSY} that a covering space of an irreducible orientable $3$-manifold is always irreducible. Given this result, it follows formally from our definition of simplicity that if a compact, orientable $3$-manifold $M$ is simple, then every connected finite-sheeted covering of $M$ is also simple. \EndNumber \Number\label{horizontal def} The unit interval $[0,1]$ will often be denoted by $I$. By an {\it $I$-bundle} we shall always mean a compact space equipped with a specific locally trivial fibration over some (often unnamed) base space, in which the fibers are homeomorphic to $[0,1]$. (The reader is referred to \cite[Chapter 10]{hempel} for a general discussion of $3$-dimensional $I$-bundles.) By a {\it Seifert fibered manifold} we shall always mean a compact $3$-manifold equipped with a specific Seifert fibration. In particular, the notion of a {\it fiber} of an $I$-bundle or a Seifert fibered manifold is well defined, although the fiber projection and base space will often not be explicitly named. A compact subset of an $I$-bundle or Seifert fibered space will be called {\it horizontal} if it meets each fiber in one point. A compact set will be called {\it vertical} if it is a union of fibers. If $\calp$ is an $I$-bundle, we define the {\it horizontal boundary} of $\calp$ to be the subset of $\calp$ consisting of all endpoints of fibers of $\calp$. We shall denote the horizontal boundary of $\calp$ by $\partial_h\calp$. In the case where the base of the $I$-bundle $\calp$ is a $2$-manifold $F$ (so that $\calp$ is a $3$-manifold), we define the {\it vertical boundary} of $\calp$ to be $p^{-1}(\partial F)$, where $p:\calp\to F$ denotes the bundle map. Note that in this case we have $\partial\calp=\partial_v\calp\cup\partial_h\calp$, and if $\calp$ is orientable then $\partial_v\calp$ is always a finite disjoint union of annuli. 
\EndNumber \Number The {\it rank} of a finitely generated group $\Gamma$ is the cardinality of a minimal generating set for $\Gamma$. In particular, the trivial group has rank $0$. A group $\Gamma$ is said to be {\it freely indecomposable} if $\Gamma$ is not the free product of two non-trivial subgroups. \EndNumber \Number If $V$ is a finite dimensional vector space over $\Z_2$ then the dimension of $V$ will be denoted $\rk V$. If $X$ is a topological space, we will set $\rk X = \rk H_1(X;\Z_2)$. \EndNumber \section{Books of \texorpdfstring{$I$}{I}-bundles} \label{boib section 1} \Definition A {\it generalized book of $I$-bundles} is a triple $\calw=(W,\calb,\calp)$, where $W$ is a (possibly empty) compact, orientable $3$-manifold, and $\calb,\calp\subset W$ are submanifolds such that \Bullets \item $\calb$ is a (possibly disconnected) Seifert fibered space, \item $\calp$ is an $I$-bundle over a (possibly disconnected) $2$-manifold, and every component of $\calp$ has Euler characteristic $\le0$, \item $W=\calb\cup\calp$, \item $\calb\cap\calp$ is the vertical boundary of $\calp$, and \item $\calb\cap\calp$ is vertical in the Seifert fibration of $\calb$. \EndBullets We shall denote $W$, $\calb$ and $\calp$ by $|\calw|$, $\calb_\calw $ and $\calp_\calw $ respectively. The components of $\calb_\calw $ will be called {\it bindings} of $\calw$, and the components of $\calp_\calw$ will be called its {\it pages}. The submanifold $\calp\cap\calb$, whose components are properly embedded annuli in $W$, will be denoted $\cala_\calw$. If $B$ is a binding of a generalized book of $I$-bundles $\calw$, we define the {\it valence} of $B$ to be the number of components of $\cala_\calw$ that are contained in $\partial B$. A generalized book of $I$-bundles $\calw$ will be termed {\it connected} if the manifold $|\calw|$ is connected. Likewise, $\calw$ will be termed {\it boundary-irreducible} if $|\calw|$ is boundary-irreducible. 
\EndDefinition \Definitions\label{true book def} A {\it book of $I$-bundles} is a generalized book of $I$-bundles $\calw$ such that \Bullets \item $|\calw|\ne\emptyset$, \item each binding of $\calw$ is a solid torus, and \item each binding of $\calw$ meets at least one page of $\calw$. \EndBullets If $B$ is a binding of a book of $I$-bundles $\calw$, there is a unique integer $d>0$ such that for every component $A$ of $\cala_\calw$ contained in $\partial B$, the image of the inclusion homomorphism $H_1(A;\Z)\to H_1(B;\Z)$ has index $d$ in $H_1(B;\Z)$. We shall call $d$ the {\it degree} of the binding $B$. \EndDefinitions \Lemma\label{no annular pages} Suppose that $\calw$ is a generalized book of $I$-bundles. Then there is a generalized book of $I$-bundles $\calw_0$ such that \Conclusions \item\label{good morrow} $|\calw_0|=|\calw|$, \item\label{i prithee discover} every page of $\calw_0$ has strictly negative Euler characteristic, and \item\label{steal purchase or borrow}every page of $\calw_0$ is a page of $\calw$. \EndConclusions \EndLemma \Proof Set $W=|\calw|$, $\calb=\calb_\calw$ and $\calp=\calp_\calw$. Let $\calq$ denote the union of all components $P$ of $\calp$ such that $\chi(P)=0$. Then $\calq$ is an $I$-bundle over a compact surface $A$ whose components are annuli and M\"obius bands, and $\calq\cap\calb$ is the induced $I$-bundle over $\partial A$. Hence every component $Q$ of $\calq$ is a solid torus, and $Q\cap\calb$ consists of either a single annulus of degree $2$ in $Q$, or of two annuli of degree $1$ in $Q$. Since every such annulus is also vertical in the Seifert fibration of $\calb$, it follows that this Seifert fibration may be extended to a Seifert fibration of the manifold $\calb_0=\calb\cup \calq$, in such a way that each component of $\calq$ contains either no singular fiber, or exactly one singular fiber of order $2$. 
Furthermore, since every component of $\calq$ meets $\calb$, every component of $\calb_0$ contains a component of $\calb$. The manifold $\calp_0=\calp-\calq$ is a union of components of $\calp$ and therefore inherits an $I$-bundle structure. It is now clear that $\calw_0 = (W,\calb_0,\calp_0)$ is a generalized book of $I$-bundles. Since $\calb_0\cup\calp_0=\calb\cup\calp=W$, conclusion (\ref{good morrow}) holds, and it follows from the definition of $\calq$ that $\calw_0$ satisfies conclusions (\ref{i prithee discover}) and (\ref{steal purchase or borrow}) of the lemma. \EndProof \Lemma\label{Seifert holes} Suppose that $\hat B$ is a connected, Seifert-fibered submanifold of a simple, closed, orientable $3$-manifold $M$. Then either \Conclusions \item\label{hole one} $\hat B$ is a solid torus, or \item\label{hole two} $\hat B$ is contained in a ball in $M$, or \item\label{hole three} some component of $M-\inter \hat B$ is a solid torus. \EndConclusions \EndLemma \Proof Since $M$ is simple and $\hat B$ is Seifert-fibered, we have $\hat B\ne M$, i.e. $\partial \hat B\ne\emptyset$. Since the components of $\partial \hat B$ are tori and $M$ is simple, $\partial \hat B$ cannot be $\pi_1$-injective in $M$. Hence there is a compressing disk $D$ for $\partial \hat B$ in $M$. If $D\subset \hat B$ then $\hat B$ is a boundary-reducible Seifert fibered space and hence (\ref{hole one}) holds. The other possibility is that $D\cap \hat B=\partial D$. In this case, let $V$ denote a regular neighborhood of $D$ relative to $M-\inter \hat B$. The boundary of the manifold $\hat B\cup V$ has a unique sphere component $S$. Since $M$ is irreducible, $S$ bounds a ball $\Delta\subset M$. We must have either $\Delta\supset \hat B$, which gives conclusion (\ref{hole two}), or $\inter \Delta\cap \hat B=\emptyset$; in the latter case, $\Delta\cup V$ is a solid torus component of $M-\inter \hat B$, and so (\ref{hole three}) holds.
\EndProof \Lemma\label{make book} Suppose that $M$ is a simple, closed, orientable $3$-manifold, and that $\calw$ is a connected generalized book of $I$-bundles such that $W=|\calw|\subset M$. Suppose that $\chi(W)<0$, and that $\calp_\calw $ is $\pi_1$-injective in $M$. Then there is a connected book of $I$-bundles $\calv$ with $V=|\calv|\subset M$, such that \Conclusions \item\label{a pair of pizza pies} $V\supset W$, \item\label{Euler doesn't care}$\chibar(V)=\chibar(W)$, \item\label{he does and he doesn't}$\chi(P)<0$ for every page $P$ of $\calv$, \item\label{wysiwyg} $\partial V$ is a union of components of $\partial W$, \item\label{little birdies' dirty feet} every component of $\overline{V-W}$ is a solid torus, \item\label{and me without a spoon} every page of $\calv$ is a page of $\calw$, and \item\label{mutilated monkey meat}for each page $P$ of $\calv$ we have $P\cap\partial V=P\cap\partial W$. \EndConclusions \EndLemma \Proof Let $\calw_0 = (W, \calb, \calp)$ be a generalized book of $I$-bundles satisfying conditions (\ref{good morrow})--(\ref {steal purchase or borrow}) of Lemma \ref{no annular pages}. Since each page of $\calw_0$ is also a page of $\calw$, the hypothesis implies that each page of $\calw_0$ is $\pi_1$-injective in $M$. Let $B$ be any binding of $\calw_0$. We will show that the Seifert fibers of $B$ are homotopically non-trivial in $M$. Since $\calw_0$ is connected and $\chi(|\calw_0|)<0$, the binding $B$ must meet some page $P$ of $\calw_0$. Let $A$ be one of the annulus components of $B\cap P$. Then $A$ is a component of the vertical boundary of $P$ and, since $\chi(P)<0$, it follows that $A$ is $\pi_1$-injective in $P$. Since $P$ is $\pi_1$-injective in $M$, it follows that $A$ is also $\pi_1$-injective in $M$. Recalling that the annulus $A$ is saturated in the Seifert fibration of $B$, we may conclude that each Seifert fiber of $B$ is homotopically non-trivial in $M$. 
Now for any binding $B$ of $\calw_0$ let us define $\hat B$ to be the union of $B$ with all of the solid torus components of $\overline{M-B}$. We will show that $\hat B$ is a Seifert fibered submanifold of $M$ such that $\hat B\cap|\calw_0| = B$. If $J$ is any solid torus component of $\overline{M-B}$ then no page of $\calw_0$ can be contained in $J$, since the pages are $\pi_1$-injective in $M$ and have negative Euler characteristic. Thus $\inter J$ must be disjoint from all of the pages of $\calw_0$. This implies that $\hat B\cap|\calw_0| = B$. If $F\subset\partial J$ is a fiber of the Seifert fibered space $B$ then, since $F$ is homotopically non-trivial in $M$, the simple closed curve $F\subset\partial J$ cannot be a meridian curve for the solid torus $J$. It follows that the Seifert fibration of $B$ may be extended to a Seifert fibration of $B\cup J$, and hence that $\hat B$ admits a Seifert fibration. Next we will show that $\hat B$ is, in fact, a solid torus. We know that $\hat B$ must satisfy one of the conditions (1)--(3) of Lemma \ref{Seifert holes}. Condition \ref{Seifert holes}(\ref{hole three}) is ruled out since, by construction, no component of $M-\inter \hat B$ is a solid torus. The fact that the Seifert fibers of $B$ are homotopically non-trivial in $M$ implies that the inclusion homomorphism $\pi_1(B)\to\pi_1(M)$ has non-trivial image and thus $B$ cannot be contained in a ball in $M$. This rules out condition \ref{Seifert holes}(\ref{hole two}). Thus we conclude that condition \ref{Seifert holes}(\ref{hole one}) must hold, i.e. that $\hat B$ is a solid torus. Since each binding of $\calw_0$ must meet some page, and since no page can be contained in a solid torus, we have that if $B_1$ and $B_2$ are distinct bindings of $\calw_0$, then $\hat B_1$ is disjoint from $\hat B_2$. We define $\calb'$ to be the union of the solid tori $\hat B$ as $B$ ranges over all bindings of $\calw_0$, and we set $V = \calb'\cup\calp$. We have $\calb'\cap|\calw_0| = \calb$.
It follows that $\calv = (V,\calb',\calp)$ is a book of $I$-bundles, and that every page of $\calv$ has strictly negative Euler characteristic. We shall now complete the proof by observing that $V$ satisfies Conclusions (\ref{a pair of pizza pies})--(\ref{mutilated monkey meat}) of the present lemma. Conclusions (\ref{a pair of pizza pies}), (\ref{wysiwyg}) and (\ref{little birdies' dirty feet}) are immediate from the construction of $V$, and they imply Conclusion (\ref{Euler doesn't care}). The pages of $\calv$ are the same as the pages of $\calw_0$, and each page of $\calw_0$ is a page of $\calw$ and has negative Euler characteristic. Hence Conclusions (\ref{he does and he doesn't}) and (\ref{and me without a spoon}) hold. Since $\partial W$ is the union of $\partial V$ with a collection of tori that are disjoint from all pages, it follows that $P\cap\partial V = P\cap\partial W$ for every page $P$ of $\calv$. This is Conclusion (\ref{mutilated monkey meat}). \EndProof Recall that in \ref{Pi notation} we defined $\Pi(X)\subset X$ to be the set of $2$-manifold points in an arbitrary compact PL space $X$ of dimension at most $2$, and we observed that $X-\Pi(X)$ is a compact PL subset of $X$. It follows that $\Pi(X)$ has the homotopy type of a compact PL space. In particular $\chi(\pi)$ is a well-defined integer for every component $\pi$ of $\Pi(X)$. \Definition\label{bosdef} We define a {\it book of surfaces} to be a compact PL space $X$ such that \Properties \item the link of every point $x\in X$ is PL homeomorphic to the suspension of some non-empty finite set $Z_x$; and \item for every component $\pi$ of $\Pi(X)$ we have $\chi(\pi)\le0$. \EndProperties The cardinality of the set $Z_x$ appearing in condition (1) is clearly uniquely determined by the point $x$. It will be called the {\it order} of $x$. \EndDefinition \Number Note that a point $x$ in a book of surfaces $X$ has order $2$ if and only if $x\in\Pi(X)$.
It also follows from the definition that if $X$ is a book of surfaces, the set $X-\Pi(X)$ is a compact PL $1$-manifold, which will be denoted by $\Psi(X)$. The components of $\Psi(X)$ and $\Pi(X)$ may be respectively thought of as {\it bindings} and {\it pages} of $X$. We also observe that if $M$ is a PL $3$-manifold and if $S_1$ and $S_2$ are closed surfaces in $\inter M$ which meet transversally, then $S_1\cup S_2$ is a book of surfaces. \EndNumber \Lemma\label{book structure} If $X$ is a book of surfaces, there exist a (possibly disconnected) compact PL surface $F$ and a PL map $r:F\to X$ such that \Conclusions \item $r|\inter F$ is a homeomorphism of $\inter F$ onto $\Pi(X)$, and \item $r|\partial F$ is a covering map from $\partial F$ to $\Psi(X)$. \EndConclusions \EndLemma \Proof Let us identify $X$ with $|K|$, where $K$ is some finite simplicial complex. After subdividing $K$ if necessary we may assume that for every closed simplex $\Delta$ of $K$ the set $\Delta\cap\Psi(X)$ is a (possibly empty) closed face of $\Delta$. Let $\cald$ denote the abstract disjoint union of all the closed $2$-simplices of $K$, and let $i:\cald\to X$ denote the map which is the inclusion on each closed $2$-simplex. For each point $z\in\cald$ let $\Delta_z$ denote the closed $2$-simplex containing $z$. We define a relation $\sim$ on $\cald$ by writing $z\sim w$ if and only if (i) $\Delta_z\cap\Delta_w\not\subset\Psi(X)$ and (ii) $i(z)=i(w)$. It is straightforward to show that $\sim$ is an equivalence relation. The quotient space $F=\cald/\sim$ inherits a PL structure from $\cald$. The definition of $\sim$ implies that there is a unique map $r:F\to X$ such that $r\circ q=i$, where $q:\cald\to F$ is the quotient map, and that $r$ maps $E=r^{-1}\Pi(X)$ homeomorphically onto $\Pi(X)$.
If $x$ is a point of $\Psi(X)$, then since $X$ is a book of surfaces, there exist a neighborhood $A$ of $x$ in $\Psi(X)$, and a neighborhood $V$ of $x$ in $X$, such that $A$ is a PL arc, $V$ is a union of PL disks $D_1\cup\cdots\cup D_m$, where $m$ is the order of $x$ in $X$, and $D_i\cap D_j=A$ whenever $i\ne j$. The definition of $\sim$ implies that $r^{-1}(V)$ is a disjoint union of PL disks $\widetilde D_1,\ldots,\widetilde D_m$ such that $r$ maps $\widetilde D_i$ homeomorphically onto $D_i$ for $i=1,\ldots,m$. Hence $F$ is a PL surface with interior $E$ and boundary $r^{-1}(\Psi(X))$, and $r|\partial F:\partial F\to\Psi(X)$ is a covering map. \EndProof \Lemma\label{neighborhood is a book} Suppose that $M$ is an orientable PL $3$-manifold and that $X\subset\inter M$ is a book of surfaces. Then there is a book of $I$-bundles $\calw$ such that \Conclusions \item \label{reg nbhd}$|\calw|=W$ is a regular neighborhood of $X$; \item \label{more reg nbhd}$\calb_\calw$ is a regular neighborhood of $\Psi(X)$; \item \label{gopher}for every page $P$ of $\calw$, the set $X\cap P$ is a section of the $I$-bundle $P$; and \item\label{twofer} $\calp_\calw$ is a regular neighborhood in $M$ of a deformation retract of $\Pi(X)$. \EndConclusions \EndLemma \Proof Let $\calb$ be a regular neighborhood of $\Psi(X)$ in $M$ such that $N=\calb\cap X$ is a regular neighborhood of $\Psi(X)$ in the PL space $X$. Every component of $\calb$ is a solid torus. Since $\Pi(X)$ is an open $2$-manifold, $Y=X\cap\overline{M-\calb}$ is a compact $2$-manifold and a deformation retract of $\Pi(X)$. In particular, in view of condition (2) in the definition of a book of surfaces, every component of $Y$ has Euler characteristic $\le0$. Let $\calp$ be a regular neighborhood of $Y$ in $\overline{M-\calb}$. Then $W=\calb\cup\calp$ is a regular neighborhood of $X$ in $M$. We may give $\calp$ the structure of an $I$-bundle over $Y$ in such a way that $Y$ is identified with a section of the bundle.
We have $\calp\cap\calb=\partial_v\calp$, and $\chi(P)\le0$ for every component $P$ of $\calp$. Let $F$ be the surface, and $r:F\to X$ the map, given by Lemma \ref{book structure}. We have $N=r(C)$, where $C$ is a collar neighborhood of $\partial F$ in $F$. Now if $A$ is any component of $\partial_v\calp$, then $A\cap Y$ is a component of $\partial Y$ and therefore cobounds an annulus component of $C$ with some component $\widetilde \psi_A$ of $\partial F$. It follows from \ref{book structure} that $r|\widetilde \psi_A$ is a covering map of some degree $d_A$ to some component $\psi_A$ of $\Psi(X)$. The annulus $A$ lies in the boundary of the component $B_A$ of $\calb$ containing $\psi_A$, and the (unsigned) degree of $A$ in the solid torus $B_A$ is $d_A$. In particular, every component of $\partial_v\calp$ has non-zero degree in the component of $\calb$ containing it. This implies that $\calw=(W,\calb,\calp)$ is a book of $I$-bundles. Each page $P$ of $\calw$ was constructed as an $I$-bundle over a component $Y_0$ of $Y$, where $Y_0$ is identified with a section of the bundle. Since $Y_0 = X\cap P$, Conclusion (\ref{gopher}) of the lemma follows. Conclusions (\ref{reg nbhd}), (\ref{more reg nbhd}) and (\ref{twofer}) are immediate from the construction of $\calw$. \EndProof \Lemma\label{oinksday} Suppose that $\calw$ is a book of $I$-bundles, and let $p$ denote the number of pages of $\calw$. Then $$\rk H_2(|\calw|;\Z_2)\le p.$$ \EndLemma \Proof It is most natural to prove a very mild generalization: if $\calw$ is a generalized book of $I$-bundles whose bindings are all solid tori, and if $p$ denotes the number of pages of $\calw$, then $\rk H_2(|\calw|;\Z_2)\le p$. We set $W=|\calw|$ and use induction on $p$. If $p=0$ then the components of $W$ are solid tori and hence $\rk H_2(W)=0$. If $p>0$, choose a page $P$ of $\calw$, and set $W'=\overline{W-P}$ and $\calp'=\calp_\calw-P$.
Then $\calp'$ inherits an $I$-bundle structure from $\calp$, and $\calw'=(W',\calb,\calp')$ is a generalized book of $I$-bundles, with $p-1$ pages, whose bindings are all solid tori. By the induction hypothesis we have $\rk H_2(W')\le p-1$. On the other hand, if $F$ denotes the base surface of the $I$-bundle $P$, we have $$H_2(W,W';\Z_2)\cong H_2(P,\partial_vP;\Z_2)\cong H_2(F,\partial F;\Z_2)$$ and hence $\rk H_2(W,W')=1$. It follows that $$\rk H_2(W)\le\rk H_2(W')+\rk H_2(W,W')\le p.$$ \EndProof \Lemma\label{moosday} If $\calw$ is a book of $I$-bundles, and if every page of $\calw$ has strictly negative Euler characteristic, we have $$\rk(|\calw|)\le2\barchi(|\calw|)+1.$$ \EndLemma \Proof Set $W=|\calw|$. By hypothesis we have $\chibar(P)\ge1$ for every page $P$ of $\calw$. Hence if $P_1,\ldots,P_p$ denote the pages of $\calw$, we have $$\chibar(W)=\sum_{i=1}^p\chibar(P_i)\ge p.$$ According to Lemma \ref{oinksday} we have $$\rk H_2(W;\Z_2)\le p\le\chibar(W).$$ Now $W$ is a connected $3$-manifold with non-empty boundary. Hence $\rk H_0(W;\Z_2)=1$, and $H_j(W;\Z_2)=0$ for each $j>2$. In view of \ref{oiler}, we have $$\chibar(W)=\rk H_1(W;\Z_2)-\rk H_2(W;\Z_2)-1.$$ Hence $$\rk(W)=\rk H_1(W;\Z_2)=\chibar(W)+\rk H_2(W;\Z_2)+1\le2\chibar(W)+1.$$ \EndProof \section {Compressing submanifolds} \label{compressing section} \Definition\label{complexity def} If $\cals$ is a closed (possibly empty or disconnected) surface, we define a non-negative integer $\kappa(\cals)$ by $$\kappa(\cals)=\sum_S(1+\genus(S)^2),$$ where $S$ ranges over the components of $\cals$. \EndDefinition \Lemma\label{pere tranquille} Let $\cals$ be a closed (possibly empty or disconnected) surface, let $A\subset \cals$ be a homotopically non-trivial annulus, and let $\cals'$ be the surface obtained from the compact surface $\overline{\cals-A}$ by attaching disks $D_1$ and $D_2$ to its two boundary components. Then $\kappa(\cals')<\kappa(\cals)$. \EndLemma \Proof Let us index the components of $\cals$ as $S_0,\ldots,S_n$, where $n\ge0$ and $A\subset S_0$.
If $S_0-A$ is connected, the components of $\cals'$ are $S_0',S_1,\ldots,S_n$, where $S_0'=(S_0-A)\cup D_1\cup D_2$. We then have $\genus S_0'=(\genus S_0)-1$, so that $\kappa(\cals')<\kappa(\cals)$. If $S_0-A$ is disconnected, then $(S_0-A)\cup D_1\cup D_2$ has two components $S_0'$ and $S_0''$. If we denote the respective genera of $S_0$, $S_0'$ and $S_0''$ by $g$, $g'$ and $g''$, we have $g=g'+g''$; and since $A$ is homotopically non-trivial in $S_0$, both $g'$ and $g''$ are strictly positive. Since $1+g^2=1+(g')^2+(g'')^2+2g'g''$ and $2g'g''\ge2$, it follows that $(1+(g')^2)+(1+(g'')^2)<1+g^2$, and we again deduce that $\kappa(\cals')<\kappa(\cals)$. \EndProof \Number\label{des moines} Recall that a connected $3$-manifold $H$ is called a {\it compression body} if it can be constructed from a product $T\times[-1,1]$, where $T$ is a connected, closed, orientable $2$-manifold, by attaching finitely many $2$- and $3$-handles to $T\times\{-1\}$. One defines $\partial_+H$ to be the submanifold $T\times\{1\}$ of $\partial H$, and one defines $\partial_-H$ to be $\partial H-\partial_+ H$. \EndNumber \Number\label{waterloo} If $H$ is a connected compression body, it is clear that $\partial_+H$ is connected, and that for each component $F$ of $\partial_-H$ we have $\genus(F)\le\genus(\partial_+H)$. \EndNumber \Number\label{cedar rapids} It is a standard observation that a connected compression body $H$ with $\partial_-H=\emptyset$ is a handlebody. \EndNumber \Number\label{davenport} Another standard observation is that any connected compression body $H$ with $\partial_-H\ne\emptyset$ can be constructed from a product $S\times[-1,1]$, where $S$ is a possibly disconnected, closed, orientable $2$-manifold, by attaching $1$- and $2$-handles to $S\times\{1\}$. One then has $\partial_-H=S\times\{-1\}$. An immediate consequence of this observation is that if $H$ is a connected compression body then $\partial_-H$ is $\pi_1$-injective in $H$.
\EndNumber \Number\label{sioux city} More generally, we shall define a {\it compression body} to be a compact, possibly disconnected $3$-manifold $\calh$ such that each component of $\calh$ is a compression body in the sense defined above. We define $\partial_+\calh=\bigcup_H\partial_+H$ and $\partial_-\calh=\bigcup_H\partial_-H$, where $H$ ranges over the components of $\calh$. \EndNumber \Proposition\label{new old prop 2} Let $N$ be a compact orientable, irreducible $3$-manifold, and let $V$ be a compact, connected, non-empty $3$-submanifold of $\inter N$. Suppose that $\overline{N-V}$ is $\pi_1$-injective in $N$. Then at least one of the following conditions holds: \Conclusions \item\label{gone the way of brother tom}$V$ is contained in a ball in $N$; or \item\label{sister jenny's turn} $\partial V\ne\emptyset$, and there exists a connected, incompressible closed surface in $N$ whose genus is at most the maximum of the genera of the components of $\partial V$; or \item\label{mama's aim is bad} $N$ is closed and every component of $\overline{N-V}$ is a handlebody. \EndConclusions \EndProposition \Proof First note that if $V=N$ then conclusion (\ref{mama's aim is bad}) holds. (The hypothesis $V\subset\inter N$ implies that $N$ is closed, and the other assertion of (\ref{mama's aim is bad}) is vacuously true.) Hence we may assume that $V\ne N$, so that $\partial V\ne\emptyset$. Let $\calc$ denote the set of all (possibly disconnected) compression bodies $\calh\subset N$ such that $\calh\cap V=\partial_+\calh=\partial V$. Note that a regular neighborhood of $\partial V$ relative to $\overline{N-V}$ is an element of $\calc$, and hence that $\calc\ne\emptyset$. Let us fix an element $\calh$ of $\calc$ such that (in the notation of \ref{complexity def}) we have $\kappa(\partial_-\calh)\le\kappa(\partial_-\calh')$ for every $\calh'\in\calc$. Note that $V\cup \calh$ is connected since $\calh\in\calc$. Consider first the case in which $\partial_- \calh=\emptyset$. 
In this case, it follows from \ref{cedar rapids} that every component of $\calh$ is a handlebody, and we have $\partial \calh=\partial_+\calh=\partial V$. Since $N$ is connected and $\partial V\ne\emptyset$, we must have $\calh=\overline{N-V}$. In particular $N$ must be closed. Thus conclusion (\ref{mama's aim is bad}) of the proposition holds in this case. Now consider the case in which some component $S$ of $\partial_-\calh$ is a $2$-sphere. By irreducibility, $S$ bounds a ball $B\subset N$. Since $V\cup \calh$ is connected, we have either $V\cup \calh\subset B$ or $B\cap(V\cup \calh)=\partial B$. If $V\cup \calh\subset B$, then in particular conclusion (\ref{gone the way of brother tom}) of the proposition holds. If $B\cap(V\cup \calh)=\partial B$, then $\calh'\doteq \calh\cup B$ is obtained from $\calh$ by attaching a $3$-handle to $\partial_-\calh$, and hence $\calh'\in\calc$ (cf. \ref{des moines}). But we have $\partial_-\calh'=\partial_-\calh-S$, and it follows from Definition \ref{complexity def} that $\kappa(\partial_-\calh')=\kappa(\partial_-\calh)-1$. This contradicts the minimality of $\kappa(\partial_-\calh)$. There remains the case in which $\partial_-\calh\ne\emptyset$, and every component of $\partial_-\calh$ has positive genus. Let us fix a component $Z$ of $\overline{N-V}$ which contains at least one component of $\partial_-\calh$. Let us set $F=Z\cap\partial_-\calh$. Then $F$ is a non-empty (and possibly disconnected) closed surface in $\inter Z$, and each component of $F$ has positive genus. We claim that $F$ is incompressible in $Z$. Suppose to the contrary that $F$ is compressible in $Z$. Then there is a disk $D\subset\inter Z$ such that $D\cap F=\partial D$, and such that $\partial D$ is a homotopically non-trivial simple closed curve in $F$. Since $D\subset\inter Z\subset N-V$, we have $D\cap\partial_+\calh=\emptyset$. Furthermore, since $D\subset Z$, we have $D\cap\partial_-\calh=D\cap( Z\cap\partial_-\calh)=D\cap F=\partial D$. Hence $D\cap\partial \calh=\partial D$.
It follows that either $D\subset \calh$ or $D\cap \calh=\partial D$. If $D\subset \calh$, let $H_0$ denote the component of $\calh$ containing $D$, and let $F_0\subset\partial_-H_0$ denote the component of $F$ containing $\partial D$. Since $\partial D$ is homotopically non-trivial it follows that the inclusion homomorphism $\pi_1(F_0)\to\pi_1(H_0)$ has non-trivial kernel. This contradicts \ref{davenport}. If $D\cap \calh=\partial D$, we fix a regular neighborhood $E$ of $D$ relative to $\overline{Z-\calh}$. Then $\calh'\doteq \calh\cup E$ is obtained from $\calh$ by attaching a $2$-handle to $\partial_-\calh$, and hence $\calh'\in\calc$ (cf. \ref{des moines}). The surface $\partial_-\calh'$ has the form $((\partial_-\calh)-A)\cup D_1\cup D_2$, where $A\subset\partial_-\calh$ is a homotopically non-trivial annulus, and $D_1$ and $D_2$ are disjoint disks in $N$ such that $(D_1\cup D_2)\cap\partial_-\calh=\partial A$. It therefore follows from Lemma \ref{pere tranquille} that $\kappa(\partial_-\calh')<\kappa(\partial_-\calh)$. This contradicts the minimality of $\kappa(\partial_-\calh)$, and the incompressibility of $F$ in $Z$ is proved. Since $Z$ is $\pi_1$-injective in $N$ by hypothesis, it now follows that $F$ is incompressible in $N$. Our choice of $Z$ guarantees that $F\ne\emptyset$. Choose any component $F_1$ of $F$, and let $H_1$ denote the component of $\calh$ containing $F_1$. By \ref{waterloo}, $\genus(F_1)$ is at most the genus of the connected surface $\partial_+H_1$. But $\partial_+H_1$ is a component of $\partial V$ since $\calh\in\calc$, and so $\genus(F_1)$ is at most the maximum of the genera of the components of $\partial V$. Hence conclusion (\ref{sister jenny's turn}) of the proposition holds in this case.
\EndProof \section{Transporting surfaces downstairs} \label{down} \Lemma\label{the chalice in the palace} Let $M$ be a simple, compact, orientable $3$-manifold, let $p:\widetilde M\to M$ be a $2$-sheeted covering, and let $\tau:\widetilde M\to\widetilde M$ denote the non-trivial deck transformation. Suppose that $\widetilde M$ contains a closed, incompressible surface $F_0$ of positive genus. Then $F_0$ is ambiently isotopic to a surface $F$ such that $F$ and $\tau(F)$ meet transversally, and every component of $F\cap\tau(F)$ is a homotopically non-trivial simple closed curve in $\widetilde M$. \EndLemma \Proof Let $\calf$ denote the collection of all surfaces $S\subset \widetilde M$ such that $S$ is isotopic to $F_0$ and $S$ meets $\tau(S)$ transversely. Choose a surface $F\in\calf$ so that the number of components of $F\cap\tau(F)$ is minimal. We will show that every component of $F\cap\tau(F)$ is a homotopically non-trivial simple closed curve. Suppose there exists a homotopically trivial component $\gamma$ of $F\cap\tau(F)$. Then, since $F$ is incompressible in $\widetilde M$, the simple closed curve $\gamma$ must bound disks $D\subset F$ and $D'\subset \tau(F)$. We assume, without loss of generality, that the disk $D'$ is innermost on $\tau(F)$ in the sense that $ D'\cap F = \gamma$. This implies, in particular, that $D\cup D'$ is an embedded $2$-sphere in $\widetilde M$. Since $\widetilde M$ is irreducible (being a covering space of the irreducible manifold $M$), the $2$-sphere $D\cup D'$ bounds a ball $B$ in $\widetilde M$. We may observe at this point that the curve $\gamma$ cannot be invariant under $\tau$. Otherwise, since $D'$ is the unique disk on $\tau(F)$ bounded by $\gamma$, it would follow that $\tau(D)=D'$, and hence that the sphere $D\cup D'$ is invariant under $\tau$. Since $\widetilde M$ contains an incompressible surface, it is not homeomorphic to $S^3$, and therefore $B$ is the unique $3$-ball bounded by $D\cup D'$.
Thus the assumption that $\gamma$ is invariant implies that the ball $B$ is invariant under the fixed-point-free map $\tau$, contradicting the Brouwer Fixed Point Theorem. This shows that $\gamma$ is not invariant under $\tau$. It follows, since $D'$ is innermost, that $D'$ is disjoint from its image under $\tau$. Now let $V$ be a regular neighborhood of $B$, chosen so that $V\cap F$ is a regular neighborhood of $D$ and $V\cap \tau(F)$ is a regular neighborhood of $D'$. The disk $\tau(F)\cap V$ divides $V$ into two balls, one of which, say $U$, is disjoint from the interior of $D$. Since $D'\cap \tau(D') = \emptyset$, we may assume without loss of generality that $V$ has been chosen to be small enough so that $U\cap\tau(U) = \emptyset$. Let $E$ denote the disk in $\partial U$ which is bounded by $F\cap\partial U$ and which is disjoint from $\tau(F)$. We set $A =\overline{F\setminus U}$ and consider the surface $F' = A\cup E$, which is clearly isotopic to $F$ by an isotopy supported in $V$. We will show that $F'\cap\tau(F') \subset (F\cap \tau(F)) - \gamma$. We write $F'\cap\tau(F') = (A\cup E)\cap(\tau(A)\cup\tau(E))$ as the union of the four sets $A\cap\tau(A)$, $A\cap\tau(E)$, $E\cap\tau(A)$ and $E\cap\tau(E)$. We have $A\cap\tau(A)\subset F\cap\tau(F)-\gamma$. Since $E\subset U$ and $U\cap\tau(U)=\emptyset$ we have $E\cap\tau(E)=\emptyset$. The sets $E$ and $\tau(F)\supset\tau(A)$ are disjoint by construction, and hence $E\cap\tau(A) = \emptyset$. Finally, $A\cap\tau(E)=\tau(E\cap\tau(A)) = \emptyset$. We have shown that $F'\cap\tau(F') \subset (F\cap \tau(F)) - \gamma$, and hence that $F'\cap\tau(F')$ has fewer components than $F\cap \tau(F)$. This contradicts the choice of $F$, and completes the proof of the lemma. \EndProof \Lemma\label{the brew that is true} Let $N$ be a simple, compact, orientable $3$-manifold, let $p: \widetilde N \to N$ be a $2$-sheeted covering, and let $\tau: \widetilde N \to \widetilde N$ denote the non-trivial deck transformation.
Suppose that $F\subset \widetilde N$ is a closed, incompressible surface such that $F$ and $\tau(F)$ meet transversally, and every component of $F\cap\tau(F)$ is a homotopically non-trivial simple closed curve in $\widetilde N$. Then $N-p(F)$ is $\pi_1$-injective in $N$. \EndLemma \Proof Set $F_1=\tau(F)$, so that $F_1$ is incompressible in $\widetilde N$. Set $C=F\cap F_1$. Let $\widetilde N'$ denote the $3$-manifold obtained by splitting $\widetilde N$ along $F$, and let $F_1'$ denote the surface obtained by splitting $F_1$ along $C$. Then $\widetilde N$ and $F_1$ may be regarded as quotient spaces of $\widetilde N'$ and $F_1'$, and $F_1'$ is naturally identified with a properly embedded surface in $\widetilde N'$. We have a commutative diagram $$\xymatrix{ F_1' \ar[r] \ar[d] & F_1 \ar[d] \\ \widetilde N' \ar[r] & \widetilde N \\ }$$ where the horizontal maps are quotient maps and the vertical maps are inclusions. The inclusion $F_1\to\widetilde N$ is $\pi_1$-injective because $F_1$ is incompressible in $\widetilde N$, and the quotient map $F_1'\to F_1$ is $\pi_1$-injective because the components of $C$ are homotopically non-trivial. By commutativity of the diagram it follows that the inclusion $F_1'\to\widetilde N'$ is $\pi_1$-injective. Now let $\widetilde N''$ denote the $3$-manifold obtained by splitting $\widetilde N'$ along $F_1'$. Since the inclusion $F_1'\to\widetilde N'$ is $\pi_1$-injective, the quotient map $\widetilde N''\to\widetilde N'$ is also $\pi_1$-injective. On the other hand, the quotient map $\widetilde N'\to\widetilde N$ is $\pi_1$-injective because $F$ is incompressible in $\widetilde N$. Hence the composite quotient map $\widetilde N''\to\widetilde N$ is $\pi_1$-injective. It follows that the inclusion map $\widetilde N-(F\cup F_1)\to\widetilde N$ is $\pi_1$-injective. Now consider any component $Z$ of $N-p(F)$. Choose a component $\widetilde Z$ of $p^{-1}(Z)$.
Then $\widetilde Z$ is a component of $\widetilde N-(F\cup F_1)$, and hence the inclusion $\widetilde Z\to\widetilde N$ is $\pi_1$-injective. Thus in the commutative diagram $$\xymatrix{ \pi_1(\widetilde Z) \ar[r] \ar[d] & \pi_1(\widetilde N) \ar[d] \\ \pi_1(Z) \ar[r] & \pi_1(N) \\ }$$ the inclusion homomorphism $\pi_1(\widetilde Z)\to \pi_1(\widetilde N)$ is injective, while the vertical homomorphisms are induced by covering maps and are therefore also injective. Since the image of $\pi_1(\widetilde Z)$ has index at most $2$ in $\pi_1(Z)$, the kernel of the inclusion homomorphism $\pi_1(Z)\to \pi_1(N)$ has order at most $2$. But $\pi_1(Z)$ is torsion-free because $N$ is simple. Hence $\pi_1(Z)\to \pi_1(N)$ is injective, as asserted by the lemma. \EndProof \Lemma\label{new lemma for an old prop} Suppose that $N$ is a simple, compact, orientable $3$-manifold, that $p: \widetilde N \to N$ is a $2$-sheeted covering, that $g\ge2$ is an integer, and that $\widetilde N$ contains a closed, incompressible surface of genus $g$. Then there exist a connected book of $I$-bundles $\calv$ with $V=|\calv|\subset N$, and a closed, orientable (possibly disconnected) surface $S\subset \inter V$ such that \Conclusions \item\label{you can go by foot}$\chibar(V)=\chibar(S)=2g-2$; \item\label{you can go by cow}every page of $\calv$ has strictly negative Euler characteristic; \item\label{i'm injective}$\calp_\calv$ is $\pi_1$-injective in $N$; \item\label{you're injective}$N-V$ is $\pi_1$-injective in $N$; \item\label{square hole}no component of $S$ is a sphere; and \item\label{marvin k mooney}for every page $P$ of $\calv$, the set $S\cap P$ is a section of the $I$-bundle $P$. \EndConclusions \EndLemma \Proof Let $\tau:\widetilde N\to\widetilde N$ denote the non-trivial deck transformation of $p$. According to Lemma \ref{the chalice in the palace}, $\widetilde N$ contains a closed, incompressible surface $F$ of genus $g$ such that $F$ and $\tau(F)$ meet transversally, and every component of $F\cap\tau(F)$ is a homotopically non-trivial simple closed curve in $\widetilde N$.
It follows that $q=p|F:F\to N$ is an immersion with at most double-curve singularities. The map $q_\sharp:\pi_1(F)\to\pi_1(N)$ is injective because $F$ is incompressible in $\widetilde N$ and $p:\widetilde N\to N$ is a covering map. Let us set $X=q(F)$, and let $C\subset X$ denote the union of all double curves of $q$. Since the components of $C$ are homotopically non-trivial in $N$ and hence in $X$, the set $\widetilde C=q^{-1}(C)$ is a disjoint union of homotopically non-trivial simple closed curves in $F$. Hence $F-\widetilde C$ is $\pi_1$-injective in $F$, and each of its components has non-positive Euler characteristic. Since $q_\sharp:\pi_1(F)\to\pi_1(N)$ is injective it follows that $q|(F-\widetilde C):(F-\widetilde C)\to N$ is $\pi_1$-injective. The set $F-\widetilde C$ is mapped homeomorphically onto $X-C$ by $q$. In particular, each component of $X-C$ has non-positive Euler characteristic. Furthermore, since $q|(F-\widetilde C):(F-\widetilde C)\to N$ is $\pi_1$-injective, it now follows that $X-C$ is $\pi_1$-injective in $N$. In the notation of (\ref{Pi notation}), we have $\Pi(X)=X-C$, and the link in $X$ of every point of $C$ is homeomorphic to the suspension of a four-point set. Since every component of $X-C$ has non-positive Euler characteristic, it follows from Definition \ref{bosdef} that $X$ is a book of surfaces. Since each component of $C$ is a simple closed curve, we have $\barchi(X)=\barchi(F)=2g-2$. Let $W$ denote a regular neighborhood of $X$ in $N$. According to Lemma \ref{neighborhood is a book}, we may write $W=|\calw|$ for some book of $I$-bundles $\calw$ in such a way that Conclusions (\ref{more reg nbhd})--(\ref{twofer}) of Lemma \ref{neighborhood is a book} hold. Since $X-C$ is $\pi_1$-injective in $N$, it follows from Conclusion (\ref{twofer}) of Lemma \ref{neighborhood is a book} that $\calp_\calw$ is $\pi_1$-injective in $N$.
Since $\chi(W)=\chi(X)=2-2g<0$, and since $\calp_\calw$ is $\pi_1$-injective in $N$, it follows from Lemma \ref{make book} that there is a connected book of $I$-bundles $\calv$ with $V=|\calv|\subset N$, such that Conclusions (\ref{a pair of pizza pies})--(\ref{mutilated monkey meat}) of Lemma \ref{make book} hold. Conclusion (\ref{Euler doesn't care}) of Lemma \ref{make book} gives $\chibar(V)=\chibar(W)=\chibar(X)$, so that \Equation\label{a high wind in the attic} \chibar(V)=2g-2. \EndEquation It follows from Conclusions (\ref{a pair of pizza pies}) and (\ref{and me without a spoon}) of Lemma \ref{make book} that every binding of $\calw$ is contained in a binding of $\calv$. Since by Conclusion (\ref{more reg nbhd}) of Lemma \ref{neighborhood is a book} we have $C\subset\inter\calb_\calw$, it follows that $C\subset\inter\calb_\calv$. Let $\calu$ denote a regular neighborhood of $C$ in $\inter\calb_\calv$. We may suppose $\calu$ to be chosen so that $\partial \calu$ meets $\Pi(X)$ transversally, and each component of $\calu\cap X$ is homeomorphic to $\text{\large +}\times S^1$, where $\text{\large +}$ denotes a cone on a four-point set. Set $X'=\overline{X-(\calu\cap X)}$ and $F'=F\cap q^{-1}(X')$. Then $F'$ and $X'$ are (possibly disconnected) compact $2$-manifolds with boundary, and $q'=q|F'$ maps $F'$ homeomorphically onto $X'$. Let us fix an orientation of $F$, so that $F'$ inherits an orientation, and define an orientation of $X'$ by transporting the orientation of $F'$ via $q$. Let $U_1,\ldots,U_m$ denote the components of $\calu$. We set $B_i=X\cap\partial U_i$. Each component $\beta$ of $B_i$ is a boundary component of $X'$ and hence has an orientation induced from the orientation of $X'$, which determines a generator of $H_1(U_i;\Z)$ via the inclusion isomorphism $H_1(\beta;\Z)\to H_1(U_i;\Z)$. We shall say that two components of $B_i$ are {\it similar} if they determine the same generator of $H_1(U_i;\Z)$ via this construction.
The set $(\partial U_i)-B_i$ has four components. Their closures are annuli, which we shall call {\it complementary annuli}. We shall say that two components of $ B_i$ are {\it adjacent} if their union is the boundary of a complementary annulus, and {\it opposite} otherwise. If $\beta$ and $\beta'$ are opposite components of $ X\cap\partial U_i$, then $q^{-1}(\beta)$ and $q^{-1}(\beta')$ form the boundary of an annulus $A$ in $F$, which is mapped homeomorphically by $q$ to an embedded annulus in $U_i$. Since the orientation of $F'$ is the restriction of an orientation of $F$, the induced orientations of $q^{-1}(\beta)$ and $q^{-1}(\beta')$ determine different generators of $H_1(A;\Z)$. In view of our definitions it follows that opposite components of $ B_i$ are dissimilar. Let us call a complementary annulus {\it bad} if its boundary curves are similar, and {\it good} otherwise. If $\beta$ is any component of $B_i$, the two components of $B_i$ adjacent to $\beta$ are opposite each other; hence exactly one of them is similar to $\beta$. This shows that $\beta$ is contained in the boundary of exactly one bad annulus and one good annulus. We conclude that $\partial U_i$ contains exactly two good annuli, say $A_i$ and $A_i'$, and that $A_i\cap A_i'=\emptyset$. The set $$S=(X-(X\cap\calu))\cup(A_1\cup\cdots\cup A_m)\cup(A'_1\cup\cdots\cup A'_m)$$ is a (possibly disconnected) compact PL $2$-manifold embedded in $V$. Since $A_i$ and $A_i'$ are good annuli, the orientation of $X'$ extends to an orientation of $S$. In particular $S$ is orientable. We shall show that Conclusions (\ref{you can go by foot})--(\ref {marvin k mooney}) of the present lemma hold when $\calv$ and $S$ are defined as above. According to (\ref{a high wind in the attic}) we have $\chibar(V)=2g-2$. It follows from the construction of $S$ that $\chibar(S)=\chibar(X)=2g-2$. Hence Conclusion (\ref{you can go by foot}) of the present lemma holds. 
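The Euler characteristic count behind the equality $\chibar(S)=\chibar(X)$ may be spelled out as follows; this is only a sketch, in the notation introduced above ($X'$, $\calu$, and the good annuli $A_i$, $A_i'$):

```latex
% Each component of \calu \cap X is homeomorphic to + \times S^1, and the
% good annuli A_i, A_i' are annuli; these pieces, and the circles along
% which they are glued to X' = \overline{X-(\calu\cap X)}, all have Euler
% characteristic zero. Hence
\chibar(S)=\chibar(X')+\sum_{i=1}^m\bigl(\chibar(A_i)+\chibar(A_i')\bigr)
          =\chibar(X')=\chibar(X)=2g-2.
```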
Conclusion (\ref{you can go by cow}) of the present lemma follows from Conclusion (\ref{he does and he doesn't}) of Lemma \ref{make book}. Since we have seen that $\calp_\calw$ is $\pi_1$-injective in $N$, it follows from Conclusion (\ref{and me without a spoon}) of Lemma \ref{make book} that $\calp_\calv$ is $\pi_1$-injective in $N$. This is Conclusion (\ref{i'm injective}) of the present lemma. It follows from Lemma \ref{the brew that is true} that $N-X= N-q(F)$ is $\pi_1$-injective in $N$. It follows from conclusions (\ref{a pair of pizza pies}) and (\ref{wysiwyg}) of Lemma \ref{make book} that every component of $N-V$ is also a component of $N-W$, and is therefore ambiently isotopic in $N$ to a component of $ N- X$. Hence $ N -V$ is $\pi_1$-injective in $N$. This is Conclusion (\ref{you're injective}) of the present lemma. It follows from the construction of $S$ that $S\cap\calp_\calv=X\cap\calp_\calv$. If $P$ is any page of $\calv$, then by Conclusion (\ref{and me without a spoon}) of Lemma \ref{make book}, $P$ is a page of $\calw$, and hence $S\cap P=X\cap P$ is a section of the $I$-bundle $P$ according to Conclusion (\ref{gopher}) of Lemma \ref{neighborhood is a book}. This establishes Conclusion (\ref{marvin k mooney}) of the present lemma. In particular it follows that for every page $P$ of $\calv$ the surface $P\cap S$ is connected and has non-positive Euler characteristic. On the other hand, the construction of $S$ shows that every component of $S\cap\calb_\calv$ is an annulus. Hence every component of $S$ has non-positive Euler characteristic, and Conclusion (\ref{square hole}) of the present lemma follows. \EndProof \Proposition\label{new improved old prop 3} Suppose that $ N $ is a simple, compact, orientable $3$-manifold, that $p: \widetilde N \to N $ is a $2$-sheeted covering, that $g\ge2$ is an integer, and that $ \widetilde N $ contains a closed, incompressible surface of genus $g$. 
Then either \Conclusions \item\label{hit 'im wid a brick} $N$ contains a closed, connected, incompressible surface of genus at most $g$, or \item\label{bust 'em clown} $N$ is closed and there is a connected book of $I$-bundles $\calv$ with $V=|\calv|\subset N$ such that $\chibar(V)=2g-2$, every page of $\calv$ has strictly negative Euler characteristic, and every component of $\overline{N-V}$ is a handlebody. In particular, the rank of $H_1(N;\Z_2)$ is at most $4g-3$. Furthermore, there is a closed, orientable (possibly disconnected) surface $S\subset \inter V$ such that for every page $P$ of $\calv$, the set $S\cap P$ is a section of the $I$-bundle $P$. \EndConclusions \EndProposition The last sentence of alternative (\ref{bust 'em clown}) is not used in this paper, but will be needed in \cite{second}. \Proof[Proof of Proposition \ref{new improved old prop 3}] Let us fix a connected book of $I$-bundles $\calv$ with $V=|\calv|\subset N$, and a closed, orientable surface $S\subset \inter V$, such that Conclusions (\ref{you can go by foot})--(\ref{marvin k mooney}) of Lemma \ref{new lemma for an old prop} hold. We distinguish two cases, depending on whether there does or does not exist a page of $\calv$ whose horizontal boundary is contained in a single component of $\partial V$. {\bf Case I. There is a page $P_0$ of $\calv$ such that $\partial_hP_0$ is contained in a single component $Y_0$ of $\partial V$.} According to conclusion (\ref{marvin k mooney}) of Lemma \ref{new lemma for an old prop}, the set $S\cap P_0$ is a section of the $I$-bundle $P_0$. Hence there is a properly embedded arc $\alpha$ in $V$, such that $\alpha\subset P_0$, and such that $\alpha$ meets $S$ transversally in a single point. The endpoints of $\alpha$ lie in $\partial_hP_0\subset Y_0$. Since $Y_0$ is connected, there is an arc $\beta\subset Y_0$ with $\partial\beta=\partial\alpha$. Let $\sigma$ denote the class in $H_2(N;\Z_2)$ represented by $S$.
Since $\alpha$ is properly embedded in $V$ and meets $S$ transversally in a single point, the class $\sigma$ has intersection number $1$ with the class in $H_1(N;\Z_2)$ represented by the simple closed curve $\alpha\cup\beta$. In particular $\sigma\ne0$. Hence some component $S_0$ of $S$ represents a non-zero class in $H_2(N;\Z_2)$. It follows from Conclusions (\ref{you can go by foot}) and (\ref{square hole}) of Lemma \ref{new lemma for an old prop} that $\chibar(S_0)\le\chibar(S)=2g-2$, and hence that $\genus(S_0)\le g$. Among all closed, orientable surfaces in $N$ that represent non-trivial classes in $H_2(N;\Z_2)$, let us choose one, say $S_1$, of minimal genus. Then $\genus(S_1)\le\genus(S_0)\le g$. If $S_1$ is compressible in $N$, a compression of $S_1$ produces a $2$-manifold $S_2$ with one or two components. Each component of $S_2$ has strictly smaller genus than $S_1$, and at least one of them represents a non-trivial class in $H_2(N;\Z_2)$. This contradicts the minimality of $\genus(S_1)$. Hence $S_1$ is incompressible in $N$. Since $\genus(S_1)\le g$, conclusion (\ref{hit 'im wid a brick}) of the present proposition holds in this case. {\bf Case II. There is no page $P_0$ of $\calv$ such that $\partial_hP_0$ is contained in a single component of $\partial V$.} In this case, every page of $\calv$ is a trivial $I$-bundle; for the horizontal boundary of a twisted $I$-bundle is connected, and would therefore be contained in a single component of $\partial V$. Furthermore, if $T$ is any component of $\partial V$, then for every page $P$ of $\calv$, at most one component of the horizontal boundary of $P$ is contained in $T$. Hence $$\chibar(T\cap P)\le\chibar(P)$$ for every page $P$ of $\calv$. Letting $P$ range over the pages of $\calv$, and using (\ref{a high wind in the attic}), we find that $$\chibar(T)=\sum_P\chibar(T\cap P)\le\sum_P\chibar(P)=\chibar(V)=2g-2.$$ This shows that \Equation\label{rats in jamaica} \genus(T)\le g \EndEquation for every component $T$ of $\partial V$.
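The step from the bound $\chibar(T)\le2g-2$ to (\ref{rats in jamaica}) is just the standard relation between the genus and the Euler characteristic of a closed, connected, orientable surface; in display form:

```latex
% For a closed, connected, orientable surface T,
%   \chi(T) = 2 - 2\genus(T), i.e. \chibar(T) = 2\genus(T) - 2.
% Combining this with the bound obtained above gives
2\genus(T)-2=\chibar(T)\le 2g-2,
\qquad\text{whence}\qquad \genus(T)\le g.
```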
According to Conclusion (\ref{you're injective}) of Lemma \ref{new lemma for an old prop}, $N-V$ is $\pi_1$-injective in $N$. Thus $V\subset N$ satisfies the hypotheses of Proposition \ref{new old prop 2}. There are three subcases corresponding to the three alternatives (\ref{gone the way of brother tom})--(\ref{mama's aim is bad}) of Proposition \ref{new old prop 2}. First suppose that alternative (\ref{gone the way of brother tom}) of Proposition \ref{new old prop 2} holds, i.e. that $V$ is contained in a ball. Then in particular for any page $P$ of $\calv$, the inclusion homomorphism $\pi_1(P)\to\pi_1(N)$ is trivial. But according to Conclusions (\ref{you can go by cow}) and (\ref{i'm injective}) of Lemma \ref{new lemma for an old prop}, we have $\chi(P)<0$ (so that $\pi_1(P)$ is non-trivial) and $\calp_\calv$ is $\pi_1$-injective in $N$. This contradiction shows that alternative (\ref{gone the way of brother tom}) of Proposition \ref{new old prop 2} cannot hold in this situation. Next suppose that alternative (\ref{sister jenny's turn}) of Proposition \ref{new old prop 2} holds, i.e. that there exists a connected, incompressible closed surface $S_1$ in $N$ whose genus is at most the maximum of the genera of the components of $\partial V$. By (\ref{rats in jamaica}) this maximum is at most $g$. Thus conclusion (\ref{hit 'im wid a brick}) of the present proposition holds in this subcase. Finally, suppose that alternative (\ref{mama's aim is bad}) of Proposition \ref{new old prop 2} holds, i.e. that $N$ is closed and that every component of $\overline{N-V}$ is a handlebody. We have that $V=|\calv|$ where $\calv$ is a book of $I$-bundles whose pages all have negative Euler characteristic, and Conclusion (\ref{you can go by foot}) of Lemma \ref{new lemma for an old prop} gives $\chibar(V)= 2g-2$.
Since the components of $\overline{N-V}$ are handlebodies, the inclusion of $V$ into $N$ induces a surjection from $H_1(V;\Z_2)$ to $H_1(N;\Z_2)$; hence the latter group has rank at most $4g-3$ by Lemma \ref{moosday}. Furthermore, according to Conclusion (\ref{marvin k mooney}) of Lemma \ref{new lemma for an old prop}, for every page $P$ of $\calv$, the set $S\cap P$ is a section of the $I$-bundle $P$. Thus conclusion (\ref{bust 'em clown}) of the present proposition holds in this subcase. \EndProof \section{Singularity of PL maps} \label{singularity section} If $K$ is a finite simplicial complex, we shall denote the underlying space of $K$ by $|K|$. A simplicial map $\phi:K_1\to K_2$ between finite simplicial complexes defines a map from $|K_1|$ to $|K_2|$ which we shall denote by $|\phi|$. Now suppose that $X_1$ and $X_2$ are compact topological spaces and that $f:X_1\to X_2$ is a continuous surjection. We define a {\it triangulation} of $f$ to be a quintuple $(K_1,J_1,K_2,J_2,\phi)$, where each $K_i$ is a finite simplicial complex, $J_i:|K_i|\to X_i$ is a homeomorphism, and $f\circ J_1=J_2\circ|\phi|$. When it is unnecessary to specify the $K_i$ and $J_i$ we shall simply say that $\phi$ is a triangulation of $f$. Note that if $f$ is any PL map from a compact PL space $X$ to a PL space $Y$, then the surjection $f:X\to f(X)$ admits a triangulation. \Definition Let $K$ and $L$ be finite simplicial complexes and let $\phi:K\to L$ be a simplicial map. We define the {\it degree of singularity} of $\phi$, denoted $\DS(\phi)$, to be the number of ordered pairs $(v,w)$ of vertices of $K$ such that $v\ne w$ but $\phi(v)=\phi(w)$. If $f$ is any PL map from a compact PL space $X$ to a PL space $Y$, we define the {\it absolute degree of singularity} of $f$, denoted $\ADS(f)$, by $$\ADS(f)=\min_{\phi}\DS(\phi),$$ where $\phi$ ranges over all triangulations of $f:X\to f(X)$.
\EndDefinition \Number\label{co-restriction} We emphasize that the definition of $\ADS(f)$ is based on regarding $f$ as a map from $X$ to $f(X)$. Hence if $f$ is any PL map from a compact PL space $X$ to a PL space $Y$, and $Z$ is a PL subspace of $Y$ containing $f(X)$, then the absolute degree of singularity of $f$ is unchanged when we regard $f$ as a PL map from $X$ to $Z$. \EndNumber An almost equally immediate consequence of the definition of absolute degree of singularity is expressed by the following result. \Lemma\label{homeomorphic images} Suppose that $X$, $Y$ and $Z$ are PL spaces, that $X$ is compact, that $f:X\to Y$ is a PL map, and that $h$ is a PL homeomorphism of $f(X)$ onto a PL subspace of $Z$. Then $h\circ f:X\to Z$ has the same absolute degree of singularity as $f$. \EndLemma \Proof In view of \ref{co-restriction} we may assume that $f$ is surjective and that $h$ is a PL homeomorphism of $Y$ onto $Z$. Now if $(K_1,J_1,K_2,J_2,\phi)$ is a triangulation of $f$ then $(K_1,J_1,K_2,h\circ J_2,\phi)$ is a triangulation of $h\circ f$, with the same degree of singularity $\DS(\phi)$. It follows that $\ADS(h\circ f)\le\ADS(f)$. The same argument, with $h^{-1}$ in place of $h$, shows that $\ADS( f)\le\ADS( h\circ f)$. \EndProof \Proposition[Stallings]\label{stallings} Suppose that $Y$ is a connected PL space and that $p:\widetilde Y\to Y$ is a connected covering space, which is non-trivial in the sense that $p$ is not a homeomorphism. Suppose that $f$ is a PL map from a compact connected PL space $X$ to $Y$, such that the inclusion homomorphism $\pi_1(f(X))\to\pi_1(Y)$ is surjective. Suppose that $\widetilde f:X\to \widetilde Y$ is a lift of $f$. Then $\ADS(\widetilde f)<\ADS(f)$. \EndProposition \Proof We first prove the proposition in the special case where $f:X\to Y$ is a surjection. In this case we set $m=\ADS(f)$, and we fix a triangulation $(K_1,J_1,K_2,J_2,\phi)$ of the PL surjection $f$ such that $\DS(\phi)=m$.
Here, by definition, $J_1:|K_1|\to X$ and $J_2:|K_2|\to Y$ are homeomorphisms. Let us identify $X$ and $Y$ with $|K_1|$ and $|K_2|$ via these homeomorphisms. The covering space $\widetilde Y$ of $Y$ may be identified with $|\widetilde K_2|$ for some simplicial covering complex $\widetilde K_2$ of $K_2$; thus $p=|\sigma|$ for some simplicial covering map $\sigma:\widetilde K_2\to K_2$. The lift $\widetilde f$ may be written as $|\widetilde\phi|$ for some simplicial lift $\widetilde\phi:K_1\to\widetilde K_2$. We shall denote by $W$ the subcomplex $\widetilde\phi(K_1)$ of $\widetilde K_2$. Since $\sigma\circ\widetilde\phi=\phi$, the definition of degree of singularity implies that $\DS(\widetilde\phi)\le\DS(\phi)=m$. If equality holds here, then the restriction of $\sigma$ to the vertex set of $W$ is one-to-one. This implies that $p$ restricts to a one-to-one map from $|W|$ to $Y$. But we have $|W|=\widetilde f(X)$, and the surjectivity of $f$ implies that $p$ maps $|W|$ onto $Y$; thus $p$ restricts to a homeomorphism from $|W|$ to $Y$. This is impossible since $p:\widetilde Y\to Y$ is a non-trivial connected covering space. Hence we must have $\DS(\widetilde\phi)<m$. Since by definition we have $\ADS(\widetilde f)\le\DS(\widetilde\phi)$, the assertion of the proposition follows in the case where $f$ is surjective. We now turn to the general case. Let us set $Z=f(X)$ and $\widetilde Z=p^{-1}(Z)$. Since $\widetilde Y$ is a non-trivial connected covering space of $Y$, and since the inclusion homomorphism $\pi_1(Z)\to\pi_1(Y)$ is surjective, $\widetilde Z$ is a non-trivial connected covering space of $Z$. According to \ref{co-restriction}, regarding $\widetilde f$ and $f$ as maps into $\widetilde Z$ and $Z$ does not affect their absolute degrees of singularity. Since $f:X\to Z$ is surjective, the inequality now follows from the special case that has already been proved.
\EndProof Following the terminology used by Simon in \cite{simon}, we shall say that a $3$-manifold $M$ admits a {\it manifold compactification} if there is a homeomorphism $h$ of $M$ onto an open subset of a compact $3$-manifold $Q$ such that $h(\inter M)=\inter Q$. \Lemma\label{Simon} Suppose that $N$ is a compact, orientable, connected, irreducible PL $3$-manifold and that $D$ is a separating, properly embedded disk in $N$. Let $X$ denote the closure of one of the connected components of $N-D$. Let $\nu\in D$ be a base point, and let $p:(\widetilde N,\widetilde\nu)\to (N,\nu)$ denote the based covering space corresponding to the subgroup $\image(\pi_1(X,\nu)\to\pi_1(N,\nu))$ of $\pi_1(N,\nu)$. Then $\widetilde N$ admits a manifold compactification. \EndLemma \Proof Let us set $X_1=\overline{N-X}$. It will also be convenient to write $X_0=X$. Then the $X_i$ are compact submanifolds of $N$, and $X_0\cap X_1=D$. We set $H_i=\pi_1(X_i,\nu)$ for $i=0,1$. We identify $\pi_1(N,\nu)$ with $H_0\star H_1$, so that the $H_i$ are in particular subgroups of $\pi_1(N,\nu)$. Thus $(\widetilde N,\widetilde\nu)$ is the based covering space corresponding to the subgroup $H_0$. According to the general criterion given by Simon in \cite[Theorem 3.1]{simon}, $\widetilde N$ will admit a manifold compactification provided that the following conditions hold: \Conditions \item $X_0$ and $X_1$ are irreducible, \item $D$ is $\pi_1$-injective in $X_0$ and $X_1$, \item each conjugate of $H_0$ in $\pi_1(N,\nu)$ intersects $\image(\pi_1(D,\nu)\to\pi_1(N,\nu))$ in a finitely generated subgroup, and \item for each $i\in\{0,1\}$, and for each finitely generated subgroup $Z$ of $H_i$ which has the form $H_i\cap g^{-1}H_0g$ for some $g\in\pi_1(N,\nu)$, the based covering space of $(X_i,\nu)$ corresponding to $Z$ admits a manifold compactification. \EndConditions Here conditions (ii) and (iii) hold trivially because $\pi_1(D)$ is trivial. Condition (i) follows from the irreducibility of $N$.
(A ball bounded by a sphere in $\inter X_i$ must be contained in $X_i$ because the frontier of $X_i$ is the disk $D$, and $\partial D\ne\emptyset$.) To prove (iv), we consider any $i\in\{0,1\}$ and any subgroup of $H_i$ having the form $Z=H_i\cap g^{-1}H_0g$ where $g\in\pi_1(N,\nu)$. Since $\pi_1(N,\nu)=H_0\star H_1$, we have either (a) $Z= \{1\}$ or (b) $i=0$ and $g\in H_0$. If (a) holds then the based covering of $(X_i,\nu)$ corresponding to $Z$ is equivalent to the universal cover of $X_i$. But since $X_i$ is irreducible and has a non-empty boundary, it is a Haken manifold. Hence by \cite[Theorem 8.1]{waldhausen}, the universal cover of $X_i$ admits a manifold compactification. If (b) holds then the covering corresponding to $Z$ is homeomorphic to $X_i$ and is therefore a manifold compactification of itself. \EndProof \Lemma\label{Simon consequence} Suppose that $N$ is a compact, connected, orientable, irreducible PL $3$-manifold and that $D$ is a separating, properly embedded disk in $N$. Let $X$ denote the closure of one of the connected components of $N-D$. Let $\nu\in D$ be a base point, and let $p:(\widetilde N,\widetilde\nu)\to (N,\nu)$ denote the based covering space corresponding to the subgroup $\image(\pi_1(X,\nu)\to\pi_1(N,\nu))$ of $\pi_1(N,\nu)$. Let $\widetilde X$ denote the component of $p^{-1}(X)$ containing $\widetilde\nu$ (so that $p$ maps $\widetilde X$ homeomorphically onto $X$). Then every compact PL subset of $\inter\widetilde N$ is PL ambient-isotopic to a subset of $\widetilde X$. \EndLemma \Proof Since $N$ is a compact, orientable, irreducible $3$-manifold with non-empty boundary, it is a Haken manifold. Hence by \cite[Theorem 8.1]{waldhausen}, the universal cover of $\inter N$ is homeomorphic to ${\bf R}^3$. Thus $\inter\widetilde N$ is covered by an irreducible manifold and is therefore irreducible. According to Lemma \ref{Simon}, the manifold $\widetilde N$ admits a manifold compactification. 
Thus there is a homeomorphism $h$ of $\widetilde N$ onto an open subset of a compact $3$-manifold $Q$ such that $h(\inter\widetilde N)=\inter Q$. Since $\inter Q$ is homeomorphic to the irreducible manifold $\inter\widetilde N$, the compact manifold $Q$ is itself irreducible. The definition of $\widetilde N$ implies that the inclusion map $\iota:X\to N$ admits a based lift $\widetilde\iota:(X,\nu)\to(\widetilde N,\widetilde\nu)$, that $\widetilde\iota(X)=\widetilde X$, and that $\widetilde\iota_\sharp:\pi_1(X,\nu)\to\pi_1(\widetilde N,\widetilde\nu)$ is an isomorphism. Hence the inclusion $\widetilde X\to \widetilde N$ induces an isomorphism of fundamental groups, and if we set $X'=h(\widetilde X)$, the inclusion $X'\to Q$ induces an isomorphism of fundamental groups. On the other hand, since the frontier of $X$ in $N$ is $D$, the frontier of $X'$ in $Q$ is $D'= h(\widetilde\iota(D))$, a properly embedded disk in the compact $3$-manifold $Q$. Set $Y=\overline{Q-X'}$. Then in terms of a base point in $D'$ we have a canonical identification of $\pi_1(Q)$ with $\pi_1(X')\star\pi_1(Y)$. Since the inclusion $ X'\to Q$ induces an isomorphism of fundamental groups, it follows that $\pi_1(Y)$ is trivial. We also know that $Y$ is irreducible because its frontier in the irreducible manifold $Q$ is a disk. Thus $Y$ is a compact, simply connected, irreducible $3$-manifold with non-empty boundary, and is therefore PL homeomorphic to a ball. We have now exhibited $Q$ as the union of the compact $3$-dimensional submanifold $X'$ and the PL $3$-ball $Y$, and their intersection is the disk $D'$. It follows that any compact PL subset $W$ of $\inter Q$ is PL isotopic to a subset of $\inter X'$. Since $h$ maps $\inter\widetilde N$ homeomorphically onto $\inter Q$, and maps $\inter\widetilde X$ homeomorphically onto $\inter X'$, the conclusion of the lemma follows.
\EndProof \Lemma\label{handle removal lemma} Suppose that $K$ is a compact, connected PL space such that $\pi_1(K)$ has rank $\ge2$ and is freely indecomposable. Suppose that $N$ is a compact, connected, orientable PL $3$-manifold which is irreducible but boundary-reducible. Suppose that $f:K\to\inter N$ is a $\pi_1$-injective PL map, and that the inclusion homomorphism $\pi_1(f(K))\to\pi_1(N)$ is surjective. Then $f$ is homotopic to a map $g$ such that $\ADS(g)<\ADS(f)$. \EndLemma \Proof Since $N$ is boundary-reducible it contains an essential properly embedded disk. If $N$ contains a non-separating essential disk $D_0$, then there is a separating, properly embedded disk $D_1$ in $N-D_0$ such that the closure of the component of $N-D_1$ containing $D_0$ is a solid torus $J$. In this case $\pi_1(\overline{N-J})$ is non-trivial, since $\pi_1(N)$ has rank at least $2$; hence $D_1$ is an essential disk. Thus in all cases, $N$ contains a separating essential disk $D$. We may write $N=X_0\cup X_1$ for some connected submanifolds $X_0$ and $X_1$ of $N$ with $X_0\cap X_1=D$. We choose a base point $\nu\in D$ and set $A_i=\image(\pi_1(X_i,\nu)\to\pi_1(N,\nu))$ for $i=0,1$. Then $\pi_1(N,\nu)=A_0\star A_1$. If one of the $A_i$ were trivial, then one of the $X_i$ would be a ball since $N$ is irreducible, and $D$ would not be an essential disk. Hence the $A_i$ are non-trivial subgroups. It then follows from the free product structure of $\pi_1(N,\nu)$ that the $A_i$ are of infinite index in $\pi_1(N,\nu)$, and in particular that they are proper subgroups. Since the subgroup $H=f_\sharp(\pi_1(K))$, which is defined only up to conjugacy in $\pi_1(N)$, has rank at least $2$ and is freely indecomposable, it follows from the Kurosh subgroup theorem that $H$ is conjugate to a subgroup of one of the $A_i$. By symmetry we may assume that $H$ is conjugate to a subgroup of $A_0$.
Hence after modifying $f$ by a homotopy we may assume that $f$ maps some base point $\kappa$ of $K$ to $\nu$ and that $f_\sharp(\pi_1(K,\kappa))\subset A_0$. Hence if $(\widetilde N,\widetilde\nu)$ denotes the based covering space of $(N,\nu)$ corresponding to the subgroup $A_0$ of $\pi_1(N)$, then $f$ admits a lift $\widetilde f:(K,\kappa)\to(\widetilde N,\widetilde\nu)$. Since $A_0$ is a proper subgroup of $\pi_1(N,\nu)$, the covering space $\widetilde N$ is non-trivial. Hence, according to Proposition \ref{stallings}, we have $\ADS(\widetilde f)<\ADS(f)$. Let $p:\widetilde N\to N$ denote the covering projection, and let $\widetilde X_0$ denote the component of $p^{-1}(X_0)$ containing $\widetilde\nu$, so that $p$ maps $\widetilde X_0$ homeomorphically onto $X_0$. According to Lemma \ref{Simon consequence}, the compact PL subset $\widetilde f(K)$ of $\inter\widetilde N$ is PL ambient-isotopic to a subset of $\widetilde X_0$. In particular, there is a PL homeomorphism $j$ of $\widetilde f(K)$ onto a subset $L$ of $\widetilde X_0$ such that $j$, regarded as a map of $\widetilde f(K)$ into $\widetilde N$, is homotopic to the inclusion $\widetilde f(K)\to\widetilde N$. It now follows that $p\circ j$ maps $\widetilde f(K)$ homeomorphically onto the subset $p(L)$ of $X_0\subset N$. Hence by Lemma \ref{homeomorphic images}, if we set $g=p\circ j\circ\widetilde f:K\to N$, we have $\ADS(g)=\ADS(\widetilde f)<\ADS(f)$. But since $j:\widetilde f(K)\to \widetilde N$ is homotopic to the inclusion $\widetilde f(K)\to\widetilde N$, the map $g:K\to N$ is homotopic to $f$. \EndProof \Proposition\label{DS and incompressible boundary} Suppose that $K$ is a compact, connected PL space such that $\pi_1(K)$ has rank at least $2$ and is freely indecomposable. Suppose that $f$ is a $\pi_1$-injective PL map from $K$ to the interior of a compact, connected, orientable, irreducible PL $3$-manifold $M$.
Then there exist a map $g:K\to M$ homotopic to $f$ with $\ADS(g)\le\ADS(f)$, and a compact, connected $3$-dimensional submanifold $N$ of $\inter M$ such that (i) $\inter N\supset g(K)$, (ii) the inclusion homomorphism $\pi_1(g(K))\to\pi_1(N)$ is surjective, (iii) $\partial N$ is incompressible in $M$, and (iv) $N$ is irreducible. \EndProposition \Proof Among all maps from $K$ to $M$ that are homotopic to $f$, we choose one, $g$, for which $\ADS(g)$ has the smallest possible value. In particular we then have $\ADS(g)\le\ADS(f)$. Note also that $g_\sharp:\pi_1(K)\to\pi_1(M)$ is injective. Now let $N$ be a compact, connected $3$-submanifold of $M$ satisfying conditions (i) and (ii) of the statement of the Proposition, and choose $N$ so as to minimize the quantity $\kappa(\partial N)$ (see \ref{complexity def}) among all compact, connected $3$-submanifolds satisfying (i) and (ii). We shall complete the proof by showing that $N$ satisfies (iii) and (iv). We first show that (iv) holds, i.e. that $N$ is irreducible. If $S\subset\inter N$ is a $2$-sphere, then $S$ bounds a ball $B\subset M$. If we set $N'=N\cup B$, then the submanifold $N'$ satisfies (i) and (ii). (It inherits property (ii) from $N$ because the inclusion homomorphism $\pi_1(N)\to\pi_1(N')$ is surjective.) But if $B\not\subset N$, it is clear from Definition \ref{complexity def} that $\kappa(\partial N')<\kappa(\partial N)$, and the minimality of $\kappa(\partial N)$ is contradicted. Hence we must have $B\subset N$, and irreducibility is proved. It remains to show that (iii) holds, i.e. that $\partial N$ is incompressible. If this is false, then either $\partial N$ has a sphere component, or there is a compressing disk $D$ for $\partial N$. If $\partial N$ has a sphere component $S$, then the irreducibility of $N$ implies that $N$ is a ball. But then the injectivity of $g_\sharp:\pi_1(K)\to\pi_1(N)$ implies that $\pi_1(K)$ is trivial, a contradiction to the hypothesis that $\pi_1(K)$ has rank at least $2$.
If there is a compressing disk $D$ for $\partial N$, then either $D\cap N=\partial D$ or $D\subset N$. If $D\cap N=\partial D$, and if we set $N'=N\cup Q$, where $Q$ is a regular neighborhood of $D$ relative to $\overline{M-N}$, then the $3$-submanifold $N'$ satisfies conditions (i) and (ii). (It inherits property (ii) because the inclusion homomorphism $\pi_1(N)\to\pi_1(N')$ is again surjective.) Now $\partial N'$ has the form $((\partial N)-A)\cup D_1\cup D_2$, where $A\subset\partial N$ is a homotopically non-trivial annulus, and $D_1$ and $D_2$ are disjoint disks in $M$ such that $(D_1\cup D_2)\cap\partial N=\partial A$. It therefore follows from Lemma \ref{pere tranquille} that $\kappa(\partial N')<\kappa(\partial N)$. Again the minimality of $\kappa(\partial N)$ is contradicted. Finally, if $D\subset N$, then $N$ is boundary-reducible. As we have already shown that $N$ is irreducible, it follows from Lemma \ref{handle removal lemma} that $g$ is homotopic in $N$ to a map $g'$ such that $\ADS(g')<\ADS(g)$. In particular, $g'$ is homotopic to $g$ in $M$; and since, according to \ref{co-restriction}, the absolute degrees of singularity of $g$ and $g'$ do not depend on whether they are regarded as maps into $N$ or into $M$, we now have a contradiction to the minimality of $\ADS(g)$. \EndProof \section{Homology of covering spaces} \label{homology section} In this short section we shall apply and extend some results from \cite{shalenwagreich} concerning homology of covering spaces of $3$-manifolds. In this section all homology groups are understood to be defined with coefficients in $\Z_2$. \Number\label{exact sequence} If $N$ is a normal subgroup of a group $G$, we shall denote by $G\#N$ the subgroup of $G$ generated by all elements of the form $gag^{-1}a^{-1}b^2$ with $g \in G$ and $a,b \in N$. (This is a special case of the notation used in \cite{Stallingshomology} and \cite{shalenwagreich}.
Here we are taking the prime $p$, which was arbitrary in \cite{Stallingshomology} and \cite{shalenwagreich}, to be $2$.) \EndNumber \Number\label{gee sub em} As in Section 1 of \cite{shalenwagreich}, for any group $\Gamma$, we define subgroups $\Gamma_{d}$ of $\Gamma$ recursively for ${d}\ge0$, by setting $\Gamma_0=\Gamma$ and $\Gamma_{{d}+1}=\Gamma\#\Gamma_{d}$. We regard $\Gamma_{d}/\Gamma_{{d}+1}$ as a $\Z_2$-vector space. \EndNumber \Lemma \label{2r-3} Let $M$ be a closed $3$-manifold and set $r=\rk M$. Suppose that $\widetilde M$ is a regular cover of $M$ whose group of deck transformations is isomorphic to $(\Z_2)^m$ for some integer $m\ge0$. Then $$\rk(\widetilde M)\ge mr-\frac{m(m+1)}2.$$ \EndLemma \Proof We set $\Gamma=\pi_1(M)$ and define $\Gamma_{d}$ for each ${d}\ge0$ as in \ref{gee sub em}. We have $\rk \Gamma/\Gamma_1=\rk M=r$. It then follows from \cite[Lemma 1.3]{shalenwagreich} that $\rk(\Gamma_1/\Gamma_2) \ge r(r-1)/2$. Let $N$ denote the normal subgroup of $\Gamma$ corresponding to the regular covering space $\widetilde M$. Since $\Gamma/N\cong(\Z_2)^m$, we may write $N = E\Gamma_1$ for some $(r-m)$-generator subgroup $E$ of $\Gamma$. It now follows from \cite[Lemma 1.4]{shalenwagreich} that \begin{eqnarray*} \rk\widetilde M&=&\rk H_1(E\Gamma_1)\cr &\ge&\rk(\Gamma_1/\Gamma_2)-\frac{(r-m)(r-m-1)}{2}\cr &\ge & \frac{r(r-1)}{2}-\frac{(r-m)(r-m-1)}2 = mr-\frac{m(m+1)}2. \end{eqnarray*} \EndProof The case $m=2$ of Lemma \ref{2r-3} will be applied in the proof of Lemma \ref{2r-4}. \section{An application of a result of Gabai's} \label{new section} This section contains the applications of Gabai's results that were mentioned in the introduction. The main result of the section is Proposition \ref{newprop}. \Lemma\label{dark green} Let $X$ be a PL space, let $K$ be a closed, connected, orientable surface of genus $g>0$, and let $f:K\to X$ be a PL map. Suppose that the homomorphism $f_*:H_2(K;\Z_2)\to H_2(X;\Z_2)$ is trivial.
Then the image of $f_*:H_1(K;\Z_2)\to H_1(X;\Z_2)$ has dimension at most $g$. \EndLemma \Proof Since $f_*:H_2(K;\Z_2)\to H_2(X;\Z_2)$ is trivial, it follows that the dual homomorphism $f^*:H^2(X;\Z_2)\to H^2(K;\Z_2)$ is also trivial. Hence for any $\alpha,\beta\in H^1(X;\Z_2)$ we have $$f^*(\alpha)\cup f^*(\beta)=f^*(\alpha\cup\beta)=0.$$ Thus if we set $V=H^1(K;\Z_2)$ and let $L\subset V$ denote the image of $f^*:H^1(X;\Z_2)\to H^1(K;\Z_2)$, we have $L\cup L=0$, i.e. $$L\subset L^\perp=\{v\in V:v\cup L=0\}.$$ Hence if $d$ denotes the dimension of $L$, we have $$d\le\dim L^\perp.$$ But by Poincar\'e duality, the cup product pairing on $V$ is non-singular, and so $$\dim L^\perp=\dim V-\dim L=2g-d.$$ Hence $d\le g$. As the linear map $f_*:H_1(K;\Z_2)\to H_1(X;\Z_2)$ is dual to $f^*:H^1(X;\Z_2)\to H^1(K;\Z_2)$, its rank is the same as that of $f^*$, namely $d$. The conclusion follows. \EndProof \Notation If $F$ is a closed, orientable $2$-manifold, we shall denote by $\tg(F)$ the total genus of $F$, i.e. the sum of the genera of its components. \EndNotation \Lemma\label{polka dots} For any compact, connected, orientable $3$-manifold $N$, we have $$\tg(\partial N)\le\rk N.$$ \EndLemma \Proof In the exact sequence $$H_2( N,\partial N;\Z_2)\longrightarrow H_1(\partial N;\Z_2) \longrightarrow H_1( N;\Z_2) ,$$ Poincar\'e-Lefschetz duality implies that the vector spaces $H_2(N,\partial N;\Z_2)$ and $H_1( N;\Z_2)$ are of the same dimension, $\rk N$. Hence we have $$2~\tg(\partial N)=\rk\partial N\le2\rk N$$ and the conclusion follows. \EndProof \Lemma\label{falafel} If $N$ is a compact, connected, orientable $3$-manifold such that $\partial N$ has at most one connected component, then $H_2(N;\Z)$ is torsion-free. \EndLemma \Proof In the exact sequence $$H_2(\partial N;\Z)\longrightarrow H_2( N;\Z)\longrightarrow H_2( N,\partial N;\Z)$$ the inclusion map $H_2(\partial N;\Z)\longrightarrow H_2( N;\Z)$ is trivial since $\partial N$ has at most one connected component.
Hence the map $H_2( N;\Z)\longrightarrow H_2( N,\partial N;\Z)$ is injective, so that $H_2( N;\Z)$ is isomorphic to a subgroup of $ H_2( N,\partial N;\Z)$. But by Poincar\'e-Lefschetz duality, $ H_2( N,\partial N;\Z)$ is isomorphic to $H^1(N;\Z)$ and is therefore torsion-free. The conclusion follows. \EndProof \Proposition\label{newprop} Suppose that $N$ is a compact (possibly closed) orientable $3$-manifold which is irreducible and boundary-irreducible. Suppose that $K$ is a closed, connected, orientable surface of genus $g\ge2$, and that $\phi:K\to N$ is a $\pi_1$-injective PL map. Then either \Conclusions \item $N$ contains a connected (non-empty) closed incompressible surface of genus at most $g$, or \item the $\Z_2$-vector subspace $\phi_*(H_1(K;\Z_2))$ of $H_1(N;\Z_2)$ has dimension at most $g$. \EndConclusions Furthermore, if $\phi_*:H_1(K;\Z_2)\to H_1(N;\Z_2)$ is surjective and $\partial N\ne\emptyset$, then (1) holds. \EndProposition \Proof We begin with the observation that $N$ is non-simply connected in view of the existence of the map $\phi$. Since $N$ is also irreducible, it follows that no component of $\partial N$ is a sphere. On the other hand, since $N$ is boundary-irreducible, every component of $\partial N$ is $\pi_1$-injective in $N$. Thus every component of $\partial N$ is parallel to an incompressible surface in $N$. To prove the first assertion of the proposition we distinguish three cases, which are not mutually exclusive but cover all possibilities. \Cases \Case{{\bf Case A.\ }} The homomorphism $\phi_*:H_2(K;\Z)\to H_2(N;\Z)$ is trivial. \Case{{\bf Case B.\ }} The surface $\partial N$ has at least two components. \Case{{\bf Case C.\ }} The surface $\partial N$ has at most one component and $\phi_*:H_2(K;\Z)\to H_2(N;\Z)$ is a non-trivial homomorphism.
\EndCases To prove the assertion in Case A, we first consider the commutative diagram $$\xymatrix{ H_2(K;\Z) \ar[r]\ar[d] & H_2(N;\Z) \ar[d] \\ H_2(K;\Z_2) \ar[r] & H_2(N;\Z_2) \\ }$$ in which the vertical maps are natural homomorphisms and the horizontal maps are induced by $\phi$. The left-hand vertical arrow is surjective because the surface $K$ is orientable. Since the top horizontal map is trivial, it follows that the bottom horizontal map is trivial. Hence Lemma \ref{dark green} asserts that the image of $\phi_*:H_1(K;\Z_2)\to H_1(N;\Z_2)$ has dimension at most $g$. Thus alternative (2) of the conclusion holds in Case A. In Case B, using Lemma \ref{polka dots} and the surjectivity of $\phi_*:H_1(K;\Z_2)\to H_1(N;\Z_2)$, we find that $$\tg(\partial N)\le \rk N\le\rk K=2g.$$ Since $\partial N$ has at least two components in this case, some component $F$ of $\partial N$ must have genus at most $g$. By the observation at the beginning of the proof, $F$ is parallel to an incompressible surface in $N$. Thus alternative (1) of the conclusion holds in Case B. To prove the assertion in Case C, we begin by considering the commutative diagram $$\xymatrix{ H_2(K;\Z) \ar[r]\ar[d] & H_2(N;\Z) \ar[d] \\ H_2(K;\R) \ar[r] & H_2(N;\R) \\ }$$ in which the vertical maps are natural homomorphisms and the horizontal maps are induced by $\phi$. Since $\partial N$ has at most one component, Lemma \ref{falafel} asserts that $H_2(N;\Z)$ is torsion-free. Hence the right-hand vertical arrow in the diagram is injective. Since the top horizontal map is non-trivial, it follows that the bottom horizontal map is non-trivial. In other words, if $[K]$ denotes the fundamental class in $H_2(K;\R)$ then the class $\alpha=\phi_*([K])\in H_2(N;\R)$ is non-zero. We shall now apply a result from \cite{gabai}. For any $2$-manifold $\mathcal F$ we shall denote by $\chiminus({\mathcal F})$ the quantity $$\sum_F\max(\chibar(F),0),$$ where $F$ ranges over the components of $\mathcal F$.
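For the connected surface $K$ itself this quantity reduces to a single term; as a check of the value used below:

```latex
% K is connected, closed and orientable of genus g \ge 2, so K has a
% single component and \chibar(K) = 2g - 2 > 0; hence
\chiminus(K)=\max(\chibar(K),0)=\max(2g-2,0)=2g-2.
```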
As in \cite{gabai}, given a class $z$ in $H_2(N;\R)$, we denote by $x_s(z)$ and $x(z)$ respectively the ``norm based on singular surfaces'' and the Thurston norm of $z$. Since $\alpha$ is by definition realized by a map of the surface $K$ into $N$, and since $\chiminus(K)=2g-2$, we have $x_s(\alpha)\le 2g-2$. But it follows from \cite[Corollary 6.18]{gabai} that $x(\alpha) = x_s(\alpha)$. Hence $x(\alpha)\le 2g-2$. By definition this means that if ${\mathcal F}$ is a closed orientable embedded surface in $\inter N$ such that the fundamental class $[{\mathcal F}]\in H_2({\mathcal F};\R)$ is mapped to $\alpha$ under inclusion, and if $\mathcal F$ is chosen among all such surfaces so as to minimize $\chiminus({\mathcal F})$, then $\chiminus({\mathcal F})\le 2g-2$. Since $\alpha\ne0$ we have ${\mathcal F}\ne\emptyset$. Since $N$ is irreducible, any sphere component of $\mathcal F$ must be homologically trivial in $N$. We may assume that every torus component of $\mathcal F$ is compressible, as otherwise alternative (1) of the conclusion holds. Under this assumption, if $T$ is a torus component of $\mathcal F$, compressing $T$ yields a sphere which must be homologically trivial; hence $T$ is itself homologically trivial. Thus after discarding homologically trivial components of $\mathcal F$ whose Euler characteristics are $\ge0$, we may suppose that no component of $\mathcal F$ is a sphere or torus. The minimality of $\chiminus({\mathcal F})$ now implies that $\mathcal F$ is incompressible. Let $F$ be any component of $\mathcal F$. Then $F$ is an incompressible closed surface in $N$, and we have $$\chiminus(F)\le\chiminus({\mathcal F})\le2g-2.$$ Hence $F$ has genus at most $g$, and alternative (1) holds. This completes the proof of the first assertion of the proposition. To prove the second assertion, suppose that $\phi_*:H_1(K;\Z_2)\to H_1(N;\Z_2)$ is surjective, that $\partial N\ne\emptyset$, and that alternative (2) holds.
Then $\rk N\le g$, and it follows from Lemma \ref{polka dots} that $\tg(\partial N)\le g$. In particular, any component $F$ of the non-empty $2$-manifold $\partial N$ has genus at most $g$. By the observation at the beginning of the proof, $F$ is parallel to an incompressible surface in $N$. Thus alternative (1) of the conclusion holds. \EndProof \section{Towers} \label{tower section} In this section we prove a result, Proposition \ref{there's an extension}, which summarizes the tower construction described in the introduction. Our main topological result, Theorem \ref{top 11}, will then be proved by combining Proposition \ref{there's an extension} with results from the earlier sections. We begin by introducing some machinery that will be needed for the statement and proof of Proposition \ref{there's an extension}. \Definition\label{tower def} Suppose that $n$ is a non-negative integer. We define a {\it height-$n$ tower of $3$-manifolds} to be a $(3n+2)$-tuple $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n),$$ where $M_0,\ldots,M_n$ are compact, connected, orientable PL $3$-manifolds, $N_j$ is a compact, connected $3$-dimensional PL submanifold of $M_j$ for $j=0,\ldots,n$, and $p_j:M_j\to N_{j-1}$ is a PL covering map for $j=1,\ldots,n$. We shall refer to $M_0$ as the {\it base} of the tower $\mathcal T$ and to $N_n$ as its {\it top}. We define the {\it tower map associated to $\mathcal T$} to be the map $$h=\iota_0\circ p_1\circ\iota_1\circ p_2\circ\cdots\circ p_n\circ\iota_n:N_n\to M_0,$$ where $\iota_j:N_j\to M_j$ denotes the inclusion map for $j=0,\ldots,n$. \EndDefinition \Number\label{tower remark} Consider any tower of $3$-manifolds $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n).$$ Note that for any given $j$ with $0\le j<n$, the manifold $N_j$ is closed if and only if its finite-sheeted covering space $M_{j+1}$ is closed. 
Note also that if, for a given $j$ with $0\le j\le n$, the submanifold $N_j$ of the (connected) manifold $M_j$ is closed, then we must have $N_j=M_j$, so that in particular $M_j$ is closed. It follows that if $M_j$ is closed for a given $j$ with $0\le j\le n$, then $M_{i}$ is also closed for every $i$ with $0\le i\le j$. Thus either all the $M_j$ have non-empty boundaries, or there is an index $j_0$ with $0\le j_0\le n$ such that $M_j$ is closed when $0\le j\le j_0$ and $M_j$ has non-empty boundary when $j_0<j\le n$. Furthermore, in the latter case, for each $j< j_0$ we have $N_j=M_j$. \EndNumber \Number\label{further tower remark} In particular, if in a tower of $3$-manifolds $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n)$$ the manifold $M_j$ is closed for a given $j\le n$, then for every $i$ with $0\le i<j$ the composition $$p_{i+1}\circ\cdots\circ p_{j}:M_j\to M_i$$ is a well-defined covering map, whose degree is the product of the degrees of $p_{i+1},\ldots,p_{j}$. \EndNumber \Definition\label{good tower def} A tower of $3$-manifolds $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n)$$ will be termed {\it good} if it has the following properties: \Properties \item $M_j$ and $N_j$ are irreducible for $j=0,\ldots,n$; \item $\partial N_j$ is a (possibly empty) incompressible surface in $M_j$ for $j=0,\ldots,n$; \item the covering map $p_j:M_j\to N_{j-1}$ has degree $2$ for $j=1,\ldots,n$; and \item for each $j$ with $2\le j\le n$ such that $M_j$ is closed, the four-fold covering map (see \ref{further tower remark}) $$p_{j-1}\circ p_{j}:M_j\to M_{j-2}$$ is regular and has covering group isomorphic to $\Z_2\times\Z_2$. \EndProperties \EndDefinition \Lemma\label{2r-4} Suppose that $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n)$$ is a good tower of $3$-manifolds and that $j_0$ is an index with $0 \le j_0\le n$ such that $M_{j_0}$ is closed. Set $r=\rk M_0$ and assume that $r\ge3$.
For any index $j$ with $0\le j\le j_0$, we have $$\rk M_j\ge2^{j/2}(r-3)+3$$ if $j$ is even, and $$\rk M_j\ge2^{(j-1)/2}(r-3)+2$$ if $j$ is odd. In particular, we have $\rk M_j\ge r-1$ for each $j$ with $0\le j\le n$ such that $M_j$ is closed, and we have $\rk M_j\ge 2r-4$ for each $j$ with $2\le j\le n$ such that $M_j$ is closed. \EndLemma \Proof According to \ref{tower remark}, $M_j$ is closed for every index $j$ with $0\le j \le j_0$. We shall first show that for every even $j$ with $0\le j\le j_0$ we have $\rk M_j\ge2^{j/2}(r-3)+3$. For $j=0$ this is trivial since $r=\rk M_0$. Now, arguing inductively, suppose that $j$ is even, that $0< j\le j_0$, and that $\rk M_{j-2}\ge2^{(j-2)/2}(r-3)+3$. Since the definition of a good tower implies that $M_j$ is a regular $(\Z_2\times\Z_2)$-cover of $M_{j-2}$, we apply Lemma \ref{2r-3} with $m=2$ to deduce that $$\rk M_j\ge2(\rk M_{j-2})-3\ge 2(2^{(j-2)/2}(r-3)+3)-3=2^{j/2}(r-3)+3.$$ This completes the induction and shows that $\rk M_j\ge2^{j/2}(r-3)+3$ for every even index $j$ with $0\le j\le j_0$. Finally, if $j$ is an odd index with $0<j\le j_0$, then since $j-1$ is even we have $\rk M_{j-1}\ge2^{(j-1)/2}(r-3)+3$; and since $M_j$ is a $2$-sheeted cover of $M_{j-1}$, it is clear that $\rk M_j\ge\rk M_{j-1}-1\ge2^{(j-1)/2}(r-3)+2$. \EndProof \Definition\label{truncation def} If $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n)$$ is a height-$n$ tower of $3$-manifolds, then for any $m$ with $0\le m\le n$, the $(3m+2)$-tuple $${\mathcal T}^-=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_m,M_m,N_m)$$ is a height-$m$ tower. We shall refer to the tower ${\mathcal T}^-$ as the height-$m$ {\it truncation} of $\mathcal T$. We shall say that a tower ${\mathcal T}^+$ is an {\it extension} of a tower $\mathcal T$, or that ${\mathcal T}^+$ {\it extends} $\mathcal T$, if ${\mathcal T}$ is a truncation of ${\mathcal T}^+$. In particular, any tower may be regarded as an extension of itself.
This will be called the {\it degenerate} extension. \EndDefinition \Definition\label{good lift def} Let $\mathcal T$ be a tower of $3$-manifolds with base $M$ and top $N$, and let $h:N\to M$ denote the associated tower map. Let $\phi$ be a PL map from a compact PL space $K$ to $M$. By a {\it homotopy-lift} of $\phi$ through the tower $\mathcal T$ we mean a PL map $\widetilde \phi:K\to N$ such that $h\circ\widetilde \phi$ is homotopic to $\phi$. A homotopy-lift $\widetilde \phi$ of $\phi$ will be termed {\it good} if the inclusion homomorphism $\pi_1({\widetilde\phi(K)})\to\pi_1(N)$ is surjective. \EndDefinition \Lemma\label{first hairy lemma} Suppose that $K$ is a compact PL space with freely indecomposable fundamental group of rank $k\ge2$. Suppose that $\mathcal T=(M_0,N_0,p_1,\ldots,N_n)$ is a good tower of $3$-manifolds of height $n$. Suppose that $\phi:K\to M_0$ is a $\pi_1$-injective PL map, and that $\widetilde \phi:K\to N_n$ is a good homotopy-lift of $\phi$ through the tower $\mathcal T$. Suppose that $p_{n+1}:M_{n+1}\to N_n$ is a two-sheeted covering space of $N_n$, and that the map $\widetilde \phi:K\to N_n$ admits a lift to the covering space $M_{n+1}$. Suppose that {\it either} \begin{enumerate} \item[($\alpha$)]$n\ge1$, the manifold $N_{n}$ is closed (so that $M_{n+1}$ is closed, cf. \ref{tower remark}), and the covering map $$p_n\circ p_{n+1}:M_{n+1}\to M_{n-1}$$ is regular and has covering group isomorphic to $\Z_2\times\Z_2$; or \item[($\beta$)]$\partial M_{n+1}\ne\emptyset$, or \item[($\gamma$)]$n=0$. 
\end{enumerate} Then there exists a compact submanifold $N_{n+1}$ of $M_{n+1}$ with the following properties: \Conclusions \item ${\mathcal T}^+=(M_0,N_0,p_1,\ldots,N_n,p_{n+1},M_{n+1},N_{n+1})$ is a good height-$(n+1)$ tower extending ${\mathcal T}$, and \item there is a good homotopy-lift $\widetilde \phi^+$ of $\phi$ through the tower ${\mathcal T}^+$ such that $$\ADS(\widetilde \phi^+)<\ADS(\widetilde \phi).$$ \EndConclusions \EndLemma \Proof Let $h:N_n\to M_0$ be the tower map associated to $\calt$. We fix a lift $f:K\to M_{n+1}$ of the map $\widetilde \phi:K\to N_n$ to the covering space $M_{n+1}$. Since $\widetilde\phi$ is a homotopy-lift of $\phi$, the map $h\circ p_{n+1}\circ f:K\to M_0$ is homotopic to $\phi$. Since $\phi_\sharp:\pi_1(K)\to\pi_1(M_0)$ is injective, it now follows that $f_\sharp:\pi_1(K)\to\pi_1(M_{n+1})$ is also injective. We may therefore apply Proposition \ref{DS and incompressible boundary} to this map $f$, taking $M=M_{n+1}$ and $N=N_{n+1}$. We choose a map $g:K\to M_{n+1}$ homotopic to $f$, with $\ADS(g)\le\ADS(f)$, and a compact $3$-dimensional submanifold $N=N_{n+1}$ of $\inter M_{n+1}$, such that conditions (i)--(iv) of \ref{DS and incompressible boundary} hold with $M=M_{n+1}$. It is clear from the definition that ${\mathcal T}^+=(M_0,N_0,p_1,\ldots,N_n,p_{n+1},M_{n+1},N_{n+1})$ is a tower extending $\calt$. To show that the tower $\calt^+$ is good, we first observe that conditions (1)--(4) of Definition \ref{good tower def} hold whenever $j\le n$ because $\calt$ is a good tower. For $j=n+1$, conditions (1) and (2) of Definition \ref{good tower def} follow from conditions (iv) and (iii) of \ref{DS and incompressible boundary}, while condition (3) of Definition \ref{good tower def} follows from the hypothesis that $p_{n+1}:M_{n+1}\to N_n$ is a two-sheeted covering.
The case $j=n+1$ of condition (4) of Definition \ref{good tower def} is clear if alternative ($\alpha$) of the hypothesis holds, and is vacuously true if alternative ($\beta$) or ($\gamma$) holds. Hence $\calt^+$ is a good tower. Since by condition (i) of Proposition \ref{DS and incompressible boundary} we have $\inter N_{n+1}\supset g(K)$, we may regard $g:K\to M_{n+1}$ as a composition $\iota_{n+1}\circ\widetilde \phi^+$, where $\iota_{n+1}:N_{n+1}\to M_{n+1}$ is the inclusion map and $\widetilde \phi^+$ is a PL map from $K$ to $N_{n+1}$. Since $g$ is homotopic to $f$, the map $h\circ p_{n+1}\circ\iota_{n+1}\circ\widetilde \phi^+= h\circ p_{n+1}\circ g:K\to M_0$ is homotopic to $\phi$. It follows that $\widetilde\phi^+$ is a homotopy-lift of $\phi$ through the tower $\calt^+$. Condition (ii) of \ref{DS and incompressible boundary} asserts that the inclusion homomorphism $\pi_1(\widetilde\phi^+(K))\to\pi_1(N_{n+1})$ is surjective, which according to Definition \ref{good lift def} means that the homotopy-lift $\widetilde\phi^+$ of $\phi$ is good. Finally, since the homotopy-lift $\widetilde\phi$ of $\phi$ is good by hypothesis, the inclusion homomorphism $\pi_1(\widetilde\phi(K))\to\pi_1(N_n)$ is surjective. As $f$ is a lift of $\widetilde\phi$ to the non-trivial covering space $M_{n+1}$ of $N_n$, it follows from Proposition \ref{stallings} that $\ADS(f)<\ADS(\widetilde\phi)$. But we chose $g$ in such a way that $\ADS(g)\le\ADS(f)$, and according to \ref{co-restriction} we have $\ADS(\widetilde\phi^+)=\ADS(g)$. Hence $\ADS(\widetilde\phi^+)<\ADS(\widetilde\phi)$. \EndProof \Lemma\label{second hairy lemma} Suppose that $K$ is a closed orientable surface of genus $g\ge2$. Suppose that $\mathcal T$ is a good tower of $3$-manifolds of height $n$. Let $M$ denote the base of $\mathcal T$, and assume that $\rk M\ge g+3$.
Suppose that $\phi:K\to M$ is a $\pi_1$-injective PL map, and that $\widetilde \phi$ is a good homotopy-lift of $\phi$ through the tower $\mathcal T$. Then at least one of the following alternatives holds: \Conclusions \item $N_n$ contains a connected (non-empty) closed incompressible surface of genus at most $g$; \item $n\ge1$ and $N_{n-1}$ contains a connected (non-empty) closed incompressible surface of genus at most $g$; or \item there exist a height-$(n+1)$ extension ${\mathcal T}^+$ of $\mathcal T$ which is a good tower, and a good homotopy-lift $\widetilde \phi^+$ of $\phi$ through the tower ${\mathcal T}^+$, such that $$\ADS(\widetilde \phi^+)<\ADS(\widetilde \phi).$$ \EndConclusions \EndLemma \Proof We write $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n),$$ so that $M=M_0$. We distinguish several cases. \Cases \Case{{\bf Case A:\ }}$\partial N_n\ne\emptyset$ and the homomorphism $\widetilde \phi_*:H_1(K;\Z_2)\to H_1(N_n;\Z_2)$ is surjective; \Case{{\bf Case B:\ }} $\partial N_n\ne\emptyset$ and $\widetilde \phi_*:H_1(K;\Z_2)\to H_1(N_n;\Z_2)$ is not surjective; \Case{{\bf Case C:\ }}$n=0$; \Case{{\bf Case D:\ }}$n\ge1$ and $N_n$ is closed. \EndCases In Case A, all the hypotheses of the final assertion of Proposition \ref{newprop} hold with $\widetilde\phi$ in place of $\phi$. It therefore follows from the final assertion of Proposition \ref{newprop} that conclusion (1) of the present lemma holds. In Case B, the map $\widetilde\phi:K\to N_n$ admits a lift to some two-sheeted covering space $p_{n+1}:M_{n+1}\to N_n$ of $N_n$. Since $\partial N_n\ne\emptyset$, we have $\partial M_{n+1}\ne\emptyset.$ This is alternative ($\beta$) of the hypothesis of Lemma \ref{first hairy lemma}. It therefore follows from \ref{first hairy lemma} that conclusion (3) of the present lemma holds. In Case C the argument is identical to the one used in Case B, except that we have alternative $(\gamma)$ of Lemma \ref{first hairy lemma} in place of alternative ($\beta$). 
We now turn to Case D. In this case, as was observed in \ref{tower remark}, we have $N_n=M_n$ and $N_{n-1}=M_{n-1}$, and $p_n$ is a two-sheeted covering map from $M_n$ to $M_{n-1}$. Let us set $r=\rk M\ge g+3$. According to Lemma \ref{2r-4}, for any index $j$ such that $1\le j\le n$ and such that $M_j$ is closed, we have $\rk M_j\ge r-1$. In particular, if we set $d=\rk M_{n-1}$, we have $d\ge r-1\ge g+2$. Now set $\widetilde\phi^-=p_n\circ\widetilde\phi:K\to M_{n-1}$. Then $X=\widetilde\phi^-_*(H_1(K;\Z_2))$ is a subspace of the $d$-dimensional $\Z_2$-vector space $V=H_1(M_{n-1};\Z_2)$. The hypotheses of Proposition \ref{newprop} hold with $N$ and $\phi$ replaced by $M_{n-1}$ and $\widetilde\phi^-$. Hence either $M_{n-1}$ contains a connected (non-empty) closed incompressible surface of genus at most $g$, or $X$ has dimension at most $g$. The first alternative gives conclusion (2) of the present lemma. There remains the subcase in which $X$ has dimension at most $g$. Since $d\ge r-1\ge g+2$, the dimension of $X$ is then at most $g\le d-2$. In this subcase we shall show that $\widetilde\phi:K\to M_n$ admits a lift to some two-sheeted covering space $p_{n+1}:M_{n+1}\to M_n$ of $M_n=N_n$ for which alternative ($\alpha$) of the hypothesis of Lemma \ref{first hairy lemma} holds. It will then follow from \ref{first hairy lemma} that conclusion (3) of the present lemma holds. Let $q$ denote the natural homomorphism from $\pi_1(M_{n-1})$ to $H_1(M_{n-1};\Z_2)$. The two-sheeted cover $M_n$ of $M_{n-1}$ corresponds to a normal subgroup of $\pi_1(M_{n-1})$ having the form $q^{-1}(Z)$, where $Z$ is some $(d-1)$-dimensional vector subspace of $V$. Since $\widetilde\phi^-$ admits the lift $\widetilde\phi$ to $M_n$, we have $X\subset Z\subset V$. Since in addition we have $\dim X\le d-2<d-1=\dim Z$, there exists a $(d-2)$-dimensional vector subspace $Y$ of $V$ with $X\subset Y\subset Z$.
The subgroup $q^{-1}(Y)$ determines a regular covering space $M_{n+1}$ of $M_{n-1}$ with covering group $\Z_2\times\Z_2$. Since $q^{-1}(Y)\subset q^{-1}(Z)$, the degree-four covering map $M_{n+1}\to M_{n-1}$ factors as the composition of a degree-two covering map $p_{n+1}:M_{n+1}\to M_n$ with $p_n:M_n\to M_{n-1}$. Thus the covering space $p_{n+1}:M_{n+1}\to M_n$ satisfies alternative ($\alpha$) of \ref{first hairy lemma}. It remains to show that $\widetilde\phi$ admits a lift to $M_{n+1}$. Since $\widetilde\phi^-_\sharp(\pi_1(K))\subset q^{-1}(X)\subset q^{-1}(Y)$, the map $\widetilde\phi^-$ admits a lift to the four-fold cover $M_{n+1}$ of $M_{n-1}$. Since $M_{n+1}$ is a regular covering space of $M_{n-1}$, there exist four different lifts of $\widetilde\phi^-$ to $M_{n+1}$. But $\widetilde\phi^-$ can have at most two lifts to $M_n$, and each of these can have at most two lifts to $M_{n+1}$. Hence each lift of $\widetilde\phi^-$ to $M_n$ admits a lift to $M_{n+1}$. In particular, $\widetilde\phi$ admits a lift to $M_{n+1}$. \EndProof \Lemma\label{there's an extension} Suppose that $K$ is a closed, orientable surface of genus $g\ge2$. Suppose that $M$ is a closed, orientable $3$-manifold such that $\rk M\ge g+3$, and that $\phi:K\to M$ is a $\pi_1$-injective PL map. Suppose that $$\calt_0=(M_0,N_0,p_1,\ldots,N_{n_0})$$ is a good tower with base $M$ such that $\phi$ admits a good homotopy-lift through $\calt_0$. Then either \Conclusions \item ${n_0}\ge1$, and $N_{{n_0}-1}$ contains a connected (non-empty) closed incompressible surface of genus at most $g$, or \item there exists a good tower $\calt_1$ which is a (possibly degenerate) extension of $\calt_0$, such that the top $N$ of $\calt_1$ contains a connected (non-empty) closed incompressible surface of genus at most $g$, and $\phi$ admits a good homotopy-lift $\widetilde\phi_1$ through the tower $\calt_1$. \EndConclusions \EndLemma \Proof Let us fix a good homotopy-lift $\widetilde\phi_0$ of $\phi$ through $\calt_0$.
Let $\cals$ denote the set of all ordered pairs $(\calt,\widetilde\phi)$ such that $\calt$ is a good tower which is an extension of $\calt_0$ and $\widetilde\phi$ is a good homotopy-lift of $\phi$ through $\calt$. Then we have $(\calt_0,\widetilde\phi_0)\in\cals$, and so $\cals\ne\emptyset$. Hence there is an element $(\calt_1,\widetilde\phi_1)$ of $\cals$ such that $\ADS(\widetilde\phi_1)\le\ADS(\widetilde\phi)$ for every element $(\calt,\widetilde\phi)$ of $\cals$. Let us write $$\calt_1=(M_0,N_0,p_1,\ldots,N_{n_1}).$$ The hypotheses of Lemma \ref{second hairy lemma} now hold with $\calt_1$ and $\widetilde\phi_1$ in place of $\calt$ and $\widetilde\phi$. Hence one of the following alternatives must hold: \begin{enumerate} \item[\ref{second hairy lemma}(1)] $N_{n_1}$ contains a connected (non-empty) closed incompressible surface of genus at most $g$; \item[\ref{second hairy lemma}(2)]$n_1\ge1$ and $N_{n_1-1}$ contains a connected (non-empty) closed incompressible surface of genus at most $g$; or \item[\ref{second hairy lemma}(3)] there exist a height-$(n_1+1)$ extension ${\mathcal T}^+$ of $\calt_1$ which is a good tower, and a good homotopy-lift $\widetilde \phi^+$ of $\phi$ through the tower ${\mathcal T}^+$, such that $$\ADS(\widetilde \phi^+)<\ADS(\widetilde \phi_1).$$ \end{enumerate} If \ref{second hairy lemma}(1) holds, then the tower $\calt_1$ has the property asserted in conclusion (2) of the present lemma. If \ref{second hairy lemma}(2) holds, and if $n_1>{n_0}$ (i.e. $\calt_1$ is a non-degenerate extension of $\calt_0$), then the height-$(n_1-1)$ truncation $\calt_1'$ of $\calt_1$ is an extension of $\calt_0$, and conclusion (2) holds with $\calt_1'$ in place of $\calt_1$. If \ref{second hairy lemma}(2) holds and $n_1={n_0}$ (i.e. $\calt_1$ is a degenerate extension of $\calt_0$), conclusion (1) of the present lemma holds.
Finally, if \ref{second hairy lemma}(3) holds, then $(\calt^+,\widetilde\phi^+)\in\cals$, and we have a contradiction to the minimality of $\ADS(\widetilde\phi_1)$. \EndProof \Proposition\label{tower proposition} Suppose that $g$ is an integer $\ge2$, that $M$ is a closed, orientable $3$-manifold with $\rk M\ge g+3$, and that $\pi_1(M)$ has a subgroup isomorphic to a genus-$g$ surface group. Then there exists a good tower $$\calt=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n),$$ with base $M=M_0$, such that $N_n$ contains a connected incompressible closed surface of genus $\le g$. \EndProposition \Proof Let $K$ denote a closed, orientable surface of genus $g$. The hypothesis implies that there is a $\pi_1$-injective PL map $\phi:K\to M$. According to Proposition \ref{DS and incompressible boundary}, there exist a PL map $\widetilde\phi_0:K\to M$ homotopic to $\phi$, and a compact, connected $3$-submanifold $N_0$ of $\inter M$, such that (i) $\inter N_0\supset \widetilde\phi_0(K)$, (ii) the inclusion homomorphism $\pi_1(\widetilde\phi_0(K))\to\pi_1(N_0)$ is surjective, (iii) $\partial N_0$ is incompressible in $M$, and (iv) $N_0$ is irreducible. According to the definitions, this means that $\calt_0=(M,N_0)$ is a good tower of height $0$ and that $\widetilde\phi_0$ is a good homotopy-lift of $\phi$ through $\calt_0$. We apply Lemma \ref{there's an extension} with these choices of $\calt_0$ and $\widetilde\phi_0$. Conclusion (1) of Lemma \ref{there's an extension} cannot hold, since $\calt_0$ has height $0$. Hence conclusion (2) must hold. The extension $\calt=\calt_1$ of $\calt_0$ given by conclusion (2) is a good tower whose top contains a connected, closed incompressible surface of genus at most $g$. \EndProof \Lemma \label{simple tower} Suppose that $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n)$$ is a good tower of $3$-manifolds such that $M_0$ is simple. Then the manifolds $M_j$ and $N_j$ are all simple for $j=0,\ldots,n$.
\EndLemma \Proof By hypothesis $M_0$ is simple. If $M_j$ is simple for a given $j\le n$, then since $N_j$ is a submanifold of $M_j$ bounded by a (possibly disconnected and possibly empty) incompressible surface, it is clear from Definition \ref{simple def} that $N_j$ is simple. If $j<n$ it then follows from \ref{simple covering} that the two-sheeted covering space $M_{j+1}$ of $N_j$ is also simple. \EndProof The following theorem is the main topological result of this paper. \Theorem\label{top 11} Let $g$ be an integer $\ge2$. Let $M$ be a closed simple $3$-manifold such that $\rk M \ge 4g-1$ and $\pi_1(M)$ has a subgroup isomorphic to a genus-$g$ surface group. Then $M$ contains a closed, incompressible surface of genus at most $g$. \EndTheorem \Proof Applying Proposition \ref{tower proposition}, we find a good tower $${\mathcal T}=(M_0,N_0,p_1,M_1,N_1,p_2,\ldots,p_n,M_n,N_n),$$ with base $M_0$ homeomorphic to $M$ and with some height $n\ge0$, such that $N_n$ contains a connected incompressible closed surface $F$ of genus $\le g$. According to the definition of a good tower, $\partial N_{n}$ is incompressible (and, {\it a priori}, possibly empty) in $M_{n}$. Hence $N_{n}$ is $\pi_1$-injective in $M_{n}$. Since $F$ is incompressible in $N_{n}$, it follows that it is also incompressible in $M_{n}$. Since $M$ is simple, it follows from Lemma \ref{simple tower} that all the $M_j$ and $N_j$ are simple. Let $m$ denote the least integer in $\{0,\ldots,n\}$ for which $M_m$ contains a closed incompressible surface $S_m$ of genus at most $g$. To prove the theorem it suffices to show that $m=0$. Let $h$ denote the genus of $S_m$. Suppose to the contrary that $m\ge1$. Then the hypotheses of Proposition \ref{new improved old prop 3} hold with $N_{m-1}$ and $M_m$ playing the respective roles of $N$ and $\widetilde N$. Suppose that conclusion (1) of \ref{new improved old prop 3} holds, i.e.
that $ N_{m-1}$ contains an incompressible closed surface $S_{m-1}$ with $\genus(S_{m-1})\le h\le g$. According to the definition of a good tower, $\partial N_{m-1}$ is an incompressible (and possibly empty) surface in $M_{m-1}$. Hence $N_{m-1}$ is $\pi_1$-injective in $ M_{m-1}$. Since $S_{m-1}$ is incompressible in $N_{m-1}$, it follows that it is also incompressible in $M_{m-1}$. We therefore have a contradiction to the minimality of $m$. Hence conclusion (2) of \ref{new improved old prop 3} must hold; in particular, $N_{m-1}$ is closed, so that $N_{m-1}=M_{m-1}$; and $\rk M_{m-1}=\rk N_{m-1}\le4h-3\le4g-3$. On the other hand, since by hypothesis we have $\rk M_0\ge4g-1$, it follows from Lemma \ref{2r-4} that for any index $j$ such that $0\le j\le n$ and such that $M_j$ is closed, we have $\rk M_j\ge 4g-2$. This is a contradiction, and the proof is complete. \EndProof \section{Proof of the geometric theorem} \label{geometric section} As a preliminary to the proof of Theorem \ref{geom 11} we shall point out how the Marden tameness conjecture, recently established by Agol \cite{agol} and by Calegari-Gabai \cite{cg}, strengthens the results proved in \cite{accs}. We first recall some definitions from \cite[Section 8]{accs}. Let $\Gamma$ be a discrete torsion-free subgroup of $\Isom_+(\haitch^3)$, and let $k\ge2$ be an integer. We say that $\lambda$ is a {\em $k$-Margulis number} for $\Gamma$, or for $M=\haitch^3/\Gamma$, if for any $k$ elements $\xi_1,\dots,\xi_k \in\Gamma$, and for any $z\in \haitch^3$, we have that either \Bullets \item the group $\langle\xi_1,\dots,\xi_k\rangle$ is generated by at most $k-1$ abelian subgroups, or \item $\max_{i=1}^k \dist(\xi_i\cdot z,z)\ge \lambda$. 
\EndBullets We say that $\lambda$ is a {\em strong $k$-Margulis number} for $\Gamma$, or for $M$, if for any $k$ elements $\xi_1,\dots,\xi_k \in\Gamma$, and for any $z\in \haitch^3$, we have that either \Bullets \item the group $\langle\xi_1,\dots,\xi_k\rangle$ is generated by at most $k-1$ abelian subgroups, or \item $\displaystyle \sum_{i=1}^k\frac{1}{1+e^{\dist(\xi_i\cdot z,z)}} \le \frac{k}{1+e^{\lambda}}\ . $ \EndBullets We note that if $\lambda$ is a strong $k$-Margulis number for $\Gamma$, then $\lambda$ is also a $k$-Margulis number for $\Gamma$. A group $\Gamma$ is termed {\it $k$-free,} where $k$ is a positive integer, if every subgroup of $\Gamma$ whose rank is at most $k$ is free. \Theorem\label{accs plus agol} Let $k\ge2$ be an integer and let $\Gamma$ be a discrete subgroup of $\Isom_+(\haitch^3)$. Suppose that $\Gamma$ is $k$-free and purely loxodromic. Then $\log(2k-1)$ is a strong $k$\hyph Margulis number for $\Gamma$. \EndTheorem \Proof This is the same statement as \cite[Proposition 8.1]{accs} except that the latter result contains the additional hypothesis that $\Gamma$ is $k$-tame, in the sense that every subgroup of $\Gamma$ whose rank is at most $k$ is topologically tame. (To say that a discrete torsion-free subgroup $\Delta$ of $\Isom_+(\haitch^3)$ is topologically tame means that $\haitch^3/\Delta$ is diffeomorphic to the interior of a compact $3$-manifold.) But the main theorem of \cite{agol} or \cite{cg} asserts that any finitely generated discrete torsion-free subgroup $\Delta$ of $\Isom_+(\haitch^3)$ is topologically tame. \EndProof By combining this with another result from \cite{accs}, we shall prove: \Theorem\label{3-free case} Suppose that $M$ is an orientable hyperbolic $3$-manifold without cusps and that $\pi_1(M)$ is $3$\hyph free. Then either $M$ contains a hyperbolic ball of radius $(\log 5)/2$, or $\pi_1(M)$ is a free group of rank $2$.
\EndTheorem \Proof We may write $M=\haitch^3/\Gamma$, where $\Gamma\le\Isom_+(\haitch^3)$ is discrete and purely loxodromic. Since $\Gamma\cong\pi_1(M)$ is $3$-free, it follows from Theorem \ref{accs plus agol} that $\log5$ is a strong $3$-Margulis number, and in particular a $3$-Margulis number, for $\Gamma$ (or equivalently for $M$). According to \cite[Theorem 9.1]{accs}, if $M$ is a hyperbolic $3$-manifold without cusps, if $\pi_1(M)$ is $3$\hyph free and if $\lambda$ is a $3$\hyph Margulis number for $M$, then either $M$ contains a hyperbolic ball of radius $\lambda/2$, or $\pi_1(M)$ is a free group of rank $2$. The assertion follows. \EndProof \Corollary\label{3-free volume} Suppose that $M$ is a closed orientable hyperbolic 3\hyph manifold such that $\pi_1(M)$ is $3$-free. Then $M$ contains a hyperbolic ball of radius $(\log5)/2$. Hence the volume of $M$ is greater than $3.08$. \EndCorollary \Proof It follows from Theorem \ref{3-free case} that either $M$ contains a hyperbolic ball of radius $(\log5)/2$ or $\pi_1(M)$ is a free group of rank $2$. The latter alternative is impossible, because $\pi_1(M)$, as the fundamental group of a closed hyperbolic $3$-manifold, must have cohomological dimension $3$, whereas a free group has cohomological dimension $1$. Thus $M$ must contain a hyperbolic ball of radius $(\log5)/2$. The lower bound on the volume now follows by applying B\"or\"oczky's density estimate for hyperbolic sphere-packings as in \cite{logthree}. \EndProof \Theorem[Agol-Storm-Thurston]\label{firstAST} Suppose that $M$ is a closed orientable hyperbolic $3$-manifold containing a connected incompressible closed surface $S$. Then either $\vol(M)>\WHAT$, or the manifold $X$ obtained by splitting $M$ along $S$ has the form $X=|\calw|$ for some (possibly disconnected) book of $I$-bundles $\calw$.
\EndTheorem \Proof According to \cite[Corollary 2.2]{ast}, if $S$ is an incompressible closed surface in a closed orientable hyperbolic $3$-manifold $M$, if $X$ denotes the manifold obtained by splitting $M$ along $S$, and if $K=\overline{X-\Sigma}$ where $\Sigma$ denotes the relative characteristic submanifold of the simple manifold $X$, then the volume of $M$ is greater than $\chibar(K)\cdot\WHAT$. Hence either $\vol(M)>\WHAT$ or $\chi(K)=0$. In the latter case, we shall show that $X$ is a book of $I$-bundles; this will complete the proof. Note that each component of $K$ must have Euler characteristic $\le0$, because the components of the frontier of $K$ in $X$ are essential annuli in $X$. Since $\chi(K)=0$ it follows that each component of $K$ has Euler characteristic $0$. Hence if $Y$ denotes the union of all components of $\Sigma$ with strictly negative Euler characteristic, and if we set $Z=\overline{X-Y}$, then each component of $Z$ has Euler characteristic $0$. But $Z$ is $\pi_1$-injective in $X$ because its frontier components are essential annuli. Since $X$ is simple, the components of $Z$ are solid tori. Since $Y=\overline{X-Z}$ is an $I$-bundle with $Y\cap Z=\partial_vY$, and the components of $\partial_vY$ are $\pi_1$-injective in $X$ and hence in $Z$, it follows from the definition that $X$ is a book of $I$-bundles. \EndProof \Proposition\label{what if it's the other thing} Suppose that $M$ is a closed orientable hyperbolic 3\hyph manifold such that $\rk M\ge7$. Suppose that $\pi_1(M)$ has a subgroup isomorphic to a genus-$2$ surface group. Then $\vol M\ge\WHAT$. \EndProposition \Proof Since $M$ is simple and $\rk M\ge7$, it follows from Theorem \ref{top 11} that $M$ contains a closed incompressible surface $S$ of genus at most $2$; since $M$ is hyperbolic, $S$ cannot be a torus, and hence $S$ has genus exactly $2$. Suppose that $\vol M<\WHAT$. Let $X$ denote the manifold obtained by splitting $M$ along $S$. According to Theorem \ref{firstAST}, $X$ has the form $|\calw|$ for some (possibly disconnected) book of $I$-bundles $\calw$.
Consider first the case in which $X$ is connected. Since $S$ has genus $2$, we have $\chibar(X)=2$. By Lemma \ref{moosday} it follows that $$\rk(X)\le2\chibar(X)+1\le5.$$ Hence $$\rk M\le\rk X+1\le6,$$ a contradiction to the hypothesis. There remains the case in which $X$ has two components, say $X_1$ and $X_2$. Since $S$ has genus $2$, we have $\chibar(X_i)=1$ for $i=1,2$. By Lemma \ref{moosday}, it follows that $$\rk(X_i)\le2\chibar(X_i)+1=3.$$ Hence $$\rk M\le\rk X_1+\rk X_2\le6,$$ and we have a contradiction. (The bound of $6$ in this last inequality could easily be improved to $4$, but this is obviously not needed.) \EndProof We can now prove our main geometrical result. \Theorem\label{geom 11} Let $M$ be a closed orientable hyperbolic $3$-manifold such that $\vol M \le3.08$. Then $\rk M \le 6$. \EndTheorem \Proof Assume that $\rk M \ge 7$. If $\pi_1(M)$ has a subgroup isomorphic to a genus-$2$ surface group, then it follows from Proposition \ref{what if it's the other thing} that $\vol M\ge\WHAT>3.08$, a contradiction to the hypothesis. There remains the possibility that $\pi_1(M)$ has no subgroup isomorphic to a genus-$2$ surface group. In this case, since $\rk M \ge 5$, it follows from \cite[Proposition 7.4 and Remark 7.5]{accs} that $\pi_1(M)$ is $3$-free. Hence by Corollary \ref{3-free volume} we have $\vol M>3.08$, and again the hypothesis is contradicted. \EndProof \bibliographystyle{hplain}
\section{Introduction} A new approach to the dark energy problem, recently suggested in \cite{serendipity}, is inspired by the necessity to avoid the fine tuning problem. This approach suggests a theory in which the de Sitter or anti-de Sitter evolution can occur at any value of the effective cosmological constant $\Lambda$ -- the antithesis to a dark energy scale encoded in the action of the model and fine tuned to the observational data. A concrete observable value of $\Lambda$ in this theory is supposed to be selected by a mechanism analogous to symmetry breaking \cite{serendipity}. Interestingly, the realization of this approach has quite unexpectedly also led to an analogue of the dark matter phenomenon, characterized at large distances by a gravitational attraction stronger than in general relativity or Newtonian theory. The action of this theory was shown to generate vacuum equations of motion which admit the de Sitter or anti-de Sitter background as a solution. This background carries only transverse-traceless gravitons as propagating physical modes and is free from ghost instabilities. The stability property was proven in \cite{serendipity} by very extensive calculations for a maximally symmetric background and then extended in \cite{Solodukhin} to generic Einstein spaces $R_{\mu\nu}=\Lambda g_{\mu\nu}$ with a vanishing traceless part of the Ricci tensor \begin{eqnarray} E_{\mu\nu}\equiv R_{\mu\nu}-\frac14\,g_{\mu\nu}R=0. \label{Einsteinspace} \end{eqnarray} Thus, this model could be regarded as one of the first modifications of the Einstein theory, made by Einstein himself, who for reasons of unification with electromagnetism suggested replacing the Einstein tensor $G_{\mu\nu}=R_{\mu\nu}-\frac12g_{\mu\nu}R$ in the left hand side of the Einstein equations by $E_{\mu\nu}$ \cite{Einstein}.
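It is worth recording the elementary trace count behind (\ref{Einsteinspace}). In four dimensions any Einstein space $R_{\mu\nu}=\Lambda g_{\mu\nu}$ has \begin{eqnarray} R=g^{\mu\nu}R_{\mu\nu}=4\Lambda,\qquad E_{\mu\nu}=\Lambda g_{\mu\nu}-\frac14\,g_{\mu\nu}\,(4\Lambda)=0, \end{eqnarray} while $g^{\mu\nu}E_{\mu\nu}=R-R=0$ holds identically, so that the equation $E_{\mu\nu}=0$ constrains only the traceless part of the Ricci tensor and leaves the value of $\Lambda=R/4$ completely free, in accordance with the absence of a preferred dark energy scale in this approach.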
The action with these properties is the following nonlocal functional of the spacetime metric $g_{\mu\nu}$,\footnote{We use the Euclidean signature spacetime and curvature tensor conventions, $R=g^{\mu\nu}R_{\mu\nu}=g^{\mu\nu}R^\alpha_{\;\;\mu\alpha\nu}= g^{\mu\nu}\partial_\alpha\Gamma^\alpha_{\nu\mu} -...\;$.} \begin{eqnarray} &&S=\frac{M^2}2\int dx\,g^{1/2}\,\left\{-R+ \alpha\,R^{\mu\nu} \frac1{\Box+\hat P}\,G_{\mu\nu} \right\},\;\;\;\; \label{action}\\ &&\hat P\equiv P_{\alpha\beta}^{\;\;\;\mu\nu} =a R_{(\alpha\;\;\beta)}^{\;\;\,(\mu\;\;\,\nu)} +b \big(g_{\alpha\beta}R^{\mu\nu} +g^{\mu\nu}R_{\alpha\beta}\big) +c R^{(\mu}_{(\alpha}\delta^{\nu)}_{\beta)} +d R\,g_{\alpha\beta}g^{\mu\nu} +e R \delta^{\mu\nu}_{\alpha\beta}, \label{potential} \end{eqnarray} where the hat denotes matrices acting on symmetric tensors, and we use the condensed notation for the Green's function of the covariant operator \begin{eqnarray} \Box+\hat P\equiv\Box\,\delta_{\alpha\beta}^{\;\;\;\mu\nu} +P_{\alpha\beta}^{\;\;\;\mu\nu}, \quad \Box=g^{\lambda\sigma} \nabla_\lambda\nabla_\sigma, \label{operator} \end{eqnarray} acting on any symmetric tensor field $\Phi_{\mu\nu}$ as \begin{eqnarray} &&\frac1{\Box+\hat P}\,\Phi_{\mu\nu}(x)\equiv \Big[\,\frac1{\Box+\hat P}\,\Big]_{\mu\nu}^{\alpha\beta}\Phi_{\alpha\beta}(x) =\int dy\,G_{\mu\nu}^{\alpha\beta}(x,y)\, \Phi_{\alpha\beta}(y) \end{eqnarray} with $G_{\mu\nu}^{\alpha\beta}(x,y)$ -- the two-point kernel of this Green's function. Thus this model formally falls into the category of nonlocal theories descending from the old approach to nonlocal QFT and quantum gravity \cite{Efimov} and its latest development \cite{nonloccosm} motivated by cosmological implications \cite{DeffWood,TsamisWoodard,Odintsovetal,ParkDodelson} and the requirements of renormalizability and unitarity \cite{latest}. However, in contrast to the functional ambiguity in the choice of action, characteristic of these works, here we have only a parametric freedom. 
The action (\ref{action}) has one dimensionful parameter $M$ and six dimensionless parameters $\alpha$, $a$, $b$, $c$, $d$ and $e$, the first of them, $\alpha$, determining the overall magnitude of the nonlocal correction to the Einstein term. For $|\alpha|\ll 1$ and the value of $M$ related to the Planck mass $M_P$, \begin{eqnarray} M^2=\frac{M^2_P}{1-\alpha}, \label{M_Prenorm} \end{eqnarray} the theory (\ref{action}) has a GR limit on a {\em flat-space background}\footnote{Note that the nonlocal part contributes to the quadratic part of the action in metric perturbations and renormalizes the value of the Newton constant \cite{covnonloc}. The structure of nonlocal corrections in (\ref{action}) is motivated by the nonlocal covariant expansion in powers of the curvature for the Einstein action including the Gibbons-Hawking surface term \cite{covnonloc}.}, whereas the rest of the parameters are restricted by the requirement of a stable (A)dS solution existing in this theory. These restrictions read \begin{eqnarray} &&\alpha=-A-4B, \label{relation}\\ &&C=\frac23, \label{Crelation}\\ &&M_{\rm eff}^2 =\frac{A^2-\alpha^2}{\alpha}\,M^2>0, \label{effectivemass} \end{eqnarray} where the new quantities $A$, $B$ and $C$ read in terms of the original parameters \begin{eqnarray} &&A=a+4\,b+c,\quad B=b+4\,d+e, \label{AB}\\ &&C=\frac{a}3-c-4e, \label{C} \end{eqnarray} and $M_{\rm eff}$ is the effective Planck mass which determines the cutoff scale of perturbation theory in the (A)dS phase and the strength of the gravitational interaction of matter sources. The condition (\ref{relation}) guarantees the existence of the (A)dS solution, while Eqs.~(\ref{Crelation})--(\ref{effectivemass}) are responsible for its stability.
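The origin of the normalization (\ref{M_Prenorm}) can be illustrated by a simple transverse-traceless (TT) sector sketch on the flat-space background, where the curvature potential $\hat P$ vanishes and $\Box+\hat P\to\Box$. For TT perturbations $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, $\partial^\mu h_{\mu\nu}=0$, $h^\mu_{\;\mu}=0$, the linearized curvatures are \begin{eqnarray} R^{(1)}_{\mu\nu}=-\frac12\,\Box h_{\mu\nu},\qquad R^{(1)}=0,\qquad G^{(1)}_{\mu\nu}=R^{(1)}_{\mu\nu}, \end{eqnarray} so that the nonlocal term of (\ref{action}) contributes at quadratic order \begin{eqnarray} \alpha\,R^{\mu\nu}\frac1{\Box}\,G_{\mu\nu}\,\Big|_{\,\rm TT}= \alpha\Big(-\frac12\,\Box h^{\mu\nu}\Big)\frac1{\Box} \Big(-\frac12\,\Box h_{\mu\nu}\Big)= \frac{\alpha}4\,h^{\mu\nu}\Box h_{\mu\nu}, \end{eqnarray} which, up to total derivatives, has the same structure as the quadratic TT part of the Einstein term. The full TT kinetic term thus carries the overall prefactor $M^2(1-\alpha)$, and its identification with the GR value $M_P^2$ reproduces (\ref{M_Prenorm}).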
The calculation of the gauge-fixed quadratic part of the action on the (A)dS background shows that the longitudinal and trace modes, which formally have a ghost nature, are unphysical and can be eliminated by residual gauge transformations preserving the gauge \cite{serendipity}. This well-known mechanism leaves only two transverse-traceless physical modes propagating on the (A)dS background, similar to GR theory. Finally, as was shown in \cite{Solodukhin}, the additional condition, \begin{eqnarray} a=2, \label{arelation} \end{eqnarray} allows one to extend the ghost stability arguments to generic Einstein backgrounds with a nonvanishing Weyl tensor. What is critically different from the GR phase of the theory is its effective gravitational constant $G_{\rm eff}\equiv 1/8\pi M_{\rm eff}^2$, which can be much larger than the Newton constant $G_N=1/8\pi M_P^2$, because in view of (\ref{relation}) a natural range of the parameter $A$ is $A\sim\alpha$, and $G_{\rm eff}\sim G_N/|\alpha|\gg G_N$. This property can be interpreted as a simulation of the DM mechanism, because it implies a strengthening of the gravitational attraction in the (A)dS phase of the theory and its possible effect on rotation curves at relevant distance scales. The main goal of this paper is a simple derivation of the above results, which is based on the new representation of the action (\ref{action}) with the critical value (\ref{relation}) of $\alpha$, \begin{eqnarray} &&S=-\frac{M^2_{\rm eff}}2 \int dx\,g^{1/2}\,E^{\mu\nu} \frac1{\Box+\hat P}\,E_{\mu\nu}. \label{newrep} \end{eqnarray} As we show below, it holds for closed compact spacetimes with the Euclidean signature. This Euclidean setting underlies the problems of black hole thermodynamics and the Schwinger-Keldysh technique for quantum expectation values in a special class of quantum states like the Euclidean (quasi-de Sitter invariant) vacuum.
The advantage of this representation is obvious -- the form of (\ref{newrep}), quadratic in $E_{\mu\nu}$, directly indicates the existence of Einstein space solutions satisfying (\ref{Einsteinspace}) and also very easily gives the inverse propagator of the theory on their background. The single-pole nature of the propagator with a positive residue yields the ghost-free criteria (\ref{Crelation})-(\ref{effectivemass}) and (\ref{arelation}). All this is presented in the next two sections. The concluding section is devoted to the discussion of a serious difficulty of our model, which clearly manifests itself in its new representation (\ref{newrep}). In contrast to the anticipations of \cite{serendipity}, the theory has the GR limit neither in the short wavelength regime $\nabla\nabla\gg R$ nor in the limit of $\alpha\to 0$. This property, which was first observed in \cite{Solodukhin}, is explained by the presence of the constant zero mode of the scalar $\Box$ operator on a compact spacetime without a boundary. This leads to a nonanalytic behavior of the theory at $\alpha\to 0$ and the absence of a crossover between its dark energy phase and the GR phase, the latter existing only in asymptotically flat spacetime. Then we discuss the possibility of recovering the GR phase in a special conformal frame of the spacetime metric. Though a direct application of the model (\ref{action}) in realistic cosmology seems questionable, it might be interesting in the context of currently popular critical gravity theories \cite{critical}. In particular, it looks like a nonlocal version of these theories, quadratic in the curvature, because for the critical value (\ref{relation}) of $\alpha$ and a generic constant $C\neq 2/3$, its propagator has a double-pole nature and incorporates the so-called logarithmic modes \cite{critical}.
\section{Euclidean field theory vs Schwinger-Keldysh technique and compactness of spacetime} The action (\ref{action}) above requires specification of boundary conditions for the Green's function. Any choice, however, violates causality in the initial value problem for dynamically evolving fields. Their nonlocal equations of motion break causality because the behavior of the field at any spacetime point is not independent of the field values in the future light cone of this point \cite{serendipity}. Therefore, the applicability of this action is restricted to the class of problems alternative to those of the evolution from the initial state. One such class is represented by gravitational thermodynamics implemented by Euclidean quantum gravity (EQG) -- quantum gravity in the Euclidean signature spacetime. Another class, in the Lorentzian signature spacetime, is mediated by a special technique adapting nonlocal equations of motion to causality. This is the Schwinger-Keldysh technique \cite{SchwKeld} for quantum expectation values $\langle\,{IN}\,|\,\hat{\cal O}(x)\,|\,{IN}\,\rangle$ of local physical observables $\hat{\cal O}$ in the initial quantum state $|\,{IN}\,\rangle$. Though the equations for $\langle\,{IN}\,|\,\hat{\cal O}(x)\,|\,{IN}\,\rangle$ are nonlocal, this quantity depends only on the past of the point $x$. This property is again not manifest and turns out to be a consequence of the locality and unitarity of the original fundamental field theory (achieved via a complicated set of cancellations between nonlocal terms with chronological and anti-chronological boundary conditions). In contrast to the Wick rotation of S-matrix theory, this technique is not related to Euclidean quantum field theory and to EQG, in particular. However, there exists a class of problems for which the retarded nature of effective equations explicitly follows from their quantum effective action calculated in Euclidean spacetime \cite{beyond}.
This is a statement based on the Schwinger-Keldysh technique \cite{SchwKeld,SchwKeld2} that for an appropriately defined initial quantum state $|\,{IN}\,\rangle$ the effective equations for the mean field \begin{eqnarray} g_{\mu\nu}=\langle{\,IN}\,|\,\hat g_{\mu\nu}|\,{IN}\,\rangle \end{eqnarray} originate from the {\em Euclidean} quantum effective action $S=S_{\rm Euclidean}[g_{\mu\nu}]$ by the following procedure \cite{beyond}\footnote{We formulate this statement directly for the case of gravity theory with the expectation value of the metric field operator $\hat g_{\mu\nu}(x)$, though it is valid in a much wider context of a generic local field theory \cite{beyond}.}. Calculate the nonlocal $S_{\rm Euclidean}[g_{\mu\nu}]$ and its variational derivative. In the Euclidean signature spacetime nonlocal quantities (the relevant Green's functions and their variations) are uniquely determined by their trivial (zero) boundary conditions at infinity, so that this variational derivative is unambiguous in the Euclidean theory. Then make a transition to the Lorentzian signature and impose the {\em retarded} boundary conditions on the resulting nonlocal operators, \begin{eqnarray} \left.\frac{\delta S_{\rm Euclidean}}{\delta g_{\mu\nu}(x)}\right|_{\;++++\,\; \Rightarrow\;-+++}^{\;\rm retarded}=0. \label{EuclidLorentz} \end{eqnarray} These equations are causal ($g_{\mu\nu}(x)$ depending only on the field behavior in the past of the point $x$) and satisfy all local gauge and diffeomorphism symmetries encoded in the original $S_{\rm Euclidean}[g_{\mu\nu}]$. To be more precise, the relation (\ref{EuclidLorentz}), which was proven to first order of perturbation theory in \cite{Hartle-Horowitz} and to all orders in \cite{beyond}, originates as follows.
The one-loop equation for the mean IN-IN field $g(x)$ contains the tadpole-type quantum contribution \begin{eqnarray} &&\frac{\delta S}{\delta g(x)} +\frac{i}2\,\int dy\,dz\,\frac{\delta^3 S}{\delta g(x)\,\delta g(y)\,\delta g(z)}\,G_{IN\!-\!IN}(y,z)=0,\\ &&\nonumber\\ &&G_{IN\!-\!IN}(x,y)= \langle IN\,|\,\hat g(x)\,\hat g(y)\,|\,IN\rangle, \end{eqnarray} with the IN-IN Wightman Green's function $G_{IN\!-\!IN}(x,y)$ alternative to the conventional Feynman propagator. As was shown in \cite{beyond}, for the Poincar\'e invariant vacuum state (associated with a plane wave decomposition of the IN-operators) the following relation holds \begin{eqnarray} &&\frac{i}2\,\int dy\,dz\,\frac{\delta^3 S}{\delta g(x)\,\delta g(y)\,\delta g(z)}\,G_{IN\!-\!IN}(y,z)= \left.\frac{\delta\varGamma_{E}^{\rm 1-loop}} {\delta g(x)}\,\right|_{\;++++\,\Rightarrow\,-+++}^{\;\rm retarded},\\ &&\varGamma_{E}^{\rm 1-loop}=\frac12\,{\rm Tr}\ln\frac{\delta^2 S_{\rm Euclidean}}{\delta g(x)\,\delta g(y)}. \end{eqnarray} This confirms the relation (\ref{EuclidLorentz}) with the full one-loop Euclidean effective action $\varGamma_{\rm Euclidean}=S_{\rm Euclidean}+\varGamma_{E}^{\rm 1-loop}$. \begin{figure}[h] \centerline{\epsfxsize 12cm \epsfbox{LorentzEuclid.eps}} \caption{\small The Euclidean de Sitter hemisphere, denoted by dashed lines, is used as a tool for constructing the Euclidean de Sitter invariant vacuum by the path integral over regular fields on the resulting compact spacetime. \label{Fig.1}} \end{figure} In \cite{serendipity} it was assumed that the model with the action (\ref{action}) falls into the range of validity of this procedure, and the action itself coincides with the nonlocal effective action of the Euclidean QFT calculated within a certain approximation of the curvature expansion \cite{quantum0,quantum1}.
This assumption implies a particular vacuum state $|\,{IN}\,\rangle$ and the one-loop approximation (in which it was proven to first order of perturbation theory in \cite{Hartle-Horowitz} and to all orders of the curvature expansion in \cite{beyond}). This range is likely to extend to multi-loop orders and to generalize to the (A)dS background considered below, with the state $|\,{IN}\,\rangle$ apparently coinciding with the Euclidean Bunch-Davies vacuum. At the heuristic level, the justification for this extension follows from Fig.\ref{Fig.1} depicting the compact Euclidean spacetime used as a tool for constructing the Euclidean vacuum within the well-known no-boundary prescription \cite{noboundary}. Attaching a Euclidean space hemisphere to the Lorentzian de Sitter spacetime makes it {\em compact}, replacing the original asymptotic de Sitter infinity. Thus the path integral over regular field configurations on this spacetime simulates the effect of the Euclidean de Sitter invariant vacuum. The role of spacetime {\em compactness} is very important here because it allows one to disregard possible surface terms originating from integrations by parts or from cyclic permutations under the functional traces in the Feynman diagrammatic technique for the effective action. This property will be used repeatedly in what follows. In particular, the Green's function will be uniquely defined by the condition of regularity on such a compact spacetime without a boundary. This information is sufficient to specify the Green's function, for which we require the following symmetric variational law (with respect to local metric variations in $\Box$ and $\hat P$) \begin{eqnarray} \delta\frac1{\Box+\hat P}=-\frac1{\Box+\hat P}\,\delta\big(\Box+\hat P\big) \frac1{\Box+\hat P}, \label{symvar} \end{eqnarray} characteristic of the Euclidean signature d'Alembertian defined on the space of regular fields on a compact spacetime without a boundary.
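For completeness, let us note how the variational law (\ref{symvar}) arises (a standard argument, reproduced here for the reader's convenience). Varying the defining relation of the Green's function,
\begin{eqnarray}
\big(\Box+\hat P\big)\,\frac1{\Box+\hat P}=\hat 1
\quad\Rightarrow\quad
\delta\big(\Box+\hat P\big)\,\frac1{\Box+\hat P}
+\big(\Box+\hat P\big)\,\delta\frac1{\Box+\hat P}=0, \nonumber
\end{eqnarray}
and acting with $(\Box+\hat P)^{-1}$ from the left immediately gives (\ref{symvar}). On a compact spacetime without a boundary no surface terms arise on the space of regular fields, so the left and right inverses of $\Box+\hat P$ coincide and this chain is unambiguous.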
A similar treatment of a nonlocal action in \cite{nonloccosm,DeffWood} was very reservedly called the ``integration by parts trick'' needing justification from the Schwinger-Keldysh technique. However, this trick only provides the causality of the effective equations, but does not guarantee the Euclidean-Lorentzian relation (\ref{EuclidLorentz}). The latter is based, among other things, on the choice of the $|\,{IN}\,\rangle$-state.\footnote{The $f(\Box^{-1}R)$ approach to nonlocal cosmology \cite{nonloccosm,DeffWood,ParkDodelson} assumes as a first principle the existence of causal generally covariant equations of motion not necessarily derivable by a variational procedure. Thus the action of \cite{nonloccosm} plays only an auxiliary role and is used merely as a tool for obtaining such equations, guaranteeing the matter stress tensor conservation. We are grateful to S. Deser and R. Woodard for clarifying this point.} \section{The new representation of the action and stability of Einstein space backgrounds} The action (\ref{action}) can be substantially simplified by using the compactness of the Euclidean spacetime discussed above. Its new representation is based on the following local identity which is valid for an arbitrary pure trace tensor function $\varPhi_{\mu\nu}=g_{\mu\nu}\varPhi$, \begin{eqnarray} (\Box+\hat P)\,g_{\mu\nu}\varPhi=g_{\mu\nu}\left(\Box -\frac\alpha4\,R\right)\varPhi +A\,E_{\mu\nu}\varPhi, \label{equation0} \end{eqnarray} where $E_{\mu\nu}$ is the traceless part of the Ricci tensor defined by (\ref{Einsteinspace}).
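The identity (\ref{equation0}) can be checked by a direct contraction of (\ref{potential}) with the metric. In four dimensions each term of $\hat P$ acting on a pure trace tensor produces either $R_{\alpha\beta}$ or $R\,g_{\alpha\beta}$,
\begin{eqnarray}
P_{\alpha\beta}^{\;\;\;\mu\nu}\,g_{\mu\nu}
=(a+4b+c)\,R_{\alpha\beta}+(b+4d+e)\,R\,g_{\alpha\beta}
=A\,R_{\alpha\beta}+B\,R\,g_{\alpha\beta}, \nonumber
\end{eqnarray}
with $A$ and $B$ given by (\ref{AB}). Decomposing $R_{\alpha\beta}=E_{\alpha\beta}+\frac14\,R\,g_{\alpha\beta}$ and using the critical value (\ref{relation}) of $\alpha$ then gives
\begin{eqnarray}
\hat P\,g_{\mu\nu}\varPhi
=A\,E_{\mu\nu}\varPhi
+\Big(\frac{A}4+B\Big)R\,g_{\mu\nu}\varPhi
=A\,E_{\mu\nu}\varPhi-\frac\alpha4\,R\,g_{\mu\nu}\varPhi, \nonumber
\end{eqnarray}
which together with the trivial $\Box$-term reproduces (\ref{equation0}).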
The nonlocal identity for the Green's function of the operator (\ref{operator}) arises if we take the scalar $\varPhi$ in the form of a nonlocal functional of another arbitrary scalar function $\varphi$, \begin{eqnarray} \varPhi=\frac1{\Box-\frac\alpha4\,R}\,\varphi \end{eqnarray} and act on (\ref{equation0}) by $(\Box+\hat P)^{-1}$ from the left, so that \begin{eqnarray} \frac1{\Box+\hat P}\,g_{\mu\nu}\varphi =g_{\mu\nu}\frac1{\Box-\frac\alpha4\,R}\,\varphi -\frac{A}{\Box+\hat P}\,E_{\mu\nu} \frac1{\Box-\frac\alpha4\,R}\,\varphi. \label{equation} \end{eqnarray} For a compact spacetime an important simplification occurs if we identify $\varphi$ with $R$ and take into account that \begin{eqnarray} \frac1{\Box-\frac\alpha4\,R}\,R =-\frac4\alpha . \label{equation1} \end{eqnarray} This equation holds for a compact spacetime without a boundary or under boundary conditions which do not generate surface terms under integration by parts in the following chain of relations \begin{eqnarray} &&\frac1{\Box-\frac\alpha4\,R}\,R=-\frac4\alpha\, \frac1{\Box-\frac\alpha4\,R}\, \left(\overrightarrow{\Box}-\frac\alpha4\,R\right)1 =-\frac4\alpha\, \frac1{\Box-\frac\alpha4\,R}\, \left(\overleftarrow{\Box}-\frac\alpha4\,R\right)1 =-\frac4\alpha. \label{equation2} \end{eqnarray} Therefore we have the basic identity \begin{eqnarray} \frac1{\Box+\hat P}\,g_{\mu\nu}\frac{R}4 =-\frac1\alpha\,g_{\mu\nu} +\frac{A}\alpha\frac1{\Box+\hat P}\, E_{\mu\nu} \label{equation3} \end{eqnarray} and its two straightforward corollaries \begin{eqnarray} &&\frac\alpha{\Box+\hat P}\,G_{\mu\nu} =g_{\mu\nu} +\frac{\alpha-A}{\Box+\hat P}\, E_{\mu\nu}, \label{equation4}\\ &&\frac\alpha{\Box+\hat P}\,R_{\mu\nu} =-g_{\mu\nu} +\frac{\alpha+A}{\Box+\hat P}\, E_{\mu\nu}.
\label{equation5} \end{eqnarray} Systematically using these identities in the integrand of (\ref{action}), we see that the Einstein term (linear in curvature) gets canceled and the result becomes quadratic in $E_{\mu\nu}$, \begin{eqnarray} &&S=\frac{M^2}2\int dx\,g^{1/2}\, \left\{-R+R^{\mu\nu} \left(\frac\alpha{\Box+\hat P} \,G_{\mu\nu}\right)\right\}\nonumber\\ &&\qquad\qquad\qquad= -\frac{M^2}2\,\frac{A^2-\alpha^2}\alpha \int dx\,g^{1/2}\,E^{\mu\nu} \frac1{\Box+\hat P}\,E_{\mu\nu}. \end{eqnarray} This is the new representation (\ref{newrep}) of the action, which is exact and explicitly contains the effective Planck mass (\ref{effectivemass}) suggested in \cite{serendipity}. It immediately allows one to prove the existence of generic Einstein space solutions (including the maximally symmetric ones derived in \cite{serendipity}) and the absence of ghost modes on top of them. Since (\ref{newrep}) is quadratic in $E_{\mu\nu}$, its first order variational derivative is at least linear in $E_{\mu\nu}$ with some complicated nonlocal operator coefficient, \begin{eqnarray} &&\frac{\delta S}{\delta g_{\mu\nu}}=\frac{M^2_{\rm eff}}2 \,g^{1/2}\, \Omega^{\mu\nu}_{\;\;\;\;\alpha\beta}(\nabla)\, \frac1{\Box+\hat P}\, E^{\alpha\beta}, \label{eom}\\ &&\Omega^{\mu\nu}_{\;\;\;\;\alpha\beta}(\nabla) =\Box\,\delta^{\mu\nu}_{\alpha\beta} +g^{\mu\nu}\nabla_\alpha\nabla_\beta -2\nabla_{(\alpha}\nabla^{(\mu}\delta^{\nu)}_{\beta)} +\frac12\,R\, \delta^{\mu\nu}_{\alpha\beta}+O[\,E\,], \label{Omega} \end{eqnarray} where $O[\,E\,]$ denotes terms vanishing in the limit $E_{\mu\nu}\to 0$. This guarantees the existence of vacuum solutions with $E_{\mu\nu}=0$. Perturbative stability of these solutions follows from the quadratic part of the action, which is easily calculable now. In view of the quadratic nature of (\ref{newrep}), the quadratic part of the action on the Einstein space background requires the variation of only the two explicit $E_{\mu\nu}$-factors.
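For the reader's convenience, the intermediate steps of the above reduction of (\ref{action}) to the quadratic form can be made explicit. Using (\ref{equation4}) and the decomposition $R^{\mu\nu}=E^{\mu\nu}+\frac14\,R\,g^{\mu\nu}$,
\begin{eqnarray}
R^{\mu\nu}\,\frac\alpha{\Box+\hat P}\,G_{\mu\nu}
=R+(\alpha-A)\Big(E^{\mu\nu}+\frac{R}4\,g^{\mu\nu}\Big)
\frac1{\Box+\hat P}\,E_{\mu\nu}, \nonumber
\end{eqnarray}
so that the $-R$ term in the integrand of (\ref{action}) is canceled. Then, by the symmetry of the Green's function under the integral and the basic identity (\ref{equation3}),
\begin{eqnarray}
\int dx\,g^{1/2}\,\frac{R}4\,g^{\mu\nu}\frac1{\Box+\hat P}\,E_{\mu\nu}
=\int dx\,g^{1/2}\,E^{\mu\nu}\frac1{\Box+\hat P}\,g_{\mu\nu}\frac{R}4
=\frac{A}\alpha\int dx\,g^{1/2}\,E^{\mu\nu}\frac1{\Box+\hat P}\,E_{\mu\nu}, \nonumber
\end{eqnarray}
where the $-g_{\mu\nu}/\alpha$ term of (\ref{equation3}) drops out due to the tracelessness of $E^{\mu\nu}$. Collecting the coefficients, $(\alpha-A)\big(1+\frac{A}\alpha\big)=-\frac{A^2-\alpha^2}\alpha$, which is exactly the prefactor of the new representation.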
For the metric variations $\delta g_{\mu\nu}\equiv h_{\mu\nu}$ satisfying the DeWitt gauge \begin{eqnarray} \chi^\mu\equiv\nabla_\nu h^{\mu\nu}-\frac12\,\nabla^\mu h=0, \label{DWgauge} \end{eqnarray} the variation of $E_{\mu\nu}$ reads \begin{eqnarray} &&\delta E_{\mu\nu}\Big|_{\;E_{\alpha\beta}=0}=-\frac12\,\Box h_{\mu\nu}-W_{(\mu\;\;\nu)}^{\;\,(\alpha\;\;\beta)} h_{\alpha\beta}+\frac1{12}\,Rh_{\mu\nu} +\frac18\,g_{\mu\nu}\left(\Box-\frac16\,R\right)h =-\frac12\hat D\, \bar h_{\mu\nu}, \label{deltabarR} \end{eqnarray} where the operator $\hat D$ \begin{eqnarray} \hat D\equiv \Box+2\hat W -\frac16\,R\,\hat1, \label{D} \end{eqnarray} acts on a traceless part of $h_{\mu\nu}$, the hat labels matrices acting on symmetric tensors, \begin{eqnarray} &&\bar h_{\mu\nu}\equiv \hat\varPi h_{\mu\nu}=h_{\mu\nu} -\frac14\,g_{\mu\nu}h,\quad \hat\varPi\equiv \varPi_{\mu\nu}^{\;\;\alpha\beta} =\delta_{\mu\nu}^{\alpha\beta} -\frac14\,g_{\mu\nu}g^{\alpha\beta},\\ &&\hat W h_{\mu\nu}\equiv W_{(\mu\;\;\nu)}^{\;\,(\alpha\;\;\,\beta)} h_{\alpha\beta}, \end{eqnarray} and $W_{\mu\;\nu}^{\;\alpha\;\beta}$ denotes the Weyl tensor. Note that the operator $\hat D$ commutes with the projector $\hat\varPi$, $[\hat\varPi,\hat D]=0$, because of the traceless nature of the Weyl tensor, $\hat\varPi\hat W=\hat W\hat\varPi=\hat W$, so that the variation (\ref{deltabarR}) of the traceless $E_{\mu\nu}$ is also traceless as it should. In matrix notations the operator $\Box+\hat P$ on the Einstein background reads \begin{eqnarray} \big(\Box+\hat P\big)\Big|_{\;E_{\mu\nu}=0}=\Box+a\,\hat W-\frac{C}4\,R\hat\varPi -\frac\alpha4\,R\,(\hat1-\hat\varPi). 
\end{eqnarray} Therefore, in view of (\ref{deltabarR}), the property $[\hat\varPi,\hat D]=0$ and the obvious relation \begin{eqnarray} \hat\varPi\,\frac1{\Box+\hat P}\,\hat\varPi=\hat\varPi\,\frac1{\Box+a\,\hat W-\frac{C}4\,R\,\hat1}\,\hat\varPi \end{eqnarray} we finally have the quadratic part of the action in terms of the traceless part $\bar h_{\mu\nu}$ of the metric perturbations $h_{\mu\nu}$ satisfying the DeWitt gauge \begin{eqnarray} S_{(2)}\Big|_{\;E_{\mu\nu}=0} =-\frac{M^2_{\rm eff}}2 \int d^4x\,g^{1/2}\big(\hat D\bar h^{\mu\nu}\big) \,\frac1{\Box+a\,\hat W -\frac{C}4\,R\,\hat1}\, \big(\hat D\bar h_{\mu\nu}\big). \label{S_2} \end{eqnarray} This expression was first derived in \cite{Solodukhin}. For generic values of the parameters $a$ and $C$ the propagator of the theory features double poles corresponding to the zero modes of the operator $\hat D$. This is a nonlocal generalization of the situation characteristic of the critical gravity theories with a local action containing higher-order derivatives \cite{critical}. Local theories with double poles have a distinguished status different from unstable higher-derivative models with massive ghosts -- their stability is determined also by special logarithmic modes which might or might not violate unitarity \cite{critical}. Interestingly, flexibility in the values of the parameters $a$ and $C$ allows us to avoid perturbative instability of the Einstein space background. The quadratic form (\ref{S_2}) can be made local and thus guarantee the existence of the propagator with a single positive-residue pole. This is easily achieved by demanding equality of the operator (\ref{D}) and the operator in the denominator of (\ref{S_2}) along with the positivity of $M^2_{\rm eff}$, \begin{eqnarray} \hat D=\Box+a\,\hat W-\frac{C}4\,R\,\hat1. 
\end{eqnarray} This yields the value $C=2/3$ derived in \cite{serendipity} by very extensive calculations and in addition leads to a unique value of another parameter $a=2$, which allows us to extend stability arguments to generic Einstein space backgrounds \cite{Solodukhin} (the condition $a=2$ is not necessary on maximally symmetric background with $\hat W=0$ and, thus, was derived in \cite{Solodukhin} in the course of generalizing the model of \cite{serendipity} to generic Einstein spaces). \section{GR phase: asymptotically flat spacetime vs cosmological boundary conditions} Using (\ref{Omega}) in the equation of motion (\ref{eom}) one can see that in the UV limit $\nabla\nabla\gg R$ the variational derivative of the action \begin{eqnarray} &&\frac{\delta S}{\delta g_{\mu\nu}}\simeq \frac{M^2_{\rm eff}}2 \,g^{1/2}\left(R_{\mu\nu} -\frac12\,\nabla_\mu\nabla_\nu\frac1\Box R\right) +O[\,E^2\,] \end{eqnarray} remains nonlocal and differs from the general relativistic expression even for $\alpha\to 0$. In particular, in the approximation linear in the curvatures matter sources are coupled to gravity according to \begin{eqnarray} R_{\mu\nu} -\frac12\,\nabla_\mu\nabla_\nu\frac1\Box R +O[\,R^2\,] =\frac1{M^2_{\rm eff}}\,T_{\mu\nu}, \label{mattersource} \end{eqnarray} where nonlinear in the curvature terms $O[\,R^2\,]$ include nonlinearity in $E_{\mu\nu}$. The local Ricci scalar term of the Einstein tensor is replaced here with the nonlocal expression which guarantees in this approximation the stress tensor conservation, but in contrast to anticipations of \cite{serendipity} does not provide the GR phase of the theory. The absence of the GR phase might seem paradoxical because the original action (\ref{action}) obviously reduces to the Einstein one in the limit $\alpha\to 0$. 
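Before turning to the explanation of this paradox, we note that the stress tensor conservation stated after (\ref{mattersource}) is easily verified at the linear order in the curvature. There covariant derivatives commute up to $O[\,R^2\,]$-terms, so that the contracted Bianchi identity $\nabla^\mu R_{\mu\nu}=\frac12\,\nabla_\nu R$ gives
\begin{eqnarray}
\nabla^\mu\Big(R_{\mu\nu}
-\frac12\,\nabla_\mu\nabla_\nu\frac1\Box R\Big)
=\frac12\,\nabla_\nu R
-\frac12\,\Box\,\nabla_\nu\frac1\Box R+O[\,R^2\,]
=O[\,R^2\,], \nonumber
\end{eqnarray}
consistently with $\nabla^\mu T_{\mu\nu}=0$ in this approximation.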
The explanation of this paradox lies in the observation that the transition from (\ref{action}) to the new representation (\ref{newrep}) is based on the identity (\ref{equation1}), which is analytic neither in $\alpha$ nor in the curvature. The source of this property is the constant zero mode of the scalar operator $\Box$ on compact Euclidean spacetimes without a boundary. On such manifolds the left hand side of (\ref{equation1}) is not well defined for $\alpha=0$. The equivalence of the actions (\ref{action}) and (\ref{newrep}) was established only on this class of Euclidean manifolds. The latter, in turn, were motivated in Sect.2 by extending the duality between the Schwinger-Keldysh technique and Euclidean QFT \cite{beyond} to the cosmological (quasi-de Sitter) context. In contrast to this class of manifolds, the representations (\ref{action}) and (\ref{newrep}) are not equivalent in asymptotically flat (AF) spacetime because equations (\ref{equation1})-(\ref{equation5}) do not apply there. First, with zero boundary conditions at infinity the scalar $\Box$ does not have zero modes. Second, due to the natural AF falloff conditions, $R(x)\sim 1/|x|^4$ and $(1/\Box)\delta(x-y)\sim 1/|x-y|^2$, integration by parts in the chain of transformations (\ref{equation2}) gives a finite surface term at infinity $|x-y|\to\infty$. This leads to an alternative equation \begin{eqnarray} \frac1{\Box-\frac\alpha4\,R}\,R\,\Big|_{\,\rm AF} =O\,[\,R\,] \label{equation11} \end{eqnarray} with a nontrivial right hand side analytic in $\alpha$ and tending to zero for a vanishing scalar curvature. This explains why the model (\ref{action}) on an AF background has a good GR limit with nonlinear curvature corrections controlled by a small $\alpha$ \cite{covnonloc,serendipity}.\footnote{A basic example of a physically nontrivial Einstein space is the Schwarzschild-de Sitter background.
A priori it can also generate surface terms in (\ref{equation2}), because its metric is not smooth simultaneously at the black hole and cosmological horizons and has a conical singularity \cite{GibHawkPage}. However, one can show that for any type of boundary conditions at this singularity the relevant surface term vanishes and leaves Eq.(\ref{equation1}) intact. A similar issue remains open in the case of the Schwarzschild-AdS background for which the operator $\hat D$ with $R<0$ is not guaranteed to be free of zero modes and does not provide a well-defined representation (\ref{newrep}) \cite{Solodukhin}. We are grateful to S. Solodukhin for a discussion of this point.} This undermines the utility of the model (\ref{action}) as a possible solution of the dark energy problem and simulation of the dark matter phenomenon advocated in \cite{serendipity}. The absence of the GR limit for $\alpha\to 0$ and in the short distance regime $\nabla\nabla\gg R$ becomes a critical drawback of this model\footnote{In \cite{Solodukhin} this was interpreted as the phase transition between the $R=4\Lambda>0$ and $R=0$ phases -- the absence of a crossover between these phases. We see that in fact this transition has a topological nature.} caused by its infrared behavior -- the presence of a constant zero mode on a compact spacetime. A possible solution of this problem could be a reformulation of the nonlocal action by projecting out this zero mode from the definition of the Green's function in (\ref{action}) (see \cite{Zelnikovetal} for the technique of such a truncation). Another possible way to circumvent this difficulty can be based on the conformal transformation to a new metric \begin{eqnarray} \tilde g_{\mu\nu}[\,g\,]=e^{2\sigma[\,g\,]}\,g_{\mu\nu}, \end{eqnarray} which is assumed to be physical (that is, directly coupled to matter) in contrast to the original metric $g_{\mu\nu}$ playing the auxiliary role.
With the conformal factor function \begin{eqnarray} \sigma[\,g\,]\simeq\frac14\,\frac1\Box R, \end{eqnarray} which is small in the UV limit, $\sigma\ll 1$, but has large second order derivatives\footnote{Note that this expression is assumed to hold only in the formal UV limit of $\nabla\nabla\gg R$, so that the zero mode of $\Box$ should not invalidate it.}, $\nabla\nabla\sigma\sim R$, one can express the covariant Einstein tensor of the new metric $\tilde G_{\mu\nu}$ in terms of the original metric as \begin{eqnarray} &&\tilde G_{\mu\nu}=G_{\mu\nu}+2\big(g_{\mu\nu}\Box\sigma -\nabla_\mu\nabla_\nu\sigma\big)+g_{\mu\nu}\sigma_\alpha^2 +2\sigma_\mu\sigma_\nu\nonumber\\ &&\qquad\qquad=R_{\mu\nu} -\frac12\,\nabla_\mu\nabla_\nu\frac1\Box R+O\left[\,\Big(\nabla\frac1\Box R\Big)^2\right], \quad \sigma_\mu\equiv\nabla_\mu\sigma. \end{eqnarray} We see that $\tilde G_{\mu\nu}$ in this limit in fact reproduces the left hand side of (\ref{mattersource}). Therefore, if we couple matter to the new metric $\tilde g_{\mu\nu}$ in the total action as \begin{eqnarray} S_{\rm total}[\,g,\phi\,]=S[\,g\,]+S_{\rm matter}[\phi,\tilde g[\,g\,]\,], \end{eqnarray} then for $\tilde g_{\mu\nu}$ in the short distance limit we will recover the usual Einstein equations \begin{eqnarray} \tilde R_{\mu\nu} -\frac12\,\tilde g_{\mu\nu}\tilde R =\frac1{M^2_{\rm eff}}\,\tilde T_{\mu\nu}, \quad \tilde T_{\mu\nu}=\frac2{\tilde g^{1/2}}\,\tilde g_{\mu\alpha}\tilde g_{\nu\beta}\,\frac{\delta S_{\rm matter}}{\delta\tilde g_{\alpha\beta}} \label{modeq} \end{eqnarray} where $\tilde T_{\mu\nu}$ is a matter stress tensor in the frame of the $\tilde g_{\mu\nu}$-metric. When deriving this equation we took into account smallness of $\sigma$ and $\delta\sigma/\delta g_{\mu\nu}=O(\sigma)$ in the short distance limit $\nabla\nabla\gg R$. Thus we get a GR phase in the conformally related frame of the theory. 
Unfortunately, however, the magnitude of corrections to the GR behavior is no longer controlled by the small parameter $\alpha$, which makes the application of this idea to realistic cosmology still somewhat questionable. \section{Conclusions} We have derived the equivalent representation (\ref{newrep}) of the action (\ref{action}) with the critical value (\ref{relation}) of the parameter $\alpha$. This representation allows one to extend, in a systematic way, the applications of these models from maximally symmetric to generic Einstein spaces and black hole solutions. Unfortunately, in contrast to AF spacetimes, this model fails to have a general relativistic limit in the cosmological problems for the mean metric field, treated within the Euclidean version of the Schwinger-Keldysh formalism. Nevertheless, the short-distance GR limit can be attained in a special conformal frame (the physical metric minimally coupled to matter) nonlocally related to the original one. This limit, however, cannot be controlled by the smallness of the parameter $\alpha$ that was initially designed in \cite{serendipity} to moderate the effect of nonlocal corrections to the Einstein theory. Thus, direct cosmological implications of the model (\ref{action}) are not likely to be available. However, it might be interesting as a nonlocal generalization of critical gravity theories \cite{critical} which recently became popular as holographic duals of the logarithmic conformal models \cite{GrumillerSachs}. In fact, the relation (\ref{relation}) can be regarded as the analogue of the criticality condition in the local models quadratic in the curvature. It eliminates massive gravitons and gives rise to logarithmic modes \cite{critical} corresponding to the double pole in the propagator.
Zero energy of massless gravitons and positive energy of log modes \cite{critical} give rise to controversial statements on the unitarity of these critical models (see the work \cite{PorratiRoberts} claiming the loss of unitarity due to the lack of orthogonality between the logarithmic and Einstein states). Analogous reasoning might imply that our model is also stable even without imposing the conditions (\ref{Crelation}) and (\ref{arelation}). In fact, the theory (\ref{newrep}) shares a number of properties with the critical gravity models of \cite{critical}. In particular, as advocated in \cite{Solodukhin}, it has Schwarzschild-de Sitter black hole solutions with zero entropy, in parallel with the zero entropy and energy black holes of \cite{critical}. All this makes the class of nonlocal gravity models open for interesting future implications. \section*{Acknowledgements} The authors strongly benefitted from fruitful discussions and correspondence with S. Deser, S. Solodukhin, R. Woodard and A. Zelnikov. The work of A. B. was supported by the RFBR grant No. 11-01-00830. The work of Yu. G. was supported by the RFBR grant No. 11-02-00512 and the grant from FAPEMIG.
2,869,038,155,139
arxiv
\section{Introduction}\label{sec:intro} Recently, we have introduced~\cite{dam} and modified~\cite{ejgta1} two graph-decomposition theorems based on a new graph product, motivated by applications in the context of synchronising periodic real-time processes, in particular in the field of robotics. More on the background, definitions and applications can be found in two conference contributions \cite{boode2014cpa, boode2013cpa}, two journal papers \cite{dam,ejgta} and the thesis of the author~\cite{boodethesis}. We repeat some of the background, definitions and theorems here for convenience, and for supplying the motivation for the research that led to the third, fourth and fifth decomposition theorem that we state and prove in Section~\ref{sec:decomp}. The decomposition of graphs is well known in the literature. For example, a decomposition can be based on the partition of a graph into edge disjoint subgraphs. In our case, the decomposition is based on the contraction of a subset of the vertices of the graph, in such a manner that if $V'\subset V(G)$ is contracted giving $G'$ and $V''\subset V(G)$ is contracted giving $G''$ we have that the vertex-removing synchronised product (VRSP) of $G'$ and $G''$ is isomorphic to $G$. The rest of the paper is organised as follows. In the next sections, we first recall the formal graph definitions (in Section~\ref{sec:term}), the definition of the VRSP as well as the graph-decomposition theorems, together with other relevant terminology and notation (in Section~\ref{Terminology_products}), and the notions of graph isomorphism and contraction to labelled acyclic directed multigraphs (in Section~\ref{Terminology_morphisms}). Finally, we prove (in Section~\ref{sec:decomp}) the third, fourth and fifth decomposition theorem. \section{Terminology and notation}\label{sec:term} We use the textbook of Bondy and Murty \cite{GraphTheory} for terminology and notation we do not specify here. 
Throughout, unless we specify explicitly that we consider other types of graphs, all graphs we consider are {\em labelled acyclic directed multigraphs\/}, i.e., they may have multiple arcs. Such graphs consist of a {\em vertex set\/} $V$ (representing the states of a process), an {\em arc set\/} $A$ (representing the actions, i.e., transitions from one state to another), a set of {\em labels\/} $L$ (in our applications in fact a set of label pairs, each representing a type of action and the worst case duration of its execution), and two mappings. The first mapping $\mu: A\rightarrow V\times V$ is an incidence function that identifies the {\em tail\/} and {\em head\/} of each arc $a\in A$. In particular, $\mu(a)=(u,v)$ means that the arc $a$ is directed from $u\in V$ to $v\in V$, where $tail(a)=u$ and $head(a)=v$. We also call $u$ and $v$ the {\em ends\/} of $a$. The second mapping $\lambda :A\rightarrow L$ assigns a label pair $\lambda(a)=(\ell(a),t(a))$ to each arc $a\in A$, where $\ell(a)$ is a string representing the (name of an) action and $t(a)$ is the {\em weight\/} of the arc $a$. This weight $t(a)$ is a real positive number representing the worst case execution time of the action represented by $\ell(a)$. Let $G$ denote a graph according to the above definition. An arc $a\in A(G)$ is called an {\em in-arc\/} of $v\in V(G)$ if $head(a)=v$, and an {\em out-arc\/} of $v$ if $tail(a)=v$. The {\em in-degree\/} of $v$, denoted by $d^-(v)$, is the number of in-arcs of $v$ in $G$; the {\em out-degree\/} of $v$, denoted by $d^+(v)$, is the number of out-arcs of $v$ in $G$. The subset of $V(G)$ consisting of vertices $v$ with $d^-(v)=0$ is called the {\em source\/} of $G$, and is denoted by $S'(G)$. The subset of $V(G)$ consisting of vertices $v$ with $d^+(v)=0$ is called the {\em sink\/} of $G$, and is denoted by $S''(G)$. For disjoint nonempty sets $X,Y\subseteq V(G)$, $[X,Y]$ denotes the set of arcs of $G$ with one end in $X$ and one end in $Y$. 
If the head of the arc $a\in [X,Y]$ is in $Y$, we call $a$ a {\em forward arc\/} (of $[X,Y]$); otherwise, we call it a {\em backward arc\/}. The acyclicity of $G$ implies a natural ordering of the vertices into disjoint sets, as follows. We define $S^0(G)$ to denote the set of vertices with in-degree 0 in $G$ (so $S^0(G)=S'(G)$), $S^1(G)$ the set of vertices with in-degree 0 in the graph obtained from $G$ by deleting the vertices of $S^0(G)$ and all arcs with tails in $S^0(G)$, and so on, until the final set $S^{t}(G)$ contains the remaining vertices with in-degree 0 and out-degree 0 in the remaining graph. Note that these sets are well-defined since $G$ is acyclic, and also note that $S^{t}(G)\neq S''(G)$, in general. If a vertex $v\in V(G)$ is in the set $S^j(G)$ in the above ordering, we say that $v$ is {\em at $level\, j$\/} in $G$. This ordering implies that each arc $a\in A(G)$ can only have $tail(a)\in S^{j_1}(G)$ and $head(a)\in S^{j_2}(G)$ if $j_1<j_2$. A graph $G$ is called {\em weakly connected\/} if all pairs of distinct vertices $u$ and $v$ of $G$ are connected through a sequence of distinct vertices $u=v_0v_1\ldots v_k=v$ and arcs $a_1a_2\ldots a_k$ of $G$ with $\mu(a_i) = (v_{i-1}, v_i)$ or $(v_{i},v_{i-1})$ for $i=1,2,\ldots ,k$. We are mainly interested in weakly connected graphs, or in the weakly connected components of a graph $G$. If $X\subseteq V(G)$, then the {\em subgraph of $G$ induced by $X$\/}, denoted as $G[X]$, is the graph on vertex set $X$ containing all the arcs of $G$ which have both their ends in $X$ (together with $L$, $\mu$ and $\lambda$ restricted to this subset of the arcs). If $X\subseteq V$ induces a weakly connected subgraph of $G$, but there is no set $Y\subseteq V$ such that $G[Y]$ is weakly connected and $X$ is a proper subset of $Y$, then $G[X]$ is called a {\em weakly connected component\/} of $G$. Also, the set of arcs of $G[X]$ is denoted as $A[X]$. 
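As a concrete illustration (not part of the formal development), the level sets $S^0(G), S^1(G),\ldots$ described above can be computed by repeatedly removing the vertices of in-degree 0; a minimal Python sketch, with hypothetical names and arcs given as (tail, head) pairs:

```python
def level_sets(vertices, arcs):
    """Compute the sets S^0(G), S^1(G), ... of an acyclic directed graph
    by repeatedly removing the vertices of in-degree 0 together with
    their out-arcs."""
    remaining = set(vertices)
    active = list(arcs)  # arcs as (tail, head) pairs
    levels = []
    while remaining:
        indeg = {v: 0 for v in remaining}
        for tail, head in active:
            indeg[head] += 1
        level = {v for v in remaining if indeg[v] == 0}
        if not level:
            raise ValueError("graph is not acyclic")
        levels.append(level)
        remaining -= level
        # delete all arcs whose tail was just removed
        active = [(t, h) for (t, h) in active if t not in level]
    return levels
```

For the path $u\rightarrow v\rightarrow w$ this yields the levels $\{u\},\{v\},\{w\}$; the labels and weights of the arcs play no role in the ordering.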
If $X\subseteq A(G)$, then the {\em subgraph of $G$ arc-induced by $X$\/}, denoted as $G\{X\}$, is the graph on arc set $X$ containing all the vertices of $G$ which are an end of an arc in $X$ (together with $L$, $\mu$ and $\lambda$ restricted to this subset of the arcs). If $X\subseteq A$ arc-induces a weakly connected subgraph of $G$, but there is no set $Y\subseteq A$ such that $G\{Y\}$ is weakly connected and $X$ is a proper subset of $Y$, then $G\{X\}$ is called a {\em weakly connected component\/} of $G$. Throughout the sequel, we omit the words weakly connected, so a component should always be understood as a weakly connected component. In contrast to the notation in the textbook of Bondy and Murty \cite{GraphTheory}, we use $\omega(G)$ to denote the number of components of a graph $G$. We denote the components of $G$ by $G_i$, where $i$ ranges from 1 to $\omega(G)$. In that case, we use $V_i$, $A_i$ and $L_i$ as shorthand notation for $V(G_i)$, $A(G_i)$ and $L(G_i)$, respectively. The mappings $\mu$ and $\lambda$ have natural counterparts restricted to the subsets $A_i\subset A(G)$ that we do not specify explicitly. We use $G=\sum\limits_{i=1}^{\omega(G)} G_i$ to indicate that $G$ is the disjoint union of its components, implicitly defining its components as $G_1$ up to $G_{\omega(G)}$. In particular, $G=G_1$ if and only if $G$ is weakly connected itself. Furthermore, we use $\overundercup{i=1}{\omega(G)} G_i$ to denote the graph with vertex set $\overundercup{i=1}{\omega(G)} V_i$, arc set $\overundercup{i=1}{\omega(G)} A_i$ with the mappings $\mu_i(a_i)=(u_i,v_i)$ and $\lambda_i(a_i)=(\ell(a_i),t(a_i))$ for each arc $a_i\in A_i$. A subgraph $B$ of $G$ according to the above definition is called \emph{bipartite} if there exists a partition of $V(B)$ into two non-empty partite sets $V_1$ and $V_2$ (i.e., $V(B) = V_1 \cup V_2$, $V_1 \cap V_2 = \emptyset$) such that every arc of $B$ has its head vertex and tail vertex in different partite sets. 
Such a graph is called a \emph{bipartite subgraph}, and we denote such a bipartite subgraph of $G$ by $B(V_1, V_2)$. A bipartite graph $B(V_1, V_2)$ is called \emph{complete} if, for every pair $x \in V_1$, $y \in V_2$, there is an arc $a$ with $\mu(a)=(x,y)$ or $\mu(a)=(y,x)$ in $B(V_1, V_2)$. We call $B(V_1, V_2)$ a \emph{trivial} bipartite graph if $|V_1|=|V_2|=1$. A bipartite subgraph $B(V_1,V_2)$ is \emph{semicomplete} if, for every pair $x \in V_1$, $y \in V_2$, an arc $xy$ is in $B(V_1,V_2)$ or an arc $yx$ is in $B(V_1,V_2)$, or for every pair $x \in V_1$, $y \in V_2$, there is no arc $xy$ in $B(V_1,V_2)$ and there is no arc $yx$ in $B(V_1,V_2)$. If necessary, we divide $V$ into mutually disjoint subsets with a cardinality that is a prime number. We denote the union of mutually disjoint subsets $V_1,\ldots, V_{n}$ of $V$ with the same cardinality $p_i$ as $(V^{p_i})^{n}$. Hence, $|(V^{p_i})^{n}|=n\cdot p_i$. In the next two sections, we recall some of the definitions that appeared in~\cite{dam}. \section{Graph products}\label{Terminology_products} Instead of defining products for general pairs of graphs, for notational reasons we find it convenient to define those products for two components $G_i$ and $G_j$ of a disconnected graph $G$. We start with the following analogue of the Cartesian product. The {\em Cartesian product\/} $G_i\Box G_j$ of $G_i$ and $G_j$ is defined as the graph on vertex set $V_{i,j}=V_i\times V_j$, and arc set $A_{i,j}$ consisting of two types of labelled arcs. For each arc $a\in A_i$ with $\mu(a)=(v_i,w_i)$, an {\em arc of type $i$\/} is introduced between tail $(v_i,v_j)\in V_{i,j}$ and head $(w_i,w_j)\in V_{i,j}$ whenever $v_j=w_j$; such an arc receives the label $\lambda(a)$. This implicitly defines parts of the mappings $\mu$ and $\lambda$ for $G_i \Box G_j$. 
Similarly, for each arc $a\in A_j$ with $\mu(a)=(v_j,w_j)$, an {\em arc of type $j$\/} is introduced between tail $(v_i,v_j)\in V_{i,j}$ and head $(w_i,w_j)\in V_{i,j}$ whenever $v_i=w_i$; such an arc receives the label $\lambda(a)$. This completes the definition of $A_{i,j}$ and the mappings $\mu$ and $\lambda$ for $G_i \Box G_j$. So, arcs of type $i$ and $j$ correspond to arcs of $G_i$ and $G_j$, respectively, and have the associated labels. For $k\ge 3$, the Cartesian product $G_1\Box G_2\Box \cdots\Box G_k$ is defined recursively as $((G_1\Box G_2)\Box \cdots )\Box G_k$. This Cartesian product is commutative and associative, as is easily verified and as is well known for the undirected analogue. Since we are particularly interested in synchronising arcs, we modify the Cartesian product $G_i\Box G_j$ according to the existence of synchronising arcs, i.e., pairs of arcs with the same label pair, with one arc in $G_i$ and one arc in $G_j$. The first step in this modification consists of ignoring (in fact deleting) the synchronising arcs while forming arcs in the product, but additionally combining pairs of synchronising arcs of $G_i$ and $G_j$ into one arc, yielding the {\em intermediate product\/} which we denote by $G_i \boxtimes G_j$. An example of the intermediate product is given in Figure~\ref{BiPartiteExampleDecomposition2}. To be more precise, $G_i \boxtimes G_j$ is obtained from $G_i\Box G_j$ by first ignoring all except for the so-called {\em asynchronous\/} arcs, i.e., by only maintaining all arcs $a\in A_{i,j}$ for which $\mu(a)=((v_i,v_j),(w_i,w_j))$, whenever $v_j=w_j$ and $\lambda(a) \notin L_j$, as well as all arcs $a\in A_{i,j}$ for which $\mu(a)=((v_i,v_j),(w_i,w_j))$, whenever $v_i=w_i$ and $\lambda(a)\notin L_i$. Additionally, we add arcs that replace synchronising pairs $a_i\in A_i$ and $a_j\in A_j$ with $\lambda(a_i)=\lambda(a_j)$. 
If $\mu(a_i)=(v_i,w_i)$ and $\mu(a_j)=(v_j,w_j)$, such a pair is replaced by an arc $a_{i,j}$ with $\mu(a_{i,j})=((v_i,v_j),(w_i,w_j))$ and $\lambda(a_{i,j})=\lambda(a_i)$. We call such arcs of $G_i \boxtimes G_j$ {\em synchronous\/} arcs. The second step in this modification consists of removing (from $G_i \boxtimes G_j$) the vertices $(v_i,v_j)\in V_{i,j}$ together with the arcs $a$ with $tail(a)=(v_i,v_j)$ and the arcs $b$ with $head(b)=(v_i,v_j)$ for which $(v_i,v_j)$ has in-degree~$>0$ in $G_i\Box G_j$ but in-degree~$=0$ in $G_i\boxtimes G_j$. The removal of these vertices is then repeated in the newly obtained graph, and so on, until there are no more vertices with in-degree~$=0$ in the current graph that have in-degree~$>0$ in $G_i\Box G_j$, and there are no more vertices with out-degree~$=0$ in the current graph that have out-degree~$>0$ in $G_i\Box G_j$. This finds its motivation in the fact that in our applications, the states that are represented by such vertices can never be reached, so are irrelevant. The resulting graph is called the {\em vertex-removing synchronised product\/} (VRSP for short) of $G_i$ and $G_j$, and denoted as $G_i \boxbackslash G_j$. For $k\ge 3$, the {VRSP} $G_1 \boxbackslash G_2 \boxbackslash \cdots \boxbackslash G_k$ is defined recursively as $((G_1 \boxbackslash G_2) \boxbackslash \cdots ) \boxbackslash G_k$. The VRSP is commutative, but not associative in general, in contrast to the Cartesian product. These properties are not relevant for the decomposition results that follow. However, for these results it is relevant to introduce counterparts of graph isomorphism and graph contraction that apply to our types of graphs. We define these counterparts in the next section. \section{Graph isomorphism and graph contraction}\label{Terminology_morphisms} The isomorphism we introduce in this section is an analogue of a known concept for unlabelled graphs, but involves statements on the labels. 
We assume that two different arcs with the same tail and head have different labels; otherwise, we replace such multiple arcs by one arc with that label, because these arcs represent exactly the same action at the same stage of a process. Formally, an {\em isomorphism\/} from a graph $G$ to a graph $H$ consists of two bijections $\phi : V(G)\rightarrow V(H)$ and $\rho : A(G)\rightarrow A(H)$ such that for all $a \in A(G)$, one has $\mu(a) = (u, v)$ if and only if $\mu(\rho(a))=(\phi(u),\phi(v))$ and $\lambda(a)=\lambda(\rho(a))$. Since we assume that two different arcs with the same tail and head have different labels, however, the bijection $\rho$ is superfluous. The reason is that, if $(\phi,\rho)$ is an isomorphism, then $\rho$ is completely determined by $\phi$ and the labels. In fact, if $(\phi,\rho)$ is an isomorphism and $\mu(a)=(u,v)$ for an arc $a\in A(G)$, then $\rho(a)$ is the unique arc $b\in A(H)$ with $\mu(b)=(\phi(u),\phi(v))$ and label $\lambda(b)=\lambda(a)$. Thus, we may define an isomorphism from $G$ to $H$ as a bijection $\phi : V(G)\rightarrow V(H)$ such that there exists an arc $a \in A(G)$ with $\mu(a)=(u,v)$ if and only if there exists an arc $b\in A(H)$ with $\mu(b)=(\phi(u),\phi(v))$ and $\lambda(b)=\lambda(a)$. An isomorphism from $G$ to $H$ is denoted as $G\cong H$. Next, we define what we mean by contraction. Let $X$ be a nonempty proper subset of $V(G)$, and let $Y=V(G)\setminus X$. By {\em contracting $X$\/} we mean replacing $X$ by a new vertex $\tilde{x}$, deleting all arcs with both ends in $X$, replacing each arc $a\in A(G)$ with $\mu(a)=(u,v)$ for $u\in X$ and $v\in Y$ by an arc $c$ with $\mu(c)=(\tilde{x},v)$ and $\lambda (c)=\lambda(a)$, and replacing each arc $b\in A(G)$ with $\mu(b)=(u,v)$ for $u\in Y$ and $v\in X$ by an arc $d$ with $\mu(d)=(u,\tilde{x})$ and $\lambda (d)=\lambda(b)$. We denote the resulting graph as $G/X$, and say that $G/X$ is the {\em contraction of $G$ with respect to $X$\/}. 
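A hypothetical Python sketch of this contraction (illustration only; arcs are represented as (tail, head, label) triples, and all names are illustrative, not from the original):

```python
def contract(vertices, arcs, X, new_vertex="x_tilde"):
    """Contract the vertex subset X of a labelled directed multigraph:
    X is replaced by new_vertex, arcs with both ends in X are deleted,
    and arcs crossing between X and its complement are redirected to
    new_vertex while keeping their labels."""
    X = set(X)
    new_vertices = (set(vertices) - X) | {new_vertex}
    new_arcs = []
    for tail, head, label in arcs:
        if tail in X and head in X:
            continue  # arcs with both ends in X are deleted
        t = new_vertex if tail in X else tail
        h = new_vertex if head in X else head
        # merge parallel arcs with identical tail, head and label,
        # matching the multigraph convention stated above
        if (t, h, label) not in new_arcs:
            new_arcs.append((t, h, label))
    return new_vertices, new_arcs
```

Note that the sketch merges parallel arcs that end up with the same tail, head and label, in line with the convention that two different arcs with the same ends carry different labels.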
If we have a series of contractions of $G$ with respect to $X_1,\ldots,X_n$, $G/X_1/\ldots/X_n$, we denote the resulting graph as $G/_{i=1}^nX_i$. If $X_i \cap X_j\neq \emptyset$ for $i<j$, then the contraction with respect to $X_i$ replaces the vertices of $X_i$ by $\tilde{x}_i$, and therefore the vertices $X_i \cap X_j$ of $X_j$ are also replaced by $\tilde{x}_i$. Hence, $X_{j}$ is a subset of the vertex set of the graph constructed by $G/X_1/\ldots/X_{j-1}$. Finally, we recall the two decomposition theorems that were introduced in~\cite{dam} and modified in~\cite{ejgta1}. (Note that if we allowed $X_2$ to be empty, then Theorem~\ref{theorem_2} with $X_2=\emptyset$ would be identical to Theorem~\ref{theorem_1}.) \begin{theorem}[\cite{ejgta1}]\label{theorem_1} Let $G$ be a graph, let $X$ be a nonempty proper subset of $V(G)$, and let $Y=V(G)\setminus X$. Suppose that each largest subset of arcs with the same label of $[X,Y]$ arc-induces a complete bipartite subgraph of $G$ and that the arcs of $G/X$ and $G/Y$ corresponding to the arcs of $[X,Y]$ are the only synchronising arcs of $G/X$ and $G/Y$. If $S'(G)\subseteq X$ and $[X,Y]$ has no backward arcs, then $G\cong G/Y\boxbackslash G/X$. \end{theorem} \begin{theorem}[\cite{ejgta1}]\label{theorem_2} Let $G$ be a graph, and let $X_1$, $X_2$ and $Y=V(G)\setminus (X_1\cup X_2)$ be three disjoint nonempty subsets of $V(G)$. Suppose that each largest subset of arcs with the same label of $[X_1,Y]$ arc-induces a complete bipartite subgraph of $G$, each largest subset of arcs with the same label of $[Y,X_2]$ arc-induces a complete bipartite subgraph of $G$, the arcs of $[X_1,X_2]$ have no labels in common with any arc in $[X_1,Y]\cup[Y,X_2]$, and the arcs of $G/X_1/X_2$ and $G/Y$ corresponding to the arcs of $[X_1,Y]\cup [Y,X_2]\cup [X_1,X_2]$ are the only synchronising arcs of $G/X_1/X_2$ and $G/Y$. 
If $S'(G)\subseteq X_1$, and $[X_1,Y]$, $[Y,X_2]$ and $[X_1,X_2]$ have no backward arcs, then $G\cong G/Y\boxbackslash G/X_1/X_2$. \end{theorem} \section{The third, fourth and fifth graph-decomposition theorems}\label{sec:decomp} We assume that the graphs we want to decompose are connected; if not, we can apply our decomposition results to the components separately. We continue by presenting and proving our third decomposition theorem, given in Theorem~\ref{theorem_5}, of which an illustrative example is given in Figure~\ref{BiPartiteExampleDecomposition1}. In the third decomposition theorem, we decompose a graph $G$ that contains semicomplete bipartite subgraphs. More precisely, we decompose a graph $G$ in which each subgraph of $G$ arc-induced by a set of all arcs with the same label in $G$ is a semicomplete bipartite subgraph $B(X_{i},Y_{i})$ of $G$. The decomposition of $G$ consists of decomposing each semicomplete bipartite subgraph $B(X_i,Y_i)$ of $G=\overundercup{i=1}{n}B(X_i,Y_i)$ in such a manner that each $B(X_i,Y_i)$ is decomposed into two semicomplete bipartite graphs. We give a simple example of this decomposition in Figure~\ref{BiPartiteExampleDecomposition}, where we have nine semicomplete bipartite subgraphs $B(X_i,Y_i)$, of which eight are trivial bipartite subgraphs. Because a trivial bipartite subgraph $B(X,Y)$ is idempotent with respect to the VRSP, i.e., $B(X,Y)\cong B(X,Y)\boxbackslash B(X,Y)$, we do not contract these subgraphs in the example depicted in Figure~\ref{BiPartiteExampleDecomposition}. 
\begin{figure}[H] \begin{center} \resizebox{1.0\textwidth}{!}{ \begin{tikzpicture}[->,>=latex,shorten >=0pt,auto,node distance=2.5cm, main node/.style={circle,fill=blue!10,draw, font=\sffamily\Large\bfseries} \tikzset{VertexStyle/.append style= font=\itshape\large, shape = circle,inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \tikzset{EdgeStyle/.append style={thin}} \tikzset{LabelStyle/.append style={font = \itshape}} \SetVertexMath \def1.5{0.0} \def-8.0{2.0} \node at (1.5-10,-8.0-0) {$G$}; \node at (1.5+4.0,-8.0-0) {$G/Y''_1/Y''_2$}; \node at (1.5-7.0,-8.0-11) {$G/X'_1/Y'_1/Y'_2$}; \node at (1.5+2.5,-8.0-11) {$G/X'_1/Y'_1/Y'_2\boxtimes G/Y''_1/Y''_2$}; \node at (1.5+8,-8.0-12.9) {$Z$}; \def1.5{-10.0} \def-8.0{-2.0} \Vertex[x=1.5+2, y=-8.0+4.0,L={u_{0}}]{u_1} \Vertex[x=1.5+1, y=-8.0+2.0,L={u_{1,1}}]{u_2} \Vertex[x=1.5+3, y=-8.0+2.0,L={u_{1,2}}]{u_3} \Vertex[x=1.5-1, y=-8.0-3,L={v_{1,1}}]{v_1} \Vertex[x=1.5+1, y=-8.0-3,L={v_{1,2}}]{v_2} \Vertex[x=1.5+3, y=-8.0-3,L={v_{2,1}}]{v_3} \Vertex[x=1.5+5, y=-8.0-3,L={v_{2,2}}]{v_4} \Vertex[x=1.5+1, y=-8.0-5,L={v_{3}}]{v_5} \Vertex[x=1.5+3, y=-8.0-5,L={v_{4}}]{v_6} \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={in = 105, out = 195,min distance=2cm}](u_1)(v_1) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={in = 15, out = 0,min distance=5cm}](u_1)(v_6) \Edge[label = b, labelstyle={xshift=0pt, yshift=2pt}](u_1)(u_2) \Edge[label = c, labelstyle={xshift=0pt, yshift=2pt}](u_1)(u_3) \Edge[label = d, labelstyle={xshift=-8pt, yshift=32pt}](u_2)(v_1) \Edge[label = d, labelstyle={xshift=-14pt, yshift=22pt}](u_2)(v_2) \Edge[label = d, labelstyle={xshift=-26pt, yshift=19pt}](u_2)(v_3) \Edge[label = d, labelstyle={xshift=-38pt, yshift=47pt}](u_2)(v_4) \Edge[label = d, labelstyle={xshift=-29pt, yshift=-5pt}](u_3)(v_1) \Edge[label = d, labelstyle={xshift=-24pt, yshift=-6pt}](u_3)(v_2) \Edge[label = d, labelstyle={xshift=-19pt, yshift=-8pt}](u_3)(v_3) \Edge[label = d, labelstyle={xshift=-18pt, 
yshift=-6pt}](u_3)(v_4) \Edge[label = e, labelstyle={xshift=0pt, yshift=2pt}](v_1)(v_5) \Edge[label = f, labelstyle={xshift=2pt, yshift=2pt}](v_2)(v_5) \Edge[label = g, labelstyle={xshift=0pt, yshift=2pt}](v_3)(v_6) \Edge[label = h, labelstyle={xshift=0pt, yshift=2pt}](v_4)(v_6) \def1.5{6.0} \def-8.0{-2.0} \Vertex[x=1.5+2, y=-8.0+4.0,L={u_{0}}]{u_1} \Vertex[x=1.5+1, y=-8.0+2.0,L={u_{1,1}}]{u_2} \Vertex[x=1.5+3, y=-8.0+2.0,L={u_{1,2}}]{u_3} \Vertex[x=1.5-0, y=-8.0-3,L={\tilde{y}''_{1}}]{v_1} \Vertex[x=1.5+4, y=-8.0-3,L={\tilde{y}''_{2}}]{v_3} \Vertex[x=1.5+1, y=-8.0-5,L={v_{3}}]{v_5} \Vertex[x=1.5+3, y=-8.0-5,L={v_{4}}]{v_6} \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={in = 105, out = 195,min distance=2cm}](u_1)(v_1) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={in = 15, out = 0,min distance=4cm}](u_1)(v_6) \Edge[label = b, labelstyle={xshift=0pt, yshift=2pt}](u_1)(u_2) \Edge[label = c, labelstyle={xshift=0pt, yshift=2pt}](u_1)(u_3) \Edge[label = d, labelstyle={xshift=-10pt, yshift=32pt}](u_2)(v_1) \Edge[label = d, labelstyle={xshift=-34pt, yshift=19pt}](u_2)(v_3) \Edge[label = d, labelstyle={xshift=-29pt, yshift=-5pt}](u_3)(v_1) \Edge[label = d, labelstyle={xshift=-19pt, yshift=-8pt}](u_3)(v_3) \Edge[label = e, labelstyle={xshift=0pt, yshift=2pt}, style={in = 135, out = -90,min distance=0cm}](v_1)(v_5) \Edge[label = f, labelstyle={xshift=2pt, yshift=2pt}, style={in = 90, out = -30,min distance=0cm}](v_1)(v_5) \Edge[label = g, labelstyle={xshift=0pt, yshift=2pt}, style={in = 90, out = 210,min distance=0cm}](v_3)(v_6) \Edge[label = h, labelstyle={xshift=0pt, yshift=2pt}, style={in = 45, out = -90,min distance=0cm}](v_3)(v_6) \def1.5{-10.0} \def-8.0{-15.0} \Vertex[x=1.5+2, y=-8.0+4.0,L={u_{0}}]{u_1} \Vertex[x=1.5+2, y=-8.0+2.0,L={\tilde{x}'_{1}}]{u_2} \Vertex[x=1.5-0, y=-8.0-3,L={\tilde{y}'_{1}}]{v_1} \Vertex[x=1.5+4, y=-8.0-3,L={\tilde{y}'_{2}}]{v_3} \Vertex[x=1.5+1, y=-8.0-5,L={v_{3}}]{v_5} \Vertex[x=1.5+3, y=-8.0-5,L={v_{4}}]{v_6} 
\Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={in = 105, out = 195,min distance=2cm}](u_1)(v_1) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={in = 15, out = 0,min distance=4cm}](u_1)(v_6) \Edge[label = b, labelstyle={xshift=0pt, yshift=2pt}, style={in = 120, out = -120,min distance=0cm}](u_1)(u_2) \Edge[label = c, labelstyle={xshift=0pt, yshift=2pt}, style={in = 60, out = -60,min distance=0cm}](u_1)(u_2) \Edge[label = d, labelstyle={xshift=-10pt, yshift=32pt}](u_2)(v_1) \Edge[label = d, labelstyle={xshift=-34pt, yshift=19pt}](u_2)(v_3) \Edge[label = e, labelstyle={xshift=0pt, yshift=2pt}, style={in = 135, out = -90,min distance=0cm}](v_1)(v_5) \Edge[label = g, labelstyle={xshift=-28pt, yshift=8pt}, style={in = 90, out = -30,min distance=0cm}](v_1)(v_6) \Edge[label = f, labelstyle={xshift=20pt, yshift=24pt}, style={in = 90, out = 210,min distance=0cm}](v_3)(v_5) \Edge[label = h, labelstyle={xshift=0pt, yshift=2pt}, style={in = 45, out = -90,min distance=0cm}](v_3)(v_6) \tikzset{VertexStyle/.append style= font=\itshape\large, shape = rounded rectangle, inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \def1.5{10.0} \def-8.0{-13.0} \Vertex[x=1.5+0, y=-8.0+2.0,L={(u_0,u_0)}]{u0u0} \Vertex[x=1.5+3-14, y=-8.0+2.0,L={(u_0,u_{1,1})}]{u0u11} \Vertex[x=1.5+6-14, y=-8.0+2.0,L={(u_0,u_{1,2})}]{u0u12} \Vertex[x=1.5+9-14, y=-8.0+2.0,L={(u_0,\tilde{y}''_{1})}]{u0y1} \Vertex[x=1.5+12-8, y=-8.0+2.0,L={(u_0,\tilde{y}''_{2})}]{u0y2} \Vertex[x=1.5+15-8, y=-8.0+2.0,L={(u_0,v_{3})}]{u0v3} \Vertex[x=1.5+18-8, y=-8.0+2.0,L={(u_0,v_{4})}]{u0v4} \def1.5{6.0} \def-8.0{-15.0} \Vertex[x=1.5+0-7, y=-8.0+2.0,L={(\tilde{x}'_{1},u_0)}]{x1u0} \Vertex[x=1.5+3, y=-8.0+2.0,L={(\tilde{x}'_{1},u_{1,1})}]{x1u11} \Vertex[x=1.5+6, y=-8.0+2.0,L={(\tilde{x}'_{1},u_{1,2})}]{x1u12} \Vertex[x=1.5+9-13, y=-8.0+2.0,L={(\tilde{x}'_{1},\tilde{y}''_{1})}]{x1y1} \Vertex[x=1.5+12-13, y=-8.0+2.0,L={(\tilde{x}'_{1},\tilde{y}''_{2})}]{x1y2} \Vertex[x=1.5+15-4, 
y=-8.0+2.0,L={(\tilde{x}'_{1},v_{3})}]{x1v3} \Vertex[x=1.5+18-4, y=-8.0+2.0,L={(\tilde{x}'_{1},v_{4})}]{x1v4} \def1.5{-1.0} \def-8.0{-17.0} \Vertex[x=1.5+0, y=-8.0+2.0,L={(\tilde{y}'_{1},u_0)}]{y1u0} \Vertex[x=1.5+3, y=-8.0+2.0,L={(\tilde{y}'_{1},u_{1,1})}]{y1u11} \Vertex[x=1.5+6, y=-8.0+2.0,L={(\tilde{y}'_{1},u_{1,2})}]{y1u12} \Vertex[x=1.5+8.0, y=-8.0+1.0,L={(\tilde{y}'_{1},\tilde{y}''_{1})}]{y1y1} \Vertex[x=1.5+13.0, y=-8.0+1.0,L={(\tilde{y}'_{1},\tilde{y}''_{2})}]{y1y2} \Vertex[x=1.5+18+0, y=-8.0+2.0,L={(\tilde{y}'_{1},v_{3})}]{y1v3} \Vertex[x=1.5+21+0, y=-8.0+2.0,L={(\tilde{y}'_{1},v_{4})}]{y1v4} \def1.5{0.5} \def-8.0{-19.0} \Vertex[x=1.5+-1.5, y=-8.0+2.0,L={(\tilde{y}'_{2},u_0)}]{y2u0} \Vertex[x=1.5+1.5, y=-8.0+2.0,L={(\tilde{y}'_{2},u_{1,1})}]{y2u11} \Vertex[x=1.5+4.5, y=-8.0+2.0,L={(\tilde{y}'_{2},u_{1,2})}]{y2u12} \Vertex[x=1.5+9, y=-8.0+3.0,L={(\tilde{y}'_{2},\tilde{y}''_{1})}]{y2y1} \Vertex[x=1.5+14, y=-8.0+3.0,L={(\tilde{y}'_{2},\tilde{y}''_{2})}]{y2y2} \Vertex[x=1.5+16.5+0, y=-8.0+2.0,L={(\tilde{y}'_{2},v_{3})}]{y2v3} \Vertex[x=1.5+19.5+0, y=-8.0+2.0,L={(\tilde{y}'_{2},v_{4})}]{y2v4} \def1.5{-1.0} \def-8.0{-21.0} \Vertex[x=1.5+0, y=-8.0+2.0,L={(v_3,u_0)}]{v3u0} \Vertex[x=1.5+3, y=-8.0+2.0,L={(v_3,u_{1,1})}]{v3u11} \Vertex[x=1.5+6, y=-8.0+2.0,L={(v_3,u_{1,2})}]{v3u12} \Vertex[x=1.5+9, y=-8.0+2.0,L={(v_3,\tilde{y}''_{1})}]{v3y1} \Vertex[x=1.5+15, y=-8.0+2.0,L={(v_3,\tilde{y}''_{2})}]{v3y2} \Vertex[x=1.5+9, y=-8.0+3,L={(v_3,v_{3})}]{v3v3} \Vertex[x=1.5+21, y=-8.0+2,L={(v_3,v_{4})}]{v3v4} \def1.5{-1.0} \def-8.0{-23.0} \Vertex[x=1.5+0, y=-8.0+2.0,L={(v_4,u_0)}]{v4u0} \Vertex[x=1.5+3, y=-8.0+2.0,L={(v_4,u_{1,1})}]{v4u11} \Vertex[x=1.5+6, y=-8.0+2.0,L={(v_4,u_{1,2})}]{v4u12} \Vertex[x=1.5+9, y=-8.0+2.0,L={(v_4,\tilde{y}''_{1})}]{v4y1} \Vertex[x=1.5+15, y=-8.0+2.0,L={(v_4,\tilde{y}''_{2})}]{v4y2} \Vertex[x=1.5+18, y=-8.0+2.0,L={(v_4,v_{3})}]{v4v3} \Vertex[x=1.5+14.5, y=-8.0+5.0,L={(v_4,v_{4})}]{v4v4} \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, 
style={in = 120, out = 185,min distance=2cm}](u0u0)(y1y1) \Edge[label = b, labelstyle={xshift=0pt, yshift=2pt}](u0u0)(x1u11) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={in = 0, out = -15,min distance=3.5cm}](u0u0)(v4v4) \Edge[label = c, labelstyle={xshift=0pt, yshift=2pt}](u0u0)(x1u12) \Edge[label = d, labelstyle={xshift=-14pt, yshift=14pt}](x1u11)(y1y1) \Edge[label = d, labelstyle={xshift=-16pt, yshift=0pt}](x1u11)(y2y1) \Edge[label = d, labelstyle={xshift=-36pt, yshift=-21pt}](x1u12)(y1y1) \Edge[label = d, labelstyle={xshift=-18pt, yshift=-20pt}](x1u12)(y2y1) \Edge[label = d, labelstyle={xshift=-50pt, yshift=26pt}](x1u11)(y2y2) \Edge[label = d, labelstyle={xshift=-36pt, yshift=8pt}](x1u11)(y1y2) \Edge[label = d, labelstyle={xshift=0pt, yshift=0pt}](x1u12)(y2y2) \Edge[label = d, labelstyle={xshift=0pt, yshift=-20pt}](x1u12)(y1y2) \Edge[label = e, labelstyle={xshift=0pt, yshift=2pt}](y1y1)(v3v3) \Edge[label = f, labelstyle={xshift=2pt, yshift=2pt}](y2y1)(v3v3) \Edge[label = g, labelstyle={xshift=0pt, yshift=2pt}](y1y2)(v4v4) \Edge[label = h, labelstyle={xshift=0pt, yshift=2pt}](y2y2)(v4v4) \def1.5{6} \def-8.0{-12.0} \draw[circle, -,dashed, very thick,rounded corners=8pt] (1.5+0.1,-8.0+1.0)--(1.5+0.1,-8.0+1.8)-- (1.5+6.0,-8.0+1.8)-- (1.5+10.0,-8.0-2.7)--(1.5+10.0,-8.0-6.5)-- (1.5+1.5,-8.0-6.5)-- (1.5+0.1,-8.0-6.5) -- (1.5+0.1,-8.0-0.5)--(1.5+0.1,-8.0+1.0); \def1.5{-3} \def-8.0{-11.5} \draw[circle, -,dotted, very thick,rounded corners=8pt] (1.5+0.1,-8.0+1.0)--(1.5+0.1,-8.0+1.8)-- (1.5+24.0,-8.0+1.8)-- (1.5+24.0,-8.0-10.5)-- (1.5+1.5,-8.0-10.5)-- (1.5+0.1,-8.0-10.5) -- (1.5+0.1,-8.0-0.5)--(1.5+0.1,-8.0+1.0); \end{tikzpicture} } \end{center} \caption{Decomposition of $G\cong G/X'_1/Y'_1/Y'_2\boxbackslash G/Y''_1/Y''_2$, $X'_1=\{u_{1,1},u_{1,2}\},Y'_1=\{v_{1,1},v_{2,1}\},Y'_2=\{v_{1,2},v_{2,2}\},Y''_1=\{v_{1,1},v_{1,2}\},Y''_2=\{v_{2,1},v_{2,2}\}$. 
The set $Z$ from the proof of Theorem~\ref{theorem_5} and the graph isomorphic to $G$ induced by $Z$ in $G/X'_1/Y'_1/Y'_2\boxtimes G/Y''_1/Y''_2$ are indicated within the dotted region.} \label{BiPartiteExampleDecomposition} \end{figure} To decompose a graph $G$ with respect to the decomposition of its non-trivial semicomplete bipartite subgraphs in which all arcs have the same label, we have to decompose each of these non-trivial semicomplete bipartite subgraphs of $G$. This is obvious, because if one of these subgraphs is not decomposed, say $B(X_1,X_2)$, the VRSP of the two decompositions $H$ and $I$ of $G$ will contain $B(X_1,X_2)\boxbackslash B(X_1,X_2)$. This subgraph has $|X_1|^2+|X_2|^2$ vertices in $H \boxbackslash I$ and therefore $G\ncong H \boxbackslash I$ for $|X_1|>1$ or $|X_2|>1$. As mentioned before, if a subgraph arc-induced by the set of all arcs with the same label in $G$ is a trivial bipartite subgraph $B(X_1,X_2)$, then this subgraph does not have to be decomposed, because $B(X_1,X_2)\boxbackslash B(X_1,X_2)\cong B(X_1,X_2)$. But in the proof of Theorem~\ref{theorem_5}, we decompose all semicomplete bipartite subgraphs of $G$. For reasons we will clarify in Theorems~\ref{theorem_5} and~\ref{theorem_6}, we introduce the \emph{matrix graph}, the \emph{bipartite matrix graph} and the \emph{Cartesian matrix graph}. We define $M$ as a two-dimensional index set with pairs of indices that are numbered in the following manner: $M=\{(i,j)\mid i\in I= \{1,\ldots,m\}, j\in J= \{1,\ldots,n\}\}$. A graph $G$ of which the vertices are numbered according to the index set $M$ has sets of rows $R_i=\{v_{(i,j)}\mid j\in J\},i\in I$, and sets of columns $C_j=\{v_{(i,j)}\mid i\in I\},j\in J$. For brevity, in the sequel we denote the vertices $v_{(i,j)}$ as $v_{i,j}$. For a subgraph $G[X]$ of a graph $G$, we call $X$ a \emph{grid} of vertices when the vertices of $X$ are numbered in the following manner. 
The vertices $v_{i,j}\in X$ are numbered such that $i\in I_X\subseteq I$ and $j\in J_X\subseteq J$, $|X|=|I_X|\cdot|J_X|=m_1\cdot n_1$, $1\leq m_1\leq m,1\leq n_1\leq n$. Hence, $X=\{v_{i,j}\mid i\in I_{X}\subseteq I,j\in J_{X}\subseteq J\}$, $|I_{X}|=m_1,|J_{X}|=n_1$, with rows $X'_{i}\subseteq R_i,X'_{i}=\{v_{i,j}|j\in J_{X}\},i\in I_{X},$ and with columns $X''_{j}\subseteq C_j$, $X''_{j}=\{v_{i,j}|i\in I_{X}\},j\in J_{X}$. In the example given in Figure~\ref{BiPartiteExampleDecomposition1}, each of the sets $X_1,\ldots,X_4$ is a grid. A \emph{matrix graph} $G$ is a graph $G$ for which the vertices are numbered according to a subset $M'$ of the index set $M$. A \emph{bipartite matrix graph} $G$ is a matrix graph $G$ consisting solely of $x$ bipartite subgraphs, where each bipartite subgraph has arcs with identical labels and no two such bipartite subgraphs share a label. Therefore, we require, firstly, that the bipartite matrix graph $G$ is a matrix graph consisting of $x$ bipartite subgraphs $B(X_i,X_j)$ and $z$ not necessarily disjoint sets $X_k$ where $X_k=X_i$ or $X_k=X_j$ and $z\leq 2x$. Secondly, all subgraphs of $G$ arc-induced by a set of all arcs with identical labels are semicomplete bipartite subgraphs $B(X_i,X_j)$ of $G$, all $X_i$ and $X_j$ are grids of vertices, $i,j\in\chi=\{1,\ldots,z\},i\neq j$, and $[X_i,X_j]$ contains only forward arcs or $[X_i,X_j]$ contains only backward arcs. Thirdly, we require that whenever a row $X'_{k,x}$ of the set $X_k$ and a row $X'_{l,y}$ of the set $X_l$ share a vertex $v_{i,j}$, then $X'_{k,x}\subseteq R_i$ and $X'_{l,y}\subseteq R_i$, $k,l\in\chi$. Fourthly, let $R'_i\subseteq V(G)\cap R_i$. Then for any division of $R'_i$ into the sets $R'_{i_1}$ and $R'_{i_2}$, $R'_i=R'_{i_1}\cup R'_{i_2}$, there is always a row $X'_{k,x}\subseteq R'_{i_1}$ and a row $X'_{l,y}\subseteq R'_{i_2}$ with $X'_{k,x}\cap X'_{l,y}\neq \emptyset$. 
Fifthly, we require that whenever a column $X''_{k,x}$ of the set $X_k$ and a column $X''_{l,y}$ of the set $X_l$ share a vertex $v_{i,j}$, then $X''_{k,x}\subseteq C_j$ and $X''_{l,y}\subseteq C_j$, $k,l\in\chi$. Sixthly, let $C'_j\subseteq V(G)\cap C_j$. Then for any division of $C'_j$ into the sets $C'_{j_1}$ and $C'_{j_2}$, $C'_j=C'_{j_1}\cup C'_{j_2}$, there is always a column $X''_{k,x}\subseteq C'_{j_1}$ and a column $X''_{l,y}\subseteq C'_{j_2}$ with $X''_{k,x}\cap X''_{l,y}\neq \emptyset$. We call a graph $G$ that fulfils these six requirements a \emph{bipartite matrix graph}. The purpose of the bipartite matrix graph is that after the decomposition of any subgraph $B(X_i,X_j)$ of the bipartite matrix graph $G$ into graphs $B(X'_i,X'_j)$ and $B(X''_i,X''_j)$ with $B(X_i,X_j)\cong B(X'_i,X'_j)\boxbackslash B(X''_i,X''_j)$ by Theorem~\ref{theorem_5}, we have that all vertices $v_{i,x}\in V(B(X_i,X_j))$ are replaced by the vertex $\tilde{x}_i\in V(B(X'_i,X'_j))$ and all vertices $v_{x,j}\in V(B(X_i,X_j))$ are replaced by the vertex $\tilde{x}_j\in V(B(X''_i,X''_j))$. With the third and fourth requirement, we ensure that all vertices in the rows of $R_i$ must have the same first index and vertices not in the rows of $R_i$ have a different first index. With the fifth and sixth requirement, we ensure that all vertices in the columns of $C_j$ must have the same second index and vertices not in the columns of $C_j$ have a different second index. A \emph{Cartesian matrix graph} $G$ is a matrix graph with rows $R_i, i\in I_i\subseteq I$ and columns $C_j, j\in J_j\subseteq J$, for which $G[R_x]\cong G[R_y],x,y\in I_i$, $G[C_{x}]\cong G[C_{y}],x,y\in J_j$, the arcs of $G[R_i]$ and the arcs of $G[C_j]$ have no labels in common, and if $a$ is an arc of $A(G)$ with $\mu(a)=(u,v)$, then $u,v\in R_i$ or $u,v\in C_j$. 
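As a small illustration of the row and column notation (a sketch for intuition only, not part of the formal development; the function name is hypothetical), the sets $R_i$ and $C_j$ of a matrix graph can be extracted directly from the index pairs of its vertices:

```python
def rows_and_columns(index_pairs):
    """Group matrix-graph vertices v_{i,j}, given as (i, j) index pairs,
    into rows R_i (fixed first index) and columns C_j (fixed second
    index)."""
    rows, cols = {}, {}
    for (i, j) in index_pairs:
        rows.setdefault(i, set()).add((i, j))
        cols.setdefault(j, set()).add((i, j))
    return rows, cols
```

For a matrix graph numbered by a proper subset $M'$ of $M$, some rows and columns are simply shorter, which is why the grids $X_k$ are only required to lie inside rows $R_i$ and columns $C_j$.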
In Figure~\ref{BiPartiteExampleDecomposition1}, we have depicted the vertex sets $X_i$ of the bipartite matrix graph $G$ comprising the bipartite semicomplete subgraphs $B(X_i,X_4)$ for $i=1,\ldots,3,$ where the labels of the arcs of $B(X_i,X_4)$ are the same and the labels of the arcs of $B(X_i,X_4)$ and $B(X_j,X_4), i\neq j$, are different. All vertex sets $X_i$ are grids. The arcs connected to the dotted box, the dash-dotted boxes and the solid boxes are connected to the vertices these boxes contain. For example, the solid arcs with label $c$ connected to the boxes of vertex set $X_1$ represent the arc set $\{u_{2,2}u_{7,7},u_{2,4}u_{7,7},u_{2,5}u_{7,7},u_{5,2}u_{7,7},u_{5,4}u_{7,7},u_{5,5}u_{7,7},u_{6,2}u_{7,7},u_{6,4}u_{7,7}$, $u_{6,5}u_{7,7}\}$ of arcs with label $c$. Furthermore, due to the contraction of the second row of $X_2$, the vertices $u_{2,1},\ldots,u_{2,4}$ are replaced by $\tilde{x}'_2$, which gives a new first row of $X_1$ consisting of the vertices $\tilde{x}'_2$ and $u_{2,5}$. Later on, by contraction of the first row of $X_1$, the vertices $\tilde{x}'_2$ and $u_{2,5}$ are replaced by $\tilde{x}'_2$. In Figure~\ref{BiPartiteExampleDecomposition2}, we have depicted the graph $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}$ $/X'_{4,1}\boxtimes G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$, which is isomorphic to the graph $G$ of Figure~\ref{BiPartiteExampleDecomposition1} after deletion of the vertices with in-degree zero in $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}$ $/X'_{4,1}\boxtimes G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$ and in-degree greater than zero in $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}$ $/X'_{4,1}\Box G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$.
Furthermore, because the pairwise intersections of the grids $X_1$, $X_2$ and $X_3$ are grids, the graph $G$ is isomorphic to the graph $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}$ $/_{i=1}^4X'_{3,i}/X'_{4,1}\boxbackslash G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$, which we will prove in Theorem~\ref{theorem_5}. Due to the numbering scheme of the vertices in $V(G)$, we have that $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}/X'_{4,1}\boxbackslash G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}\cong G/_{i=1}^7R_{i}\boxbackslash G/_{i=1}^7C_{i}\cong G$. In Theorem~\ref{theorem_5}, we use the notation with the sets $X_i$, and in Theorem~\ref{theorem_6}, we use the notation with the rows $R_i$ and the columns $C_i$. \begin{figure}[H] \begin{center} \resizebox{1\textwidth}{!}{ \begin{tikzpicture}[->,>=latex,shorten >=0pt,auto,node distance=2.5cm, main node/.style={circle,fill=blue!10,draw, font=\sffamily\Large\bfseries} \tikzset{VertexStyle/.append style= font=\itshape\large, shape = circle,inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \tikzset{EdgeStyle/.append style={thin}} \tikzset{LabelStyle/.append style={font = \itshape}} \SetVertexMath \clip (-2,2) rectangle (18, -19.0); \def1.5{0.0} \def-8.0{2.0} \node at (1.5-1.0,-8.0-0.75) {$G$}; \def1.5{0.0} \def-8.0{0.0} \Vertex[x=1.5+0, y=-8.0+0,L={u_{1,1}}]{u_00} \Vertex[x=1.5+2, y=-8.0+0,L={u_{1,2}}]{u_01} \Vertex[x=1.5+4, y=-8.0+0,L={u_{1,3}}]{u_02} \Vertex[x=1.5+6, y=-8.0+0,L={u_{1,4}}]{u_03} \def-8.0{-2.0} \Vertex[x=1.5+0, y=-8.0+0,L={u_{2,1}}]{u_10} \Vertex[x=1.5+2, y=-8.0+0,L={u_{2,2}}]{u_11} \Vertex[x=1.5+4, y=-8.0+0,L={u_{2,3}}]{u_12} \Vertex[x=1.5+6, y=-8.0+0,L={u_{2,4}}]{u_13} \Vertex[x=1.5+8, y=-8.0+0,L={u_{2,5}}]{u_14} \def-8.0{-4.0} \Vertex[x=1.5+0, y=-8.0+0,L={u_{3,1}}]{u_20} \Vertex[x=1.5+2, y=-8.0+0,L={u_{3,2}}]{u_21} \Vertex[x=1.5+4, y=-8.0+0,L={u_{3,3}}]{u_22} \Vertex[x=1.5+6, y=-8.0+0,L={u_{3,4}}]{u_23} \def-8.0{-6.0} \Vertex[x=1.5+0,
y=-8.0+0,L={u_{4,1}}]{u_30} \Vertex[x=1.5+2, y=-8.0+0,L={u_{4,2}}]{u_31} \Vertex[x=1.5+4, y=-8.0+0,L={u_{4,3}}]{u_32} \Vertex[x=1.5+6, y=-8.0+0,L={u_{4,4}}]{u_33} \Vertex[x=1.5+10, y=-8.0+0,L={u_{4,6}}]{u_35} \def-8.0{-8.0} \Vertex[x=1.5+0, y=-8.0+0,L={u_{5,1}}]{u_40} \Vertex[x=1.5+2, y=-8.0+0,L={u_{5,2}}]{u_41} \Vertex[x=1.5+4, y=-8.0+0,L={u_{5,3}}]{u_42} \Vertex[x=1.5+6, y=-8.0+0,L={u_{5,4}}]{u_43} \Vertex[x=1.5+8, y=-8.0+0,L={u_{5,5}}]{u_44} \Vertex[x=1.5+10, y=-8.0+0,L={u_{5,6}}]{u_45} \def-8.0{-10.0} \Vertex[x=1.5+2, y=-8.0+0,L={u_{6,2}}]{u_51} \Vertex[x=1.5+4, y=-8.0+0,L={u_{6,3}}]{u_52} \Vertex[x=1.5+6, y=-8.0+0,L={u_{6,4}}]{u_53} \Vertex[x=1.5+8, y=-8.0+0,L={u_{6,5}}]{u_54} \Vertex[x=1.5+10, y=-8.0+0,L={u_{6,6}}]{u_55} \def-8.0{-13.0-2} \Vertex[x=1.5+0, y=-8.0+0,L={\tilde{x}''_{1}}]{x_0} \Vertex[x=1.5+2, y=-8.0+0,L={\tilde{x}''_{2}}]{x_1} \Vertex[x=1.5+4, y=-8.0+0,L={\tilde{x}''_{3}}]{x_2} \Vertex[x=1.5+6, y=-8.0+0,L={\tilde{x}''_{4}}]{x_3} \Vertex[x=1.5+8, y=-8.0+0,L={\tilde{x}''_{5}}]{x_4} \Vertex[x=1.5+10, y=-8.0+0,L={\tilde{x}''_{6}}]{x_5} \def1.5{2} \def-8.0{-10.0} \Vertex[x=1.5+12, y=-8.0+0,L={\tilde{x}'_{6}}]{y_5} \Vertex[x=1.5+12, y=-8.0+2,L={\tilde{x}'_{5}}]{y_4} \Vertex[x=1.5+12, y=-8.0+4,L={\tilde{x}'_{4}}]{y_3} \Vertex[x=1.5+12, y=-8.0+6,L={\tilde{x}'_{3}}]{y_2} \Vertex[x=1.5+12, y=-8.0+8,L={\tilde{x}'_{2}}]{y_1} \Vertex[x=1.5+12, y=-8.0+10,L={\tilde{x}'_{1}}]{y_0} \def1.5{-4} \def-8.0{-6.0} \Vertex[x=1.5+16, y=-8.0-6,L={u_{7,7}}]{v_0} \def1.5{16-2} \def-8.0{-5.0-7} \Vertex[x=1.5, y=-8.0,L={\tilde{x}_{7}}]{y'_0} \def1.5{5+7} \def-8.0{-15.0+2} \Vertex[x=1.5, y=-8.0-2,L={\tilde{x}_{7}}]{y'_1} \def1.5{-4} \def-8.0{-6.0} \Edge[label = a, labelstyle={xshift=0pt, yshift=-2pt}, style={very thick, dashdotted, in = 60, out = 30,min distance=5cm}](1.5+5.2+4.75,-8.0+1.0)(v_0) \Edge[label = a, labelstyle={xshift=0pt, yshift=-2pt}, style={very thick, dashdotted, in = 90, out = 0,min distance=2cm}](1.5+13.2+1.75,-8.0-0.5)(v_0) \Edge[label = b, 
labelstyle={xshift=0pt, yshift=-0pt}, style={thick,dotted, in = 210, out = 270,min distance=5.5cm}](1.5+4.4,-8.0-3.1)(v_0) \Edge[label = c, labelstyle={xshift=0+4pt, yshift=-4pt}, style={thick, in = 60, out = 20,min distance=8cm}](1.5+5.5+1.1,-8.0+4.8)(v_0) \Edge[label = c, labelstyle={xshift=-20pt, yshift=12pt}, style={thick, in = 75, out = 0,min distance=1cm}](1.5+9.5+3.4,-8.0+4.8-0.4)(v_0) \Edge[label = c, labelstyle={xshift=8pt, yshift=+6pt}, style={thick, in = 195, out = 300,min distance=-2cm}](1.5+5.5+0.5,-8.0+4.8-9.6-0.2)(v_0) \Edge[label = c, labelstyle={xshift=-28pt, yshift=10pt}, style={thick, in = 180, out = 300,min distance=-2cm}](1.5+9.5+2,-8.0+4.8-9.6-0.2)(v_0) \def1.5{-4} \def-8.0{-6.0} \Edge[label = \{b\}, labelstyle={xshift=-185pt, yshift=70pt}, style={in = 270, out = 270,min distance=2cm}](x_0)(y'_1) \Edge[label = \{a\textsf{,}b\textsf{,}c\}, labelstyle={xshift=-166pt, yshift=53pt}, style={in = 270, out = 270,min distance=2cm}](x_1)(y'_1) \Edge[label = \{a\textsf{,}b\}, labelstyle={xshift=-130pt, yshift=37pt}, style={in = 270, out = 270}](x_2)(y'_1) \Edge[label = \{a\textsf{,}b\textsf{,}c\}, labelstyle={xshift=-105pt, yshift=20pt}, style={in = 270, out = 270}](x_3)(y'_1) \Edge[label = \{c\}, labelstyle={xshift=-68pt, yshift=13pt}, style={in = 270, out = 270,min distance=2cm}](x_4)(y'_1) \Edge[label = \{a\}, labelstyle={xshift=-40pt, yshift=-8pt}, style={in = 270, out = 270,min distance=1cm}](x_5)(y'_1) \def1.5{-4} \def-8.0{-6.0} \Edge[label = \{b\}, labelstyle={xshift=-85pt, yshift=180pt}, style={in = 00, out = 0,min distance=2cm}](y_0)(y'_0) \Edge[label = \{b\textsf{,}c\}, labelstyle={xshift=-70pt, yshift=160pt}, style={in = 0, out = 0,min distance=2cm}](y_1)(y'_0) \Edge[label = \{b\}, labelstyle={xshift=-50pt, yshift=130pt}, style={in = 0, out = 0}](y_2)(y'_0) \Edge[label = \{a\textsf{,}b\}, labelstyle={xshift=-35pt, yshift=100pt}, style={in = 0, out = 0}](y_3)(y'_0) \Edge[label = \{a\textsf{,}b\textsf{,}c\}, labelstyle={xshift=-40pt, 
yshift=67pt}, style={in = 0, out = 0,min distance=2cm}](y_4)(y'_0) \Edge[label = \{a\textsf{,}c\}, labelstyle={xshift=-20pt, yshift=45pt}, style={in = 0, out = 0,min distance=1cm}](y_5)(y'_0) \def1.5{0} \def-8.0{-1.0} \draw[circle, -,dotted, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+6.4,-8.0+2) --(1.5+6.9,-8.0+1.5) -- (1.5+6.9,-8.0-7.5)-- (1.5+6.4,-8.0-8.0) -- (1.5-0.3,-8.0-8.0) -- (1.5-0.8,-8.0-7.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \draw[circle, -,dotted, very thick,rounded corners=8pt] (1.5+0.0,-8.0-8) -- (1.5-1,-8.0-9.0)--(1.5-1.5,-8.0-9.0); \node at (1.5-1.8,-8.0-9) {$X_2$}; \def1.5{1.9} \def-8.0{-5.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.4,-8.0+2) -- (1.5+6,-8.0-0.0)--(1.5+6.5,-8.0-0.0); \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.4,-8.0-2) -- (1.5+6,-8.0-0.0)--(1.5+6.5,-8.0-0.0); \draw[circle, -, very thick,rounded corners=8pt] (1.5+5.4,-8.0+2) -- (1.5+6,-8.0-0.0)--(1.5+6.5,-8.0-0.0); \draw[circle, -, very thick,rounded corners=8pt] (1.5+5.4,-8.0-2) -- (1.5+6,-8.0-0.0)--(1.5+6.5,-8.0-0.0); \node at (1.5+6.9,-8.0-0) {$X_1$}; \def1.5{2} \def-8.0{-3.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+0.4,-8.0+2) --(1.5+0.9,-8.0+1.5) -- (1.5+0.9,-8.0+0.5)-- (1.5+0.4,-8.0-0.0) -- (1.5-0.3,-8.0-0.0) -- (1.5-0.8,-8.0+0.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{6} \def-8.0{-3.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+2.4,-8.0+2) --(1.5+2.9,-8.0+1.5) -- (1.5+2.9,-8.0+0.5)-- (1.5+2.4,-8.0-0.0) -- (1.5-0.3,-8.0-0.0) -- (1.5-0.8,-8.0+0.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{2} \def-8.0{-9.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+0.4,-8.0+2) --(1.5+0.9,-8.0+1.5) -- (1.5+0.9,-8.0-1.5)-- (1.5+0.4,-8.0-2.0) -- (1.5-0.3,-8.0-2.0) -- (1.5-0.8,-8.0-1.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{6} \def-8.0{-9.0} \draw[circle, -, 
very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+2.4,-8.0+2) --(1.5+2.9,-8.0+1.5) -- (1.5+2.9,-8.0-1.5)-- (1.5+2.4,-8.0-2.0) -- (1.5-0.3,-8.0-2.0) -- (1.5-0.8,-8.0-1.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{1.9} \def-8.0{-7.0} \draw[circle, -,dashdotted, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+4.5,-8.0+2) --(1.5+5,-8.0+1.5) -- (1.5+5,-8.0-4)-- (1.5+4.5,-8.0-4.5) -- (1.5-0.3,-8.0-4.5) -- (1.5-0.8,-8.0-4) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \draw[circle, -,dashdotted, very thick,rounded corners=8pt] (1.5+8.2,-8.0+2)--(1.5+8.5,-8.0+2) --(1.5+9,-8.0+1.5) -- (1.5+9,-8.0-4)-- (1.5+8.5,-8.0-4.5) -- (1.5+8-0.3,-8.0-4.5) -- (1.5+8-0.8,-8.0-4) -- (1.5+8-0.8,-8.0+1.5) -- (1.5+8-0.3,-8.0+2)--(1.5+8.2,-8.0+2); \def1.5{1.6} \def-8.0{-1.0} \draw[circle, -,dashdotted, very thick,rounded corners=8pt] (1.5-0.0,-8.0-10.5) -- (1.5-1.5,-8.0-10.75)--(1.5-3.0,-8.0-10.9); \draw[circle, -,dashdotted, very thick,rounded corners=8pt] (1.5+8.0,-8.0-10.5) -- (1.5-0.0,-8.0-11.1)--(1.5-3.0,-8.0-10.9); \node at (1.5-3.3,-8.0-11) {$X_3$}; \def1.5{2} \def-8.0{-7.0} \def1.5{-0.1+12} \def-8.0{-0.9-12} \draw[circle, -,dashed, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+0.6,-8.0+2) --(1.5+1.1,-8.0+1.5) -- (1.5+1.1,-8.0+0.3)-- (1.5+0.6,-8.0-0.2) -- (1.5-0.3,-8.0-0.2) -- (1.5-0.8,-8.0+0.3) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def-8.0{-0.9-11.5} \draw[circle, -,dashed, very thick,rounded corners=8pt] (1.5-0.5,-8.0-0.5) -- (1.5-1,-8.0-1.0)--(1.5-1.5,-8.0-1.0); \node at (1.5-1.8,-8.0-1) {$X_4$}; \def1.5{17} \def-8.0{-5} \node at (1.5-1.8,-8.0-11) {$G/_{i=1}^3X_{1,i}/_{i=1}^4X_{2,i}/_{i=1}^4X_{3,i}/X_{4,1}$}; \def1.5{17} \def-8.0{12.5} \node at (1.5-1.8,-8.0-11.25) {$G/_{i=1}^3X_{1,i}/_{i=1}^5X_{2,i}/_{i=1}^3X_{3,i}/X_{4,1}$}; \end{tikzpicture} } \end{center} \caption{The decomposition of the graph $G$ into the graphs $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}$ $/_{i=1}^4X'_{3,i}/X'_{4,1}$ and 
$G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$, with $G\cong G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}$ $/X'_{4,1}\boxbackslash G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$.} \label{BiPartiteExampleDecomposition1} \end{figure} \begin{figure}[H] \begin{center} \resizebox{1\textwidth}{!}{ \begin{tikzpicture}[->,>=latex,shorten >=0pt,auto,node distance=2.5cm, main node/.style={circle,fill=blue!10,draw, font=\sffamily\Large\bfseries} \tikzset{VertexStyle/.append style= font=\itshape\large, shape = circle,inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \tikzset{EdgeStyle/.append style={thin}} \tikzset{LabelStyle/.append style={font = \itshape}} \SetVertexMath \clip (-2,2.5) rectangle (18, -15.0); \tikzset{VertexStyle/.append style= font=\itshape\large, shape = rounded rectangle, inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \def1.5{5.0} \def-8.0{2.5} \node at (1.5-1.0,-8.0-0.75) {$G/_{i=1}^3X_{1,i}/_{i=1}^4X_{2,i}/_{i=1}^4X_{3,i}/X_{4,1}\boxtimes G/_{i=1}^3X_{1,i}/_{i=1}^5X_{2,i}/_{i=1}^3X_{3,i}/X_{4,1}$}; \def1.5{0.0} \def-8.0{0.0} \Vertex[x=1.5+0, y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_1')}]{u_00} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_1')}]{u_01} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_1')}]{u_02} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_1')}]{u_03} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_1')}]{u_04} \Vertex[x=1.5+10, y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_1')}]{u_05} \def-8.0{-2.0} \Vertex[x=1.5+0, y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_2')}]{u_10} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_2')}]{u_11} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_2')}]{u_12} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_2')}]{u_13} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_2')}]{u_14} \Vertex[x=1.5+10, y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_2')}]{u_15} \def-8.0{-4.0} \Vertex[x=1.5+0, 
y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_3')}]{u_20} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_3')}]{u_21} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_3')}]{u_22} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_3')}]{u_23} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_3')}]{u_24} \Vertex[x=1.5+10, y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_3')}]{u_25} \def-8.0{-6.0} \Vertex[x=1.5+0, y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_4')}]{u_30} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_4')}]{u_31} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_4')}]{u_32} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_4')}]{u_33} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_4')}]{u_34} \Vertex[x=1.5+10, y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_4')}]{u_35} \def-8.0{-8.0} \Vertex[x=1.5+0, y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_5')}]{u_40} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_5')}]{u_41} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_5')}]{u_42} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_5')}]{u_43} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_5')}]{u_44} \Vertex[x=1.5+10, y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_5')}]{u_45} \def-8.0{-10.0} \Vertex[x=1.5+0, y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_6')}]{u_50} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_6')}]{u_51} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_6')}]{u_52} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_6')}]{u_53} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_6')}]{u_54} \Vertex[x=1.5+10, y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_6')}]{u_55} \def-8.0{-12.0} \Vertex[x=1.5+0, y=-8.0+0,L={(\tilde{x}_1'',\tilde{x}_7')}]{u_60} \Vertex[x=1.5+2, y=-8.0+0,L={(\tilde{x}_2'',\tilde{x}_7')}]{u_61} \Vertex[x=1.5+4, y=-8.0+0,L={(\tilde{x}_3'',\tilde{x}_7')}]{u_62} \Vertex[x=1.5+6, y=-8.0+0,L={(\tilde{x}_4'',\tilde{x}_7')}]{u_63} \Vertex[x=1.5+8, y=-8.0+0,L={(\tilde{x}_5'',\tilde{x}_7')}]{u_64} \Vertex[x=1.5+10, 
y=-8.0+0,L={(\tilde{x}_6'',\tilde{x}_7')}]{u_65} \Vertex[x=1.5+12, y=-8.0+0,L={(\tilde{x}_7'',\tilde{x}_7')}]{v_0} \def1.5{-4} \def-8.0{-6.0} \draw (1.5+5.2+4.75,-8.0+1.0)[very thick, dashdotted,font={\itshape}] .. controls ($ (1.5+16,-8.0+1.0) +(0,1)$) and ($ (v_0) +(0,8)$) .. (v_0) node[pos=0.5, inner sep=-1pt, label={a}] {}; \draw (1.5+13.2+1.75,-8.0-0.5)[very thick, dashdotted,font={\itshape}] .. controls ($ (1.5+13.2+1.75,-8.0-0.5) +(0.5,0)$) and ($ (v_0) +(0,4)$) .. (v_0) node[pos=0.5, inner sep=-1pt, label={a}] {}; \draw (1.5+4.9,-8.0-3.1)[very thick, dotted,font={\itshape}] .. controls ($ (1.5+4.9,-8.0-3.0) +(0,-6)$) and ($ (v_0) +(-10,-4)$) .. (v_0) node[pos=0.5, inner sep=-1pt, label={b}] {}; \draw (1.5+5.5+1.1,-8.0+4.8)[very thick, font={\itshape}] .. controls ($ (1.5+5.5+0.6,-8.0+5.0) +(12,2)$) and ($ (v_0) +(2,9)$) .. (v_0) node[pos=0.5, inner sep=-1pt, label={c}] {}; \draw (1.5+9.5+3.4,-8.0+4.8-0.4)[very thick, font={\itshape}] .. controls ($ (1.5+9.5+3.4,-8.0+4.8-0.4) +(2,1)$) and ($ (v_0) +(2,9)$) .. (v_0) node[pos=0.5, inner sep=-1pt, label={c}] {}; \draw (1.5+5.5+1,-8.0+4.8-9.6-0.05)[very thick, font={\itshape}] .. controls ($ (1.5+5.5+0.5,-8.0+4.8-9.6-0.2) +(4,-1)$) and ($ (v_0) +(-2,1)$) .. (v_0){}; \node at (1.5+12, -8.0-5.2) {c}; \draw (1.5+9.5+3.1,-8.0+4.8-9.6)[very thick, font={\itshape}] .. controls ($(1.5+9.5+5,-8.0+4.8-9.5)$) and ($(v_0) +(-1,1)$) .. 
(v_0){}; \node at (1.5+14, -8.0-4.6) {c}; \def1.5{0} \def-8.0{-1.0} \draw[circle, -,dotted, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+6.4,-8.0+2) --(1.5+6.9,-8.0+1.5) -- (1.5+6.9,-8.0-7.5)-- (1.5+6.4,-8.0-8.0) -- (1.5-0.3,-8.0-8.0) -- (1.5-0.8,-8.0-7.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{1.9} \def-8.0{-5.0} \def1.5{2} \def-8.0{-3.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+0.4,-8.0+2) --(1.5+0.9,-8.0+1.5) -- (1.5+0.9,-8.0+0.5)-- (1.5+0.4,-8.0-0.0) -- (1.5-0.3,-8.0-0.0) -- (1.5-0.8,-8.0+0.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{6} \def-8.0{-3.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+2.4,-8.0+2) --(1.5+2.9,-8.0+1.5) -- (1.5+2.9,-8.0+0.5)-- (1.5+2.4,-8.0-0.0) -- (1.5-0.3,-8.0-0.0) -- (1.5-0.8,-8.0+0.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{2} \def-8.0{-9.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+0.4,-8.0+2) --(1.5+0.9,-8.0+1.5) -- (1.5+0.9,-8.0-1.5)-- (1.5+0.4,-8.0-2.0) -- (1.5-0.3,-8.0-2.0) -- (1.5-0.8,-8.0-1.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{6} \def-8.0{-9.0} \draw[circle, -, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+2.4,-8.0+2) --(1.5+2.9,-8.0+1.5) -- (1.5+2.9,-8.0-1.5)-- (1.5+2.4,-8.0-2.0) -- (1.5-0.3,-8.0-2.0) -- (1.5-0.8,-8.0-1.5) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def1.5{1.9} \def-8.0{-7.0} \draw[circle, -,dashdotted, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+4.5,-8.0+2) --(1.5+5,-8.0+1.5) -- (1.5+5,-8.0-4)-- (1.5+4.5,-8.0-4.5) -- (1.5-0.3,-8.0-4.5) -- (1.5-0.8,-8.0-4) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \draw[circle, -,dashdotted, very thick,rounded corners=8pt] (1.5+8.2,-8.0+2)--(1.5+8.5,-8.0+2) --(1.5+9,-8.0+1.5) -- (1.5+9,-8.0-4)-- (1.5+8.5,-8.0-4.5) -- (1.5+8-0.3,-8.0-4.5) -- (1.5+8-0.8,-8.0-4) -- (1.5+8-0.8,-8.0+1.5) -- 
(1.5+8-0.3,-8.0+2)--(1.5+8.2,-8.0+2); \def1.5{1.6} \def-8.0{-1.0} \def1.5{2} \def-8.0{-7.0} \def1.5{-0.1+12} \def-8.0{-0.9-12} \draw[circle, -,dashed, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+0.6,-8.0+2) --(1.5+1.1,-8.0+1.5) -- (1.5+1.1,-8.0+0.3)-- (1.5+0.6,-8.0-0.2) -- (1.5-0.3,-8.0-0.2) -- (1.5-0.8,-8.0+0.3) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.2,-8.0+2); \def-8.0{-0.9-11.5} \def1.5{2} \def-8.0{2} \end{tikzpicture} } \end{center} \caption{The intermediate stage of $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}/X'_{4,1}$ and $G/_{i=1}^3X''_{1,i}$ $/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$, $G/_{i=1}^3X'_{1,i}/_{i=1}^4X'_{2,i}/_{i=1}^4X'_{3,i}/X'_{4,1}\boxtimes G/_{i=1}^3X''_{1,i}/_{i=1}^5X''_{2,i}/_{i=1}^3X''_{3,i}/X''_{4,1}$.} \label{BiPartiteExampleDecomposition2} \end{figure} \begin{theorem}\label{theorem_5} Let $G$ be a bipartite matrix graph consisting of semicomplete bipartite subgraphs $B(X_a,X_b)$ only, where each $B(X_a,X_b)$ is arc-induced by a set of all arcs of $G$ with identical labels, $V(G)=X_1\cup\ldots \cup X_x$ , $a,b\in\{1,\ldots,x\}, a\neq b$. Let $[X_a,X_b]$ have only forward arcs or let $[X_a,X_b]$ have only backward arcs. Let there be no arc $a=u_iv_j$ in $G$ with $u_i,v_j\in X_a$ or $u_i,v_j\in X_b$. Let $X_a=\{v_{i,j}\mid i\in I_{X_a}\subseteq I=\{1,\ldots,m\},j\in J_{X_a}\subseteq J=\{1,\ldots,n\}\}$, $|X_a|=k_a\cdot l_a$, $k_a, l_a\in \mathbb{N}^+,|I_{X_a}|=k_a,|J_{X_a}|=l_a$, with rows $X'_{a,i}=\{v_{i,j}\mid j\in J_{X_a}\},i\in I_{X_a}$ and columns $X''_{a,j}=\{v_{i,j}\mid i\in I_{X_a}\},j\in J_{X_a}$ and let $X_b=\{v_{i,j}\mid i\in I_{X_b}\subseteq I=\{1,\ldots,m\},j\in J_{X_b}\subseteq J=\{1,\ldots,n\}\}$, $|X_b|=k_b\cdot l_b$, $k_b, l_b\in \mathbb{N}^+,|I_{X_b}|=k_b,|J_{X_b}|=l_b$, with rows $X'_{b,i}=\{v_{i,j}\mid j\in J_{X_b}\},i\in I_{X_b}$ and columns $X''_{b,j}=\{v_{i,j}\mid i\in I_{X_b}\},j\in J_{X_b}$. 
If, for any $X_i$ and any $X_j$ of $G$ with $i,j\in\{1,\ldots,x\}$, the intersection of $X_i$ and $X_j$ is either empty or a grid, then $G\cong G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$. \end{theorem} \begin{proof} It suffices to define a mapping $\phi: V(G)\rightarrow V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$ and to prove that $\phi$ is an isomorphism from $G$ to $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$. Let $\tilde{x}'_i$ be the new vertex replacing the set $X'_{y,z}$ with $v_{i,j}\in X'_{y,z}$ and let $\tilde{x}''_j$ be the new vertex replacing the set $X''_{y,z}$ with $v_{i,j}\in X''_{y,z}$, when defining $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ and $G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$, respectively. Consider the mapping $\phi: V(G)\rightarrow V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$ defined by $\phi(v_{i,j})=(\tilde{x}'_i,\tilde{x}''_j)$. Then $\phi$ is obviously a bijection if $V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})=Z$, where $Z$ is defined as $Z=\{(\tilde{x}'_i,\tilde{x}''_j)\mid v_{i,j}\in V(G),\ \phi(v_{i,j})=(\tilde{x}'_i,\tilde{x}''_j)\}$.
We are going to show this later by arguing that all vertices $\tilde{x}'_i$ and $\tilde{x}'_j$, $i\neq j$, are different, that all vertices $\tilde{x}''_i$ and $\tilde{x}''_j$, $i\neq j$, are different, and that all the other vertices $(\tilde{x}'_k,\tilde{x}''_l)$ of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\Box G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ for which there is no $v_{k,l}\in V(G)$ will disappear from $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$. But first we prove the following claim. \begin{claim}\label{claim3} The subgraph of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ induced by $Z$ is isomorphic to $G$. \end{claim} \begin{proof} We start by proving that $i\neq j$ implies $\tilde{x}'_i\neq \tilde{x}'_j$ and that $i\neq j$ implies $\tilde{x}''_i\neq \tilde{x}''_j$. Let $R_i$ be the set of rows containing all vertices $v_{i,j}$ of $V(G)$. Therefore, all rows in $R_i$ have the number $i$ as their first index. Because $G$ is a bipartite matrix graph, we have that for any division of $R_i$ into the sets $R_{i_1}$ and $R_{i_2}$, $R_i=R_{i_1}\cup R_{i_2}$, there is always a row $X'_{k,x}\in R_{i_1}$ and a row $X'_{l,y}\in R_{i_2}$ with $X'_{k,x}\cap X'_{l,y}\neq \emptyset$. Therefore, all rows of $R_i$ are contracted to $\tilde{x}'_i$. Because all rows with vertices $v_{i,j}$ are in $R_i$, a row with a vertex $v_{k,l}$ that is not in any row of $R_i$ must have $i\neq k$. Likewise, let $C_j$ be the set of columns containing all vertices $v_{i,j}$ of $V(G)$. Therefore, all columns in $C_j$ have the number $j$ as their second index. Because $G$ is a bipartite matrix graph, we have that for any division of $C_j$ into the sets $C_{j_1}$ and $C_{j_2}$, $C_j=C_{j_1}\cup C_{j_2}$, there is always a column $X''_{k,x}\in C_{j_1}$ and a column $X''_{l,y}\in C_{j_2}$ with $X''_{k,x}\cap X''_{l,y}\neq \emptyset$. Therefore, all columns of $C_j$ are contracted to $\tilde{x}''_j$. Because all columns with vertices $v_{i,j}$ are in $C_j$, a column with a vertex $v_{k,l}$ that is not in any column of $C_j$ must have $j\neq l$. Hence, $i\neq j$ implies $\tilde{x}'_i\neq \tilde{x}'_j$ and $i\neq j$ implies $\tilde{x}''_i\neq \tilde{x}''_j$. Therefore, $\phi$ maps each vertex $v_{i,j}\in V(G)$ to $(\tilde{x}'_i,\tilde{x}''_j)\in Z$, and if $v_{i_1,j_1}\neq v_{i_2,j_2}$ then $(\tilde{x}'_{i_1},\tilde{x}''_{j_1})\neq (\tilde{x}'_{i_2},\tilde{x}''_{j_2})$, $v_{i_1,j_1}, v_{i_2,j_2}\in V(G)$, and we have that $\phi$ is a bijection from $V(G)$ to $Z$. It remains to show that this bijection preserves the arcs and their labels. Because there is no arc $a=v_{i,j}v_{k,l}$ in $A(G)$ with $v_{i,j},v_{k,l}\in X_a$ or $v_{i,j},v_{k,l}\in X_b$, we have that by the contractions each arc $a\in A(G)$ with $\mu(a)=(v_{i,j},v_{k,l}), \lambda(a)=a'$ is replaced by an arc $x'\in A(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z})$ with $\mu(x')=(\tilde{x}'_i,\tilde{x}'_k), \lambda(x')=a'$ and an arc $x''\in A(G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$ with $\mu(x'')=(\tilde{x}''_j,\tilde{x}''_l), \lambda(x'')=a'$. Therefore, all arcs $x'$ are synchronising arcs of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ with respect to $G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ (by hypothesis) and all arcs $x''$ are synchronising arcs of $G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ with respect to $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ (by hypothesis). It follows that the arcs $x'$ and $x''$ correspond to an arc $y=(\tilde{x}'_i,\tilde{x}''_j)(\tilde{x}'_{k},\tilde{x}''_{l})$ of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ with $\lambda(y)=\lambda(x')$.
Furthermore, $\phi$ maps the vertices $v_{i,j}$ and $v_{k,l}$ on the vertices $(\tilde{x}'_i,\tilde{x}''_j)$ and $(\tilde{x}'_k,\tilde{x}''_l)$, respectively, and therefore we have that an arc $z=v_{i,j}v_{k,l}$ of $G$ corresponds with an arc $y=(\tilde{x}'_i,\tilde{x}''_j)(\tilde{x}'_k,\tilde{x}''_l)$ of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$, with $\lambda(y)=\lambda(z)$. Because $(\tilde{x}'_i,\tilde{x}''_j)$ and $(\tilde{x}'_{k},\tilde{x}''_{l})$ are in $Z$, the arc $y$ is an arc of the graph induced by $Z$ and we have a one-to-one relationship between the arcs $y$ of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ and the arcs $z$ of $G$. Since there are no other vertices in $Z$ than the vertices $(\tilde{x}'_i,\tilde{x}''_j)$ and $(\tilde{x}'_{k},\tilde{x}''_{l})$, and no other vertices in $G$ than the vertices $v_{i,j}$ and $v_{k,l}$, the subgraph of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}$ $\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ induced by $Z$ is isomorphic to $G$. \end{proof} By the definition of the Cartesian product, for each pair of vertices $\tilde{x}'_{i} \in V(G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z})$ and $\tilde{x}''_{j} \in V(G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$, there exists a vertex $(\tilde{x}'_{i},\tilde{x}''_{j}) \in V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z} \boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$. It remains to show that $\phi$ is a bijection from $V(G)$ to $Z'=V(G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$ preserving the arcs and their labels. Therefore, we have to show that all vertices of $V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$ not in $Z$ are removed from $V(G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z}\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$. Let $|G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z}|=m_1\leq m$ and $|G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}|=n_1\leq n$.
Let $v_{s,t}\notin V(G)$ with $s\in \{1,\ldots ,m_1\}$ and $t\in \{1,\ldots ,n_1\}$. Then there cannot exist an arc $v_{i,j}v_{s,t}\in A(G)$, as otherwise $v_{s,t}$ would be in $V(G)$. But there exists a vertex $\tilde{x}'_s\in G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z}$ and a vertex $\tilde{x}''_t \in G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$, and, therefore, there exists a vertex $(\tilde{x}'_s,\tilde{x}''_t)\in V(G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z}\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$. The intersection of the set of labels $L'$ of arcs with head $\tilde{x}'_s$ and the set of labels $L''$ of arcs with head $\tilde{x}''_t$ is empty, because otherwise there would exist an arc $a$ in $A(G)$ with head $v_{s,t}$. Hence, all arcs with head $\tilde{x}'_s$ are asynchronous with respect to all arcs with head $\tilde{x}''_t$. Therefore, there cannot exist a vertex $(\tilde{x}'_s,\tilde{x}''_t) \in V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$ and $Z$ must be equal to $V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$. Because the subgraph of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxtimes G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$ induced by $Z$ is isomorphic to $G$ and $Z=V(G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z})$, it follows that $G \cong G/_{y=1}^{x}/_{z=1}^{k_y}$ $X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$. This completes the proof of Theorem~\ref{theorem_5}. \end{proof} We call a bipartite matrix graph consisting of semicomplete bipartite subgraphs that is decomposable by Theorem~\ref{theorem_5} a \emph{VRSP-decomposable bipartite matrix graph}.
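As a concrete instance of the proof of Theorem~\ref{theorem_5}, applied to the example of Figure~\ref{BiPartiteExampleDecomposition1} (here we use the primed notation of the proof for the contracted vertices), the bijection $\phi$ sends, for instance,
\[
\phi(u_{2,5})=(\tilde{x}'_2,\tilde{x}''_5)\quad\text{and}\quad\phi(u_{7,7})=(\tilde{x}'_7,\tilde{x}''_7),
\]
whereas a pair such as $(\tilde{x}'_1,\tilde{x}''_6)$ has no counterpart $v_{1,6}\in V(G)$ and therefore, by the argument above, does not occur as a vertex of $G/_{y=1}^{x}/_{z=1}^{k_y}X'_{y,z}\boxbackslash G/_{y=1}^{x}/_{z=1}^{l_y}X''_{y,z}$.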
In the fourth decomposition theorem, we are going to prove that $G/_{i\in I_R}R_i\boxbackslash G/_{j\in J_C}C_j\cong G$, where $V(G)$ consists of nonempty pairwise disjoint subsets $R_i=\{v_{i,j}\mid j\in J_C\subseteq J\},i\in I_R\subseteq I$, and nonempty pairwise disjoint subsets $C_j=\{v_{i,j}\mid i\in I_R\subseteq I\},j\in J_C\subseteq J,|I_R|=m_1,|J_C|=n_1$, with $V(G)=\bigcup\limits_{i\in I_R}R_i=\bigcup\limits_{j\in J_C}C_j$, for which $G[R_x]\cong G[R_y],x,y\in I_R$, $G[C_x]\cong G[C_y],x,y\in J_C$, the arcs of $A_R=\bigcup\limits_{x\in I_R}A[R_x]$ and the arcs of $A_C=\bigcup\limits_{y\in J_C}A[C_y]$ have no labels in common and there are no other arcs in $A(G)$ than the arcs of $A_R$ and the arcs of $A_C$. We give an illustrative example of the decomposition by Theorem~\ref{theorem_6} in Figure~\ref{ThirdDecomposition}. \begin{figure}[H] \begin{center} \resizebox{0.75\textwidth}{!}{ \begin{tikzpicture}[->,>=latex,shorten >=0pt,auto,node distance=2.5cm, main node/.style={circle,fill=blue!10,draw, font=\sffamily\Large\bfseries} \tikzset{VertexStyle/.append style= font=\itshape\large, shape = circle,inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \tikzset{EdgeStyle/.append style={thin}} \tikzset{LabelStyle/.append style={font = \itshape}} \SetVertexMath \def1.5{0.0} \def-8.0{1.0} \node at (1.5+0.5,3+-8.0+3) {$G$}; \node at (1.5+4.5,-8.0+5.4) {$Y_1$}; \node at (1.5+1.5,-8.0+4.4) {$X_1$}; \node at (1.5+4.5,-8.0+2.6) {$Y_2$}; \node at (1.5+5.5,-8.0+4.4) {$X_2$}; \node at (1.5+4.5,-8.0-0.4) {$Y_3$}; \node at (1.5+9.5,-8.0+4.4) {$X_3$}; \node at (1.5+13.5,-8.0+4.4) {$X_4$}; \node at (1.5+5.5,-8.0-1-2) {$G/X_1^4$}; \node at (1.5+0.5,-8.0-5-1) {$G/Y_1^3$}; \node at (1.5+6,-8.0-5-1) {$Z=V(G/Y_1^3\boxtimes G/X_1^4)$}; \node at (1.5+7.5,-8.0-3.1-2) {$G/Y_1^3\boxtimes G/X_1^4$}; \def1.5{0.5} \def-8.0{6.0+1} \Vertex[x=1.5+1.5, y=-8.0+0.0,L={u_2}]{u_2} \Vertex[x=1.5+1.5, y=-8.0-3.0,L={u_3}]{u_3} \Vertex[x=1.5+1.5, y=-8.0-6.0,L={u_4}]{u_4} \Vertex[x=1.5+5.5,
y=-8.0+0.0,L={u_5}]{u_5} \Vertex[x=1.5+5.5, y=-8.0-3,L={u_6}]{u_6} \Vertex[x=1.5+5.5, y=-8.0-6,L={u_7}]{u_7} \Vertex[x=1.5+9.5, y=-8.0+0.0,L={u_8}]{u_8} \Vertex[x=1.5+9.5, y=-8.0-3.0,L={u_9}]{u_9} \Vertex[x=1.5+9.5, y=-8.0-6.0,L={u_{10}}]{u_10} \Vertex[x=1.5+13.5, y=-8.0+0,L={u_{11}}]{u_11} \Vertex[x=1.5+13.5, y=-8.0-3,L={u_{12}}]{u_12} \Vertex[x=1.5+13.5, y=-8.0-6,L={u_{13}}]{u_13} \Edge[label = b](u_2)(u_3) \Edge[label = c](u_3)(u_4) \Edge[label = b](u_5)(u_6) \Edge[label = c](u_6)(u_7) \Edge[label = b](u_8)(u_9) \Edge[label = c](u_9)(u_10) \Edge[label = b](u_11)(u_12) \Edge[label = c](u_12)(u_13) \Edge[label = d](u_2)(u_5) \Edge[label = e](u_5)(u_8) \Edge[label = f](u_8)(u_11) \Edge[label = d](u_3)(u_6) \Edge[label = e](u_6)(u_9) \Edge[label = f](u_9)(u_12) \Edge[label = d](u_4)(u_7) \Edge[label = e](u_7)(u_10) \Edge[label = f](u_10)(u_13) \Edge(u_2)(u_3) \Edge(u_3)(u_4) \Edge(u_5)(u_6) \Edge(u_6)(u_7) \Edge(u_8)(u_9) \Edge(u_9)(u_10) \Edge(u_11)(u_12) \Edge(u_12)(u_13) \Edge(u_2)(u_5) \Edge(u_5)(u_8) \Edge(u_8)(u_11) \Edge(u_3)(u_6) \Edge(u_6)(u_9) \Edge(u_9)(u_12) \Edge(u_4)(u_7) \Edge(u_7)(u_10) \Edge(u_10)(u_13) \def1.5{4.0} \def-8.0{-3} \Vertex[x=1.5+1, y=-8.0+0.0,L={\tilde{x_1}}]{s_1} \Vertex[x=1.5+4, y=-8.0+0.0,L={\tilde{x_2}}]{s_2} \Vertex[x=1.5+7, y=-8.0+0.0,L={\tilde{x_3}}]{s_3} \Vertex[x=1.5+10, y=-8.0+0.0,L={\tilde{x_4}}]{s_4} \Edge[label = d](s_1)(s_2) \Edge[label = e](s_2)(s_3) \Edge[label = f](s_3)(s_4) \def1.5{+1.5} \def-8.0{-3.0} \Vertex[x=1.5+0, y=-8.0-3.0,L={\tilde{y}_1}]{t_1} \Vertex[x=1.5+0, y=-8.0-6.0,L={\tilde{y}_2}]{t_2} \Vertex[x=1.5+0, y=-8.0-9.0,L={\tilde{y}_3}]{t_3} \Edge[label = b](t_1)(t_2) \Edge[label = c](t_2)(t_3) \Edge(t_1)(t_2) \Edge(t_2)(t_3) \tikzset{VertexStyle/.append style= font=\itshape\large,shape = rounded rectangle,inner sep = 0pt, outer sep = 0pt,minimum size = 20 pt,draw}} \def1.5{2.0} \def-8.0{-7.0} \def1.5{2.0} \def-8.0{-6.0} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{y}_1,\tilde{x}_1)}]{t_1s_1} \Vertex[x=1.5+6.0, 
y=-8.0-0.0,L={(\tilde{y}_1,\tilde{x}_2)}]{t_1s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{y}_1,\tilde{x}_3)}]{t_1s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{y}_1,\tilde{x}_4)}]{t_1s_4} \def1.5{2.0} \def-8.0{-9.0} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{y}_2,\tilde{x}_1)}]{t_2s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{y}_2,\tilde{x}_2)}]{t_2s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{y}_2,\tilde{x}_3)}]{t_2s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{y}_2,\tilde{x}_4)}]{t_2s_4} \def1.5{2.0} \def-8.0{-12.0} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{y}_3,\tilde{x}_1)}]{t_3s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{y}_3,\tilde{x}_2)}]{t_3s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{y}_3,\tilde{x}_3)}]{t_3s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{y}_3,\tilde{x}_4)}]{t_3s_4} \def1.5{2.0} \def-8.0{-19.0} \Edge[label = d](t_1s_1)(t_1s_2) \Edge[label = e](t_1s_2)(t_1s_3) \Edge[label = f](t_1s_3)(t_1s_4) \Edge(t_1s_1)(t_1s_2) \Edge(t_1s_2)(t_1s_3) \Edge(t_1s_3)(t_1s_4) \Edge[label = d](t_2s_1)(t_2s_2) \Edge[label = e](t_2s_2)(t_2s_3) \Edge[label = f](t_2s_3)(t_2s_4) \Edge(t_2s_1)(t_2s_2) \Edge(t_2s_2)(t_2s_3) \Edge(t_2s_3)(t_2s_4) \Edge[label = d](t_3s_1)(t_3s_2) \Edge[label = e](t_3s_2)(t_3s_3) \Edge[label = f](t_3s_3)(t_3s_4) \Edge(t_3s_1)(t_3s_2) \Edge(t_3s_2)(t_3s_3) \Edge(t_3s_3)(t_3s_4) \Edge[label = b](t_1s_1)(t_2s_1) \Edge[label = b](t_1s_2)(t_2s_2) \Edge[label = b](t_1s_3)(t_2s_3) \Edge[label = b](t_1s_4)(t_2s_4) \Edge(t_1s_1)(t_2s_1) \Edge(t_1s_2)(t_2s_2) \Edge(t_1s_3)(t_2s_3) \Edge(t_1s_4)(t_2s_4) \Edge[label = c](t_2s_1)(t_3s_1) \Edge[label = c](t_2s_2)(t_3s_2) \Edge[label = c](t_2s_3)(t_3s_3) \Edge[label = c](t_2s_4)(t_3s_4) \Edge(t_2s_1)(t_3s_1) \Edge(t_2s_2)(t_3s_2) \Edge(t_2s_3)(t_3s_3) \Edge(t_2s_4)(t_3s_4) \def1.5{1.7} \def-8.0{5.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{5.7} 
\def-8.0{5.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{9.7} \def-8.0{5.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{13.7} \def-8.0{5.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{1.7} \def-8.0{5.1+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+2.8,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+13.0,-8.0+1.5) --(1.5+13.0,-8.0) -- (1.5+2.8,-8.0+0.0); \def1.5{1.7} \def-8.0{2.3+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+2.8,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+13.0,-8.0+1.5) --(1.5+13.0,-8.0) -- (1.5+2.8,-8.0+0.0); \def1.5{1.7} \def-8.0{-0.7+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+2.8,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+13.0,-8.0+1.5) --(1.5+13.0,-8.0) -- (1.5+2.8,-8.0+0.0); \def1.5{0.5} \def-8.0{6.2+1} \def1.5{1.5} \def-8.0{-6.0} \def1.5{3.5} \def-8.0{-6.4} \draw[circle, -,dashed, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+11.4,-8.0+2) --(1.5+11.9,-8.0+1.5) -- (1.5+11.9,-8.0-7)-- (1.5+11.4,-8.0-7.5) -- (1.5-0.3,-8.0-7.5) -- (1.5-0.8,-8.0-7) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.1,-8.0+2); \end{tikzpicture} } \end{center} \caption{Decomposition of $G\cong G/_{i=1}^3Y_i\boxbackslash G/_{i=1}^4X_i$. 
The set $Z$ from the proof of Theorem~\ref{theorem_6} and the graph isomorphic to $G$ induced by $Z$ in $G/_{i=1}^3Y_i\boxtimes G/_{i=1}^4X_i$ are indicated within the dotted region.} \label{ThirdDecomposition} \end{figure} \begin{theorem}\label{theorem_6} Let $G$ be a Cartesian matrix graph where $V(G)$ consists of nonempty pairwise disjoint subsets $R_i=\{v_{i,j}\mid j\in J_C\subseteq J\},i\in I_R\subseteq I$, and nonempty pairwise disjoint subsets $C_j=\{v_{i,j}\mid i\in I_R\subseteq I\},j\in J_C\subseteq J,|I_R|=m_1,|J_C|=n_1$, with $V(G)=\bigcup\limits_{i\in I_R}R_i=\bigcup\limits_{j\in J_C}C_j$, for which $G[R_x]\cong G[R_y],x,y\in I_R$, $G[C_x]\cong G[C_y],x,y\in J_C$, the arcs of $A_R=\bigcup\limits_{x\in I_R}A[R_x]$ and the arcs of $A_C=\bigcup\limits_{y\in J_C}A[C_y]$ have no labels in common and there are no other arcs in $A(G)$ than the arcs of $A_R$ and the arcs of $A_C$. Then $G/_{i\in I_R}R_i\boxbackslash G/_{j\in J_C}C_j\cong G$. \end{theorem} \begin{proof} It clearly suffices to define a mapping $\phi: V(G)\rightarrow V(G/_{i\in I_R}R_i\boxbackslash G/_{j\in J_C}C_j)$ and to prove that $\phi$ is an isomorphism from $G$ to $G/_{i\in I_R}R_i\boxbackslash G/_{j\in J_C}C_j$. Let $\tilde{x}'_i$ be the new vertex replacing the set $R_i$ and let $\tilde{x}''_j$ be the new vertex replacing the set $C_j$, when defining $G/_{i\in I_R}R_i$ and $G/_{j\in J_C}C_j$, respectively. Consider the mapping $\phi: V(G)\rightarrow V(G/_{i\in I_R}R_i\boxbackslash G/_{j\in J_C}C_j)$ defined by $\phi(v_{i,j})=(\tilde{x}'_i,\tilde{x}''_j)$ for all $v_{i,j}\in V(G)$. \noindent Then $\phi$ is obviously a bijection if $V(G/_{i\in I_R}R_i\boxbackslash G/_{j\in J_C}C_j)=Z$, where $Z$ is defined as $Z=\{(\tilde{x}'_i,\tilde{x}''_j)\mid \phi(v_{i,j})=(\tilde{x}'_i,\tilde{x}''_j), v_{i,j}\in V(G)\}$. Furthermore, the set $Z$ is identical to the set of vertices of $G/_{i\in I_R}R_i\boxtimes G/_{j\in J_C}C_j$.
We start with proving that the contraction of $R_i$ to $\tilde{x}'_i$ and the contraction of $R_j$ to $\tilde{x}'_j$ for $i\neq j$ imply $\tilde{x}'_i\neq \tilde{x}'_j$, and that the contraction of $C_j$ to $\tilde{x}''_j$ and the contraction of $C_k$ to $\tilde{x}''_k$ for $j\neq k$ imply $\tilde{x}''_j\neq \tilde{x}''_k$. Because $R_i$ is the row containing all vertices $v_{i,k}$ of $V(G)$ (by hypothesis), the vertices of $R_i$ are replaced by $\tilde{x}'_i$; because $R_j$ is the row containing all vertices $v_{j,k}$ of $V(G)$ (by hypothesis), the vertices of $R_j$ are replaced by $\tilde{x}'_j$. Since $R_i\cap R_j=\emptyset$ for $i\neq j$, the contraction of $R_i$ to $\tilde{x}'_i$ and the contraction of $R_j$ to $\tilde{x}'_j$ for $i\neq j$ imply $\tilde{x}'_i\neq \tilde{x}'_j$. Likewise, because $C_j$ is the column containing all vertices $v_{i,j}$ of $V(G)$ (by hypothesis), the vertices of $C_j$ are replaced by $\tilde{x}''_j$; because $C_k$ is the column containing all vertices $v_{i,k}$ of $V(G)$ (by hypothesis), the vertices of $C_k$ are replaced by $\tilde{x}''_k$. Since $C_j\cap C_k=\emptyset$ for $j\neq k$, the contraction of $C_j$ to $\tilde{x}''_j$ and the contraction of $C_k$ to $\tilde{x}''_k$ for $j\neq k$ imply $\tilde{x}''_j\neq \tilde{x}''_k$. Because $Z$ consists of vertices $(\tilde{x}'_i,\tilde{x}''_j)$ only and $\phi$ maps $v_{i,j}$ onto $(\tilde{x}'_i,\tilde{x}''_j)$, and if $v_{i_1,j_1}\neq v_{i_2,j_2}$ then $(\tilde{x}'_{i_1},\tilde{x}''_{j_1})\neq (\tilde{x}'_{i_2},\tilde{x}''_{j_2})$, $v_{i_1,j_1}, v_{i_2,j_2}\in V(G)$, we have that $\phi$ is a bijection from $V(G)$ to $Z$. It remains to show that this bijection preserves the arcs and their labels. By hypothesis, the arcs of the rows $R_i$ of $G$ are asynchronous with respect to the arcs of the columns $C_j$ of $G$ and by hypothesis we have only arcs $a\in A(G)$ with $\mu(a)=(u_{i,j},u_{i,k})$ for $u_{i,j}\in R_i$, $u_{i,k}\in R_i$ and arcs $a\in A(G)$ with $\mu(a)=(u_{i,k},u_{j,k})$ for $u_{i,k}\in C_k$, $u_{j,k}\in C_k$.
Hence, together with the definition of the Cartesian product, for each arc $a\in A(G)$ with $\mu(a)=(u_{i,j},u_{i,k})$ for $u_{i,j}\in R_i$, $u_{i,k}\in R_i$, there exists an arc $b$ in $G/_{i\in I_R}R_i\boxtimes G/_{j\in J_C}C_j$ with $\mu(b)=((\tilde{x}'_i,\tilde{x}''_j),(\tilde{x}'_i,\tilde{x}''_k))=(\phi(u_{i,j}),\phi(u_{i,k}))$ and $\lambda(b)=\lambda(a)$. Likewise, for each arc $a\in A(G)$ with $\mu(a)=(u_{i,k},u_{j,k})$ for $u_{i,k}\in C_k$, $u_{j,k}\in C_k$, there exists an arc $b$ in $G/_{i\in I_R}R_i\boxtimes G/_{j\in J_C}C_j$ with $\mu(b)=((\tilde{x}'_i,\tilde{x}''_k),(\tilde{x}'_j,\tilde{x}''_k))=(\phi(u_{i,k}),\phi(u_{j,k}))$ and $\lambda(b)=\lambda(a)$. Because $G$ is acyclic, the above arcs are the only arcs in $G/_{i\in I_R}R_i\boxtimes G/_{j\in J_C}C_j$ induced by the vertices of $Z$. Furthermore, there are no other vertices in $G/_{i\in I_R}R_i\boxtimes G/_{j\in J_C}C_j$ than the vertices of $Z$, because all vertices of $Z$ are of the type $(\tilde{x}'_i,\tilde{x}''_j)$ (for the head and the tail of asynchronous arcs). This completes the proof of Theorem~\ref{theorem_6}. \end{proof} Note that the decomposition by Theorem~\ref{theorem_6} iteratively decomposes any graph $G$ that is the product of graphs $G_1,\ldots,G_n$, $G\cong \overundersyncprod{i=1}{{n}}G_i$, that do not share a label. We call a matrix graph that is decomposable by Theorem~\ref{theorem_6} a \emph{VRSP-decomposable Cartesian matrix graph} and we call a subgraph $G'$ of a matrix graph $G$ a \emph{maximal VRSP-decomposable Cartesian matrix subgraph} if $G'$ is a VRSP-decomposable Cartesian matrix graph and there is no subgraph $G''$ of $G$ where $G''$ is a VRSP-decomposable Cartesian matrix graph and $G'$ is a proper subgraph of $G''$. We continue with a decomposition theorem in which we implicitly use both Theorem~\ref{theorem_5} and Theorem~\ref{theorem_6}.
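Before doing so, we spell out, as a sketch in the notation of Theorem~\ref{theorem_6}, the example of Figure~\ref{ThirdDecomposition}, where the row sets $Y_1,Y_2,Y_3$ play the role of the $R_i$ and the column sets $X_1,\ldots,X_4$ play the role of the $C_j$. Contracting the rows yields the labelled path
\[
\tilde{y}_1\xrightarrow{\,b\,}\tilde{y}_2\xrightarrow{\,c\,}\tilde{y}_3,
\]
and contracting the columns yields the labelled path
\[
\tilde{x}_1\xrightarrow{\,d\,}\tilde{x}_2\xrightarrow{\,e\,}\tilde{x}_3\xrightarrow{\,f\,}\tilde{x}_4.
\]
Because the label sets $\{b,c\}$ and $\{d,e,f\}$ are disjoint and $G$ contains no other arcs, Theorem~\ref{theorem_6} gives $G\cong G/_{i=1}^{3}Y_i\boxbackslash G/_{i=1}^{4}X_i$.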
Graphs containing maximal VRSP-decomposable Cartesian matrix subgraphs and VRSP-decomposable semicomplete bipartite matrix subgraphs cannot be decomposed by either Theorem~\ref{theorem_5} or Theorem~\ref{theorem_6}. In Figure~\ref{FourthDecomposition}, we give an example where the vertices are numbered according to the matrix scheme for maximal VRSP-decomposable Cartesian matrix subgraphs and VRSP-decomposable semicomplete bipartite matrix subgraphs. This scheme leads to five rows and six columns, for which the contraction of the rows produces the graph $G/_{i=1}^5R_i$ and the contraction of the columns produces the graph $G/_{j=1}^6C_j$. The VRSP of these two graphs gives the graph $G/_{i=1}^5R_i\boxbackslash G/_{j=1}^6C_j$, which is isomorphic to $G$. In Theorem~\ref{theorem_7}, we state and prove the scheme described in Figure~\ref{FourthDecomposition}. \begin{theorem}\label{theorem_7} Let $G$ be a matrix graph consisting solely of a set of maximal VRSP-decomposable Cartesian matrix subgraphs $G_{M}$ of $G$ and a set of VRSP-decomposable semicomplete bipartite matrix subgraphs $G_B$ of $G$, where each semicomplete bipartite subgraph is arc-induced by the set of all arcs of $G$ with identical labels. Let any two subgraphs $G_{M_{1}}$ and $G_{M_{2}}$ of $G_M$ satisfy $V(G_{M_1})\cap V(G_{M_2})=\emptyset$, and let the subgraphs of $G_B$ have no labels in common. Let there be no arc $a$ of $G_B$ with $\mu(a)=v_{i,j}v_{i,k}$ and $v_{i,j},v_{i,k}$ in any $V(G_{M_x})$ of $G_M$ and let there be no arc $a$ of $G_B$ with $\mu(a)=v_{i,j}v_{k,j}$ and $v_{i,j},v_{k,j}$ in any $V(G_{M_y})$ of $G_M$. If each row $R_x$ of $G$ that contains the vertex $v_{i,j}$ has the index $i$ and if each column $C_y$ of $G$ that contains the vertex $v_{i,j}$ has the index $j$, then $G\cong G/_{i=1}^{m}R_i\boxbackslash G/_{j=1}^{n}C_j$.
\end{theorem} \begin{figure}[H] \begin{center} \resizebox{0.95\textwidth}{!}{ \begin{tikzpicture}[->,>=latex,shorten >=0pt,auto,node distance=2.5cm, main node/.style={circle,fill=blue!10,draw, font=\sffamily\Large\bfseries} \tikzset{VertexStyle/.append style= font=\itshape\large, shape = circle,inner sep = 2pt, outer sep = 0pt,minimum size = 20 pt,draw}} \tikzset{EdgeStyle/.append style={thin}} \tikzset{LabelStyle/.append style={font = \itshape}} \SetVertexMath \def1.5{0.0} \def-8.0{1.0} \node at (1.5-1.5,3+-8.0+2) {$G$}; \node at (1.5-1.5,-8.0+10.8) {$R_1=C_1$}; \node at (1.5-1.5+19,-8.0-1.2) {$R_5=C_6$}; \node at (1.5+4.5,-8.0+8.4) {$R_2$}; \node at (1.5+1.5,-8.0+4.4) {$C_2$}; \node at (1.5+4.5,-8.0+5.6) {$R_3$}; \node at (1.5+5.5,-8.0+4.4) {$C_3$}; \node at (1.5+4.5,-8.0+3-0.4) {$R_4$}; \node at (1.5+9.5,-8.0+4.4) {$C_4$}; \node at (1.5+13.5,-8.0+4.4) {$C_5$}; \node at (1.5+0.5,-8.0-1-2) {$G/C_1^6$}; \node at (1.5-1.5,-8.0-5-2) {$G/R_1^5$}; \node at (1.5+2,-8.0-5-2) {$Z$}; \node at (1.5+15.5,-8.0-3.1-2) {$G/_{i=1}^5R_i\boxtimes G/_{j=1}^6C_i$}; \def1.5{0.5} \def-8.0{9.0+1} \Vertex[x=1.5+-2, y=-8.0+3.0,L={u_{1,1}}]{u_1} \Vertex[x=1.5+1.5, y=-8.0+0.0,L={u_{2,2}}]{u_2} \Vertex[x=1.5+1.5, y=-8.0-3.0,L={u_{3,2}}]{u_3} \Vertex[x=1.5+1.5, y=-8.0-6.0,L={u_{4,2}}]{u_4} \Vertex[x=1.5+5.5, y=-8.0+0.0,L={u_{2,3}}]{u_5} \Vertex[x=1.5+5.5, y=-8.0-3,L={u_{3,3}}]{u_6} \Vertex[x=1.5+5.5, y=-8.0-6,L={u_{4,3}}]{u_7} \Vertex[x=1.5+9.5, y=-8.0+0.0,L={u_{2,4}}]{u_8} \Vertex[x=1.5+9.5, y=-8.0-3.0,L={u_{3,4}}]{u_9} \Vertex[x=1.5+9.5, y=-8.0-6.0,L={u_{4,4}}]{u_10} \Vertex[x=1.5+13.5, y=-8.0+0,L={u_{2,5}}]{u_11} \Vertex[x=1.5+13.5, y=-8.0-3,L={u_{3,5}}]{u_12} \Vertex[x=1.5+13.5, y=-8.0-6,L={u_{4,5}}]{u_13} \Vertex[x=1.5+17, y=-8.0-9.0,L={u_{5,6}}]{u_14} \Edge[label = b](u_2)(u_3) \Edge[label = c](u_3)(u_4) \Edge[label = b](u_5)(u_6) \Edge[label = c](u_6)(u_7) \Edge[label = b](u_8)(u_9) \Edge[label = c](u_9)(u_10) \Edge[label = b](u_11)(u_12) \Edge[label = c](u_12)(u_13) \Edge[label = 
d](u_2)(u_5) \Edge[label = e](u_5)(u_8) \Edge[label = f](u_8)(u_11) \Edge[label = d](u_3)(u_6) \Edge[label = e](u_6)(u_9) \Edge[label = f](u_9)(u_12) \Edge[label = d](u_4)(u_7) \Edge[label = e](u_7)(u_10) \Edge[label = f](u_10)(u_13) \Edge[label = j](u_13)(u_14) \Edge(u_2)(u_3) \Edge(u_3)(u_4) \Edge(u_5)(u_6) \Edge(u_6)(u_7) \Edge(u_8)(u_9) \Edge(u_9)(u_10) \Edge(u_11)(u_12) \Edge(u_12)(u_13) \Edge(u_2)(u_5) \Edge(u_5)(u_8) \Edge(u_8)(u_11) \Edge(u_3)(u_6) \Edge(u_6)(u_9) \Edge(u_9)(u_12) \Edge(u_4)(u_7) \Edge(u_7)(u_10) \Edge(u_10)(u_13) \Edge(u_13)(u_14) \Edge[label = j, labelstyle={xshift=0pt, yshift=2pt}, style={in = -180, out = -45,min distance=1cm}](u_10)(u_14) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=-60,min distance=12cm}](u_1)(u_14) \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=-30,min distance=1cm}](u_1)(u_2) \Edge[label = a, labelstyle={xshift=-30pt, yshift=-5pt}, style={bend right=-30,min distance=1cm}](u_1)(u_3) \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=-45,min distance=1cm}](u_1)(u_5) \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=-45,min distance=2cm}](u_1)(u_6) \Edge[label = g, labelstyle={xshift=-16pt, yshift=-4pt}, style={in = 210, out=225,min distance=6cm}](u_2)(u_7) \def1.5{4.0} \def-8.0{-2.5} \Vertex[x=1.5+-2, y=-8.0+0.0,L={\tilde{x}''_1}]{u_1} \Vertex[x=1.5+1, y=-8.0+0.0,L={\tilde{x}''_2}]{s_1} \Vertex[x=1.5+4, y=-8.0+0.0,L={\tilde{x}''_3}]{s_2} \Vertex[x=1.5+7, y=-8.0+0.0,L={\tilde{x}''_4}]{s_3} \Vertex[x=1.5+10, y=-8.0+0.0,L={\tilde{x}''_5}]{s_4} \Vertex[x=1.5+13, y=-8.0+0.0,L={\tilde{x}''_6}]{u_14} \Edge[label = a](u_1)(s_1) \Edge[label = d](s_1)(s_2) \Edge[label = e](s_2)(s_3) \Edge[label = f](s_3)(s_4) \Edge[label = j](s_4)(u_14) \Edge[label = j, labelstyle={xshift=0pt, yshift=0pt}, style={in = -150, out = -30,min distance=1cm}](s_3)(u_14) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=-25,min 
distance=1cm}](u_1)(u_14) \Edge[label = a, labelstyle={xshift=0pt, yshift=0pt}, style={bend left=-25,min distance=1cm}](u_1)(s_2) \Edge[label = g, labelstyle={xshift=0pt, yshift=-0pt}, style={in = 150, out=30,min distance=1cm}](s_1)(s_2) \Edge[label = h, labelstyle={xshift=0pt, yshift=-0pt}, style={in = -150, out=-30,min distance=1cm}](s_2)(s_4) \Edge[style={in = 150, out=30,min distance=1cm}](s_1)(s_2) \Edge[style={in = -150, out=-30,min distance=1cm}](s_2)(s_4) \def1.5{-0.5} \def-8.0{-7.0} \Vertex[x=1.5+0, y=-8.0+0.0,L={\tilde{x}'_1}]{u_1} \Vertex[x=1.5+0, y=-8.0-3.0,L={\tilde{x}'_2}]{t_1} \Vertex[x=1.5+0, y=-8.0-6.0,L={\tilde{x}'_3}]{t_2} \Vertex[x=1.5+0, y=-8.0-9.0,L={\tilde{x}'_4}]{t_3} \Vertex[x=1.5+0, y=-8.0-12.0,L={\tilde{x}'_5}]{u_14} \Edge[label = a](u_1)(t_1) \Edge[label = b](t_1)(t_2) \Edge[label = c](t_2)(t_3) \Edge[label = j](t_3)(u_14) \Edge[label = i, labelstyle={xshift=-16pt, yshift=0pt}, style={bend right=25,min distance=1cm}](u_1)(u_14) \Edge[label = a, labelstyle={xshift=0pt, yshift=0pt}, style={bend right=25,min distance=1cm}](u_1)(t_2) \Edge[label = h, labelstyle={xshift=-16pt, yshift=-0pt}, style={in = 60, out=30,min distance=5.5cm}](u_5)(u_12) \Edge(u_1)(t_1) \Edge(t_1)(t_2) \Edge(t_2)(t_3) \Edge(t_3)(u_14) \Edge[style={in = 210, out=225,min distance=6cm}](u_2)(u_7) \Edge[style={in = 60, out=30,min distance=5.5cm}](u_5)(u_12) \Edge[label = g, labelstyle={xshift=0pt, yshift=-0pt}, style={in = 120, out=-120,min distance=1cm}](t_1)(t_3) \Edge[label = h, labelstyle={xshift=0pt, yshift=-0pt}, style={in = 60, out=-60,min distance=1cm}](t_1)(t_2) \Edge[style={in = 120, out=-120,min distance=1cm}](t_1)(t_3) \Edge[style={in = 60, out=-60,min distance=1cm}](t_1)(t_2) \tikzset{VertexStyle/.append style= font=\itshape\large,shape = rounded rectangle,inner sep = 0pt, outer sep = 0pt,minimum size = 20 pt,draw}} \def1.5{2.0} \def-8.0{-7.0} \Vertex[x=1.5+0.0, y=-8.0-0.0,L={(\tilde{x}'_1,\tilde{x}''_1)}]{u_1u_1} \Vertex[x=1.5+3.0, 
y=-8.0-0.0,L={(\tilde{x}'_1,\tilde{x}''_2)}]{u_1s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{x}'_1,\tilde{x}''_3)}]{u_1s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{x}'_1,\tilde{x}''_4)}]{u_1s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{x}'_1,\tilde{x}''_5)}]{u_1s_4} \Vertex[x=1.5+15.0, y=-8.0-0.0,L={(\tilde{x}'_1,\tilde{x}''_6)}]{u_1u_14} \def1.5{2.0} \def-8.0{-10.0} \Vertex[x=1.5+0.0, y=-8.0-0.0,L={(\tilde{x}'_2,\tilde{x}''_1)}]{t_1u_1} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{x}'_2,\tilde{x}''_2)}]{t_1s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{x}'_2,\tilde{x}''_3)}]{t_1s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{x}'_2,\tilde{x}''_4)}]{t_1s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{x}'_2,\tilde{x}''_5)}]{t_1s_4} \Vertex[x=1.5+15.0, y=-8.0-0.0,L={(\tilde{x}'_2,\tilde{x}''_6)}]{t_1u_14} \def1.5{2.0} \def-8.0{-13.0} \Vertex[x=1.5+0.0, y=-8.0-0.0,L={(\tilde{x}'_3,\tilde{x}''_1)}]{t_2u_1} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{x}'_3,\tilde{x}''_2)}]{t_2s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{x}'_3,\tilde{x}''_3)}]{t_2s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{x}'_3,\tilde{x}''_4)}]{t_2s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{x}'_3,\tilde{x}''_5)}]{t_2s_4} \Vertex[x=1.5+15.0, y=-8.0-0.0,L={(\tilde{x}'_3,\tilde{x}''_6)}]{t_2u_14} \def1.5{2.0} \def-8.0{-16.0} \Vertex[x=1.5+0.0, y=-8.0-0.0,L={(\tilde{x}'_4,\tilde{x}''_1)}]{t_3u_1} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{x}'_4,\tilde{x}''_2)}]{t_3s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{x}'_4,\tilde{x}''_3)}]{t_3s_2} \Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{x}'_4,\tilde{x}''_4)}]{t_3s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{x}'_4,\tilde{x}''_5)}]{t_3s_4} \Vertex[x=1.5+15.0, y=-8.0-0.0,L={(\tilde{x}'_4,\tilde{x}''_6)}]{t_3u_14} \def1.5{2.0} \def-8.0{-19.0} \Vertex[x=1.5+0.0, y=-8.0-0.0,L={(\tilde{x}'_5,\tilde{x}''_1)}]{u_14u_1} \Vertex[x=1.5+3.0, y=-8.0-0.0,L={(\tilde{x}'_5,\tilde{x}''_2)}]{u_14s_1} \Vertex[x=1.5+6.0, y=-8.0-0.0,L={(\tilde{x}'_5,\tilde{x}''_3)}]{u_14s_2} 
\Vertex[x=1.5+9.0, y=-8.0-0.0,L={(\tilde{x}'_5,\tilde{x}''_4)}]{u_14s_3} \Vertex[x=1.5+12.0, y=-8.0-0.0,L={(\tilde{x}'_5,\tilde{x}''_5)}]{u_14s_4} \Vertex[x=1.5+15.0, y=-8.0-0.0,L={(\tilde{x}'_5,\tilde{x}''_6)}]{u_14u_14} \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=10,min distance=1cm}](u_1u_1)(t_1s_1) \Edge[label = a, labelstyle={xshift=-30pt, yshift=-5pt}, style={bend right=-10,min distance=1cm}](u_1u_1)(t_1s_2) \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=10,min distance=1cm}](u_1u_1)(t_2s_1) \Edge[label = a, labelstyle={xshift=0pt, yshift=2pt}, style={bend right=-20,min distance=2cm}](u_1u_1)(t_2s_2) \Edge[label = j](t_3s_4)(u_14u_14) \Edge[label = d](u_1s_1)(u_1s_2) \Edge[label = e](u_1s_2)(u_1s_3) \Edge[label = f](u_1s_3)(u_1s_4) \Edge(t_3s_4)(u_14u_14) \Edge(u_1s_1)(u_1s_2) \Edge(u_1s_2)(u_1s_3) \Edge(u_1s_3)(u_1s_4) \Edge[label = d](t_1s_1)(t_1s_2) \Edge[label = e](t_1s_2)(t_1s_3) \Edge[label = f](t_1s_3)(t_1s_4) \Edge(t_1s_1)(t_1s_2) \Edge(t_1s_2)(t_1s_3) \Edge(t_1s_3)(t_1s_4) \Edge[label = d](t_2s_1)(t_2s_2) \Edge[label = e](t_2s_2)(t_2s_3) \Edge[label = f](t_2s_3)(t_2s_4) \Edge(t_2s_1)(t_2s_2) \Edge(t_2s_2)(t_2s_3) \Edge(t_2s_3)(t_2s_4) \Edge[label = d](t_3s_1)(t_3s_2) \Edge[label = e](t_3s_2)(t_3s_3) \Edge[label = f](t_3s_3)(t_3s_4) \Edge(t_3s_1)(t_3s_2) \Edge(t_3s_2)(t_3s_3) \Edge(t_3s_3)(t_3s_4) \Edge[label = d](u_14s_1)(u_14s_2) \Edge[label = e](u_14s_2)(u_14s_3) \Edge[label = f](u_14s_3)(u_14s_4) \Edge(u_14s_1)(u_14s_2) \Edge(u_14s_2)(u_14s_3) \Edge(u_14s_3)(u_14s_4) \Edge[label = b](t_1u_1)(t_2u_1) \Edge[label = b](t_1s_1)(t_2s_1) \Edge[label = b](t_1s_2)(t_2s_2) \Edge[label = b](t_1s_3)(t_2s_3) \Edge[label = b](t_1s_4)(t_2s_4) \Edge[label = b](t_1u_14)(t_2u_14) \Edge(t_1u_1)(t_2u_1) \Edge(t_1s_1)(t_2s_1) \Edge(t_1s_2)(t_2s_2) \Edge(t_1s_3)(t_2s_3) \Edge(t_1s_4)(t_2s_4) \Edge(t_1u_14)(t_2u_14) \Edge[label = c](t_2u_1)(t_3u_1) \Edge[label = c](t_2s_1)(t_3s_1) \Edge[label = c](t_2s_2)(t_3s_2) 
\Edge[label = c](t_2s_3)(t_3s_3) \Edge[label = c](t_2s_4)(t_3s_4) \Edge[label = c](t_2u_14)(t_3u_14) \Edge(t_2u_1)(t_3u_1) \Edge(t_2s_1)(t_3s_1) \Edge(t_2s_2)(t_3s_2) \Edge(t_2s_3)(t_3s_3) \Edge(t_2s_4)(t_3s_4) \Edge(t_2u_14)(t_3u_14) \Edge[label = j, labelstyle={xshift=0pt, yshift=0pt}, style={in = 165, out = -30,min distance=1cm}](t_3s_3)(u_14u_14) \Edge[label = i, labelstyle={xshift=0pt, yshift=2pt}, style={in = 70, out = 15,min distance=15.1cm}](u_1u_1)(u_14u_14) \Edge[label = g, labelstyle={xshift=0pt, yshift=-0pt}, style={in = 210, out=240,min distance=5cm}](t_1s_1)(t_3s_2) \Edge[label = h, labelstyle={xshift=0pt, yshift=-0pt}, style={in = 60, out=30,min distance=4.8cm}](t_1s_2)(t_2s_4) \Edge[style={in = 210, out=240,min distance=5cm}](t_1s_1)(t_3s_2) \Edge[style={in = 60, out=30,min distance=4.8cm}](t_1s_2)(t_2s_4) \def1.5{1.7} \def-8.0{8.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{5.7} \def-8.0{8.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{9.7} \def-8.0{8.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{13.7} \def-8.0{8.9+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.7) --(1.5+1.0,-8.0+0.7) -- (1.5+1.0,-8.0-6.6) -- (1.5-0.5,-8.0-6.6) -- (1.5-0.5,-8.0+0.0); \def1.5{1.7} \def-8.0{8.1+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+2.8,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+13.0,-8.0+1.5) --(1.5+13.0,-8.0) -- (1.5+2.8,-8.0+0.0); \def1.5{1.7} \def-8.0{5.3+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] 
(1.5+2.8,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+13.0,-8.0+1.5) --(1.5+13.0,-8.0) -- (1.5+2.8,-8.0+0.0); \def1.5{1.7} \def-8.0{3-0.7+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+2.8,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+13.0,-8.0+1.5) --(1.5+13.0,-8.0) -- (1.5+2.8,-8.0+0.0); \def1.5{17.2} \def-8.0{-0.8+1} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+1.0,-8.0+1.5) --(1.5+1.0,-8.0) -- (1.5+0.0,-8.0+0.0); \def1.5{-1.8} \def-8.0{-0.8+13} \draw[circle, -,dashed, very thick,rounded corners=18pt] (1.5+0.5,-8.0+0.0)--(1.5-0.5,-8.0+0.0)--(1.5-0.5,-8.0+1.5)--(1.5+1.0,-8.0+1.5) --(1.5+1.0,-8.0) -- (1.5+0.0,-8.0+0.0); \def1.5{0.5} \def-8.0{9.2+1} \def1.5{1.5} \def-8.0{-6.0} \draw[circle, -,dotted, very thick,rounded corners=8pt] (1.5-0.5,-8.0-0.5)--(1.5-0.5,-8.0-0.3) --(1.5+1.5,-8.0-0.3) -- (1.5+3.3,-8.0-2.0) -- (1.5+14,-8.0-2.0) -- (1.5+14,-8.0-4.7)--(1.5+14,-8.0-10)-- (1.5+16.8,-8.0-12.7) -- (1.5+16.8,-8.0-13.2) -- (1.5+16.5,-8.0-13.7) -- (1.5+15,-8.0-13.7) -- (1.5+11.75,-8.0-11.7)-- (1.5+1.5,-8.0-11.7)-- (1.5+1.5,-8.0-3.7)-- (1.5+0.5,-8.0-1.7)-- (1.5-0.5,-8.0-1.7) -- (1.5-0.5,-8.0-0.5); \def1.5{1.5} \def-8.0{-6.4} \draw[circle, -,dashed, very thick,rounded corners=8pt] (1.5+0.2,-8.0+2)--(1.5+17.4,-8.0+2) --(1.5+17.9,-8.0+1.5) -- (1.5+17.9,-8.0-13)-- (1.5+17.4,-8.0-13.5) -- (1.5-0.3,-8.0-13.5) -- (1.5-0.8,-8.0-13) -- (1.5-0.8,-8.0+1.5) -- (1.5-0.3,-8.0+2)--(1.5+0.1,-8.0+2); \end{tikzpicture} } \end{center} \caption{Decomposition of $G\cong G/_{i=1}^5R_i\boxbackslash G/_{j=1}^6C_i$. 
The set $Z$ from the proof of Theorem~\ref{theorem_7} and the graph isomorphic to $G$ induced by $Z$ in $G/_{i=1}^5R_i\boxtimes G/_{j=1}^6C_j$ are indicated within the dotted region (except for the arc with label $i$).} \label{FourthDecomposition} \end{figure} \begin{proof} It clearly suffices to define a mapping $\phi: V(G)\rightarrow V(G/_{i=1}^{m}R_i\boxbackslash G/_{j=1}^{n}C_j)$ and to prove that $\phi$ is an isomorphism from $G$ to $G/_{i=1}^{m}R_i\boxbackslash G/_{j=1}^{n}C_j$. Let $\tilde{x}'_i$ be the new vertex replacing the set $R_i$ with $v_{i,j}\in R_i$ and let $\tilde{x}''_j$ be the new vertex replacing the set $C_j$ with $v_{i,j}\in C_j$, when defining $G/_{i=1}^{m}R_i$ and $G/_{j=1}^{n}C_j$, respectively. Consider the mapping $\phi: V(G)\rightarrow V(G/_{i=1}^{m}R_i\boxbackslash G/_{j=1}^{n}C_j)$ defined by $\phi(v_{i,j})=(\tilde{x}'_i,\tilde{x}''_j)$ for all $v_{i,j}\in V(G)$. Then $\phi$ is obviously a bijection if $V(G/_{i=1}^{m}R_i\boxbackslash G/_{j=1}^{n}C_j)=Z$, where $Z$ is defined as $Z=\{(\tilde{x}'_i,\tilde{x}''_j)\mid \phi(v_{i,j})=(\tilde{x}'_i,\tilde{x}''_j), v_{i,j}\in V(G)\}$. We are going to show this later by arguing that all vertices of $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ other than the vertices of $Z$ will disappear from $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$. But first we are going to prove the following claim. \begin{claim}\label{claim4} The subgraph of $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ induced by $Z$ is isomorphic to $G$. \end{claim} \begin{proof} We start with proving that $i\neq j$ implies $\tilde{x}'_i\neq \tilde{x}'_j$ and that $i\neq j$ implies $\tilde{x}''_i\neq \tilde{x}''_j$.
Because $R_i$ is the row containing all vertices $v_{i,k}$ of $V(G)$ (by hypothesis), the vertices of $R_i$ are replaced by $\tilde{x}'_i$; because $R_j$ is the row containing all vertices $v_{j,k}$ of $V(G)$ (by hypothesis), the vertices of $R_j$ are replaced by $\tilde{x}'_j$. Since $R_i\cap R_j=\emptyset$ for $i\neq j$, it follows that $\tilde{x}'_i\neq \tilde{x}'_j$ for $i\neq j$. Likewise, because $C_i$ is the column containing all vertices $v_{k,i}$ of $V(G)$ (by hypothesis), the vertices of $C_i$ are replaced by $\tilde{x}''_i$; because $C_j$ is the column containing all vertices $v_{k,j}$ of $V(G)$ (by hypothesis), the vertices of $C_j$ are replaced by $\tilde{x}''_j$. Since $C_i\cap C_j=\emptyset$ for $i\neq j$, it follows that $\tilde{x}''_i\neq \tilde{x}''_j$ for $i\neq j$. Next, because all vertices $v_{i,j}$ are replaced by $\tilde{x}'_i$ in $G/_{i=1}^{m}R_i$ and all vertices $v_{i,j}$ are replaced by $\tilde{x}''_j$ in $G/_{j=1}^{n}C_j$, it follows that $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ contains $(\tilde{x}'_i,\tilde{x}''_j)$, with the result that there is a one-to-one correspondence between $v_{i,j}$ and $(\tilde{x}'_i,\tilde{x}''_j)$. It follows that $\phi:V(G)\rightarrow Z$ is a bijection. It remains to show that this bijection preserves the arcs and their labels. By hypothesis, the arcs of the rows of the subgraphs of $G_M$ are asynchronous with respect to the arcs of the columns of the subgraphs of $G_M$ and the arcs of the subgraphs of $G_M$ are asynchronous with respect to the arcs of the subgraphs of $G_B$. For each arc $a$ of $G$ with $\mu(a)=(v_{i,j},v_{k,l})$, $i\neq k$, there is an arc $b$ of $G/_{i=1}^mR_i$ with $\mu(b)=(\tilde{x}'_i,\tilde{x}'_k)$ and $\lambda(a)=\lambda(b)$ and for each arc $c$ of $G$ with $\mu(c)=(v_{i,j},v_{k,l})$, $j\neq l$, there is an arc $d$ of $G/_{j=1}^nC_j$ with $\mu(d)=(\tilde{x}''_j,\tilde{x}''_l)$ and $\lambda(c)=\lambda(d)$.
Because the arcs of each subgraph $G_{B_x}$ of $G_B$ are synchronous arcs, we have that if $a$ is a synchronous arc of $G_{B_x}$ then $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ contains an arc $d$ with $\mu(d)=((\tilde{x}'_i,\tilde{x}''_j),(\tilde{x}'_k,\tilde{x}''_l))$ and $\lambda(a)=\lambda(d)$. Because the arcs of the rows of each subgraph $G_{M_x}$ of $G_M$ are asynchronous arcs with respect to the arcs of the columns of $G_{M_x}$ (and vice versa), we have that if $a$ is such an asynchronous arc of a subgraph of $G_M$ then $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ contains an arc $d$ with $\mu(d)=((\tilde{x}'_i,\tilde{x}''_k),(\tilde{x}'_j,\tilde{x}''_k))$ and $\lambda(a)=\lambda(d)$ or an arc $d$ with $\mu(d)=((\tilde{x}'_i,\tilde{x}''_k),(\tilde{x}'_i,\tilde{x}''_l))$ and $\lambda(a)=\lambda(d)$. Because $G$ consists of subgraphs of $G_M$ and $G_B$ only, there are no other arcs $a$ of $G$. Therefore, the subgraph of $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ induced by $Z$ is isomorphic to $G$. \end{proof} We continue with the proof of Theorem~\ref{theorem_7}. It remains to show that all other vertices of $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$, except for the vertices of $Z$, disappear from $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$. First, we observe that all vertices of $Z$ are of the type $(\tilde{x}'_i,\tilde{x}''_j)$. Therefore, it suffices to show that vertices of the types $(\tilde{x}'_i,v_j)$, $(v_i,\tilde{x}''_j)$ and $(v_i,v_j)$ do not exist in $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ and that the vertices $(\tilde{x}'_i, \tilde{x}''_j)$ of $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ that are not in $Z$ will disappear from $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$.
Because all vertices $v_{i,j}$ of $G$ are in $R_{i}$, the set of vertices $\{v_{i,j}\}$ is replaced by the vertex $\tilde{x}'_i$, and therefore $v_{i,j}$ does not exist in $G/_{i=1}^{m}R_i$; likewise, because all vertices $v_{i,j}$ of $G$ are in $C_{j}$, the set of vertices $\{v_{i,j}\}$ is replaced by the vertex $\tilde{x}''_j$, and therefore $v_{i,j}$ does not exist in $G/_{j=1}^{n}C_j$. Hence, by definition of the Cartesian product, vertices of the types $(\tilde{x}'_i,v_j)$, $(v_i,\tilde{x}''_i)$ and $(v_i,v_j)$ do not exist in $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$. By definition of the VRSP, if a vertex $(\tilde{x}'_i, \tilde{x}''_j)\notin Z$ has $level~0$ in $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$, then $(\tilde{x}'_i, \tilde{x}''_j)$ is removed from $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$. This follows directly from $\phi$ mapping the source of $G$ onto the source of the graph induced by $Z$. Therefore, assume $(\tilde{x}'_k, \tilde{x}''_l)\notin Z$ has $level>0$ in $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$. For a vertex $(\tilde{x}'_k, \tilde{x}''_l)\notin Z$ to have $level>0$ in $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$, there must be an arc $a$ in $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ with $\mu(a)=((\tilde{x}'_i, \tilde{x}''_j),(\tilde{x}'_k, \tilde{x}''_l))$ with either $(\tilde{x}'_i, \tilde{x}''_j)\in Z$ or $(\tilde{x}'_i, \tilde{x}''_j)\notin Z$. In the case that $(\tilde{x}'_i, \tilde{x}''_j)\notin Z$, we can recursively backtrack the paths until we reach a vertex $(\tilde{x}'_i, \tilde{x}''_j)\in Z$ or a vertex $(\tilde{x}'_i, \tilde{x}''_j)\notin Z$ with $level~0$. In the former case, the arc $a$ cannot exist, because otherwise $a$ corresponds to an arc $b$ in $A(G_{M_x})$ or to an arc $b$ in $A(G_{B_x})$ with $\mu(b)=(v_{i,j},v_{k,l})$ and $\lambda(a)=\lambda(b)$. 
But such an arc $b$ cannot exist, because for such an arc $b$ there exists an arc $c$ in $G/_{i=1}^{m}R_i$ with $\mu(c)=(\tilde{x}'_i,\tilde{x}'_k)$ and $\lambda(b)=\lambda(c)$, and there exists an arc $d$ in $G/_{j=1}^{n}C_j$ with $\mu(d)=(\tilde{x}''_j,\tilde{x}''_l)$ and $\lambda(b)=\lambda(c)=\lambda(d)$. Therefore, there exists an arc $e$ with $\mu(e) = ((\tilde{x}'_i,\tilde{x}''_j), (\tilde{x}'_k,\tilde{x}''_l))$ and $\lambda(e)=\lambda(a)$ in the graph induced by $Z$. This contradicts the assumption $(\tilde{x}'_i, \tilde{x}''_j)\notin Z$. In the latter case, the vertex $(\tilde{x}'_i, \tilde{x}''_j)$ is removed from $G/_{i=1}^{m}R_i\boxtimes G/_{j=1}^{n}C_j$ together with the arc $a$ with $\mu(a)=((\tilde{x}'_i, \tilde{x}''_j),(\tilde{x}'_{i_1}, \tilde{x}''_{j_1}))$, recursively, until the arc $a'$ with $\mu(a')=((\tilde{x}'_{i_n}, \tilde{x}''_{j_n}),(\tilde{x}'_{k}, \tilde{x}''_{l}))$ is removed. This completes the proof of Theorem~\ref{theorem_7}. \end{proof} \section{Future work} In this paper, we believe that we have supplied all the ingredients needed to decompose a labelled acyclic directed multigraph with respect to the VRSP. Based on Theorems~\ref{theorem_1},~\ref{theorem_2},~\ref{theorem_5},~\ref{theorem_6}~and~\ref{theorem_7}, we believe that graphs that cannot be decomposed by any of these theorems must be prime graphs with respect to the VRSP. However, this remains to be proved in future work.
\section{Introduction} The idea of using quantum dots (QDs) for quantum computer implementations follows from the possibility of a clear selection of a two level system, on which a qubit can be realized \cite{loss98}. To this end, both charge and spin states of confined carriers are employed, where the latter is preferable, since spin states are generally more resistant to decoherence processes. Moreover, it is possible to exploit the charge evolution dependent on spin (via selection rules and Pauli exclusion principle) in order to manipulate the spin by optical means \cite{imamoglu99,pazy03a,calarco03,gauger08} on picosecond time scales, that is, much faster than previously proposed magnetic or electrical control. Many spin control schemes in such hybrid systems which use off-resonant interband excitations together with STIRAP processes \cite{troiani03}, adiabatic \cite{chen04} and fast \cite{economou06,economou07} evolution within trapped states in $\Lambda$ or four-level \cite{emary07a} systems have been proposed. These hybrid systems are considered now as the most promising candidates for QD-based quantum computers since during the millisecond spin decoherence time \cite{khaetskii01} it is possible to perform about $10^9$ optical quantum gates. Optical rotation of a single spin performed via picosecond laser pulses with the optical Stark effect as the operative mechanism was recently experimentally demonstrated \cite{berezovsky08}. This pioneering experiment showed that fast optical spin control is feasible and the current task is to thoroughly study the decoherence mechanisms that limit the fidelity of the achieved quantum control. The fundamental question is whether the spin degrees of freedom are indeed affected by decoherence mechanisms to a smaller degree than the charge ones and what constitutes their main dephasing channel. 
In this paper, we show that the spin state of a confined carrier can undergo dephasing even in the absence of spin-reservoir coupling if the spin rotation is achieved by a conditional evolution induced on the orbital degrees of freedom, as is the case in an optical control scheme. Although the dynamical details of this dephasing process depend on the specific implementation, the fundamental idea of the indirect dephasing can be understood with the help of a ``generic model'' of a three-component system: the carrier spin, its orbital state, and the reservoir. We show that this additional decoherence channel acts on timescales comparable to, or even shorter than, those of the spin precession and trion decay during the optical manipulation. Thus, it may constitute the main source of imperfections of the optical spin rotations. This shows that phonon-induced dephasing should be included in the analysis of optical spin control schemes even though the commonly studied decoherence mechanism related to the material-dependent spin-orbit coupling leads to very small errors for short gates \cite{khaetskii01} and indeed can be neglected. The paper is organized as follows. In Sec.~\ref{sec:indirect}, a generic model describing the indirect spin dephasing is introduced. Next, in Sec.~\ref{sec:model}, we present the model for the specific optical spin control protocol in a single QD. Section \ref{sec:error} describes decoherence processes resulting from carrier-phonon coupling. Section \ref{sec:concl} concludes the paper with final remarks. \section{Indirect dephasing}\label{sec:indirect} The idea of optical spin rotation is based on a spin-dependent evolution of the charge, which finally brings it back to the original state, up to an additional phase accumulated during the evolution. Let the initial state be $|\psi(t_{0})\rangle = (\alpha|0\rangle_{\mathrm s} + \beta |1\rangle_{\mathrm s}) \otimes |0\rangle_{\mathrm c}$, where the components refer to spin (s) and charge (c) states, respectively. 
The ideal evolution then has the form: \begin{equation*} |\psi_{\mathrm{id}}(t)\rangle = \alpha |0\rangle_{\mathrm s} \otimes |0\rangle_{\mathrm c} + \beta |1\rangle_{\mathrm s} \otimes \left[\eta(t) |0\rangle_{\mathrm c} + \xi(t)|1\rangle_{\mathrm c}\right], \end{equation*} where, at the final time $t_{1}$, $\eta(t_{1})=e^{i\phi}$ and $\xi(t_{1})=0$. Typically, the occupation of the excited charge state is kept small, $|\xi(t)|\ll|\eta(t)|$. This evolution realizes a rotation of the spin by an angle $\phi$ around the axis defined by the states $|0\rangle_{\mathrm s},|1\rangle_{\mathrm s}$, which may be selected at will using selection rules and appropriate pulse phases and polarizations. While the interaction between the spin and the environment is very weak, there is much stronger scattering of the reservoir quanta on the charge excitation, inducing, in a static situation, the usual phase damping channel on the charge subsystem. In the present case, when the charge state performs a conditional loop in its Hilbert space, the transient occupation of the excited charge state leads to the accumulated scattering amplitude (in the leading order in $\xi$ and $\epsilon$), \begin{equation*} w = i \epsilon\int_{t_{0}}^{t_{1}}dt |\xi(t)|^{2}, \end{equation*} where $|\epsilon|^{2}$ is proportional to the scattering rate and we assume that the reservoir quanta are non-resonant with the transitions between the charge states (otherwise, additional leakage out of the computational subspace appears). The final state of the three-component system is therefore \begin{eqnarray*} |\psi_{\mathrm{ac}}(t_1)\rangle &=& \alpha|0\rangle_{\mathrm s} \otimes |0\rangle_{\mathrm c} \otimes |0\rangle_{\mathrm e} \\ && + e^{i\phi} \beta |1\rangle_{\mathrm s} \otimes |0\rangle_{\mathrm c} \otimes (\sqrt{1-|w|^{2}}|0\rangle_{\mathrm e} + w|1\rangle_{\mathrm e}), \end{eqnarray*} where the last component (e) represents the environment states. 
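The magnitude of $w$, and hence the strength of the induced dephasing, can be estimated by simple quadrature of the expression for $w$ above. The following is a toy numeric sketch; the pulse shape $\xi(t)$ and the parameter values are purely illustrative and not taken from the paper:

```python
import math

def accumulated_amplitude(xi, eps, t0, t1, steps=2000):
    """Midpoint-rule estimate of w = i*eps * Integral_{t0}^{t1} |xi(t)|^2 dt,
    the accumulated scattering amplitude of the generic model."""
    dt = (t1 - t0) / steps
    area = sum(abs(xi(t0 + (k + 0.5) * dt)) ** 2 for k in range(steps)) * dt
    return complex(0.0, eps * area)

# Hypothetical excited-charge occupation amplitude during the loop:
# a small sech-shaped transient with |xi(t)| << 1 throughout.
xi = lambda t: 0.1 / math.cosh(t)

w = accumulated_amplitude(xi, eps=0.5, t0=-10.0, t1=10.0)
# The spin coherence (off-diagonal density-matrix element) is multiplied by
coherence_factor = math.sqrt(1.0 - abs(w) ** 2)
```

Keeping the transient occupation $|\xi(t)|$ small keeps $|w|\ll 1$ and the coherence factor close to one, which is the quantitative content of the statement that the charge performs a conditional loop with only weak reservoir scattering.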
Thus, the charge state separates, but the spin state becomes entangled with the environment. Tracing out the charge and environment degrees of freedom, one arrives at the operator sum representation for the effect of the imperfect rotation on the spin state, \begin{equation*} \rho_{\mathrm{ac}} = \sum_{\mu=0}^{1}M_{\mu}\rho_{\mathrm{id}} M_{\mu} \end{equation*} with $M_{0}=|0\rangle_{\mathrm{ss}}\langle 0|+\sqrt{1-|w|^{2}}|1\rangle_{\mathrm{ss}}\langle 1|$, $M_{1}=|w||1\rangle_{\mathrm{ss}}\langle 1|$. In this way, the coupling between the orbital degrees of freedom and the reservoir has induced an indirect phase damping channel on the spin qubit (in the gate-dependent basis $|0\rangle_{\mathrm s},|1\rangle_{\mathrm s}$), analogous to the indirect measurement scheme \cite{breuer02} with the spin, charge and environment playing the roles of the quantum object, quantum probe and measurement device, respectively. In the following, we study in detail the indirect dephasing process for a specific optical spin control protocol \cite{economou07}, including the microscopic description of the interaction between charges and their phonon reservoir as well as the non-Markovian nature of the latter. We show that this dephasing process leads to considerable errors, much larger than those induced by the spin-orbit coupling or hyperfine interaction over the relatively short gate duration on the picosecond timescale. \section{Model system}\label{sec:model} The considered system consists of a single QD doped with one electron. A magnetic field is applied in the $x$ direction (Voigt configuration) and generates a Zeeman splitting $2\omega_e$ between the two electron spin states $|\bar x\rangle$ and $|x\rangle$ with fixed spin projection on the $x$ axis equal to $-1/2$ and $+1/2$, respectively. Analogously, the trion spin states $|T_{\bar x}\rangle$ and $|T_{x}\rangle$ are split by $2\omega_h$. 
These states are linear combinations of the electron ($|\bar z\rangle$, $|z\rangle$) and trion ($|\bar T\rangle$, $|T\rangle$) spin states along the growth and optical axis $z$. Depending on the light polarization, rotations about different axes are accomplished. As shown in Ref.~\onlinecite{economou06}, a rotation about the $z$ axis is performed with off-resonant circularly polarized light which, according to selection rules, couples the two spin states to only one trion state. Thus, we deal with an evolution of a three-level $\Lambda$ system (see Fig.~\ref{fig:lambda}). The control Hamiltonian, including free carrier part and carrier-light interaction, reads \begin{eqnarray*} H_{\mathrm C} & = & \omega_e (|z\rangle\!\langle \bar z| + |\bar z\rangle\!\langle z|) + \epsilon_T |T\rangle\!\langle T|\\ && + \Omega_z(t) \left(e^{i\omega_z t} |z\rangle\!\langle T| + \mathrm{H.c.}\right), \end{eqnarray*} where the laser pulse couples only the one spin state $|z\rangle$ and a trion state $|T\rangle$, whereas the orthogonal spin state $|\bar z\rangle$ is indirectly coupled via the magnetic field. After a passage of a $2\pi$ sech pulse, $\Omega_z(t) = \Omega_z \mathrm{sech}(\sigma_z t)$, the state acquires a phase, which, in consequence, leads to a spin rotation. The angle of rotation, $\phi_{z} = 2 \arctan(\sigma_{z}/\Delta_z)$, is defined via the laser bandwidth $\sigma_{z}$ and detuning of the laser from the transition energy $\Delta_z = \epsilon_T - \omega_z$. No population transfer to a trion state is possible for $\sigma_z = \Omega_z$. The approximation made in this scheme requires that the spin is considered to be frozen during the pulse, i.e. $\sigma_z \gg \gamma$, where $\gamma = 2(\omega_e+\omega_h)$, which from the beginning imposes a limitation on driving conditions (short pulse durations especially for large Zeeman splittings). 
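The dependence of the rotation angle on the pulse parameters, $\phi_{z} = 2\arctan(\sigma_{z}/\Delta_z)$, is easy to verify numerically; in particular, equal bandwidth and detuning yield a $\pi/2$ rotation, which is the driving condition used in the figures below. A short check:

```python
import math

def rotation_angle(sigma_z, delta_z):
    """Spin rotation angle about z after a 2*pi sech pulse:
    phi_z = 2 * arctan(sigma_z / delta_z)."""
    return 2.0 * math.atan(sigma_z / delta_z)

# sigma_z = delta_z (e.g. 2.6 meV, as for the pi/2 rotation in the text)
phi = rotation_angle(2.6, 2.6)        # pi/2
# Increasing the detuning at fixed bandwidth shrinks the rotation angle.
phi_small = rotation_angle(2.6, 26.0)
```

This makes explicit the trade-off discussed later: large angles at fixed detuning require large bandwidths, which conflicts with the frozen-spin condition $\sigma_z \gg \gamma$ only weakly but strongly affects the phonon response.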
\begin{figure}[tb] \unitlength 1mm {\resizebox{50mm}{!}{\includegraphics{3levels2.eps}}} \caption{\label{fig:lambda} $\Lambda$ system in a single quantum dot.} \end{figure} The free phonon Hamiltonian has the form $H_{\mathrm{ph}} = \sum_{{\bm k}} \hbar \omega_{\bm k}^{\phantom{\dagger}} \beta_{\bm k}^\dagger \beta_{\bm k}^{\phantom{\dagger}}$, where ${\bm k}$ is the phonon wave number and $\beta_{\bm k}^\dagger$ ($\beta_{\bm k}^{\phantom{\dagger}}$) is a phonon creation (annihilation) operator with corresponding frequencies $\omega_{\bm k}$. The Hamiltonian describing the interaction of the carriers with phonons reads \begin{equation*} H_{\mathrm{c-ph}} = \sum_{n,n'} |n \rangle\!\langle n'| \sum_{{\bm k}} f_{nn'}({\bm k}) (\beta_{\bm k}^{\phantom{\dagger}} + \beta_{-{\bm k}}^{\dagger}), \end{equation*} where $f_{nn'}({\bm k})$ are coupling elements and $n=z, \bar z, T$, and $\bar T$. The off-diagonal elements can be neglected due to energetic reasons and low efficiency of direct phonon-assisted spin-flip processes. Moreover, $f_{zz}({\bm k})=f_{\bar z \bar z}({\bm k})$ since the orbital wave functions are the same. Before the pulse is switched on, the lattice is already in a new dressed equilibrium state \cite{machnikowski07a} due to doping with one electron, and the phonon modes can be redefined in terms of new operators $b_{\bm k} = \beta_{\bm k} + f_{zz}({\bm k})/(\hbar \omega_{\bm k})$. In the strong confinement regime, a trion state can be written in a product form of electron and hole states. 
The resulting carrier-phonon Hamiltonian is \begin{equation*} H_{\mathrm{c-ph}} = |T\rangle\!\langle T| \sum_{{\bm k}} F_{TT} ({\bm k}) \left( b_{\bm k}^{\phantom{\dagger}} + b_{-{\bm k}}^{\dagger} \right) \end{equation*} with the following deformation potential coupling element between a trion and the phonon environment~\cite{grodecka07} \begin{equation*} F_{TT}({\bm k}) = f_{TT}({\bm k}) - f_{zz}({\bm k}) = \sqrt{\frac{\hbar k}{2\rho V c_l}} (D_e - D_h) {\cal F} ({\bm k}). \end{equation*} Here, $\rho = 5360$~kg/m$^3$ is the crystal density, $V$ is the normalization volume of the phonon modes, $c_l = 5150$~m/s is the longitudinal speed of sound, ${\cal F} ({\bm k})$ is the form factor reflecting the geometrical properties of the wave functions \cite{grodecka08}, and $D_e$ ($D_h$) is the deformation potential constant for electrons (holes), where $D_e - D_h = 8$~eV. These parameters correspond to self-assembled InAs/GaAs quantum dots with the electron and hole confinement in-plane equal to $4$~nm and in growth direction $1$~nm. \begin{figure}[tb] \unitlength 1mm {\resizebox{90mm}{!}{\includegraphics{ph-den-c.eps}}} \caption{\label{fig:spec} Phonon spectral density $R(\omega)$ at two temperatures and two spectral characteristics of the driving $s_1(\omega)$ and $s_2(\omega)$ for $\pi/2$ rotation about the $z$ axis with detuning and pulse bandwidth $\sigma_z = \Delta_z = 2.6$~meV.} \end{figure} \section{Phonon-induced decoherence}\label{sec:error} To measure the quality of the operation on a qubit we use the \textit{error} of the quantum gate, $\delta = 1-F^2$, defined as the loss of fidelity $F$. The error is a difference between the ideal final state (without decoherence) and the actually achieved one including the coupling to environment. Here, we consider the interaction with phonon environment, however, the trion radiative coupling (carrier-photon interaction) can be described in the same manner. 
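Before evaluating the error, it is instructive to check the energy scale of the relevant acoustic phonons implied by the material parameters above: modes with wavelength comparable to the in-plane confinement length couple most strongly, at energy roughly $\hbar c_l/l$. A back-of-envelope estimate (order of magnitude only, using the values quoted in the text):

```python
# Acoustic-phonon energy scale set by the confinement length:
# E ~ hbar * c_l / l, with c_l and l taken from the text.
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # J per eV

c_l = 5150.0   # longitudinal speed of sound, m/s
l = 4e-9       # in-plane confinement length, m

E_meV = HBAR * c_l / l / E_CHARGE * 1e3   # roughly 0.85 meV
```

The resulting sub-meV to meV scale is consistent with the structure of the phonon spectral density $R(\omega)$ and with the position of the error maxima discussed below.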
The effect of the interaction with the phonon reservoir is calculated via the second order Born expansion of the density matrix evolution equation (for details, see Ref.~\onlinecite{grodecka07}). The interaction with light is included exactly and coupling to phonons is treated within a non-Markovian perturbation theory. As a result, one can write the error of the quantum gate as an overlap between two spectral functions reflecting the properties of the two above interactions, \begin{equation*} \delta = \int d\omega R(\omega) S(\omega). \end{equation*} Here, \begin{equation*} R(\omega) = \frac{n_B+1}{\hbar^2} \sum_{\bm k} |F_{TT}({\bm k})|^2 [\delta(\omega - \omega_{\bm k}) + \delta(\omega + \omega_{\bm k})] \end{equation*} is the phonon spectral density representing phonon emission ($\omega>0$) and absorption ($\omega<0$, nonzero only at finite temperature) processes (see Fig.~\ref{fig:spec}). \begin{figure}[tb] \unitlength 1mm {\resizebox{80mm}{!}{\includegraphics{zpi05-c.eps}}} \caption{\label{fig:pi0.5} Phonon-induced error contribution due to (a) pure dephasing and (b) phonon-assisted trion generation during the $\pi/2$ rotation about the $z$ axis.} \end{figure} The spectral characteristics of the driving, $S(\omega)$, has as many contributions as the dimension of the orthogonal complement of the initial state. In the case of $z$ rotation, there are two contributions, $S(\omega) = s_1(\omega) + s_2(\omega)$ reflecting two phonon-induced decoherence channels. One represents pure dephasing mechanism and reads \begin{equation*} s_{1}(\omega) = \frac{1}{4} \sin^{2}\vartheta \left| \int dt \; e^{-i\omega t} \; \left|-\frac{i}{c^*} \xi^{c^*} (1-\xi)^c \right|^2 \right|^{2}, \end{equation*} where $c = (1+i\Delta_z/\sigma_z)/2$ and the time dependence is enclosed in $\xi(t) = [ \tanh(\sigma_{z}t) +1 ]/2$. This function is always centered at $\omega=0$ (Fig.~\ref{fig:spec}) and its width grows with growing pulse bandwidth and detuning. 
This results from the fact that the dynamical errors depend on the evolution speed, i.e., for a given pulse duration only some phonon modes can follow the evolution adiabatically, whereas the others relax, contributing to dephasing. The same applies to the second spectral function \begin{equation*} s_2(\omega) = \cos^{2}\frac{\vartheta}{2} \left| \int dt \; e^{-i\omega t} \; \frac{(-i)}{c^*} \xi^{c^*} (1-\xi)^c \left( 1 - \frac{\xi}{c^*} \right) \right|^{2}, \end{equation*} but the center of this function is shifted to a negative frequency around the detuning value, $\omega \approx -\Delta_z$. This contribution represents a real transition and constitutes a decoherence channel referred to as phonon-assisted trion generation. The resulting phonon-induced errors during a $\pi/2$ rotation about the $z$ axis, averaged over all initial spin states, are plotted in Fig.~\ref{fig:pi0.5} as functions of detuning and bandwidth (in this case, $\Delta_z = \sigma_z$) at four different temperatures $T$. The first contribution to the error, resulting from pure dephasing effects [Fig.~\ref{fig:pi0.5}(a)], initially grows with growing detuning and pulse bandwidth. For small pulse bandwidths, the evolution is very slow and the relevant function $s_1(\omega)$ is extremely narrow, covering only the vanishing part of the phonon density at $\omega\approx 0$. Thus, the phonons are able to adiabatically follow the change of the charge distribution and, as a result, the decoherence is reduced. Unfortunately, the proposed schemes usually require bandwidths much larger than the Zeeman splitting, so one cannot use this small-error bandwidth sector. This error contribution reaches its maximum value for $\Delta_z = \sigma_z \approx 1.5$~meV at all temperatures, where the pure dephasing effects are most efficient [$s_1(\omega)$ is broad and covers the whole spectrum of phonons]. 
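The structure of the gate error as a spectral overlap, $\delta = \int d\omega\, R(\omega) S(\omega)$, can be illustrated by simple quadrature. The sketch below uses hypothetical stand-in shapes (a super-Ohmic-like density with a cut-off and a Gaussian driving characteristic), not the microscopic forms of the text, to show how the error grows as the driving spectrum broadens into the phonon density:

```python
import math

def overlap_error(R, S, w_min, w_max, steps=4000):
    """Midpoint-rule quadrature for delta = Integral R(w) * S(w) dw."""
    dw = (w_max - w_min) / steps
    return sum(R(w_min + (k + 0.5) * dw) * S(w_min + (k + 0.5) * dw)
               for k in range(steps)) * dw

# Hypothetical stand-ins: phonon density ~ w^3 with a Gaussian cut-off
# (zero-temperature, emission side only), driving spectrum of width sigma.
R = lambda w: (w ** 3) * math.exp(-(w / 2.0) ** 2) if w > 0 else 0.0
def S(w, sigma):
    return math.exp(-(w / sigma) ** 2)

delta_narrow = overlap_error(R, lambda w: S(w, 0.2), 0.0, 20.0)  # slow driving
delta_broad = overlap_error(R, lambda w: S(w, 3.0), 0.0, 20.0)   # fast driving
# A narrow S(w) overlaps only the vanishing low-frequency tail of R(w),
# so the slow-driving error is much smaller than the fast-driving one.
```

This reproduces qualitatively the trend described above: adiabatically slow driving suppresses the pure-dephasing error, while bandwidths comparable to the phonon density maximum give the largest errors.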
The second error due to phonon-assisted transitions to the trion state is plotted in Fig.~\ref{fig:pi0.5}(b). In this case, the temperature dependence is stronger, since the spectral characteristics is centered at the negative frequency part of the phonon spectral density, which is strongly temperature dependent. Even for small bandwidths at a relatively low temperature $T=1$~K, the error is larger than $10^{-4}$. At each temperature, the maximum error is reached for the detuning corresponding to the maximal value of the phonon density. The error diminishes for large detunings ($>50$~meV) after the spectral characteristics reaches the phonon cut-off, where the one-phonon processes are not efficient. \begin{figure}[tb] \unitlength 1mm {\resizebox{85mm}{!}{\includegraphics{mzpi05-c.eps}}} \caption{\label{fig:pi0.5tot} Total phonon-induced error for (a) positive and (b) negative detuning during the $\pi/2$ rotation about the $z$ axis.} \end{figure} The total phonon-induced error during the $\pi/2$ rotation about the growth direction $z$ is plotted in Fig.~\ref{fig:pi0.5tot}(a). To guarantee coherent control and reach small errors, one needs either very small values of detunings and pulse bandwidth or very large ones of a few tens of meV. Taking into account the bandwidth limitation for a typical Zeeman splitting of $0.1$~meV, the available parameters lead to large gate errors even at zero temperature. The only way to obtain the desired small errors is to use very large detunings and short pulse durations. However, under such conditions, many other decoherence channels like resonant and off-resonant transitions to higher states or interaction with optical phonons are likely to appear. Moreover, this can lead to experimental difficulties, since large detunings require very strong pulses. In order to perform a rotation about an arbitrary axis, rotations about two orthogonal axes, e.g.\ $z$ and $x$, are needed, which may require detunings above the energy gap. 
This leads to much larger phonon-induced errors, since emission processes become very important here. The total phonon-induced error for negative detunings is plotted in Fig.~\ref{fig:pi0.5tot}(b). Now, the spectral characteristics $s_2(\omega)$ responsible for phonon-assisted trion generation is centered at positive frequencies, where the phonon spectral density has much larger values especially at low temperature. In this case, the errors are up to two orders of magnitude larger in comparison with those for positive detunings. For experimentally reasonable values of detunings and pulse bandwidth, the error is always larger than $10^{-2}$ and has the maximal value of $\approx 10^{-1}$ for $\Delta_z = \sigma_z \approx 1$~meV. \begin{figure}[t] \unitlength 1mm {\resizebox{70mm}{!}{\includegraphics{4levels2.eps}}} \caption{\label{fig:4lev} $4-$level system in a single quantum dot.} \end{figure} \begin{figure}[b] \unitlength 1mm {\resizebox{70mm}{!}{\includegraphics{pi05x-c.eps}}} \caption{\label{fig:xrot} Total phonon-induced error for the $\pi/2$ rotation about the $x$ axis.} \end{figure} The spin rotation about the $x$ axis is realized via linearly polarized $\pi_{x}$ pulse. The relevant control Hamiltonian is: \begin{eqnarray*} H_{{\rm C}x} & = & \omega_e (|x\rangle\!\langle x| - |\bar x\rangle\!\langle \bar x|) + \epsilon_T^{(-)} |T_x\rangle\!\langle T_x| + \epsilon_T^{(+)} |T_{\bar x}\rangle\!\langle T_{\bar x}| \\ && + \left[ \Omega_x(t)e^{-i\omega_x t} (|x\rangle\!\langle T_x| + |\bar x\rangle\!\langle T_{\bar x}|) + \mathrm{H.c.} \right], \end{eqnarray*} where $\epsilon_T^{(\pm)} = \epsilon_T \pm \omega_h$. In this case, all four levels participate in the evolution (see Fig.~\ref{fig:4lev}). In consequence, there are two paths for phonon-assisted trion generation with two different detunings. The resulting total phonon-induced gate error for the $\pi/2$ rotation about the $x$ axis is shown in Fig.~\ref{fig:xrot}. 
One can see that, already at $T=1$~K, the error is always larger than $10^{-4}$ and grows with growing bandwidth. Adding the individual errors, it is possible to estimate the error of an arbitrary spin rotation. As we already discussed, even for a rotation about one of the axes, it is impossible to find driving conditions leading to errors smaller than $10^{-4}$, so the situation for the $y$ rotation is even worse. Moreover, the calculated errors for single-qubit gates provide an estimate for the two-qubit spin gates employing, for instance, the electron-hole exchange interaction in coupled QDs \cite{economou08}. \section{Discussion and conclusions}\label{sec:concl} It has been shown that even in the absence of direct spin-reservoir coupling, the spin state of a confined carrier is exposed to indirect dephasing through the entangling optically induced charge evolution. We have proposed a model for this indirect decoherence channel consisting of three components: spin, charge, and environment. As an illustration, the optical spin manipulation in a single doped quantum dot has been considered. It was shown that optical driving of such a system leads to a strong dynamical response of the lattice and to strong indirect dynamical phonon-induced decoherence channels for the spin degrees of freedom. Finally, we compare the considered optical spin control proposal with two previous schemes \cite{troiani03,chen04}. All of them use single or double quantum dots doped with one additional electron and the excitation of an intermediate trion state. One of them \cite{troiani03} makes use of stimulated Raman adiabatic passage (STIRAP) and is implemented in a double quantum dot. The main limitations of this proposal are the slow adiabatic evolution requirement and the necessity of electron transfer between the two QDs and of a delocalized hole state. The second one \cite{chen04} is implemented in a single QD so that the two latter constraints are overcome. 
However, the evolution still has to be adiabatic. The proposal considered in this paper overcomes all the limitations discussed above, since the optical rotation is performed by means of fast laser pulses. However, this leads to larger phonon-induced errors, $\delta > 10^{-3}$, even at low temperature, whereas the errors in the case of adiabatic evolution \cite{roszak05b,grodecka07} are at least one order of magnitude smaller, $\delta < 10^{-4}$. On the other hand, the fast evolution leads to smaller errors resulting from the carrier-photon interaction. The trion state is excited only for a short moment, so the probability of its radiative decay is low \cite{grodecka07}. All in all, the fast optical spin rotation analyzed here possesses many advantages in comparison with the other two proposals; however, the dynamical phonon-induced indirect spin dephasing is in this case much stronger. The phonon-induced decoherence processes may in many cases constitute the dominant source of errors, since they are much more efficient than those due to the phonon-assisted spin-orbit mechanism and up to two orders of magnitude larger than the errors resulting from trion radiative decay \cite{grodecka07}. Moreover, these dynamical phonon-induced processes are most efficient precisely on the timescales of the proposed and demonstrated optical spin rotations. Therefore, in order to overcome the phonon-induced indirect spin dephasing, one should avoid such detunings and timescales. Another idea is to reduce the dephasing by means of collective encoding of quantum information in QD arrays \cite{grodecka06,grodecka06b}. Pulse optimization may also lead to error reduction \cite{axt05a,wenin06,hohenester04}. \acknowledgments A. G. and J. F. acknowledge support from the Emmy Noether Program of the DFG (Grant No. FO 637/1-1) and the DFG Research Training Group GRK 1464, and thank the John von Neumann Institut f{\"u}r Computing (NIC) for computing time. \bibliographystyle{prsty}
\section{Introduction}\label{sec:intro} Several programming platforms are nowadays available, providing methods to transform and manipulate various formal language objects: Grail/Grail+\cite{RayWood:1994,grail}, Vaucanson 2~\cite{Vauc,DDLS:2013}, FAdo \cite{FAdo,AAAMR:2009}, OpenFST \cite{OpenFST}, JFLAP \cite{OpenFST}. Some of these systems allow one to manipulate such objects within simple script environments. Grail, for example, one of the oldest systems, provides a set of filters manipulating automata and regular expressions on a UNIX command shell. Similarly, FAdo provides a set of methods manipulating such objects on a Python shell~\cite{Python}. Software environments for symbolic manipulation of formal languages are widely recognized as important tools for theoretical and practical research. They allow easy prototyping of new algorithms, testing of algorithm performance with large datasets, and corroboration or refutation of descriptional complexity bounds for manipulations of formal-system representations. A typical example is, for a given operation on regular languages, to find an upper bound for the number of states of a minimal deterministic finite automaton (DFA) for the language that results from the operation, as a function of the number of states of the minimal DFAs of the operands. Due to the combinatorial nature of formal language representations, such calculations are almost impossible without computational aid. In this work, we extend the capabilities of FAdo and LaSer~\cite{DudKon:2012,Laser} by implementing transducer methods and by going to the higher level of implementing objects representing classes of independent formal languages, also known as code properties. More specifically, the contributions of the present paper are as follows. \begin{enumerate} \item Implementation of transducer objects and state-of-the-art transducer methods. FAdo is a regularly maintained and user-friendly Python package that was lacking transducer objects. 
Now available are general transducers as well as transducers in standard and normal forms. Some important methods that have been implemented are various product constructions between transducers and between transducers and automata, as well as a transducer functionality test. \item Definitions of objects representing code properties and methods for their manipulation, which to our knowledge is a new development in software related to formal language objects. In simple mathematical terms, a code property is a class of languages that is closed for maximal languages. In addition to some fixed known code properties (such as prefix code, suffix code, hypercode), these methods can be used to construct new code properties, including various error-detecting properties, which are specified either via a trajectory regular expression~\cite{Dom:2004} or via a transducer description~\cite{DudKon:2012}. Moreover, our methods can be used to combine any defined properties and produce a new property that is the conjunction of the defined properties. \item Enhancement and implementation of state of the art decision algorithms for code properties of regular languages. In particular, many such algorithms have been implemented and enhanced so as to provide witnesses (counterexamples) in case of a negative answer, for example, when the given regular language does not satisfy the property, or is not maximal with respect to the property. To our knowledge such implementations are not openly available. In particular, the satisfaction of the classic property of unique decipherability, or decodability, is implemented for any given NFA based on the algorithm in~\cite{Head:Weber:decision} as well as the satisfaction of various error-detecting properties~\cite{Kon:2002}. \item A mathematical definition of what it means to simulate (and hence implement) a hierarchy of properties and the proof that there is no complete simulation of the set of error-detecting properties. 
\item Generation of executable Python code based on the requested question about a given code property. This is mostly of use in the online LaSer~\cite{Laser}, which receives client requests and attempts to compute answers. However, as the algorithm required to compute an answer can take a long time, LaSer provides the option to compute and return a self-contained executable Python program that can be executed on the client's machine and return the required answer. \item All the above classes and methods are open source (GPL): available for anyone to copy, use, and modify to suit their own application. \end{enumerate} Our work is founded on independence theory \cite{Shyr:Thierrin:relations,JuKo:handbook} as well as the theory of rational relations and transducers \cite{Be:1979,Sak:2009}. We present our algorithmic enhancements in a detailed mathematical manner with \textit{two aims in mind}: first, to establish the correctness of the enhanced algorithms and, second, to allow interested readers to obtain a deeper understanding of these algorithms, which could potentially lead to further developments. \pssn The paper is organized as follows. \begin{description} \item[Section 2] contains some basic terminology and background about various formal language concepts as well as a few examples of manipulating FAdo automata in Python. \item[Section 3] describes our implementation of transducer object classes and a few basic methods involving product constructions and rational operations. \item[Section 4] describes the decision algorithm for transducer functionality, and then our enhancement so as to provide witnesses when the transducer in question is not functional. \item[Section 5] describes our implementation of code property objects and basic methods for their manipulation.
Moreover, a mathematical approach to defining syntactic simulations of infinite sets of properties is presented, explaining that a linear simulation of all error-detecting properties exists, but no complete simulation of these properties is possible. \item[Section 6] continues on code property methods by describing our implementation of the satisfaction and maximality methods. Again, we describe our enhancements so as to provide witnesses when the answer to the satisfaction/maximality question is negative. \item[Section 7] describes the implementation of the unique decodability (or decipherability) property and its satisfaction and maximality algorithms. This property is presented separately, as it is a classic one that cannot be defined within the methods of transducer properties. \item[Section 8] describes the new version of LaSer~\cite{Laser} including the capability to generate executable programming code in response to a client's request for the satisfaction or maximality of a given code property. \item[Section 9] contains a few concluding remarks including directions for future research. \end{description} \section{Terminology and Background}\label{sec:terminology} \emshort{Sets, alphabets, words, languages.} We write $\mathbb{N},\mathbb{N}_0,\mathbb{Z},\mathbb{R}$ for the sets of natural numbers (not including 0), non-negative integers, integers, and real numbers, respectively. If $S$ is a set, then $\card S$ denotes the cardinality of $S$, and $\pset S$ denotes the set of all subsets of $S$. An \emdef{alphabet} is a finite nonempty set of symbols. In this paper, we write $\al,\alD$ for any arbitrary alphabets. If $q\in\mathbb{N}$, then $\al_q$ denotes the alphabet $\{0,1,\ldots,q-1\}$. The set of all words, or strings, over an alphabet $\al$ is written as $\al^*$, which includes the \emdef{empty word} $\ew$. A \emdef{language} (over $\al$) is any set of words. 
In the rest of this paragraph, we use the following arbitrary object names: $i,j$ for nonnegative integers, $K,L$ for languages and $u,v,w,x,y$ for words. If $w\in L$ then we say that $w$ is an \emdef{$L$-word}. When there is no risk of confusion, we write a singleton language $\{w\}$ simply as $w$. For example, $L\cup w$ and $v\cup w$ mean $L\cup\{w\}$ and $\{v\}\cup\{w\}$, respectively. We use standard operations and notation on words and languages \cite{HopcroftUllman,Wood:theory:of:comput,MaSa:handbook,FLHandbookI}. For example, $uv$, $w^i$, $KL$, $L^i$, $L^*$, $L^+$ represent respectively, the concatenation of $u$ and $v$, the word consisting of $i$ copies of $w$, the concatenation of $K$ and $L$, the language consisting of all words obtained by concatenating any $i$ $L$-words, the Kleene star of $L$, and $L^+=L^*\setminus\ew$. If $w$ is of the form $uv$ then $u$ is a \emdef{prefix} and $v$ is a \emdef{suffix} of $w$. If $w$ is of the form $uxv$ then $x$ is an \emdef{infix} of $w$. If $u\not=w$ then $u$ is called a \emdef{proper prefix} of $w$---the definitions of proper suffix and proper infix are similar. \pmsn \emshort{Codes, properties, independent languages, maximality.} A \emdef{property} (over $\al$) is any set $\pty$ of languages. If $L$ is in $\pty$ then we say that $L$ \emdef{satisfies} $\pty$. Let $\aleph_0$ denote the cardinality of $\mathbb{N}$. A \emdef{code property}, or \emdef{independence}, \cite{JuKo:handbook}, is a property $\pty$ for which there is $n\in\mathbb{N}\cup\{\aleph_0\}$ such that \[ L\in\pty, \quad\hbox{if and only if} \quad L'\in\pty,\hbox{ for all $L'\subseteq L$ with $0<\card{L'}<n$,} \] that is, $L$ satisfies the property exactly when all nonempty subsets of $L$ with less than $n$ elements satisfy the property. In this case, we also say that $\pty$ is an $n$-\emdef{independence}. In the rest of the paper we only consider properties $\pty$ that are code properties. 
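To illustrate the above definition, here is a small pure-Python sketch (ours, independent of the FAdo package and meant only for illustration) that checks a language against an $n$-independence in two ways: directly, and via all nonempty subsets with fewer than $n=3$ elements. The example 3-independence used here is the condition that no word of the language is a proper prefix of another; all function names are ours.

```python
from itertools import combinations

def is_prefix_code(L):
    # direct check: no word of L is a proper prefix of another L-word
    return not any(u != w and w.startswith(u) for u in L for w in L)

def satisfies_by_independence(L, prop, n=3):
    # L satisfies an n-independence iff every nonempty subset of L
    # with fewer than n elements satisfies the property
    words = list(L)
    return all(prop(set(sub))
               for k in range(1, n)
               for sub in combinations(words, k))

# the two checks agree, as the definition requires (here n = 3)
assert is_prefix_code({'01', '10', '110'}) is True
assert satisfies_by_independence({'01', '10', '110'}, is_prefix_code) is True
assert is_prefix_code({'0', '01', '10'}) is False       # '0' is a prefix of '01'
assert satisfies_by_independence({'0', '01', '10'}, is_prefix_code) is False
```

Of course, for infinite (regular) languages one cannot enumerate subsets; the decision algorithms discussed later operate on automata and transducers instead.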
A language $L\in\pty$ is called \emdef{$\pty$-maximal}, or a maximal $\pty$ code, if $L\cup w\notin\pty$ for any word $w\notin L$. From \cite{JuKo:handbook} we have that every $L$ satisfying $\pty$ is included in a maximal $\pty$ code. To our knowledge, all known code-related properties in the literature \cite{Ham:1950,Shyr:book,JuKo:handbook,SSYu:book,BePeRe:2009,Dom:2004,DudKon:2012,PAF:2013} are code properties as defined above. For example, consider the `prefix code' property: $L$ is a \emdef{prefix code} if no word in $L$ is a proper prefix of a word in $L$. This is a code property with $n=3$. Indeed, let $\pty'$ be the union of the set of all singleton languages with the set of all languages $\{u,v\}$ such that $u$ is not a proper prefix of $v$ and vice versa. Then, every element of $\pty'$ is a prefix code. Moreover, any $L$ is a prefix code if and only if every nonempty subset $L'$ of $L$ with fewer than three elements is in $\pty'$. As we shall see further below, the focus of this work is on 3-independence properties that can also be viewed as independence with respect to a binary relation in the sense of \cite{Shyr:Thierrin:relations}. \pmsn \emshort{Automata and regular languages \cite{Yu:handbook,Sak:2009}.} A nondeterministic finite automaton with empty transitions, for short \emdef{automaton} or \emdef{$\ew$-NFA}, is a quintuple $$\aut=(Q,\al, T,I, F)$$ such that $Q$ is the set of states, $\al$ is an alphabet, $I,F\subseteq Q$ are the sets of start (or initial) states and final states, respectively, and $T\subseteq Q\times(\al\cup\ew)\times Q$ is the finite set of \emdef{transitions}. Let $(p,x,q)$ be a transition of $\aut$. Then $x$ is called the \emdef{label} of the transition, and we say that $p$ has an \emdef{outgoing} transition (with label $x$). A \emdef{path} of $\aut$ is a finite sequence $(p_0,x_1,p_1,\ldots,x_\ell,p_\ell)$, for some nonnegative integer $\ell$, such that each triple $(p_{i-1},x_i,p_i)$ is a transition of $\aut$.
The word $x_1\cdots x_\ell$ is called the \emdef{label} of the path. The path is called \emdef{accepting} if $p_0$ is a start state and $p_\ell$ is a final state. The \emdef{language accepted} by $\aut$, denoted as $\lang(\aut)$, is the set of labels of all the accepting paths of $\aut$. The $\ew$-NFA $\aut$ is called \emdef{trim}, if every state appears in some accepting path of $\aut$. The automaton $\aut$ is called an \emdef{NFA}, if no transition label is empty, that is, $T\subseteq Q\times\al\times Q$. A deterministic finite automaton, or \emdef{DFA} for short, is a special type of NFA where $I$ is a singleton set and there is no state $p$ having two outgoing transitions with equal labels. The \emdef{size} $\sz{\aut}$ of the automaton $\aut$ is $\card{Q}+\card{T}$. The automaton $\aut^{\ew}$ results when we add $\ew$-loops in $\aut$, that is, transitions $(p,\ew,p)$ for all states $p\in Q$. \pmsn \emshort{Transducers and (word) relations \cite{Be:1979,Yu:handbook,Sak:2009}.} A (word) \emdef{relation} over $\al$ and $\alD$ is a subset of $\al^*\times\alD^*$, that is, a set of pairs $(x,y)$ of words over the two alphabets (respectively). The \emdef{inverse} of a relation $\rho$, denoted as $\rho^{-1}$, is the relation $\{(y,x)\mid (x,y)\in \rho\}$. A (finite) \emdef{transducer} is a sextuple $$\tr=(Q, \al, \alD, T, I, F)$$ such that $Q,I,F$ are exactly the same as those in $\ew$-NFAs, $\al$ is now called the \emdef{input} alphabet, $\alD$ is the \emdef{output} alphabet, and $T\subseteq Q\times\al^*\times\alD^*\times Q$ is the finite set of transitions. We write $(p,x/y,q)$ for a transition -- the \emdef{label} here is $(x/y)$, with $x$ being the input and $y$ being the output label. The concepts of path, accepting path, and trim transducer are similar to those in $\ew$-NFAs.
In particular the \emdef{label} of a path $(p_0,x_1/y_1,p_1,\ldots,x_\ell/y_\ell,p_\ell)$ is the pair $(x_1\cdots x_\ell, y_1\cdots y_\ell)$ consisting of the input and output labels in the path. The \emdef{relation realized} by the transducer $\tr$, denoted as $\rel(\tr)$, is the set of labels in all the accepting paths of $\tr$. We write $\tr(x)$ for the set of \emdef{possible outputs of} $\tr$ on input $x$, that is, $y\in\tr(x)$ iff $(x,y)\in \rel(\tr)$. The \emdef{domain} of $\tr$ is the set of all words $w$ such that $\tr(w)\not=\emptyset$. The \emdef{inverse} of a transducer $\tr$, denoted as $\tr^{-1}$, is the transducer that results from $\tr$ by simply switching the input and output alphabets of $\tr$ and also switching the input and output parts of the labels in the transitions of $\tr$. It follows that $\tr^{-1}$ realizes the inverse of the relation realized by $\tr$. The transducer $\tr$ is said to be in \emdef{standard form}, if each transition $(p,x/y,q)$ is such that $x\in(\al\cup\ew)$ and $y\in(\alD\cup\ew)$. It is in \emdef{normal form} if it is in standard form and exactly one of $x$ and $y$ is equal to $\ew$. We note that every transducer is effectively equivalent to one (realizing the same relation, that is) in standard form and one in normal form. As in the case of automata, the transducer $\trt^{\ew}$ results when we add $\ew$-loops in $\trt$, that is, transitions $(p,\ew/\ew,p)$ for all states $p\in Q$. The \emdef{size} of a transition $(p,x/y,q)$ is the number $1+|x|+|y|$. The size $\sz{\tr}$ of the transducer $\tr$ is the sum of the number of states and sizes of transitions in $T$. If $\trs$ and $\tr$ are transducers, then there is a transducer $\trs\lor\tr$ of size $O(|\trs|+|\tr|)$ realizing $\rel(\trs)\cup\rel(\tr)$. 
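To make the transducer notions above concrete, here is a small pure-Python sketch (ours, not part of FAdo; all names and the encoding are illustrative only). It encodes a standard-form transducer as a list of transitions $(p,x,y,q)$, with $x$ and $y$ each a symbol or the empty word, computes the set of outputs on a given input word, and inverts the transducer by swapping the input and output parts of the labels. The run assumes there is no cycle of transitions with empty input label, so the search terminates.

```python
def run(transitions, initials, finals, w):
    # all outputs of a standard-form transducer on input w
    # (assumes no cycle of transitions whose input label is empty)
    outputs, stack = set(), [(q, 0, '') for q in initials]
    while stack:
        q, i, out = stack.pop()
        if i == len(w) and q in finals:
            outputs.add(out)
        for p, x, y, r in transitions:
            if p != q:
                continue
            if x == '':                      # consume no input symbol
                stack.append((r, i, out + y))
            elif i < len(w) and x == w[i]:   # consume one input symbol
                stack.append((r, i + 1, out + y))
    return outputs

def inverse(transitions):
    # swap the input and output parts of every transition label
    return [(p, y, x, q) for p, x, y, q in transitions]

# toy transducer: on input x, output every proper suffix of x
T = ([(0, s, '', 0) for s in 'ab'] +
     [(0, s, '', 1) for s in 'ab'] +
     [(1, s, s, 1) for s in 'ab'])
assert run(T, {0}, {1}, 'aab') == {'ab', 'b', ''}
assert run(inverse([(0, 'a', 'b', 1)]), {0}, {1}, 'b') == {'a'}
```

The second assertion checks the defining property of the inverse: if the transducer maps $a$ to $b$, its inverse maps $b$ to $a$.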
\pmsn \emshort{Automata and finite languages in FAdo \cite{FAdo}.} The modules \texttt{fa} for automata, \texttt{fl} for finite languages, and \texttt{fio} for input/output of formal language objects, can be imported in a standard Python manner as follows. \begin{verbatim} import FAdo.fl as fl import FAdo.fio as fio from FAdo.fa import * # import all fa methods for readability \end{verbatim} The FAdo object classes \texttt{FL}, \texttt{DFA} and \texttt{NFA} manipulate finite languages, DFAs and $\ew$-NFAs, respectively. \begin{EX} The following code builds a finite language object \verb1L1 from a list of strings, and then builds an NFA object \aai accepting the language $\{a, ab, aab\}$. \begin{verbatim} lst = ['a', 'ab', 'aab'] L = fl.FL(lst) a = L.toNFA() \end{verbatim} The second line uses the class name \verb1FL1 to create the finite language \verb1L1 from the given list of strings. The last line returns an NFA accepting the language \verb1L1. \end{EX} \pbsn The following code reads an automaton, or transducer, \aai from a file, and then writes it into a file. \begin{verbatim} a = fio.readOneFromFile(filename1) ... fio.saveToFile(filename2, a) \end{verbatim} These methods assume that automata and transducers are written into those files in FAdo format---see examples below and further ones in \cite{FAdo,Laser}. We can also read an automaton or transducer from a string \verb1s1 using the function \verb1readOneFromString(s)1. \begin{EX}\label{exNFA1} The following code defines a string that contains an automaton accepting $a^*b$ and then uses that string to define an \verb1NFA1 object. 
\begin{verbatim} st = '@NFA 1 * 0\n0 a 0\n0 b 1\n' a = fio.readOneFromString(st) \end{verbatim} As usual, the pattern \verb1\n1 denotes the \emdef{end of line character}, so the string \verb1st1 consists of three lines: the first indicates the type of object followed by the final states (in this case 1) and the start states after \verb1*1 (in this case 0); the second line contains the transition \verb#0 a 0#; and the third line contains the transition \verb#0 b 1#. The string has to end with a \verb1\n1. \end{EX} \pnsn Next we list a few useful methods for \verb1NFA1 objects. We assume that \texttt{a} and \texttt{b} are automata, and \texttt{w} is a string (a word). \pbsn \texttt{a.evalWordP(w)}: returns \texttt{True} or \texttt{False}, depending on whether the automaton \texttt{a} accepts \texttt{w}. \pssn \texttt{a.emptyP()}: returns \texttt{True} or \texttt{False}, depending on whether the language accepted by $\texttt{a}$ is empty. \pssn \texttt{a \& b}: returns an NFA accepting $\lang(\texttt{a})\cap\lang(\texttt{b})$. Both $\texttt{a}$ and $\texttt{b}$ must be NFAs with no $\ew$-transitions. \pssn \texttt{a.elimEpsilon()}: alters $\texttt{a}$ to an equivalent NFA with no $\ew$-transitions. \pssn \texttt{a.epsilonP()}: returns \texttt{True} or \texttt{False}, depending on whether $\texttt{a}$ has any $\ew$-transitions. \pssn \texttt{a.makePNG(fileName='xyz')}: creates a file \verb1xyz.jpg1 containing a picture of the automaton (or transducer) $\texttt{a}$. \pssn \texttt{a.enumNFA(n)}: returns the set of words of length up to \texttt{n} that are accepted by the automaton $\texttt{a}$. \begin{EX} The following example shows a naive implementation of \verb1a.evalWordP(w)1 \begin{verbatim} b = fl.FL([w]).toNFA() c = a & b return not c.emptyP() \end{verbatim} One verifies that \texttt{w} is accepted by $\texttt{a}$ if and only if $\texttt{a}$ intersected with the automaton accepting \{\texttt{w}\} accepts something. 
\end{EX} \section{Transducer Object Classes and Methods}\label{sec:transducers} In this section we discuss some aspects of the implementation of transducer objects and then continue with several important methods, which we divide into two subsections: product constructions and rational operations. We discuss the method for testing functionality in Section~\ref{sec:nonfunc}. The module containing all that is discussed in this and the next section is called \verb1transducers.py1. We can import all transducer methods as follows. \begin{verbatim} from FAdo.transducers import * \end{verbatim} \pssn \emshort{Transducer objects and basic methods} \pssn From a mathematical point of view, a \emdef{Python dictionary} is an onto function $\texttt{delta}:D\to R$ between two finite sets of values $D$ and $R$. One writes \verb1delta[x]1 for the image of \texttt{delta} on \verb1x1. One can define completely the dictionary using an expression of the form \[ \texttt{delta = \{d1:r1, d2:r2,}\>\>\ldots\>\>\> \texttt{, dN:rN\}} \] which defines \verb1delta[d1$i$\verb1] = r1$i$, for all $i=1,\ldots,\mathrm{N}$. In FAdo, the class \texttt{GFT}, for General Form Transducer, is a subclass of \texttt{NFA}. A transducer $\trt=(Q,\al,\alD,T,I,F)$ is implemented as an object \tti with six instance variables \pmsi \texttt{States, Sigma, Output, delta, Initial, Final} \pmsn corresponding to the six components of $\trt$. Specifically, \texttt{States} is a list of unique state names, meaning that each state name has an index which is the position of the state name in the list, with 0 being the first index value. The variables \texttt{Sigma, Output, Initial} and \texttt{Final} are sets, where the latter two are sets of state indexes. For efficiency reasons, the set of transitions $T$ is implemented as a dictionary \pmsi \texttt{delta:} $\{0,\ldots,n-1\} \to$ (\texttt{Sigma} $\to$ $2^{\texttt{Output} \times\{0,...,n-1\}}$), \pssn where $n$ is the number of states. 
Thus, for any $p\in\{0,\ldots,n-1\} $, \texttt{delta[$p$]} is a dictionary, and for any input label $x$, \texttt{delta[$p$][$x$]} is a set of pairs $(y,q)$ corresponding to all transitions $\{(p,x/y,q)\in T\mid y\in \texttt{Output},\> q \hbox{ is a state index}\}$. \pssn Standard form transducers are objects of the FAdo class \texttt{SFT}, which is a subclass of \texttt{GFT}. The class \texttt{SFT} is very important from an algorithmic point of view, as most product constructions require a transducer to be in standard form. The conversion from GFT to SFT is done using \pmsi \verb1 s = t.toSFT()1 \pssn which assigns to \tsi a new SFT object equivalent to \ttin. The implementation of Normal Form Transducers is via the FAdo class \texttt{NFT}. This form of transducers is convenient in proving mathematical statements about transducers~\cite{Sak:2009}. \begin{EX}\label{exSuffixTran} The following code defines a string \verb1s1 that contains a transducer description and then constructs an SFT transducer from that string. The transducer, on input $x$, returns the set of all proper suffixes of $x$---see also Fig~\ref{fig:suffix}. It has an initial state 0 and a final state 1, and deletes at least one of the input symbols so that what gets outputted at state 1 is a proper suffix of the input word. \begin{verbatim} s = '@Transducer 1 * 0\n'\ '0 a @epsilon 0\n'\ '0 b @epsilon 0\n'\ '0 a @epsilon 1\n'\ '0 b @epsilon 1\n'\ '1 a a 1\n'\ '1 b b 1\n' t = fio.readOneFromString(s) \end{verbatim} \end{EX} \begin{figure}[ht!] \begin{transducer} \node [state,initial] (q0) {$0$}; \node [state,accepting,right of=q0] (q1) {$1$}; \path (q0) edge [loop above] node [above] {$\sigma/\ew$} () (q0) edge node [below] {$\sigma/\ew$} (q1) (q1) edge [loop above] node [above] {$\sigma/\sigma$} (); \end{transducer} \begin{center} \parbox{0.85\textwidth}{\caption{On input $x$ the transducer shown in the figure outputs any proper suffix of $x$. 
{Note}: In this and the following transducer figures, the input and output alphabets are equal. An arrow with label $\sigma/\sigma$ represents a set of transitions with labels $\sigma/\sigma$, for all alphabet symbols $\sigma$; and similarly for an arrow with label $\sigma/\ew$. An arrow with label $\sigma/\sigma'$ represents a set of transitions with labels $\sigma/\sigma'$ for all distinct alphabet symbols $\sigma,\sigma'$.} \label{fig:suffix} } \end{center} \end{figure} Recall that, for a transducer $\trt$ and word $w$, $\trt(w)$ is the set of possible outputs of $\trt$ on input $w$. Note that this set could be empty, finite, or even infinite. In any case, it is always a regular language. The FAdo method \texttt{t.runOnWord(w)} assumes that \tti is an SFT object and returns an automaton accepting the language \ttin(w). \begin{EX} The following code is a continuation of Example~\ref{exSuffixTran}. It prints the set of all proper suffixes of the word \texttt{ababb}.
\begin{verbatim}
a = t.runOnWord('ababb')
n = len('ababb')
print a.enumNFA(n)
\end{verbatim}
\end{EX} \pssn Assuming again that \tti is an SFT object, we have the following methods. \pssn \texttt{t.inverse()}: returns the inverse of the transducer \ttin. \pssn \texttt{t.evalWordP((w,w'))}: returns \texttt{True} or \texttt{False}, depending on whether the pair \texttt{(w,w')} belongs to the relation realized by \ttin. \pssn \texttt{t.nonEmptyW()}: returns some word pair (\texttt{u, v}) that belongs to the relation realized by \ttin, if that relation is nonempty; otherwise, it returns the pair (\texttt{None, None}). \pssn \texttt{t.toInNFA()}: returns the NFA that results if we remove the output alphabet and the output labels of the transitions in \ttin. \pssn \texttt{t.toOutNFA()}: returns the NFA that results if we remove the input alphabet and the input labels of the transitions in \ttin. \pbsn \emshort{Product constructions \cite{Be:1979,Yu:handbook,Kon:2002}} \pssn The following methods are available in FAdo.
They are adaptations of the standard product construction \cite{HopcroftUllman} between two NFAs, which produces an NFA with transitions $((p_1,p_2),\sigma,(q_1,q_2))$, where $(p_1,\sigma,q_1)$ and $(p_2,\sigma,q_2)$ are transitions of the two NFAs, such that the new NFA accepts the intersection of the corresponding languages. We assume that \tti and \tsi are SFT objects and \aai is an NFA object. \pbsn \texttt{t.inIntersection(a)}: returns a transducer realizing all word pairs $(x,y)$ such that $x$ is accepted by \aai and $(x,y)$ is realized by \ttin. \pssn \texttt{t.outIntersection(a)}: returns a transducer realizing all word pairs $(x,y)$ such that $y$ is accepted by \aai and $(x,y)$ is realized by \ttin. \pssn \texttt{t.runOnNFA(a)}: returns the automaton accepting the language \[ \bigcup_{w\in\lang(\aut)}\tr(w). \] \pssn \texttt{t.composition(s)}: returns a transducer realizing the composition $\rel(\ttin)\circ\rel(\tsin)$ of the relations realized by the two transducers. \pmsn To make the presentation a little more concrete for interested readers, we comment on one of the above methods, \texttt{t.runOnNFA(a)}. The construction first considers whether \tti has any $\ew$-input transitions, that is, transitions with labels $\ew/y$. If yes, then a copy \abi of \aai is made with $\ew$-loops added, that is, transitions $(p,\ew,p)$ for all states $p$ in \aain. Then, if \aai has any $\ew$-transitions, a copy \tsi of \tti is made with $\ew$-loops added, that is, transitions $(q,\ew/\ew,q)$ for all states $q$ in \ttin. Then, the actual product construction is carried out: if $(p,x,p')$ and $(q,x/y,q')$ are transitions of \abi and \tsin, respectively, then $((p,q),y,(p',q'))$ is a transition of the resulting automaton. \begin{EX} Continuing Examples \ref{exNFA1} and \ref{exSuffixTran}, the following code constructs an NFA object \abi accepting every word that is a proper suffix of some word in $a^*b$.
It then enters a loop that prints whether a given string $w$ is a proper suffix of some word in $a^*b$.
\begin{verbatim}
b = t.runOnNFA(a)
while True:
    w = raw_input()
    if 'a' not in w and 'b' not in w:
        break
    print b.evalWordP(w)
\end{verbatim}
\end{EX} \pmsn \emshort{Rational operations \cite{Be:1979}}\pssn A relation $\rho$ is a \emdef{rational relation}, if it is equal to $\emptyset$, or to $\{(x,y)\}$ for some words $x$ and $y$, or can be obtained from other rational relations by using, a finite number of times, any of the three (rational) operators: union, concatenation, Kleene star. A classic result on transducers says that a relation is rational if and only if it can be realized by a transducer. The following methods are now available in FAdo, where we assume that \tsi and \tti are SFT transducers. \pmsn \texttt{t.union(s):} returns a transducer realizing the union of the relations realized by \tsi and \ttin. \pmsn \texttt{t.concat(s):} returns a transducer realizing the concatenation of the relations realized by \tsi and~\ttin. \pmsn \texttt{t.star(flag=False):} returns a transducer realizing the Kleene star of the relation realized by \ttin, if the argument is missing or is \texttt{False}; otherwise it returns a transducer realizing $(\rel(\ttin))^+$. \pmsn The implementation of the above methods mimics the implementation of the corresponding methods on automata. \section{Witness of Transducer \emph{non}-functionality}\label{sec:nonfunc} A transducer $\trt$ is called \emdef{functional} if, for every word $w$, the set $\trt(w)$ is either empty or a singleton. A triple of words $(w,z,z')$ is called a \emdef{witness of $\trt$'s non-functionality}, if $z\not=z'$ and $z,z'\in\trt(w)$. In this section we present the SFT method \texttt{t.nonFunctionalW()}, which returns a witness of \ttin's non-functionality, or the triple (\texttt{None,None,None}) if \tti is functional.
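Before the algorithm is described, the notion of a witness can be made concrete with a naive pure-Python sketch (ours, purely illustrative, and not how the FAdo method works): enumerate input words by brute force and look for one with two distinct outputs. This approach is exponential in the length bound, in contrast with the efficient method presented in this section, and it only handles transducers without $\ew$-input transitions.

```python
from itertools import product

def outputs(transitions, initials, finals, w):
    # all outputs on input w of a standard-form transducer, given as a
    # list of transitions (p, x, y, q) with no empty input labels x
    res, stack = set(), [(q, 0, '') for q in initials]
    while stack:
        q, i, out = stack.pop()
        if i == len(w) and q in finals:
            res.add(out)
        for p, x, y, r in transitions:
            if p == q and i < len(w) and x == w[i]:
                stack.append((r, i + 1, out + y))
    return res

def naive_nonfunctional_witness(transitions, initials, finals,
                                alphabet, maxlen):
    # brute-force search for (w, z, z') with z != z' both outputs on w
    for n in range(maxlen + 1):
        for tup in product(alphabet, repeat=n):
            w = ''.join(tup)
            outs = sorted(outputs(transitions, initials, finals, w))
            if len(outs) > 1:
                return (w, outs[0], outs[1])
    return (None, None, None)

# toy non-functional transducer: input 'a' maps to both '0' and '1'
T = [(0, 'a', '0', 1), (0, 'a', '1', 1)]
assert naive_nonfunctional_witness(T, {0}, {1}, 'a', 2) == ('a', '0', '1')
```

Note that the brute-force bound \texttt{maxlen} must be guessed, whereas the algorithm below needs no such bound and produces a witness of size quadratic in the size of the transducer.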
The method is an adaptation of the decision algorithms in \cite{AllMoh:2003,BeCaPrSa2003} that return whether a given transducer in standard form is functional. Although there are some differences in the two algorithms, we believe that conceptually the algorithmic technique is the same. We first describe that algorithmic technique following the presentation in \cite{BeCaPrSa2003}, and then we modify it in order to produce the method \texttt{t.nonFunctionalW()}. We also note that, using a careful implementation and assuming fixed alphabets, the time complexity of the decision algorithm can be quadratic with respect to the size of the transducer---see \cite{AllMoh:2003}. \pssn Given a standard form transducer $\trt=(Q,\al,\alD,T,I,F)$, the first phase is to construct the \emdef{square machine} $\tru$, which is defined by the following process. \pmsi \emshort{Phase 1} \begin{enumerate} \item First define an automaton $\tru'$ as follows: states $Q\times Q$, initial states $I\times I$, and final states $F\times F$. \item If $\trt$ contains $\ew$-input transitions, that is, transitions with labels of the form $\ew/u$ then we let $\trt$ be $\trt^{\ew}$. The transitions of $\tru'$ are all the triples $$((p,p'),(x,x'),(q,q'))$$ such that $(p,v/x,q)$ and $(p',v/x',q')$ are transitions of $\trt$. \item Return $\tru$ = a trim version of $\tru'$. \end{enumerate} Note that any accepting path of $\tru$ has a label $(x_1,x_1')\cdots(x_n,x_n')$ such that the words $x_1\cdots x_n$ and $x_1'\cdots x_n'$ are outputs (possibly equal) of $\trt$ on the \textit{same} input word. The next phase is to perform a process that starts from the initial states and assigns a \emdef{delay value} to each state, which is either ZERO or a pair of words in $\{(\ew,\ew),(\ew,u),(u,\ew)\}$, with $u$ being nonempty. A delay $(y,y')$ on a state $(p,p')$ indicates that there is a path in $\tru$ from $I\times I$ to $(p,p')$ whose label is a word pair of the form $(fy,fy')$. 
This means that there is an input word that can take the transducer $\trt$ to state $p$ with output $fy$ and also to state $p'$ with output $fy'$. A delay ZERO at $(p,p')$ means that there is an input word that can take $\trt$ to state $p$ with output of the form $f\sigma g$ and to state $p'$ with output of the form $f\sigma'g'$, where $\sigma$ and $\sigma'$ are distinct alphabet symbols. \pmsi \emshort{Phase 2} \begin{enumerate} \setcounter{enumi}{3} \item Assign to each initial state the delay value $(\ew,\ew)$. \item Starting from the initial states, visit all transitions in breadth-first search mode such that, if $(p,p')$ has the delay value $(y,y')$ and a transition $((p,p'),(x,x'),(q,q'))$ is visited, then the state $(q,q')$ gets a delay value $D$ as follows: \begin{itemize} \item If $y'x'$ is of the form $yxu$ then $D=(\ew,u)$. If $yx$ is of the form $y'x'u$ then $D=(u,\ew)$. If $y'x'=yx$ then $D=(\ew,\ew)$. Else, $D$ = ZERO. \end{itemize} \item The above process stops when a delay value is ZERO, or a state gets two different delay values, or every state gets one delay value. \item If every state has one delay value and every final state has the delay value $(\ew,\ew)$ then return \true (the transducer is functional). Else, return \falsen. \end{enumerate} \pnsn Next we present our witness version of the transducer functionality algorithm. First, the square machine $\tru$ is revised such that its transitions are of the form $$((p,p'),(v,x,x'),(q,q')),$$ that is, we now record in $\tru$ information about the common input $v$ (see Step~2 in Phase~1). Then, to each state $(q,q')$ we assign not only a delay value but also a path value $(\alpha,\beta,\beta')$ which means that, on input $\alpha$, the transducer $\trt$ can reach state $q$ with output $\beta$ and also state $q'$ with output $\beta'$---see Fig.~\ref{fig:Functionality}. \begin{figure}[ht!] 
\begin{transducer} \node [node distance=0.75cm,above of=q0,anchor=east](refl) {}; \node [state,initial] (q0) {$s_i$}; \node [node distance=1cm,left=of q0,anchor=east] {$\trt\colon$}; \node [state,right of=refl] (q1) {$p$}; \node [state,right of=q1] (q2) {$q$}; \node [node distance=1cm,right=of q0,anchor=east] {$\cdots$}; \node [node distance=1.25cm,state,below of=q1] (q4) {$p'$}; \node [state,right of=q4] (q5) {$q'$}; \node [node distance=0.75cm,below of=q2,anchor=east](refr) {}; \node [state,accepting,right of=refr] (q3) {$f_j$}; \node [node distance=1cm,left=of q3,anchor=east] {$\cdots$}; \path (q1) edge node [above] {$v/x$} (q2) (q4) edge node [above] {$v/x'$} (q5); \node [node distance=2.5cm,state,initial,below of=q0] (p0) {$s_i,s_{i'}$}; \node [node distance=1cm,left=of p0,anchor=east] {$\tru\colon$}; \node [state,right of=p0] (p1) {$p,p'$}; \node [state,right of=p1] (p2) {$q,q'$}; \node [node distance=1cm,right=of p0,anchor=east] {$\cdots$}; \node [state,accepting,right of=p2] (p3) {$f_j,f_{j'}$}; \node [node distance=1cm,left=of p3,anchor=east] {$\cdots$}; \path (p1) edge node [above] {$v,x,x'$} (p2); \node [node distance=1cm,below of=p1](dell) {delay $(y,y')$\\path $(w,z,z')$}; \node [node distance=1cm,below of=p2](delr) {delay $D$\\path $(wv,zx,z'x')$}; \end{transducer} \begin{center} \parbox{0.85\textwidth}{\caption{The figure shows the structure of the (revised) square machine $\tru$ corresponding to the given transducer $\trt$. The delay and path values for the states of $\tru$ are explained in Definition~\ref{def1}} \label{fig:Functionality}} \end{center} \end{figure} \begin{definition}\label{def1} Let $(q,q')$ be a state of the new square machine $\tru$. The set of \emdef{delay-path values} of $(q,q')$ is defined as follows. \begin{itemize} \item If $(q,q')$ is an initial state then $((\ew,\ew),(\ew,\ew,\ew))$ is a delay-path value of $(q,q')$. 
\item If $((p,p'),(v,x,x'),(q,q'))$ is a transition in $\tru$ and $(p,p')$ has a delay-path value $(C,(w,z,z'))$, then $(D,(wv,zx,z'x'))$ is a delay-path value of $(q,q')$, where $D$ is defined as follows. \begin{enumerate} \item If $C=(y,y')\not=$ ZERO and $y'x'$ is of the form $yxu$ then $D=(\ew,u)$. \item If $C=(y,y')\not=$ ZERO and $yx$ is of the form $y'x'u$ then $D=(u,\ew)$. \item If $C=(y,y')\not=$ ZERO and $y'x'=yx$ then $D=(\ew,\ew)$ \item Else, $D$ = ZERO. \end{enumerate} \end{itemize} For $(q,q')$, we also define a \emdef{suffix triple} $(w_{qq'},z_{qq'},z_{qq'}')$ to be the label of any path from $(q,q')$ to a final state of $\tru$. \end{definition} \begin{remark}\label{rem1} The above definition implies that if a state $(p,p')$ has a delay-path value $(C,(w,z,z'))$, then there is a path in $\tru$ whose label is $(w,z,z')$. Moreover, by the definition of $\tru$, the transducer $\trt$ on input $w$ can reach state $p$ with output $z$ and also state $p'$ with output $z'$. Thus, if $(p,p')$ is a final state, then $z,z'\in\trt(w)$. \end{remark} \pssi \emshort{Algorithm nonFunctionalW} \begin{enumerate} \item Define function \texttt{completePath}($q,q'$) that follows a shortest path from $(q,q')$ to a final state of $\tru$ and returns a suffix triple (see Definition~\ref{def1}). \item Construct the revised square machine $\tru$, as in Phase 1 above but now use transitions of the form $$((p,p'),(v,x,x'),(q,q')),$$ (see step~2 in Phase~1). \item Assign to each initial state the delay-path value $((\ew,\ew),(\ew,\ew,\ew))$. \item Starting from the initial states, visit all transitions in breadth-first search mode. If $(p,p')$ has delay-path value $((y,y'),(w,z,z'))$, and a transition $((p,p'),(v,x,x'),(q,q'))$ is visited, then compute the delay value $D$ of $(q,q')$ as in steps 1--4 of Definition~\ref{def1}, and let $R=(wv,zx,z'x')$. 
Then, \begin{enumerate} \item if $D$ is ZERO, then invoke \texttt{completePath}$\,(q,q')$ to get a suffix triple $(w_{qq'},z_{qq'},z_{qq'}')$ and \texttt{return} $(wvw_{qq'},zxz_{qq'},z'x'z_{qq'}')$. \item if $(q,q')$ is final and $D\not=(\ew,\ew)$, \texttt{return} $(wv,zx,z'x')$. \item if $(q,q')$ already has a delay value $\not=D$ and, hence, a path value $P=(w_1,z_1,z_1')$, then invoke \texttt{completePath}$\,(q,q')$ to get a suffix triple $(w_{qq'},z_{qq'},z_{qq'}')$. Then, \begin{itemize} \item If $zxz_{qq'}\not=z'x'z_{qq'}'$ \texttt{return} $(wvw_{qq'}, zxz_{qq'},z'x'z_{qq'}')$. \item Else \texttt{return} $(w_1w_{qq'},z_1z_{qq'},z_1'z_{qq'}')$. \end{itemize} \item else assign $(D,R)$ to $(q,q')$ as delay-path value and continue the breadth-first process. \end{enumerate} \item At this point \texttt{return (None,None,None)}, as the breadth-first process has been completed. \end{enumerate} \emph{Terminology.} Let $A=(w_1,\ldots,w_k)$ be a tuple consisting of words. The \emdef{size} $\sz A$ of $A$ is the number $\sum_{i=1}^k\sz{w_i}+(k-1)$. For example, $\sz{(0,01,10)}=7$. If $\{A_i\}$ is any set of word tuples then a \emdef{minimal} element (of that set) is any $A_i$ whose size is the minimum of $\{\sz{A_i}\}$. \begin{theorem}\label{resNonFunc} If algorithm {\rm\texttt{nonFunctionalW}} is given as input a standard form transducer $\trt$, then it returns either a size $O(|\trt|^2)$ witness of $\trt$'s non-functionality, or the triple (\texttt{None,None,None}) if $\trt$ is functional. \end{theorem} Before we proceed with the proof of the above result, we note that there is a sequence $(\trt_i)$ of non-functional transducers such that $\sz{\trt_i}\to\infty$ and any minimal witness of $\trt_i$'s non-functionality is of size $\Theta(|\trt_i|^2)$. Indeed, let $(p_i)$ be the sequence of primes in increasing order and consider the transducer $\trt_i$ shown in Fig.~\ref{fig:tr:seq}. 
It has size $\Theta(p_i)$ and every output word $w$ of $\trt_i$ has length equal to that of the input used to get $w$. The relation realized by $\trt_i$ is $$ \{(0^{mp_i},0^{mp_i}),\,(0^{n(p_i+1)},10^{n(p_i+1)-1})\>:\>m,n\in\mathbb{N}\}. $$ Any minimal witness of $\trt_i$'s non-functionality is of the form $w_i=(0^{mp_i},0^{mp_i},10^{n(p_i+1)-1})$ such that $mp_i=n(p_i+1)$. As $p_i$ and $p_i+1$ are relatively prime, $p_i$ divides $n$, so $n\ge p_i$. Hence, $\sz{w_i}\ge2+3\times p_i(p_i+1)$, that is, $\sz{w_i}=\Theta(\sz{\trt_i}^2)$.% \begin{figure}[ht!] \begin{transducer} \node [state,initial] (q0) {$0$}; \node [node distance=1.5cm,left of= q0] (name) {$\trt_i=$}; \node [state, right of=q0] (q1) {$1'$}; \node [node distance=2.20cm,right of=q1] (q2){$\cdots\cdots$}; \node [state,accepting,node distance=2.20cm,right of=q2] (q3) {$p_i'$}; \node [state,below of= q1] (q1b) {$1$}; \node [below of=q2] (q2b) {$\cdots\cdots$}; \node [state,accepting,below of=q3] (q3b) {$p_i+1$}; \path (q0) edge node [above] {$0/0$} (q1) (q0) edge node [right] {$\>0/1$} (q1b) (q1) edge node [above] {$0/0$} (q2) (q1b) edge node [below] {$0/0$} (q2b) (q2) edge node [above] {$0/0$} (q3) (q2b) edge node [below] {$0/0$} (q3b) (q3) edge [bend left=38] node [above] {$0/0$} (q1) (q3b) edge [bend right=38] node [below] {$0/0$} (q1b) ; \end{transducer} \begin{center} \caption{ Transducers with quadratic-size minimal witnesses of non-functionality. \label{fig:tr:seq}} \end{center} \end{figure} \pssi The following lemma is useful for establishing the correctness of the algorithm \texttt{nonFunctionalW}. \begin{lemma} If a state $(q,q')$ has a delay-path value $((s,s'),(\alpha,\beta,\beta'))$, then there is a word $h$ such that $\beta=hs$ and $\beta'=hs'$. \end{lemma} \begin{proof} We use induction based on Definition~\ref{def1}. If the given delay-path value is $((\ew,\ew),(\ew,\ew,\ew))$, then the statement is trivially true.
Now suppose that there is a transition $((p,p'),(v,x,x'),(q,q'))$ such that the statement is true for state $(p,p')$ (induction hypothesis) and $((s,s'),(\alpha,\beta,\beta'))$ results from a delay-path value $(C,(w,z,z'))$ of $(p,p')$. Since $(s,s')\not=$ ZERO, also $C\not=$ ZERO, so $C$ is of the form $(y,y')$ and one of the three cases 1--3 of Definition~\ref{def1} applies. Moreover, by the induction hypothesis on $(p,p')$ we have $z=gy$ and $z'=gy'$, for some word $g$; hence, $\beta=gyx$ and $\beta'=gy'x'$. \pssn Now we consider the three cases. If $y'x'=yxu$ then $(s,s')=(\ew,u)$. Also, for $h=gyx$ we have $\beta=hs$ and $\beta'=hs'$, as required. If $yx=y'x'u$ then $(s,s')=(u,\ew)$ and one works analogously. If $yx=y'x'$ then $(s,s')=(\ew,\ew)$. Also, $\beta=\beta'$ and the statement follows using $h=\beta$. \end{proof} \pmsn \begin{proof} \emph{(of Theorem~\ref{resNonFunc})} First note that the algorithm returns a triple other than (\texttt{None,None,None}) exactly the {first} time when one of the following occurs: (i) a ZERO value for $D$ is computed, or (ii) a value of $D$ other than $(\ew,\ew)$ is computed for a final state, or (iii) a value of $D$, other than the existing delay value, of a visited state is computed. Thus, the algorithm assigns at most one delay value to each state $(q,q')$. If the algorithm assigns exactly one delay value to each state and terminates at step~5, then its execution is essentially the same as that of the decision version of the algorithm, except for the fact that in the decision version no path values are computed. Hence, in this case the transducer is functional and the algorithm correctly returns (\texttt{None,None,None}) in step~5. In the sequel we assume that the algorithm terminates in one of the three subcases (a)--(c) of step~4. So let $(q,q')$ be a state at which the algorithm computes some delay value $D$ and path value $R=(\alpha,\beta,\beta')$---see step~4. It is sufficient to show the following statements.
\begin{description} \item[S1] If $D$ is ZERO then $(\alpha w_{qq'},\beta z_{qq'},\beta'z_{qq'}')$ is a witness of $\trt$'s non-functionality. \item[S2] If $(q,q')$ is final and $D\in\{(\ew,u),(u,\ew)\}$, with $u$ nonempty, then $(\alpha,\beta,\beta')$ is a witness of $\trt$'s non-functionality. \item[S3] If $D$ is of the form $(s,s')$ and $((s_1,s'_1),(\alpha_1,\beta_1,\beta_1'))$ is the existing delay-path value of $(q,q')$ with $(s_1,s'_1)\not=(s,s')$, then one of the following triples is a witness of $\trt$'s non-functionality: $$(\alpha w_{qq'},\beta z_{qq'},\beta'z_{qq'}'), \>\>\> (\alpha_1 w_{qq'},\beta_1 z_{qq'},\beta_1'z_{qq'}').$$ \end{description} For statement S1, by Remark~\ref{rem1}, it suffices to show that $\beta z_{qq'}\not=\beta'z'_{qq'}$. First note that $D$ is ZERO exactly when there is a transition $((p,p'),(v,x,x'),(q,q'))$ such that state $(p,p')$ has a delay-path value $((y,y'),(w,z,z'))$ and $yx,y'x'$ are of the form $f\sigma g$ and $f\sigma'g'$, respectively, with $\sigma,\sigma'$ being distinct letters, and $\alpha=wv$, $\beta=zx$, $\beta'=z'x'$. Now using the above lemma we get, for some $h$ \pssi $\beta z_{qq'}=zxz_{qq'}=hyxz_{qq'}=hf\sigma gz_{qq'}$ and \pnsi $\beta' z'_{qq'}=z'x'z'_{qq'}=hy'x'z'_{qq'}=hf\sigma' g'z'_{qq'}$, \pssn which implies $\beta z_{qq'}\not=\beta'z'_{qq'}$, as required. \pssn For statement S2, by Remark~\ref{rem1}, it suffices to show that $\beta \not=\beta'$. By symmetry, we only consider the case of $D=(\ew,u)$. First note that $D$ is $(\ew,u)$ exactly when there is a transition $((p,p'),(v,x,x'),(q,q'))$ such that state $(p,p')$ has a delay-path value $((y,y'),(w,z,z'))$ and $y'x'=yxu$, and $\alpha=wv$, $\beta=zx$, $\beta'=z'x'$. Using the above lemma we get, for some $h$ \pssi $\beta =zx=hyx$ and $\beta' =z'x'=hy'x'=hyxu$, \pssn which implies $\beta \not=\beta'$, as required. \pssn For statement S3, we assume that $\beta z_{qq'}=\beta'z'_{qq'}$ and we show that $\beta_1 z_{qq'}\not=\beta_1' z'_{qq'}$.
Assume for the sake of contradiction that also $\beta_1 z_{qq'}=\beta_1' z'_{qq'}$. Using the above lemma we get, for some words $h$ and $h_1$ \pssi $\beta=hs,\,\beta'=hs',\,\beta_1=h_1s_1,\, \beta_1'=h_1s_1'$. \pssn Also by the assumptions we get $hsz_{qq'}=hs'z'_{qq'}$ and $h_1s_1z_{qq'}=h_1s_1'z'_{qq'}$, implying that $sz_{qq'}=s'z'_{qq'}$ and $s_1z_{qq'}=s_1'z'_{qq'}$. If $z_{qq'}=z'_{qq'}$ then $s=s'=\ew$ and $s_1=s_1'=\ew$, which is impossible as $(s,s')\not=(s_1,s_1')$. If $z_{qq'}$ is of the form $uz'_{qq'}$, for some word $u$ (or vice versa), then we get that $(s,s')=(s_1,s_1')$, which is again impossible. \pssn Regarding the size of the witness returned, consider again statements S1--S3 above. Then, the size of the witness is $\sz{(x,y,z)}+\sz{(w_{qq'},z_{qq'},z'_{qq'})}-2$, where $(w_{qq'},z_{qq'},z'_{qq'})$ could be $(\ew,\ew,\ew)$ and $(x,y,z)$ is a path value of state $(q,q')$: $(\alpha,\beta,\beta')$ or $(\alpha_1,\beta_1,\beta_1')$. As $(w_{qq'},z_{qq'},z'_{qq'})$ is based on a shortest path from $(q,q')$ to a final state of $\tru$, we have $\sz{(w_{qq'},z_{qq'},z'_{qq'})}<\sz{\tru}$. As the algorithm visits each transition of $\tru$ at most once, and $(x,y,z)$ is built by concatenating transition labels starting from label $(\ew,\ew,\ew)$, we have that the size of $(x,y,z)$ is bounded by the sum of the sizes of the transitions of $\tru$. Hence, the size of the witness is $O(\sz{\tru})$. The claim about the size of the witness follows from the fact that $\sz{\tru}=\Theta(\sz{\trt}^2)$. \end{proof} \section{Object Classes Representing Code Properties}\label{sec:codes} In this section we discuss our implementation of objects representing code properties. The set of all code properties is uncountable, but of course one can only implement countably many properties. So we are interested in systematic methods that allow one to {formally} describe code properties.
Three such formal methods are the implicational conditions of \cite{Jurg:1999}, where a property is described by a first-order formula of a certain type, the regular trajectories of \cite{Dom:2004}, where a property is described by a regular expression over $\{0,1\}$, and the transducers of \cite{DudKon:2012}, where a property is described by a transducer. These formal methods appear to be able to describe most properties of practical interest. The formal methods of regular trajectories and transducers are implemented here, as the transducer formal method follows naturally from our implementation of transducers, and every regular expression of the regular trajectory formal method can be converted efficiently to a transducer object of the transducer formal method. The implementation of implicational conditions is an interesting topic for future research. \pmsn Next we quickly review the formal methods of regular trajectories and transducers, and then discuss our implementation of these formal methods. \pmsn \emshort{Regular trajectory properties \cite{Dom:2004}.} In this formal method a regular expression $\ree$ over $\{0,1\}$ describes the code property $\pty_{\ree}$ as follows. The 0s in $\ree$ indicate positions of alphabet symbols that make up a word $v$, say, and the 1s in $\ree$ indicate positions of arbitrary symbols that, together with the 0s, make up a word $u$, say. A language $L$ satisfies $\pty_{\ree}$ if there are no two different words $u,v\in L$ such that $u$ has the structure indicated by $\ree$ and $v$ has a structure obtained by deleting the 1s from $\ree$. For example, the \emdef{infix code} property is defined by the regular expression $1^*0^*1^*$, which says that by deleting consecutive symbols at the beginning and/or at the end of an $L$-word $u$, one cannot get a different $L$-word. Equivalently, $L$ is an infix code if no $L$-word is an infix of another $L$-word. Note that $1^*0^*1^*$ describes all infix codes over all possible alphabets.
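To make the trajectory semantics concrete, the following self-contained Python sketch checks the property $\pty_{1^*0^*1^*}$ for a \emph{finite} language by brute force. It is only an illustration of the definition; the function names are ours and are not part of FAdo.

```python
def infixes(u):
    # The 1s of 1*0*1* delete a prefix and a suffix of u, so the word
    # spelled by the 0s is an infix (substring) of u.
    return {u[i:j] for i in range(len(u) + 1) for j in range(i, len(u) + 1)}

def satisfies_infix_property(lang):
    # A finite language satisfies the property described by 1*0*1*
    # exactly when no word of lang is an infix of a different word of lang.
    return not any(v != u and v in infixes(u) for u in lang for v in lang)
```

For example, \{\texttt{ab}, \texttt{ba}\} satisfies the property, whereas \{\texttt{ab}, \texttt{aab}\} does not, since \texttt{ab} is an infix of \texttt{aab}.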
\pmsn \emshort{Input-altering transducer properties \cite{DudKon:2012}.} A transducer $\trt$ is \emdef{input-altering} if, for all words $w$, $w\notin\trt(w)$. In this formal method such a transducer $\trt$ describes the code property $\iatp_{\trt}$ consisting of all languages $L$ over the input alphabet of $\trt$ such that \begin{equation}\label{eqIAT} \trt(L)\cap L = \emptyset. \end{equation} With this formal method we can define the \emdef{suffix code} property: $L$ is a suffix code if no $L$-word is a proper suffix of an $L$-word. The transducer defined in Example~\ref{exSuffixTran} is input-altering and describes the suffix code property over the alphabet \texttt{\{a, b\}}. Similarly, we can define the infix code property by making another transducer that, on input $w$, returns any proper infix of $w$. We note that, for every regular expression $\ree$ over $\{0,1\}$ and alphabet $\al$, one can construct in linear time an input-altering transducer $\trt$ with input alphabet $\al$ such that $\pty_{\ree}=\iatp_{\trt}$ \cite{DudKon:2012}. Thus, every regular trajectory property is an input-altering transducer property. \pmsn \emshort{Error-detecting properties via input-preserving transducers \cite{Kon:2002,DudKon:2012}.} A transducer $\trt$ is \emdef{input-preserving} if, for all words $w$ in the domain of $\rel(\trt)$, $w\in\trt(w)$. Such a transducer $\trt$ is also called a \emdef{channel transducer}, in the sense that an input message $w$ can be transmitted via $\trt$ and the output can always be $w$ (no transmission error), or a word other than $w$ (error). 
In this formal method the transducer $\trt$ describes the \emdef{error-detecting for $\trt$} property $\edp_{\trt}$ consisting of all languages $L$ over the input alphabet of $\trt$ such that \begin{equation}\label{eqIPT} \trt(w)\cap (L-w)=\emptyset,\>\>\hbox{ for all words $w\in L$.} \end{equation} The term error-detecting \emshort{for} $\trt$ is used in the sense that $L$ is meant to consist of all valid messages one can transmit via $\trt$, and $\trt$ cannot turn a valid message into a different valid message. We note that, for every input-altering transducer $\trt$, one can make in linear time a channel transducer $\trt'$ such that $\iatp_{\trt}=\edp_{\trt'}$ \cite{DudKon:2012}. Thus, every input-altering transducer property is an error-detecting property. \begin{EX}\label{exSub1} Consider the property \emph{1-substitution error-detecting code} over \texttt{\{a, b\}}, where error means the substitution of one symbol by another symbol. A classic characterization is that $L$ is such a code if and only if the Hamming distance between any two different words in $L$ is at least 2~\cite{Ham:1950}. The following channel transducer defines this property---see also Fig.~\ref{fig:ed}. The transducer will substitute at most one symbol of the input word with another symbol.
\begin{verbatim}
s1 = '@Transducer 0 1 * 0\n'\
     '0 a a 0\n'\
     '0 b b 0\n'\
     '0 b a 1\n'\
     '0 a b 1\n'\
     '1 a a 1\n'\
     '1 b b 1\n'
t1 = fio.readOneFromString(s1)
\end{verbatim}
\end{EX} \pssn We note that the transducer approach to defining error-detecting code properties is very powerful, as it allows one to model insertion and deletion errors, in addition to substitution errors---see Fig.~\ref{fig:ed}. Codes for such errors are actively under investigation---see~\cite{PAF:2013}, for instance. \begin{figure}[ht!]
\begin{transducer} \node [state,initial,accepting] (q0) {$0$}; \node [node distance=0.75cm,left=of q0,anchor=east] {$\trt_{\rm 1sub}\colon$}; \node [state,accepting,right of=q0] (q1) {$1$}; \node [state,initial,accepting,right of=q1] (q2) {$0$}; \node [node distance=0.75cm,left=of q2,anchor=east] {$\trt_{\rm 1id}\colon$}; \node [state,accepting,right of=q2] (q3) {$1$}; \path (q0) edge [loop above] node [above] {$\sigma/\sigma$} () (q0) edge node [above] {$\sigma/\sigma'$} (q1) (q1) edge [loop above] node [above] {$\sigma/\sigma$} () (q2) edge node [above] {$\sigma/\ew$} (q3) (q2) edge node [below] {$\ew/\sigma$} (q3) (q2) edge [loop above] node [above] {$\sigma/\sigma$} () (q3) edge [loop above] node [above] {$\sigma/\sigma$} (); \end{transducer} \begin{center} \parbox{0.85\textwidth}{\caption{On input $x$ the transducer $\trt_{\rm 1sub}$ outputs either $x$, or any word that results by substituting exactly one symbol in $x$. On input $x$ the transducer $\trt_{\rm 1id}$ outputs either $x$, or any word that results by deleting, or inserting, exactly one symbol in $x$. {Note}: The use of labels on arrows is explained in Fig.~\ref{fig:suffix}. } \label{fig:ed}} \end{center} \end{figure} \pmsn \emshort{Error-correcting properties via input-preserving transducers.} An input-preserving (or channel) transducer is used to describe the \emdef{error-correcting for $\trt$} property $\ecp_{\trt}$ consisting of all languages $L$ over the input alphabet of $\trt$ such that \begin{equation}\label{eqEC} \trt(v)\cap\trt(w)=\emptyset,\>\>\hbox{ for all distinct words $v,w\in L$.} \end{equation} The term error-correcting \emshort{for} $\trt$ is used in the sense that any message $w'$ received from an $L$-word via $\trt$ can result from only one such $L$-word, so if $w'\notin L$ then $w'$ can in principle be corrected to exactly one $L$-word. 
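For finite languages, conditions~(\ref{eqIPT}) and~(\ref{eqEC}) can be checked directly by representing the channel as a set-valued function. The Python sketch below (our own illustration, independent of FAdo) does this for a 1-substitution channel in the spirit of $\trt_{\rm 1sub}$; all function names are ours.

```python
def sub1_channel(w, alphabet='ab'):
    # Outputs of the channel on input w: w itself, or w with exactly
    # one symbol substituted by a different alphabet symbol.
    outs = {w}
    for i, c in enumerate(w):
        for s in alphabet:
            if s != c:
                outs.add(w[:i] + s + w[i + 1:])
    return outs

def is_error_detecting(lang, channel):
    # Eq. (eqIPT): channel(w) may not contain a different word of lang.
    return all(channel(w).isdisjoint(lang - {w}) for w in lang)

def is_error_correcting(lang, channel):
    # Eq. (eqEC): distinct words of lang have disjoint channel outputs.
    words = sorted(lang)
    return all(channel(words[i]).isdisjoint(channel(words[j]))
               for i in range(len(words)) for j in range(i + 1, len(words)))
```

In line with the Hamming-distance characterization of Example~\ref{exSub1}, the language \{\texttt{aaa}, \texttt{bbb}\} is both 1-substitution error-detecting and error-correcting, while \{\texttt{aa}, \texttt{ab}\} is neither.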
\begin{remark}\label{remECvsED} It can be shown that a language is error-correcting for $\trt$ if and only if it is error-detecting for any transducer realizing the relation $\rel(\trt^{-1})\compose\rel(\trt)$. \end{remark} \begin{remark}\label{rem3indep} All input-altering, error-detecting and error-correcting properties are 3-independences. \end{remark} \subsection{Implementation in FAdo.} We present now our implementation of the previously mentioned code properties. We have defined the Python classes \pmsi \texttt{TrajProp, IATProp, ErrDetectProp, ErrCorrectProp} \pmsn corresponding to the four types of properties discussed above. These four property types are described, respectively, by regular trajectory expressions, input-altering transducers, input-preserving transducers, and input-preserving transducers. In all four cases, given a transducer object, an object of the class is created. An object \ppi of the class \texttt{IATProp}, say, is defined via some transducer \tti and represents a particular code property, that is, the class of languages satisfying Eq.~(\ref{eqIAT}). \pssn The class \texttt{ErrDetectProp} is a {superclass} of the others. These classes and all related methods and functions are in the module \texttt{codes.py} and can be imported as follows. \begin{verbatim} import FAdo.codes as codes \end{verbatim} Although each of the above four classes requires a transducer to create an object of the class, we have defined a set of what we call \emdef{build functions} as a user interface for creating code property objects. These build functions are shown next in use with specific arguments from previous examples. \begin{EX}\label{exBuildFunctions} Consider again Examples \ref{exSuffixTran} and \ref{exSub1} in which the strings \texttt{s} and \texttt{s1} are defined containing, respectively, the proper suffixes transducer and the transducer permitting up to 1 substitution error. 
The following object definitions are possible with the FAdo package \begin{verbatim} icp = codes.buildTrajPropS('1*0*1*', {'a', 'b'}) scp = codes.buildIATPropS(s) s1dp = codes.buildErrorDetectPropS(s1) s1cp = codes.buildErrorCorrectPropS(s1) pcp = codes.buildPrefixProperty({'a', 'b'}) icp2 = codes.buildInfixProperty({'a', 'b'}) \end{verbatim} In the first statement, \texttt{icp} represents the infix code property over the alphabet \texttt{\{a, b\}} and is defined via the trajectory expression \texttt{1*0*1*}. In the next three statements, \texttt{scp, s1dp, s1cp} represent, respectively, the suffix code property, the 1-substitution error-detecting property and the 1-substitution error-correcting property. The last two statements are explained below---\texttt{pcp} and \texttt{icp2} represent the prefix code and infix code properties, respectively. \end{EX} \pmsn \emshort{Fixed properties.} Some properties are well known in the theory of codes, so we have created specific classes for these properties and, therefore, FAdo users need not write transducers, or trajectory regular expressions, for creating these properties. As before, users need only to know about the \texttt{build}-interfaces for creating objects of these classes. \pmsn \texttt{buildPrefixProperty(Sigma)}: returns an object of the class \texttt{PrefixProp} that represents all prefix codes over the alphabet \texttt{Sigma}. \pmsn \texttt{buildSuffixProperty(Sigma)}: returns an object of the class \texttt{SuffixProp} that represents all suffix codes over the alphabet \texttt{Sigma}. \pmsn \texttt{buildInfixProperty(Sigma)}: returns an object of the class \texttt{InfixProp} that represents all infix codes over the alphabet \texttt{Sigma}. Note that an infix code is both prefix and suffix and this fact is reflected in the class definition, by considering \texttt{InfixProp} a Python subclass of both \texttt{PrefixProp} and \texttt{SuffixProp}. 
\pmsn \texttt{buildOutfixProperty(Sigma)}: returns an object of the class \texttt{OutfixProp} that represents all outfix codes over the alphabet \texttt{Sigma}. A language $L$ is an \emdef{outfix code} if deleting an infix of an $L$-word cannot result in another $L$-word. Note that an outfix code is both prefix and suffix, and, as above, \texttt{OutfixProp} is a subclass of the corresponding Python classes. \pmsn \texttt{buildHypercodeProperty(Sigma)}: returns an object of the class \texttt{HypercodeProp} that represents all hypercodes over the alphabet \texttt{Sigma}. A language $L$ is a \emdef{hypercode} if deleting any symbols of an $L$-word cannot result in another $L$-word. Note that a hypercode is both infix and outfix and, as above, \texttt{HypercodeProp} is a subclass of the corresponding Python classes. \pmsn Each of the above methods creates internally a transducer whose input and output alphabets are equal to the given alphabet \texttt{Sigma}, and then passes the transducer to the constructor of the respective class. \pmsn \emshort{Combining code properties.} In practice it is desirable to be able to talk about languages that satisfy more than one code property. For example, most of the 1-substitution error-detecting codes used in practice are infix codes (in fact \emdef{block codes}, that is, those whose words are of the same length). We have defined the operation \verb1&1 between any two error-detecting properties independently of how they were created. This operation returns an object representing the class of all languages satisfying both properties. This object is constructed via the transducer that results by taking the union of the two transducers describing the two properties---see Rational Operations in Section~\ref{sec:transducers}.
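The role of the subclass relationships among the fixed-property classes can be sketched with a toy hierarchy. The code below is only a schematic illustration of the idea (the actual FAdo classes also store a transducer and an alphabet): when one operand of \verb1&1 is an instance of the other operand's class, the stronger property can be returned directly.

```python
class ErrDetectProp:
    # Schematic only: '&' returns the stronger operand when one
    # property class is a subclass of (i.e., implies) the other.
    def __and__(self, other):
        if isinstance(self, type(other)):
            return self          # self's property implies other's
        if isinstance(other, type(self)):
            return other         # other's property implies self's
        return ErrDetectProp()   # generic conjunction otherwise

class PrefixProp(ErrDetectProp): pass
class SuffixProp(ErrDetectProp): pass
class InfixProp(PrefixProp, SuffixProp): pass   # infix => prefix and suffix

pcp, icp2 = PrefixProp(), InfixProp()
assert (pcp & icp2) is icp2      # the infix property is the stronger one
```

The multiple inheritance of \texttt{InfixProp} mirrors the fact that every infix code is both a prefix code and a suffix code, so conjoining the infix property with either of the weaker ones simply yields the infix property again.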
\begin{EX} Using the properties \texttt{icp, s1dp} created above in Example~\ref{exBuildFunctions}, we can create the conjunction \verb2p12 of these properties, and using the properties \texttt{pcp, scp} we can create their conjunction \texttt{bcp}, which is known as the \emdef{bifix code property}.
\begin{verbatim}
p1 = icp & s1dp
bcp = pcp & scp
\end{verbatim}
\pnsn The object \verb2p12 represents the property $\ensuremath{\mathcal{P}}\xspace_{\ree}\cap\edp_{\texttt{s1}}$, where $\ree=\texttt{1*0*1*}$. It is of type \texttt{ErrDetectProp}. If, however, the two properties involved are input-altering then our implementation makes sure that the object returned is also of type input-altering---this is the case for \texttt{bcp}. This is important as the satisfaction problem for input-altering transducer properties can be solved more efficiently than the satisfaction problem for error-detecting properties---see Section~\ref{sec:code-methods}. \end{EX} \subsection{Aspects of Code Hierarchy Implementation}\label{sec:hierarchy} As stated above, our top Python superclass is \texttt{ErrDetectProp}. When viewed as a set of (potential) objects, this class implements the set of properties \begin{equation}\label{eqEDP} \edp \>=\> \{\edp_{\trt}\mid \trt\hbox{ is an input-preserving transducer}\} \end{equation} For any \texttt{ErrDetectProp} object \texttt{p}, let us denote by \rep{\texttt{p}} the property in $\edp$ represented by \texttt{p}. If \texttt{p}\; and \texttt{q}\; are any \texttt{ErrDetectProp} objects such that $\rep{\texttt{p}}\subseteq\rep{\texttt{q}}$ and we know that a language satisfies $\rep{\texttt{p}}$, then it follows logically that the language also satisfies $\rep{\texttt{q}}$ and, therefore, one does not need to execute the method \texttt{q.notSatisfiesW} on the automaton accepting that language. Similarly, as $\rep{\texttt{p}\&\texttt{q}}=\rep{\texttt{p}}$, the method invocation $\texttt{p}\&\texttt{q}$ should simply return \texttt{p}.
It is therefore desirable to have a method `$\po$' such that if $\texttt{p}\po\texttt{q}$ returns true then $\rep{\texttt{p}}\subseteq\rep{\texttt{q}}$. In fact, for \texttt{ErrDetectProp} objects, we have implemented the methods `$\&$' and `$\po$' in a way that the triple $(\texttt{ErrDetectProp},\&,\po)$ constitutes a syntactic hierarchy (see further below) which can be used to simulate the properties in~(\ref{eqEDP}). In practice this means that `$\&$' simulates intersection between properties and `$\po$' simulates subset relationship between two properties such that the following desirable statements hold true, for any \texttt{ErrDetectProp} objects \texttt{p, q} \pmsi \verb1p & p1 returns \verb1p1 \pssi $\verb1p1 \po \verb1q 1$ if and only if \verb1 p & q 1 returns \verb1p1 \pmsn We note that the syntactic simulation of the properties in Eq.~\eqref{eqEDP} is not complete (in fact it {cannot} be complete): for any \texttt{ErrDetectProp} objects \texttt{p, q} defined via transducers $\trt$ and $\trs$ with $\edp_{\trt}\subseteq\edp_{\trs}$ it does not always hold that $\verb1p1 \po \verb1q1$. On the other hand, our implementation of the set of the five fixed properties constitutes a complete simulation of these properties, when the same alphabet is used. This implies, for instance, that \pmsi \verb1pcp & icp2 1 returns \verb1 icp21 \pmsn where we have used the notation of Example~\ref{exBuildFunctions}. Our implementation associates to each object \texttt{p}\; of type \texttt{ErrDetectProp} a nonempty set \texttt{p}.ID of names. If \texttt{p}\; is a fixed property object, \texttt{p}.ID has one hardcoded name. If \texttt{p}\; is built from a transducer $\trt$, \texttt{p}.ID has one name, the name of $\trt$---this name is based on a string description of $\trt$. 
If \texttt{p}\; = \verb1q&r1, then \texttt{p}.ID is the union of \texttt{q}.ID and \texttt{r}.ID minus any fixed property name $N$ for which another fixed property name $M$ exists in the union such that the $M$-property is contained in the $N$-property. \pssi Next we give a mathematical definition of what it means to simulate a set of code properties $\psetq=\{\ensuremath{\mathcal{Q}}\xspace_j\mid j\in J\}$ via a syntactic hierarchy $(G,\sop,\po)$---see definition below---which can ultimately be implemented (as is the case here) in a standard programming language. The idea is that each $g\in G$ represents a property $\rep{g}=\ensuremath{\mathcal{Q}}\xspace_j$, for some index $j$, and $G$ is the set of generators of the semigroup $(\gene{G},\sop)$ whose operation `$\sop$' simulates the process of combining properties in $\psetq$, that is $\rep{x\sop y}=\rep x\cap\rep y$, and the partial order `$\po$' simulates subset relation between properties, that is $x\po y$ implies $\rep x\subseteq\rep y$, for all $x,y\in\gene G$. \pnsi The first result is that there is an efficient simulation of the set $\edp$ in Eq.~\eqref{eqEDP}---see Theorem~\ref{th:sim}. The second result is that there can be no \emph{complete} simulation of that set of properties, that is, a simulation such that $\rep x\subseteq\rep y$ implies $x\po y$, for all $x,y\in\gene G$---see Theorem~\ref{th:nonsim}. \pssn \begin{definition}\label{def:sh} A syntactic hierarchy is a triple $(G,\sop,\po)$ such that $G$ is a nonempty set and \begin{enumerate} \item $(\gene{G},\sop)$ is the commutative semigroup generated by $G$ with computable operation `$\sop$'. \item $(\gene{G},\po)$ is a decidable partial order (reflexive, transitive, antisymmetric). \item For all $x,y\in\gene G$, $x\po y$ implies $x\sop y=x$. \item For all $x,y\in\gene G$, $x\sop y\po x$. 
\end{enumerate} \end{definition} \pssn \begin{comment} In the above definition, computable `$\sop$' means that there is an algorithm that returns $x\sop y$, given the descriptions of any two elements $x,y\in\gene{G}$. Similarly, decidable $\po$ means that there is an algorithm that returns whether $x\po y$, given the descriptions of any two elements $x,y\in\gene{G}$. By description of the elements in $\gene{G}$ we mean any appropriate encoding of these elements, in the same way that we talk about algorithms on numbers and graphs without having to specify exactly the encodings of these objects. \end{comment} Next we list a few properties of the operation `$\sop$' and the order `$\po$'. \pssn \begin{lemma}\label{lemSH} The following statements hold true, for all $x,y,z\in\gene G$. \begin{enumerate} \item $x\po x$ and $x\sop x = x$. \item $x\po y$ if and only if $x=y\sop z$ for some $z\in\gene G$. \item $x=x\sop y$ if and only if $x\po y$. \item If $x\po y$ and $x\po z$ then $x\po y\sop z$. \item If $x=g_1\sop\cdots\sop g_n$, for some $g_1,\ldots,g_n\in G$, with all $g_i$'s distinct and $n\ge2$, then $x<g_1$ or $x<g_2$, and hence $x$ is not maximal. \item $x$ is maximal if and only if $x$ is prime (meaning, $x=u\sop v$ implies $x=u=v$). \end{enumerate} \end{lemma} \begin{proof} The proof is based on the previous definition and uses standard logical arguments. We present only proofs for the second and fourth statements. \pnsi The `if' part of the second statement follows from the fourth statement of the above definition, and the `only if' part follows from the third statement of the above definition. \pnsi For the fourth statement, using the fact that $x\sop(y\sop z)\po y\sop z$, it is sufficient to show that $x=x\sop(y\sop z)$. This follows when we note that $x\po y$ implies $x=x\sop y$ and $x\po z$ implies $x=x\sop z$.
\end{proof} \pssn \begin{definition}\label{def:sim} Let $\psetq=\{\ensuremath{\mathcal{Q}}\xspace_j\mid j\in J\}$ be a set of properties, for some index set $J$. A (syntactic) simulation of $\psetq$ is a quintuple $(G,\sop,\po,\rep{\>},\phi)$ such that $(G,\sop,\po)$ is a syntactic hierarchy and \begin{enumerate} \item $\rep{\>}$ is a surjective mapping of $\gene G$ onto $\psetq$; \item for all $x,y\in\gene G$, $x\po y$ implies $\rep x\subseteq \rep y$; \item for all $x,y\in\gene G$, $\rep{x\sop y}=\rep x\cap\rep y$; \item $\phi$ is a computable function of $J$ into $\gene G$ such that $\rep{\phi(j)}=\ensuremath{\mathcal{Q}}\xspace_j$. \end{enumerate} The simulation is called \emph{complete} if, for all $x,y$ \[ \rep x\subseteq \rep y\quad\hbox{implies}\quad x\po y. \] The simulation is called linear if $J$ has a size function $\sz{\cdot}$ and $\gene G$ has a size function $\szg{\cdot}$ such that $\szg{\phi(j)}=O(\sz{j})$, for all $j\in J$, and for all $x,y$ \[ \szg{x\sop y}\>=\>O(\szg{x}+\szg{y}). \] \end{definition} By a size function on a set $X$, we mean any function $f$ of $X$ into $\mathbb{N}_0$. \begin{theorem}\label{th:sim} There is a linear simulation of the set $$\{\edp_{\trt}\mid \trt\hbox{ is an input-preserving transducer}\}.$$ \end{theorem} \begin{proof} Let $\mathbf T$ be the set consisting of all finite sets of transducers. Let $T_1,T_2\in\mathbf T$. We define \pmsi $G = \{\{\trt\}\mid \trt\hbox{ is an input-preserving transducer}\}$. \pmsi $T_1\sop T_2=T_1\cup T_2$. \pmsi $T_1\po T_2$, if $T_2\subseteq T_1$. \pssn The above definitions imply that $\gene G$ consists of all $T$, where $T$ is a finite nonempty set of input-preserving transducers, and that indeed $(\gene G,\sop)$ is a commutative semigroup and $(\gene G,\po)$ is a partial order. Moreover one verifies that the last two requirements of Definition~\ref{def:sh} are satisfied. Thus $(G,\sop,\po)$ is a syntactic hierarchy. \pmsn Next we define the size function. 
For a finite nonempty set $T=\{\trt_1,\ldots,\trt_n\}$ of transducers, we denote by $\lor T$ the transducer $\trt_1\lor\cdots\lor\trt_n$ of size $O(\sum_1^n\sz{\trt_i})$ realizing $\rel(\trt_1)\cup\cdots\cup\rel(\trt_n)$. Then, define $\szg{T} = \sz{\lor T}$. One verifies that $\szg{T_1\sop T_2}=O(\szg{T_1}+\szg{T_2})$, as required. \pssn Next we use the syntactic hierarchy $(G,\sop,\po)$ to define the required simulation. First, define $\phi(\trt)=\{\trt\}$, for any input-preserving transducer $\trt$. Then, define \[ \rep{T} = \edp_{\lor T},\>\hbox{ which equals } \>\bigcap_{\trt\in T}\edp_{\trt}. \] One verifies that the requirements of Definition~\ref{def:sim} are satisfied. \end{proof} The next result shows that there can be no complete simulation of the set of error-detecting properties. This follows from the undecidability of the Post Correspondence Problem, using methods employed in establishing the undecidability of basic transducer-related problems \cite{Be:1979}. \begin{theorem}\label{th:nonsim} There is no complete simulation of the set of properties $$\{\edp_{\trt}\mid \trt\hbox{ is an input-preserving transducer}\}.$$ \end{theorem} Before we present the proof, we establish a few necessary auxiliary results. \begin{lemma}\label{lem:pties} For any input-preserving transducers $\trt,\trs$ we have \[ \edp_{\trt}\subseteq\edp_{\trs}\>\hbox{ if and only if }\> \rel(\trs\lor\trs^{-1})\subseteq\rel(\trt\lor\trt^{-1}). \] \end{lemma} \begin{proof} First note that Eq.~(\ref{eqIPT}) is equivalent to \[ (w,v)\in\rel(\trt)\>\hbox{ implies }\>w=v,\>\>\hbox{ for all $v,w\in L$.} \] This implies that, for all input-preserving transducers $\trt,\trs$, we have $\edp_{\trs}=\edp_{\trs^{-1}}$ and \begin{eqnarray} \rel(\trs)\subseteq\rel(\trt)&\hbox{ implies }&\edp_{\trt}\subseteq\edp_{\trs},\\ \edp_{\trs}&=& \edp_{\trs\lor\trs^{-1}}. \end{eqnarray} The statement of the lemma follows from the above observations using standard set-theoretic arguments.
\end{proof} \begin{lemma}\label{lem:undecide} The following problem is undecidable. \pssi Input: input-preserving transducers $\trt,\trs$ \pssi Return: whether $\rel(\trs\lor\trs^{-1})\subseteq\rel(\trt\lor\trt^{-1})$ \end{lemma} \begin{proof} We reduce the Post Correspondence Problem (PCP) to the given problem. Consider any instance $((u_i)_1^p,\,(v_i)_1^p)$ of PCP which is a pair of sequences of $p$ nonempty words over some alphabet $B$, for some positive integer $p$. As mentioned before, we use tools that have been used in showing the undecidability of basic transducer problems \cite{Be:1979}. In particular, we have that the given instance is a YES instance if and only if $U^+\cap V^+\not=\emptyset$, where \[ U=\{(ab^i,u_i)\mid i=1,\ldots,p\}\>\hbox{ and }\> V=\{(ab^i,v_i)\mid i=1,\ldots,p\}, \] and we make no assumption about the intersection of the alphabets $B$ and $\{a,b\}$. Here we define the following objects \begin{eqnarray} C &=& \{ab,ab^2,\ldots,ab^p\}\\ \diag(L) &=& \{(x,x)\mid x\in L\},\>\hbox{for any language $L$}\\ D &=& \diag(C^+)\cup\diag(aaB^+)\\ X &=& (\ew,aa)U^+\cup D\\ Y &=& ((C^+\times aaB^+)-(\ew,aa)V^+)\cup D\\ \trt &=& \>\hbox{ any transducer realizing X}\\ \trs &=& \>\hbox{ any transducer realizing Y} \end{eqnarray} We use rational relation theory to show the following claims, which establish the required reduction from PCP to the given problem. \pssi C1: $X$ and $Y$ are rational relations \pssi C2: $\trt$ and $\trs$ are input preserving \pssi C3: $U^+\cap V^+\not=\emptyset\>$ if and only if $(X\cup X^{-1})\subseteq(Y\cup Y^{-1})$ \pssn \emph{Claim C1:} First note that, as $C^+$ and $aaB^+$ are regular languages, we have that $D$ is a rational relation. In \cite{Be:1979}, the author shows that $U^+$ and $((C^+\times B^+)-V^+)$ are rational relations. It follows then that $X$ is a rational relation. 
Similarly, rational is also the relation \[ (\ew,aa)((C^+\times B^+)-V^+)=((C^+\times aaB^+)-(\ew,aa)V^+), \] which implies that $Y$ is rational as well. \pssn \emph{Claim C2:} From the previous claim there are transducers (in fact effectively constructible) $\trt$ and $\trs$ realizing $X$ and $Y$, respectively. Note that the domains of both transducers are equal to $C^+\cup aaB^+$. The fact that $(x,x)\in\rel(\trt)\cap\rel(\trs)$ for all $x\in C^+\cup aaB^+$ implies that both transducers are indeed input-preserving. \pssn \emph{Claim C3:} First it is easy to confirm that $X^{-1}\subseteq Y^{-1}$ if and only if $X\subseteq Y$ (in fact for any relations $X$ and $Y$). Also the fact that $C^+\cap aaB^+=\emptyset$ implies that $(\ew,aa)U^+$ is disjoint from the sets \[ ((\ew,aa)U^+)^{-1},\>D,\>((C^+\times aaB^+)-(\ew,aa)V^+)^{-1} \] and similarly $((C^+\times aaB^+)-(\ew,aa)V^+)$ is disjoint from the same sets. The above observations imply that \pmsi $(X\cup X^{-1})\subseteq(Y\cup Y^{-1})$ if and only if \pssi $(\ew,aa)U^+\subseteq ((C^+\times aaB^+)-(\ew,aa)V^+)$ if and only if \pssi $(\ew,aa)U^+ \cap (\{a,b\}^*\times B^*)-((C^+\times aaB^+)-(\ew,aa)V^+)=\emptyset$ if and only if \pssi $(\ew,aa)U^+ \cap (\ew,aa)V^+=\emptyset$ if and only if \pssi $U^+\cap V^+=\emptyset$, as required. \end{proof} \begin{proof} (of Theorem~\ref{th:nonsim}.) For the sake of contradiction, assume there is a complete simulation $(G,\sop,\po,\rep{\>},\phi)$ of the given set of properties. Then, for all $x,y\in\gene G$, we have \[ \rep x\subseteq\rep y\>\hbox{ implies }\> x\po y. \] We derive a contradiction by showing that the problem in Lemma~\ref{lem:undecide} is decidable as follows. \pmsi 1. Let $x = \phi(\trt)$ \pssi 2. Let $y = \phi(\trs)$ \pssi 3. if $y\po x$: return YES \pssi 4. 
else: return NO \pmsn The correctness of the `if' clause is established as follows: $y\po x$ implies $\rep y\subseteq\rep x$, which implies $\edp_{\trt}\subseteq\edp_{\trs}$, and then $\rel(\trs\lor\trs^{-1})\subseteq\rel(\trt\lor\trt^{-1})$, as required. The correctness of the `else' clause is established as follows: first note $y\not\po x$. We need to show $\rel(\trs\lor\trs^{-1})\not\subseteq\rel(\trt\lor\trt^{-1})$. For the sake of contradiction, assume the opposite. Then, $\edp_{\trt}\subseteq\edp_{\trs}$, which implies $\rep y\subseteq\rep x$, and then (by completeness) $y\po x$, which is a contradiction. \end{proof} \section{Methods of Code Property Objects}\label{sec:code-methods} In the context of the research on code properties, we consider the following three algorithmic problems as fundamental. \begin{description} \item[\emdef{Satisfaction problem}.] Given the description of a code property and the description of a language, decide whether the language satisfies the property. In the \emdef{witness version} of this problem, a negative answer is also accompanied by an appropriate set of words showing how the property is violated. \item[\emdef{Maximality problem}.] Given the description of a code property and the description of a language $L$, decide whether the language is maximal with respect to the property. In fact we allow the more general problem, where the input includes also the description of a second language $M$ and the question is whether there is no word $w\in M\setminus L$ such that $L\cup w$ satisfies the property. The default case is when $M=\al^*$. In the \emdef{witness version} of this problem, a negative answer is also accompanied by any word $w$ that can be added to the language $L$. \item[\emdef{Construction problem}.] Given the description of a code property and two positive integers $n$ and $\ell$, construct a language that satisfies the property and contains $n$ words of length $\ell$ (if possible). 
\end{description} It is assumed that the code property can be implemented as \ppi via a transducer $\trt$ (whether input-altering or input-preserving) and, in the first two problems, the language is given via an NFA $\aut$. In the maximality problem, the second language $M$ is given via an NFA $\autb$. In fact one can give a language via a regular expression, in which case it is converted to an automaton. Next we present the implementation of methods for the satisfaction and maximality problems. We discuss briefly the construction problem in the last section of the paper. \pbsn \emshort{Methods} \texttt{p.satisfiesP(a)} \pmsn The satisfaction problem is decidable in time $O(\sz{\trt}\sz{\aut}^2)$, if the property \ppi is of the input-altering transducer type. This follows from Eq.~(\ref{eqIAT}) and can be implemented as follows
\begin{verbatim}
c = t.runOnNFA(a)
return (a & c).emptyP()
\end{verbatim}
For the case of \ppi being the error-detecting property, the transducer $\trt$ is input-preserving and Eq.~(\ref{eqIPT}) is decided via a transducer functionality test~\cite{Kon:2002}. In FAdo this test can be implemented as follows, where the method \texttt{functionalP()} returns whether the transducer is functional.
\begin{verbatim}
s = t.inIntersection(a)
return s.outIntersection(a).functionalP()
\end{verbatim}
\begin{comment} The time complexity for the satisfaction of error-detection is $O(\sz{\trt}^2\sz{\aut}^4)$, as the transducer functionality algorithm is used on a transducer of size $O(\sz{\trt}\cdot\sz{\aut}^2)$---this is the transducer that results by intersecting the given transducer twice with the given automaton. \pmsn \end{comment} For the case of \ppi being the error-correcting property, again the given transducer is input-preserving and Eq.~(\ref{eqEC}) is decided via a transducer functionality test~\cite{Kon:2002}.
In FAdo this test can be implemented as follows
\begin{verbatim}
s = t.inverse()
return s.outIntersection(a).functionalP()
\end{verbatim}
\begin{comment} The time complexity for the satisfaction of error-correction is $O(\sz{\trt}^2\sz{\aut}^2)$, as the transducer functionality algorithm is used on a transducer of size $O(\sz{\trt}\cdot\sz{\aut})$. \pbsn \end{comment}\pmsn \emshort{Method} \texttt{p.maximalP(a, b)} \pmsn The maximality problem is decidable but PSPACE-hard \cite{DudKon:2012}. In particular, for the case of both input-altering transducer and error-detecting properties, the decision algorithm is very simple: the language $\lang(\aut)$ is \ppin-maximal (within $\lang(\autb)$) if and only if \begin{equation}\label{eq-max} \lang(\autb)\setminus (\lang(\aut)\cup\trt(\aut)\cup\trt^{-1}(\aut))=\emptyset. \end{equation} The above emptiness test is implemented in the method \texttt{p.maximalP(a, b)}, which returns \true or \falsen, and uses standard NFA methods as well as the transducer methods \texttt{t.inverse()} and \texttt{t.runOnNFA(a)}. For the case of error-correcting properties, our implementation makes use of Remark~\ref{remECvsED}. \pbsn \emshort{Methods with witnesses:} \texttt{p.notMaximalW(a, b)} \pmsn It can be shown that any word belonging to the set shown in Eq.~(\ref{eq-max}) can be added to $\lang(\aut)$ and the resulting language will still satisfy the property \cite{DudKon:2012}. This word can serve as a witness of the non-maximality of $\lang(\aut)$. If no such word exists, the method returns \nonen. \pbsn \emshort{Methods with witnesses:} \texttt{p.notSatisfiesW(a)} \pmsn For input-altering transducer and error-detecting properties, the witness version of the method \texttt{p.satisfiesP(a)} returns either a pair of {\em different} words $u,v\in\lang(\aain)$ violating the property, that is, $v\in\ttin(u)$ or $u\in\ttin(v)$, or the pair $(\nonen, \nonen)$.
In the former case, the pair $(u,v)$ is called a \emdef{witness of the non-satisfaction of \ppi by} the language $\lang(\aain)$. For error-correcting properties \ppin, a witness of non-satisfaction by $\lang(\aain)$ is a triple of words $(z,u,v)$ such that $u,v\in\lang(\aain)$ and $u\not=v$ and $z\in\ttin(u)\cap\ttin(v)$. Next we discuss how to accomplish this by changing the implementations of \texttt{p.satisfiesP(a)} shown before. \pmsn \begin{description} \item[Case 1:] For input-altering transducer properties, we have the Python code
\begin{verbatim}
return t.inIntersection(a).outIntersection(a).nonEmptyW()
\end{verbatim}
Recall from Section~\ref{sec:transducers} that the above returns (if possible) a witness for the nonemptiness of the transducer \tti when the input and output parts of \tti are intersected by the language $\lang(\aain)$. This witness corresponds to the emptiness test in Eq.~(\ref{eqIAT}), as required. \pmsn \item[Case 2:] For error-detecting properties, the defining transducer is a channel (input-preserving) and, therefore, we use the method \texttt{nonFunctionalW()} instead of \texttt{nonEmptyW()}. More specifically, we use the code
\begin{verbatim}
u, v, w = t.inIntersection(a).outIntersection(a).nonFunctionalW()
if u == v:
    return u, w
else:
    return u, v
\end{verbatim}
If the method \texttt{nonFunctionalW()} returns a triple of words $(u,v,w)$ then, by Proposition~\ref{resNonFunc} and the definitions of the \texttt{in/out} intersection methods, we have that $v\not=w$, $v\in\ttin(u)$, $w\in\ttin(u)$ and all three words are in $\lang(\aain)$. This implies that at least one of $u\not=v$ and $u\not=w$ must be true and, therefore, the returned value is the pair $(u,v)$ or $(u,w)$. Moreover, the returned pair indeed violates the property. Conversely, if the non-functionality method returns a triple of \nonen{}s then the constructed transducer is functional.
Then $\lang(\aain)$ must satisfy the property; otherwise, any two different words $v,w\in\lang(\aain)$ violating the property could be used to make the triple $(v,v,w)$, or $(w,w,v)$, which would serve as a witness of the non-functionality of the transducer. \pmsn \item[Case 3:] For error-correcting properties, we use again the non-functionality witness method.
\begin{verbatim}
return t.inverse().outIntersection(a).nonFunctionalW()
\end{verbatim}
For the correctness of this algorithm, one argues similarly as in the previous case. \end{description} \begin{comment} If the method \texttt{nonFunctionalW()} returns a triple of words $(z,u,v)$ then $u,v\in\lang(\aain)$, $u\not=v$ and $u,v\in\ttin^{-1}(z)$ which implies that $z\in\ttin(u)\cap\ttin(v)$ and, therefore, $\lang(\aain)$ is not error-correcting for \ttin. Conversely, if the non-functionality method returns a triple of \nonen{}s then the constructed transducer is not functional and $\lang(\aain)$ must satisfy the property, otherwise any triple of words $(z,u,v)$... \end{comment} \pmsn The above discussion establishes the following consequence of Proposition~\ref{resNonFunc} and the definitions of product constructions in Section~\ref{sec:transducers}. \begin{proposition} The algorithms implemented in the three forms (input-altering transducer, error-detecting, error-correcting) of the method {\rm \texttt{p.notSatisfiesW(a)}} correctly return a witness of the non-satisfaction of the property {\rm\ppin} by the language {\rm $\lang(\aain)$}. \end{proposition} \begin{EX}\label{exIPython1} The following Python interaction shows that the language $a^*b$ is a prefix and 1-error-detecting code. Recall from previous examples that the Python strings \texttt{st, s1} contain, respectively, the descriptions of an NFA accepting $a^*b$, and a channel transducer that performs up to one substitution error when given an input word.
\begin{verbatim}
>>> a = fio.readOneFromString(st)
>>> pcp = codes.buildPrefixProperty({'a','b'})
>>> s1dp = codes.buildErrDetectPropS(s1)
>>> p2 = pcp & s1dp
>>> p2.notSatisfiesW(a)
(None, None)
\end{verbatim}
\end{EX} \section{Uniquely Decodable/Decipherable Codes}\label{sec:UDCode} The property of unique decodability or decipherability, \emdef{UD code property} for short, is historically probably the first property of interest in coding theory, from the points of view of both information theory \cite{Sardinas:Patterson} and formal language theory \cite{Sch:1955,Niv:1966}. A language $L$ is said to be a UD code if, for any two sequences $(u_i)_1^n$ and $(v_j)_1^m$ of $L$-words such that $u_1\cdots u_n=v_1\cdots v_m$, we have that $n=m$ and the two sequences are identical. In simpler terms, every word in $L^*$ can be decomposed in exactly one way as a sequence of $L$-words. In this section we describe our implementation of the satisfaction and maximality methods for the UD~code property, as the corresponding methods for error-detecting properties discussed above are not applicable to the UD~code property. \begin{remark}\label{rem:UDcodes} In \cite{JuKo:handbook}, as a consequence of a more general result, it is shown that the UD~code property is not an $n$-independence for any $n<\aleph_0$. Thus, this property is not an error-detecting property, so it is not describable by any input-preserving transducer. A specific argument showing that this property is not a 3-independence is as follows: the language $L=\{a,ab,ba\}$ is not a UD~code, as $a(ba)=(ab)a$, but every subset of $L$ having $<3$ elements is a UD~code. \end{remark} We assume that \aai is an NFA object without $\ew$-transitions.
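The argument in Remark~\ref{rem:UDcodes} is easy to check directly for small finite languages. The following plain Python sketch (for illustration only; it is independent of FAdo and of our implementation) enumerates all decompositions of a word into $L$-words:

```python
def factorizations(word, code):
    """Return all ways to write word as a concatenation of code-words,
    each decomposition given as a tuple of code-words."""
    if word == "":
        return [()]
    results = []
    for c in code:
        if c and word.startswith(c):
            for rest in factorizations(word[len(c):], code):
                results.append((c,) + rest)
    return results

L = ["a", "ab", "ba"]
# a(ba) = (ab)a, so L is not a UD code:
print(factorizations("aba", L))  # [('a', 'ba'), ('ab', 'a')]
```

Over each two-element subset of $L$ the word $aba$ has at most one decomposition, consistent with the remark that every subset of $L$ with fewer than 3 elements is a UD~code.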
One can create the UD code property using the following syntax
\begin{verbatim}
p = codes.buildUDCodeProperty(a.Sigma)
\end{verbatim}
\pssn \emshort{The method} \texttt{p.notSatisfiesW(a)}\pnsn The satisfaction problem for this property was first discussed in the well-known paper \cite{Sardinas:Patterson}, where the authors produce a necessary and sufficient condition for a finite language to be a UD code---we note that some authors feel that this condition does not clearly yield an algorithm; see for instance \cite{Markov:62,Levenshtein:62}. Over the years faster algorithms were established and the problem was generalized to the case where the language in question is regular. To our knowledge, the asymptotically most efficient algorithms for regular languages are given in \cite{Head:Weber:decision,McC:1996}, and they are both of quadratic time complexity. Our implementation follows the algorithm in \cite{Head:Weber:decision}, as that approach makes use of the transducer functionality algorithm. As before we enhance that algorithm to produce a \emdef{witness of non-satisfaction} which, given an NFA object $\aain$, either returns the pair (\nonen, \nonen) if $\lang(\aain)$ is a UD code, or a pair of two different lists of $\lang(\aain)$-words such that the concatenation of the words in each list produces the same word (if $\lang(\aain)$ is not a UD code). \pssn We now describe the algorithm in \cite{Head:Weber:decision} modified appropriately to return a witness of non-satisfaction. Again, the heart of the algorithm is the transducer functionality method. Let $\aut = (Q,\al,T,I,F)$ be the given NFA (with no $\ew$-transitions). \begin{enumerate} \item If any of the initial states is final, then (as $\ew$ is in the language) return $(\,[\ew], [\ew,\ew]\,)$.
\item Construct the transducer $\trt = (Q,\al,\{0,1\},T',I,I)$ in which the transition set $T'$ is defined by the following process: If $(p,\sigma,q)\in T$ then $(p,\sigma/0,q)\in T'$ and, if in addition $q\in F$ then also $(p,\sigma/1,i)\in T'$, for every $i\in I$. Note that the domain of the transducer is exactly the language $\lang(\aut)^*$. \item Let \texttt{w, x, y = t.nonFunctionalW()} \item If any of \texttt{w, x, y} is \none then return (\nonen, \nonen) \item At this point, we know that $w\in\lang(\aut)^+$, $x$ and $y$ are different and each one is of the form $$0^{r_1}1\cdots 0^{r_n}1.$$ Moreover, each of $x$ and $y$ corresponds to a decomposition of $w$ in terms of $\lang(\aut)$-words. More specifically, the binary word $0^{r_1}1\cdots 0^{r_n}1$ encodes a sequence of words $(w_i)_1^n$ such that their concatenation is equal to $w$ and each $w_i$ is the infix of $w$ that starts at position $s_i$ and ends at position $s_i+r_i$, where $s_1=0$ and $r_i=|w_i|-1$ and $s_{i+1}=s_i+r_i+1$. For example, if $w=ababab$ and $x=010001$, then the decomposition is $(ab,abab)$. The algorithm returns the two lists of words corresponding to $x$ and $y$. \end{enumerate} \begin{EX} The following Python interaction produces a witness of the non-satisfaction of the UD code property by the language $\{ab, abba, bab\}$.
\begin{verbatim}
>>> L = fl.FL(['ab', 'abba', 'bab'])
>>> a = L.toNFA()
>>> p = codes.buildUDCodeProperty(a.Sigma)
>>> p.notSatisfiesW(a)
(['ab', 'bab', 'abba', 'bab'], ['abba', 'bab', 'bab', 'ab'])
\end{verbatim}
The two word lists are different, but the concatenation of the words in each list produces the same word. \end{EX} \pssn \emshort{The method} \texttt{p.maximalP(a)}\pssn This method is based on the fundamental theorem of Sch\"utzenberger \cite{BePeRe:2009} that a regular UD code $L$ is maximal if and only if the set of all infixes of $L^*$ is equal to $\al^*$. Using the tools implemented in FAdo this test can be performed as follows.
\begin{verbatim}
t = infixTransducer(a.Sigma, True)
b = a.star()
return (~(t.runOnNFA(b))).emptyP()
\end{verbatim}
The first statement above returns a transducer $\trt$ that, on input $w$, outputs any infix of $w$. The second statement returns an NFA accepting $\lang(\aain)^*$, and the last statement returns whether the complement of the set of all infixes of $\lang(\aut)^*$ is empty. \section{LaSer and Program Generation}\label{sec:laser} The first version of LaSer \cite{DudKon:2012} was a self-contained set of C++ automaton and transducer methods as well as a set of Python and HTML documents with the following functionality: a client uploads a file containing an automaton in Grail format and a file containing either a trajectory automaton or an input-altering transducer, and LaSer would respond with an answer to the witness version of the satisfaction problem for input-altering transducer properties. The new version, which we discuss here, is based on the FAdo set of automaton and transducer methods and allows clients to request a response about the witness versions of the satisfaction and maximality problems for input-altering transducer, error-detecting and error-correcting properties. \pmsn We call the above type of functionality, where LaSer computes and returns the answer, the \emdef{online service} of LaSer. Another feature of the new version of LaSer, which we believe to be original in the community of software on automata and formal languages, is the \emdef{program generation service}. This is the capability to generate a self-contained Python program that can be downloaded to the client's machine and executed on that machine, thus returning the desired answer. This feature is useful as the execution of certain algorithms, even of polynomial time complexity---see the error-detection satisfaction problem---can be quite time consuming for server software like LaSer. \pssn The user interface of LaSer is very simple---see Fig.~\ref{figInterface}.
\begin{figure}[!ht] \begin{center} \scalebox{0.8}{\includegraphics[trim = 0.5in 5.75in 0.5in 1in, clip]{laser.pdf}} \parbox{0.85\textwidth}{\caption{The main user interface of LaSer}\label{figInterface}} \end{center} \end{figure} The user provides the file containing the automaton whose language will be tested for a question (satisfaction or maximality) about some property. When the decision question is chosen, the user is asked to enter the type of property, which can be one of fixed, trajectory, input-altering transducer, error-detecting, error-correcting. Then, the user either clicks on the button ``Submit Request'' or on the button ``Generate Program''. \pmsn Next we present some parts of the program generation module. The program to be generated will contain code to answer one of the satisfaction or maximality questions for any of the properties regular trajectory (TRAJECT), input-altering transducer (INALT), error-detecting (ERRDET), error-correcting (ERRCORR), or any of the fixed properties. First a Python dictionary is prepared to enable easy reference to the property type to be processed, and another dictionary for the question to be answered, as follows
\begin{verbatim}
buildName = {"CODE":    ("buildUDCodeProperty", ["a.Sigma"], 1),
             "ERRCORR": ("buildErrorCorrectPropS", ["t"], 1),
             "ERRDET":  ("buildErrorDetectPropS", ["t"], 1),
             "HYPER":   ("buildHypercodeProperty", ["a.Sigma"], 1),
             "INALT":   ("buildIATPropS", ["t"], 1),
             "INFIX":   ("buildInfixProperty", ["a.Sigma"], 1),
             "OUTFIX":  ("buildOutfixProperty", ["a.Sigma"], 1),
             "PREFIX":  ("buildPrefixProperty", ["a.Sigma"], 1),
             "SUFFIX":  ("buildSuffixProperty", ["a.Sigma"], 1),
             "TRAJECT": ("buildTrajPropS", ["$strexp", "$sigma"], 1)
            }
\end{verbatim}
\begin{verbatim}
tests = {"MAXP": "maximalP",
         "MAXW": "notMaximalW",
         "SATP": "satisfiesP",
         "SATW": "notSatisfiesW"}
\end{verbatim}
Next we show the actual program generation function, which takes as parameters the property name (\texttt{pname}), which must be one appearing in the dictionary \texttt{buildName}, the question to answer (\texttt{test}), the file name for the automaton (\texttt{aname}), the possible trajectory regular expression string and alphabet (\texttt{strexp} and \texttt{sigma}), and the possible file name for the transducer (\texttt{tname}). The function returns a list of strings that constitute the lines of the desired Python program---we have omitted here the initial lines that contain the commands to import the required FAdo modules.
\begin{verbatim}
01 def program(pname, test=None, aname=None, strexp=None, sigma=None, tname=None):
02     def expand(s):
03         s1 = Template(s)
04         return s1.substitute(strexp=strexp, sigma=sigma)
\end{verbatim}
\begin{verbatim}
05     l = list()
06     l.append("ax = \
07     l.append("a = readOneFromString(base64.b64decode(ax))\n")
08     if buildName[pname][2] == 1:
09         if tname:
10             l.append("tx = \
11             l.append("t = base64.b64decode(tx)\n")
12         s = "p = " + buildName[pname][0] + "("
13         for s1 in buildName[pname][1]:
14             if s1 == "$strexp":
15                 s += "t, "
16             else:
17                 s +=
18         s = s[:-1] + ")\n"
19         l.append(s)
20         l.append("print p
21     else: ...............
\end{verbatim}
We refer to the lines above as meta-lines, as they are used to generate lines of the desired Python program. Meta-line~06 above generates the first line of the program, which would read the automaton file into a string \texttt{ax} that is encoded in binary to allow for safe transmission and reception over different operating systems. The next meta-line generates the line that would create an NFA object from the decoded string \texttt{ax}. The if part of the \texttt{if-else} statement is the one we use in this paper---the else part is for other LaSer questions.
If there is a transducer file then, as in the previous case of the automaton file, LaSer generates a line that would create an SFT object from the encoded file. Then, meta-lines~12--18 generate a line that would create the property \texttt{p} to be processed, using the appropriate build-property function. Finally, meta-line~20 generates the line that would print the result of invoking the desired method \texttt{test} of the property \texttt{p}. \section{Concluding Remarks}\label{sec:conclude} We have presented a simple-to-use set of methods and functions that allow one to define many natural code properties and obtain answers to the satisfaction and maximality problems with respect to these properties. This capability relies on our implementation of basic transducer methods, including our witness version of the non-functionality transducer method, in the FAdo set of Python packages. We have also produced a new version of LaSer that allows clients to inquire about error-detecting and -correcting properties, as well as to generate programs that can be executed and provide answers at the client site. \pmsn There are a few important directions for future research. First, the existing implementation of transducer objects is not always efficient when it comes to describing code properties. For example, the transducer defined in Example~\ref{exSub1} consists of 6 transitions. In general, if the alphabet is of size $s$, then that transducer would require $s+s(s-1)+s=s^2+s$ transitions. However, a symbolic notation for transitions, say of the form
\begin{verbatim}
0 @s @s 0
0 @s @s' 1
1 @s @s 1
\end{verbatim}
is more compact and can possibly be used by modifying the appropriate transducer methods. In this hypothetical notation, \verb1@s1 represents any alphabet symbol and \verb1@s'1 represents any alphabet symbol other than the previous one.
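To make the saving concrete, here is a rough plain-Python sketch (not part of FAdo; the notation is the hypothetical one above) of how the three symbolic lines could be expanded into concrete transitions. For an alphabet of size $s$ the expansion yields exactly $s+s(s-1)+s=s^2+s$ transitions:

```python
def expand_symbolic(sigma):
    """Expand the three symbolic transitions
         0 @s @s  0
         0 @s @s' 1
         1 @s @s  1
    into concrete transitions (state, input, output, state)."""
    trans = []
    for a in sigma:
        trans.append((0, a, a, 0))          # 0 @s @s  0
        for b in sigma:
            if b != a:
                trans.append((0, a, b, 1))  # 0 @s @s' 1
        trans.append((1, a, a, 1))          # 1 @s @s  1
    return trans

print(len(expand_symbolic(["a", "b"])))  # 6 transitions, as in Example exSub1
```

The symbolic description stays at three lines no matter how large the alphabet is, while the expanded transducer grows quadratically.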
Of course the syntax of such transducer descriptions has to be defined carefully so as to be as expressive as possible and at the same time efficiently utilizable in transducer methods. We note that a similar idea for symbolic transducers is already investigated in \cite{Vea:2013}. \pssi Formal methods for defining code properties need to be better understood, or new ones need to be developed, with the aim of ultimately implementing these properties and efficiently answering the satisfaction problem. Moreover, these methods should be capable of expressing properties that cannot be expressed via the transducer methods considered here. In particular, all transducer properties in this work are independences with parameter $n=3$, so they do not include, for instance, the comma-free code property. A language $L$ is a comma-free code~\cite{Shyr:book} if \[ LL\cap \al^+ L\al^+=\emptyset. \] The formal method of~\cite{Jurg:1999} is quite expressive, using a certain type of first-order formulae to describe properties. It could perhaps be further worked out in a way that some of these formulae can be mapped to finite-state machine objects that are, or can be, implemented in available formal language packages like FAdo. We also note that if the defining method is too expressive then even the satisfaction problem could become undecidable---see for example the method of multiple sets of trajectories in \cite{DomSal:2006}. \pmsn We consider the construction problem to be the Holy Grail of coding theory. We believe that as long as the satisfaction problem is efficiently decidable one can use an algorithmic approach to address the problem to some extent. For example, we have already implemented a randomized algorithm that starts with an initial small language $L$ (in fact a singleton one) and then performs a loop in which it does the following task.
It randomly picks words $w$ of length $\ell$ that are not in $L$ until either $L\cup w$ satisfies the property, in which case $L$ becomes $L\cup w$, or no such $w$ has been found after a maximum number MAX of trials. Although this is a simple approach, we feel that it can lead to further developments with respect to the construction problem. \bibliographystyle{plain} \input ppr.bbl \end{document}
\section{Introduction} Let $(M^n, \partial M, g)$ be a smooth Riemannian manifold with boundary. Consider the following basic question: \begin{ques} Let $\SS \subset \mathcal R$ be a subset of the space of curvature tensors on $M$. Does there exist a conformally related metric $\hat{g} = e^{-2u} g$ such that $u_{|\partial M} \equiv 0$ and the curvature tensor of $\hat{g}$ is in $\SS$? \end{ques} \noindent It can also be interesting to ask the question with no boundary restriction on $u$, or to ask for $\hat{g}$ to be complete on the interior of $M$. More specifically, we can ask: Does there exist a conformally related metric with positive/negative scalar curvature, Ricci tensor, Schouten tensor, sectional curvature, curvature operator? On closed manifolds, the answer to all of these questions is ``no'' in full generality, due to the maximum principle. In the other direction, we note that due to Gromov's h-principle for open diffeomorphism-invariant differential relations on open manifolds, the answer to all of these questions is ``yes'' if $\SS$ is open and we allow ourselves to consider \emph{all} metrics (see \cite{EM}, \cite{Geiges}). The restriction to a conformal class is interesting in its own right, and can have implications the soft methods do not yield, such as the existence of a metric compatible with a given almost-complex structure with certain curvature properties. Furthermore, our methods below produce \emph{complete} metrics of negative Ricci curvature on manifolds with boundary, a conclusion certainly not forthcoming from the soft methods. Our first theorem is a solution to the Dirichlet problem for the $\sigma_k$-Ricci curvature problem on manifolds with boundary. Before stating the theorem we need a definition. \begin{defn} \label{conedefs} If $A$ is a symmetric matrix, then $\sigma_k(A)$ is the $k$-th elementary symmetric polynomial in the eigenvalues of $A$.
Furthermore, let $\Gamma_k^+$ be the connected component of the set $\{\sigma_k > 0 \}$ which contains the positive definite cone. \end{defn} \begin{thm} \label{dirichlet} Given $(M^n, \partial M, g)$ a manifold with boundary and $1 \leq k \leq n$, there exists a unique function $w_k \in C^{\infty}(M)$ such that ${w_k}_{|\partial M} \equiv 0$, $- \Ric(e^{2 w_k} g) \in \Gamma_k^+$, and $\sigma_k \left[ - g^{-1} \Ric(e^{2 w_k} g) \right] = e^{2 k w_k}$. \end{thm} In particular observe that $\Ric(e^{2 w_n} g) < 0$. It is important to note here that many topological obstructions exist for curvature conditions on manifolds with boundary. Specifically in \cite{Ananov} a sphere-type theorem for manifolds with positive Ricci curvature and positive second fundamental form is shown. Results in a similar spirit appear in \cite{Hsiung} where the classical Bonnet-Meyers and Cartan-Hadamard theorems are extended to manifolds with boundary. Further topological obstructions appear in \cite{Ichida}, \cite{Nakae}. An interesting geometric conclusion based on curvature and mean curvature conditions appears in \cite{HangWang}. A common feature of all of these results is a (usually quite strong) hypothesis on the second fundamental form. It is to be emphasized that our conformal factors result in metrics with completely unknown second fundamental form, and therefore none of the previous obstructions can apply. By solving the Dirichlet problem with larger and larger boundary data, we can solve the ``infinite boundary data'' Dirichlet problem to produce complete metrics with constant $\sigma_k$ curvature on manifolds with boundary. The case $k = 1$ of this result appeared in \cite{AM}. Also, the existence of a complete metric of negative Ricci curvature with constant $\sigma_k$-Ricci curvature was shown in \cite{Guan2} with the assumption that the given background metric already has negative Ricci curvature. 
Negative Ricci curvature of the resulting metric is a \emph{consequence} of our theorem in the case $k = n$. \begin{thm} \label{complete} Given $(M^n, \partial M, g)$ a manifold with boundary and $1 \leq k \leq n$, there exists a unique function $w_k \in C^{\infty}(M \backslash \partial M)$ such that $e^{2w_k} g$ is complete, $- \Ric(e^{2 w_k} g) \in \Gamma_k^+$, and $\sigma_k \left[ - g^{-1} \Ric (e^{2w_k} g) \right] = e^{2 k w_k}$. Also, if $r$ denotes distance to $\partial M$, one has \begin{align*} \lim_{x \to \partial M} \left[ w_k + \ln r - \frac{1}{2} \ln (n-1) \right] = 0. \end{align*} \end{thm} This theorem has an interesting application to understanding the existence and moduli of Poincar\'e-Einstein metrics. In particular, in section 6 we adapt results of \cite{MP} to our setting and exhibit the space of conformally compact Poincar\'e-Einstein metrics on a given manifold with boundary as an intersection of finitely many locally closed Banach manifolds in the space of conformally compact metrics. For the statement of this theorem we adopt the notation most commonly used in the study of Poincar\'e-Einstein metrics. The relevant terminology and the constants $\til{\beta}_{k, n}$ are defined in section 6. \begin{thm} \label{CCThm1} Let $(X^{n+1}, g_+)$ be a conformally compact manifold. Let $\Theta_k$ denote the set of conformally compact metrics on $X^{n+1}$ with $\sigma_k[-g_+^{-1} \Ric] = \tilde{\beta}_{k,n}$. \vskip.1in \noindent $(i)$ Given a conformally compact metric $g_{+} = \rho^{-2} \bar{g}$, and $1 \leq k \leq n+1$, there is a unique conformally compact metric $h_k = e^{2w_k} \bar{g} \in \Theta_k$. \vskip.1in \noindent $(ii)$ Let $\mathcal E$ denote the space of Poincar\'e-Einstein metrics. Then \begin{align*} \mathcal{E} = \bigcap_{k=1}^{n+1} \Theta_k. \end{align*} Hence $\mathcal{E}$ is a finite intersection of locally closed Banach submanifolds, and in particular is always closed in the space of conformally compact metrics on $X^{n+1}$.
\end{thm} In fact, the characterization of $\mathcal E$ can be weakened considerably, requiring only that $\Theta_k \cap \Theta_{n+1} \neq \emptyset$ for some $k < n+1$. This fact is captured by a family of nonlocal conformally invariant functions we define in section 6. In principle these invariant functions open a path towards proving existence of new Poincar\'e-Einstein metrics. Specifically, on K\"ahler manifolds one has many natural families of conformal classes, and it may be possible to show vanishing of this invariant for carefully chosen conformal classes. Here is an outline of the rest of the paper. In section 2 we recall some basic formulas and set up the continuity method we use to prove Theorem \ref{dirichlet}. In sections 3 and 4 we derive the $C^1$ and $C^2$ estimates for the continuity method respectively, and give the proof of Theorem \ref{dirichlet}. In section 5 we prove Theorem \ref{complete}, and in section 6 we discuss the relationship of these metrics to Poincar\'e-Einstein metrics and prove Theorem \ref{CCThm1}. Finally in section 7 we conclude with a brief discussion of the case of positive curvature. \section{Setup for Theorem \ref{dirichlet}} We will explicitly solve the case $k = n$, and discuss the extension to the case $k < n$ at the end of the proof. Fix $(M^n, \partial M, g)$ a compact manifold with boundary and let \begin{gather*} \rho = - \Ric. \end{gather*} We recall that if $\hat{g} = e^{-2 u} g$ one has \begin{gather*} \begin{split} \hat{\Ric} =&\ \Ric + (n-2) \nabla^2 u + \Delta u g + (n-2) \left( du \otimes du - \brs{du}^2 g \right). \end{split} \end{gather*} It follows that \begin{gather} \begin{split} \hat{\rho} =&\ \rho - (n-2) \nabla^2 u - \Delta u g - (n-2) \left( du \otimes du - \brs{du}^2 g \right). \end{split} \end{gather} Thus if we set $w = - u$, one has \begin{gather*} \begin{split} \hat{\rho} =&\ \rho + (n-2) \nabla^2 w + \Delta w g + (n-2) \left( \brs{dw}^2 g - dw \otimes dw \right).
\end{split} \end{gather*} Given a conformal factor $w$, consider the tensor \begin{gather*} \begin{split} W_t(w) := (1-t) g + t \rho + (n-2) \nabla^2 w + \Delta w g + (n-2) \left( \brs{dw}^2 g - dw \otimes dw \right). \end{split} \end{gather*} For the remainder of this section and the next two sections we relabel our dummy variable $w$ as $u$. Therefore, consider the Dirichlet boundary-value Monge-Amp\`ere equation \begin{gather*} \begin{split} F_t(u) := \det W_t(u) - e^{2n u} =&\ 0 \qquad \mbox{ ($\star_t$)}\\ u_{|\partial M} \equiv&\ 0. \end{split} \end{gather*} Let $\Omega = \{t \in [0,1] \, | \, \exists u \in C^{4, \alpha}(M) \mbox{ solving } (\star_t), W_t(u) \in \Gamma_n^+ \}$. A few observations are immediate. First of all, equation $(\star_0)$ has the unique solution $u \equiv 0$, thus $\Omega$ is nonempty. Also, by construction, it is clear that $W_0 \in \Gamma_n^+$. By the intermediate value theorem it follows that $W_t \in \Gamma_n^+$ for all $t$, and in particular $W_1$ will be in $\Gamma_n^+$, as soon as the continuity method is completed. Therefore indeed a solution to $(\star_1)$ is the function required for the theorem. We can show that the set of times $t$ such that $(\star_t)$ is solvable is open (Lemma \ref{openness}), and therefore the crux of the matter, as always, is showing a-priori estimates, which we will take up in the next section. Before proving openness of $\Omega$ we show a general maximum principle which will be of use to us. \begin{prop} \label{mp} \textbf{Maximum principle} Suppose that $u$ and $v$ are smooth sub- and supersolutions (respectively) to equation $(\star_t)$. If $u\leq v$ on $\partial M,$ then $u\leq v$ on $M$. \begin{proof} Suppose that $u>v$ somewhere. Let $C$ be the maximum of $u-v$ on $M$, which is attained at some point $x_{0}$ in the interior of $M$.
Then $w=u-C$ is a strict subsolution to $(\star_t)$: indeed, since $C > 0$ is constant we have $W_t(u - C) = W_t(u)$, while $e^{2n(u-C)} < e^{2n u}$, so that $F_t(w) > F_t(u) \geq 0$. Hence at the point $x_0$ we conclude \begin{align*} w(x_{0}) & =v(x_{0})\\ dw(x_{0}) & =dv(x_{0})\\ F_{t}(w,dw,\nabla^{2}w)(x_{0}) & >F_{t}(v,dv,\nabla^{2}v)(x_{0}). \end{align*} It follows immediately that at the point $x_0$ we have \begin{align*} \det& \left[ (1-t)g+t\rho+(n-2)\nabla^{2}w+\Delta w g + (n-2) \left( \brs{d w}^2 g - d w \otimes d w \right) \right]\\ >&\ \det\left[ (1-t)g+t\rho+(n-2)\nabla^{2}v+\Delta v g + (n-2) \left( \brs{d v}^2 g - d v \otimes d v \right) \right]. \end{align*} However, note that $v\geq w$ near $x_{0}$, which means that \begin{align*} \Delta v(x_{0}) & \geq \Delta w(x_{0}),\\ \nabla^{2}v(x_{0}) & \geq \nabla^{2}w(x_{0}). \end{align*} Using these inequalities and the fact that $dw(x_0) = dv(x_0)$ we conclude that at the point $x_0$, \begin{align*} \mathcal W :=&\ (n-2) \nabla^2 w + \Delta w g + (n-2) \left( \brs{d w}^2 g - d w \otimes d w \right)\\ \leq&\ (n-2) \nabla^2 v + \Delta v g + (n-2) \left( \brs{d v}^2 g - d v \otimes d v \right)\\ =:&\ \mathcal V \end{align*} where the matrix inequality $\mathcal W \leq \mathcal V$ has the usual interpretation that $\mathcal V - \mathcal W$ is positive semidefinite. We therefore conclude that \begin{align*} \det \left[ (1-t) g + t \rho + \mathcal V \right] =&\ \det \left[ (1-t) g + t \rho + \mathcal W + \left(\mathcal V - \mathcal W \right) \right]\\ \geq&\ \det \left[ (1-t) g + t \rho + \mathcal W \right] \end{align*} which is a contradiction, and the result follows. \end{proof} \end{prop} Note that this maximum principle immediately implies uniqueness of solutions to $(\star_t)$ for all $0 \leq t \leq 1$. Next we observe openness of $\Omega$. \begin{lemma} \label{openness} $\Omega$ is open in $[0, 1]$. \begin{proof} We compute the linearized operator \begin{align*} F_t'(u_t)(h) =&\ T_{n-1} \left( W_t \right)^{ij} \left( (n-2) \nabla_i \nabla_j h + \Delta h g_{i j} \right. \\ &\ \left.
+ (n-2) \left( 2 \left<du_t, dh \right> g - dh \otimes du_t - du_t \otimes dh \right) \right)\\ &\ - 2 n h e^{2n u_t} \end{align*} where $T_{n-1} (W_t)^{ij}$ is the $(n-1)$-th Newton transformation, which is positive definite since $W_t$ is by construction. Thus $F_t'(u_t)$ is a strictly elliptic operator with $C^{2, \alpha}$ coefficients and negative constant term, and is hence invertible. The result thus follows by the implicit function theorem. \end{proof} \end{lemma} \section{Construction of Subsolutions} In this section we derive a subsolution to the equations $(\star_t)$ which is at the heart of our estimates. We begin with an auxiliary geometric construction. Given $(M, \partial M)$, let $N = \partial M$ and consider the manifold $\bar{M} = M \cup \left( N \times [0, 1] \right) / \sim$ where for $x \in N = \partial M$ we have $(x, 1) \sim x$. One should picture an ``exterior'' collar neighborhood of $\partial M$. Using a standard partition of unity argument one may extend the metric $g$ to a metric $\bar{g}$ defined on $\bar{M}$ such that $\bar{g}_{|M} = g$. Consider a point $x_0 \in \partial M$. Fix a point $\bar{x} \in \bar{M} \backslash M$ in the connected component of $N$ which contains $x_0$, chosen so that $x_0$ is the closest point to $\bar{x}$ which lies on the boundary. Let $r$ denote geodesic distance from $\bar{x}$. We may arrange things so that $d(\bar{x}, \partial M) > \delta$ where $\delta$ only depends on the background metric. See Figure \ref{fig:figure1} below. \begin{figure}[ht] \begin{center} \resizebox{275pt}{!}{\input{collar2.pstex_t}} \end{center} \caption{Exterior collar neighborhood of $\partial M$} \label{fig:figure1} \end{figure} Fix constants $A$ and $p$ whose exact size will be determined later, and let \begin{align*} \underbar{u} := A \left(\frac{1}{r^p} - \frac{1}{r(x_0)^p} \right). \end{align*} Our goal is to show that $\underbar{u}$ is a subsolution of $(\star_t)$ for all $t$. First we recall the Hessian comparison theorem.
\begin{lemma} \label{hessiancomp1} \textbf{Hessian comparison theorem} Let $(M^n, g)$ be a complete Riemannian manifold with $\sect \geq K$. For any point $q \in M$ the distance function $r(x) = d(x, q)$ satisfies \begin{align*} \nabla^2 r \leq \frac{1}{n-1} H_K(r) g \end{align*} where \begin{gather*} H_K(r) = \begin{cases} (n-1) \sqrt{K} \cot \left( \sqrt{K} r \right) & K > 0\\ \frac{n-1}{r} & K = 0\\ (n-1) \sqrt{|K|} \coth \left( \sqrt{|K|} r \right) & K < 0 \end{cases} \end{gather*} \end{lemma} \begin{lemma} \label{hessiancomp2} Let $(\bar{M}, \bar{g})$ be as constructed above, and let $r$ denote distance from a point $\bar{x} \in \bar{M} \backslash M$ chosen so that $d(\bar{x}, \partial M) > \delta > 0$ for some fixed small constant $\delta$. Then there exists a constant $C$ such that \begin{align*} \nabla^2 r(x) \leq \frac{C}{r(x)} g \end{align*} holds at any point where $r$ is smooth. \begin{proof} Let $K$ be a lower bound for the sectional curvature of $\bar{g}$. If $K \geq 0$ the result follows immediately from Lemma \ref{hessiancomp1}. Assume $K \leq 0$. The distance of $\bar{x}$ to any point in $M$ is bounded, therefore standard estimates on the $\coth$ function give this result away from a controlled ball around $\bar{x}$, which we can assume is contained in $\bar{M} \backslash M$. \end{proof} \end{lemma} \begin{lemma} \label{subsolnlemma1} For $A$ and $p$ chosen large enough with respect to constants depending only on $g$, at any point where $r$ is smooth we have \begin{gather*} F_t(\underbar{u}) > 0. \end{gather*} \begin{proof} We first compute the action of the operator $v \mapsto (n-2) \nabla^2 v + \Delta v \, g$ on $\underbar{u}$. Since the action is linear we suppress the constant $A$ and reinsert it at the end.
\begin{align*} \nabla^2 \underbar{u} =&\ p (p+1) r^{- p - 2} \nabla r \otimes \nabla r - p r^{- p - 1} \nabla^2 r\\ \Delta \underbar{u} =&\ p(p+1) r^{-p-2} \brs{\nabla r}^2 - p r^{-p-1} \Delta r \end{align*} Since $p > 0$, applying Lemma \ref{hessiancomp2} yields \begin{align*} - p r^{- p - 1} \nabla^2 r \geq - C p r^{- p - 2} g \end{align*} Since the first term in the expression for $\nabla^2 \underbar{u}$ above is positive we conclude \begin{align*} \nabla^2 \underbar{u} \geq - C p r^{-p-2} g \end{align*} for some constant $C = C(g)$. Similarly applying Lemma \ref{hessiancomp2} one has \begin{align*} - p r^{- p - 1} \Delta r \geq - C p r^{- p-2} \end{align*} for some constant $C$. Note that $\brs{\nabla r} = 1$ at any point where $r$ is smooth. It follows that \begin{align*} \Delta \underbar{u} \geq&\ r^{-p-2} \left( p(p+1) - C p \right)\\ \geq&\ \frac{p^2}{2} r^{-p-2} \end{align*} for $p$ chosen large with respect to $C$. In sum we can conclude, reinserting the factor $A$, \begin{align*} (n-2) \nabla^2 \underbar{u} + \Delta \underbar{u} g \geq A \frac{p^2}{2} r^{-p-2} g. \end{align*} It is clear then that for $p$ chosen large with respect to universal constants and then $A$ chosen large with respect to the diameter of $g$ we have \begin{gather*} (n-2) \nabla^2 \underbar{u} + \Delta \underbar{u} g \geq \frac{p^2}{4} g. \end{gather*} Now choose $p$ still larger depending on the ambient Ricci curvature, i.e. so that $p \geq 4 \sqrt{ - \min_{v \in UTM} \rho(v,v)}$. Observing that the gradient terms in the definition of $W_t$ are always positive, and noting that $\underbar{u} \leq 0$, we conclude the result. \end{proof} \end{lemma} \section{Proof of Theorem \ref{dirichlet}} \begin{lemma} \label{subsolnlemma2} Given $\underbar{u}$ as in Lemma \ref{subsolnlemma1}, for all $t \in [0, 1]$, one has $\underbar{u} \leq u_t$. \begin{proof} Fix a $t \in [0, 1]$ and suppose that $\underbar{u} > u_t$ somewhere.
We can fix a positive constant $C$ and a point $x_1 \in M$ achieving the maximum of $\underbar{u} - u_t$, such that $\underbar{u} - C \leq u_t$ and $(\underbar{u} - C)(x_1) = u_t(x_1)$. It is clear by construction that this point must be inside of $M$. We also claim that $\underbar{u}$, and equivalently, $r$, must be smooth at this point $x_1$. Indeed, if this were not the case, at $x_1$ there would be two geodesics $\gg_1$, $\gg_2$ which are each minimizing from $\bar{x}$ to $x_1$. Suppose $d(\bar{x}, x_1) = R$. Let $\gg_1$ be given a unit speed parametrization with parameter $c$. One concludes \begin{gather} \label{subsolnlemma205} \lim_{c \to R^-} \nabla r(\gg_1(c)) \cdot \gg_1' = 1. \end{gather} We next claim that \begin{gather} \label{subsolnlemma210} \lim_{c \to R^+} \nabla r(\gg_1(c)) \cdot \gg_1' < 1. \end{gather} The argument of the following paragraph is summarized in Figure \ref{fig:figure2}. Fix a constant $\ge > 0$ so small that $B_{\ge}(x_1)$ is geodesically convex. Consider the point $\til{x}_{\ge} = \gg_1(R + \ge)$. Construct a new curve $\til{\gg}$ from $\bar{x}$ to $\til{x}_{\ge}$ as follows: follow the geodesic $\gg_2$ from $\bar{x}$ to $\gg_2(R - \ge)$, then connect $\gg_2(R - \ge)$ to $\til{x}_{\ge}$ by the unique geodesic in $B_{\ge}(x_1)$ between these two points. Recall that $\gg_1$ and $\gg_2$ are distinct geodesics. In particular, by uniqueness of solutions to ODEs, it follows that $\gg_1'(R) \neq \gg_2'(R)$ since $\gg_1(R) = \gg_2(R)$. Consequently, the triangle formed by the three points $\gg_2(R - \ge)$, $\gg_1(R) = \gg_2(R) = x_1$, and $\gg_1(R + \ge) = \til{x}_{\ge}$ is nondegenerate. It follows from the Toponogov comparison theorem that $d(\gg_2(R - \ge), \til{x}_{\ge})$ is strictly less than the sum of the lengths of the other two sides of the triangle, with the difference given in terms of a lower bound for the curvature of $g$.
Specifically, there exists a $\delta > 0$ depending on this lower bound and the angles of the triangle so that \begin{gather*} d(\gg_2(R - \ge), \til{x}_{\ge}) \leq \left(2 - \delta \right) \ge \end{gather*} (In fact, since our triangle is very small, the curvature does not need to enter into the bound. One can forgo the Toponogov theorem and get a bound strictly in terms of the angles of the triangle). Using $\til{\gg}$ as a test curve for the distance function, it follows that \begin{align*} d(\bar{x}, \til{x}_{\ge}) \leq R - \ge + \left(2 - \delta\right) \ge =&\ R + \ge - \delta \ge. \end{align*} Taking the limit as $\ge \to 0$, we immediately conclude that \begin{align*} \lim_{c \to R^+} \nabla r(\gg_1(c)) \cdot \gg_1' =&\ \lim_{\ge \to 0} \frac{r(\gg_1(R+\ge)) - r(\gg_1(R))}{\ge}\\ \leq&\ \lim_{\ge \to 0} \frac{R + \ge - \delta \ge - R}{\ge}\\ <&\ 1. \end{align*} \begin{figure}[ht] \begin{center} \resizebox{275pt}{!}{\input{geodesics.pstex_t}} \end{center} \caption{Geodesics at the cut locus} \label{fig:figure2} \end{figure} We now finish the argument that $\underbar{u}$ is smooth at $x_1$. Indeed, it follows from (\ref{subsolnlemma205}) and (\ref{subsolnlemma210}) by direct calculation that the derivative of the function $f(c) := \underbar{u}(\gg_1(c))$ jumps a certain positive amount at $c = R$. Considering next the smooth function $\psi(c) := u_t(\gg_1(c))$, by assumption we have that $(\psi - f)(c)$ has a local minimum at $c = R$. Thus \begin{gather*} \lim_{c \to R^-} \left( f' - \psi' \right) \geq \lim_{c \to R^+} \left( f' - \psi' \right). \end{gather*} Since $\psi$ is smooth, we therefore conclude \begin{align*} \lim_{c \to R^-} f' \geq \lim_{c \to R^+} f'. \end{align*} This contradicts (\ref{subsolnlemma205}) and (\ref{subsolnlemma210}) since $\frac{d\underbar{u}}{d r} < 0$.
Given that $\underbar{u}$ is smooth at $x_1$, using Lemma \ref{subsolnlemma1} the argument of Proposition \ref{mp} applies at this point to yield the required contradiction to the assumption that $\underbar{u} > u_t$ somewhere. The lemma follows. \end{proof} \end{lemma} \begin{lemma} \label{subsolnlemma4} The inequality $u_t \leq 0$ holds for all $0 \leq t \leq 1$. \begin{proof} To get this estimate we exhibit $u_t$ as a subsolution of $(\star_0)$. By scaling $g$ we may assume without loss of generality that $g \geq \rho$. It follows that \begin{align*} e^{2 n u_t} =&\ F_t(u_t) + e^{2n u_t}\\ =&\ \det \left( (1-t) g + t \rho + (n-2) \nabla^2 u_t + \dots \right)\\ \leq&\ \det \left( g + (n-2) \nabla^2 u_t + \dots \right)\\ =&\ F_0(u_t) + e^{2n u_t}. \end{align*} Therefore $u_t$ is a subsolution of the equation $F_0(u) = 0$, and the result follows by Proposition \ref{mp}. \end{proof} \end{lemma} \begin{prop} \label{NRboundaryC1} There exists a constant $C$ such that for all $x_0 \in \partial M$ and for all $0 \leq t \leq 1$ we have $\brs{\frac{\partial}{\partial \nu} u_t} \leq C$ where $\nu$ is the unit normal to $\partial M$ at $x_0$. \begin{proof} This follows immediately from Lemmas \ref{subsolnlemma2} and \ref{subsolnlemma4} since for instance \begin{align*} \frac{\underbar{u}(x) - \underbar{u}(x_0)}{d(x,x_0)} \leq \frac{u(x) - u(x_0)}{d(x,x_0)}. \end{align*} Our construction of $\underbar{u}$ is specific to each $x_0 \in \partial M$, but it is clear that the choices of $A$, $p$, etc. are all universally controlled, and so the proposition follows. \end{proof} \end{prop} \begin{prop} \label{C1Estimate} There exists a constant $C$ such that for all $0 \leq t \leq 1$ we have \begin{align*} \brs{u_t}_{C^1} \leq C. \end{align*} \begin{proof} We have already shown the global $C^0$ estimate and the boundary $C^1$ estimate. Suppose that the maximum of $\brs{\nabla u_t}$ occurs at a point in the interior.
One may follow the calculation of \cite{GV} Proposition 4.1, which is justified at any interior point of $M$, to yield the a-priori $C^1$ estimate. The result follows. \end{proof} \end{prop} We now proceed with the $C^2$ estimates. Fix $x_0 \in \partial M$ and let $u$ be a solution to $(\star_t)$ for some $0 \leq t \leq 1$. Suppose further that $\brs{\nabla_n u} < C$. We will use $e_{i},e_{j}$ to denote tangent directions to $\partial M$, and $e_n$ to denote the unit inward normal at $x_0$. We require separate proofs for the different types of boundary second derivatives $\nabla_i \nabla_j u$, $\nabla_i \nabla_n u$, and $\nabla_n \nabla_n u$. \begin{lemma} \label{TTC2Estimate} There exists a constant $C$ depending on $\sup_{0 \leq t \leq 1} \brs{u_t}_{C^1}$ such that for all $x_0 \in \partial M$, for all $0 \leq t \leq 1$ we have \begin{align*} \brs{\nabla_i \nabla_j u(x_0)} < C. \end{align*} \begin{proof} We note first, using that $u_{| \partial M} \equiv 0$, \begin{align*} \nabla_i \nabla_j u(x_0) =&\ - \nabla_n u (x_0) A(e_i, e_j) \end{align*} where $A$ is the second fundamental form of $\partial M$. Since $\brs{\nabla_n u (x_0)} < C$ we immediately conclude the result. \end{proof} \end{lemma} Next we need to bound the derivatives of the form $\nabla_{e_i} \nabla_n u$ at the boundary. For our given $t \in [0, 1]$, let $\mathcal L$ denote the linearization of $F_t$ at $u$. As in Lemma \ref{openness} we have \begin{gather} \label{Ldef} \begin{split} \mathcal L(\psi) =&\ T_{n-1}(W_t)^{ij} \left( (n-2) \nabla_i \nabla_j \psi + \Delta \psi g_{i j} + \right.\\ &\ \qquad \left. (n-2) \left( 2 \left< \nabla \psi, \nabla u \right> g_{i j} - \nabla_i \psi \otimes \nabla_j u - \nabla_i u \otimes \nabla_j \psi \right) \right)\\ &\ - 2 n \psi e^{2n u}. \end{split} \end{gather} Fix a point $x_0 \in \partial M$ and let $B_{\delta}$ be the ball of some small radius $\delta > 0$ around $x_0$.
Pick coordinates in $B_{\delta}$ so that $\partial M$ is the plane $x_n = 0$, and let $\{e_i, e_n \}$ be the corresponding coordinate vector fields. Fix some $\alpha$ and consider the function $\phi = e_{\alpha} u_t$ defined in $B_{\delta}$. Note that $\phi_{|\partial M} = 0$. We aim to apply a maximum principle argument similar to the $C^1$ boundary estimate to bound the normal derivative of $\phi$, which is the required estimate. The first step is to bound the action of $\mathcal L$ on $\phi$. \begin{lemma} \label{TNlemma1} Using the notation above, there exists a constant $C$ such that \begin{align*} \brs{\mathcal L(\phi)} \leq C \left(1 + \sup_{0 \leq t \leq 1} \brs{u_t}_{C^1} \right) \sum F^{ii}. \end{align*} \begin{proof} Differentiating equation $(\star_t)$ with respect to $e_{\alpha}$ yields \begin{align*} 0 =&\ F^{ij} \left( t \nabla_{\alpha} \rho_{ij} + (n-2) \nabla_{\alpha} \nabla_i \nabla_j u + \nabla_{\alpha} \Delta u g_{i j} \right.\\ &\ \qquad \left. + (n-2) \left( 2 \left< \nabla_{\alpha} \nabla u, \nabla u \right> g - \nabla_\alpha \nabla_i u \otimes \nabla_j u - \nabla_i u \otimes \nabla_{\alpha} \nabla_j u \right) \right)\\ &\ - 2 n \nabla_{\alpha} u e^{2n u}. \end{align*} Commuting derivatives we conclude \begin{align*} \nabla_{\alpha} \nabla_i \nabla_j u =&\ \nabla_i \nabla_{\alpha} \nabla_j u + \Rm * \nabla u\\ =&\ \nabla_i \nabla_j \nabla_{\alpha} u + \Rm * \nabla u\\ =&\ \nabla_i \nabla_j \phi + \Rm * \nabla u. \end{align*} Similarly we have \begin{align*} \nabla_{\alpha} \Delta u =&\ \Delta \phi + \Rm * \nabla u. \end{align*} Combining these calculations yields \begin{align*} \mathcal L(\phi) = - F^{ij} \left( t \nabla_{\alpha} \rho_{ij} + \left(\Rm * \nabla u \right)_{ij} \right). \end{align*} Applying the $C^1$ bound we immediately conclude \begin{align*} \brs{\mathcal L(\phi)} \leq C \brs{F^{ij}} \left(1 + \brs{\nabla u} \right) \leq C \brs{F^{ij}} \end{align*} where $\brs{F^{ij}}$ refers to the matrix norm.
Note that $F^{ij}$ is positive definite, and therefore its norm is dominated by a dimensional constant times its trace. The result follows. \end{proof} \end{lemma} \begin{lemma} \label{TNC2Estimate} There exists a constant $C$ depending on $\sup_{0 \leq t \leq 1} \brs{u_t}_{C^1}$ such that for all $x_0 \in \partial M$, for all $0 \leq t \leq 1$ we have \begin{align*} \brs{\nabla_{i} \nabla_{n} u_t(x_0)} \leq C. \end{align*} \begin{proof} Fix a point $x_0 \in \partial M$. Choose a small constant $\delta < \frac{1}{2}$ so that $B_{\delta}(x_0)$ is a geodesically convex ball. Furthermore choose $\bar{x} \in \bar{M} \backslash M$ such that $\bar{x} \in B_{\frac{\delta}{4}}(x_0)$. Following the construction in section 3, let \begin{align*} \underbar{u} = \frac{1}{r^p} - \frac{1}{r(x_0)^p}. \end{align*} Note that by construction we have that $\underbar{u}$ is smooth in $U := M \cap B_{\delta}(x_0)$. We want to compute $\mathcal L(\underbar{u})$. Following the calculations of section 3 we conclude \begin{align*} (n-2) \nabla^2 \underbar{u} + \Delta \underbar{u} g \geq \frac{p^2}{4} r^{-p-2} g. \end{align*} Consider one of the linear terms in $\mathcal L$ acting on $\underbar{u}$. In particular we note that \begin{align*} \left< \nabla \underbar{u}, \nabla u \right> g \leq&\ p r^{-p-1} \brs{\nabla u} g\\ \leq&\ C p r^{-p-2} g \end{align*} using that $r \leq 1$ and $\brs{\nabla u} \leq C$. All of the linear terms are bounded in this manner. Furthermore, $\underbar{u} \leq 0$, so we can throw away the constant term and conclude \begin{align*} \mathcal L(\underbar{u}) \geq \frac{p^2}{8} r^{-p-2} \sum F^{ii} \end{align*} when $p$ is chosen large enough with respect to fixed constants. It follows that if we choose $p$ larger still and note that $r \leq 1$, Lemma \ref{TNlemma1} yields \begin{align*} \mathcal L( \phi - \underbar{u} ) < \left( C - \frac{p^2}{8} \right) \sum F^{ii} < 0. \end{align*} Thus by the maximum principle we conclude that the minimum of $\phi - \underbar{u}$ occurs on the boundary of $U$.
It remains to check these boundary values. There are two components of $\partial U$ to check. First we have $U \cap \partial M$. Here $\phi \equiv 0$ and $\underbar{u} \leq 0$ with $\underbar{u} = 0$ at $x_0$. Next we consider the component $U \cap \partial B_{\delta}(x_0)$. Here using the $C^1$ estimate we have $\phi \geq - C$ for some controlled constant $C$. Since $\bar{x} \in B_{\frac{\delta}{4}}(x_0)$, it follows that for $x \in U \cap \partial B_{\delta}(x_0)$ one has \begin{align*} \underbar{u}(x) \leq&\ \frac{1}{\left( \frac{\delta}{2} \right)^p} - \frac{1}{\left( \frac{\delta}{4} \right)^p}\\ =&\ \left( \frac{1}{\delta} \right)^p \left( 2^p - 4^p \right). \end{align*} Thus for $p$ chosen large enough with respect to controlled constants one concludes $\underbar{u} < \phi$ on $U \cap \partial B_{\delta}(x_0)$. It follows that the minimum of $\phi - \underbar{u}$ on $\partial U$ is zero and occurs at $x_0$. It follows that the normal derivative of $\phi - \underbar{u}$ at $x_0$ is nonnegative, and therefore we conclude \begin{gather*} \nabla_i \nabla_n u \geq - C. \end{gather*} However, using Lemma \ref{TNlemma1} it is clear that the same argument applies to $- \phi$, and therefore the result follows. \end{proof} \end{lemma} \begin{lemma} \label{NNC2Estimate} There exists a constant $C$ depending on $\sup_{0 \leq t \leq 1} \brs{u_t}_{C^1}$,\\ $\sup_{0 \leq t \leq 1} \sup_{x \in \partial M} \brs{\nabla_i \nabla_j u_t(x)}$, and $\sup_{0 \leq t \leq 1} \sup_{x \in \partial M} \brs{\nabla_i \nabla_n u_t(x)}$ such that for all $x_0 \in \partial M$, for all $0 \leq t \leq 1$ we have \begin{align*} \brs{\nabla_{n} \nabla_{n} u_t(x_0)} \leq C. \end{align*} \begin{proof} Orthogonally decompose the matrix $W_t$ at $x_0$ in terms of $e_n$ and $e_i$.
Using the assumed bounds this yields \begin{gather} W = \left( \begin{matrix} (n-1) u_{nn} & 0\\ 0 & u_{nn} g_{|\partial M} \end{matrix} \right) + \mathcal O(1) \end{gather} It is clear that there exists a constant $C \gg 0$ such that if $|u_{nn}| > C$ then \begin{align*} \det W > \frac{1}{10} C^n > 1 > e^{2n u_t} \end{align*} which contradicts the equation $(\star_t)$. Thus $|u_{nn}(x_0)| < C$ and the result follows. \end{proof} \end{lemma} \begin{prop} \label{C2Estimate} There exists a constant $C$ such that for all $0 \leq t \leq 1$ we have \begin{align*} \brs{u_t}_{C^2} \leq C. \end{align*} \begin{proof} By Proposition \ref{C1Estimate} and Lemmas \ref{TTC2Estimate}, \ref{TNC2Estimate}, and \ref{NNC2Estimate} we conclude uniform global $C^1$ bounds and boundary $C^2$ estimates. Suppose that the maximum of $\brs{\nabla^2 u}$ occurs at a point in the interior. One may follow the calculation of \cite{GV} Proposition 5.1, which is justified at any interior point of $M$, to yield the a-priori $C^2$ estimate. The result follows. \end{proof} \end{prop} \noindent We can now give the proof of Theorem \ref{dirichlet}. \begin{proof} As discussed in section 2 it suffices to solve our equation $(\star_1)$. Lemma \ref{openness} yields that the set $\Omega$ of $t$ where $(\star_t)$ is solvable is open in $[0, 1]$. Proposition \ref{C2Estimate} yields a uniform $C^2$ estimate for $u_t$. Thus $(\star_t)$ becomes a uniformly elliptic equation, and the Evans-Krylov estimates yield uniform $C^{2,\alpha}$ bounds on $u_t$. Now the Schauder estimates apply to yield uniform $C^{4,\alpha}$ bounds. Thus $\Omega$ is closed in $[0, 1]$, and hence $(\star_1)$ is solvable, which completes the proof of existence. By Proposition \ref{mp}, the solution to $(\star_1)$ is unique. To solve the analogous $\sigma_k$ problem, $k < n$, it suffices to observe that the subsolution we construct for the determinant equation is also a subsolution to the $\sigma_k$ problem by Maclaurin's inequality.
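To make this last step explicit, recall that Maclaurin's inequality states that for $A \in \Gamma_n^+$ and $1 \leq k \leq n$,
\begin{align*}
\left( \frac{\sigma_k(A)}{\binom{n}{k}} \right)^{\frac{1}{k}} \geq \left( \sigma_n(A) \right)^{\frac{1}{n}}.
\end{align*}
Hence wherever $W_t(\underbar{u}) \in \Gamma_n^+$ satisfies $\det W_t(\underbar{u}) \geq e^{2n \underbar{u}}$, one also has
\begin{align*}
\sigma_k \left[ W_t(\underbar{u}) \right] \geq \binom{n}{k} e^{2k \underbar{u}} \geq e^{2k \underbar{u}},
\end{align*}
so the subsolution property persists for the corresponding $\sigma_k$ equation.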
The rest of the proof then carries over line by line. \end{proof} \begin{rmk} \label{bndrycond} Note that the proof works equally well if we require the boundary condition $u_{|\partial M} \equiv j$ for any constant $j$. Indeed, with minor modification the proof applies to the general boundary value problem. \end{rmk} \section{Proof of Theorem \ref{complete}} We begin by proving a lemma which provides a subsolution for solutions to the Dirichlet problem with large boundary condition. \begin{lemma} \label{universalsubsoln} Let $(M, \partial M, g)$ be a manifold with boundary. There exists a function $w$ which is smooth in a controlled neighborhood of the boundary such that the following holds: Fix $0 < \ge \ll 1$, and let $u$ be a solution to \begin{align*} F_1(u) =&\ 0\\ {u}_{|\partial M} \equiv&\ - \ln \ge. \end{align*} Let $r$ denote distance from the boundary. Then \begin{align*} u \geq - \ln (r + \ge) + \frac{1}{2} \ln(n-1) + w, \end{align*} where $w \leq 0$ and $w_{|\partial M} = 0$. \begin{proof} Let $r$ denote distance from $\partial M$. Fix a small constant $\delta > 0$, constants $A, p > 0$ and let \begin{align*} w =&\ A \left( \frac{1}{(r + \delta)^p} - \frac{1}{\delta^p} \right). \end{align*} This choice of $w$ is obviously modeled on our subsolution from section 3, except that here one thinks of $r + \delta$ as distance from an exterior copy of $\partial M$ instead of distance from a point outside of $\partial M$. The constant $\delta$ remains fixed for arbitrarily small values of $\ge$. Fix a small constant $\ge > 0$, and let $\til{r} = r + \ge$. Again, $\til{r}$ should be thought of as distance from an exterior copy of $\partial M$, but this time one getting arbitrarily close to the actual boundary $\partial M$. Let \begin{align*} \underbar{u} =&\ - \ln \til{r} + \frac{1}{2} \ln (n-1) + w. \end{align*} Our goal is to show that $\underbar{u}$ is a subsolution for $(\star_1)$ with boundary condition $u_{|\partial M} = - \ln \ge$.
The estimate will proceed in two steps. First we estimate $\underbar{u}$ in a small collar neighborhood of the boundary, where the $-\ln \til{r}$ term dominates the behaviour of $\underbar{u}$. Next we exploit the $w$ term using estimates similar to those in section 3 to control $\underbar{u}$ on the rest of the manifold. Fix a point $x_0 \in M \backslash \partial M$ at which $r$ is smooth, and choose coordinates at $x_0$ as follows: Let $e_1 = \frac{\frac{\partial}{\partial r}}{\brs{\frac{\partial}{\partial r}}}$, and let $\{e_2, \dots, e_n \}$ be chosen so that $\{e_i \}$ is an orthonormal basis at $x_0$. First observe the preliminary calculation \begin{align*} (n-2) \left( \brs{d \underbar{u}}^2 g - d \underbar{u} \otimes d \underbar{u} \right) =&\ (n-2) \frac{ \left( \til{r} w' - 1 \right)^2}{\til{r}^2} \left( \begin{matrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{matrix} \right) \end{align*} Therefore at $x_0$ we conclude \begin{align*} \hat{\rho} =&\ \rho + (n-2) \nabla^2 \underbar{u} + \Delta \underbar{u} g + (n-2) \left(\brs{d \underbar{u}}^2 g - d\underbar{u} \otimes d\underbar{u} \right)\\ =&\ \rho + (n-2) \nabla^2 w + \Delta w g - \frac{1}{\til{r}} \left( (n-2) \nabla^2 r + \Delta r g \right)\\ &\ + \frac{1}{\til{r}^2} \left( \begin{matrix} (n-1) & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{matrix} \right) + (n-2) \frac{ \left( \til{r} w' - 1 \right)^2}{\til{r}^2} \left( \begin{matrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{matrix} \right)\\ =&\ \rho + (n-2) \nabla^2 w + \Delta w g + (n-2) \left( w' \right)^2 \left( \begin{matrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{matrix} \right)\\ &\ + \frac{1}{\til{r}^2} \left[ (n-1) g - 2 (n-2) \til{r} w' \left( \begin{matrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{matrix} \right) - \til{r} \left( (n-2) \nabla^2 r + \Delta r g \right) \right]. 
\end{align*} We now show that the determinant of the bracketed term above, call it $\Phi$, is positive in a collar neighborhood of $\partial M$ of a fixed width $\eta > 0$. Observe that when $\til{r}$ is small this term dominates the behaviour of $\hat{\rho}$. We initially choose $\eta$ small so that the hypersurfaces $\{r = c \}$ are smooth for $c \leq \eta$. First note that $\nabla^2 r$ is simply the second fundamental form of a smooth hypersurface $\{r = c \}$. Therefore it is a tensor with a uniform bound as $r \to 0$ depending only on $g$. In particular for $\eta$ chosen small with respect to constants depending on $g$ we can conclude \begin{align*} (n-2) \nabla^2 r + \Delta r g \geq - \lambda g. \end{align*} Also we can directly compute that on the collar neighborhood of radius $\eta$ \begin{align*} w' = - \frac{A p}{(r + \delta)^{p+1}} \leq - \frac{A p}{(\eta + \delta)^{p+1}} \leq - A p \end{align*} provided $\eta + \delta < 1$, which is easily arranged. We therefore conclude that if we choose $A_0$ and $p_0$ large with respect to $\lambda$, then for any $A \geq A_0$ and $p \geq p_0$ we have \begin{align*} \Phi \geq&\ (n-1) g + \til{r} \left( \begin{matrix} - \lambda & & & \\ & \lambda & & \\ & & \ddots & \\ & & & \lambda \end{matrix} \right). \end{align*} It follows that if $\eta$ is chosen small with respect to $\lambda$, one has $\det \Phi \geq (n-1)^n$ for $\til{r} \leq 2 \eta$. Since we have chosen $\ge \ll 1$ it follows that for $r < \eta$ we have \begin{align*} \det (\hat{\rho}) \geq \frac{(n-1)^n}{\til{r}^{2n}}. \end{align*} We would like to show this inequality on the rest of $M$, i.e. for any $r > \eta$. Recall from section 3 that given any constant $C > 0$ we may choose our constants $A$ and $p$ such that \begin{align*} (n-2) \nabla^2 w + \Delta w g \geq C g.
\end{align*} Since $\nabla^2 r$ is a bounded tensor as described above, we may therefore choose $A$ and $p$ large so that \begin{align*} (n-2) \nabla^2 w + \Delta w g \geq&\ -\rho + \frac{1}{\eta + \ge} \left( (n-2) \nabla^2 r + \Delta r g \right)\\ \geq&\ - \rho + \frac{1}{2 \eta} \left( (n-2) \nabla^2 r + \Delta r g \right). \end{align*} Note here that the choices of $A$ and $p$ depend on $\eta$. We were careful above to ensure that the choice of $\eta$ only depended on lower bounds for $A$ and $p$, therefore we are free to choose them still larger, even with respect to $\eta$. It follows that at any point where $r$ is smooth we have \begin{align*} \det (\hat{\rho}) \geq \left( \frac{1}{\til{r}^2} (n-1) \right)^n = \frac{(n-1)^n}{\til{r}^{2n}} = e^{2n (- \ln \til{r} + \frac{1}{2} \ln (n-1))} \geq&\ e^{2n \underbar{u}} \end{align*} where the last inequality follows since $w \leq 0$. It follows that $\underbar{u}$ is a subsolution to $(\star_1)$. It is clear by our construction of $\underbar{u}$ that the comparison argument of Lemma \ref{subsolnlemma2} applies to show that any point where $\underbar{u} - u$ achieves its maximum must be smooth, and hence the argument of Proposition \ref{mp} applies to show that $u \geq \underbar{u}$. The lemma follows. \end{proof} \end{lemma} \begin{lemma} \label{universalsupersoln} Let $(M, \partial M, g)$ be a manifold with boundary. Let $u$ be a solution to \begin{align*} F_1(u) =&\ 0 \end{align*} with any boundary condition. Then \begin{align*} \lim_{x \to \partial M} \left[ u(x) + \ln r(x) - \frac{1}{2} \ln(n-1) \right] \leq 0. \end{align*} Furthermore, given any small constant $R > 0$ there exists a constant $C(R) > 0$ so that given $x_0 \in M$, $B_{R}(x_0) \subset M$ one has \begin{align*} u(x_0) \leq C(R). \end{align*} \begin{proof} The proof is an adaptation of an argument in \cite{LN} Theorem 4 to the case where the background geometry is not conformally flat.
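Before proceeding we recall the form of the Maclaurin inequality that will be invoked: for $\lambda$ in the positive cone $\Gamma_k^+$ one has
\begin{align*}
\left( \frac{\sigma_k(\lambda)}{\binom{n}{k}} \right)^{\frac{1}{k}} \leq&\ \frac{\sigma_1(\lambda)}{n}.
\end{align*}
In particular a lower bound on $\sigma_k$ forces a lower bound on the trace, so an upper bound for solutions of the (suitably normalized) $\sigma_1$ equation controls solutions of each of the $\sigma_k$ equations from above.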
We observe that by the Maclaurin inequality it suffices to find a supersolution to the equation \begin{align*} -S := \sigma_1[- \Ric(e^{2 u})] = n e^{2 u} \end{align*} to bound the solution to any of the $\sigma_k$ equations from above. This equation is given by \begin{gather} \label{universalsupersoln1} - \frac{S_g}{n-1} + 2 \Delta u + (n-2) \brs{d u}^2 = \frac{n}{n-1} e^{2 u}. \end{gather} We proceed to find a local supersolution to this equation. Take a point $x_0$ in $M$ with distance $d$ from the boundary. Consider a geodesic running from the point on the boundary which is closest to $x_{0}$, passing through $x_{0}$, and out a small distance $R$ into the manifold to a point $z_{0}$. We will fix a small $R$ and a function $f(t)$ based on the following. We choose both $d$ and $R$ small enough so that, given any such point $z_{0}$ as above a distance $R + d$ from the boundary, the geodesics inside $B_R(z_0)$ intersect the boundary only once, and on this ball $\Delta d^{2}(z_{0},\cdot)\geq 1$. Further we would like to choose $R$ small enough so that there is a solution to the differential relation on $[0,R^{2}]$ \begin{align*} (n-2)\left( f^{\prime}\right) ^{2}+2f^{\prime\prime} & \leq0,\\ f^{\prime} & > \max_{M}|S|+C(g),\\ f(0) & =0 \end{align*} where $S$ as above is the scalar curvature on $M$. In particular, one may choose \begin{align*} f(t) =&\ \sqrt{t + \ge^2} - \ge \end{align*} and then for $t$ and $\ge$ chosen small with respect to fixed constants the required properties are satisfied.
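For the reader's convenience we verify that this choice has the stated properties. With $f(t) = \sqrt{t + \ge^2} - \ge$ one computes
\begin{align*}
f'(t) =&\ \frac{1}{2 \sqrt{t + \ge^2}} > 0, \qquad f''(t) = - \frac{1}{4 (t + \ge^2)^{\frac{3}{2}}},
\end{align*}
so that
\begin{align*}
(n-2) \left( f' \right)^2 + 2 f'' =&\ \frac{1}{4 (t + \ge^2)} \left( (n-2) - \frac{2}{\sqrt{t + \ge^2}} \right) \leq 0
\end{align*}
provided $(n-2) \sqrt{t + \ge^2} \leq 2$, while on $[0,R^2]$ one has $f' \geq \frac{1}{2 \sqrt{R^2 + \ge^2}}$, which exceeds $\max_M |S| + C(g)$ once $R$ and $\ge$ are chosen small. Clearly $f(0) = 0$.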
Let $r$ denote the distance function from the point $z_0$, and define a function $\bar{u}$ on $B_R(z_0)$ by \begin{align*} \bar{u} =&\ -\ln(R^{2}-r^{2})+f(R^{2}-r^{2})+\ln2+\ln R+\frac{1}{2}\ln(n-1). \end{align*} One directly computes \begin{align*} d\bar{u} =&\ \left( \frac{1}{R^{2}-r^{2}}-f^{\prime}\right) dr^{2}\\ \nabla^{2}\bar{u} =&\ \left( \frac{1}{R^{2}-r^{2}}-f^{\prime}\right) \nabla^{2}r^{2}+\left( \left(\frac{1}{R^{2}-r^{2}}\right)^{2}+f^{\prime\prime}\right) dr^{2}\otimes dr^{2}\\ \Delta \bar{u}=&\ \left( \frac{1}{R^{2}-r^{2}}-f^{\prime}\right) \Delta r^{2}+\left( \left( \frac{1}{R^{2}-r^{2}}\right) ^{2}+f^{\prime\prime}\right) |d r^{2}|^{2}. \end{align*} It follows that the left hand side of equation (\ref{universalsupersoln1}) becomes \begin{align*} & -\frac{S_{0}}{n-1}+2\left( \frac{1}{R^{2}-r^{2}}-f^{\prime}\right) \Delta r^{2}+2 \left( \left( \frac{1}{R^{2}-r^{2}}\right) ^{2}+f^{\prime\prime}\right) |d r^{2}|^{2}\\ &\ +(n-2)\left( \frac{1}{R^{2}-r^{2}}-f^{\prime}\right) ^{2}|d r^{2}|^{2}\\ & =\frac{1}{\left( R^{2}-r^{2}\right) ^{2}}\left\{ \begin{array}[c]{c} \left[ 2\left( R^{2}-r^{2}\right) -2 f^{\prime}\left( R^{2}-r^{2}\right)^2 \right] (2n+ \mbox{tr}_g K)+2\left[ 1+\left( R^{2}-r^{2}\right) ^{2}f^{\prime\prime}\right] 4r^{2}\\ +(n-2)\left[ 1-2\left( R^{2}-r^{2}\right) f^{\prime}+\left( R^{2}-r^{2}\right) ^{2}\left( f^{\prime}\right) ^{2}\right] 4r^{2}-\frac{S_{0}}{n-1}\left( R^{2}-r^{2}\right) ^{2} \end{array} \right\} \end{align*} where $K:=\nabla^{2}r^2-2I.$ Now using $(n-2)\left( f^{\prime}\right) ^{2}+2f^{\prime\prime}\leq0,$ we may continue the calculation to bound the above expression.
In particular \begin{align*} & \leq\frac{1}{\left( R^{2}-r^{2}\right) ^{2}}\left\{ \begin{array}[c]{c} \left[ 2\left( R^{2}-r^{2}\right) -2 f^{\prime}\left( R^{2}-r^{2}\right)^2 \right] (2n+ \mbox{tr}_g K)+8r^{2}\\ +(n-2)\left[ 1-2\left( R^{2}-r^{2}\right) f^{\prime}\right] 4r^{2}-\frac{S_{0}}{n-1}\left( R^{2}-r^{2}\right) ^{2} \end{array} \right\} \\ & =\frac{1}{\left( R^{2}-r^{2}\right) ^{2}}\left\{ \begin{array}[c]{c} 4nR^{2}-4nr^{2}+2 \mbox{tr}_g K \left( R^{2}-r^{2}\right) -2 f^{\prime}\left( R^{2}-r^{2}\right)^2 \Delta r^{2}+8r^{2}\\ +4(n-2)r^{2}-8(n-2)\left( R^{2}-r^{2}\right) f^{\prime}r^{2}-\frac{S_{0}}{n-1}\left( R^{2}-r^{2}\right) ^{2} \end{array} \right\} \\ & =\frac{1}{\left( R^{2}-r^{2}\right) ^{2}}\left\{ \begin{array}[c]{c} 4nR^{2}+2 \mbox{tr}_g K\left( R^{2}-r^{2}\right) -2f^{\prime}\left( R^{2}-r^{2}\right)^2 \Delta r^{2}\\ -8(n-2)\left( R^{2}-r^{2}\right) f^{\prime}r^{2}-\frac{S_{0}}{n-1}\left( R^{2}-r^{2}\right) ^{2} \end{array} \right\} \\ & \leq\frac{1}{\left( R^{2}-r^{2}\right) ^{2}}\left\{ 4nR^{2}-\left(-2 \mbox{tr}_g K+\frac{S_{0}}{n-1}\left( R^{2}-r^{2}\right) +2 f^{\prime}(R^2 - r^2)\right)\left( R^{2}-r^{2}\right) \right\} \end{align*} where in the last line we used that $f' > 0$ and $\Delta r^2 \geq 1$. Applying the second defining property of $f$ we conclude that \begin{align*} - \frac{S_g}{n-1} + 2 \Delta \bar{u} + (n-2) \brs{d \bar{u}}^2 \leq&\ 4 n R^2 \frac{1}{(R^2 - r^2)^2}\\ \leq&\ 4 n R^2 \frac{1}{(R^2 - r^2)^2} e^{2 f}\\ =&\ \frac{n}{n-1} e^{2 \bar{u}}. \end{align*} So, given $x_0$ as above and noting that $\bar{u}$ is infinite on $\partial B_{R}(z_0)$, we apply the maximum principle on this ball to conclude that \begin{align*} u(x_{0}) & \leq \bar{u}(x_{0})\\ =&\ -\ln(R^{2}-(R-d)^{2})+\ln2+\ln R+\frac{1}{2}\ln(n-1)+f(R^{2}-(R-d)^{2})\\ =&\ -\ln(2Rd-d^{2})+\ln2+\ln R+\frac{1}{2}\ln(n-1)+f(2Rd-d^{2})\\ =&\ -\ln d-\ln(2R-d)+\ln2R+\frac{1}{2}\ln(n-1)+f(2Rd-d^{2})\\ =&\ -\ln d+\frac{1}{2}\ln(n-1)-\ln\frac{(2R-d)}{2R}+f(d(2R-d)).
\end{align*} Taking the limit as $d$ goes to zero yields the first result of the lemma. To see the second statement, note that the supersolution $\bar{u}$ can be constructed as above on any sufficiently small ball in $M$. Indeed, given $z_0 \in M$ with $B_R(z_0) \subset M$ with $R$ sufficiently small, the estimates above yield that \begin{align*} u(z_0) \leq \bar{u}(z_0) =&\ \ln \frac{2(n-1)^{\frac{1}{2}}}{R} + f(R^2) \leq C(R). \end{align*} \end{proof} \end{lemma} \noindent We are now ready to give the proof of Theorem \ref{complete}. \begin{proof} Our proof is similar in nature to \cite{LN} Theorem 4, and indeed we will exploit an estimate derived there for our purposes. We reuse the notation of the previous sections. The first step is to construct a solution to the problem \begin{gather} \label{infdir} \begin{split} F_1(u) =&\ 0\\ \lim_{x \to \partial M} u(x) =&\ \infty. \end{split} \end{gather} Remark \ref{bndrycond} guarantees the existence of functions $u_j$ solving \begin{align*} F_1(u_j) =&\ 0\\ {u_j}_{|\partial M} \equiv&\ j. \end{align*} We claim that we can extract a subsequence which converges uniformly on compact sets to a solution to (\ref{infdir}). First observe that $u_j \geq u_0$ by Proposition \ref{mp}. Furthermore, by the second statement of Lemma \ref{universalsupersoln} we have that for any given compact $K \subset M \backslash \partial M$, there exists a constant $C = C(K)$ such that $u_j \leq C(K)$ for all $j \geq 0$, the constant depending on $d(K, \partial M)$. Therefore we may apply the interior regularity estimates for solutions to $F_1(u) = 0$ to conclude uniform $C^l$ bounds for any $l$ on any given compact subset $K \subset M \backslash \partial M$. Interior regularity for such equations is well established, and one may see for instance \cite{Guan2} Theorems 2.1, 3.1. By the Arzel\`a-Ascoli theorem, we conclude that a subsequence $u_{j_n}$ converges uniformly on compact sets to a function $u_{\infty}$.
To show that $u_{\infty}$ is indeed a solution to (\ref{infdir}), first note that by Proposition \ref{mp} the sequence $\{u_j\}$ is monotonically increasing. Applying Lemma \ref{universalsubsoln} we conclude \begin{align*} u_{\infty} =&\ \lim_{j \to \infty} u_j\\ \geq&\ \lim_{j \to \infty} \left[- \ln (r + e^{-j}) + \frac{1}{2} \ln (n-1) + w \right]\\ =&\ - \ln r + \frac{1}{2} \ln(n-1) + w. \end{align*} Therefore \begin{align*} \lim_{x \to \partial M} u_{\infty} \geq&\ \lim_{x \to \partial M} - \ln r + \frac{1}{2} \ln(n-1) + w\\ =&\ \lim_{x \to \partial M} - \ln r + \frac{1}{2} \ln(n-1). \end{align*} This in fact yields the precise expected asymptotic lower limit for $u_{\infty}$. Lemma \ref{universalsupersoln} yields the asymptotic upper limit, implying the precise asymptotic behaviour of $u_{\infty}$ near the boundary. Finally, we show uniqueness of the solution. Suppose one had two solutions $u$ and $v$ to equation (\ref{infdir}). We have already shown that the asymptotic limits of $u$ and $v$ are the same at $\partial M$. Therefore, let $r$ denote distance from the boundary of $M$ and consider $M_{\delta} = \{r \geq \delta \}$ with boundary $\partial M_{\delta} = \{r = \delta \}$ for small $\delta > 0$. Both $u$ and $v$ are solutions to $F_1(\cdot) = 0$ on $M_{\delta}$ with nearly equal boundary conditions. In particular, given $\ge > 0$ we may choose $\delta$ small such that $u \leq v + \ge$ on $\partial M_{\delta}$. Furthermore, it is clear that $v + \ge$ is a supersolution to $F_1(\cdot) = 0$, therefore by Proposition \ref{mp} we conclude that $u \leq v + \ge$ on $M_{\delta}$. Taking the limit as $\ge$ goes to $0$ we see that $\delta \to 0$ as well, and so we conclude that $u \leq v$. However, the argument is symmetric, hence $v \leq u$ and so $u \equiv v$. \end{proof} \section{Conformally Compact Manifolds} In this section we give an application of Theorem \ref{complete} to the study of Poincar\'e-Einstein metrics.
The material is inspired by the work of Mazzeo-Pacard, and we will refer to \cite{MP} for many of the details. We also change notation for this section to that more commonly used in the study of Poincar\'e-Einstein metrics. The following definition of conformally compact metrics contains the basic setup. \begin{defn} Let $\bar{X}^{n+1}$ be a compact manifold with boundary $M^n = \partial X^{n+1}$. A Riemannian metric $g_{+}$ defined in the interior $X^{n+1}$ is said to be {\em conformally compact} if there is a nonnegative defining function $\rho \in C^{\infty}(\bar{X}^{n+1})$ with \begin{align*} \rho &> 0 \ \mbox{ in } X^{n+1}, \\ \rho &= 0 \ \mbox{ on } M^n, \\ |\nabla_g \rho| &\neq 0 \ \mbox{ on } M^n, \end{align*} such that $\bar{g} = \rho^2 g_{+}$ defines a Riemannian metric on $\bar{X}^{n+1}$, and $\bar{g}$ extends at least continuously to $M$. The manifold $(M^n,\bar{g})$ is called the {\em conformal infinity} of $(X^{n+1},g_{+})$. \end{defn} Note that one can obtain other defining functions through multiplication by a positive function; thus the object naturally associated to a conformally compact manifold is not the metric $\bar{g}$ per se (which depends on $\rho$) but the conformal class $[\bar{g}]$ of its conformal infinity. The curvature transformation formulas for conformal metrics automatically imply that any conformally compact manifold is {\em asymptotically hyperbolic}: i.e., all the sectional curvatures of $g_{+}$ converge to $-1$ at infinity. A conformally compact manifold $(X^{n+1},g_{+})$ satisfying the Einstein condition \begin{align} \label{ECon} Ric(g_{+}) = -n g_{+} \end{align} is called a {\em Poincar\'{e}-Einstein} (P-E) metric. The canonical example of such a metric is $X^{n+1} = B^{n+1} \subset \mathbb{R}^{n+1}$, the unit ball, with $g_{+}$ the hyperbolic metric, $\bar{g} = \frac{1}{4} (1-|x|^2)^2 g_{+} = ds^2$ the Euclidean metric, and the conformal infinity is the round sphere $(\mathbb{S}^n,g = \bar{g}|_{\mathbb{S}^n})$.
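To illustrate the definition in this example, note that one may take $\rho(x) = \frac{1}{2}(1-|x|^2)$ as a defining function for the ball; since the hyperbolic metric is $g_+ = \frac{4}{(1-|x|^2)^2}\, ds^2$, one checks directly that
\begin{align*}
\rho^2 g_+ =&\ \frac{(1-|x|^2)^2}{4} \cdot \frac{4}{(1-|x|^2)^2}\, ds^2 = ds^2,
\end{align*}
which is smooth up to the boundary, with conformal infinity the round sphere.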
Due to its connection to the AdS/CFT correspondence (see \cite{Witten}) there is an extensive literature on the subject of P-E metrics and their physical/geometric properties. The question of the existence of a P-E metric with given conformal infinity can be interpreted as an asymptotic Dirichlet problem. In \cite{MP}, Mazzeo-Pacard explored the connection between the existence of P-E metrics and the $\sigma_k$-Yamabe problem. More precisely, let $g_{+}$ be a P-E metric; by (\ref{ECon}) the Schouten tensor is given by \begin{align*} A(g_{+}) = -\frac{1}{2}g_{+}, \end{align*} so that \begin{align} \label{skY} \sigma_k[ -A(g_{+})] = \frac{1}{2^k} \binom{n + 1}{k} \equiv \beta_{k,n}. \end{align} Therefore, a P-E metric is a solution (indeed, the unique solution) of the $k$-Yamabe problem in its conformal class, for all $1 \leq k \leq n+1$. The converse is also true: a conformally compact metric $g_{+}$ satisfying (\ref{skY}) for all $1 \leq k \leq n+1$ is obviously P-E. The main result of Mazzeo-Pacard is a more precise statement of this equivalence: \begin{thm} \label{MPThm} {\em (See \cite{MP}, Theorems 1, 3)} Let $\Sigma_k$ denote the set of conformally compact metrics on $X^{n+1}$ with Schouten $\sigma_k$-curvature equal to $\beta_{k,n}$. \vskip.1in \noindent $(i)$ If $g \in \Sigma_k$, then there is a neighborhood $\mathcal{U}$ of $g$ in the space of conformally compact metrics on $X^{n+1}$ such that $\mathcal{U} \cap \Sigma_k$ is an analytic Banach submanifold of $\mathcal{U}$ (with respect to an appropriate Banach topology). \vskip.1in \noindent $(ii)$ In addition, \begin{align*} \mathcal{E} = \bigcap_{k=1}^{n+1} \Sigma_k, \end{align*} where $\mathcal{E}$ is the set of Poincar\'{e}-Einstein metrics. Hence, $\mathcal{E}$ is a finite intersection of locally closed Banach submanifolds, and in particular is always closed in the space of conformally compact metrics on $X^{n+1}$. 
\end{thm} This equivalence is not just an algebraic curiosity: as Mazzeo-Pacard point out, the linearization of the P-E condition may have a nontrivial finite dimensional cokernel, while (as we saw in Section 2) the linearization of the Schouten equations does not, at least in the negative cone. On the other hand, it is important to point out that when $k \geq 2$, aside from P-E metrics (and perturbations arising from the above Theorem) there are no general existence results for metrics in $\Sigma_k$. Indeed, for $k \geq 2$, given a conformal infinity $(M^n,[\bar{g}])$ there may be no conformally compact metrics $g_{+} \in \Sigma_k$ with $\rho^2 g_{+}|_{M^n} = \bar{g}$. By contrast, it follows from Theorem \ref{complete} that {\em every} conformally compact manifold $(X^{n+1}, g_{+} = \rho^{-2}\bar{g})$ admits a unique conformal metric $h_k = e^{2w_k} \bar{g}$ with \begin{gather*} \sigma_k[ - h_k^{-1} Ric(h_k)] = \til{\beta}_{k, n} > 0 \end{gather*} where we define the constants \begin{align} \label{betap} \tilde{\beta}_{k,n} = \sigma_k( n g ) = n^k \binom{n + 1}{k}, \end{align} as the values of $\sigma_k[-Ric]$ for a Poincar\'{e}-Einstein metric normalized as in (\ref{ECon}). Therefore, it is natural to ask whether the results of Mazzeo-Pacard have a counterpart for symmetric functions of the Ricci tensor. The answer turns out to be yes. \begin{thm} Let $(X^{n+1}, g_+)$ be a conformally compact manifold. Let $\Theta_k$ denote the set of conformally compact metrics on $X^{n+1}$ with $\sigma_k[-g_+^{-1} \Ric]$ equal to $\tilde{\beta}_{k,n}$. \vskip.1in \noindent $(i)$ Given a conformally compact metric $g_{+} = \rho^{-2} \bar{g}$, and $1 \leq k \leq n+1$, there is a unique conformally compact metric $h_k = e^{2w_k} \bar{g} \in \Theta_k$. \vskip.1in \noindent $(ii)$ Let $\mathcal E$ denote the space of Poincar\'e-Einstein metrics.
Then \begin{align*} \mathcal{E} = \bigcap_{k=1}^{n+1} \Theta_k. \end{align*} Hence $\mathcal{E}$ is a finite intersection of locally closed Banach submanifolds, and in particular is always closed in the space of conformally compact metrics on $X^{n+1}$. \begin{proof} We sketch the details, as the proof is a straightforward adaptation of the proof of Theorem \ref{MPThm}. That proof relies on the structure of the linearized operator for the Schouten tensor equations, and except for some differences of constants, the linearized operator of the corresponding equations for the Ricci tensor is the same. To be more precise, let \begin{align*} \mathcal{H}_k(g_{+},w) =&\ \sigma_k \big[ -Ric(g_{+}) + (n-2)\nabla^2 w + \Delta w g_{+} \\ &\ \qquad - (n-2) (dw \otimes dw -|dw|^2 g_{+}) \big] - \tilde{\beta}_{k,n} e^{2kw}. \end{align*} Thus, if $\mathcal{H}_k(g_{+},w) = 0$ (and $e^{2w}g_{+}$ is conformally compact) then $e^{2w}g_{+} \in \Theta_k$, and conversely. If $g_{+}$ is P-E, then the linearization of $\mathcal{H}_k$ with respect to $w$ is given by \begin{align*} (\mathcal{L}_{Ric})_k \phi = \tilde{c}_{k,n} \Delta \phi - 2k \tilde{\beta}_{k,n} \phi, \end{align*} where \begin{align*} \tilde{c}_{k,n} = 2(n-1) n^{k-1} \binom{n}{k}. \end{align*} If we linearize the Schouten tensor equations at a P-E metric, the operator is given by \begin{align*} (\mathcal{L}_{A})_k \phi = c_{k,n} \Delta \phi - 2k \beta_{k,n} \phi, \end{align*} where \begin{align*} c_{k,n} = 2^{1-k} \binom{n}{k}. \end{align*} The essential feature is that, in both cases, there is one positive and one negative indicial root of the associated normal operator. This makes it possible to choose an appropriate weighted space on which $(\mathcal{L}_{A})_k$ and $(\mathcal{L}_{Ric})_k$ are Fredholm (see Section 2 of \cite{MP}). After setting up the right function spaces and mappings, both statements of Theorem \ref{MPThm} follow from a version of the implicit function theorem in, for example, \cite{MS}.
\end{proof} \end{thm} Finally, we note that the above characterization of $\mathcal{E}$ can be weakened considerably, and this involves the introduction of a family of $n$ potentially interesting conformal invariants. \begin{defn} Let $(X^{n+1},g_{+} = \rho^{-2}\bar{g})$ be a conformally compact manifold. Fix $1 \leq k \leq n+1$, and let $h_k = e^{2w_k}\bar{g}$ be the unique conformally compact metric satisfying \begin{align*} \sigma_k [ - h_k^{-1} \Ric(h_k) ] =&\ \til{\beta}_{k,n}. \end{align*} Given $1 \leq k \leq n$, let \begin{align} \label{Hdef} H_{k} = w_k - w_{n+1}. \end{align} \end{defn} \begin{prop} \label{Hprop} Let $(X^{n+1}, g_{+} = \rho^{-2} \bar{g})$ be a conformally compact manifold, and fix $1 \leq k \leq n$. \begin{enumerate} \item{$H_{k}$ is a conformal invariant, that is, the definition above does not depend on the choice of conformal background metric $\bar{g}$.} \item {$H_{k} \in C^{\infty}(X^{n+1}) \cap C^0(\bar{X}^{n+1})$} \item{$H_{k} = 0$ on $\partial X^{n+1} = M^n$} \item{$H_{k} \geq 0$ in $X^{n+1}$. Moreover, $H_{k}(x_0) = 0$ at some point $x_0 \in X^{n+1}$ if and only if $H_{k} \equiv 0$ and $g_{+}^1 = g_{+}^{n+1}$ is a Poincar\'{e}-Einstein metric.} \end{enumerate} \begin{proof} (1) If we change $\bar{g}$ to $e^{2\phi}\bar{g}$ for some $\phi \in C^{\infty}(\bar{X}^{n+1})$, then the corresponding solutions to the $\sigma_k$ problem are respectively $w_k - \phi$ and $w_{n+1} - \phi$. Therefore, $H_{k} = H_{k}(g_{+})$ is uniquely determined by a given conformally compact metric, independent of the choice of defining function. (2,3) The solutions $w_k$ are always smooth on the interior. Moreover, by Lemmas \ref{universalsubsoln} and \ref{universalsupersoln} they have the same asymptotic limit at $M^n$, hence ${H_{k}}_{|M^n} \equiv 0$. (4) By MacLaurin's inequality $w_k$ is a supersolution of the $\sigma_{n+1}$ equation, therefore by Proposition \ref{mp} we conclude that $H_{k} \geq 0$, and $H_{k}(x_0) = 0$ if and only if $H_{k}$ vanishes identically.
If this is the case, the characterization of equality in the Newton-MacLaurin inequality yields that $g_+^1 = g_+^{n+1}$ is a Poincar\'e-Einstein metric. \end{proof} \end{prop} This proposition says that $\mathcal{E}$ is the set of all conformally compact metrics $g_{+}$ for which, for some $1 \leq k \leq n$, the function $H_{k} (g_{+})$ vanishes somewhere, and hence everywhere. From this perspective, the conformal invariants $H_{k}$ carry the same information for different choices of $k$. \section{Remarks on positive curvature} \begin{rmk} In the notation of the introduction, let $\mathcal S = \{ S_g > 0 \}$ where here $S_g$ is the Schouten tensor of $g$. Recall the equation for the conformal Schouten tensor: \begin{gather*} S_{e^{-2u} g} = S_g + \nabla^2 u + du \otimes du - \frac{1}{2} \brs{d u}^2 g. \end{gather*} If we let $u = \ln w$ this is rewritten as \begin{gather*} S_{w^{-2} g} = S_g + \frac{1}{w} \nabla^2 w - \frac{1}{2} \frac{\brs{d w}^2}{w^2} g. \end{gather*} Now consider an open set $U$ in $\mathbb R^n$ with nonconvex boundary. Suppose one had a function $w > 0$ such that $S_{w^{-2} g} > 0$ and $w_{|\partial U} \equiv 0$. Since the background metric on $U$ is flat, it follows that $\nabla^2 w > 0$. Since $w_{|\partial U} \equiv 0$, it follows that $\partial U$ is a level set of a convex function, and as such should be convex, which it is not. Thus we can never solve for conformal deformation to positive Schouten tensor with this restricted boundary condition. However, it is not yet clear if we can solve without the boundary condition, or whether positive Ricci curvature can be solved for. \end{rmk} However, it is easy to show that on a surface with boundary one can always deform to a metric of positive scalar curvature. \begin{prop} \label{surfaces} Let $(M^2, \partial M, g)$ be a compact Riemannian surface with boundary. There exists $u \in C^{\infty}(M)$ such that $R(e^{-2u} g) > 0$ and $u_{|\partial M} \equiv 0$.
\begin{proof} On a surface one has \begin{align*} e^{2u} R(e^{-2u} g) = R(g) - 2 \Delta u. \end{align*} Therefore the problem reduces to solving the Dirichlet problem \begin{align*} - 2 \Delta u =&\ 1 - R(g)\\ u \equiv&\ 0 \mbox{ on } \partial M. \end{align*} Solvability of this equation is a known result (\cite{Aubin} Theorem 4.8). \end{proof} \end{prop} \noindent Therefore deformation to positive curvatures remains elusive, and the nature of the obstructions, if any, is not clear. To emphasize the issues here, we formally ask the following question. \begin{ques} Given $(M^{n}, \partial M, g), n \geq 3$, can we conformally deform $g$ to a metric with positive Ricci curvature? Scalar curvature? Can we do either while preserving the induced metric on the boundary? \end{ques} \bibliographystyle{hamsplain}
\section*{Supporting Online Material} \section*{Materials and Methods} We perform our measurements on $\left|2\right\rangle$-$\left|12\right\rangle$ and $\left|2\right\rangle$-$\left|23\right\rangle$ atom-dimer mixtures consisting of roughly $8.5\times 10^4$ atoms and $7.5\times 10^4$ dimers in an optical dipole trap with trapping frequencies $( \omega_x, \omega_y,\omega_z) \approx 2\pi \times (820,820,75)$\,Hz at a temperature of about $1\,\rm{\mu K}$, which we prepare as described in \cite{wir_unten, wir_atomdimer}. The homogeneous magnetic field is created by a pair of Helmholtz coils. To associate trimers we apply RF-fields for 35-50\,ms using an antenna resonant at $\sim 76$\,MHz driven by a 100\,W RF-amplifier. The Rabi frequency is $\Omega \approx 2 \pi \times 7$\,kHz ($\sim 20$\,kHz) for the $\left|2\right\rangle$-$\left|3\right\rangle$ ($\left|1\right\rangle$-$\left|2\right\rangle$) transition. The decay rate of the trimer is constrained by a lower bound of $50$\,kHz derived from the width of the three-atom loss resonance at 895\,G \cite{braaten_6li}; however, as the observed width of the atom-dimer resonance at 685\,G is larger, the decay rate in the magnetic field region of interest is expected to be higher. From the width of the association features we obtain a decay rate of $\sim 300$\,kHz; however, this is only an upper bound, as the features are broadened by temperature and saturation effects. Due to interference caused by the strong RF-fields, we have to deactivate the feedback of the magnetic field stabilization for the duration of the RF-pulse. This leads to a magnetic field uncertainty of at most $1$\,G. However, as the RF-transitions tune weakly with the magnetic field, this only causes a shift of the bare transition which is small compared to the width of the features.
After the RF-pulse we perform state-selective absorption imaging, which detects both free atoms in state $\left|2\right\rangle$ and - with slightly reduced efficiency - atoms in state $\left|2\right\rangle$ bound in $\left|12\right\rangle$ or $\left|23\right\rangle$ molecules. When fitting the spectra obtained for the $\left|2\right\rangle$-$\left|12\right\rangle$ mixture we perform a least-squares fit of the data with a model of the form $N(\nu) = N_0 - A_0 e^{-\frac{(\nu-\nu_0)^2}{w_0^2}} - A_1 e^{-\frac{(\nu-\nu_1)^2 }{w_1^2}}$, where we exclude data points which are affected by dimer dissociation. The free parameters are position, width and amplitude of the free-free ($\nu_0, w_0, A_0$) and association ($\nu_1, w_1, A_1$) peaks and the overall offset $N_0$. The given errors are the $2\sigma$ confidence bounds of the fit. However, due to the naive model for the lineshapes there can be additional systematic errors, which we estimate to be a fraction of the width $w_1 \approx 50$\, kHz of the association features. Temperature effects can cause an additional shift on the order of $\frac{3}{2}k_bT/h \approx 30$\,kHz. For the $\left|2\right\rangle$-$\left|23\right\rangle$ mixture we follow the same fitting procedure, except that we have to constrain the position of the free-free peak to the value calculated from the magnetic field to achieve a stable fit. \clearpage \begin{figure} \centering \includegraphics [width= 8cm] {schema_efimov_scenario_v3.pdf} \caption{Sketch of Efimov's scenario for the case of three identical scattering lengths. The square root of the binding energy of the universal dimer and trimer states is shown as a function of the inverse scattering length $1/a$. The crossings of the trimer states with the continuum follow a geometric scaling with a universal scaling factor $e^{\pi /s_0} \approx 22.7$, where $s_0 \approx 1.00624$ is a universal scaling parameter. 
For negative scattering length the first Efimov trimer crosses into the three-atom continuum at a critical scattering length $a_-^1$, followed by the second trimer state at $a_-^2 \approx 22.7 \, a_-^1$. If the scattering length diverges there is an infinite series of trimer states with exponentially decreasing binding energy. For positive scattering length these trimer states become unbound when they cross the atom-dimer threshold, i.e. their energy becomes degenerate with the energy of a dimer and a free atom. These crossings are spaced by the same universal scaling factor $e^{\pi /s_0}$.} \label{fig:schema_efimov_scenario} \end{figure} \begin{figure} \centering \includegraphics [width= 8cm] {sample_spectra_2-23_v3.pdf} \caption{Spectra for trimer association from a mixture of atoms in state $\left|2\right\rangle$ and $\left|23\right\rangle$ dimers (black circles). The red lines are a fit of two overlapping Gaussians to the data. The fit to the central dip is shown as a blue line; the difference between this fit and the data is shown as red squares. As the free atoms are driven to state $\left|1\right\rangle$ instead of state $\left|3\right\rangle$, the features for dimer dissociation appear at lower frequency than the bare transition, while the trimer association appears at higher frequency. Due to the smaller separation between the bare transition and the trimer association, the visibility of the association dip is not as good as for the $\left|2\right\rangle$-$\left|12\right\rangle$ mixture. For magnetic fields below 705\,G the association feature is too close to the bare transition to determine its position. Each data point is the average of eight to nine individual measurements.} \label{fig:sample_plots_2_23} \end{figure} \end{document}
\section{Introduction} The dimension-free Harnack inequality was first introduced by Wang \cite{FYW0} to derive the log-Sobolev inequality on Riemannian manifolds. As a weaker version of the power-Harnack inequality, the log-Harnack inequality was considered in \cite{RW} for semi-linear SDEs. These two Harnack-type inequalities have been intensively investigated and applied for various finite- and infinite-dimensional SDEs and SPDEs driven by Brownian noise; we refer to the monograph by F.-Y.\ Wang \cite{Wbook} for a systematic theory on dimension-free Harnack inequalities and applications. For functional SDEs and SPDEs, the Harnack inequalities are also investigated in \cite{BWY,BWY13}; see also \cite{SWY} for SDEs with non-Lipschitz coefficients and \cite{HW, HZ} for SDEs with Dini drifts. However, the noise in all the above results is assumed to contain a Brownian motion part. The central aim of this work is to establish Harnack inequalities for functional SDEs driven by subordinate Brownian motions, which form a very large class of L\'{e}vy processes. It turns out that our results cover the corresponding ones in the case without delay derived by J.\ Wang and F.-Y.\ Wang \cite{WW} (cf.\ \cite{Den14} for an improved estimate). Fix a constant $r_0\geq0$. Denote by $\C$ the family of all right continuous functions $f:[-r_0,0]\to\mathbb{R}^{d}$ with left limits. To characterize the state space, equip $\C$ with the norm $\|\cdot\|_2$ given by $$\|\xi\|_2^2:=\int_{-r_0}^0|\xi(s)|^2\,\d s+|\xi(0)|^2,\quad \xi\in\C. $$ For $f:[-r_0,\infty)\to\mathbb{R}^{d}$, we will denote by $f_t \in \C$, $t\geq 0$, the corresponding segment process, defined by $$f_t(s):=f(t+s),\quad s\in [-r_0,0].$$ Let $S=(S(t))_{t\geq0}$ be a subordinator (without killing), i.e.\ a nondecreasing L\'{e}vy process on $[0,\infty)$ starting at $S(0)=0$.
Due to the stationary and independent increments property, it is uniquely determined by the Laplace transform $$ \E\,\e^{-uS(t)}=\e^{-t\phi(u)},\quad u>0,\,t\geq 0, $$ where the characteristic (Laplace) exponent $\phi:(0,\infty)\rightarrow(0,\infty)$ is a Bernstein function with $\phi(0+):=\lim_{r\downarrow0}\phi(r)=0$, i.e.\ a $C^\infty$-function such that $(-1)^{n-1}\phi^{(n)}\geq0$ for all $n\in\N$. Every such $\phi$ has a unique L\'{e}vy--Khintchine representation (cf.\ \cite[Theorem 3.2]{SSV}) \begin{equation}\label{bern} \phi(u) =\kappa u+\int_{(0,\infty)}\left(1-\e^{-ux}\right) \,\nu(\d x),\quad u>0, \end{equation} where $\kappa\geq0$ is the drift parameter and $\nu$ is a L\'{e}vy measure on $(0,\infty)$ satisfying $$\int_{(0,\infty)}(1\wedge x) \,\nu(\d x)<\infty.$$ It is clear that $\tilde{\phi}(u):=\phi(u)-\kappa u$ is the Bernstein function of the subordinator $\tilde{S}(t):=S(t)-\kappa t$, which has zero drift and L\'{e}vy measure $\nu$. Consider the following functional SDE on $\mathbb{R}^{d}$: \beq\label{E1} \d X(t)=b(X(t))\,\d t+B(X_t)\,\d t +\d W(S(t)), \end{equation} where $W=(W(t))_{t\geq 0}$ is a $d$-dimensional standard Brownian motion on a complete filtered probability space $(\OO, \F, \{\F_{t}\}_{t\ge 0}, \P)$, $S=(S(t))_{t\geq 0}$ is a subordinator with Bernstein function of the form \eqref{bern} and independent of $W$, $b: \mathbb{R}^{d}\to \mathbb{R}^{d}$ is continuous, and $B: \C\to \mathbb{R}^{d}$ is measurable. We shall need the following condition on $b$ and $B$: \beg{enumerate} \item[\bf{(H)}] There exist constants $K\in\mathbb{R}$ and $K_1\geq0$ such that $$ \langle x-y,b(x)-b(y)\rangle\leq K|x-y|^2, \quad x,y\in\mathbb{R}^d, $$ and $$ |B(\xi)-B(\eta)|\leq K_1\|\xi-\eta\|_{2},\quad \xi,\eta\in\C. $$ \end{enumerate} \begin{rem}\label{EAU} The condition {\bf{(H)}} ensures the existence, uniqueness and non-explosion of the solution to \eqref{E1}.
Indeed, letting $L(t)=W(S(t))$, $\hat{b}(t,x)=b(x+L(t))$ and $\hat{B}(t,\xi)=B(\xi+L_t)$, one has $$\langle x-y,\hat{b}(t,x)-\hat{b}(t,y)\rangle\leq K|x-y|^2, \quad x,y\in\mathbb{R}^d,t\geq0$$ and $$ |\hat{B}(t,\xi)-\hat{B}(t,\eta)|\leq K_1\|\xi-\eta\|_{2},\quad \xi,\eta\in\C,t\geq0.$$ Then the following (functional) ordinary differential equation $$\d \hat{X}(t)=\hat{b}(t,\hat{X}(t))\,\d t+\hat{B}(t,\hat{X}_t)\,\d t$$ has a unique solution which does not explode in finite time; setting $X(t):=\hat{X}(t)+L(t)$, we know that \eqref{E1} has a unique non-explosive solution. \end{rem} For $\xi\in\C$, let $X_t^\xi$ be the solution to \eqref{E1} with $X_0=\xi$. Let $P_t$ be the semigroup associated to $X_t^\xi$, i.e. $$P_t f(\xi)=\mathbb{E}f(X_t^\xi),\quad f\in\B_b(\C).$$ The remaining part of this paper is organized as follows. In Section 2, we state our main results. By using the coupling by change of measure and an approximation technique, we establish in Section 3 the Harnack inequalities for functional SDEs driven by non-random time-changed Brownian motions. Section 4 is devoted to the proofs of Theorem \ref{T3.2} and Example \ref{ex1} presented in Section 2. \section{Main results} As usual, we make the convention that $\frac10=\infty$ and $0\cdot\infty=0$. \begin{thm}\label{T3.2} Assume {\bf (H)} and let $T>r_0$ and $S$ be a subordinator with Bernstein function $\phi$ of the form \eqref{bern}. 
\smallskip\noindent\textup{i)} \ For any $\xi, \eta\in \C$ and $f\in \B_b(\C)$ with $f\geq1$, \beg{equation*}\beg{split} P_T\log f(\eta)&\leq\log P_T f(\xi)+|\xi(0)-\eta(0)|^2\, \mathbb{E}\left(\int_0^{T-r_0}\e^{-2Kt}\,\d S(t)\right)^{-1} \\ &\quad+\frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right).\end{split}\end{equation*} \smallskip\noindent\textup{ii)} \ For any $p>1$, $\xi, \eta\in \C$ and non-negative $f\in \B_b(\C)$, \beg{equation*}\beg{split} (P_Tf)^p(\eta)& \le P_Tf^p(\xi)\left(\E\exp\left[\frac{p}{(p-1)^2}\,|\xi(0)-\eta(0)|^2 \left(\int_0^{T-r_0}\e^{-2Kt}\,\d S(t)\right)^{-1}\right] \right)^{p-1}\\ &\quad\times\exp\left[\frac{p}{p-1} \frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}\,|\xi(0)-\eta(0)|^2\right)\right]. \end{split}\end{equation*} \end{thm} \begin{rem}\label{nd} If $B=0$, then we can choose $r_0=0$ and $K_1=0$, and thus the assertions in Theorem \ref{T3.2} reduce to the ones derived in \cite{WW} for the case without delay. \end{rem} For a measurable space $(E,\scr F)$, let $\scr P(E)$ denote the family of all probability measures on $(E,\F)$. For $\mu,\nu\in\scr P(E)$, the entropy $\Ent(\nu|\mu)$ is defined by $$\Ent(\nu|\mu):= \beg{cases} \int (\log \ff{\d\nu}{\d\mu})\,\d\nu, \ &\text{if}\ \nu\ \text{ is\ absolutely\ continuous\ with\ respect\ to}\ \mu,\\ \infty,\ &\text{otherwise;}\end{cases}$$ the total variation distance $\|\mu-\nu\|_{\operatorname{var}}$ is defined by $$\|\mu-\nu\|_{\operatorname{var}} := \sup_{A\in\F}|\mu(A)-\nu(A)|.$$ By Pinsker's inequality (see \cite{CK, Pin}), \beq\label{ETX} \|\mu-\nu\|_{\operatorname{var}}^2\le \ff 1 2 \Ent(\nu|\mu),\quad \mu,\nu\in \scr P(E).\end{equation} For $\xi\in\C$, let $P_T(\xi,\cdot)$ be the distribution of $X_T^\xi$. 
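Pinsker's inequality \eqref{ETX} quoted above is elementary to verify numerically. The sketch below checks it for pairs of Bernoulli laws, where the total variation distance and the relative entropy both have closed forms (purely illustrative; the inequality of course holds on any measurable space).

```python
import math

def tv_bernoulli(p, q):
    # total variation distance between Bernoulli(p) and Bernoulli(q)
    return abs(p - q)

def kl_bernoulli(q, p):
    # Ent(nu|mu) for nu = Bernoulli(q), mu = Bernoulli(p), 0 < p, q < 1
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

# Pinsker: ||mu - nu||_var^2 <= (1/2) Ent(nu|mu)
for p in (0.1, 0.3, 0.5, 0.8):
    for q in (0.2, 0.4, 0.6, 0.9):
        assert tv_bernoulli(p, q) ** 2 <= 0.5 * kl_bernoulli(q, p) + 1e-12
print("Pinsker's inequality verified on the Bernoulli grid")
```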
The following corollary is a direct consequence of Theorem \ref{T3.2}, see \cite[Theorem 1.4.2]{Wbook} for the proof; we also refer to \cite[Subsection 1.4.1]{Wbook} for an in-depth explanation of the applications of the Harnack inequalities. \begin{cor}\label{density0} Let the assumptions in Theorem \ref{T3.2} hold. Then the following assertions hold. \smallskip\noindent\textup{i)} \ For any $\xi,\eta\in\C$, $P_T(\xi,\cdot)$ is equivalent to $P_T(\eta,\cdot)$ and \begin{align*}\Ent\big(P_{T}(\xi,\cdot)|P_{T}(\eta,\cdot)\big) &\leq |\xi(0)-\eta(0)|^2\mathbb{E}\left(\int_0^{T-r_0}\e^{-2Kt}\,\d S(t)\right)^{-1}\\ &\quad+\frac{K_1^2}{\kappa} \left(r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}\, |\xi(0)-\eta(0)|^2\right), \end{align*} which together with Pinsker's inequality \eqref{ETX} implies that \begin{align*} 2\|P_T(\xi,\cdot)-P_T(\eta,\cdot)\|_{\operatorname{var}}^2 &\le |\xi(0)-\eta(0)|^2\mathbb{E}\left(\int_0^{T-r_0}\e^{-2Kt}\,\d S(t)\right)^{-1}\\ &\quad +\frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right). \end{align*} \smallskip\noindent\textup{ii)} \ For any $p>1$ and $\xi,\eta\in\C$, \begin{align*}&P_T\left\{\left(\frac{\d P_T(\xi,\cdot)}{\d P_T(\eta,\cdot)}\right)^{1/(p-1)}\right\}(\xi)\leq \E\exp\left[\frac{p}{(p-1)^2}\,|\xi(0)-\eta(0)|^2\left(\int_0^{T-r_0}\e^{-2Kt}\,\d S(t)\right)^{-1}\right]\\ &\qquad\qquad\quad\times\exp\left[\frac{p}{(p-1)^2} \frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}\,|\xi(0)-\eta(0)|^2\right)\right]. \end{align*} \end{cor} \begin{exa}\label{ex1} Assume that {\bf(H)} holds with $K=0$. Let $T> r_0$, and $S$ be a subordinator with Bernstein function $\phi(u)\geq\kappa u+cu^\alpha$ \textup{(}$\kappa\geq0$, $c>0$, $0<\alpha<1$\textup{)}. 
\smallskip\noindent\textup{i)} \ There exists $C=C(\alpha,c)>0$ such that for any $\xi, \eta\in \C$ and $f\in \B_b(\C)$ with $f\geq1$, \begin{align*} P_T\log f(\eta)&\leq\log P_T f(\xi)+ \frac{C|\xi(0)-\eta(0)|^2}{[\kappa(T-r_0)] \vee(T-r_0)^{1/\alpha}}\\ &\quad+\frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2+(T+1)(T-r_0)|\xi(0)-\eta(0)|^2\right). \end{align*} \smallskip\noindent\textup{ii)} \ If in addition $1/2<\alpha<1$, then there exists $C=C(\alpha,c)>0$ such that for any $p>1$, $\xi, \eta\in \C$ and non-negative $f\in \B_b(\C)$, \beg{equation*}\beg{split} &(P_Tf)^p(\eta) \le P_Tf^p(\xi)\cdot\exp\left[\frac{p}{p-1} \frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)(T-r_0)|\xi(0)-\eta(0)|^2\right)\right]\\ &\quad\times \exp\left[ C\left( \frac{p|\xi(0)-\eta(0)|^2}{(p-1)(T-r_0)^{1/\alpha}} +\frac{\left[p|\xi(0)-\eta(0)|^2\right]^{\alpha/(2\alpha-1)}} {\left[(p-1)(T-r_0)\right]^{1/(2\alpha-1)}} \right) \wedge \frac{p|\xi(0)-\eta(0)|^2}{(p-1)\kappa(T-r_0)} \right]. \end{split}\end{equation*} \end{exa} \section{Harnack inequalities under deterministic time-change} Let $\ell:[0,\infty)\rightarrow[0,\infty)$ be a sample path of $S$ (with Bernstein function $\phi$ of the form \eqref{bern}), which is a non-decreasing and c\`{a}dl\`{a}g function with $\ell(0)=0$. By {\bf(H)} and the same explanation as in Remark \ref{EAU}, for any $\xi\in\C$, the following functional SDE has a unique non-explosive solution with $X_0^{\ell}=\xi$: \begin{equation}\label{jg4dgv} \d X^\ell(t)=b(X^\ell(t))\,\d t+B(X^\ell_t)\,\d t +\d W(\ell(t)). \end{equation} We denote the solution by $X_t^{\ell,\xi}$. Let $$P^\ell_t f(\xi)=\mathbb{E}f(X_t^{\ell,\xi}),\quad t\geq0,f\in\B_b(\C),\xi\in\C.$$ \begin{prp}\label{dfr3s} Assume {\bf(H)} and let $T>r_0$. 
\smallskip\noindent\textup{i)} \ For any $\xi, \eta\in \C$ and $f\in \B_b(\C)$ with $f\geq1$, \begin{align*} P_T^{\ell} \log f(\eta) &\leq \log P_T^{\ell} f(\xi)+ |\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell(t) \right)^{-1}\\ &\quad+\frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right). \end{align*} \smallskip\noindent\textup{ii)} \ For any $p>1$, $\xi, \eta\in \C$ and non-negative $f\in \B_b(\C)$, \begin{align*} \left(P_T^{\ell} f(\eta)\right)^p& \leq P_T^{\ell} f^p(\xi) \exp\left[\frac{p}{p-1}|\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell(t) \right)^{-1}\right]\\ &\quad\times\exp\left[\frac{p}{p-1} \frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right)\right]. \end{align*} \end{prp} Following the line of \cite{Den14, DS16, WW, WZ15, Zha13}, for $\varepsilon\in(0,1)$, consider the following regularization of $\ell$: $$\ell^\varepsilon(t):=\frac{1}{\varepsilon} \int_{t}^{t+\varepsilon}\ell(s)\,\d s+\varepsilon t =\int_0^1\ell(\varepsilon s+t)\,\d s+\varepsilon t, \quad t\geq0.$$ It is clear that, for each $\varepsilon\in(0,1)$, the function $\ell^\varepsilon$ is absolutely continuous, strictly increasing and satisfies for any $t\geq0$ \begin{equation}\label{approximation} \ell^\varepsilon(t)\downarrow\ell(t)\quad \text{as $\varepsilon\downarrow0$}. \end{equation} For $\xi\in\C$, let $X_t^{\ell^\varepsilon,\xi}$ be the solution to the following functional SDE with initial value $\xi$: $$ \d X^{\ell^\varepsilon,\xi}(t)=b(X^{\ell^\varepsilon,\xi}(t))\,\d t+B(X_t^{\ell^\varepsilon,\xi})\,\d t+\d W(\ell^\varepsilon(t)-\ell^\varepsilon(0)). $$ The associated semigroup is denoted by $P_t^{\ell^\varepsilon}$. Note that this SDE is indeed driven by Brownian motions and thus the method of coupling and Girsanov's transformation can be used to establish the dimension-free Harnack inequalities for $P_t^{\ell^\varepsilon}$. 
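To make the regularization $\ell^\varepsilon$ concrete, the sketch below evaluates $\ell^\varepsilon(t)=\int_0^1\ell(\varepsilon s+t)\,\d s+\varepsilon t$ by a midpoint rule for a simple two-jump step path $\ell$ and checks the properties used above: $\ell^\varepsilon$ is strictly increasing, and $\ell^\varepsilon(t)$ decreases to $\ell(t)$ as $\varepsilon\downarrow0$. The step path is an illustrative stand-in for a (driftless) subordinator sample, not a genuine one.

```python
# illustrative non-decreasing step path: jumps of sizes 1.0 and 0.5
# at t = 0.3 and t = 0.7 (cadlag, ell(0) = 0)
def ell(t):
    return (1.0 if t >= 0.3 else 0.0) + (0.5 if t >= 0.7 else 0.0)

def ell_eps(t, eps, n=10_000):
    # ell^eps(t) = int_0^1 ell(eps*s + t) ds + eps*t, via midpoint rule
    integral = sum(ell(eps * (k + 0.5) / n + t) for k in range(n)) / n
    return integral + eps * t

eps = 0.05
ts = [0.1, 0.29, 0.3, 0.5, 0.75]
vals = [ell_eps(t, eps) for t in ts]
# ell^eps is strictly increasing in t
assert all(a < b for a, b in zip(vals, vals[1:]))
# ell^eps(t) decreases to ell(t) as eps decreases to 0
for t in ts:
    v1, v2 = ell_eps(t, 0.05), ell_eps(t, 0.01)
    assert v1 >= v2 >= ell(t)
print("regularization properties verified on the test path")
```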
\begin{lem}\label{Ptl} Fix $\varepsilon\in(0,1)$, assume {\bf(H)} and let $T>r_0$. \smallskip\noindent\textup{i)} \ For any $\xi, \eta\in \C$ and $f\in \B_b(\C)$ with $f\geq1$, \begin{align*} P_T^{\ell^\varepsilon} \log f(\eta) &\leq \log P_T^{\ell^\varepsilon} f(\xi)+ |\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) \right)^{-1}\\ &\quad+\frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right). \end{align*} \smallskip\noindent\textup{ii)} \ For any $p>1$, $\xi, \eta\in \C$ and non-negative $f\in \B_b(\C)$, \begin{align*} \left(P_T^{\ell^\varepsilon} f(\eta)\right)^p& \leq P_T^{\ell^\varepsilon} f^p(\xi) \exp\left[\frac{p}{p-1}|\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) \right)^{-1}\right]\\ &\quad\times\exp\left[\frac{p}{p-1} \frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)\frac{\e^{2K(T-r_0)}-1}{2K} |\xi(0)-\eta(0)|^2\right)\right]. \end{align*} \end{lem} \begin{proof} Due to the existence of the delay part $B$, we construct the coupling as follows. Let $Y_t$ solve the equation \begin{equation}\begin{split}\label{EY} \d Y(t)&=b(Y(t))\,\d t+B(X_t^{\ell^\varepsilon,\xi})\,\d t\\ &\quad+\lambda(t)\I_{[0,\tau)}(t) \frac{X^{\ell^\varepsilon,\xi}(t)-Y(t)}{|X^{\ell^\varepsilon,\xi}(t) -Y(t)|}|\xi(0)-\eta(0)|\,\d \ell^\varepsilon(t)+\d W(\ell^\varepsilon(t)- \ell^\varepsilon(0)) \end{split}\end{equation} with $Y_0=\eta$, where $$\lambda(t):=\frac{\e^{-Kt}}{\int_0^{T-r_0}\e^{-2Ks} \,\d \ell^\varepsilon(s)},\quad t\geq 0,$$ and $$\tau:=T\wedge\inf\{t\geq 0\,;\, X^{\ell^\varepsilon,\xi}(t)=Y(t)\}$$ is the coupling time. It is clear that $(X^{\ell^\varepsilon,\xi}(t),Y(t))$ is well defined for $t<\tau$. By {\bf(H)}, we have $$ \d |X^{\ell^\varepsilon,\xi}(t)-Y(t)|\leq K|X^{\ell^\varepsilon,\xi}(t)-Y(t)|\,\d t-\lambda(t)|\xi(0)-\eta(0)|\,\d \ell^\varepsilon(t),\quad t\in[0,\tau).
$$ Thus, for $t\in[0,\tau)$, \begin{equation}\begin{split}\label{EX-Y'} |X^{\ell^\varepsilon,\xi}(t)-Y(t)|&\leq \e^{Kt}|\xi(0)-\eta(0)|\left(1-\int_0^t\e^{-Ks}\lambda(s)\,\d \ell^\varepsilon(s)\right)\\ &\leq \frac{\e^{Kt}\int_t^{T-r_0}\e^{-2Ks}\,\d \ell^\varepsilon(s)}{\int_0^{T-r_0}\e^{-2Ks}\,\d \ell^\varepsilon(s)}\,|\xi(0)-\eta(0)|\\ &=:\Gamma(t)|\xi(0)-\eta(0)|. \end{split}\end{equation} If $\tau(\omega)>T-r_0$ for some $\omega\in\Omega$, we can take $t=T-r_0$ in the above inequality to get $$ 0<|X^{\ell^\varepsilon,\xi}(t)(\omega)-Y(t)(\omega)| \leq0, $$ which is absurd. Therefore, $\tau\leq T-r_0$. Letting $Y(t)=X^{\ell^\varepsilon,\xi}(t)$ for $t\in[\tau,T]$, $Y(t)$ solves \eqref{EY} for $t\in[\tau,T]$. In particular, $X^{\ell^\varepsilon,\xi}_T=Y_T$. Moreover, by \eqref{EX-Y'} and $\tau\leq T-r_0$, we have \begin{align}\label{r-0}|X^{\ell^\varepsilon,\xi}(t)-Y(t)|^2\leq |\xi(0)-\eta(0)|^2\Gamma(t)^2\I_{[0,T-r_0]}(t), \ \ t\in[0,T]. \end{align} Denote by $\gamma^\varepsilon:[\ell^\varepsilon(0),\infty) \rightarrow[0,\infty)$ the inverse function of $\ell^\varepsilon$. Then $\ell^\varepsilon(\gamma^\varepsilon (t))=t$ for $t\geq\ell^\varepsilon(0)$, $\gamma^\varepsilon(\ell^\varepsilon(t))=t$ for $t\geq0$, and $t\mapsto\gamma^\varepsilon(t)$ is absolutely continuous and strictly increasing. Let $$ \widetilde{W}_t:=\int_{0}^{t}\Psi(u)\,\d u+ W(t)\quad\text{and}\quad M_t:=-\int_0^t \< \Psi(u), \d W(u)\>,\quad t\geq0, $$ where $\Psi(u):=\Phi\circ \gamma^\varepsilon (u+\ell^\varepsilon(0))$ and $$ \Phi(u):=[B(X_u^{\ell^\varepsilon,\xi})- B(Y_u)] \frac{1}{(\ell^\varepsilon)'(u)} +\lambda(u) \I_{[0,\tau)}(u) \frac{X^{\ell^\varepsilon,\xi}(u)-Y(u)} {|X^{\ell^\varepsilon,\xi}(u) -Y(u)|} \,|\xi(0)-\eta(0)|.
$$ By {\bf(H)}, the compensator of the martingale $M_t$ satisfies, for $t\geq0$, \begin{equation}\begin{split}\label{ddfaw} \<M\>_t&=\int_0^t|\Psi(u)|^2\,\d u \leq\int_0^T|\Phi(s)|^2\,\d\ell^\varepsilon(s)\\ &\leq 2K_1^2\int_0^T\|X_t^{\ell^\varepsilon,\xi} -Y_t\|^2_{2}\frac{1}{(\ell^\varepsilon)'(t)}\,\d t+2|\xi(0)-\eta(0)|^2\int_0^{T-r_0}|\lambda(t)|^2\,\d \ell^\varepsilon(t). \end{split}\end{equation} Recalling that $\ell$ is a sample path of the subordinator $S$ with drift parameter $\kappa\geq0$, one has $$(\ell^\varepsilon)'(t)= \frac{\ell(t+\varepsilon)-\ell(t)}{\varepsilon}+\varepsilon>\kappa,$$ and therefore \begin{equation}\label{j3dc34d} \int_0^T\|X_t^{\ell^\varepsilon,\xi}-Y_t\|^2_{2}\frac{1}{(\ell^\varepsilon)'(t)}\,\d t\leq\frac{1}{\kappa}\int_0^T\|X_t^{\ell^\varepsilon,\xi} -Y_t\|^2_{2}\,\d t. \end{equation} Next, we focus on the estimate of $\int_0^T\|X_t^{\ell^\varepsilon,\xi}-Y_t\|^2_{2}\,\d t$. Firstly, it is clear that for any $t\in[0,T]$, \begin{align*} \int_{-r_0}^0|X^{\ell^\varepsilon,\xi}_t(s)-Y_t(s)|^2\,\d s&=\int_{-r_0}^0|X^{\ell^\varepsilon,\xi}(t+s)-Y(t+s)|^2\,\d s\\ &=\int_{t-r_0}^t|X^{\ell^\varepsilon,\xi}(s)-Y(s)|^2\,\d s. \end{align*} This implies that for $t\in[0,r_0]$, \begin{equation}\begin{split}\label{EX-Y''} \int_{-r_0}^0|X^{\ell^\varepsilon,\xi}_t(s)-Y_t(s)|^2\,\d s &=\left(\int_{t-r_0}^0+\int_0^t\right) |X^{\ell^\varepsilon,\xi}(s)-Y(s)|^2\,\d s\\ &\leq \left(\int_{-r_0}^0+\int_{0}^{r_0}\right) |X^{\ell^\varepsilon,\xi}(s)-Y(s)|^2\,\d s\\ &=\int_{-r_0}^0|\xi(s)-\eta(s)|^2\,\d s+\int_{0}^{r_0}|X^{\ell^\varepsilon,\xi}(s)-Y(s)|^2\,\d s, \end{split}\end{equation} and by \eqref{r-0}, for $t\in[r_0,T]$, \begin{equation}\label{EX-Y'''} \int_{-r_0}^0|X^{\ell^\varepsilon,\xi}_t(s)-Y_t(s)|^2\,\d s \leq \int_{0}^{T-r_0}|X^{\ell^\varepsilon,\xi} (s)-Y(s)|^2\,\d s. 
\end{equation} Combining \eqref{r-0}, \eqref{EX-Y''} and \eqref{EX-Y'''}, we obtain \begin{equation}\begin{split}\label{EX-Y3} &\int_0^T\|X_t^{\ell^\varepsilon,\xi}-Y_t\|^2_{2}\,\d t\\ &\qquad=\int_0^{r_0}\left(\int_{-r_0}^0|X^{\ell^\varepsilon,\xi} _t(s)-Y_t(s)|^2\,\d s\right)\,\d t+\int_{r_0}^T\left(\int_{-r_0}^0|X^{\ell^\varepsilon,\xi}_t(s)-Y_t(s)|^2\,\d s\right)\,\d t\\ &\qquad\quad+\int_0^T|X^{\ell^\varepsilon,\xi} (t)-Y(t)|^2\,\d t\\ &\qquad\leq r_0\left(\int_{-r_0}^0|\xi(s)-\eta(s)|^2\,\d s+\int_{0}^{ r_0}|X^{\ell^\varepsilon,\xi}(s)-Y(s)|^2 \,\d s\right)\\ &\qquad\quad+(T-r_0)\left(\int_{0}^{T-r_0} |X^{\ell^\varepsilon,\xi}(s)-Y(s)|^2\,\d s\right)+\int_0^{T-r_0}|X^{\ell^\varepsilon,\xi} (t)-Y(t)|^2\,\d t\\ &\qquad\leq r_0\|\xi-\eta\|_2^2+(T+1)|\xi(0)-\eta(0)|^2 \int_{0}^{T-r_0}\Gamma(s)^2\,\d s\\ &\qquad\leq r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}\,|\xi(0)-\eta(0)|^2, \end{split}\end{equation} where in the last inequality we have used $\Gamma(s)\leq \e^{Ks}$ for $s\in[0,T-r_0]$. By the definition of $\lambda(t)$, it is easy to see that $$ 2|\xi(0)-\eta(0)|^2\int_0^{T-r_0}|\lambda(t)|^2\,\d \ell^\varepsilon(t)\leq 2|\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) \right)^{-1}. $$ This, together with \eqref{ddfaw}, \eqref{j3dc34d} and \eqref{EX-Y3}, yields that for any $t\geq0$ \begin{equation}\begin{split}\label{ddf23ds} \<M\>_t&\leq \frac{2K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right)\\ &\quad+2|\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) \right)^{-1}. \end{split}\end{equation} By Novikov's criterion, we have $\E R=1$, where $$ R:=\exp\left[M_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} -\frac12\<M\>_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} \right]. $$ According to Girsanov's theorem, $(\widetilde{W}_t)_{0\leq t\leq\ell^\varepsilon(T)-\ell^\varepsilon(0)}$ is a $d$-dimensional Brownian motion under the new probability measure $R\P$. 
Rewrite \eqref{EY} as $$ \d Y(t)=b(Y(t))\,\d t+B(Y_t)\,\d t +\d \widetilde{W}(\ell^\varepsilon(t)- \ell^\varepsilon(0)). $$ Thus, the distribution of $(Y_t)_{0\leq t\leq T}$ under $R\P$ coincides with that of $(X^{\ell^\varepsilon,\eta}_t)_{0\leq t\leq T}$ under $\P$; in particular, it holds that for any $f\in \B_b(\C)$, \begin{equation}\label{jh43cdf} \E f(X^{\ell^\varepsilon,\eta}_T)= \E_{R\P}f(Y_T)= \E\left[Rf(Y_T)\right] =\E\big[Rf(X^{\ell^\varepsilon,\xi}_T)\big]. \end{equation} By \eqref{jh43cdf}, the Young inequality (cf.\ \cite[p.\ 24]{Wbook}), and the observation that \begin{align*} \log R&=-\int_0^{\ell^\varepsilon(T)- \ell^\varepsilon(0)}\<\Psi(u),\d W(u)\> -\frac12\int_0^{\ell^\varepsilon(T)- \ell^\varepsilon(0)}|\Psi(u)|^2\,\d u\\ &=-\int_0^{\ell^\varepsilon(T)- \ell^\varepsilon(0)}\<\Psi(u),\d \widetilde{W}(u)\> +\frac12\<M\>_{\ell^\varepsilon(T)- \ell^\varepsilon(0)}, \end{align*} we get that, for any $f\in \B_b(\C)$ with $f\geq1$, \begin{align*} P_T^{\ell^\varepsilon}\log f(\eta) &=\E\log f(X^{\ell^\varepsilon,\eta}_T)\\ &=\E\big[R\log f(X^{\ell^\varepsilon,\xi}_T)\big]\\ &\leq\log\E f(X^{\ell^\varepsilon,\xi}_T) +\E[R\log R]\\ &=\log P_T^{\ell^\varepsilon}f(\xi)+ \E_{R\P}\log R\\ &=\log P_T^{\ell^\varepsilon}f(\xi)+ \frac12\<M\>_{\ell^\varepsilon(T)- \ell^\varepsilon(0)}. \end{align*} Combining this with \eqref{ddf23ds}, we obtain the desired log-Harnack inequality. Next, we prove the second assertion of the lemma. For any non-negative $f\in \B_b(\C)$, we find with \eqref{jh43cdf} and the H\"{o}lder inequality \begin{equation}\begin{split}\label{fds54fff} (P_T^{\ell^\varepsilon}f)^p(\eta)&=\big(\E f(X_T^{\ell^\varepsilon,\eta})\big)^p\\ &=\big(\E\big[R f(X_T^{\ell^\varepsilon,\xi})\big]\big)^p\\ &\leq P_T^{\ell^\varepsilon}f^p(\xi) \cdot\big( \E\big[R^{p/(p-1)}\big] \big)^{p-1}.
\end{split}\end{equation} Since, by \eqref{ddf23ds}, \begin{align*} R^{p/(p-1)}&=\exp\left[ \frac{p}{p-1}M_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} -\frac{p}{2(p-1)} \<M\>_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} \right]\\ &=\exp\left[ \frac{p}{2(p-1)^2}\<M\>_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} \right]\\ &\quad\times \exp\left[ \frac{p}{p-1}M_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} -\frac{p^2}{2(p-1)^2} \<M\>_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} \right]\\ &\leq\exp\left[ \frac{p}{(p-1)^2}|\xi(0)-\eta(0)|^2 \left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) \right)^{-1} \right]\\ &\quad\times\exp\left[ \frac{pK_1^2}{(p-1)^2\kappa}\left( r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2 \right) \right]\\ &\quad\times\exp\left[ \frac{p}{p-1}M_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} -\frac{p^2}{2(p-1)^2} \<M\>_{\ell^\varepsilon(T)-\ell^\varepsilon(0)} \right], \end{align*} and since $\exp\left[ \frac{p}{p-1}M_{\ell^\varepsilon(t)-\ell^\varepsilon(0)} -\frac{p^2}{2(p-1)^2} \<M\>_{\ell^\varepsilon(t)-\ell^\varepsilon(0)} \right]$, $0\leq t\leq T$, is a martingale with mean $1$ (by Novikov's criterion), we know that \begin{align*} \E\left[R^{p/(p-1)}\right] &\leq\exp\left[ \frac{p}{(p-1)^2}|\xi(0)-\eta(0)|^2 \left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) \right)^{-1} \right]\\ &\quad\times\exp\left[ \frac{pK_1^2}{(p-1)^2\kappa}\left( r_0\|\xi-\eta\|_2^2+(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}\,|\xi(0)-\eta(0)|^2 \right) \right]. \end{align*} Inserting this estimate into \eqref{fds54fff}, we get the power-Harnack inequality. \end{proof} To prove Proposition \ref{dfr3s} by means of Lemma \ref{Ptl}, we first prove the following lemma. \begin{lem}\label{approxi} Let $\varepsilon\in(0,1)$ and $T>0$.
If $g^{(\varepsilon)}:[-r_0,\infty)\rightarrow[0,\infty)$ satisfies $g^{(\varepsilon)}(s)=0$ for $s\in[-r_0,0]$, $\int_0^T\|g^{(\varepsilon)}_t\|_2^2\,\d t<\infty$, and $$|g^{(\varepsilon)}(t)|^2\leq C\int_0^t \|g^{(\varepsilon)}_r\|_2^2\,\d r +h^{(\varepsilon)}(t),\quad t\in[0,T],$$ where $C>0$ is a constant and $h^{(\varepsilon)}:[0,T]\rightarrow[0,\infty)$ is measurable such that $$\sup_{\varepsilon\in(0,1),t\in[0,T]}h^{(\varepsilon)}(t) <\infty$$ and $\lim_{\varepsilon\downarrow0} h^{(\varepsilon)}(t)=0$ for any $t\in[0,T]$. Then we have $$\lim_{\varepsilon\downarrow0}\|g^{(\varepsilon)}_t\|_2=0 ,\quad t\in[0,T].$$ \end{lem} \begin{proof} Since $g^{(\varepsilon)}(s)=0$ for $s\in[-r_0,0]$, it holds that \begin{align*} \int_{-r_0}^0|g^{(\varepsilon)}(t+s)|^2\,\d s&=\left(\int_{-r_0+t}^0+\int_0^t\right) |g^{(\varepsilon)}(s) |^2\,\d s\\ &\leq\int_0^t|g^{(\varepsilon)}(s)|^2\,\d s\\ &\leq C\int_0^t\left(\int_0^s \|g^{(\varepsilon)}_r\|_2^2\,\d r \right)\d s +\int_0^th^{(\varepsilon)}(s) \,\d s\\ &\leq Ct\int_0^t \|g^{(\varepsilon)}_r\|_2^2\,\d r +\int_0^th^{(\varepsilon)}(s) \,\d s. \end{align*} Thus, we find that for any $t\in[0,T]$ \begin{align*} \|g^{(\varepsilon)}_t\|_2^2 &=\int_{-r_0}^0|g^{(\varepsilon)}(t+s)|^2\,\d s +|g^{(\varepsilon)}(t)|^2\\ &\leq C(t+1)\int_0^t \|g^{(\varepsilon)}_r\|_2^2\,\d r+H^{(\varepsilon)}(t)\\ &\leq C(T+1)\int_0^t \|g^{(\varepsilon)}_r\|_2^2\,\d r+H^{(\varepsilon)}(t), \end{align*} where $$ H^{(\varepsilon)}(t):= h^{(\varepsilon)}(t) +\int_0^th^{(\varepsilon)}(s) \,\d s. $$ Now we can apply Gronwall's inequality to get that, for all $t\in[0,T]$, $$ \|g^{(\varepsilon)}_t\|_2^2 \leq H^{(\varepsilon)}(t) +C(T+1)\int_0^tH^{(\varepsilon)}(s)\, \e^{C(T+1)(t-s)}\,\d s. $$ By our assumptions, we know that $\lim_{\varepsilon\downarrow0} H^{(\varepsilon)}(t)=0$ for all $t\in[0,T]$. Letting $\varepsilon\downarrow0$ on both sides of the above inequality and using the dominated convergence theorem, we complete the proof. 
\end{proof} \begin{proof}[Proof of Proposition \ref{dfr3s}] Fix $T>r_0$. By a standard approximation argument, we may and do assume that $f\in C_b(\C)$. \emph{Step 1:} First, we assume that $b$ is globally Lipschitzian: there exists a constant $C>0$ such that $$ |b(x)-b(y)|\leq C|x-y|,\quad x,y\in\R^d. $$ By the Lipschitz continuity of $b$ and $B$, and noting that $|X^{\ell^\varepsilon,\xi}(r)-X^{\ell,\xi}(r)| \leq\|X^{\ell^\varepsilon,\xi}_r -X^{\ell,\xi}_r\|_2$, we have for $t\geq0$ \begin{align*} |X^{\ell^\varepsilon,\xi}(t)-X^{\ell,\xi}(t)|&\leq C\int_0^t|X^{\ell^\varepsilon,\xi}(r)-X^{\ell,\xi}(r)|\,\d r+K_1\int_0^t\|X^{\ell^\varepsilon,\xi}_r -X^{\ell,\xi}_r\|_2\,\d r\\ &\quad+|W(\ell^{\varepsilon}(t)-\ell^{\varepsilon}(0)) -W(\ell(t))|\\ &\leq(C+K_1)\int_0^t\|X^{\ell^\varepsilon,\xi}_r -X^{\ell,\xi}_r\|_2\,\d r+|W(\ell^{\varepsilon}(t)-\ell^{\varepsilon}(0)) -W(\ell(t))|. \end{align*} By the elementary inequality $$ (u+v)^2\leq2u^2+2v^2,\quad u,v\geq0, $$ and the H\"{o}lder inequality, we get that for $t\in[0,T]$, \begin{align*} |X^{\ell^\varepsilon,\xi}(t)-X^{\ell,\xi}(t)|^2 &\leq2(C+K_1)^2t\int_0^t\|X^{\ell^\varepsilon,\xi}_r -X^{\ell,\xi}_r\|_2^2\,\d r+2|W(\ell^{\varepsilon}(t)-\ell^{\varepsilon}(0)) -W(\ell(t))|^2\\ &\leq2(C+K_1)^2T\int_0^t\|X^{\ell^\varepsilon,\xi}_r -X^{\ell,\xi}_r\|_2^2\,\d r+2|W(\ell^{\varepsilon}(t)-\ell^{\varepsilon}(0)) -W(\ell(t))|^2. \end{align*} Applying Lemma \ref{approxi} with $g^{(\varepsilon)}(t) =|X^{\ell^\varepsilon,\xi}(t)-X^{\ell,\xi}(t)|$ and $h^{(\varepsilon)}(t) =2|W(\ell^{\varepsilon}(t)-\ell^{\varepsilon}(0)) -W(\ell(t))|^2$, we conclude that $X^{\ell^\varepsilon,\xi}_T\to X^{\ell,\xi}_T$ in $\C$ as $\varepsilon\downarrow0$, and so $$ \lim_{\varepsilon\downarrow0} P_T^{\ell^\varepsilon}f= P_T^\ell f, \quad f\in C_b(\C). 
$$ Since $\ell$ is of bounded variation, it is easy to get from \eqref{approximation} that $$ \lim_{\varepsilon\downarrow0} \int_0^{T-r_0}\e^{-2Kt}\,\d \ell^\varepsilon(t) =\int_0^{T-r_0}\e^{-2Kt}\,\d \ell(t). $$ Letting $\varepsilon\downarrow0$ in Lemma \ref{Ptl}, we obtain the desired inequalities. \medskip\emph{Step 2:} For the general case, we shall make use of the approximation argument proposed in \cite[part (c) of proof of Theorem 2.1]{WW}. Let $$ \tilde{b}(x):=b(x)-Kx,\quad x\in\R^d. $$ Then $\tilde{b}$ satisfies the dissipative condition: $$ \<\tilde{b}(x)-\tilde{b}(y),x-y\>\leq0, \quad x,y\in\R^d, $$ and it is easy to see that the mapping $\operatorname{id}-\varepsilon\tilde{b}:\R^d\rightarrow\R^d$ is injective for any $\varepsilon>0$. For $\varepsilon>0$, let $\tilde{b}^{(\varepsilon)}$ be the Yosida approximation of $\tilde{b}$, i.e.\ $$ \tilde{b}^{(\varepsilon)}(x):= \frac{1}{\varepsilon}\left[ \left( \operatorname{id}-\varepsilon\tilde{b} \right)^{-1}(x)-x \right],\quad x\in\R^d. $$ Then $\tilde{b}^{(\varepsilon)}$ is dissipative and globally Lipschitzian, $|\tilde{b}^{(\varepsilon)}|\leq |\tilde{b}|$ and $\lim_{\varepsilon\downarrow0}\tilde{b}^{(\varepsilon)}=\tilde{b}$. Let $b^{(\varepsilon)}(x):=\tilde{b}^{(\varepsilon)}(x)+Kx$. Then $b^{(\varepsilon)}$ is also Lipschitzian and $$\<b^{(\varepsilon)}(x)-b^{(\varepsilon)}(y),x-y\>\leq K|x-y|^2.$$ Let $X^{\ell,(\varepsilon),\xi}_t$ solve the SDE \eqref{jg4dgv} with $b$ replaced by $b^{(\varepsilon)}$ and $X^{\ell,(\varepsilon),\xi}_0=\xi\in\C$. Denote by $P_t^{\ell,(\varepsilon)}$ the associated semigroup. Due to the first part of the proof, the statements of Proposition \ref{dfr3s} hold with $P_t^{\ell}$ replaced by $P_t^{\ell,(\varepsilon)}$.
If \begin{equation}\label{kj64cf3dk} \lim_{\varepsilon\downarrow0} P_T^{\ell,(\varepsilon)}f= P_T^\ell f, \quad f\in C_b(\C), \end{equation} then we complete the proof by applying Proposition \ref{dfr3s} with $P_t^{\ell}$ replaced by $P_t^{\ell,(\varepsilon)}$ and letting $\varepsilon\downarrow0$. Indeed, noting that \begin{align*} &\d |X^{\ell,(\varepsilon),\xi}(t)-X^{\ell,\xi}(t)|^2\\ &\qquad\quad=2\<X^{\ell,(\varepsilon),\xi}(t)-X^{\ell,\xi}(t), b^{(\varepsilon)}(X^{\ell,(\varepsilon),\xi}(t)) -b^{(\varepsilon)}(X^{\ell,\xi}(t))\>\,\d t\\ &\qquad\qquad+2\<X^{\ell,(\varepsilon),\xi}(t)-X^{\ell,\xi}(t), b^{(\varepsilon)}(X^{\ell,\xi}(t)) -b(X^{\ell,\xi}(t))\>\,\d t\\ &\qquad\qquad+2\<X^{\ell,(\varepsilon),\xi}(t)-X^{\ell,\xi}(t), B(X^{\ell,(\varepsilon),\xi}_t)-B(X^{\ell,\xi}_t)\>\,\d t\\ &\quad\qquad\leq (2K+1) |X^{\ell,(\varepsilon),\xi}(t) -X^{\ell,\xi}(t)|^2\,\d t +|b^{(\varepsilon)}(X^{\ell,\xi}(t)) -b(X^{\ell,\xi}(t))|^2\,\d t\\ &\qquad\qquad+2K_1\|X^{\ell,(\varepsilon),\xi}_t -X^{\ell,\xi}_t\|_2^2\,\d t\\ &\quad\qquad\leq (2|K|+2K_1+1)\|X^{\ell,(\varepsilon),\xi}_t -X^{\ell,\xi}_t\|_2^2\,\d t +|b^{(\varepsilon)}(X^{\ell,\xi}(t)) -b(X^{\ell,\xi}(t))|^2\,\d t, \end{align*} one has for $t\in[0,T]$ \begin{align*} &|X^{\ell,(\varepsilon),\xi}(t)-X^{\ell,\xi}(t)|^2\\ &\quad\leq(2|K|+2K_1+1)\int_0^t \|X^{\ell,(\varepsilon),\xi}_r -X^{\ell,\xi}_r\|_2^2\,\d r +\int_0^t|b^{(\varepsilon)}(X^{\ell,\xi}(r)) -b(X^{\ell,\xi}(r))|^2\,\d r\\ &\quad=(2|K|+2K_1+1)\int_0^t \|X^{\ell,(\varepsilon),\xi}_r -X^{\ell,\xi}_r\|_2^2\,\d r +\int_0^t|\tilde{b}^{(\varepsilon)}(X^{\ell,\xi}(r)) -\tilde{b}(X^{\ell,\xi}(r))|^2\,\d r. 
\end{align*} Applying Lemma \ref{approxi} with $g^{(\varepsilon)}(t)= |X^{\ell,(\varepsilon),\xi}(t)-X^{\ell,\xi}(t)|$ and $h^{(\varepsilon)}(t) =\int_0^t|\tilde{b}^{(\varepsilon)}(X^{\ell,\xi}(r)) -\tilde{b}(X^{\ell,\xi}(r))|^2\,\d r$, we find that $X^{\ell,(\varepsilon),\xi}_T\to X^{\ell,\xi}_T$ in $\C$ as $\varepsilon\downarrow0$, and thus \eqref{kj64cf3dk} follows. \end{proof} \section{Proofs of Theorem \ref{T3.2} and Example \ref{ex1}} \begin{proof}[Proof of Theorem \ref{T3.2}] Since the processes $W$ and $S$ are independent, we have \begin{equation}\label{jg21hhsd} P_{T}f(\cdot)=\E\left[P_{T}^{\ell}f(\cdot) \left|_{\ell=S}\right. \right],\quad f\in\B_b(\C). \end{equation} By the first assertion of Proposition \ref{dfr3s}, for all $f\in\B_b(\C)$ with $f\geq1$, \begin{align*} P_T\log f(\eta)&=\E\left[ P_T^\ell\log f(\eta)\left|_{\ell=S}\right. \right]\\ &\leq\E\left[ \log P_T^\ell f(\xi)\left|_{\ell=S}\right. \right]+|\xi(0)-\eta(0)|^2\,\E\left( \int_0^{T-r_0}\e^{-2Kt}\,\d S(t) \right)^{-1}\\ &\quad+\frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right), \end{align*} which, together with the Jensen inequality and \eqref{jg21hhsd}, implies the log-Harnack inequality. Analogously, by the second assertion of Proposition \ref{dfr3s}, for all non-negative $f\in\B_b(\C)$, \begin{align*} P_Tf(\eta)&=\E\left[ P_T^\ell f(\eta)\left|_{\ell=S}\right. \right]\\ &\leq\left.\E\left[ \big(P_T^\ell f^p(\xi)\big)^{1/p} \exp\left[\frac{1}{p-1}|\xi(0)-\eta(0)|^2\left( \int_0^{T-r_0}\e^{-2Kt}\,\d \ell(t) \right)^{-1}\right]\right|_{\ell=S}\right]\\ &\quad \times\exp\left[\frac{1}{p-1} \frac{K_1^2}{\kappa}\left(r_0\|\xi-\eta\|_2^2 +(T+1)\frac{\e^{2K(T-r_0)}-1}{2K}|\xi(0)-\eta(0)|^2\right)\right]. \end{align*} It remains to use the H\"{o}lder inequality and \eqref{jg21hhsd} to derive the power-Harnack inequality.
\end{proof} \begin{proof}[Proof of Example \ref{ex1}] By the assumption, one has $$S(t)\geq\kappa t+\tilde{S}(t)\geq (\kappa t)\vee\tilde{S}(t),\quad t\geq0,$$ where $\tilde{S}$ is an $\alpha$-stable subordinator with Bernstein function $\tilde{\phi}(u)=cu^\alpha$. Combining this with Theorem \ref{T3.2} and the moment estimates for subordinators in \cite[Theorem 3.8\,(a) and (b)]{DS15}, we get the desired estimates. \end{proof}
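As a side illustration of the Yosida approximation used in Step 2 of the proof of Proposition \ref{dfr3s}: take the dissipative map $\tilde b(x)=-x^3$ on $\R$ (an illustrative choice, not one appearing in the paper). The resolvent $(\operatorname{id}-\varepsilon\tilde b)^{-1}$ amounts to solving $y+\varepsilon y^3=x$, which the sketch below does by bisection, and the two properties quoted in the proof, $|\tilde b^{(\varepsilon)}|\leq|\tilde b|$ and $\tilde b^{(\varepsilon)}\to\tilde b$ as $\varepsilon\downarrow0$, can then be checked numerically.

```python
def btilde(x):
    # a dissipative drift in d = 1: (btilde(x) - btilde(y))(x - y) <= 0
    return -x ** 3

def resolvent(x, eps, lo=-100.0, hi=100.0):
    # solve y - eps*btilde(y) = x, i.e. y + eps*y^3 = x, by bisection;
    # y -> y + eps*y^3 is strictly increasing, so the root is unique
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid + eps * mid ** 3 < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def btilde_eps(x, eps):
    # Yosida approximation: (1/eps) * ((id - eps*btilde)^{-1}(x) - x)
    return (resolvent(x, eps) - x) / eps

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(btilde_eps(x, 0.1)) <= abs(btilde(x)) + 1e-8   # |b_eps| <= |b|
    assert abs(btilde_eps(x, 1e-5) - btilde(x)) < 0.05        # b_eps -> b
print("Yosida approximation properties verified")
```

Note that $\tilde b^{(\varepsilon)}(x)=\tilde b\big((\operatorname{id}-\varepsilon\tilde b)^{-1}(x)\big)$, which is why the bound $|\tilde b^{(\varepsilon)}|\leq|\tilde b|$ holds for this monotone example.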
\section{Introduction} The search for neutrino oscillations is one of the most fascinating topics of modern particle physics. The {\bf KA}rlsruhe {\bf R}utherford {\bf M}edium {\bf E}nergy {\bf N}eutrino experiment KARMEN searches for neutrino oscillations in different appearance (\mbox{\mbox{$\nu_{\mu}$} $\rightarrow\,$\mbox{$\nu_e$}}\ \cite{zeitnitz} and \mbox{\mbox{$\bar{\nu}_{\mu}$} $\rightarrow\,$\mbox{$\bar{\nu}_{e}$}}) and disappearance modes (\mbox{\mbox{$\nu_e$} $\rightarrow\,$\mbox{$\nu_{x}$}}\ \cite{nuex}). The physics program of KARMEN also includes the investigation of $\nu$--nucleus interactions \cite{reinhard} as well as the search for lepton-number-violating decays of pions and muons and a test of the V--A structure of the \mbox{$\mu^+$}\ decay \cite{omega}. In the following we present the result of the search for {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillations on the basis of the data taken from February 1997 until February 1999 (KARMEN\,2 data) after the experiment upgrade in 1996. The data taken before the upgrade, from 1990--1995 (KARMEN\,1 data), is not included in the analysis: such a combined analysis would yield a much lower sensitivity due to the relatively high cosmic-induced neutron background of the KARMEN\,1 data. In the data set presented here we measure the expected number of background events. Therefore we used the Unified Approach \cite{cous}, based on a maximum likelihood analysis, to derive a 90\,\% confidence interval. \section{Neutrino Production and Detection} The KARMEN experiment utilizes the neutrinos produced by the neutron spallation source ISIS of the Rutherford Appleton Laboratory in Chilton, Oxon, UK. An intense beam ($200\,\mu$A) of protons is accelerated to an energy of 800\,MeV by a rapid-cycling synchrotron. The two parabolic proton pulses of 100\,ns base width and a gap of 225\,ns are produced with a repetition frequency of 50\,Hz (duty cycle $10^{-5}$).
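The quoted duty cycle follows directly from the pulse structure: two 100\,ns pulses per 20\,ms beam period. A one-line arithmetic check (illustrative; the 225\,ns gap does not enter at the base-width level):

```python
pulse_width = 100e-9      # base width of each proton pulse, in seconds
n_pulses = 2              # double-pulse structure per beam period
rep_rate = 50.0           # repetition frequency, in Hz

# fraction of time with beam on target
duty_cycle = n_pulses * pulse_width * rep_rate
print(duty_cycle)  # ~1e-05
```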
The protons are stopped in the compact tantalum beam stop. Apart from spallation neutrons a large number of pions is produced and stopped immediately within the target. While almost all \mbox{$\pi^-$}\ undergo nuclear capture, the \mbox{$\pi^+$}\ decay at rest (DAR) into \mbox{$\mu^+$}\ and \mbox{$\nu_{\mu}$} . The \mbox{$\mu^+$}\ are also stopped within the target and decay at rest via $\mbox{$\mu^+$}\rightarrow\mbox{e$^+$}\,+\,\mbox{$\nu_e$}\,+\,\mbox{$\bar{\nu}_{\mu}$}$. The small fraction of \mbox{$\pi^-$}\ that decays in flight ($0.65\,\%$ relative to \mbox{$\pi^+$}\ DAR), with the subsequent \mbox{$\mu^-$}\ decay again being suppressed, leads to an extremely small \mbox{$\bar{\nu}_{e}$}\ contamination of $ \mbox{$\bar{\nu}_{e}$} /\mbox{$\nu_e$} \, \le \, 6.2\cdot 10^{-4}$ \cite{bob}. The energy spectra of the neutrinos are well defined due to the DAR of both the \mbox{$\pi^+$}\ and \mbox{$\mu^+$} . The \mbox{$\nu_{\mu}$}\ from \mbox{$\pi^+$}\ decay is monoenergetic with E(\mbox{$\nu_{\mu}$})=29.8\,MeV; the continuous energy distributions up to 52.8\,MeV of the \mbox{$\nu_e$}\ and \mbox{$\bar{\nu}_{\mu}$}\ can be calculated using the V--A theory and show the typical Michel shape. Therefore ISIS is a unique, isotropic source of \mbox{$\nu_{\mu}$} , \mbox{$\nu_e$}\ and \mbox{$\bar{\nu}_{\mu}$}\ from \mbox{$\pi^+$} -\mbox{$\mu^+$}\ DAR that stands out for its time structure, the small \mbox{$\bar{\nu}_{e}$}\ contamination and the well defined time and energy distributions of the produced neutrinos. These neutrinos are detected with the KARMEN detector, a segmented calorimeter of 56\,t of liquid scintillator. The matrix structure consists of 512 (32 rows $\times$ 16 columns) optically independent modules with a cross section of $17.4\,\times\,17.8$\,cm$^2$ and a length of 353\,cm. The segmentation is made of thin double acrylic walls separated by a small air gap. Every module is read out by two 3\,inch photo tubes at each end.
The position of an event within one module is given by the time difference between the photo tubes at both ends. The optimized optical properties of the organic liquid scintillator and an active volume of 96\,\% result in an energy resolution of $\sigma_E=11.5\,\% / \sqrt{E\,[\mathrm{MeV}]}$. Gd$_2$O$_3$ coated paper within the module walls provides an efficient detection of thermal neutrons owing to the very high capture cross section of the \mbox{Gd\,(\,n,\,$\gamma$\,)}\ reaction ($\sigma \approx 49000$\,barn). The KARMEN electronics is synchronized to the ISIS proton pulses to an accuracy of 2\,ns to fully exploit the time structure of the neutrinos. The detector is well protected against beam correlated background as well as the hadronic component of the cosmic radiation by a blockhouse made of 7000\,t of steel. Cosmic muons entering or stopping close to the detector are identified by the two inner veto counters. The innermost veto covers the calorimeter from four sides and consists of modules identical to those of the calorimeter but half their width. The second veto counter is made of 136 plastic scintillator modules that shield the detector from five sides. With this configuration (KARMEN\,1), the dominant background for the search for {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillations was high-energy neutrons produced by cosmic muons within the steel blockhouse. To eliminate this background source an additional third veto counter made of 136 plastic scintillator modules with a total area of 300\,m$^2$ was installed in 1996 \cite{drexlin}. It was placed right inside the steel blockhouse such that every muon that could produce a neutron within the blockhouse at a distance of up to 1\,m from the detector is detected. With this configuration (KARMEN\,2) the cosmic-induced background is reduced by a factor of 40.
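As a numerical aside (a sketch of our own, not part of the original text), the quoted resolution figure is easy to evaluate at typical KARMEN energies; we read $11.5\,\%/\sqrt{E\,[\mathrm{MeV}]}$ as the relative resolution $\sigma_E/E$, which is an assumption on our part:

```python
import math

def relative_resolution(E_MeV):
    """Relative energy resolution sigma_E / E = 11.5% / sqrt(E [MeV]),
    reading the quoted figure as a relative resolution (an assumption)."""
    return 0.115 / math.sqrt(E_MeV)

def absolute_resolution_MeV(E_MeV):
    """Absolute resolution sigma_E in MeV under the same reading."""
    return relative_resolution(E_MeV) * E_MeV

# At 25 MeV, a typical positron energy in the oscillation search:
print(round(absolute_resolution_MeV(25.0), 3))  # 0.575
```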
\section{{\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillation signature} The probability for {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillations can be written in a simplified two-flavour description as \vspace{-.5ex} \begin{equation} \mathrm{P}(\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{e}) = \sin^2(2\,\Theta) \cdot \sin^2\left(1.27\, \frac{\Delta \mathrm{m}^2\, L}{E_{\nu}}\right) \vspace{-.5ex} \end{equation} where $L$ is given in meters, $E_{\nu}$ is the neutrino energy in MeV, and \mbox{$\Delta$m$^2$}\ denotes the difference of the squared mass eigenvalues $\mbox{$\Delta$m$^2$} = |m^2_1 - m^2_2|$ in \mbox{eV$^2$/c$^4$} . The signature for the detection of a \mbox{$\bar{\nu}_{e}$}\ is a spatially correlated, delayed coincidence of a positron from \mbox{ p\,(\,\nueb\,,\,e$^{+}$\,)\,n}\ with energies up to $E_{e^+}=E_{\mbox{$\bar{\nu}_{e}$}}-Q=(52.8-1.8)\,\rm{MeV}=51.0$\,MeV followed by the $\gamma$ emission of either of the two neutron capture processes \mbox{p\,(\,n,\,$\gamma$\,)}\ or \mbox{Gd\,(\,n,\,$\gamma$\,)} . The \mbox{p\,(\,n,\,$\gamma$\,)}\ reaction leads to one $\gamma$ with an energy of $E(\gamma)=2.2$\,MeV whereas the \mbox{Gd\,(\,n,\,$\gamma$\,)}\ process leads on average to 3 $\gamma$'s with a sum energy of 8\,MeV. The positrons are expected with a 2.2\,$\mu$s\ exponential decrease due to the \mbox{$\mu^+$}\ decay after beam on target. The time difference between the positron and the capture $\gamma$ is given by the thermalization and diffusion of the neutron.
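As an illustration, the two-flavour oscillation formula above is straightforward to evaluate numerically; the sketch below does so (the source--detector distance of 17.7\,m is an assumed, illustrative value and does not appear in the text):

```python
import math

def osc_prob(sin2_2theta, dm2_eV2, L_m, E_MeV):
    """Two-flavour appearance probability:
    P = sin^2(2 Theta) * sin^2(1.27 * dm2[eV^2] * L[m] / E[MeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# A vanishing mass splitting gives no oscillation:
print(osc_prob(1.0, 0.0, 17.7, 29.8))  # 0.0

# For large dm2 (e.g. 100 eV^2) the phase oscillates rapidly with energy,
# but the probability always stays within [0, 1]:
p = osc_prob(1.0, 100.0, 17.7, 29.8)
assert 0.0 <= p <= 1.0
```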
\newpage \begin{figure}[thb] \centerline{\psfig{figure=signature.eps,height=12.0cm}} \caption{Signature of sequences of a positron (prompt event) and the correlated gammas from the neutron capture reaction (sequential event) that are expected for the {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillation in the KARMEN detector: a) visible energy of the positron for three different values of \mbox{$\Delta$m$^2$}\ as given by a Monte Carlo (MC) simulation; b) time of the positron relative to beam on target; c) visible energy of the delayed gammas from the nuclear neutron capture on either the free protons in the scintillator or the gadolinium in the segmentation; d) time difference of the neutron capture reaction relative to the positron. \label{fig:signat}} \end{figure} To suppress cosmic-induced background, a positron candidate is accepted only if there is no activity in the central detector and in both inner veto counters up to 24\,$\mu$s\ before. When only the outermost third veto counter is hit, a dead time of 14\,$\mu$s\ is applied.\\ The unique signature of the \mbox{ p\,(\,\nueb\,,\,e$^{+}$\,)\,n}\ reaction already allows for a strong discrimination of cosmic- and neutrino-induced background. The following cuts are introduced to maximize the sensitivity of the experiment: The positron has to be detected in a time window from $0.6-10.6\,\mu$s after beam on target with its energy in the range from $16-50$\,MeV. The sequential gamma must have an energy below 8\,MeV and has to be correlated in space (within 1.2\,m$^3$) and time ($5-300\,\mu$s) to the positron. For these cuts the total detection efficiency is -- slightly depending on \mbox{$\Delta$m$^2$}\ -- approximately 20\,\%. The expected signature for oscillation sequences in the KARMEN detector is shown in Fig. 1. \begin{table}[h] \caption{Expected sequences from background reactions within the cuts specified above. Given are the mean values and their errors.
Shown in the last two rows are the number of expected \mbox{ p\,(\,\nueb\,,\,e$^{+}$\,)\,n}\ reactions from the {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillation for high \mbox{$\Delta$m$^2$}\ ($\,=\,100\,eV^2$) assuming maximal mixing (i.e. $\mbox{sin$^2\,(2\,\Theta)$}\,=\,1$) and the number of actually measured sequences. All numbers are given for two different energy windows from $16-50$\,MeV and $36-50$\,MeV respectively. \label{tab:exp}} \begin{center} \begin{tabular}{|l|l|l|} \hline Background reaction & events {\small (E$\ge$16MeV)} & events {\small(E$\ge$36MeV)}\\ \hline \mbox{$^{12}$C\,(\,\nue\,,\,\el\,)\,\Nzg}\ reaction & $2.6\,\pm\,0.3$& $0.00\,\pm\,0.01$ \\ $\nu$ induced random coincidences& $2.3\,\pm\,0.3$& $0.09\,\pm\,0.03$ \\ \mbox{$\bar{\nu}_{e}$}\ contamination from ISIS & $1.1\,\pm\,0.1$& $0.31\,\pm\,0.03$ \\ cosmic induced background & $1.9\,\pm\,0.1$& $0.56\,\pm\,0.07$ \\ \hline total expected background & $7.8\,\pm\,0.5$& $0.97\,\pm\,0.08$ \\ \hline\hline measured sequences & 8 & 0 \\ \hline \mbox{ p\,(\,\nueb\,,\,e$^{+}$\,)\,n}\ reactions for \mbox{sin$^2\,(2\,\Theta)$}$\,=1$ & $1605\,\pm\,176$& $712\,\pm\,78$ \\ \hline \end{tabular} \end{center} \end{table} \section{Background Sources} One of the main advantages of the search for {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillations with the KARMEN experiment is that the expected background is not only very small but also known with high precision because most of it can be independently measured by applying different cuts.
There are only four different sources of background: \begin{itemize} \setlength{\parskip}{0.0ex} \setlength{\partopsep}{0.0pt} \setlength{\parsep}{0.0ex} \setlength{\itemsep}{0.0ex} \setlength{\topsep}{-1.5ex} \item \mbox{$\nu_e$}\ induced sequences caused by the charged current reaction \mbox{$^{12}$C\,(\,\nue\,,\,\el\,)\,\Nzg}\ where the subsequent $\beta$ decay of the \mbox{$^{12}$N$_{\rm g.s.}$}\ ($\tau = 15.9$\,ms) occurs within the first 300\,$\mu$s. \item Neutrino reactions that have a random coincidence with a low energy event from the natural radioactivity inside the detector. \item The small intrinsic \mbox{$\bar{\nu}_{e}$}\ contamination from the \mbox{$\pi^-$} -\mbox{$\mu^-$}\ decay chain in the ISIS target. \item Undetected cosmic muons which enter the detector or produce high energy neutrons via deep inelastic scattering in the inner part of the steel blockhouse. \end{itemize} The only background source not accessible to direct measurement is the \mbox{$\bar{\nu}_{e}$}\ contamination. It is calculated using a detailed MC simulation of the ISIS target as well as all pion and muon production and decay or capture reactions \cite{bob}. Table 1 lists all background reactions and gives the number of expected events as well as their errors for the above defined cuts. \section{Maximum Likelihood Analysis} The maximum likelihood (ML) analysis is the most powerful method to infer the strength of a possible signal or to derive an upper limit if such a signal is not seen. Because of some advantages over other methods we use here the Unified Approach \cite{cous} recommended by the PDG \cite{pdg} to derive a 90\,\% confidence interval from our ML analysis. 
For this ML analysis every background reaction and a possible oscillation signal are taken into account with their different probability density functions for the time and energy of the prompt event (the \mbox{e$^+$} ) as well as the energy, and the time and position difference of the sequential event (the neutron capture) relative to the prompt event. The relative contributions of the individual background sources to the total number of background sequences are fixed whereas the number of oscillation sequences is allowed to vary freely. As additional information, the likelihood function (LF) is weighted with a factor that is the conditional Poisson probability of the number of inferred background sequences given the expectation value of the total background. The resulting LF depends on \mbox{$\Delta$m$^2$}\ and \mbox{sin$^2\,(2\,\Theta)$}\ only, and thus for a given \mbox{$\Delta$m$^2$}\ only on the number of oscillation sequences N$_O$ inferred (or the number of background sequences N$_B$, since N$_B$=N$_{total}$-N$_O$). For the Unified Approach we divided the relevant [\mbox{$\Delta$m$^2$} ;\mbox{sin$^2\,(2\,\Theta)$} ] parameter space in the interval [$(10^{-2}eV^2,10^{2}eV^2);(10^{-4},1)$] using a logarithmically equidistant grid of $90\times72$ points \cite{mark}. At every point on the grid we generate 8000 MC data samples according to the expected background and the given values of \mbox{$\Delta$m$^2$}\ and \mbox{sin$^2\,(2\,\Theta)$}\ for this point. To these data samples the same ML analysis as for the experimental sample is applied. For every MC sample of this specific point on the grid one calculates the logarithm of the likelihood ratio \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{MC}$}\ of the value of the LF at its global maximum in the [\mbox{$\Delta$m$^2$} ;\mbox{sin$^2\,(2\,\Theta)$} ] parameter space to the value of the LF at the given point on the grid for which the sample was generated.
This procedure gives a characteristic MC-generated distribution of \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{MC}$}\ for every point on the grid which is then compared to the \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{EXP}$}\ value of the experimental data set (i.e. the logarithm of the ratio of the experimental LF at its global maximum to its value at a given point on the grid). The 90\,\% confidence interval (C.I.) is the set of points on the grid for which the experimental value \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{EXP}$}\ is smaller than at least 10\,\% of all MC-generated \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{MC}$}\ (see Fig. 2). If, on the other hand, for given parameters [\mbox{$\Delta$m$^2$} ;\mbox{sin$^2\,(2\,\Theta)$} ] \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{EXP}$}\ lies in the upper 10\,\% tail of the MC distribution this point on the grid does not belong to the 90\,\% confidence interval. The upper 90\,\% confidence limit (C.L.) as shown in Fig.\,4 is the upper limit of the 90\,\% C.I. The interpretation of this C.L. is that for all parameter combinations \mbox{$\Delta$m$^2$}\ and \mbox{sin$^2\,(2\,\Theta)$}\ on this curve 90\,\% of a large number of hypothetical KARMEN experiments would have seen a larger ``signal'' (i.e. {\it smaller} \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{EXP}$} ) than the one actually observed if -- and this is important -- the true parameters of the {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillation were in this region of the parameter space. \begin{figure}[htb] \centerline{\psfig{figure=logldist.eps,height=7.0cm}} \caption{Monte Carlo generated distribution of the logarithm of the likelihood ratios \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{MC}$}\ for \mbox{$\Delta$m$^2$}$=100$\,eV$^2$ and \mbox{sin$^2\,(2\,\Theta)$}$=0.001$.
The experimental value \mbox{$\Delta log[L(\Delta\rm{m}^2;\rm{sin}^2\,2\Theta)]_{EXP}$}\ for this point in the parameter space is to the left of the upper 10\,\% tail of the MC distribution. Therefore this parameter combination belongs to the 90\,\% C.I. \label{fig:signat}} \end{figure} \vspace{-2ex} \section{Results and Conclusion} The results presented here are based on the data recorded in the measuring period from February 1997 to February 1999, which corresponds to 4670\,C protons on target. Within the cuts defined in Sect.\,3 we find 8 sequences as shown in Fig.~3. Since we expect a total background of $7.8\,\pm\,0.5$ sequences there is absolutely no indication of a beam excess. \begin{figure}[htb] \centerline{\psfig{figure=data.eps,height=12.0cm}} \caption{Distribution of the expected background sequences for the prompt energy (a) and time (b) distribution as well as the sequential energy (c) and time (d) distribution. Also shown are the 8 measured sequences which agree nicely in their shape with the expected background. \label{fig:signat}} \end{figure} For this data set the above described analysis leads to a 90\,\% C.L. of \mbox{sin$^2\,(2\,\Theta)$}=$2.1\cdot10^{-3}$ for large \mbox{$\Delta$m$^2$}\ (i.e. 100\,eV$^2$). The 90\,\% C.L. as a function of \mbox{$\Delta$m$^2$}\ can be seen in Fig.\,4. Also shown is the sensitivity of the KARMEN experiment. The sensitivity of an experiment is defined as the mean confidence limit a large number of identical experiments would yield if there were no oscillation. The actual limit is slightly ``better'' than the sensitivity with \mbox{sin$^2\,(2\,\Theta)$}=$2.3\cdot10^{-3}$ for large \mbox{$\Delta$m$^2$} . Na\"{\i}vely one would expect that the sensitivity is at slightly lower \mbox{sin$^2\,(2\,\Theta)$}\ than the exclusion curve and not vice versa because we measure more events (0.2) than the expected background.
This apparent contradiction is explained by the fact that there are no events above 36\,MeV where -- depending on \mbox{$\Delta$m$^2$}\ -- roughly half of all oscillation sequences should be and only $0.97\,\pm\,0.08$ background events are expected (see Tab.\,1). This also leads to an LF that has its global maximum in the region of small negative \mbox{sin$^2\,(2\,\Theta)$}\ which -- for a null result -- is as likely as a global maximum in the region of positive \mbox{sin$^2\,(2\,\Theta)$} . The 90\,\% C.L. from our ML analysis is compared in Fig.\,4 to the LSND result \cite{lsnd}. It excludes most of the LSND favoured region and is thus -- as is the fact that there is no event above 36\,MeV -- strongly questioning the interpretation that the LSND beam excess is an indication for {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}}\ oscillations. Furthermore one has to keep in mind that the KARMEN limit that was derived from the Feb.\,97 - Apr.\,98 data set and that is also shown in Fig.\,4 puts an even stronger constraint on the LSND favoured region. In this data set (which had slightly different cuts) we expected $2.9\,\pm\,0.1$ background sequences and measured no event at all. This yields, using again the Unified Approach, a 90\,\% C.L. of $1.3\cdot10^{-3}$ for large \mbox{$\Delta$m$^2$}\ and a sensitivity of $5.4\cdot10^{-3}$, respectively \cite{neut98}, \cite{thom}. Although this limit was criticized by some people, it is of course valid and correct: if one accepts the ansatz of the Unified Approach one must not argue that this (``lucky'') limit is not trustworthy, for this would reduce the Unified Approach to absurdity. \begin{figure}[htb] \centerline{\psfig{figure=limit.eps,height=14.0cm}} \caption{KARMEN\,2 90\,\% confidence limits according to the Unified Approach compared to other experiments: The full line is the 90\,\%\,C.L. of the data presented here, the dotted line the corresponding sensitivity and the dashed line the 90\,\%\,C.L.
derived from the Feb.\,97-Apr.\,98 data. Also shown are the 90\,\%\,C.L. of the two reactor experiments Chooz \cite{chooz} and Bugey \cite{bugey} and the favoured region for {\mbox{$\bar{\nu}_{\mu}$}}$\rightarrow${\mbox{$\bar{\nu}_{e}$}} oscillations as reported by the LSND experiment\cite{lsnd}. Areas to the right of the 90\,\% C.L. are excluded with a probability of more than 90\,\%. For the LSND result the ``99\,\% favoured region'' (total shaded area) and the ``90\,\% favoured region'' (light-shaded area) are given.\label{fig:limit}} \end{figure} \clearpage \section*{References}
\section{Introduction} With application to distributed storage systems, the notion of locality of a code was introduced in \cite{gopalan2012locality}, which enables efficient node repair in the case of single node failures (node failures modelled as erasures) by contacting fewer nodes than conventional erasure codes based on maximum distance separable (MDS) codes. An extension to handle multiple erasures has been studied in \cite{locality}. A code symbol $c_i$ is said to have $(r, \delta)$ locality if there exists a punctured code $\mathcal{C}_i$ such that $c_i \in Supp(\mathcal{C}_i)$ and the following conditions hold: 1) $|Supp(\mathcal{C}_i)| \leq r + \delta - 1$ and 2) $d_{min}(\mathcal{C}_i) \geq \delta$. An $[n,k,d_{min}]$ code is said to have $(r,\delta)$ information locality if $k$ data symbols have $(r,\delta)$ locality, and it is said to have all-symbol locality if all $n$ code symbols have $(r,\delta)$ locality. An upper bound on the minimum distance of a code with $(r,\delta)$ information locality is given by \begin{equation} \label{eq:dmin_rdelta} d_{min} \leq n - k + 1 - \left ( \left \lceil \frac{k}{r} \right \rceil -1 \right ) (\delta-1). \end{equation} \subsection{Maximally Recoverable Codes with Locality} Maximally recoverable codes (MRC) are a class of codes which recover from all information-theoretically recoverable erasure patterns given the locality constraints of the code. Maximally recoverable codes with locality have been defined for the case of $\delta = 2$ in \cite{mrc}. We extend the definitions here for general $\delta$. \begin{defn}[Data Local Maximally Recoverable Code]\label{defn:data_local} Let $\mathcal{C}$ be a systematic $[n,k,d_{min}]$ code. We say that $\mathcal{C}$ is a $[k, r, h, \delta]$ data-local maximally recoverable code if the following conditions are satisfied: \begin{itemize} \item $r | k$ and $n=k+\frac{k}{r} \delta+h$. \item Data symbols are partitioned into $\frac{k}{r}$ groups of size $r$.
For each such group, there are $\delta$ local parity symbols. \item The remaining $h$ global parity symbols may depend on all $k$ symbols. \item For any set $E \subseteq [n]$ where $E$ is obtained by picking $\delta$ coordinates from each of the $\frac{k}{r}$ local groups, restricting $\mathcal{C}$ to the coordinates in $[n]-E$ yields a $[k+h, k]$ MDS code. \end{itemize} \end{defn} A $[k, r, h, \delta]$ data-local MRC is optimal with respect to the minimum distance bound in \eqref{eq:dmin_rdelta}. The minimum distance of a $[k, r, h, \delta]$ data-local MRC is given by $d_{min} = h+\delta+1.$ \begin{defn}[Local Maximally Recoverable Code]\label{defn:local} Let $\mathcal{C}$ be a systematic $[n,k,d_{min}]$ code. We say that $\mathcal{C}$ is a $[k, r, h, \delta]$ local maximally recoverable code if the following conditions are satisfied: \begin{itemize} \item $r | (k+h)$ and $n=k+\frac{k+h}{r}\delta+h$. \item There are $k$ data symbols and $h$ global parity symbols where each global parity may depend on all data symbols. \item These $k+h$ symbols are partitioned into $\frac{k+h}{r}$ groups of size $r$. For each group there are $\delta$ local parity symbols. \item For any set $E \subseteq [n]$ where $E$ is obtained by picking $\delta$ coordinates from each of the $\frac{k+h}{r}$ local groups, restricting $\mathcal{C}$ to the coordinates in $[n]-E$ yields a $[k+h, k]$ MDS code. \end{itemize} \end{defn} A $[k, r, h, \delta]$ local MRC is optimal with respect to the minimum distance bound in \eqref{eq:dmin_rdelta}. The minimum distance of a $[k, r, h, \delta]$ local MRC is given by \begin{equation} \label{eq:dminLMRC} d_{min} = h+ \delta + 1 + \floor[\Big]{\frac{h}{r}}\delta. \end{equation} Maximally recoverable codes with locality for the case of general $\delta$ are known in the literature as Partial-MDS (PMDS) codes. MRCs have been studied in the context of distributed storage systems and PMDS codes in the context of solid state drives (SSD) \cite{pmds}.
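Before turning to known constructions, note that the distance formulas above are easy to evaluate; the following sketch (our own illustration, assuming the stated divisibility conditions) tabulates the minimum distance bound and the two MRC distances:

```python
import math

def dmin_bound(n, k, r, delta):
    """Upper bound for (r, delta) information locality:
    d_min <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1)."""
    return n - k + 1 - (math.ceil(k / r) - 1) * (delta - 1)

def dmin_data_local_mrc(h, delta):
    """d_min of a [k, r, h, delta] data-local MRC: h + delta + 1."""
    return h + delta + 1

def dmin_local_mrc(h, r, delta):
    """d_min of a [k, r, h, delta] local MRC:
    h + delta + 1 + floor(h/r) * delta."""
    return h + delta + 1 + (h // r) * delta

# Example: h = 4 global parities, locality r = 3, delta = 2 local parities:
print(dmin_local_mrc(4, 3, 2))  # 9
```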
Constructions of PMDS codes with two and three global parities have been discussed in \cite{blaum2016construction, chen2015sector}. A general construction of PMDS codes based on linearized polynomials has been provided in \cite{calis2017general}. An improved construction of PMDS codes for all parameters over small field sizes ($\mathcal{O}(\max\{\frac{k+h}{r},(r+\delta)^{\delta+h}\}^h)$) has been presented in \cite{small}. Constructions of MRCs with field size $\mathcal{O} ((\frac{k+h}{r})^r)$ have been presented in \cite{martinez2018universal}. Constructions of MRCs ($\delta=2$) over small field sizes have been investigated in \cite{hu2016new, guruswami2018constructions}. \subsection{Codes with Hierarchical Locality} The concept of \emph{locality} has been extended to hierarchical locality in \cite{hlocality}. In the case of $(r, \delta)$ locality, if there are more than $\delta$ erasures, then the code offers no locality. In the case of codes with hierarchical locality, the locality constraints are such that with an increase in the number of erasures, the locality increases in steps. The following is the definition of a code with two-level hierarchical locality. \begin{defn} \label{defn:hier_local} An $[n, k, d_{min}]$ linear code $\mathcal{C}$ is a code with \emph{hierarchical locality} having parameters $[(r_1, \delta_1), (r_2, \delta_2)]$ if for every symbol $c_i$, $1 \leq i \leq n$, there exists a punctured code $\mathcal{C}_i$ such that $c_i \in Supp(\mathcal{C}_i)$ and the following conditions hold: 1) $|Supp(\mathcal{C}_i)| \leq r_1+\delta_1-1 $, 2) $d_{min}(\mathcal{C}_i) \geq \delta_1$ and 3) $\mathcal{C}_i$ is a code with $(r_2, \delta_2)$ locality. \end{defn} An upper bound on the minimum distance of a code with two-level hierarchical locality is given by \begin{equation} \label{eqn:min_dist_bound} d \leq n - k + 1 -(\ceil[\Big]{\frac{k}{r_2}}-1)(\delta_2 - 1) - (\ceil[\Big]{\frac{k}{r_1}}-1)(\delta_1 - \delta_2).
\end{equation} \subsection{Our Contributions} In this work, we consider the locality constraints imposed by codes with two-level hierarchical locality and define maximally recoverable codes with data-local and local hierarchical locality. We prove that certain punctured codes of these codes are data-local/local MRCs. We derive the minimum distance of hierarchical data-local MRCs. We give a procedure to construct hierarchical data-local MRCs from hierarchical local MRCs. We provide a construction of hierarchical local MRCs for all parameters. For the case of one global parity, we provide a different construction of hierarchical local MRC over a lower field size. \subsection{Notation} For any integer $n$, $[n] = \{1, 2, 3 \ldots, n\}$. For any $E \subseteq [n]$, $\bar{E} = [n]-E$. For any $[n,k]$ code, and any $E \subseteq [n]$, $\mathcal{C}|_E$ refers to the punctured code obtained by restricting $\mathcal{C}$ to the coordinates in $E$. This results in an $[n-|E|, k']$ code where $k' \leq k$. For any $m\times n$ matrix $H$ and $E \subseteq [n]$, $H|_E$ is the $m \times |E|$ matrix formed by restricting $H$ to columns indexed by $E$. In several definitions to follow, we implicitly assume certain divisibility conditions which will be clear from the context. \vspace{-1ex} \section{Maximally Recoverable Codes with Hierarchical Locality} \label{sec:HLMRC} In this section, we define hierarchical data-local and local MRCs and illustrate the definitions through an example. We describe these codes via their parity check matrices instead of generator matrices (data local and local MRCs were defined by their generator matrices). 
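The puncturing/restriction notation introduced above can be made concrete with a small helper (a sketch of our own; note that indices here are 0-based, while the text uses 1-based coordinates):

```python
def restrict_columns(H, E):
    """H|_E: keep only the columns of the matrix H (a list of rows)
    whose indices lie in the set E."""
    cols = sorted(E)
    return [[row[j] for j in cols] for row in H]

H = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
print(restrict_columns(H, {0, 2}))  # [[1, 3], [5, 7]]
```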
\begin{defn}[Hierarchical Data Local Code] \label{defn:HDLC} We define a $ [k, r_1, r_2, h_1, h_2, \delta] $ hierarchical data local (HDL) code of length $n=k+h_1+\frac{k}{r_1}(h_2+\frac{r_1}{r_2}\delta)$ as follows: \begin{itemize} \item The code symbols $c_1, \ldots, c_n$ satisfy $h_1$ global parities given by $\sum_{j = 1}^n u_j^{(\ell)} c_j = 0, \ \ 1 \leq \ell \leq h_1$. \item The first $n-h_1$ code symbols are partitioned into $t_1 = \frac{k}{r_1}$ groups $A_i, 1 \leq i \leq t_1$ such that $|A_i| = r_1 + h_2 + \frac{r_1}{r_2} \delta = n_1$. The code symbols in the $i^\text{th}$ group, $1 \leq i \leq t_1$ satisfy the following $h_2$ mid-level parities $\sum_{j = 1}^{n_1} v_{i,j}^{(\ell)} c_{(i-1)n_1+j} = 0, \ \ 1 \leq \ell \leq h_2$. \item The first $n_1-h_2$ code symbols of the $i^\text{th}$ group, $1 \leq i \leq t_1$ are partitioned into $t_2 = \frac{r_1}{r_2}$ groups $B_{i,s}, 1 \leq i \leq t_1, 1 \leq s \leq t_2$ such that $|B_{i,s}| = r_2 + \delta = n_2$. The code symbols in the $(i,s)^\text{th}$ group, $1 \leq i \leq t_1, 1 \leq s \leq t_2$ satisfy the following $\delta$ local parities $\sum_{j = 1}^{n_2} w_{i,s,j}^{(\ell)} c_{(i-1)n_1+(s-1)n_2+j} = 0, \ \ 1 \leq \ell \leq \delta$. \end{itemize} \end{defn} \begin{defn}[Hierarchical Data Local MRC] \label{defn:HDLMRC} Let $\mathcal{C}$ be a $[k, r_1, r_2, h_1, h_2, \delta]$ HDL code. Then $\mathcal{C}$ is maximally recoverable if for any set $E \subset [n]$ such that $|E| = k+h_1$, $|E \cap B_{i, s}| \leq r_2$ $\forall \ i, s$ and $|E \cap A_i| = r_1$ $\forall \ i$, the punctured code $\mathcal{C}|_E$ is a $[k+h_1,k,h_1+1]$ MDS code.
\end{defn} \begin{defn}[Hierarchical Local Code] \label{defn:HLC} We define a $ [k, r_1, r_2, h_1, h_2, \delta] $ hierarchical local (HL) code of length $n=k+h_1+\frac{k+h_1}{r_1}(h_2+\frac{r_1+h_2}{r_2}\delta)$ as follows: \begin{itemize} \item The code symbols $c_1, \ldots, c_n$ satisfy $h_1$ global parities given by $\sum_{j = 1}^n u_j^{(\ell)} c_j = 0, \ \ 1 \leq \ell \leq h_1$. \item The $n$ code symbols are partitioned into $t_1 = \frac{k+h_1}{r_1}$ groups $A_i, 1 \leq i \leq t_1$ such that $|A_i| = r_1 + h_2 + \frac{r_1+h_2}{r_2}\delta = n_1$. The code symbols in the $i^\text{th}$ group, $1 \leq i \leq t_1$ satisfy the following $h_2$ mid-level parities $\sum_{j = 1}^{n_1} v_{i,j}^{(\ell)} c_{(i-1)n_1+j} = 0, \ \ 1 \leq \ell \leq h_2$. \item The $n_1$ code symbols of the $i^\text{th}$ group, $1 \leq i \leq t_1$ are partitioned into $t_2 = \frac{r_1+h_2}{r_2}$ groups $B_{i,s}, 1 \leq i \leq t_1, 1 \leq s \leq t_2$ such that $|B_{i,s}| = r_2 + \delta = n_2$. The code symbols in the $(i,s)^\text{th}$ group, $1 \leq i \leq t_1, 1 \leq s \leq t_2$ satisfy the following $\delta$ local parities $\sum_{j = 1}^{n_2} w_{i,s,j}^{(\ell)} c_{(i-1)n_1+(s-1)n_2+j} = 0, \ \ 1 \leq \ell \leq \delta$. \end{itemize} \end{defn} \begin{defn}[Hierarchical Local MRC] \label{defn:HLMRC} Same as Definition \ref{defn:HDLMRC}. \end{defn} In an independent parallel work \cite{martinez2018universal}, a class of MRCs known as multi-layer MRCs has been introduced. We would like to note that hierarchical local MRCs (given in Definition \ref{defn:HLMRC}) form a subclass of these multi-layer MRCs. \begin{example} We demonstrate the structure of the parity check matrix for a $[k=5, r_1=3, r_2=2, h_1=1, h_2=1, \delta=2]$ HL code. The length of the code is $n=k+h_1+\frac{k+h_1}{r_1}(h_2+\frac{r_1+h_2}{r_2}\delta) = 16$.
The parity check matrix of the code is given below: \[ H=\begin{bmatrix} \begin{matrix} \begin{array}{r|r} \begin{matrix} \begin{matrix} M_{1,1} & \\ & M_{1,2} \\ \end{matrix} \\ \hline N_1 \\ \end{matrix} & \\ \hline & \begin{matrix} \begin{matrix} M_{2,1} & \\ & M_{2,2} \\ \end{matrix} \\ \hline N_2 \\ \end{matrix} \\ \end{array} \\ \hline P \\ \end{matrix} \end{bmatrix} \] \[ M_{i,j} = \begin{bmatrix} w_{i,j,1}^{(1)} & w_{i,j,2}^{(1)} & w_{i,j,3}^{(1)} & w_{i,j,4}^{(1)} \\ w_{i,j,1}^{(2)} & w_{i,j,2}^{(2)} & w_{i,j,3}^{(2)} & w_{i,j,4}^{(2)} \\ \end{bmatrix}, \] \[ N_i = \begin{bmatrix} v_{i,1}^{(1)} & v_{i,2}^{(1)} & \ldots & v_{i,8}^{(1)} \\ \end{bmatrix} \text{and } P = \begin{bmatrix} u_1^{(1)} & \ldots & u_{16}^{(1)} \\ \end{bmatrix} \] \end{example} \section{Properties of MRC with Hierarchical Locality} In this section, we will derive two properties of MRC with hierarchical locality. We will show that the middle codes of an HDL/HL-MRC have to be data-local and local MRCs, respectively. Also, we derive the minimum distance of the HDL-MRC. \begin{lem} \label{lem:midHDLMRC} Consider a $[k, r_1, r_2, h_1, h_2, \delta]$ HDL-MRC $\mathcal{C}$. Let $A_i, 1 \leq i \leq t_1$ be the supports of the middle codes as defined in Definition \ref{defn:HDLC}. Then, for each $i$, $\mathcal{C}|_{A_i}$ is a $[r_1, r_2, h_2, \delta]$ data-local MRC. \end{lem} \begin{proof} Suppose not. This means that for some $i$, the middle code $\mathcal{C}|_{A_i}$ is not a $[r_1, r_2, h_2, \delta]$ data-local MRC. By the definition of data-local MRC, we have that there exists a set $E_1 \subset A_i$ such that $|E_1| = r_1 + h_2$ and $\mathcal{C}|_{E_1}$ is not a $[r_1+h_2, r_1, h_2+1]$ MDS code. This implies that there exists a subset $E' \subset E_1$ such that $|E'| = r_1$ and $\text{rank}(G|_{E'}) < r_1$. We can extend the set $E'$ to obtain a set $E \subset [n]$, $|E| = k+h_1$, which satisfies the conditions in the definition of HDL-MRC.
The resulting punctured code $\mathcal{C}|_{E}$ cannot be MDS since there exists an $r_1 < k$ sized subset of $E$ such that $\text{rank}(G|_{E'}) < r_1$. \end{proof} \begin{lem} \label{lem:midHLMRC} Consider a $[k, r_1, r_2, h_1, h_2, \delta]$ HL-MRC $\mathcal{C}$. Let $A_i, 1 \leq i \leq t_1$ be the supports of the middle codes as defined in Definition \ref{defn:HLC}. Then, for each $i$, $\mathcal{C}|_{A_i}$ is a $[r_1, r_2, h_2, \delta]$ local MRC. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{lem:midHDLMRC}. \end{proof} \subsection{Minimum Distance of HDL-MRC} \begin{lem} The minimum distance of a $[k, r_1, r_2, h_1, h_2, \delta]$ HDL-MRC is given by $d = h_1 + h_2 + \delta +1$. \end{lem} \begin{proof} Based on the definition of HDL-MRC, it can be seen that the $[k, r_1, r_2, h_1, h_2, \delta]$ HDL-MRC is a code with hierarchical locality as per Definition \ref{defn:hier_local} with $k, r_1, r_2$ being the same, $\delta_2 - 1 = \delta$, $\delta_1 = h_2 + \delta + 1$ and $n = k + h_1 + \frac{k}{r_1}(h_2 + \frac{r_1}{r_2}\delta)$. Substituting these parameters in the minimum distance bound in \eqref{eqn:min_dist_bound}, we have that $d \leq h_1 + h_2 + \delta +1$. By Lemma \ref{lem:midHDLMRC}, we know that $\mathcal{C}|_{A_i}$ is a $[r_1, r_2, h_2, \delta]$ data-local MRC. The minimum distance of $\mathcal{C}|_{A_i}$ (from \eqref{eq:dminLMRC}) is $h_2 + \delta +1$. Thus, the middle code itself can recover from any $h_2 + \delta$ erasures. The additional $h_1$ erasures can be shown to extend to a set $E$ (consisting of $k$ additional non-erased symbols) which satisfies the conditions in Definition \ref{defn:HDLMRC}. Since the punctured code $\mathcal{C}|_E$ is a $[k+h_1,k,h_1+1]$ MDS code, it can be used to recover the $h_1$ erasures. Hence, a $[k, r_1, r_2, h_1, h_2, \delta]$ HDL-MRC can recover from any $h_1 + h_2 + \delta$ erasures.
\end{proof} \subsection{Deriving HDL-MRC from HL-MRC} In this section, we give a method to derive any HDL-MRC from an HL-MRC. Consider a $[k, r_1, r_2, h_1, h_2, \delta]$ HL-MRC $\mathcal{C}$. Consider a particular set $E$ of $k+h_1$ symbols satisfying the conditions given in Definition \ref{defn:HLMRC}. We will refer to the elements of the set $E$ as ``primary symbols''. By the definition of HL-MRC, the code $\mathcal{C}$ when punctured to $E$ results in a $[k+h_1, k, h_1+1]$ MDS code. Hence, any $k$-subset of $E$ forms an information set. We will refer to the first $k$ symbols of $E$ as ``data symbols'' and the remaining $h_1$ symbols as global parities. The symbols in $[n]\setminus E$ will be referred to as parity symbols (mid-level parities and local parities), and it can be observed that the parity symbols can be obtained as linear combinations of the data symbols. \begin{itemize} \item If $r_1 \mid h_1$ and $r_2 \mid h_2$, \begin{enumerate} \item For $A_i, \frac{k}{r_1} < i \leq \frac{k+h_1}{r_1}$, drop all the parity symbols, including the $h_2$ mid-level parities per $A_i$ as well as the $\delta$ local parities per $B_{i,s} \subset A_i$. As a result, we would be left with $h_1$ ``primary symbols'' in the local groups $A_i, \frac{k}{r_1} < i \leq \frac{k+h_1}{r_1}$. These form the global parities of the HDL-MRC. This step ensures that mid-level and local parities formed from global parities are dropped. \item For each $B_{i, s},\: 1 \leq i \leq \frac{k}{r_1},\: s > \frac{r_1}{r_2}$, drop the $\delta$ local parities. This step ensures that local parities formed from mid-level parities are dropped. \end{enumerate} This results in a $[k, r_1, r_2, h_1, h_2, \delta]$ HDL-MRC. \item If $r_1 \nmid h_1$ and $r_2 \mid h_2$, \begin{enumerate} \item From the groups $A_i, \lfloor\frac{k}{r_1}\rfloor + 1 < i \leq \frac{k+h_1}{r_1}$, drop all the parity symbols, including the $h_2$ mid-level parities per $A_i$ as well as the $\delta$ local parities per $B_{i,s} \subset A_i$.
\item For each $B_{i, s},\: 1 \leq i \leq \lfloor\frac{k}{r_1}\rfloor,\: s > \frac{r_1}{r_2}$, drop the $\delta$ local parities. \item Drop the $k - \lfloor\frac{k}{r_1}\rfloor r_1$ data symbols in $A_i, i = \lfloor\frac{k}{r_1}\rfloor + 1$ and recalculate all the parities (local, mid-level and global) by setting these data symbols to zero in the linear combinations. \end{enumerate} This results in a $[\lfloor\frac{k}{r_1}\rfloor r_1, r_1, r_2, h_1, h_2, \delta]$ HDL-MRC. \end{itemize} For the case of $r_2 \nmid h_2$, an HDL-MRC can be derived from an HL-MRC using techniques similar to the above. Hence, in the rest of the paper, we will discuss the construction of HL-MRCs. \section{General Construction of HL-MRC} In this section, we will present a general construction of a $[k, r_1, r_2, h_1, h_2, \delta]$ HL-MRC. First, we will provide the structure of the code and then derive necessary and sufficient conditions for the code to be an HL-MRC. Finally, we will apply a known result on BCH codes to complete the construction. \begin{defn} A multiset $S \subseteq \mathbb{F}$ is $k$-wise independent over $\mathbb{F}$ if every set $T \subseteq S$ such that $|T| \leq k$ is linearly independent over $\mathbb{F}$. \end{defn} \begin{lem} \label{lem:mds_linearized} Let $\mathbb{F}_{q^t}$ be an extension of $\mathbb{F}_q$. Let $a_1, a_2, \ldots, a_n$ be elements of $\mathbb{F}_{q^t}$. The following matrix \[ \begin{bmatrix} a_1 & a_2 & a_3 & \ldots & a_n \\ a_1^{q} & a_2^{q} & a_3^{q} & \ldots & a_n^{q} \\ \vdots & \vdots & \vdots & \ldots & \vdots \\ a_1^{q^{k-1}} & a_2^{q^{k-1}} & a_3^{q^{k-1}} & \ldots & a_n^{q^{k-1}} \\ \end{bmatrix} \] is the generator matrix of an $[n,k]$ MDS code if and only if $a_1, a_2, \ldots, a_n$ are $k$-wise linearly independent over $\mathbb{F}_q$. \end{lem} \begin{proof} This follows directly from Lemma 3 in \cite{small}.
\end{proof} \begin{constr} \label{constr:HLMRC} The structure of the parity check matrix ($H$) of a $[k, r_1, r_2, h_1, h_2, \delta]$ HL-MRC is given by \[ H = \begin{bmatrix} H_0 & \\ & H_0 & \\ & & \ddots & \\ & & & H_0 \\ H_1 & H_2 & \hdots & H_{t_1} \\ \end{bmatrix}, \quad H_0 = \begin{bmatrix} M_0 & \\ & M_0 & \\ & & \ddots & \\ & & & M_0 \\ M_1 & M_2 & \hdots & M_{t_2} \\ \end{bmatrix} \] Here, $H_0$ is a $(t_2 \delta + h_2)\times n_1$ matrix and each $H_i, 1\leq i \leq t_1$, is an $h_1\times n_1$ matrix. $H_0$ is then further subdivided into the blocks $M_i$. $M_0$ has dimensions $\delta \times n_2$ and each $M_i, 1\leq i \leq t_2$, is an $h_2 \times n_2$ matrix. Let $q$ be a prime power such that $q \geq n$, let $\mathbb{F}_{q^{m_1}}$ be an extension field of $\mathbb{F}_q$, and let $\mathbb{F}_{q^m}$ be an extension field of $\mathbb{F}_{q^{m_1}}$, where $m_1 \mid m$. The construction is then given by the following. \[ M_0 = \begin{bmatrix} 1 & 1 & 1 & \ldots & 1 \\ 0 & \beta & \beta^2 & \ldots & \beta^{n_2 -1} \\ 0 & \beta^2 & \beta^4 & \ldots & \beta^{2(n_2-1)} \\ \vdots & \vdots & \vdots & \ldots & \vdots \\ 0 & \beta^{\delta-1} & \beta^{2(\delta - 1)} & \ldots & \beta^{(\delta - 1)(n_2-1)} \end{bmatrix}, \] where $\beta \in \mathbb{F}_q$ is a primitive element. \[ M_i = \begin{bmatrix} \alpha_{i, 1} & \alpha_{i, 2} & \ldots & \alpha_{i, n_2} \\ \alpha_{i, 1}^{q} & \alpha_{i, 2}^{q} & \ldots & \alpha_{i, n_2}^{q} \\ \vdots & \vdots & \ldots & \vdots \\ \alpha_{i, 1}^{q^{h_2-1}} & \alpha_{i, 2}^{q^{h_2-1}} & \ldots & \alpha_{i, n_2}^{q^{h_2-1}} \\ \end{bmatrix}, \] where $\alpha_{i,j} \in \mathbb{F}_{q^{m_1}}$ for $1 \leq i \leq t_2, 1 \leq j \leq n_2$.
\[ H_i = [H_{i,1} \ H_{i,2} \ \ldots \ H_{i,t_2}] \] \[ H_{i,s} = \begin{bmatrix} \lambda_{i,s,1} & \lambda_{i,s, 2} & \ldots & \lambda_{i, s,n_2} \\ \lambda_{i, s,1}^{q^{m_1}} & \lambda_{i, s,2}^{q^{m_1}} & \ldots & \lambda_{i, s,n_2}^{q^{m_1}} \\ \vdots & \vdots & \ldots & \vdots \\ \lambda_{i, s,1}^{q^{m_1(h_1-1)}} & \lambda_{i, s,2}^{q^{m_1(h_1-1)}} & \ldots & \lambda_{i, s,n_2}^{q^{m_1(h_1-1)}} \\ \end{bmatrix}, \] where $\lambda_{i,s,j} \in \mathbb{F}_{q^{m}}$ for $1 \leq i \leq t_1, 1 \leq s \leq t_2, 1 \leq j \leq n_2$. \end{constr} A $(\delta,h_2)$ erasure pattern is defined by the following two sets. $\Delta$ is a three-dimensional array of indices, with the first dimension $i$ indexing the middle code (hence $1 \leq i \leq t_1$) and the second dimension $s$ indexing the local code (hence $1 \leq s \leq t_2$). The third dimension $j$ varies from $1$ to $\delta$ and is used to index the $\delta$ coordinates which are erased in the $(i,s)^{\text{th}}$ group. If $e \in [n]$ denotes the actual index of an erased coordinate in the code and $e \in B_{i,s}$, then we set $\Delta_{i,s,j} = (e \mod n_2) + 1$. $\Delta_{i,s}$ is used to denote the vector of $\delta$ coordinates which are erased in the $(i,s)^{\text{th}}$ group. $\bar{\Delta}_{i,s}$ is used to denote the complement of $\Delta_{i,s}$ in the set $[n_2]$. $\Gamma$ is a two-dimensional array of indices, with the first dimension $i$ indexing the middle code (hence $1 \leq i \leq t_1$). The second dimension $j$ varies from $1$ to $h_2$ and is used to index the additional $h_2$ coordinates which are erased in the $i^{\text{th}}$ group. If $e \in [n]$ denotes the actual index of an erased coordinate in the code and $e \in A_i$, then we set $\Gamma_{i,j} = (e \mod n_1) + 1$. $\Gamma_{i}$ is used to denote the vector of $h_2$ coordinates which are erased in the $i^{\text{th}}$ group.
$\bar{\Gamma}_{i}$ is used to denote the complement of $\Gamma_{i}$ in the set $[n_1] \setminus (\cup_{s=1}^{t_2} \Delta_{i,s})$. We define some matrices and sets based on the parameters of the construction, which will be useful in proving the subsequent necessary and sufficient condition for the construction to be an HL-MRC. Here, $\alpha_{s,\Delta_{i,s}}$ denotes the set $\{\alpha_{s,j} \mid j \in \Delta_{i,s}\}$. \begin{eqnarray*} L_{i,s} & = & (M_0|_{\Delta_{i,s}})^{-1} M_0|_{\bar{\Delta}_{i,s}} \\ \Psi_i & = & \{ \alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s}, 1 \leq s \leq t_2 \} \\ & = & \{ \Psi_{i,\Gamma_i}, \ \Psi_{i,\bar{\Gamma}_i} \} \\ & = & \{ \psi_{i,1}, \ldots, \psi_{i,h_2}, \psi_{i,h_2+1}, \ldots, \psi_{i, r_1+h_2} \} \end{eqnarray*} The above equalities follow by noting that $\cup_{s=1}^{t_2} \bar{\Delta}_{i,s} = \Gamma_i \cup \bar{\Gamma}_i$. We will refer to the elements in $\Psi_{i,\Gamma_i}$ by $\{ \psi_{i,1}, \ldots, \psi_{i,h_2}\}$ and those in $\Psi_{i,\bar{\Gamma}_i}$ by $\{ \psi_{i,h_2+1}, \ldots, \psi_{i, r_1+h_2}\}$. Consider the following matrix based on the elements of $\Psi_i$, \begin{equation} F_i = [F_i|_{\Gamma_i} \ \ F_i|_{\bar{\Gamma}_i}] = \begin{bmatrix} \psi_{i, 1} & \psi_{i, 2} & \ldots & \psi_{i, r_1+h_2} \\ \psi_{i, 1}^{q} & \psi_{i, 2}^{q} & \ldots & \psi_{i, r_1+h_2}^{q} \\ \vdots & \vdots & \ldots & \vdots \\ \psi_{i, 1}^{q^{h_2-1}} & \psi_{i, 2}^{q^{h_2-1}} & \ldots & \psi_{i, r_1+h_2}^{q^{h_2-1}} \\ \end{bmatrix}, \end{equation} and \begin{eqnarray*} \Phi_i & = & \{ \lambda_{i,s,\bar{\Delta}_{i,s}} + \lambda_{i,s,\Delta_{i,s}} L_{i,s}, 1 \leq s \leq t_2 \} \\ & = & \{ \Phi_{i,\Gamma_i}, \ \Phi_{i,\bar{\Gamma}_i} \} \\ & = & \{ \phi_{i,1}, \ldots, \phi_{i,h_2}, \phi_{i,h_2+1}, \ldots, \phi_{i, r_1+h_2} \} \end{eqnarray*} Let $Z_i = (F_i|_{\Gamma_i})^{-1} F_i|_{\bar{\Gamma}_i}$. Finally, the set $\Theta = \{ \Phi_{i,\bar{\Gamma}_i} + \Phi_{i,\Gamma_i} Z_i, 1 \leq i \leq t_1 \}$.
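To make the shortening step behind these definitions concrete, here is a minimal Python sketch over the toy field GF(5) (the prime $p=5$, $\beta=2$, $\delta=2$, $n_2=4$, the erased set $\Delta$ and the $\alpha$-row are all illustrative choices, not parameters of the actual construction). It verifies that eliminating the $\Delta$-columns of the stacked matrix leaves the Schur-complement row $\alpha|_{\bar{\Delta}} - \alpha|_{\Delta}(M_0|_{\Delta})^{-1}M_0|_{\bar{\Delta}}$, whose entries are the quantities collected in $\Psi_i$ (up to the overall sign convention absorbed into $L_{i,s}$, which is immaterial for the independence arguments).

```python
# Toy check of the shortening (row-reduction) step over GF(p), p = 5, with
# beta = 2 a primitive element mod 5, delta = 2 and n2 = 4. The erased set
# Delta and the alpha-row below are illustrative choices, not taken from the
# actual construction.
p, beta = 5, 2
M0 = [[1, 1, 1, 1],                        # Vandermonde block, delta x n2
      [0, beta % p, beta**2 % p, beta**3 % p]]
alpha = [1, 2, 3, 4]                       # one row of an M_i block (toy values)
Delta, Dbar = [0, 1], [2, 3]               # erased / surviving column indices

def cols(M, idx):
    # restriction of a matrix to a list of columns
    return [[row[j] for j in idx] for row in M]

def inv2(M):
    # inverse of a 2x2 matrix over GF(p) via the adjugate and Fermat inversion
    (a, b), (c, d) = M
    det_inv = pow((a * d - b * c) % p, p - 2, p)
    return [[d * det_inv % p, (-b) * det_inv % p],
            [(-c) * det_inv % p, a * det_inv % p]]

def vecmat(v, M):
    # row vector times matrix over GF(p)
    return [sum(vi * row[j] for vi, row in zip(v, M)) % p
            for j in range(len(M[0]))]

# L_{i,s} = (M0|Delta)^{-1} M0|Dbar, as defined in the text
L = [vecmat(r, cols(M0, Dbar)) for r in inv2(cols(M0, Delta))]

# Schur-complement row alpha|Dbar - alpha|Delta * L
aD = [alpha[j] for j in Delta]
schur = [(alpha[j] - sum(aD[t] * L[t][c] for t in range(len(Delta)))) % p
         for c, j in enumerate(Dbar)]

# Direct Gaussian elimination of the Delta-columns from the alpha-row:
# subtract alpha|Delta (M0|Delta)^{-1} times the M0 block from alpha.
coef = vecmat(aD, inv2(cols(M0, Delta)))
row = [(alpha[j] - sum(coef[t] * M0[t][j] for t in range(len(Delta)))) % p
       for j in range(len(alpha))]
print(row, schur)  # Delta-columns are zeroed; survivors match the Schur row
```

The elimination leaves each surviving entry an $\mathbb{F}_q$-linear combination of at most $\delta+1$ of the $\alpha$'s, which is exactly the counting used later in Lemma \ref{lem:hwise_ind}.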
\begin{thm} \label{thm:necsuf} The code described in Construction \ref{constr:HLMRC} is a $[k, r_1, r_2, h_1, h_2, \delta]$ HL-MRC if and only if, for any $(\delta,h_2)$ erasure pattern, each $\Psi_i, 1 \leq i \leq t_1$ is $h_2$-wise independent over $\mathbb{F}_q$ and $\Theta$ is $h_1$-wise independent over $\mathbb{F}_{q^{m_1}}$. \end{thm} \begin{proof} By Lemma \ref{lem:midHLMRC}, we have that $\mathcal{C}$ is an HL-MRC only if $\mathcal{C}|_{A_i}$ is a $[r_1, r_2, h_2, \delta]$ local MRC. By the definition of local MRC, a code is a $[r_1, r_2, h_2, \delta]$ local MRC if, after puncturing $\delta$ coordinates in each of the $\frac{r_1+h_2}{r_2}$ local groups, the resultant code is a $[r_1+h_2, r_1, h_2+1]$ MDS code. Puncturing a code on a set of coordinates is equivalent to shortening the dual code on the same set of coordinates. Shortening on a set of coordinates in the dual code can be performed by zeroing the corresponding coordinates in the parity check matrix by row reduction. To prove that $\mathcal{C}|_{A_i}$ is a $[r_1, r_2, h_2, \delta]$ local MRC, we need to show that certain punctured codes are MDS (Definition \ref{defn:local}). We will equivalently show that the corresponding shortened codes of the dual code are MDS. Consider the coordinates corresponding to the $(i,s)^\text{th}$ group in the parity check matrix. The sub-matrix of interest in this case is the following: \begin{equation*} \left [\begin{array}{c|c} M_0|_{\Delta_{i,s}} & M_0|_{\bar{\Delta}_{i,s}} \\ \hline \alpha_{s,\Delta_{i,s}} & \alpha_{s,\bar{\Delta}_{i,s}} \\ \alpha_{s,\Delta_{i,s}}^q & \alpha_{s,\bar{\Delta}_{i,s}}^q \\ \vdots & \vdots \\ \alpha_{s,\Delta_{i,s}}^{q^{h_2-1}} & \alpha_{s,\bar{\Delta}_{i,s}}^{q^{h_2-1}} \end{array} \right], \end{equation*} where $\alpha_{s,\Delta_{i,s}}^q$ denotes the vector obtained by taking the $q^{\text{th}}$ power of each element in the vector.
Applying row reduction to the above matrix, we have \begin{equation*} \left [\begin{array}{c|c} M_0|_{\Delta_{i,s}} & M_0|_{\bar{\Delta}_{i,s}} \\ \hline \bold{0} & \alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s}\\ \bold{0} & (\alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s})^q \\ \vdots & \vdots \\ \bold{0} & (\alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s})^{q^{h_2-1}} \end{array} \right]. \end{equation*} Note that $L_{i,s}$ can be pushed into the power of $q$ since the elements of $L_{i,s}$ are in $\mathbb{F}_q$. After row reducing $\delta$ coordinates from each of the $\frac{r_1+h_2}{r_2}$ local groups in $A_i$, the resultant parity check matrix is $F_i$. Applying Lemma \ref{lem:mds_linearized}, $F_i$ forms the generator matrix of an MDS code if and only if the set $\Psi_i$ is $h_2$-wise independent over $\mathbb{F}_q$. The shortening of the code above is applicable to mid-level parities. Now, we will apply similar shortening in two steps to global parities. 
The sub-matrix of interest in this case is the following: \begin{equation*} \left [\begin{array}{c|c} M_0|_{\Delta_{i,s}} & M_0|_{\bar{\Delta}_{i,s}} \\ \hline \alpha_{s,\Delta_{i,s}} & \alpha_{s,\bar{\Delta}_{i,s}} \\ \alpha_{s,\Delta_{i,s}}^q & \alpha_{s,\bar{\Delta}_{i,s}}^q \\ \vdots & \vdots \\ \alpha_{s,\Delta_{i,s}}^{q^{h_2-1}} & \alpha_{s,\bar{\Delta}_{i,s}}^{q^{h_2-1}} \\ \hline \lambda_{i,s,\Delta_{i,s}} & \lambda_{i,s,\bar{\Delta}_{i,s}} \\ \lambda_{i,s,\Delta_{i,s}}^{q^{m_1}} & \lambda_{i,s,\bar{\Delta}_{i,s}}^{q^{m_1}} \\ \vdots & \vdots \\ \lambda_{i,s,\Delta_{i,s}}^{q^{m_1(h_1-1)}} & \lambda_{i,s,\bar{\Delta}_{i,s}}^{q^{m_1(h_1-1)}} \end{array} \right] \end{equation*} Applying row reduction to the above matrix, we have \begin{equation*} \left [\begin{array}{c|c} M_0|_{\Delta_{i,s}} & M_0|_{\bar{\Delta}_{i,s}} \\ \hline \bold{0} & \alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s}\\ \bold{0} & (\alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s})^q \\ \vdots & \vdots \\ \bold{0} & (\alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s})^{q^{h_2-1}} \\ \hline \bold{0} & \lambda_{i,s,\bar{\Delta}_{i,s}} + \lambda_{i,s,\Delta_{i,s}} L_{i,s}\\ \bold{0} & (\lambda_{i,s,\bar{\Delta}_{i,s}} + \lambda_{i,s,\Delta_{i,s}} L_{i,s})^{q^{m_1}} \\ \vdots & \vdots \\ \bold{0} & (\lambda_{i,s,\bar{\Delta}_{i,s}} + \lambda_{i,s,\Delta_{i,s}} L_{i,s})^{q^{m_1(h_1-1)}} \end{array} \right]. \end{equation*} To apply row reduction again, we consider the following submatrix obtained by deleting the zero columns and aggregating the non-zero columns from the $\frac{r_1+h_2}{r_2}$ groups, \begin{equation*} \left [\begin{array}{c|c} F_i|_{\Gamma_i} & F_i|_{\bar{\Gamma}_i} \\ \hline \Phi_{i,\Gamma_i} & \Phi_{i,\bar{\Gamma}_i} \\ \Phi_{i,\Gamma_i}^{q^{m_1}} & \Phi_{i,\bar{\Gamma}_i}^{q^{m_1}} \\ \vdots & \vdots \\ \Phi_{i,\Gamma_i}^{q^{m_1(h_1-1)}} & \Phi_{i,\bar{\Gamma}_i}^{q^{m_1(h_1-1)}} \end{array} \right]. 
\end{equation*} Applying row reduction to the above matrix, we have \begin{equation*} \left [\begin{array}{c|c} F_i|_{\Gamma_i} & F_i|_{\bar{\Gamma}_i} \\ \hline \bold{0} & \Phi_{i,\bar{\Gamma}_i} + \Phi_{i,\Gamma_i} Z_i \\ \bold{0} & (\Phi_{i,\bar{\Gamma}_i} + \Phi_{i,\Gamma_i} Z_i)^{q^{m_1}} \\ \vdots & \vdots \\ \bold{0} & (\Phi_{i,\bar{\Gamma}_i} + \Phi_{i,\Gamma_i} Z_i)^{q^{m_1(h_1-1)}} \end{array} \right]. \end{equation*} Note that $Z_i$ can be pushed into the power of $q^{m_1}$ since the elements of $Z_i$ are in $\mathbb{F}_{q^{m_1}}$. Applying Lemma \ref{lem:mds_linearized}, the row reduced matrix above forms the generator matrix of an MDS code if and only if the set $\Theta$ is $h_1$-wise independent over $\mathbb{F}_{q^{m_1}}$. \end{proof} \begin{lem} \label{lem:hwise_ind} For any $(\delta,h_2)$ erasure pattern, \begin{itemize} \item For each $i$, $\Psi_i = \{ \alpha_{s,\bar{\Delta}_{i,s}} + \alpha_{s,\Delta_{i,s}} L_{i,s}, 1 \leq s \leq t_2 \}$ is $h_2$-wise independent over $\mathbb{F}_q$ if the set $\{ \alpha_{s,j}, 1 \leq s \leq t_2, 1 \leq j \leq n_2 \}$ is $(\delta+1) h_2$-wise independent over $\mathbb{F}_q$. \item $\Theta = \{ \Phi_{i,\bar{\Gamma}_i} + \Phi_{i,\Gamma_i} Z_i, 1 \leq i \leq t_1 \}$ is $h_1$-wise independent over $\mathbb{F}_{q^{m_1}}$ if the set $\{ \lambda_{i,s,j}, 1 \leq i \leq t_1, 1 \leq s \leq t_2, 1 \leq j \leq n_2 \}$ is $(\delta+1)(h_2+1)h_1$-wise independent over $\mathbb{F}_{q^{m_1}}$. \end{itemize} \end{lem} \begin{proof} Since the size of the matrix $L_{i,s}$ is $\delta \times (n_2 - \delta)$, each element of $\Psi_i$ is an $\mathbb{F}_q$-linear combination of at most $\delta + 1$ different $\alpha_{s,j}$. Consider an $\mathbb{F}_q$-linear combination of $h_2$ elements of $\Psi_i$. The linear combination will have at most $(\delta + 1) h_2$ different $\alpha_{s,j}$. Thus, if the set $\{ \alpha_{s,j} \}$ is $(\delta+1) h_2$-wise independent over $\mathbb{F}_q$, then $\Psi_i$ is $h_2$-wise independent over $\mathbb{F}_q$.
To prove the second part, we note that each element of $\Phi_i$ is a linear combination of at most $\delta + 1$ different $\lambda_{i,s,j}$. Since the size of the matrix $Z_i$ is $h_2 \times r_1$, each element of $\Theta$ is an $\mathbb{F}_{q^{m_1}}$-linear combination of at most $(\delta+1)(h_2+1)$ different $\lambda_{i,s,j}$. Consider an $\mathbb{F}_{q^{m_1}}$-linear combination of $h_1$ elements of $\Theta$. The linear combination will have at most $(\delta + 1)(h_2+1)h_1$ different $\lambda_{i,s,j}$. Thus, if the set $\{ \lambda_{i,s,j} \}$ is $(\delta + 1)(h_2+1)h_1$-wise independent over $\mathbb{F}_{q^{m_1}}$, then $\Theta$ is $h_1$-wise independent over $\mathbb{F}_{q^{m_1}}$. \end{proof} We will design the $\{ \alpha_{s,j} \}$ and $\{ \lambda_{i,s,j} \}$ based on Lemma \ref{lem:hwise_ind} so that the field size is as small as possible. We will pick these based on the following two properties: \begin{itemize} \item {\bf Property 1:} The columns of the parity check matrix of an $[n,k,d]$ linear code over $\mathbb{F}_q$ can be interpreted as $n$ elements of $\mathbb{F}_{q^{n-k}}$ which are $(d-1)$-wise independent over $\mathbb{F}_q$. \item {\bf Property 2:} There exist $[n = q^t-1, k, d]$ BCH codes over $\mathbb{F}_q$ \cite{roth2006}, where the parameters are related as \begin{equation*} n-k = 1 + \left \lceil \frac{q-1}{q} (d-2) \right \rceil \lceil \log_2 (n)\rceil . \end{equation*} \end{itemize} \begin{thm} \label{thm:HLBCH} The code in Construction \ref{constr:HLMRC} is a $[k, r_1, r_2, h_1, h_2, \delta]$ HL-MRC if the parameters are picked as follows: \begin{enumerate} \item $q$ is the smallest prime power greater than $n_2$. \item $m_1$ is chosen based on the following relation: $m_1 = 1 + \left \lceil \frac{q-1}{q} ((\delta+1)h_2-1) \right \rceil \lceil \log_q (n_2 t_2)\rceil$.
\item $n_2 t_2$ elements $\{ \alpha_{s,j} \}$ over $\mathbb{F}_{q^{m_1}}$ are set to be the columns of the parity check matrix of the BCH code over $\mathbb{F}_q$ with parameters $[n = q^{\lceil \log_q (n_2 t_2) \rceil}-1, q^{\lceil \log_q (n_2 t_2)\rceil}-1-m_1, (\delta+1)h_2+1]$. \item $m$ is chosen to be the smallest multiple of $m_1$ satisfying the following relation: $m \geq 1 + \left \lceil \frac{q^{m_1}-1}{q^{m_1}} ((\delta+1)(h_2+1)h_1-1) \right \rceil \lceil \log_{q^{m_1}} (n)\rceil$. \item $n$ elements $\{ \lambda_{i,s,j} \}$ over $\mathbb{F}_{q^{m}}$ are set to be the columns of the parity check matrix of the BCH code over $\mathbb{F}_{q^{m_1}}$ with parameters $[n = q^{m_1\lceil \log_{q^{m_1}} (n) \rceil}-1, q^{m_1\lceil \log_{q^{m_1}} (n) \rceil}-1-m, (\delta+1)(h_2+1)h_1+1]$. \end{enumerate} \end{thm} \begin{proof} The proof follows from Lemma \ref{lem:hwise_ind} and Properties 1 and 2. \end{proof} \vspace{-2ex} \section{HL-MRC Construction for $h_1 = 1$} In this section, we present a construction of HL-MRC for the case when $h_1=1$ over a field of size lower than that provided by Construction \ref{constr:HLMRC}. \begin{constr}\label{constr:h1} The structure of the parity check matrix for the present construction is the same as that given in Construction \ref{constr:HLMRC}. In addition, the matrices $M_0$ and $M_i,\: 1 \leq i \leq t_2$ also remain the same. We modify the matrix $H_i, \; 1 \leq i \leq t_1$ as follows: \[ H_i = \begin{bmatrix} \alpha_{1, 1}^{q^{h_2}} & \alpha_{1, 2}^{q^{h_2}} & \ldots & \alpha_{t_2, n_2}^{q^{h_2}} \end{bmatrix}, \] where $\{ \alpha_{s,j} \in \mathbb{F}_{q^{m_1}}, 1 \leq s \leq t_2, 1 \leq j \leq n_2 \}$ are chosen to be $(\delta+1)(h_2+1)$-wise independent over $\mathbb{F}_q$ based on Theorem \ref{thm:HLBCH}. \end{constr} \begin{thm} The code $\mathcal{C}$ given by Construction \ref{constr:h1} is a $[k, r_1, r_2, h_1=1, h_2, \delta]$ HL-MRC.
\end{thm} \begin{proof} We show that $H$ can be used to correct all erasure patterns defined in Definition \ref{defn:HLMRC}. By the definition, the code should recover from $\delta$ erasures per $B_{i,s}$, $h_2$ additional erasures per $A_i$ and $1$ more erasure anywhere in the entire code. Now, with $h_1=1$, the last erasure can fall in at most one group. Thus, effectively the code should recover from $h_2+1$ additional erasures in that group. Suppose that the last erasure is in the $i^{\text{th}}$ group. The sub-matrix of interest for the $(i,s)^{\text{th}}$ local group is \begin{equation*} \left [\begin{array}{c|c} M_0|_{\Delta_{i,s}} & M_0|_{\bar{\Delta}_{i,s}} \\ \hline \alpha_{s,\Delta_{i,s}} & \alpha_{s,\bar{\Delta}_{i,s}} \\ \alpha_{s,\Delta_{i,s}}^q & \alpha_{s,\bar{\Delta}_{i,s}}^q \\ \vdots & \vdots \\ \alpha_{s,\Delta_{i,s}}^{q^{h_2-1}} & \alpha_{s,\bar{\Delta}_{i,s}}^{q^{h_2-1}} \\ \hline \alpha_{s,\Delta_{i,s}}^{q^{h_2}} & \alpha_{s,\bar{\Delta}_{i,s}}^{q^{h_2}} \\ \end{array} \right]. \end{equation*} Following the proof of Theorem \ref{thm:necsuf} and performing row reduction on the $\delta$ erased coordinates, the resultant matrix is \[ \begin{bmatrix} \psi_{i, 1} & \psi_{i, 2} & \ldots & \psi_{i, r_1+h_2} \\ \psi_{i, 1}^{q} & \psi_{i, 2}^{q} & \ldots & \psi_{i, r_1+h_2}^{q} \\ \vdots & \vdots & \ldots & \vdots \\ \psi_{i, 1}^{q^{h_2-1}} & \psi_{i, 2}^{q^{h_2-1}} & \ldots & \psi_{i, r_1+h_2}^{q^{h_2-1}} \\ \psi_{i, 1}^{q^{h_2}} & \psi_{i, 2}^{q^{h_2}} & \ldots & \psi_{i, r_1+h_2}^{q^{h_2}} \\ \end{bmatrix}. \] Now, by Lemma \ref{lem:mds_linearized}, it is the generator matrix of an MDS code if and only if $\Psi_i$ is $(h_2+1)$-wise independent over $\mathbb{F}_q$, which holds by the argument of Lemma \ref{lem:hwise_ind} since the $\{ \alpha_{s,j} \}$ are chosen to be $(\delta+1)(h_2+1)$-wise independent over $\mathbb{F}_q$. \end{proof} \vspace{-3ex} \section*{Acknowledgment} This work was supported in part by the Early Career Research Award (ECR/2016/000954) from the Science and Engineering Research Board (SERB) to V. Lalitha.
\section{Introduction} \label{sec:intro} The sublime discovery of gravitational waves at advanced LIGO (aLIGO) \cite{aLIGO} is yet another striking confirmation of Einstein's theory of gravity. Due to the weakness of gravitational interactions and the fact that gravity couples to all particles that carry energy and momentum, gravitational waves (GW) are at the same time witness to and remnant of some of the most violent phenomena in our Universe, e.g. neutron-star inspirals, black-hole inspirals, pulsars or phase transitions. They herald intense dynamics, potentially from a distant past. In recent years, a strong effort was made to discover gravitational waves using ground-based experiments. After somewhat uneventful runs of, for example, LIGO \cite{Abramovici:1992ah}, Virgo \cite{Giazotto:1988gw}, or the European Pulsar Timing Array (EPTA) \cite{Ferdman:2010xq}, in 2015 aLIGO \cite{Harry:2010zz} started operations with increased sensitivity at gravitational wave frequencies of $10^0$-$10^3$ Hz and a reach well into the characteristic strain of supernovae, pulsars and binary inspirals. While aLIGO was primarily designed to detect gravitational waves from a multitude of astrophysical sources, it retains a remarkable sensitivity to new physics effects. Adding gravitational wave detection experiments as an additional arrow to the quiver of searches for new physics interactions will help to probe very weakly coupled sectors of new physics. With obvious shortcomings in our understanding of fundamental principles of nature still unresolved, e.g. the lack of a dark matter candidate or the observed matter/anti-matter asymmetry, and in the absence of evidence for new physics at collider experiments, so-called dark sectors become increasingly attractive as add-ons to the Standard Model. If uncharged under the Standard Model gauge group, dark sectors could even have a rich particle spectrum without leaving an observable imprint in measurements at particle colliders.
Hence, this could leave us in the strenuous situation where we might have to rely exclusively on very feeble, possibly only gravitational, interactions to infer their existence. For dark sectors to address the matter/anti-matter asymmetry via electroweak baryogenesis, usually a strong first-order phase transition is required\footnote{For an interesting recent mechanism to do baryogenesis with dark sector phase transitions see~\cite{Katz:2016adq}.}. It is well known that a first-order phase transition is accompanied by three mechanisms that can give rise to gravitational waves in the early universe \cite{Kosowsky:1991ua,Kamionkowski:1993fg,Grojean:2006bp,Huber:2008hg,Caprini:2009yp,Binetruy:2012ze,Hindmarsh:2015qta,Caprini:2015zlo}: collisions of expanding vacuum bubbles, sound waves, and magnetohydrodynamic turbulence of bubbles in the hot plasma. However, for previously studied models, e.g. the (N)MSSM \cite{Huber:2015znp}, strongly coupled dark sectors \cite{Schwaller:2015tja}, or the electroweak phase transition with the Higgs potential modified by a sextic term \cite{Huang:2016odd}, the resulting GW signals after red-shifting are expected to have frequencies some two or more orders of magnitude below the reach of aLIGO. On the other hand, if electroweak symmetry breaking is triggered in the dark sector at temperatures significantly above the electroweak scale, e.g. by radiatively generating a vev using the Coleman-Weinberg mechanism, the resulting GWs have frequencies within the aLIGO reach, i.e. 1-100 Hz.
However, we will explain that the overall amplitude of the signal is too small for aLIGO at present sensitivity, but it can be probed by the next generation of interferometers.\footnote{These future experiments also include the advanced LIGO/VIRGO detectors operating in years 2020+ at the projected final sensitivity \cite{TheLIGOScientific:2016wyq}, as was also pointed out very recently in \cite{Dev:2016feu}.} At the same time, already now, aLIGO can probe beyond-the-Standard-Model physics. We will investigate the consequences of topological defects, such as a domain wall passing through the interferometer. We will model this by introducing an effective photon mass that is non-vanishing on the domain wall and vanishes elsewhere.\footnote{This is not a gravitational effect, but effectively it looks like local ripples affecting the propagation of photons.} The signatures of passing domain walls can be well separated from black-hole mergers and motivate an extension of ongoing search strategies. In Sec.~\ref{sec:PT} we discuss the implementation of first-order phase transitions in dark sectors with radiative symmetry breaking. Sec.~\ref{sec:domain} is dedicated to the modelling and phenomenology of a domain wall interacting with aLIGO. We offer a summary in Sec.~\ref{sec:conclusion}. \section{First-order phase transition in a dark sector at high scales} \label{sec:PT} \subsection{Dark sector model at zero temperature} Let us consider a very simple minimal model of the hidden (or dark) sector consisting of a complex scalar $\Phi$ which is an SM singlet, i.e. it does not couple to any of the Standard Model gauge groups but is charged under the gauge group of the dark sector -- in the simplest case a U(1) gauge group. The SM Higgs doublet $H$ is coupled via the Higgs-portal interactions to the complex scalar \begin{equation} \Phi \,=\, \frac{1}{\sqrt{2}}(\phi + i \phi_2)\,.
\end{equation} In unitary gauge one is left with two real scalars, \begin{equation} H=\frac{1}{\sqrt{2}}(0,h)\, , \quad \Phi=\frac{1}{\sqrt{2}}\phi\,, \end{equation} and the tree-level scalar potential reads \begin{equation} V_0(h,\phi)=\frac{\lambda_{\phi}}{4}\phi^4+\frac{\lambda_H}{4}h^4-\frac{\lambda_{\rm P}}{4} h^2 \phi^2\,. \label{V0hphi} \end{equation} Note that we have assumed that the theory is scale-invariant at the classical level \cite{Coleman:1973jx}; as a result, no mass scales are present in the theory, and they can only be generated quantum mechanically, i.e. via radiative corrections. (Of course, one can also consider more general examples of hidden sectors, which are not classically scale-invariant and still have first-order phase transitions.) \begin{figure}[t] \includegraphics[width=0.43\textwidth]{VT0.pdf} \caption{\small The zero-temperature effective potential $V$ of the CW theory, Eq. \eqref{V1RR}, in units of $\frac{3}{64 \pi ^2} \, g_{\scriptscriptstyle \mathrm{D}}^4$.} \label{fig:VT0} \end{figure} In the minimal Standard Model classical scale invariance is broken by the Higgs mass parameter $\mu^{2}_{\scriptscriptstyle \mathrm{SM}}$. Scale invariance is easily restored by reinterpreting this scale in terms of the vev of $\phi$, coupled to the SM via the Higgs portal interaction, $-\,(\lambda_{\rm P}/4)h^2\phi^{2}$ in \eqref{V0hphi}. Now, as soon as an appropriate non-vanishing value for $\langle \phi\rangle\ll M_{\scriptscriptstyle \mathrm{UV}}$ is generated (as we will see momentarily), we get $\mu^2_{\scriptscriptstyle \mathrm{SM}} = \lambda_{\rm P}\langle|\phi|\rangle^2$, which triggers electroweak symmetry breaking. (For more detail on this see a recent discussion in \cite{Englert:2013gz,Khoze:2014xha} and references therein.) From now on we will concentrate on the dark sector alone and neglect the back reaction of the SM; these corrections can be straightforwardly included, but will not be essential to our discussion.
The zero-temperature 1-loop effective potential for $\phi$ reads \cite{Coleman:1973jx}, \begin{equation} V (\phi;\mu)\,=\, \frac{\lambda_\phi(\mu)}{4}\phi^4+ \frac{n g_{\scriptscriptstyle \mathrm{D}}(\mu)^4}{64 \pi ^2} \phi^4 \left(\log \left(\frac{\phi^2}{\mu^2}\right)-\frac{25}{6}\right) \,, \label{V1R} \end{equation} where $\mu$ is the RG scale, $g_{\scriptscriptstyle \mathrm{D}}$ is the U(1) dark sector gauge coupling, and the second term on the {\it r.h.s.} contains the 1-loop contributions arising from the hidden U(1) gauge boson $Z'$. In this case the factor of $n$ appearing on the {\it r.h.s.} of \eqref{V1R} is $n=3$. The vacuum of the effective potential above occurs at $\langle \phi \rangle \neq 0$. Minimising the potential \eqref{V1R} with respect to $\phi$ at $\mu=\langle \phi \rangle$ gives the characteristic Coleman-Weinberg-type $\lambda_\phi \propto g_{\scriptscriptstyle \mathrm{D}}^4$ relation between the scalar and the gauge couplings, \begin{equation} \lambda_\phi \,=\, \frac{11}{16\pi^2} \,g_{\scriptscriptstyle \mathrm{D}}^4 \qquad {\rm at} \quad \mu=\langle \phi\rangle \equiv w\,. \label{eq:cwmsbar} \end{equation} From now on we will refer to the non-vanishing vev of $\phi$ in the zero-temperature theory as $w$. With this matching condition at $\mu=w$, the zero-temperature effective potential \eqref{V1R} for the U(1) CW theory takes the form \begin{equation} V (\phi)\,=\, \frac{n}{64 \pi ^2} \, g_{\scriptscriptstyle \mathrm{D}}^4\, \phi^4 \left(-\frac{1}{2}+\log \left(\frac{\phi^2}{w^2}\right)\right). \label{V1RR} \end{equation} It is plotted in Fig.~\ref{fig:VT0}, which shows the existence of a single vacuum at $\phi=w$ generated via radiative corrections.
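As a quick numerical sanity check of the potential \eqref{V1RR}, the following self-contained Python sketch (the coupling $g_{\scriptscriptstyle \mathrm{D}}=0.8$ is an illustrative choice and $w=1$ fixes the units) verifies by finite differences that $\phi=w$ is a stationary point and that the curvature there reproduces the scalar mass formula $m_\phi^2 = n g_{\scriptscriptstyle \mathrm{D}}^4 w^2/(8\pi^2)$ of Eq.~\eqref{eq:mphiZ} below.

```python
import math

# Illustrative parameters: n = 3 hidden U(1) degrees of freedom, g_D = 0.8,
# and w = 1 fixes the units. These are not fits, just a consistency check.
n, gD, w = 3, 0.8, 1.0

def V(phi):
    # zero-temperature CW effective potential, Eq. (V1RR)
    c = n * gD**4 / (64.0 * math.pi**2)
    return c * phi**4 * (-0.5 + math.log(phi**2 / w**2))

eps = 1e-5
Vp = (V(w + eps) - V(w - eps)) / (2.0 * eps)            # V'(w), central difference
Vpp = (V(w + eps) - 2.0 * V(w) + V(w - eps)) / eps**2   # V''(w)

m_phi_sq = n * gD**4 * w**2 / (8.0 * math.pi**2)        # Eq. (eq:mphiZ)
print(Vp, Vpp, m_phi_sq)
```

The first derivative vanishes at $\phi=w$ and the numerical curvature matches the analytic mass squared, confirming the matching condition \eqref{eq:cwmsbar}.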
The physical mass of the CW scalar is found by expanding \eqref{V1RR} around $\phi \to w +\phi$, \begin{equation} m_\phi^2\,=\, \frac{ng_{\scriptscriptstyle \mathrm{D}}^4}{8\pi^2}w^2\,, \label{eq:mphiZ} \end{equation} and the mass of the $Z'$ vector boson is $M_{Z'} = \frac{1}{2}g_{\scriptscriptstyle \mathrm{D}} w \gg m_\phi$. The above formulae are easily generalised to non-Abelian CW gauge groups. For example, in a classically scale-invariant SU(2) gauge theory with the scalar field in the adjoint representation, considered e.g. in \cite{Khoze:2014woa}, one simply sets $n=6$ and hence \begin{equation} V (\phi)\,=\, \frac{6}{64 \pi ^2} \, g_{\scriptscriptstyle \mathrm{D}}^4\, \phi^4 \left(-\frac{1}{2}+\log \left(\frac{\phi^2}{w^2}\right)\right) \,. \label{V2RR} \end{equation} The only difference between \eqref{V1RR} and \eqref{V2RR} is that in the SU(2) case there are two $W'$ bosons contributing to the loops, hence a total of 6 degrees of freedom compared to 3 on the {\it r.h.s.} of \eqref{V1RR}. In the rest of this section we will concentrate on the SU(2) case with the adjoint scalar, i.e. $n=6$. One can also easily switch to the U(1) theory conventions; other examples of CW hidden sectors, such as the SU(2) with the scalar in the fundamental representation and the U(1)$_{B-L}$ classically scale-invariant extensions of the Standard Model, were considered in \cite{Khoze:2014xha}. \subsection{Thermal effects} \label{sec:thermal} \begin{figure}[t] \includegraphics[width=0.43\textwidth]{VT1.pdf} \caption{\small Thermal effective potential $\hat{V}(\gamma,\Theta)$ of the dark sector in Eq. \eqref{eq:VT1} as a function of $\gamma=\phi/w$, plotted for different temperatures $\Theta=$ 0.40, 0.35, 0.31, 0.25, 0.20 and 0 (from top to bottom).
We have shifted $\hat{V}(\gamma,\Theta)$ by a constant so that the effective potential at the origin is zero for all values of $\Theta$.} \label{fig:VT1} \end{figure} \begin{figure}[t] \includegraphics[width=0.43\textwidth]{VT2.pdf} \caption{\small Thermal effective potential $\hat{V}(\gamma,\Theta)$ as in Fig.~\ref{fig:VT1}, now zooming in on the values around the critical temperature, $\Theta=$ 0.315, 0.312, 0.309 (from top to bottom). } \label{fig:VT2} \end{figure} The effective potential at finite temperature along the $\phi$ direction is given by the zero-temperature effective potential \eqref{V2RR} plus the purely thermal correction $\Delta V_{T}$ which vanishes at $T=0$, \begin{equation} V_{T}(\phi)=\, V(\phi)+ \Delta V_{T}(\phi)\,. \label{Vth0} \end{equation} The second term is computed at one loop in perturbation theory and is given by the well-known expression~\cite{Dolan:1973qd}: \begin{equation} \Delta V_{T}(\phi)\,=\, \frac{T^{4}}{2\pi^{2}}\sum_{i}\pm n_{i}\int_{0}^{\infty}\mbox{d}q\, q^{2}\log\left(1\mp\exp(-\sqrt{q^{2}+m_{i}^{2}(\phi)/T^{2}})\right). \label{Vth1} \end{equation} The $n_i$ denote the numbers of degrees of freedom present in the theory; the upper signs are for bosons and the lower ones for fermions. The $\phi$-dependent masses of these degrees of freedom are denoted as $m_{i}(\phi)$. In our case there are $n=6$ degrees of freedom corresponding to $W'_{\pm}$ vector bosons of mass $m(\phi)=g_{\scriptscriptstyle \mathrm{D}}\phi$. In terms of the rescaled dimensionless variables, \begin{equation} \gamma \,=\, \phi/w \,, \quad \Theta \,=\, T/(g_{\scriptscriptstyle \mathrm{D}} w)\,, \end{equation} we have, \begin{eqnarray} \hat{V}(\gamma,\Theta):= \frac{V_{T}(\phi)}{g_{\scriptscriptstyle \mathrm{D}}^4 w^4} = \frac{3}{32 \pi ^2} \, \gamma^4 \left(-\frac{1}{2}+\log \left(\gamma^2\right)\right) \nonumber\\ + \frac{6\Theta^4}{2\pi^{2}}\int_{0}^{\infty}\mbox{d}q\, q^{2}\log\left(1-\exp(-\sqrt{q^{2}+\gamma^2/\Theta^{2}})\right).
\label{eq:VT1} \end{eqnarray} We plot this thermal effective potential in Figs.~\ref{fig:VT1} and \ref{fig:VT2} as a function of the rescaled scalar field $\gamma \,=\, \phi/w$ for a sequence of temperature values. It is easy to see from these figures that there is a barrier separating the two vacua and thus the phase transition is of the first order. The value of the critical temperature where both minima are degenerate and the position of the second minimum are determined numerically to be at\footnote{Note that unlike in the more familiar SM Higgs effective potential applications, neither the high-temperature nor the low-temperature approximations for evaluating the $T$-dependence are applicable here.} \begin{equation} \Theta_c = \frac{T_c}{g_{\scriptscriptstyle \mathrm{D}}w} \simeq 0.312\, , \quad \gamma_c= \frac{\phi_c}{w} \simeq 0.95\,, \end{equation} so that the order parameter $\phi_c/T_c \simeq 3.04/g_{\scriptscriptstyle \mathrm{D}} > 1$, ensuring that a {\em first order} phase transition indeed takes place in our weakly coupled model of a dark sector. This fact is a characteristic feature of Coleman-Weinberg models, where the mass parameter at the origin is set to zero as a consequence of classical scale invariance. \subsection{Phase transition} \label{sec:bubbles} Among the key parameters for the calculation of the gravitational wave spectrum are the rate of variation of the bubble nucleation rate $\beta$ and the amount of vacuum energy $\rho_{\rm vac}$ released during the phase transition. Specifically, following \cite{Grojean:2006bp} we are interested in the dimensionless quantities $\beta/H_*$ and $\alpha$ defined below in Eqs.~\eqref{eq:beta} and \eqref{eq:alphadef}. The thin wall approximation \cite{Coleman:1977py,Anderson:1991zb} allows for an analytical computation (or estimate) of the parameters characterising the phase transition, and we will consider it first in Sec.~\ref{sec:thin}.
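Before turning to the approximations, the critical values quoted in Sec.~\ref{sec:thermal} can be reproduced by evaluating \eqref{eq:VT1} directly; a numerical sketch (assuming scipy is available; the integration cutoff is ours):

```python
import numpy as np
from scipy.integrate import quad

def Vhat(gamma, Theta):
    """Rescaled thermal effective potential of Eq. (VT1) with n = 6."""
    tree = 3.0/(32*np.pi**2)*gamma**4*(-0.5 + np.log(gamma**2)) if gamma > 0 else 0.0
    integrand = lambda q: q**2*np.log(1.0 - np.exp(-np.sqrt(q**2 + (gamma/Theta)**2)))
    thermal, _ = quad(integrand, 0.0, 50.0)  # integrand decays exponentially
    return tree + 6.0*Theta**4/(2*np.pi**2)*thermal

# Near the quoted critical point the two minima should be degenerate,
# with a barrier in between:
Tc, gc = 0.312, 0.95
print(Vhat(gc, Tc) - Vhat(0.0, Tc))   # close to zero (degenerate vacua)
print(Vhat(0.5, Tc) - Vhat(0.0, Tc))  # positive (the barrier)
```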
In our model the thin wall approximation, however, will be seen to break down already at moderately small values of the coupling $g_{\scriptscriptstyle \mathrm{D}}\lesssim 1$. Therefore we will also consider in Sec.~\ref{sec:triangle} a different approximation of the effective potential by a triangular shape. The probability of bubble formation is proportional to $\exp[-S_4(\phi_{\rm cl})]$ where $S_4$ is the 4-dimensional Euclidean action corresponding to the tunnelling trajectory and $\phi_{\rm cl}$ is the spherical bubble solution \cite{Kobzarev:1974cp,Coleman:1977py}. The all-important effects of thermal corrections are taken into account by replacing $S_4$ with the 3-dimensional effective action so that the probability of tunnelling from a vacuum at the origin $\phi=0$ to the true vacuum $\phi_+$ per unit time per unit volume is \begin{equation} P = A(T) \exp\left[-S_3(\phi_{\rm cl})/T\right]\, \sim T^4 \exp\left[-S_3(\phi_{\rm cl})/T\right]\,. \end{equation} Employing spherical symmetry, the 3D action is \begin{equation} S_3 \,=\, 4\pi \int_0^\infty r^2 dr \left(\frac{1}{2}\left(\frac{d\phi}{dr}\right)^2 + V_T(\phi)\right)\,, \end{equation} so that the bubble $\phi_{\rm cl}(r)$ configuration is the solution of \begin{equation} \frac{d^2 \phi_{\rm cl}}{dr^2} +\frac{2}{r}\frac{d\phi_{\rm cl}}{dr} \,=\, V_T'(\phi_{\rm cl})\,, \label{eq:bubble} \end{equation} with the boundary conditions $\phi_{\rm cl}(\infty)=0$, $d_r\phi_{\rm cl}(0)=0$. In the formulae above $V_T$ is the temperature-dependent effective potential \eqref{Vth0}. After the universe cools down to a temperature below $T_c$ the vacuum at the origin becomes meta-stable, and the bubbles of true vacuum $\phi_+$ can start appearing. The phase transition occurs when the temperature $T_*$ is reached where the nucleation rate of the bubbles $P \sim 1$. This occurs when $S_3/T_* \sim 100$. 
If this regime can be reached at temperatures just below the critical temperature $T_c$ we would have an $\epsilon$-deviation from the degenerate vacua. This is depicted by the lowest curve in Fig.~\ref{fig:VT2}. Here the parameter $\epsilon$ is the split in the energy density between the two vacua, \begin{equation} \epsilon \,=\, \frac{1}{g_{\scriptscriptstyle \mathrm{D}}^4w^4}\, (V_{T}(0) - V_{T}(\phi_+)) \,. \label{eq:epsdef} \end{equation} For small $\epsilon$ it is tempting to employ the thin-wall approximation~\cite{Coleman:1977py,Anderson:1991zb}. To get a first impression of the results this is what we will do in the following. However, we stress here that the smallness of $\epsilon$ is not sufficient for the thin wall approximation to be valid. Indeed, the potential barrier as seen from the false vacuum must be large compared to the difference in energy between the true and false vacuum, and this will turn out not to be the case in our model at weak coupling. Hence we will supplement the thin wall approximation below with a more appropriate treatment in Section~\ref{sec:triangle}. \subsection{Thin-wall approximation} \label{sec:thin} The action in the thin-wall regime is given by the sum of the volume and the surface terms: \begin{equation} S_3\,=\, 4\pi \int_0^R r^2 dr\, V_T(\phi_+)+4\pi R^2\int_0^{\phi_+}\sqrt{2 V_T(\phi)}\, d\phi\,, \label{eq:thin} \end{equation} where $R$ is the bubble radius and the bubble interpolates between the true vacuum $\phi_+$ for $r<R$ and the false $\phi=0$ vacuum at $r>R$. The bubble wall, $R\pm \delta r$, is thin, $\delta r \ll R$, for $\epsilon \ll 1$. The value of the radius $R$ of the bubble is then found by extremising the action $S_3$ with respect to $R$.
For the volume contribution (first term on the {\it r.h.s.} of \eqref{eq:thin}) we have \begin{equation} - \, \epsilon g_{\scriptscriptstyle \mathrm{D}}^4 w^4 \, \frac{4 \pi}{3} R^3\,, \end{equation} while the surface-tension term gives \begin{equation} 4 \pi R^2 g_{\scriptscriptstyle \mathrm{D}}^2 w^3 \int_0^{\gamma_+}\sqrt{2 V_T(\gamma,\Theta_c)}\, d\gamma \simeq 4 \pi R^2 g_{\scriptscriptstyle \mathrm{D}}^2 w^3 \times 0.0338\,, \end{equation} with the integral having been evaluated numerically. The bubble radius is found by extremising the action, \begin{equation} R\,=\, \frac{2\times 0.0338}{g_{\scriptscriptstyle \mathrm{D}}^2 w} \, \frac{1}{\epsilon}\,, \end{equation} and for the action we have, \begin{equation} S_3\,=\, \frac{16\pi}{3}\, \frac{(0.0338)^3}{g_{\scriptscriptstyle \mathrm{D}}^2 } \, \frac{w}{\epsilon^2}\,. \end{equation} The phase transition completes when \begin{equation} \frac{S_3}{T_*}\,\simeq\, \frac{S_3}{T_c}\,=\, \frac{16\pi}{3}\, \frac{1}{0.312}\, \left(\frac{0.0338}{g_{\scriptscriptstyle \mathrm{D}}}\right)^3 \, \frac{1}{\epsilon^2} \sim 100\,. \label{eq:S3est} \end{equation} This implies, \begin{equation} \epsilon \simeq \frac{1}{g_{\scriptscriptstyle \mathrm{D}}^{3/2}} \, 0.00455\,. \label{eq:epsest} \end{equation} \medskip We can now compute the $\beta$-parameter characterising the phase transition and in particular the strength of the gravitational wave signal (as we will recall in the next section), \begin{equation} \frac{\beta}{H_*}\, =\, T\frac{d}{dT}\left(\frac{S_3}{T}\right)_{T=T_*}. \label{eq:beta} \end{equation} Here $T_*$ is the temperature at which the probability of nucleating one bubble per horizon volume per unit time is $\sim 1$ (in our case of the thin-wall regime it is just below $T_c$) and $H_*$ is the Hubble constant at that time. A strong gravitational wave signal requires a small $\beta/H_{*}$ so this is the regime we are most interested in. 
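The thin-wall chain of estimates above is elementary to check numerically; a sketch (the value $0.0338$ of the surface-tension integral is taken from the text):

```python
import numpy as np

S1 = 0.0338      # surface-tension integral at Theta_c, evaluated in the text
Theta_c = 0.312  # critical temperature in units of g_D * w

def eps_star(gD, S3_over_T=100.0):
    """Solve S3/T_c = 100 for epsilon in the thin-wall regime, Eq. (S3est)."""
    # S3/T_c = (16*pi/3) * (1/Theta_c) * (S1/gD)**3 / eps**2
    return np.sqrt(16*np.pi/3/Theta_c*(S1/gD)**3/S3_over_T)

print(eps_star(1.0))  # ~0.0046, reproducing Eq. (epsest)
```

By construction $\epsilon_* \propto g_{\scriptscriptstyle \mathrm{D}}^{-3/2}$, which is the scaling used in the estimates that follow.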
We have computed numerically the dependence of $\epsilon$ on $T$, which is plotted in Fig.~\ref{fig:eps}. \begin{figure}[t] \includegraphics[width=0.43\textwidth]{eps.pdf} \caption{\small $\epsilon$ as a function of the nucleation temperature $T_*$ for $T_* \le T_c$. } \label{fig:eps} \end{figure} This is very well described by a numerical fit, \begin{eqnarray*} \epsilon (\Theta_*) \simeq\, -0.0496 (\Theta_*-0.312) - 0.1424 (\Theta_*-0.312)^2 \end{eqnarray*} where $0.312$ is our value for the critical temperature $\Theta_c$. Now using the expression for the action \eqref{eq:S3est}, the bound $S_3/T_* \simeq 100$ and the fit for $\epsilon (\Theta_*)$ above, we find: \begin{equation} \frac{\beta}{H_*}\,=\, \frac{S_3}{T_*}\, \frac{(-2)}{\epsilon} \left(\Theta_* \frac{d \epsilon}{d \Theta_*}\right)_{\Theta_{\rm c}}\, \simeq\, \frac{3.1}{\epsilon} \,\simeq\, 680\, \, {g_{\scriptscriptstyle \mathrm{D}}^{3/2}}\,, \label{eq:betaest} \end{equation} where in the final expression we have used Eq.~\eqref{eq:epsest}. \bigskip Finally we need to determine the second key parameter affecting the gravitational wave spectrum -- the ratio of the vacuum energy density released in the phase transition to the energy density of the radiation bath, \begin{equation} \alpha\,=\, \frac{\rho_{\rm vac}}{\rho_{\rm rad}^*}\,. \label{eq:alphadef} \end{equation} Here $\rho_{\rm rad}^* = g_* \pi^2 T_*^4/30$ and $g_*$ is the number of relativistic degrees of freedom in the plasma at $T_*$. The vacuum energy, on the other hand, is easy to estimate again in the thin wall approximation as \begin{equation} \rho_{\rm vac} \,=\, g_{\scriptscriptstyle \mathrm{D}}^4 w^4 \, \epsilon \,\simeq\, 0.00455 \, g_{\scriptscriptstyle \mathrm{D}}^{5/2} w^4\,.
\end{equation} Then we have \begin{equation} \alpha\,=\, \frac{1}{g_*\,g_{\scriptscriptstyle \mathrm{D}}^{3/2}}\, \frac{0.137}{\pi^2}\, \frac{1}{\Theta_*^4}\simeq \, \frac{1.46}{g_*\, g_{\scriptscriptstyle \mathrm{D}}^{3/2}} \,, \label{eq:alphaest} \end{equation} where we have used $\Theta_* \simeq \Theta_c \simeq 0.312$. \medskip As already mentioned above, to safely apply the thin-wall approximation we need not only $\epsilon\ll 1$ but also $\delta \ll 1$, where we have defined, \begin{eqnarray} \delta& = &\frac{V_{T}(0)-V_{T}(\phi_+)}{V_{T}(\phi_{max})-V_{T}(0)} \\\nonumber &=&\frac{g_{\scriptscriptstyle \mathrm{D}}^4w^4}{V_{T}(\phi_{max})-V_{T}(0)}\epsilon \\\nonumber &=&\frac{1}{\hat{V}(\gamma_{max},\Theta)}\epsilon, \end{eqnarray} and $\phi_{max}=w\gamma_{max}$ is the maximum of the barrier. As all terms in the rescaled potential are dimensionless and arise at 1-loop, we generically expect, \begin{equation} \hat{V}(\gamma_{max},\Theta)\sim \frac{1}{16\pi^2}. \end{equation} This therefore implies, \begin{equation} \delta\sim 16\pi^2\epsilon\sim 16\pi^2\frac{0.00455}{g_{\scriptscriptstyle \mathrm{D}}^{3/2}}. \end{equation} This becomes of order one already for $g_{\scriptscriptstyle \mathrm{D}}\simeq 0.8$, so the thin wall approximation is problematic in the weak-coupling regime $g_{\scriptscriptstyle \mathrm{D}}\lesssim 1$. \medskip \subsection{Beyond the thin-wall approximation} \label{sec:triangle} To understand what happens at smaller values of the coupling we adapt the tunnelling approximation of Ref.~\cite{Duncan:1992ai} to the case of our three-dimensional thermal bubbles. In~\cite{Duncan:1992ai} the authors approximate the potential by a triangle for which the tunnelling solutions can be found analytically. We will follow this approach to describe the case of broad and low-height barriers we are interested in.
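The numerical coefficients in \eqref{eq:alphaest} and in the breakdown estimate for $\delta$ follow from simple arithmetic; a sketch:

```python
import numpy as np

eps_coeff = 0.00455  # thin-wall result: eps = eps_coeff / gD**(3/2), Eq. (epsest)
Theta_c = 0.312

# alpha = 30*eps / (g_* pi^2 Theta_*^4); factoring out 1/(g_* gD^{3/2}) leaves:
alpha_coeff = 30*eps_coeff/(np.pi**2*Theta_c**4)
print(alpha_coeff)   # ~1.46, the coefficient in Eq. (alphaest)

# delta ~ 16 pi^2 eps reaches unity at the coupling
gD_breakdown = (16*np.pi**2*eps_coeff)**(2.0/3.0)
print(gD_breakdown)  # ~0.8, where the thin-wall approximation fails
```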
The triangle potential can be characterised by the slope on the left and right hand side of the peak of the triangle, $\lambda_{p}$ and $\lambda_{m}$, as well as the distance between the false vacuum and the top of the potential, $\Delta\phi_{p}$ and the distance from the top to the true vacuum $\Delta\phi_{m}$. For convenience, as in~\cite{Duncan:1992ai}, we introduce the abbreviations, \begin{equation} c=\frac{\lambda_{p}}{\lambda_{m}},\qquad a=(1+c)^{1/3},\qquad \kappa=\frac{\lambda_{p}}{(\Delta\phi_{p})^3}. \end{equation} \begin{figure*}[t] \includegraphics[width=0.4\textwidth]{betah.pdf} \hspace{1.0cm} \includegraphics[width=0.4\textwidth]{alpha.pdf} \caption{Numerical values for $\beta/H_{*}$ (left) and $\alpha$ (right) for values $g_{\scriptscriptstyle \mathrm{D}}\geq 0.1$ in the triangle approximation (blue lines). In the right panel the green line indicates the value of $\alpha_{\infty}$ according to Eq.~\eqref{alphastar} and the golden line indicates $\alpha=1$.} \label{betah} \end{figure*} The strategy to solve the equation of motion~\eqref{eq:bubble} is as follows. One can easily find solutions to the equations of motion on the right and left hand side of the triangle. On the right hand side one needs to implement the boundary condition $\phi'(0)=0$. There are two regimes for the field value at $0$. Either the field reaches the true minimum or it does not. The latter happens if $\Delta\phi_{m}$ is sufficiently large. This is what happens for our potential and we will only consider this case in the following. Importantly in this situation there is no dependence on $\Delta\phi_{m}$. On the left side the field will reach $\phi(R)=0$. Since the potential is linear, $R$ will be finite and therefore we also have $\phi'(R)=0$. Finally one can match the two solutions continuously at the top of the triangle. 
After some algebra the result for the 3-dimensional action of the bubble can be written in a relatively compact form as, \begin{equation} \label{s3} S_{3}=\frac{16\sqrt{6} a^3\pi\Delta\phi_{p}}{5\left[(1-a)^2(1+2a)\right]^{2/3}\sqrt{\kappa}}. \end{equation} Decreasing the coupling $g_{\scriptscriptstyle \mathrm{D}}$, the temperature at which bubbles form also decreases. As one can infer from Fig.~\ref{fig:VT1}, for smaller temperatures the ratio of the slopes $\lambda_{m}/\lambda_{p}$ goes towards larger values. It therefore makes sense to approximate Eq.~\eqref{s3} for this case as, \begin{equation} \label{s3approx} S_{3}=\frac{8\sqrt{3}\pi\Delta\phi_{p}}{5\sqrt{c}\sqrt{\kappa}}=\frac{8\sqrt{3}\pi\Delta\phi_{p}^{5/2}}{5\sqrt{\lambda_{m}}}. \end{equation} For small temperatures we have checked that to a reasonable approximation the expressions, \begin{equation} \Delta\phi_{p}\sim x\Theta w\sim xT/g_{\scriptscriptstyle \mathrm{D}},\qquad \lambda_{m}\sim \frac{3}{64\pi^2}g_{\scriptscriptstyle \mathrm{D}}^4w^3, \end{equation} can be used with \begin{equation} x\sim 0.5-1.2. \end{equation} Inserting these formulae into Eq.~\eqref{s3approx} we find, \begin{equation} \label{ss3} \frac{S_{3}}{T}=\frac{64\pi^2}{5g_{\scriptscriptstyle \mathrm{D}}^{9/2}} \frac{T^{3/2}}{w^{3/2}}x^{5/2}. \end{equation} For the $\beta$ parameter we therefore have, \begin{equation} \frac{\beta}{H_{*}}=T\frac{d}{dT}\left(\frac{S_{3}}{T}\right)\bigg|_{T=T_{*}}=\frac{3}{2}\frac{S_{3}}{T}\bigg|_{T=T_{*}}. \end{equation} Since $S_3/T_{*}$ is essentially fixed at $\sim 100$, the same holds for $\beta/H_{*}$ in our model. Accordingly we cannot decrease it significantly below this value. \medskip To complete our estimate we now also need to determine the $\alpha$ parameter in \eqref{eq:alphadef}.
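The step from the temperature scaling in \eqref{ss3} to $\beta/H_{*}=\tfrac{3}{2}\,S_{3}/T_{*}$ is a one-line derivative; a symbolic sketch:

```python
import sympy as sp

T, A = sp.symbols('T A', positive=True)

# S3/T grows as T^(3/2) with a T-independent prefactor A, as in Eq. (ss3)
S3_over_T = A*T**sp.Rational(3, 2)

# beta/H_* = T d/dT (S3/T)
beta_over_H = sp.simplify(T*sp.diff(S3_over_T, T))
print(beta_over_H/S3_over_T)  # 3/2
```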
For small temperatures the difference in vacuum energy is simply given by the difference at zero temperature, \begin{equation} \rho_{\rm vac}=\frac{3}{64\pi^2}g_{\scriptscriptstyle \mathrm{D}}^4w^4. \end{equation} Using Eq.~\eqref{ss3} we have for the temperature, \begin{equation} T_{*}\sim 0.1 g_{\scriptscriptstyle \mathrm{D}}^3\left(\frac{S_{3}}{T_{*}}\right)^{2/3}w. \end{equation} This gives, \begin{equation} \alpha=\frac{3g_{\scriptscriptstyle \mathrm{D}}^4 w^4}{64\pi^2}\frac{30}{g_{*}\pi^2 T^{4}_{*}}\sim \frac{60}{g_{*}g_{\scriptscriptstyle \mathrm{D}}^8}\left(\frac{S_{3}}{T_{*}}\right)^{-8/3}\sim \frac{0.0003}{g_{*}g_{\scriptscriptstyle \mathrm{D}}^8}. \end{equation} We stress that this is a rather crude estimate, valid only for $g_{\scriptscriptstyle \mathrm{D}}\ll 0.1$. However, there are two messages we can take from this calculation. The first is that with decreasing $g_{\scriptscriptstyle \mathrm{D}}$ the transition temperature $T_{*}$ drops dramatically. The second is that, in line with this, the $\alpha$ parameter rapidly increases. \bigskip Finally, for larger values of $g_{\scriptscriptstyle \mathrm{D}}\geq 0.1$, we have computed the phase transition parameters $\beta/H_{*}$ and $\alpha$ numerically, still using the triangle approximation. Their values are plotted in Fig.~\ref{betah}. We note that for values below $g_{\scriptscriptstyle \mathrm{D}}\sim 0.6$ the parameter $\alpha \gtrsim 1$, and the amount of energy in the surrounding plasma is lower than the field energy released in the phase transition. This is important for the gravitational wave signal, as we will briefly discuss below.
\subsection{Gravitational wave signal} \begin{figure*}[t] \includegraphics[width=0.7\textwidth]{gravreach_triang.pdf} \caption{Reach of gravitational wave detectors: We show aLIGO together with the fifth phase of aLIGO (both solid black), and the proposed detectors BBO, DECIGO, ET and eLISA [dashed black] (the sensitivities are taken from the gravitational wave plotter http://rhcole.com/apps/GWplotter/~\cite{Moore:2014lga}). For the curves of the CW phase transition -- going from left to right -- we choose $v_w=1$ throughout, and respectively $(\kappa=1.0, g_D=0.6, T_*=100~\mathrm{GeV})$ [in red], $(\kappa=1.0, g_D=0.6, T_*=10~\mathrm{TeV})$ [in green] and $(\kappa=1.0, g_D=0.6, T_*=500~\mathrm{TeV})$ [in blue].} \label{gravreach} \end{figure*} \begin{figure*} \includegraphics[width=0.7\textwidth]{gravreach_triang_sound.pdf} \caption{Reach of gravitational wave detectors for a more conservative scenario $\kappa_{sw}=0.4$ (all other parameters as in Fig.~\ref{gravreach}).} \label{gravreach2} \end{figure*} As was already discussed and studied in the literature \cite{Kosowsky:1991ua,Kamionkowski:1993fg,Grojean:2006bp,Huber:2008hg,Caprini:2009yp,Binetruy:2012ze,Hindmarsh:2015qta,Caprini:2015zlo}, there are three types of processes during and following the first order phase transition involved in the production of gravitational waves: (1) collisions of bubble walls $h^2 \Omega_\mathrm{c}$, (2) sound waves in the plasma $h^2 \Omega_\mathrm{sw}$, and (3) magnetohydrodynamic (MHD) turbulence following bubble collisions $h^2 \Omega_\mathrm{mhd}$. We assume they contribute to the stochastic GW background approximately linearly, i.e.
\begin{equation} \label{eq:sum} h^2 \Omega_\mathrm{GW} \simeq h^2 \Omega_\mathrm{c} + h^2 \Omega_\mathrm{sw} + h^2 \Omega_\mathrm{mhd}, \end{equation} where the three contributions to the signal are given by \cite{Caprini:2015zlo}: \begin{widetext} \begin{equation} h^2 \Omega_\mathrm{c} = 1.67 \times 10^{-5} \left ( \frac{H_*}{\beta} \right )^2 \left ( \frac{\kappa_\mathrm{c} \alpha}{1+\alpha} \right )^2 \left ( \frac{100}{g_*} \right )^{\frac{1}{3}} \left ( \frac{0.11 v_w^3}{0.42 + v_w^2} \right ) \frac{3.8 (f/f_{\mathrm{env}})^{2.8}}{1+2.8 (f/f_\mathrm{env})^{3.8}}, \end{equation} \begin{equation} h^2 \Omega_\mathrm{sw} = 2.65 \times 10^{-6} \left ( \frac{H_*} {\beta} \right ) \left ( \frac{\kappa_\mathrm{sw} \alpha}{1+\alpha} \right )^2 \left ( \frac{100}{g_*} \right )^{\frac{1}{3}} v_w \, \left( \frac{f}{f_{\mathrm{sw}}} \right)^3 \left ( \frac{7}{4+3(f/f_\mathrm{sw})^2} \right )^{7/2} \end{equation} and \begin{equation} h^2 \Omega_\mathrm{mhd} = 3.35 \times 10^{-4} \left ( \frac{H_*} {\beta} \right ) \left ( \frac{\kappa_\mathrm{mhd} \alpha}{1+\alpha} \right )^{\frac{3}{2}} \left ( \frac{100}{g_*} \right )^{\frac{1}{3}} v_w\, \frac{(f/f_{\mathrm{mhd}})^3}{\left [ 1 + (f/f_{\mathrm{mhd}}) \right ]^{\frac{11}{3}} (1+8\pi f / h_*)}. \end{equation} For the peak frequencies and the red-shifted Hubble rate $h_*$ for the three processes above we use, respectively, \begin{equation} f_{\mathrm{env}} = 16.5 \times 10^{-6}~\mathrm{Hz} \left ( \frac{0.62}{1.8-0.1 v_w + v_w^2} \right ) \left ( \frac{\beta}{H_*} \right ) \left ( \frac{T_*}{100~\mathrm{GeV}} \right ) \left ( \frac{g_*}{100} \right )^{\frac{1}{6}}, \end{equation} \begin{equation} f_{\mathrm{sw}} = 1.9 \times 10^{-5}~\mathrm{Hz} \left ( \frac{1}{v_w} \right ) \left ( \frac{\beta}{H_*} \right ) \left ( \frac{T_*}{100~\mathrm{GeV}} \right ) \left ( \frac{g_*}{100} \right )^{\frac{1}{6}}, \end{equation} \begin{equation} f_{\mathrm{mhd}} = 1.42~f_{\mathrm{sw}}.
\end{equation} \end{widetext} These expressions depend on the set of key parameters associated with the phase transition: the rate of the phase transition $\beta / H_*$, the energy ratio $\alpha$, together with the latent heat fractions $0<\kappa<1$ for each of the three processes and the bubble wall velocity $v_w$. The bubbles are supersonic for $1/\sqrt{3} < v_w \le 1$, and subsonic for $v_w \lesssim 1/\sqrt{3}$. As discussed in Ref.~\cite{Caprini:2015zlo} there are three regimes for the bubbles: non-runaway bubbles, runaway bubbles in thermal plasma, and runaway bubbles in the vacuum. In the non-runaway regime the bubble wall reaches a terminal velocity $v_w<1$. Such non-runaway bubbles occur for $\alpha<\alpha_{\infty}$, with \begin{equation} \label{alphastar} \alpha_{\infty}\approx\frac{30}{24\pi^2}\frac{\sum_{a} c_{a}\Delta m_{a}^2}{g_{*}T_{*}^2}, \end{equation} where $c_{a}$ counts the degrees of freedom ($c_a=1$ for bosons and $c_a=1/2$ for fermions) and $\Delta m_{a}$ is the change in the mass of the particles during the phase transition. In this case only the first two mechanisms of gravitational wave production contribute; the MHD contribution is absent. For $\alpha\gtrsim \alpha_{\infty}$ it is possible for bubbles to accelerate without bound (the runaway bubbles) and there is no terminal velocity. In this case all three mechanisms contribute to Eq.~\eqref{eq:sum}. Finally, for even larger $\alpha\gg 1$ one is in a situation where the phase transition occurs essentially in vacuum. These are runaway bubbles in the vacuum and only the bubble-wall collision process contributes to the gravitational wave signal. We find that the signal in general tends to increase with $\alpha$ and that the sound wave contribution tends to be largest in our model of the dark sector. We therefore focus on the case $\alpha\sim \alpha_{\infty}\lesssim 1$\footnote{Here we note some caveats.
It is difficult to pinpoint exactly where the transition between the runaway in the plasma and that in the vacuum occurs. Also, the expressions for $h^2 \Omega$ from~\cite{Caprini:2015zlo} which we use have only been tested in the $\alpha\lesssim 0.1$ regime. Our estimates for the signal at $\alpha \sim 0.5$ may therefore be on the optimistic side.}. For the sound waves the efficiency fraction $\kappa_{sw}$ (for $v_{w}\sim 1$) is given by~\cite{Caprini:2015zlo} \begin{equation} \kappa_{sw}\approx\frac{\alpha}{0.73+0.083\sqrt{\alpha}+\alpha}. \end{equation} For an example value $\alpha\sim\alpha_{\infty}=0.5$ this is $\sim 0.4$. Close to the runaway case the bubble-collision contribution is negligible, and the MHD contribution is typically small, too, $\kappa_{mhd}\sim (0.05-0.1)\kappa_{sw}$ (cf.~\cite{Caprini:2015zlo}). \bigskip In Figure~\ref{gravreach} we show the reach of future and current gravitational wave detectors, assuming the optimistic maximal value of $\kappa=1$ for sound waves. For the number of degrees of freedom we use $g_*=100$. Note that $\Omega_\mathrm{sw} \gg \Omega_\mathrm{c},\Omega_\mathrm{mhd}$ at peak frequency. Over a large part of the parameter space we find good sensitivity at BBO and DECIGO, which cover the frequencies resulting from phase transitions at temperatures of $O(1)~\mathrm{TeV} \lesssim T_* \lesssim O(10^3)~\mathrm{TeV}$. For even higher frequencies, aLIGO in its fifth phase O5, which is projected to operate in the 2020s with the design sensitivity taken from Ref.~\cite{TheLIGOScientific:2016wyq}, can also provide sensitivity to phase transitions. We also show the more conservative case with the lower value of the sound-wave efficiency, $\kappa=0.4$, in Fig.~\ref{gravreach2}. Relative to the $\kappa=1$ plots of Fig.~\ref{gravreach}, here the aLIGO and eLISA experiments lose sensitivity.
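The spectral formulas above are straightforward to put into code; a sketch of the dominant sound-wave contribution with illustrative parameter values (function names and the chosen numbers are ours, not a reproduction of the plotted curves):

```python
import numpy as np

def omega_sw(f, f_sw, beta_over_H, alpha, kappa_sw, g_star=100.0, v_w=1.0):
    """Sound-wave contribution h^2 Omega_sw of the formulas above."""
    shape = (f/f_sw)**3 * (7.0/(4.0 + 3.0*(f/f_sw)**2))**3.5
    return (2.65e-6/beta_over_H * (kappa_sw*alpha/(1.0 + alpha))**2
            * (100.0/g_star)**(1.0/3.0) * v_w * shape)

def f_sw_peak(beta_over_H, T_star_GeV, g_star=100.0, v_w=1.0):
    """Red-shifted peak frequency f_sw in Hz."""
    return 1.9e-5/v_w * beta_over_H * (T_star_GeV/100.0) * (g_star/100.0)**(1.0/6.0)

# Illustrative values: beta/H_* ~ 100 (as argued above), alpha ~ 0.5, kappa_sw ~ 0.4
fp = f_sw_peak(100.0, 1e4)  # T_* = 10 TeV
print(fp, omega_sw(fp, fp, 100.0, 0.5, 0.4))
```

The spectral shape is maximal exactly at $f=f_{\rm sw}$, so evaluating there gives the overall signal strength directly.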
\section{Domain-wall interactions} \label{sec:domain} In models with discrete symmetries domain walls occur quite naturally~\cite{Sikivie:1982qv}. For example, they could be formed after a cosmological phase transition where different regions of the Universe settle into different degenerate vacua (connected to each other by the discrete symmetry). In dark sectors both the distance in field space as well as the height of the potential in between the vacua could be relatively low. In consequence the domain wall tension, i.e. the energy per unit area, could be relatively small, such that one could have a reasonably high density of walls without exceeding constraints on the energy density (there have even been suggestions connecting such domain walls to dark matter and dark energy~\cite{Battye:1999eq,Friedland:2002qs}). Here we follow the spirit of~\cite{Olive:2010vh,Pospelov:2012mt,Pustelny:2013rza} and consider the observable consequences of the existence of such domain walls. In particular we are interested in signals observable in LIGO and other gravitational wave detectors. While dark sectors by definition are very weakly coupled to Standard Model particles, even low-scale domain walls feature relatively large field values. This enhances the signal, making them potentially observable in sensitive experiments. Interestingly, such walls would give distinct transient signals with a variety of shapes (in contrast to the more constant signatures from phase transitions discussed in the previous section). \subsection*{Domain walls} Let us consider a domain wall in a pseudo-Goldstone boson field which features an additional $Z_{N}$ symmetry. Following Ref.~\cite{Pospelov:2012mt} we consider the following effective Lagrangian for the domain wall field \begin{equation} {\mathcal{L}}_{\phi}=\frac{1}{2}(\partial_{\mu}\phi)^2-2\frac{m^2 f^2}{N^2_{\phi}}\sin^{2}\left(\frac{N_{\phi}\phi}{2f}\right).
\end{equation} With this the domain wall solutions read, \begin{equation} \label{dwsol} \phi(z)=\frac{4f}{N_{\phi}}\arctan\left[\exp(mz)\right]. \end{equation} Abundant domain walls would contribute significantly to the energy density. A very conservative constraint is that this contribution should be less than the local dark matter density. Domain walls have an energy per unit area (tension) $\sigma\sim mf^2/N^{2}_{\phi}$ and a network with typical distance scale $L$ then has an energy density $\rho\sim \sigma/L$. This gives a limit on the abundance of domain walls~\cite{Pospelov:2012mt}, \begin{equation} \frac{f}{N_{\phi}}\lesssim {\rm TeV}\,\times\left(\frac{L}{10^{-2}{\rm Ly}}\right)^{1/2}\left(\frac{{\rm neV}}{m}\right)^{1/2}\left(\frac{\rho_{\rm DW}}{\rho_{\rm DM}}\right)^{1/2}. \end{equation} For lower energy densities of the domain wall network one needs a correspondingly lower scale $f$. Together with the typical velocity $v$ of the domain walls this gives an event rate, \begin{equation} {\rm Event\,\,Rate}\sim \frac{1}{10\,\,{\rm years}}\left(\frac{10^{-2}\,{\rm Ly}}{L}\right)\left(\frac{v}{10^{-3}}\right). \end{equation} Here the crucial ingredient is the velocity of the domain wall. Inside the galaxy objects typically have velocities of this order of magnitude, and indeed Earth moves with such a velocity around the center of the galaxy. Anything considerably smaller seems a bit fine-tuned. In principle domain walls could move faster, but truly stable ones should be slowed down by the expansion of the Universe\footnote{If the two vacua connected by the domain wall are not exactly equal in energy, the domain wall is in a sense a bubble wall, which could be accelerated by the energy difference and therefore be fast.}. Therefore $v\sim 10^{-3}$ seems a reasonable velocity. All in all we want the typical domain wall scale $f$ to be $\lesssim {\rm TeV}$, which is low but still viable.
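One can verify that \eqref{dwsol} indeed solves the static equation of motion $\phi''=dV/d\phi$ for the potential appearing in ${\mathcal{L}}_{\phi}$; a symbolic sketch:

```python
import sympy as sp

z, m, f, Nphi, p = sp.symbols('z m f N_phi p', positive=True)

# Potential read off the Lagrangian above, as a function of the field value p
V = 2*m**2*f**2/Nphi**2*sp.sin(Nphi*p/(2*f))**2

# Domain-wall profile of Eq. (dwsol)
phi = 4*f/Nphi*sp.atan(sp.exp(m*z))

# Static equation of motion: phi'' - dV/dphi, evaluated on the profile
residual = sp.diff(phi, z, 2) - sp.diff(V, p).subs(p, phi)
print(sp.N(residual.subs({m: 1, f: 1, Nphi: 3, z: sp.Rational(1, 2)})))  # ~0
```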
\subsection*{Interaction with photons} To have an observable effect in LIGO the domain wall field should have an interaction with Standard Model particles, preferably with photons. Essentially, LIGO measures a phase shift between the two arms of the interferometer. A simple modification of electrodynamics that leads to a phase shift is a photon mass term inside the domain wall, \begin{equation} {\mathcal{L}}_{A}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}m^{2}_{0,\gamma}\sin^{2}\left(\frac{N_{A}\phi}{f}\right)A^{\mu}A_{\mu}. \end{equation} Crucially, far away from the plane of the domain wall the effective photon mass is zero, in agreement with observation, as long as $N_{A}/N_{\phi}$ is an integer. If the photon is effectively massive in some region of space inside the detector, this leads to a phase shift. Approximately one finds\footnote{Here we use a WKB-type approximation and neglect reflections on the domain wall. In cavities as employed in LIGO this effect could be non-negligible. Moreover we neglect the small deflection in the propagation direction caused by the domain wall.}, \begin{equation} \Delta \varphi_{i}=\int_{L_{i}} d{\vec{x}}\, \Delta k({\vec{x}}), \end{equation} where $\Delta k({\vec{x}})$ is the space-dependent change in wave number and $L_{i}$ denotes the path along the arm $i$ of the interferometer. The observable quantity is the phase difference between the two paths, \begin{equation} \Delta\varphi=\Delta\varphi_{1}-\Delta\varphi_{2}. \end{equation} To evaluate this expression we have to determine the change in the wave number in the presence of a mass term. Since the energy of the photon is conserved we have, \begin{equation} \Delta k({\vec{x}})=\sqrt{\omega^2-m^{2}_{\gamma}({\vec{x}})}-\omega\approx -\frac{m^{2}_{\gamma}({\vec{x}})}{2\omega}, \end{equation} where the approximate equality holds for $m_{\gamma}\ll \omega$.
Moreover we have abbreviated, \begin{equation} m^{2}_{\gamma}({\vec{x}})=m^{2}_{0,\gamma}\sin^{2}\left(\frac{N_{A}\phi({\vec{x}})}{f}\right). \end{equation} For a completely flat domain wall as in Eq.~\eqref{dwsol} the field value of the wall only depends on the distance to the wall, \begin{equation} \phi({\vec{x}})=\phi({\vec{x}}\cdot{\vec{n}}-z_{0}-vt). \end{equation} Here ${\vec{n}}$ is the unit vector normal to the wall, $z_{0}$ is the distance of the wall from the origin at $t=0$ and $v$ is the velocity of the wall with respect to the origin. \subsection*{Simple examples} We can choose the arms of the interferometer to be in the $x$ and $y$ direction, respectively. For simplicity we now take the wall to be parallel to the $z$ direction. We specify its direction in the $x$-$y$ plane by the angle $\alpha$ with respect to the $x$-direction. For one round trip through the cavity we then obtain the phase shift, \begin{eqnarray} \label{shift} \Delta \varphi(t)\!\!&&\!\! \\\nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=-\frac{m^{2}_{0,\gamma}}{\omega}\bigg[\int^{L}_{0}\!\!dx\left[\sin^{2}\left(\frac{N_{A}\phi(x\sin(\alpha)-z_{0}-vt)}{f}\right)\right] \\\nonumber &&-\int^{L}_{0}\!\!dy\left[\sin^{2}\left(\frac{N_{A}\phi(y\cos(\alpha)-z_{0}-vt)}{f}\right)\right]\bigg] \\\nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=-\frac{m^{2}_{0,\gamma}}{\omega m} \\\nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\times \bigg[\int^{mL}_{0}\!\!d\hat{x}\left[\sin^{2}\left(\frac{N_{A}\phi((\hat{x}\sin(\alpha)-\hat{z}_{0}-v\hat{t})/m)}{f}\right)\right] \\\nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!-\int^{mL}_{0}\!\!d\hat{y}\left[\sin^{2}\left(\frac{N_{A}\phi((\hat{y}\cos(\alpha)-\hat{z}_{0}-v\hat{t})/m)}{f}\right)\right]\bigg], \end{eqnarray} where in the second equality we have rescaled to dimensionless variables $\hat{x}=mx,\,\hat{y}=my,\,\hat{z}_{0}=mz_{0},\,\hat{t}=mt$. We note that the actual signal is independent of $f$.
The dimensionless parameter $m^{2}_{0,\gamma}/(m\omega)$ controls the overall size of the phase shift. The sensitivity of gravitational wave detectors such as LIGO is usually quoted as a sensitivity to a gravitational strain, \begin{equation} h_{\rm sens}\sim\frac{\Delta L_{\rm sens}}{L}\sim 10^{-22}, \end{equation} where $\Delta L_{\rm{sens}}$ is the change in the length of a detector arm caused by the gravitational wave. In terms of a phase shift for a single path of the detector we therefore have, \begin{equation} \label{phisens} \Delta\varphi_{\rm sens}\sim \Delta L_{\rm sens}\,\omega\sim h_{\rm sens}L\omega\sim 10^{-10}. \end{equation} In Figs.~\ref{shapes1},\ref{shapes2},\ref{shapes3} we now show a few different sample shapes that can be produced by these interactions. \begin{figure}[t] \includegraphics[width=0.43\textwidth]{shapes1} \caption{$L=4000\,{\rm m}$, $\omega\approx 1\,{\rm eV}$, $m=10\,{\rm neV}$, $m_{\gamma,0}=1\,{\rm neV}$, $N_{A}/N_{\phi}=1$, $\alpha=\pi/2.2,\pi/2.5,\pi/3$ (black, blue, red), with $v$ chosen such that the signal has a length of roughly $0.02\,{\rm s}\sim 1/(50\,{\rm Hz})$; this corresponds to $v= 1\times10^{-3}$.} \label{shapes1} \end{figure} \begin{figure}[t] \includegraphics[width=0.43\textwidth]{shapes2} \caption{As in Fig.~\ref{shapes1} but $m_{\gamma,0}=0.1\,{\rm neV}$, $N_{A}/N_{\phi}=5$, $m=0.1\,{\rm neV}$, $\alpha=\pi/2.2$ and $v=3\times 10^{-3}$.} \label{shapes2} \end{figure} \begin{figure}[!t] \includegraphics[width=0.43\textwidth]{shapes3} \caption{As in Fig.~\ref{shapes1} but $m_{\gamma,0}=0.1\,{\rm neV}$, $N_{A}/N_{\phi}=5$, $m=0.5\,{\rm neV}$, $\alpha=\pi/2$ and $v=1\times 10^{-3}$.} \label{shapes3} \end{figure} From the dimensionless form of Eq.~\eqref{shift} we can determine the typical size of the signal. The $\sin^{2}$ is at most of order $1$. The region where it is non-vanishing, i.e.\ inside the domain wall, has a length of order $1$ in these units as well.
This allows one to estimate, \begin{eqnarray} \Delta\varphi \!\!&\sim&\!\! \frac{m^{2}_{0,\gamma}}{m\omega}\qquad\qquad\qquad\quad\! {\rm for}\,\, mL\gtrsim 1, \\\nonumber \!\!&\sim&\!\! \frac{m^{2}_{0,\gamma}}{m\omega}mL\sim \frac{m^{2}_{0,\gamma}L}{\omega}\quad {\rm for}\,\, mL\lesssim 1. \end{eqnarray} For special geometries, where one arm of the detector is essentially parallel to the wall, a small enhancement is possible. Using this and a sensitivity $\Delta\varphi\sim10^{-10}$ we can test the following parameter regions, \begin{eqnarray} m_{0,\gamma}\!\!&\sim&\!\! {\rm neV}\left(\frac{m}{10\,{\rm neV}}\right)^{1/2}\quad{\rm for}\,\,m\gtrsim 0.1\,{\rm neV}, \\\nonumber \!\!&\sim&\!\! 0.1\,{\rm neV}\qquad\qquad\quad\,\,\,{\rm for}\,\, m\lesssim 0.1\,{\rm neV}. \end{eqnarray} \subsection*{Signatures of domain wall crossings} Above we have already seen that domain walls can produce interesting signals, consisting of a transient with a few oscillations. What is characteristic of these signals and how do they differ from gravitational wave signals produced in black hole or neutron star mergers? The first relevant features are the typical time-scales and frequencies. The duration of the signal is essentially determined by the time it takes the domain wall to cross the detector. If the wall is thin compared to the size of the detector, i.e.\ $m\gtrsim 0.1\,{\rm neV}$, this is simply determined by the length scale of the detector and the velocity of the domain wall, \begin{equation} t_{\rm duration}\sim 10\,{\rm ms}\left(\frac{10^{-3}}{v}\right),\quad {\rm thin\,\,wall:}\,\,m\gtrsim 0.1\,{\rm neV}, \end{equation} corresponding to frequencies of the order $\sim 100\,{\rm Hz}$. In addition to the overall length of the signal one will have substructure when the wall enters/leaves one of the arms of the interferometer.
The time-scale for this is determined by the thickness of the wall and is of the order, \begin{equation} t_{\rm substructure}\sim 10\,{\rm ms}\left(\frac{0.1\,{\rm neV}}{m}\right)\left(\frac{10^{-3}}{v}\right), \end{equation} corresponding to frequencies $\sim 100\,{\rm Hz}(m/(0.1\,{\rm neV}))$. For thick walls on the other hand the duration is set by the wall thickness, \begin{eqnarray} t_{\rm duration}\!\!&\sim&\!\! 10\,{\rm ms}\left(\frac{0.1\,{\rm neV}}{m}\right)\left(\frac{10^{-3}}{v}\right), \\\nonumber &&\qquad\qquad\qquad\qquad{\rm thick \,\,wall:}\,\,m\lesssim 0.1\,{\rm neV}. \end{eqnarray} As discussed above, the velocity is set by the typical velocities in the galaxy. The second feature is the time difference between the two detectors at LIGO (or between even more detectors in the future). By the same argument as above, this is simply given by the time it takes the domain wall to cross the $\sim 3000\,{\rm km}$ distance between the sites, \begin{equation} t_{\rm two\,\,detectors}\sim 10\,{\rm s}\left(\frac{10^{-3}}{v}\right). \end{equation} This is three orders of magnitude larger than the delay between the signals for gravitational waves. To see a ``coincidence'' one therefore needs to analyze data in a suitably large time window. Indeed one can even perform an additional consistency check between the signals in different locations. This can be seen most easily in the limit when the wall is thin. Ignoring high frequency substructure, the signal then has a shape as in Fig.~\ref{shapes1}, which is determined by the angle of the wall with respect to the experiment. Therefore one can measure both the speed and the direction of motion of the wall from a single measurement; the signal at the second site can then be predicted. \bigskip \subsection*{Obvious constraints on the parameter space} Although this is a very simplistic model, let us at least discuss some obvious constraints on the parameter space from other experiments/observations.
{\bf Photons radiating $\phi$:} The mass term for the photon also represents a four boson interaction with coupling strength, \begin{equation} \lambda_{AA\phi\phi}\sim \frac{m^{2}_{0,\gamma}N^{4}_{A}}{f^2}\sim 10^{-42}\left(\frac{N_{A}}{1}\right)^{4} \left(\frac{m_{0,\gamma}}{{\rm neV}}\right)^2\left(\frac{{\rm TeV}}{f}\right)^2. \end{equation} This can be safely ignored. \bigskip {\bf Total reflection from the domain wall:} We observe radio signals from very distant astronomical sources in all directions with frequencies down to $\omega\sim (2\pi){\rm few}\,{\rm MHz}\sim {\rm neV}$. If $m_{0,\gamma}\gtrsim {\rm few}\,\,{\rm neV}$, a domain wall would totally reflect all such radio waves, i.e.\ we would see no such radio waves from the direction the wall is coming from. \subsection*{Beyond the simplest model} Instead of adding a mass term, one could also consider an axion-like-particle interaction of the domain wall with $F^{\mu\nu}F_{\mu\nu}$\footnote{Such an interaction was considered, e.g., in~\cite{Olive:2010vh}.} or $\tilde{F}^{\mu\nu}F_{\mu\nu}$. Indeed such a model might be easier to motivate theoretically. Yet the calculation of potential signals (in particular when cavities are employed) needs a more careful study which we leave to future work. \section{Summary} \label{sec:conclusion} In this note we investigated two types of signals from dark sectors observable in gravitational wave detectors: gravitational waves from first order phase transitions and dark sector domain walls very weakly interacting with photons. In the former case future experiments are needed, whereas in the latter case aLIGO could already potentially observe a signal.\\ \section*{Acknowledgements} \noindent We would like to thank Anupam Mazumdar for interesting discussions on domain walls. JJ gratefully acknowledges support by Transregio TR33 ``The Dark Universe'' and VVK is supported by the Wolfson foundation. MS and VVK are supported by STFC through the IPPP grant.
\newpage
\section{INTRODUCTION} The International Linear Collider (ILC) is a very complicated and expensive project. One aspect of key importance is the main beam dumps, which are required to safely dispose of the high power ILC beams. The work on such dumps was started at the SLC, albeit at much lower power, and continued as part of the TESLA project. However, much more work is needed to obtain satisfactory beam dumps for the ILC. We shall now review the baseline and alternative beam dump designs. \vspace{-0mm} \section{THE ILC DUMP REQUIREMENTS} The baseline layout of the ILC consists of two linacs, which then branch into two interaction regions. The current choices of crossing angle for the interaction regions are 20mrad and 2mrad. The requirement to dump the main beam after collision then leads to two full power beam dumps for each interaction region. Furthermore, the need to dump the beam at the end of the linac for commissioning and fast extraction purposes adds two more full power beam dumps to the baseline design. Note that the 500 GeV machine beam power is 11MW and the 1 TeV machine beam power is 18MW; in this paper we always consider dumps rated for the larger power. There is also a need to dump the intense beamstrahlung photons generated during the beam-beam interaction at the interaction point. This photon power is around 1MW. For the 20mrad interaction region layout a single dump is shared by the charged beam and the photons, while for the 2mrad layout the photon dump is separate. Hence, there are separate beam dumps rated for full power for all beam lines including tune-up lines, for a total of six beam dumps in the baseline. The tune-up dumps are required to be sufficiently remote from the IP, so that the collider halls can be accessed for detector maintenance while the linac is being tuned and the full beam is sent to the tune-up dump.
Technically, the elimination of two full power tune-up dumps should be possible. However, there would be an impact on availability, which may be partly mitigated by reduced power tune-up dumps ($\sim$0.5MW); the cost savings need to be further evaluated and a detailed design would need to be made [1]. \vspace{-0mm} \section{THE BASELINE TECHNOLOGY - THE WATER DUMP} The water-based dump [2] is based on the design built in 1967 as the main dump for the SLAC Linac [3], where it is still in use. This dump was designed and built to work at 2MW, but in practice was only used at 800kW. The baseline design for the ILC beam dump is based on a water vortex dump, rated for an 18MW beam. The choice of a water dump for the baseline has many advantages: the water dump has been studied in detail for accelerator projects, the problems of the larger dump design have been noted, and the studies indicate there are no ``show-stoppers''. The water dump for the TESLA project was studied in detail at DESY [4], with input from several industrial companies. The basic principle of a water dump is to present the incoming beam with a region of cold, pressurized water. The beam dumps its energy into this water, which rapidly moves away, presenting the next part of the incoming beam with fresh water. The heat is transferred away through heat exchangers. Sufficiently far beyond shower maximum the beam is large enough that steel or tungsten plates can absorb the tail of the beam energy and help reduce the overall length of the dump. There is also an outside air final cooling stage and many metres of shielding. The water is separated from the vacuum of the extraction line by a thin window - required to be thin enough to avoid the window itself becoming the dump. The window design is a key part of the overall dump design. The water flow velocity is required to be sufficiently high to avoid volume boiling of the water at the tank operating pressure when the dump is accepting the larger spot sizes of the disrupted beam.
The dump window is cooled by convection to the water so that its temperature rise during the passage of the bunch train is less than its thermal stress limit. The spot size of the undisrupted beam must be sufficiently large to prevent window damage. This will be done through a combination of optical means, an increase in extraction line length after the last optical element, and sweeping the beam across the face of the window. Beam sweeping can also help prevent volume boiling of the water behind the window; if employed, the sweeping mechanism will need to be interlocked to the machine protection system. The water circuit consists of two closed loops and an external water circuit. The inner water loop is pressurized to 10bar and has a volume of around 18 cubic meters. The length of the dump, including all shielding, is about 25m longitudinally and about 15m transversely. The control and transport of radioactive byproducts is of central importance to the dump design. Work is ongoing in this area. For example, isotropically produced neutrons contribute to the shielding thickness and careful computation of the neutron fluence is needed. For a deep-tunnel site the forward-peaked muons are stopped in approximately one km of earth. A shallow-tunnel site may require a small downward bend of the beam before it enters the dump. That would necessitate a separate dump for the beamstrahlung, followed by the charged beam dump. The required R+D items for the baseline are a study of window survivability, and the corresponding computation of radiation damage, measured in displacements per atom (DPA). A window replacement procedure, probably incorporating remote or robotic handling, and a replacement schedule can then be developed. A prototype of the window and a beam test are also necessary. The required test beam must produce energy densities in the window similar to those of the full ILC machine. Furthermore, some studies of pressure wave formation may be necessary.
\vspace{-0mm} \section{THE ALTERNATIVE TECHNOLOGY - THE GAS DUMP} The noble gas dump is the alternative design for the ILC beam dump [5,6]. This consists of about 1km of a noble gas (Ar looks the most promising) enclosed in a water cooled iron jacket. The gas core acts as a scattering target, blowing the beam up and distributing the energy into the surrounding iron. Considerable iron is required to successfully transport the heat to the outside water cooling. As in the water dump, the final layer of cooling is an outside air system. This gas dump design may ease some issues such as radiolysis and tritium production, and a gas profile can be exploited to produce a uniform energy deposition along the length of the dump. However, other issues arise such as particle beam heating of the gas and ionization effects. Further studies are needed to understand the feasibility and benefits of the gas dump. A further possibility is a gas/water hybrid dump, involving the use of a shorter gas dump as a passive beam expander, followed by a small water dump. This option also requires further study. Another possibility is a rotating solid dump immersed in water, or a dump based on some kind of liquid metal. The required R+D items for the alternative design are studies of gas heating, including ionization effects, and a study of radiation and activation effects. A study of the gas dump windows is also required. A smaller scale prototype of the dump, and some test beam time, would also be required. \vspace{-0mm} \section{CONCLUSION} In this paper, we discussed the baseline and alternative designs of the ILC beam dumps. The baseline is a high pressure water dump. Such a dump has been built before for the SLC at much lower power. This is a solid choice for the baseline, although much work is needed. The alternative choice is a gas-based dump. This looks promising, although many further studies and possibly a prototype will be required. \vspace{-0mm}
\subsection{Partially Observable Markov Decision Process} A discrete-time POMDP can be formally defined as a tuple $\left( \mathcal{X}, \mathcal{A}, \mathcal{Z}, T, O, \mathcal{R} \right)$, where $\mathcal{X}, \mathcal{A}$ and $\mathcal{Z}$ denote the state, action and observation spaces respectively; $T \left( x,a,x' \right) \triangleq \prob{x' | x, a}$ is the transition density function which expresses the probability to move from state $x \in \mathcal{X}$ to state $x' \in \mathcal{X}$ by taking action $a \in \mathcal{A}$; $O \left( x, z \right) \triangleq \prob{z | x}$ is the observation density function which expresses the probability to receive an observation $z \in \mathcal{Z}$ from state $x \in \mathcal{X}$; and $\mathcal{R}$ is a user defined reward function. As observations provide only partial information about the state, the true state of the agent is unknown. Therefore, the agent maintains a probability distribution function over the state space, also known as a belief. At each time step $t$ the belief update is performed according to Bayes rule, using the transition and observation models, given the performed action $a_{t-1}$ and the received observation $z_t$ as $b_t \left( x' \right) = \eta \int \prob{z_t | x' } \prob{x' | x, a_{t-1} } b_{t-1} \left( x \right) dx$, where $\eta$ is a normalization constant. Given a posterior belief $b_t$, a policy function $a_t = \pi(b_t)$ determines an action to be taken at time step $t$. For a finite horizon $\mathcal{T}$ the value function for a policy $\pi$ is defined as the expected cumulative reward received by executing $\pi$, \begin{equation} \label{eq: value function} V^{\pi }( b_{t}) =\mathcal{R}( b_{t } ,\pi ( b_{t })) + \underset{z_{t+1:\mathcal{T}}}{\mathbb{E}} \left[ \sum _{\tau =t+1}^{\mathcal{T}}\mathcal{R}( b_{\tau } ,\pi ( b_{\tau }))\right]. 
\end{equation} Similarly, an action-value function, \begin{equation} \label{eq: Q function} Q^{\pi }( b_{t}, a_t) = \mathcal{R}( b_t , a_t) + \underset{z_{t+1}}{\mathbb{E}} \left[V^{\pi }( b_{t+1}) \right], \end{equation} is defined by executing action $a_t$ and then following the policy $\pi$ for a finite horizon $\mathcal{T}$. At each planning session, the agent solves a POMDP by searching for the optimal policy $\pi^*$ that maximizes \eqref{eq: value function}. \subsection{Hybrid Belief} A hybrid belief is defined over both continuous and discrete random variables. The continuous random variables can represent the state of the agent and (possibly also) of the environment, as is common in the SLAM framework. The discrete random variables can represent, e.g., object classes and/or data association hypotheses. Nevertheless, the following definition is general and not restricted to these examples. We formally define the hybrid belief at each time $t$ as \begin{equation} \label{eq: hybrid belief} b_t \triangleq \mathbb{P}(X_t,\beta_{0:t}\mid H_t) = \underbrace{\mathbb{P}(X_t \mid \beta_{0:t}, H_t)}_{b[X_t]_{\beta_{0:t}}} \underbrace{\mathbb{P}(\beta_{0:t}\mid H_t)}_{b[\beta_{0:t}] \equiv \omega_{t}}, \end{equation} where $X_t\triangleq \{ x_0,..,x_t\}$ and $H_t \triangleq \{z_{1:t}, a_{0:t-1}\}$ represents all past actions and observations. $b[X_t]_{\beta_{0:t}}$ is the conditional belief over the continuous variables. $\omega_{t}$ is the marginal belief over the discrete variables, which can be considered as the hypothesis weight. We define $H_{t+1}^{-} \triangleq H_t \cup \lbrace a_t \rbrace$ and $b_{t+1}^{-} \triangleq \mathbb{P} \left( X_{t+1}, \beta_{0:t+1} | H_{t+1}^{-} \right)$ for notational convenience.
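For concreteness, the recursive belief update stated earlier is just a discrete Bayes filter when the state space is finite. The following is a minimal sketch with a toy two-state model of our own choosing (the tables and names are illustrative, not from any benchmark):

```python
def belief_update(b, a, z, T, O):
    """Posterior b'(x') = eta * O[x'][z] * sum_x T[x][a][x'] * b[x]."""
    n = len(b)
    unnorm = [O[xp][z] * sum(T[x][a][xp] * b[x] for x in range(n))
              for xp in range(n)]
    eta = sum(unnorm)  # normalization constant
    return [p / eta for p in unnorm]

# Toy 2-state POMDP: one action, two observations (numbers are illustrative).
T = [[[0.9, 0.1]],   # T[x][a][x']: transition probabilities
     [[0.2, 0.8]]]
O = [[0.8, 0.2],     # O[x][z]: observation likelihoods
     [0.3, 0.7]]
b0 = [0.5, 0.5]
b1 = belief_update(b0, a=0, z=0, T=T, O=O)
print(b1)  # mass shifts toward state 0, which better explains z=0
```

The same prediction-then-correction structure applies per hypothesis in the hybrid case, with the conditional belief updated by $\psi(\cdot)$ and the hypothesis weight by the recursion for $\omega_t$.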
The marginal belief $\omega_{t}$ is updated for each realization of the discrete random variables according to \begin{align} \label{eq: recursive weight update} &\omega^{i,j}_t = \eta^{-1} \mathbb{P}(z_t \mid \beta_{0:t}^{i,j}, H_t^-) \mathbb{P}(\beta_{0:t}^{i,j}\mid H_t^-)\\ &= \! \eta^{-1} \overbrace{\mathbb{P}(z_t \mid \beta_{0:t}^{i,j}, H_t^-) \mathbb{P}(\beta_t^i \mid \beta_{0:t-1}^j, H_t^-)}^{\zeta_t^{i|j}} \overbrace{\mathbb{P}(\beta_{0:t-1}^j\mid H_t^-)}^{\omega_{t-1}^j}, \notag \end{align} \normalsize which is obtained by applying Bayes rule followed by the chain rule to $\omega_{t}$. The un-normalized weight can be expressed recursively as $\tilde{\omega}^{i,j}_t = \zeta_t^{i\mid j} \omega^j_{t-1}$. The conditional belief $b[X_t]_{\beta_{0:t}}$ is updated for each realization of the discrete random variables as \begin{equation}\label{eq: conBeliefUp} b[X_t]_{\beta_{0:t}}^{i,j} = \psi(b[X_t]^j_{\beta_{0:t-1}}, a_{t-1}, z_t), \end{equation} where $\psi(.)$ represents the Bayesian inference method. Generally, when planning with hybrid beliefs the agent constructs both a belief tree and multiple hypotheses trees. Each hypothesis tree represents the posterior hypotheses given a history. Since every node of the planning tree (i.e. belief tree) corresponds to a hypotheses tree, the computational complexity of the corresponding POMDP becomes a significant burden. In the following section we present a novel algorithm that circumvents this difficulty via Monte-Carlo sampling. \subsection{vanilla Hybrid-Belief MCTS} \label{sec:vanillaMCTS} For completeness, we first present a \vanillaMCTS algorithm. Although the exact algorithm does not seem to exist in the literature, this is the ad-hoc way to interleave hybrid beliefs with state-of-the-art POMDP solvers. \vanillaMCTS can be seen as an adaptation of the state-dependent MCTS \cite{Silver10nips} algorithm to a (hybrid-)belief \eqref{eq: hybrid belief}, by augmenting the belief to a belief-state.
A similar approach was also taken by PFT-DPW \cite{Sunberg18icaps}, which utilized particle filters to approximate a posterior belief over continuous variables. However, computing a full hybrid belief is a difficult and sometimes intractable task, even for particle-based solvers, and is thus prone to approximations. \textbf{Pruning.} The number of hypotheses at each posterior node in the belief tree may be prohibitively large. To handle the infeasible number of posterior hypotheses, \vanillaMCTS utilizes a pruning mechanism similar to those suggested in \cite{Pathak18ijrr,Hsiao19icra}. As a result, unlikely hypotheses are removed from the hypotheses tree. In \vanillaMCTS, each posterior node holds a fixed number of hypotheses once expanded, depending on a predefined hyperparameter. Such a method may sometimes be too harsh, pruning away high-probability hypotheses due to a limited hypotheses budget, or too loose, keeping highly unlikely hypotheses and thus wasting valuable computational time. Other approaches may also be applicable, such as fixing a probability threshold value under which all hypotheses are pruned. However, the latter has its own deficiencies, such as hypothesis depletion. For completeness, we describe \vanillaMCTS implementation details in the supplementary file \cite{Barenboim23ral_supplementary}. \subsection{Hybrid Belief Monte-Carlo Planning}\label{sec:improvedMCTS} In contrast to \vanillaMCTS, in \improvedMCTS we do not use any pruning heuristic, for two reasons: (1) it requires knowledge, or an insight, as to how many hypotheses would be sufficient for the specific POMDP; (2) each posterior node in the belief tree maintains hypotheses based on a hyperparameter, regardless of how relevant this node may be for decision-making. Conversely, we suggest an adaptive algorithm that focuses computational resources in proportion to their relevance in the belief tree, which circumvents the difficulty of a full belief update.
\improvedMCTS is recursively invoked with a single sampled hypothesis. Every such single hypothesis may evolve into multiple hypotheses. The \improvedMCTS algorithm computes only the posterior weights (i.e. probability values) that are conditioned on that single hypothesis, followed by a random weight sample based on their categorical distribution. Then, only the hypothesis associated with the sampled weight is updated. This is in contrast to the full posterior update done in \vanillaMCTS. Additionally, to support belief-dependent rewards, the reward value is estimated based on state samples received across multiple visits to the belief node, i.e., state samples from multiple hypotheses. We describe the algorithm details in section \ref{sec:algorithms}. \improvedMCTS holds some desirable properties compared to the full belief update and pruning approaches. First, at each iteration of \improvedMCTS, at most $\mathcal{T}$ posterior hypotheses and a small subset of the weights are computed. This is in contrast to the full posterior update, which would require the entire (or pruned) set of current posteriors and would compute all the posterior hypotheses of the next time-step, which is highly resource expensive at every iteration. Second, \improvedMCTS explores both the planning tree and the hypotheses trees by focusing its computational effort on the interesting parts, utilizing UCB to guide the search; this property is inspired by MCTS, which builds the planning tree by focusing on the optimistic parts of the tree. In the following section, we show that this approach results in an unbiased estimator for the true value function. \subsection{State-dependent rewards} \label{sec: state dept reward} State-dependent reward functions are defined as the expected reward value over the belief, i.e., $\mathcal{R}_X\triangleq\mathbb{E}_{X\sim b}[r(X,a)]$. Generally, state-dependent rewards cannot be computed analytically; thus, they are approximated using state samples.
Since in a hybrid belief the number of hypotheses may be prohibitively expensive to compute, most existing algorithms approximate the belief, $\hat{b}$, by performing some heuristic pruning (e.g. by keeping the most likely hypotheses). As a consequence, the approximate distribution is shifted, and the reward value is biased even with an infinite number of state samples, \begin{lemma} \label{lemma1} The estimator $\mathbb{E}_{X\sim \hat{b}}[r(X,a)]$ is generally biased. \end{lemma} \begin{proof} \label{proof:lemma1} Assuming the weights of the pruned hypotheses are non-zero, the proof is immediate, \begin{align} &\mathbb{E}_{X\sim b}[r(X,a)] =\int _{X} \sum _{\beta } b( X, \beta) r( X,a) dX \\ &\!=\! \int\limits_{X}\sum_{\beta\in A} b( X, \beta)r(X,a)dX\!+\!\!\sum_{\beta\in \neg A} b( X, \beta)r(X,a) dX \notag \\ &\neq \eta_A \int\limits_{X}\sum_{\beta\in A} b( X, \beta)r(X,a)dX= \mathbb{E}_{X\sim \hat{b}}[r(X,a)].\notag \end{align} where $A$ denotes the set of un-pruned hypotheses, and $\eta_A$ is their corresponding normalizer after pruning. \end{proof} In contrast, \improvedMCTS samples hypotheses iteratively starting from the root node; it utilizes sequential importance resampling, which results in an unbiased estimator for the reward value. At every iteration, the new sampled states from the current hypothesis are added to the estimator from previous iterations, by averaging. The process for generating hypotheses can be described as follows; for any time $t$, a hypothesis is sampled i.i.d from a proposal-prior distribution, $\beta_0^i \sim \mathbb{Q}(\beta_0 \mid H_0)$. Then, hypotheses are recursively sampled from a proposal distribution, $\beta_\tau^i \sim \mathbb{Q}(\beta_\tau \mid \beta_{0:\tau-1})$ up to time $\tau\!=\!t$. We define $\mathbb{Q}(\beta_0\mid H_0)\!\triangleq \!\mathbb{P}(\beta_0\mid H_0)$, and $\mathbb{Q}(\beta_\tau \mid \beta_{0:\tau-1})\triangleq \textsc{Uniform}\left[1, \left|\beta_\tau\right|\right]$. 
Then, for every time-step $t$, the corresponding importance weight is, \begin{align} \lambda_t^{i,j} \!&=\! \frac{\mathbb{P}(\beta_{0:t}^{i,j}\mid H_t)}{\mathbb{Q}(\beta_{0:t}^{i,j}\mid H_0)} \!=\! \frac{\eta_t \zeta_t^{i\mid j} \mathbb{P}(\beta_{0:t-1}^{j}\mid H_{t-1})}{\mathbb{Q}(\beta_t^i \mid \beta_{0:t-1}^j)\mathbb{Q}(\beta_{0:t-1}^j\mid H_{0})} \\ &\!=\! \frac{\eta_t \zeta_t^{i\mid j} }{1/ | \beta_t^{i\mid j} | } \frac{\mathbb{P}(\beta_{0:t-1}^j\mid H_{t-1})}{\mathbb{Q}(\beta_{0:t-1}^j\mid H_{0})} \!=\! \eta_t \zeta_t^{i\mid j} |\beta_t^{i\mid j}| \lambda_{t-1}^j, \notag \end{align} where $\lambda_0^j=1$. As a consequence, \begin{lemma} \label{lemma2} \improvedMCTS state-dependent reward estimator, $\hat{\mathcal{R}}_X\triangleq \frac{1}{N}\sum_{i,j=1}^N \lambda_t^{i,j} \frac{1}{n_X}\sum_{k=1}^{n_X} r(X_t^{i,j,k},a_t)$, is unbiased. \begin{proof} If states are sampled i.i.d. for each hypothesis, then the expected value of the reward estimator, $\hat{\mathcal{R}}_X$, is, \begin{align} \label{eq: reward estimation} &\mathbb{E}\left[\hat{\mathcal{R}}_X\right] \triangleq \mathbb{E}\left[\frac{1}{N}\sum_{i,j=1}^N \lambda_t^{i,j} \frac{1}{n_X}\sum_{k=1}^{n_X} r(X_t^{i,j,k},a_t)\right] \\ &=\mathbb{E}_{\mathbb{Q}}\left[\frac{1}{N}\sum_{i,j=1}^N \lambda_t^{i,j} \mathbb{E}_{b[X_t]_{\beta_{0:t}}^{i,j}}\left[\frac{1}{n_X}\sum_{k=1}^{n_X} r(X_t^{i,j,k},a_t)\right]\right] \notag\\ &= \frac{1}{N}\sum_{i,j=1}^N \mathbb{E}_{\mathbb{Q}} \left[ \frac{\mathbb{P}}{\mathbb{Q}} \frac{1}{n_X}\sum_{k=1}^{n_X}\mathbb{E}_{b[X_t]_{\beta_{0:t}}}\left[ r(X_t^{i,j,k},a_t)\right] \right]\notag \\ &= \mathbb{E}_{\mathbb{P}} \left[\mathbb{E}_{b[X_t]_{\beta_{0:t}}} r(X_t,a_t) \right] \triangleq \mathcal{R}_X \notag \end{align} where $\mathbb{P}\!=\!\mathbb{P}(\beta_{0:t}\mid H_t)$, $\mathbb{Q}\!=\!\mathbb{Q}(\beta_{0:t}\mid H_t)$, and $N$ and $n_X$ denote the number of samples from $\mathbb{Q}$ and $b[X_t]^{i,j}_{\beta_{0:t}}$ respectively. 
\end{proof} \end{lemma} As the planning horizon grows, sampling hypotheses uniformly quickly induces sample degeneracy. That is, the weights of most hypothesis samples become negligible, while only a few remain significant, which negatively affects the accuracy of the estimate. To avoid this issue, we perform resampling at every step, also known as sequential importance resampling (SIR). Before resampling, each hypothesis weight simply becomes $\lambda^{i \mid j}_t= \eta_t \zeta_t^{i\mid j} \left| \beta^{i \mid j}_t \right| $, which is then updated to $1/N$ after resampling. Note that resampling does not introduce bias to the estimator \cite{kennedy16book}. To avoid repeated derivations, for the rest of this sequel we treat mathematical proofs as if hypotheses are directly sampled from the distribution $\mathbb{P}$, even though they are in fact sampled from the proposal distribution, $\mathbb{Q}$. However, all derivations can start by sampling from $\mathbb{Q}$ and then follow steps similar to those of lemma \ref{lemma2}, followed by resampling, to arrive at the same result. In some cases of interest, such as ambiguous DA, the normalizer $\eta_t$ cannot be easily computed, and so the importance weight, $\lambda_t$, cannot be computed. A common practice is to use the self-normalized version of the estimator, i.e. $\tilde{\lambda}^{i \mid j}_t = \tilde{\lambda}^{i \mid j}_{t-1} \frac{\zeta_t^{i \mid j}}{\sum \zeta_t^{i \mid j}}$, which is no longer unbiased \cite{kennedy16book}. However, the self-normalizing variation is consistent, meaning it becomes less biased with more samples and converges in probability (denoted $\rightarrow^p$) to the theoretical value.
This is a direct consequence of applying the weak law of large numbers to both the numerator and the denominator of the self-normalized estimator, \begin{align} &\hat{\mathcal{R}}^{SN}_X\triangleq \frac{ \sum_{i,j=1}^N \zeta_t^{i \mid j} \omega^j_{t-1} \frac{1}{n_X}\sum_{k=1}^{n_X} r(X_t^{i,j,k},a_t)}{ \sum_{i,j=1}^N \zeta_t^{i \mid j} \omega^j_{t-1}} \\ &=\frac{ \frac{1}{N} \sum_{i,j=1}^N \eta_t \zeta_t^{i \mid j} \omega^j_{t-1} \frac{1}{n_X}\sum_{k=1}^{n_X} r(X_t^{i,j,k},a_t)}{\frac{1}{N} \sum_{i,j=1}^N \eta_t \zeta_t^{i \mid j} \omega^j_{t-1} }\rightarrow^p \frac{\mathcal{R}_X}{1}, \notag \end{align} where the denominator converges to the sum of weights, $\sum_{i,j}\omega_t^{i,j}=1$, and the numerator to the reward value. \subsection{Belief-dependent rewards} Contrary to state-dependent rewards, belief-dependent rewards are not necessarily linear in the belief, so averaging over state samples from different hypotheses does not guarantee convergence to the theoretical reward value. Moreover, different reward definitions may be functions of not only the states, but also the weights, the conditional beliefs, or the probability density values of the complete theoretical belief (such as Shannon's entropy \cite{Shienman22icra} or differential entropy \cite{Barenboim22ijcai}). To support the various cases, we split our discussion into the parametric case, where the reward can be precisely calculated given a set of parametric conditional beliefs and the corresponding weights, and the nonparametric case, where the reward is estimated based on state and hypothesis samples. \improvedMCTS supports belief-dependent rewards by accumulating conditional beliefs across multiple visitations of the same history (i.e. same node in the belief tree). The estimated weight of each conditional belief is the sample frequency of the corresponding hypothesis. That is, $\hat{\mathbb{P}}(\beta_{0:t}^{i,j}\mid H_t)\!\triangleq\!\hat{\omega}^{i,j}_t \!=\!
\frac{\sum_{k=1}^{N}\mathbf{1}_{\beta^k=\beta_{0:t}^{i,j}}}{N}$, where $N$ is the number of hypothesis samples, $i,j\in[1,|\beta_{0:t}|]$, $|\beta_{0:t}|$ is the theoretical number of hypotheses at time $t$, and $\mathbf{1}_{\square}$ denotes the indicator function. \textbf{Parametric.} Assuming a parametric representation for the conditional beliefs, $b[X_t]_{\beta_{0:t}}^{i,j}$, the belief-dependent reward, $\mathcal{R}_b(b_t,a_t)$, is evaluated using the estimated hybrid belief, $\mathcal{R}_b(\hat{b}_t,a_t)$, where $\hat{b}_t=b[X_t]_{\beta_{0:t}}\hat{b}[\beta_{0:t}]\equiv b[X_t]_{\beta_{0:t}}\hat{\mathbb{P}}(\beta_{0:t}\mid H_t)$, and $b_t$ is defined in \eqref{eq: hybrid belief}. Applying the hypothesis resampling approach described in Section \ref{sec: state dept reward}, the sample frequency of each hypothesis in $\hat{b}_t$ is unbiased; in other words, in expectation it equals the theoretical weight. Moreover, \begin{lemma} \label{lemma4 - consistency of b dept. reward} $\mathcal{R}_b(\hat{b}_t,a_t)$ converges in probability to $\mathcal{R}_b(b_t,a_t)$ for any continuous, real-valued function $\mathcal{R}_b$. \begin{proof} By the law of large numbers, $\hat{\omega}_t^{i,j}$ is consistent as $N\rightarrow\infty$ for all ${i,j}\in[1,|\beta_{0:t}|]$, \begin{equation} \hat{\omega}_t^{i,j} =\sum_{k=1}^N \frac{\mathbf{1}_{ \beta^k=\beta_{0:t}^{i,j}}}{N} \rightarrow^p \mathbb{P}(\beta^{i,j}_{0:t}\mid H_t) \!= \omega_t^{i,j}, \end{equation} then, due to the continuous mapping theorem, \begin{align} \mathcal{R}_b(b[X_t]_{\beta_{0:t}}\hat{b}[\beta_{0:t}], a_t) \rightarrow^p\mathcal{R}_b(b[X_t]_{\beta_{0:t}}b[\beta_{0:t}], a_t), \notag \end{align} that is, $\mathcal{R}_b(\hat{b}_t,a_t)$ is a consistent estimator for $\mathcal{R}_b(b_t,a_t)$.
\end{proof} \end{lemma} \textbf{Nonparametric.} In the nonparametric case, the reward value is estimated based on state particles, which may correspond to conditional belief estimation via particle filters, or to POMDPs with reward functions that have no closed-form solution and are thus approximated via Monte Carlo methods. Then, instead of $\mathcal{R}_b(b_t,a_t)$, an estimator over the reward is used, $\hat{\mathcal{R}}_b(\hat{b}[X_t]_{\beta_{0:t}}\hat{b}[\beta_{0:t}],a_t)$, where both the belief and the reward functions are estimators. We denote $\hat{b}[X_t]_{\beta_{0:t}}^{ k} = \sum\nolimits _{i=1}^{n_x}\alpha_t^{i,k}\delta(X-X_t^{i,k})$, where $\alpha_t^{i,k}$ is the weight of state particle $i$ generated from conditional belief $k$ and $n_x$ is the number of particles used to approximate the conditional belief. To arrive at consistency results for an arbitrary nonparametric reward estimator, we assume that the reward estimator based on samples from the full theoretical belief is consistent, i.e., $\hat{\mathcal{R}}_b(\hat{b}[X_t]_{\beta_{0:t}}b[\beta_{0:t}],a_t)\rightarrow^p \mathcal{R}_b(b_t,a_t)$. \begin{lemma} \label{lemma4} If $\hat{\mathcal{R}}_b(\hat{b}[X_t]_{\beta_{0:t}}b[\beta_{0:t}],a_t)\rightarrow^p \mathcal{R}_b(b_t,a_t)$, then $\hat{\mathcal{R}}_b(\hat{b}[X_t]_{\beta_{0:t}}\hat{b}[\beta_{0:t}],a_t) \rightarrow^p \mathcal{R}_b(b_t,a_t)$. \begin{proof} The proof follows steps similar to those of Lemma \ref{lemma4 - consistency of b dept. reward}. \end{proof} \end{lemma} \subsection{Value function} When using the existing hypothesis-pruning approximations, the estimated value function converges to the wrong value even when some external source provides the exact reward value. This is due to the way observations are generated.
The value function is defined as \begin{equation} V^{\pi}(b_t) = \int_z \mathbb{P}(z_{t+1:\tau} \mid H_t^-) \sum _{\tau =t}^{\mathcal{T}}\mathcal{R}( b_{\tau } ,\pi_{\tau }) dz , \end{equation} and since there is usually no direct access to observations given the history, state samples are first generated, and then observations are sampled using the observation model, that is, $\mathbb{P}(z_{t} \mid H_t^-) = \sum_\beta \int_X \mathbb{P}(z_t \mid X_t,\beta_{0:t}) b^-(X_t,\beta_{0:t})$. Replacing $b^-$ with its pruned counterpart, $\hat{b}^-$, results in a shifted distribution for both the belief and the measurements, which impacts the value function estimation. The proof of this claim is similar to that of Lemma \ref{lemma1} and is omitted here for conciseness. Instead, \improvedMCTS generates observations by first sampling a hypothesis from the belief at the current node, $\beta_{0:t}^j$. Conditioned on $\beta_{0:t}^j$ and the history, \improvedMCTS samples a new plausible hypothesis, $\beta^i_{t+1}$. Then, an observation is sampled based on the posterior hypothesis. More formally, \begin{align} &\mathbb{E}_{z_{t+1:\tau}} [ \sum_{\tau=t+1}^\mathcal{T} \mathcal{R}_{\tau} ]\! = \! \mathbb{E}_{z_{t+1}} \left[ \mathcal{R}_{t+1} + \mathbb{E}_{z_{t+2:\tau}}\left[V^\pi_{t+2}\right]\right] \\ &= \underbrace{\mathbb{E}_{\beta_{0:t}} \mathbb{E}_{\beta_{t+1}\mid \beta_{0:t}} \mathbb{E}_{z_{t+1}\mid \beta_{0:t+1}} \left[ \mathcal{R}_{t+1}\right]}_{\triangleq \alpha_{t+1}} + \mathbb{E}\left[V^\pi_{t+2}\right]\!.
\notag \end{align} We then define the estimator for the expected reward, $\hat{\alpha}_{t+1}$, \begin{align} \hat{\alpha}_{t+1} \triangleq \hat{\mathbb{E}}_{\mathbb{Q}}\!\!\left[\frac{\mathbb{P}\left( \beta _{t+1}^{i} \mid \beta _{0:t}^{j} ,H_{t+1}^{-}\right)}{\mathbb{Q}\left( \beta _{t+1}^{i} \mid \beta _{0:t}^{j} ,H_{0}\right)} \lambda _{t}^{j} \hat{\mathbb{E}}_{z_{t+1} \mid \beta _{0:t+\!1} ,H_{t+\!1}^{-}}[\hat{\mathcal{R}}_{t+1}]\right] \end{align} \begin{lemma} \label{lemma5} Given an unbiased reward estimator, $\hat{\mathcal{R}}$, the value-function estimator used in \improvedMCTS is unbiased. \end{lemma} \begin{proof} \label{proof:lemma5} Applying steps similar to those in the proof of Lemma \ref{lemma2} to $\hat{\alpha}_{t+1}$ leads to the unbiased value $\alpha_{t+1}$. Continuing recursively on the value function yields the desired result. See \cite{Barenboim23ral_supplementary} for further details. \end{proof} \subsection{Ambiguous data association} To address ambiguous data association, equation \eqref{eq: recursive weight update} can be adapted to \begin{equation} \label{eq: DA weight update} \omega_t^{i,j} = \tilde{\zeta} _{t}^{j\mid i} \omega _{t-1}^{i}, \end{equation} where \begin{align} \tilde{\zeta} _{t}^{j\mid i} &= \frac{\zeta _{t}^{j\mid i}}{\sum_i \zeta _{t}^{j\mid i} \omega_{t-1}^i }, \label{eq: recursive DA weight update} \\ \zeta _{t}^{j\mid i} \!&=\! \int _{X_{t}}\mathbb{P}( z_{t} \mid X_{t},\beta_t^i )\mathbb{P}( \beta _{t}^i \mid X_{t})\mathbb{P}( X_t \mid \beta _{0:t-1}^j ,H_{t}^-). \notag \end{align} The latter is obtained by marginalizing $\zeta_t^{i\mid j}$ over the states and adhering to the Markov assumption of the observation and association ($\mathbb{P}(\beta_t\mid X_t)$) models, as suggested in \cite{Pathak18ijrr}. \subsection{Negative information} \begin{table}[t] \begin{center} \begin{scriptsize} \begin{tabular}{ |c|c|c|c|c| } \hline $z^{\beta_{t,k}} \!=\! \infty$ & $\beta_{t,k}\!>\! n_{z_t}$ & $(x^{r},l^{k})\! \in\!
S.R.$ & $\mathbb{P}(z \mid x,l)$ & $\mathbb{P}(\beta \mid x,l)$ \\ \hline no & no & yes & $f(\cdot)$ & 1 \\ \hline no & no & no & 0 & 0 \\ \hline yes & yes & no & 1 & 1 \\ \hline yes & yes & yes & 0 & 0 \\ \hline no & yes & yes & $f(\cdot)$ & 0 \\ \hline no & yes & no & 0 & 1 \\ \hline yes & no & no & 1 & 0 \\ \hline yes & no & yes & 0 & 1 \\ \hline \end{tabular} \caption{\label{table:negative_information}Possible combinations when considering negative information. $z^{\beta_{t,k}}=\infty$ indicates no observation. Hypothesis element $\beta_{t,k}> n_{z_t}$ assumes that $x^r,l^k$ are out of the sensing range. $(x^{r},l^{k}) \in S.R.$ indicates that a specific realization is within the sensing range. $\mathbb{P}(z^{\beta_{t,k}} \mid x^{r},l^{k})$ and $\mathbb{P}(\beta_{t,k} \mid x^{r},l^{k})$ indicate the likelihood of the models. Lastly, $f(\cdot)$ denotes the likelihood value of the observation sensor (e.g. Gaussian).} \end{scriptsize} \vspace{-15pt} \end{center} \end{table} Just like observations affect the hypotheses' weights, not receiving an expected observation also affects the weights, commonly known as negative information. We build on previous work \cite{Pathak18ijrr}, which addresses hybrid Bayesian inference for ambiguous DA, and show how the mathematical formulation naturally extends to include negative information. We limit our discussion of negative information to the context of landmark-based observations. We conjecture that this formulation can also be adapted to arbitrary observations, but such an extension is beyond the scope of this paper. Negative information is based on not receiving an observation from a mapped landmark. We denote $|L_t|\in \mathbb{N}$ as the number of mapped landmarks at time instant $t$. This usually refers to the number of landmarks that already exist in the agent state (but can be defined otherwise). We also define the observation as $z_t=[z_t^1,...,z_t^{|L_t|}]$.
Note that there are $|L_t|$ observation elements in the observation, even though usually not all landmarks can be observed at a single time step, as some might be out of the sensing range due to limited field of view, occlusions, and so on. If at time $t$ only $n_{z_t} < |L_t|$ landmarks are observed, we fill the rest of the observation array with $z^k_t=\infty$, i.e., out of sensing range. Then, the observation array becomes $z_t\!=\![z_t^1,...,z_t^{n_{z_t}},\infty,...,\infty]_{1\times |L_t|}$. The reason for such an uncommon inflation of the observation array will become clear shortly. We define $\beta_{t}=[\beta_{t,1},...,\beta_{t,|L_t|}]$ as an array that assigns each landmark to some observation element. For example, $\beta_{t,k}=1$ associates landmark $l^k$ with observation-element $z_t^1$ from $z_t$. Note that by the definition of the observation array, $z_t^{\beta_{t,k}}=\infty$ for all $\beta_{t,k} > n_{z_t}$, which does not correspond to any real observation. Equipped with the definitions of $\beta_t$ and $z_t$, we now discuss the adaptation of the observation and association models. We drop the $\square^{i,j}$ notation to avoid notation overloading; the derivations below hold for each hypothesis separately. In the landmark-based context, it is common to further simplify the expression in \eqref{eq: recursive DA weight update} by assuming conditional independence of the observation elements given the state variables, yielding a product of observation models, $\mathbb{P}( z_{ t} \mid X_{t},\beta_{t}) = \prod^{|L_t|}_{k=1}\mathbb{P}( z^{\beta_{t,k}}_{ t} \mid x^r_t,l^k)$, where $x^r_t$ and $l^k$ are the current pose of the agent and landmark $k$. For simplicity, we assume in this paper an ideal detection sensor, in the sense that if a landmark is within range, the sensor will detect it.
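To make the factorized model concrete, the padded observation array and the per-landmark observation likelihood under the ideal-detection assumption might be sketched as follows (a hedged illustration only; the density `f` and the field-of-view predicate `in_range` are hypothetical placeholders):

```python
import math

INF = math.inf  # marker for "no observation / out of sensing range"

def pad_observations(z_raw, n_landmarks):
    """Inflate the observation array to length |L_t| with out-of-range markers."""
    return z_raw + [INF] * (n_landmarks - len(z_raw))

def obs_likelihood(z_k, x_r, l_k, in_range, f):
    """Per-landmark observation model under the ideal-detection assumption:
    an in-range landmark is always detected, an out-of-range one never is."""
    if z_k == INF:
        return 0.0 if in_range(x_r, l_k) else 1.0
    return f(z_k, x_r, l_k) if in_range(x_r, l_k) else 0.0

def joint_likelihood(z, beta, x_r, landmarks, in_range, f):
    """P(z_t | X_t, beta_t) as a product over landmarks, with beta[k]
    indexing the observation element assigned to landmark k."""
    p = 1.0
    for k, l_k in enumerate(landmarks):
        p *= obs_likelihood(z[beta[k]], x_r, l_k, in_range, f)
    return p
```

Associations that pair an in-range landmark with the $\infty$ marker, or an out-of-range landmark with a real observation, receive zero likelihood, which is how negative information enters the weight update.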
Under this assumption, the likelihood of obtaining an out-of-range observation ($z^{\beta_{t,k}}_t=\infty$), given that the landmark is within the sensing range (denoted $S.R.$), is $\mathbb{P}(z^{\beta_{t,k}}_t=\infty\mid x^r_t,l^k\in S.R.)=0$. However, the likelihood of obtaining an out-of-range observation given that the landmark is indeed out of the sensing range is $\mathbb{P}(z^{\beta_{t,k}}_t=\infty\mid x^r_t,l^k \notin S.R.)=1$. The association model, $\mathbb{P}(\beta_{t,k}\mid x_t^r, l^k)$, assigns a probability to associate a landmark, $l^k$, with a specific observation index, $\beta_{t,k}$. We define the likelihood of associating an out-of-sensing-range landmark to an actual observation element (i.e. $\beta_{t,k} \leq n_{z_t}$) as $\mathbb{P}(\beta_{t,k} \leq n_{z_t}\mid x_t^r, l^k\notin S.R.)=0$. Conversely, the likelihood of associating a landmark that is within the sensing range is nonzero, for simplicity defined here as a uniform distribution across all feasible associations, $\frac{1}{n_{z_t}}$. We explicitly state all possible combinations of state, association, and observation in Table \ref{table:negative_information}. \section{Introduction} \input{01-Introduction.tex} \section{Preliminaries} \input{02-Preliminaries.tex} \section{POMDP Planning with hybrid beliefs} \input{03-High-Dimensional-MC-Planning.tex} \section{Theoretical analysis} \label{sec:obj} \input{04-RewardFunction.tex} \section{Implementation details}\label{sec:algorithms} \input{05-ImplementationAndDa.tex} \section{Negative information in ambiguous data association}\label{sec:negInfo} \input{06-negInfo.tex} \section{Experiments}\label{sec:experiments} \input{05-ExperimentsResults.tex} \section{CONCLUSIONS} \input{06-Conclusions.tex} \addtolength{\textheight}{-12cm} \section*{ACKNOWLEDGMENT} The authors thank Andrey Zhitnikov for helpful discussions regarding negative information. \bibliographystyle{IEEEtran}
\section{Introduction} Anomalies in datasets are typically associated with unexpected or unwanted characteristics such as contamination, noise or outliers that deviate significantly from expectations. The ability to detect anomalies and accurately estimate contamination in datasets is important in a wide variety of domains including healthcare, astronomy, environmental and materials sciences. The context that motivates our work is detecting anomalies and estimating contamination in datasets collected from communication and computer systems. Specific applications of anomaly detection in these datasets include network management and Internet security broadly defined. Communication and Internet measurement datasets have several distinguishing characteristics including the potential for extreme scale and high dimensionality. The standard framework for anomaly detection is based on establishing a baseline for {\em normal} ({\em e.g.,} in a distributional sense) and then setting a threshold which, if exceeded, identifies an anomaly. The goal in establishing norms and thresholds is to identify anomalies with low false alarm rates. There is an extensive literature on methods for anomaly detection (see related work in Section~\ref{sec:related}). In this paper we describe a new method for anomaly detection which is based on estimating the level of contamination in a dataset. An anomaly is declared if a dataset has an elevated level of contamination. We consider the contamination-free ({\em i.e.,} normal) condition of a dataset to be specified by a model composed of a set of distributions. We then compare the model to the distributional profile of a target dataset collected over a specified period. A standard method for comparing datasets in this way is goodness of fit (GoF) testing~\cite{d1986goodness}.
To the best of our knowledge, this paper is the first to address the problem of contamination estimation using GoF testing based on entropy minimization, as we define in Section \ref{sec:probstate}. The approach we develop is based on answering the following question. Given a model consisting of a family of distributions, a specified $p$-value, and an empirical dataset, what is the minimum number of data points that must be discarded so that the empirical distribution of the data matches a member model distribution (in terms of GoF for a specified $p$-value)? This is akin to finding the largest subset of the original dataset which has an empirical distribution \emph{close} to the model. We show that this question can be efficiently answered by solving a series of convex optimizations. Solving the optimizations results in a lower bound on the minimum number of data points that are attributed to contamination. In the simplest case, each convex optimization is an inequality constrained entropy minimization problem (whose dual is a constrained geometric program) which can be solved in real time and at scale for many applications. More generally, the approach can be applied to any setting in which the model consists of a convex set of distributions. Two specific instances which we discuss are \emph{1)} models defined by any number of distributions with arbitrary mixture proportions, and \emph{2)} models defined by the set of distributions with small Kullback-Leibler (KL) divergence to a specified distribution, which arises when the model itself is generated from a finite amount of data. Lastly, we show that the lower bound output by the optimization converges to an upper bound known as the separation distance at a rate of $O(\sqrt{\log ( p)/p })$, where $p$ is the number of data points.
\section{Quantifying Contamination} \label{sec:prob_statement} \subsection{Notation} Let $P \in \mathbb{R}^n$ and $Q \in \mathbb{R}^n$ denote probability mass functions over $n$ categories, with elements $P_i$, $i=1,\dots,n$ and $Q_i$, $i=1,\dots,n$. Throughout, $P$ denotes the distribution under test, $Q$ denotes a member distribution of the model, $Q^0$ denotes the `true' unknown model distribution, and $Q^j$ indexes multiple distributions. The empirical distribution of a sequence of random variables $X=X_1, \dots, X_p \in \mathcal{X}^p$ is the relative proportion of occurrences of each element of $\mathcal{X}$ in $X$. Specifically, let $\mathcal{X}=: \left\{x_1, x_2, \dots, x_n \right\}$ and define $p_i = \sum_{j=1}^p \mathbf{1}_{\left\{X_j=x_i \right\}} $ for $i=1,\dots,n$. Then $\widehat{P}(X) = \frac{1}{p}\left\{ p_1, p_2, \dots, p_n \right\}.$ $\mathbb{P}_{Q}(\cdot)$ denotes probability measure with respect to distribution $Q$. For simplicity of notation, we write $\mathbb{P}_{{Q}}(\{\widehat{P}^1,\widehat{P}^2\})$ as short hand for $\mathbb{P}_{{Q}}\left(\left\{X \in \mathcal{X}^p : \widehat{P}(X) \in \{ \widehat{P}^1,\widehat{P}^2 \} \right\} \right)$. The Kullback-Leibler divergence between two distributions is defined in the usual manner, \begin{align*} D(P||Q) := \sum_{i} P_i \log \left( \frac{P_i}{Q_i}\right). \end{align*} $D(P||Q)$ is a jointly convex function in $P$ and $Q$. The minimum entropy set, $\left\{P: D(P||Q) \leq \epsilon \right\}$, is a convex set (for a fixed $Q$, $\epsilon$). Lastly, let $\mathbb{S}^n$ denote the probability simplex: \begin{align} \nonumber \mathbb{S}^n := \left\{P \in \mathbb{R}^n: \sum_i P_i =1, \ P_i \geq 0 \quad i=1, \dots, n \right\}. \end{align} \subsection{Quantifying Contamination} \label{sec:probstate} Consider a set of model distributions $\mathcal{Q}$ whose elements are supported over a finite number of categories $\mathcal{X}$ with $|\mathcal{X}|=n$. 
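Before proceeding, the empirical distribution and KL divergence defined in the notation above can be computed directly; the following minimal Python sketch applies the usual zero-mass convention $0\log(0/q)=0$:

```python
import math
from collections import Counter

def empirical(xs, categories):
    """P_hat(X): relative frequency of each category in the sample."""
    counts = Counter(xs)
    p = len(xs)
    return [counts[c] / p for c in categories]

def kl(P, Q):
    """D(P||Q), skipping zero-mass terms; assumes Q_i > 0 wherever P_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(P, Q) if pi > 0)
```

For instance, `kl(empirical(data, categories), Q)` gives the divergence between a dataset's empirical distribution and a candidate model distribution.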
For example, $\mathcal{Q}$ could be a set of minimum entropy distributions, or a mixture distribution, $Q = \sum_{j=1}^\ell \pi_j Q^{j}$, where $\pi_1,\dots,\pi_{\ell}$ are unknown ($\mathcal{Q}$ is the set of all such mixture distributions). Let $X \in \mathcal{X}^p$ denote a collection of samples. An unknown subset of the samples consists of $i.i.d.$ draws from an unknown distribution $Q \in \mathcal{Q}$. The remaining samples, indexed by $\mathcal{C} \subset \lbrack p \rbrack $, are generated by some other means, and correspond to \emph{contaminated} samples. This paper is concerned with lower bounding the size of the contaminating set $\mathcal{C}$ given the set of model distributions $\mathcal{Q}$, a specified significance level (a $p$-value), and the observed samples $X_1,\dots, X_p$. Intuitively, if the empirical distribution of a sequence of random variables is \emph{close} to the model distribution in terms of GoF, we conclude the sequence is \emph{not} contaminated. To quantify this intuition, we define a set of \emph{typical} empirical distributions based on statistical significance; we note this definition is distinct from the usual definitions of \emph{strongly} and \emph{weakly} typical, and making this connection is a contribution herein. \begin{definition}{Typical}. Let $\widehat{P}^{1}, \widehat{P}^{2}, \dots$ be an ordering on all empirical distributions (of $p$ samples and $n$ categories) such that $\mathbb{P}_{{Q}}(\widehat{P}^{1}) \leq \mathbb{P}_Q(\widehat{P}^{2}) \leq \dots$.
A sequence of random variables $X$ with $\widehat{P}(X) = \widehat{P}^{\ell}$ is \emph{typical} at significance level $\epsilon$ with respect to $\mathcal{Q}$ iff \begin{align} \label{eqn:atypical} \sup_{Q \in \mathcal{Q}} \mathbb{P}_Q\left(\left\{\widehat{P}^{1}, \widehat{P}^{2}, \dots, \widehat{P}^{\ell-1}, \widehat{P}^{\ell}\right\}\right) \geq \epsilon \end{align} for any such ordering\footnote{Note the ordering is an implicit function of $Q$; we suppress this for simplicity of notation.}. \end{definition} The definition implies a sequence of random variables $X$ is typical if the probability of the empirical distribution of $X$ \emph{or any less likely empirical distribution} is at least a specified significance level. Note $\epsilon$ is interpreted as a $p$-value; as $\epsilon$ approaches zero, all sequences become typical (requiring stronger evidence to reject the null hypothesis). As $\epsilon$ increases, fewer sequences are typical. \begin{definition}{Contaminated.} We say $X$ is \emph{contaminated} iff $X$ is not typical (with respect to $\mathcal{Q}$ and with significance $\epsilon$). Likewise, an empirical distribution $\widehat{P}(X)$ is \emph{contaminated} iff $X$ is not typical. \end{definition} In this paper we study the following question. Let $X = X_1,\dots, X_p$ be a dataset, and let $X_{\widehat{\mathcal{C} }}= \{X_i : i \in \widehat{ \mathcal{C}} \ \}$ be any subset of the original dataset. What is the smallest set $\widehat{\mathcal{C}} \subset \lbrack p \rbrack $ such that $X_{\lbrack p \rbrack \setminus \widehat{\mathcal{C}}}$ \emph{is not contaminated}? Specifically, let \begin{align*} c^* = \inf \left\{ \vert \widehat{\mathcal{C} } \vert : X_{\lbrack p \rbrack \setminus \widehat{\mathcal{C}}} \mbox{ is typical for $(\mathcal{Q}, \epsilon)$} \right\}. \end{align*} How and under what conditions can one compute $c^*$ efficiently?
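For intuition, typicality per the definition above can be checked exactly for very small problems by enumerating all empirical distributions and their multinomial probabilities under a singleton model $\mathcal{Q}=\{Q^0\}$. The sketch below is a hedged illustration only: the enumeration is exponential in $n$, and ties in the ordering are grouped together for simplicity.

```python
import math
from itertools import combinations

def compositions(p, n):
    """All count vectors (c_1, ..., c_n) summing to p (stars and bars)."""
    for bars in combinations(range(p + n - 1), n - 1):
        prev, counts = -1, []
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(p + n - 2 - prev)
        yield tuple(counts)

def multinomial_pmf(counts, Q):
    """Probability of observing these category counts under Q."""
    p = sum(counts)
    coef = math.factorial(p)
    for c in counts:
        coef //= math.factorial(c)
    prob = float(coef)
    for c, q in zip(counts, Q):
        prob *= q ** c
    return prob

def is_typical(counts, Q0, eps):
    """X is typical iff the total probability of its empirical distribution
    and all less likely ones is at least eps (ties grouped together)."""
    obs = multinomial_pmf(counts, Q0)
    tail = sum(pm for c in compositions(sum(counts), len(Q0))
               if (pm := multinomial_pmf(c, Q0)) <= obs)
    return tail >= eps
```

For example, under a fair-coin model $Q^0=(0.5,0.5)$ with $p=10$, the balanced sample $(5,5)$ is typical at $\epsilon=0.05$ while the extreme sample $(0,10)$ is not.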
Our main focus and insight will be on the continuous approximation to $c^*/p$, denoted $\alpha^*$: \begin{align*} \alpha^* = \inf \left\{ \alpha \in \lbrack 0,1 \rbrack : \exists P \in \mathcal{P}(X,\alpha) \mbox{ typical for $(\mathcal{Q}, \epsilon)$} \right\} \end{align*} where $\mathcal{P}(X,\alpha)$ is the set of all distributions that can be created by discarding a fraction $\alpha$ of the mass of $\widehat{P}(X)$ (see Sec. \ref{sec:ConvRel}): \begin{align} \label{eqn:pram} \mathcal{P}(X,\alpha) = \left\{P \in \mathbb{S}^n :P_i \leq \frac{\widehat{P}_i(X)}{1-\alpha} \quad i=1,\dots,n \right\}. \end{align} Throughout, $\alpha$ is a key parameter that represents the fraction of the dataset attributed to contamination; $\alpha^*$ represents the smallest $\alpha$ such that there exists a subset of the original data of size $p(1- \alpha)$ that is \emph{not} contaminated. If $\alpha^* = 0$, the original dataset is not contaminated; if $\alpha^* = 1$, the entire dataset must be attributed to contamination. \subsection{Separation Distance} We assume $X_i \overset{i.i.d.} \sim Q^{0}$ for all $i \not \in \mathcal{C}$. For $X_i$, $i \in \mathcal{C}$, no assumption is made. This agnostic approach has inherent limitations. In the extreme case the distribution of the contaminated data could exactly follow that of the model. Here, the distribution of the full dataset should closely match the model, and be indistinguishable from the setting where $\mathcal{C}$ is empty. No contamination should be reported to within the significance level (in $m$ realizations of $X^p$, we expect $c^* \neq 0$ fewer than $m \epsilon$ times). A more interesting scenario is when the empirical distribution of the full dataset converges to a \emph{distinct} distribution \emph{i.e.}, $\widehat{P}(X^p) \rightarrow P \neq Q^{0}$. In the case that $\mathcal{Q} = \left\{ Q^0\right\}$, a consistent estimator will report non-zero contamination for large $p$. 
$P$ can be written as a mixture distribution, and we are interested in reporting the smallest $\kappa$ such that $(1-\kappa) Q^{0} + \kappa F = P$ for \emph{some} distribution $F$. $F$ represents the contaminating distribution, and $\kappa$ the proportion of the samples which are drawn from $F$. This minimum value of $\kappa$ is known as the \emph{separation distance}~\cite{aldous1987strong} between $P$ and $Q^{0}$, written succinctly as \begin{align*} \kappa(P||Q^{0}) = \max_{i \in [n]} \left(1 - \frac{P_i}{Q_i^{0}}\right). \end{align*} In this way, the separation distance between the empirical distribution of the data and model distribution plays an important role in the behavior of $c^*$ and $\alpha^*$ as the sample size grows. We show as a corollary to later results that $\alpha^*$ is both upper bounded by and converges to $\kappa(\widehat{P}(X)||Q^{0})$ as $p$ grows (see Proposition \ref{prop:12324} and Theorem \ref{thm:largep}). \subsection{Convex Relaxations} \label{sec:ConvRel} With the exception of problems involving data over only two categories ($n=2$), directly checking if a sample is contaminated is computationally prohibitive, even in the setting where the model consists of a single distribution (when $\mathcal{Q} = \{Q^{0}\}$). Alternatively, using large deviations results, bounds can be derived. The bound presented below can confirm if a particular dataset is contaminated. The theorem involves the KL divergence between the empirical distribution and a member of $\mathcal{Q}$. In the case where $\mathcal{Q} = \{Q^{0}\}$, the bound provides a simple way to check if a sample is contaminated at a particular significance level $\epsilon$; in the more general case, if $\mathcal{Q}$ is a convex set, numerical optimization techniques can efficiently check the condition. \begin{thm}(Outer Bound).
\label{thm:outer} If \begin{align} \label{eqn:Qouter} \inf_{Q \in \mathcal{Q} } D(\widehat{P}(X)||Q) \geq \frac{1}{p} \log \left( \frac{1}{\epsilon} \right) + \frac{2n}{p} \log (p+1) \end{align} then $X$ is contaminated at significance level $\epsilon$. \end{thm} \begin{proof} See Appendix A. \end{proof} Theorem \ref{thm:outer} is an outer bound; any empirical distribution with KL distance \emph{greater} than the stated quantity (from \emph{all} elements in $\mathcal{Q}$) \emph{is} contaminated. Theorem \ref{thm:outer} can be used to bound the size of the smallest set $\mathcal{C} \subset \lbrack p \rbrack $ such that $X_{\lbrack p \rbrack \setminus \mathcal{C}}$ \emph{is not contaminated}. This is simplified if $\mathcal{Q}$ consists of a single model distribution; we first discuss this scenario. In principle, given a dataset $X \in \mathcal{X}^p$ and a model distribution $Q^{0}$, one could first check if $X$ is contaminated by evaluating (\ref{eqn:Qouter}). If (\ref{eqn:Qouter}) holds, $X$ is contaminated, and an immediate question follows -- how many and which data points must be excluded so that (\ref{eqn:Qouter}) no longer holds? An exhaustive approach to answer this question would be the following. For each $x_i \in \mathcal{X}$, discard a single data point that takes the value $x_i$, and recalculate the empirical distribution with the data point removed. Of the $n$ new empirical distributions, check if the one with minimum KL divergence to the model distribution still satisfies (\ref{eqn:Qouter}). If (\ref{eqn:Qouter}) still holds for all possible empirical distributions with one data point removed, check all distinct empirical distributions that can be created by discarding 2 data points (roughly $n^2$ possibilities, provided each $x_i$ appears at least twice in the data). Continuing in this manner, one would check each of the $\sim n^m$ possible empirical distributions that can be created by discarding $m$ data points.
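The exhaustive search just described can be sketched directly for small problems (a hypothetical helper that brute-forces all discard vectors; it assumes $m$ is strictly less than the sample size and that $Q$ has full support):

```python
import math
from itertools import product

def D_star(counts, Q, m):
    """Minimum KL divergence to Q over all empirical distributions
    obtained by discarding exactly m data points from `counts`
    (assumes m < sum(counts) and Q_i > 0 for all i)."""
    p = sum(counts)
    best = math.inf
    for discard in product(*(range(min(c, m) + 1) for c in counts)):
        if sum(discard) != m:
            continue
        kl = sum(((c - d) / (p - m)) * math.log(((c - d) / (p - m)) / q)
                 for c, d, q in zip(counts, discard, Q) if c > d)
        best = min(best, kl)
    return best
```

The cost grows exponentially in the number of categories, which motivates the integer-program view and its convex relaxation discussed next.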
When (\ref{eqn:Qouter}) is first violated, $m$ lower bounds the minimum number of data points that must be excluded to match the model. We can interpret this as a series of integer programs. For $m=0,\dots, p$ define $D^*_m$ as the solution to \begin{equation} \begin{aligned} \label{eqn:int} & \underset{m_1, m_2,\dots,m_n \in \mathbb{N}^n}{\text{minimize}} & & \sum_{i=1}^n \frac{p_i - m_i}{p-m} \log \left(\frac{\frac{p_i - m_i}{p-m}}{Q^{0}_i} \right) \\ & \text{subject to} & & \sum_i m_i = m \\ & & & m_i \leq p_i \qquad i = 1, \ldots, n \end{aligned} \end{equation} where $p_i$ is the number of times $x_i$ appears in the original dataset $X$. The optimization variables, $m_i$, represent the number of samples to discard corresponding to a particular $x_i$. Note that the objective is the KL divergence between the \emph{new} empirical distribution (with $m$ samples removed) and the known distribution $Q^{0}$. The value of $D^*_m$ can be checked against Theorem \ref{thm:outer}, providing conditions under which one can find a set $|\mathcal{C}| =m$ such that $X_{\lbrack p \rbrack \setminus \mathcal{C}}$ is not contaminated. This gives a bound on $c^*$. Specifically, \begin{align*} c^* \geq \max \left\{m: D^*_m \geq \frac{1}{p-m} \log \left( \frac{1}{\epsilon} \right) \right. \hspace{1cm} \\ \left. + \frac{2n}{p-m} \log (p-m+1) \right\}. \hspace{-.9cm} \nonumber \end{align*} Note that the condition in Theorem \ref{thm:outer} will always be violated for some $m$; in particular, for $m=p$, by convention $D_p^*=0$, implying that the empty set, $X_{ \{ \} }$, is not contaminated. The optimization in (\ref{eqn:int}) is an integer program over a subset of $\mathbb{N}^n$. To efficiently solve the optimization, we can translate the integer valued variables to their continuous counterparts; specifically, let $\widehat{P}_i= p_i/p$ be the original empirical distribution, and $\alpha = m/p$ represent the fraction of the total samples discarded.
Making these substitutions results in a convex entropy minimization problem: \begin{equation} \label{eqn:contop1} \begin{aligned} & \underset{P \in \mathbb{S}^n}{\text{minimize}} & & \sum_i P_i \log \left(\frac{P_i}{Q_i^{0}} \right) \\ & \text{subject to} & & P_i \leq \frac{\widehat{P}_i}{1-\alpha} \qquad i = 1, \ldots, n \end{aligned} \end{equation} where $\alpha \in \lbrack 0,1 \rbrack$ represents the fraction of samples removed. More generally, $\mathcal{Q}$ is a set of distributions. The same continuous approximation results in a joint optimization over the model space $\mathcal{Q}$ and the space of empirical distributions, $\mathcal{P}(X,\alpha)$ defined in (\ref{eqn:pram}). Formally, let $D^*_\alpha$ be given as \begin{equation} \label{eqn:contop} \begin{aligned} D^*_\alpha = & \underset{P \in \mathcal{P}(X,\alpha), Q \in \mathcal{Q}}{\text{min}} & & \sum_i P_i \log \left(\frac{P_i}{Q_i} \right). \\ \end{aligned} \end{equation} If $\mathcal{Q}$ is a convex set, the above optimization can be efficiently solved in many settings (see Sec. \ref{sec:minentro}). To answer our original question and bound $\alpha^*$, one can conduct a line search over $\alpha \in \lbrack 0,1 \rbrack$, repeatedly solving the above optimization, and checking the output value of $D^*_\alpha$ against Theorem \ref{thm:outer}. This is captured in the following proposition. \begin{proposition} \label{prop:12324} Let \begin{align} \label{eqn:alp_ub} \alpha_{\mathrm{L}} = \max \left\{\alpha: D^*_\alpha \geq \frac{1}{p(1-\alpha)} \log \left( \frac{1}{\epsilon} \right) \right. \hspace{1cm} \\ \left. + \frac{2n}{p(1-\alpha)} \log \left(p(1-\alpha)+1\right) \right\} \hspace{-.3cm} \nonumber \end{align} then $\alpha_{\mathrm{L}}\leq \alpha^*$. \end{proposition} \begin{proof} The proof follows directly from Theorem \ref{thm:outer}. 
For any $\alpha$ such that the condition on $D^*_\alpha$ in (\ref{eqn:alp_ub}) holds, by Theorem \ref{thm:outer}, any distribution in $\mathcal{P}(X,\alpha)$ is contaminated. We note that $\alpha_{\mathrm{L}}$ always exists by monotone properties of $D_\alpha^*$ and the right-hand side of the conditional in (\ref{eqn:alp_ub}). See Appendix B, Theorem \ref{thm:largep} for details. \end{proof} Fig. \ref{fig:geo} shows a geometric interpretation of Proposition \ref{prop:12324} and the optimization in (\ref{eqn:contop1}). See the caption for details. The lower bound obtained by solving the series of optimization problems converges to the separation distance, captured by the following theorem. \begin{thm} \label{thm:conv} Let $\mathcal{Q} = \{ Q^{0} \}$. Fix $\widehat{P}(X)$. Then \begin{align*} \kappa(\widehat{P}||Q^{0}) - \alpha_{\mathrm{L}} = O\left(\sqrt{\frac{\log p}{p} } \right). \end{align*} \end{thm} \begin{proof} See Appendix B. \end{proof} Theorem \ref{thm:conv} is stated for a fixed $\widehat{P}(X)$, although one would in general assume $\widehat{P}(X)$ to be an implicit function of $p$. The reason for fixing $\widehat{P}(X)$ is both generality and simplicity. The assumption decouples randomness from the convergence rate of the upper bound and the lower bound produced by the optimization; without this assumption, the upper and lower bounds would be random variables and necessitate a probabilistic statement. We also note that a precise limit statement can be readily extracted from the proof. \begin{figure} \centering \includegraphics[width=.9\columnwidth]{single_annotated.png} \caption{Geometric interpretation of Proposition \ref{prop:12324} and the optimization in (\ref{eqn:contop1}) with $\mathcal{Q}= \left\{ Q^0\right\}$. The width of the hypercube around $\widehat{P}$ is $\alpha$.
As $\alpha$ is increased, the hypercube eventually intersects the `outer bound' set, which represents the set of distributions closest to ${Q}^{0}$ in KL divergence; the sets intersect when $\alpha =\alpha_{\mathrm{L}}$. Note that the `outer bound' set also increases in size as $\alpha$ increases. \label{fig:geo}} \label{fig:3d} \end{figure} \subsection{Discussion} \label{sec:mix} \label{sec:minentro} In practice, it is often the case that the precise model distribution is not known; instead, it may be known that the model distribution comes from some family of distributions. This arises in anomaly detection when normal events are known to correspond to unknown proportions of samples from a finite set of distributions. This is the case of the mixture model, \emph{i.e.}, $\mathcal{Q}$ is the set of all distributions that can be represented as $ Q = \sum \pi_j Q^{j}$ for any mixture proportions $\pi_j$. As the set of mixture distributions with unknown mixture components is a convex set, we can directly address this setting using the developments of Sec. \ref{sec:ConvRel}. Jointly optimizing over the mixture weights and the mixture distribution, the optimization takes the form \begin{equation} \begin{aligned} \label{eqn:mixmod} & \underset{P \in \mathcal{P}^n, \ \pi \in \mathbb{S}^k }{\text{minimize}} & & \sum_i P_i \log \left(\frac{P_i}{\sum_{j=1}^k \pi_j Q_{i}^{j}} \right). \\ \end{aligned} \end{equation} We note that the above optimization can be solved at scale in real time for many applications; see the discussion of numerical experiments below for details. For many applications, model distributions are generated using a \emph{finite} amount of data from known good sources (\emph{i.e.}, sources that are known to have no contamination). Let $\widehat{Q}$ be an empirical distribution generated from $p'$ samples of an $i.i.d.$ population, and consider the set \begin{eqnarray*} \mathcal{Q}' = \left\{Q: \widehat{Q} \mbox{ is typical for } (\{Q\}, \epsilon) \right\}.
\end{eqnarray*} Here, $\mathcal{Q}'$ is the set of all distributions that have $\widehat{Q}$ as a typical empirical distribution. As before, determining membership in $\mathcal{Q}'$ is intractable for large $p'$ and more than two categories. Let \begin{align*} \bar{\mathcal{Q}} = \left\{Q : D(\widehat{Q}||Q) \leq \frac{1}{p'} \log \left(\frac{1}{\epsilon} \right) + \frac{2n}{p'} \log(p'+1)\right\}. \end{align*} $\bar{\mathcal{Q}}$ satisfies two important properties: first, $\mathcal{Q}' \subseteq \bar{\mathcal{Q}}$ by Theorem \ref{thm:outer}, and second, $\bar{\mathcal{Q}}$ is a convex set. Solving the optimization in (\ref{eqn:contop}) with $\mathcal{Q} = \bar{\mathcal{Q}}$ provides a powerful result, which we state in the following proposition. \begin{proposition} \label{prob:last} Consider two empirical distributions $\widehat{P}$ and $\widehat{Q}$. Let $\mathcal{Q} = \bar{\mathcal{Q}}$, defined above, and let $D_0^*$ be the solution to the optimization in (\ref{eqn:contop}) with $\alpha = 0$. If \begin{align*} D^*_0 \geq \frac{1}{p} \log \left( \frac{1}{\epsilon} \right) + \frac{2n}{p} \log \left(p+1\right), \end{align*} there is no $Q$ that simultaneously satisfies \emph{1)} $\widehat{Q}$ is typical with respect to $Q$ and \emph{2)} $\widehat{P}$ is typical with respect to $Q$. \end{proposition} Satisfying Proposition \ref{prob:last} implies that observing a $\widehat{Q}$ and a $\widehat{P}$ generated by the same underlying distribution \emph{by chance} can occur at most a fraction $\epsilon$ of the time; in this sense, $\widehat{P}$ must be contaminated. With a single parameter search over $\alpha \in \lbrack 0,1 \rbrack$, the lower bound applies: $\alpha^* \geq \alpha_{\mathrm{L}}$. We note that the formulation does not require the empirical model and the distribution under test to have joint support. \begin{figure} \centering \includegraphics[width=.91\columnwidth]{varyp_normed_wbg.pdf} \caption{Numerical example.
$n=11$, $\epsilon = 0.05$, \(\mathcal{Q} = \{Q^0\} \), with $Q^0$ a uniform distribution over 11 categories. Solid lines show \(\alpha_\mathrm{L} \) divided by \( \kappa(\widehat{P}||Q^0) \) for mixture distributions \(\widehat P_{\textrm{dip}} = (1 - \pi) \ Q^0 + \pi \mathcal{U}_{10}\), where $\mathcal{U}_{10}$ is a uniform distribution over 10 of the 11 categories. Dashed lines show \(\alpha_\mathrm{L} \) divided by \( \kappa(\widehat{P}||Q^0) \) for \(\widehat P_\textrm{spike} = (1 - \pi) Q^0 + \pi \delta\), where $\delta$ is a point mass. } \label{fig:varyp_normed} \end{figure} Numerical experiments were conducted to highlight the utility of Proposition \ref{prop:12324}; results are shown in Fig. \ref{fig:varyp_normed}. In addition to the deterministic experiments in Fig. \ref{fig:varyp_normed}, experiments with random samples from various model and test distributions as input were run, showing similar convergence behavior. An experiment with $\mathcal{Q}$ being a set of 10 mixture distributions with $n=50$ was also conducted. The line search over $\alpha$ was completed using a bisecting search to an accuracy of $2^{-28}$ (the optimization was solved 27 times for each experiment). Averaged over 50 trials, the total time to compute $\alpha_{\mathrm{L}}$ was 0.4 seconds. Experiments were implemented using CVXOPT \cite{cvxopt} and results visualized with matplotlib \cite{Hunter:2007}. \section{Related Work} \label{sec:related} Related work can be broadly classified into traditional work in goodness of fit (GoF) testing and more recent work in anomaly detection. GoF testing has an extensive literature. When the data are binary valued and the model distribution is Bernoulli, quantifying contamination using GoF tests can be addressed by evaluating binomial probabilities (a technique known as Fisher's Exact method \cite{mehta1984exact}). When the data take on more than two values, exact solutions for the level of contamination become intractable.
A customary approach to GoF testing for categorical data is Pearson's $\chi^2$ test \cite{agresti2014categorical}. This approach to GoF testing can be quite powerful, but suffers from limitations. $\chi^2$ tests are approximations, and are known to be invalid under certain conditions. In particular, the test is invalid when $p_i = 0$ for one or more categories. Nonetheless, employing the $\chi^2$ test, one can deduce another optimization (much as we do in Sec. \ref{sec:ConvRel}) to answer the aforementioned question; we note that the resulting optimization is a separable quadratic program with linear equality constraints, which has an analytic solution \cite{bay2010analytic}, and would be an interesting starting point for future work. Since Pearson's $\chi^2$ test hinges on a normal approximation, this approach would not result in strict contamination bounds. More specific to the contamination estimation problem presented here, recent work includes decontamination with multiclass label noise \cite{blanchard2014decontamination, scott2013classification}, which focuses on recovering the proportions of a set of mixture distributions present in a dataset. There is an extensive literature on the related topics of anomaly detection and outlier detection, including work employing entropy based techniques, in particular \cite{hero2006geometric} and \cite{gu2005detecting}; we note the formulations here are distinct in that the level of contamination is not estimated. Lastly, we briefly discuss related work in anomaly detection in the areas of computer networks, systems and security, as this is the motivation for our developments.
Early work on identifying anomalous or unexpected behaviors such as faults ({\em e.g.,} due to outages or failures) or spikes ({\em e.g.,} associated with DoS attacks or flash crowds) in computer network traffic was based on the application of graph models, time series and multi-resolution methods, {\em e.g.,}~\cite{Feather93,Katzela95,Brutlag00,Barford02}, and Principal Component Analysis (PCA)~\cite{Lakina04,Lakina04a,Lakina05}. There are significant difficulties in tuning these methods to provide low false alarm rates in practice~\cite{Ringberg07}, necessitating methods based on statistical significance, as presented here. \bibliographystyle{IEEEtran}
\section{Introduction} Fault tolerant control (FTC) has attracted much attention from control engineers since it can accommodate sensor and actuator faults to preserve desired closed-loop performance \cite{50_chen2012, 50_isermann1997, 50_isermann2006, 50_blanke2006, 10_zhang2008}. FTC schemes mainly fall into two categories: passive and active. Passive approaches aim to retain stability and performance against all faults with a single controller. On the other hand, an active approach typically consists of a fault diagnosis stage followed by a controller reconfiguration. Although active approaches require more computational power during implementation, they typically yield less conservative results and better closed-loop performance when faults occur. Recently, great research attention has been devoted to the automotive industry to achieve improved handling, comfort and safety, as well as capabilities such as route planning, road condition assessment, pothole detection, and fuel efficiency prediction \cite{J2,J3,6214702,J8,C7,C1,C2,1219456}. In control-related areas, various model-based control and estimation approaches have been widely developed \cite{C3,J7,7506101,C9,7533442,J9,C4}. However, model-based FTC design for the automotive Air Conditioning (A/C) system has rarely been touched. The main reason for this stagnation is the lack of a control-oriented model that can balance model accuracy and computational complexity to characterize the thermo-fluid dynamics of the refrigerant \cite{10_katipamula2005p1}. During heat removal/release at the heat exchangers, the refrigerant experiences a liquid/vapor phase change. This phase change is a complicated process in which mass and energy balance is difficult to characterize. Hence, previous studies were based on static or empirical models, whose performance after fault occurrences was unsatisfactory in transients \cite{10_keir2006}.
Although steady-state analysis is the main stream in FTC, studies have recently emerged on dynamic fault diagnosis \cite{10_janecke2011}, in which transient data and models are used to identify faulty system behaviors. For example, a lumped parameter method has been applied to a vapor compression cycle with a fixed orifice device to develop an observer-based scheme \cite{10_wagner1992}. Black box models obtained through data-driven techniques, such as ARX and ARMAX, were used to generate structured residuals to predict faults in an air handling unit \cite{10_lee1996}. In \cite{10_keir2006}, a comprehensive model was used to represent the vapor compression cycle as opposed to an air handling unit, and a linearized model was used to explore the sensitivity of each output to a variety of faults. However, no practical fault diagnosis algorithm was implemented and tested. In summary, model-based FTC design for automotive A/C systems remains unsatisfactory in three respects. First, the vapor compression cycle, a critical subsystem in the A/C system for energy conversion, has not been comprehensively studied; previous work focused on the air handling unit. Second, lumped parameter modeling and data-driven modeling approaches are not physics-based, offering little insight into the relationships between possible faults and their corresponding symptoms. Third, a gap exists between fault diagnosis and control design, in that a seamlessly integrated approach combining both fault diagnosis and control action has not yet been investigated. In this paper, we aim to provide a benchmark for dynamic fault diagnosis and control of the vapor compression cycle in a unified framework, by merging the latest advances in both practice and theory. A promising approach to developing control-oriented models for heat exchangers was proposed in \cite{01_he1997, 01_Asada1998,02_Li2010} by exploiting the Moving Boundary Method (MBM).
The refrigerant is lumped based on its phase status, i.e., pure vapor, pure liquid, or mixed vapor and liquid. By exploiting the mean void fraction and the volumetric ratio of vapor over liquid, a set of differential equations describing the mass, momentum, and energy balances of the phase change process was developed and solved. This model is advantageous in modeling transient behavior, and it is more precise than existing modeling techniques. The use of a first-principle A/C model can significantly reduce the time required to develop and implement FTC algorithms. As mentioned, active FTC schemes rely on fault detection to introduce fault-feedback compensation or control reconfiguration based on identified fault information. For instance, an integrated control and fault detection design, based on a four parameter controller, was proposed in \cite{10_nett1988, 10_niemann1997, 10_stoustrup1997,10_marcos2005}, which incorporates two additional Degrees of Freedom (DoFs) for the purpose of fault diagnosis. An alternative implementation of the integrated control, referred to as Generalized Internal Model Control (GIMC), has been introduced in \cite{10_zhou2001,10_campos2003,10_campos2005, 10_campos2008}, which is able to overcome conflicts between performance and robustness in traditional feedback frameworks. With GIMC, a high performance controller is active under normal conditions, and a robust controller is activated when sensor/actuator faults or external disturbances are identified. In this paper, MBM A/C modeling and FTC using the GIMC structure are integrated in a unified framework. The contributions of this paper include the following. First, a first-principle vapor compression cycle model is exploited to model the complex thermo-fluid process. Furthermore, a fault tolerant control scheme is developed by using the GIMC method.
Finally, a gain-scheduling compensator for fault accommodation is developed, and simulations are presented to demonstrate the efficacy of the proposed framework. The rest of the paper is organized as follows. The fundamental theory of FTC with the GIMC structure is detailed in Section II. Mathematical modeling of the A/C plant and gain-scheduled $H_{\infty}$ controller design are introduced in Section III. A fault detection and isolation algorithm for sensor and actuator faults is developed in Section IV, and a fault tolerant controller is designed in Section V with a preliminary study on the gain-scheduled GIMC structure. Concluding remarks are made in Section VI. \section{Fault Tolerant Control (FTC) Scheme} In this section, a general FTC scheme for automotive A/C systems is introduced with individual modules detailed. Integration of the control action and the reconfiguration mechanism is realized through a method called the GIMC structure, which can actively reconfigure the controller once faults occur. \subsection{Controller Reconfiguration} The FTC scheme of an automotive A/C system is illustrated in Figure \ref{fig:ClosedLoop}. The scheme consists of four modules: the A/C plant, a gain-scheduled controller, Fault Detection and Isolation (FDI), and a reconfiguration mechanism. \begin{figure*}[!htb] \centering \includegraphics[width=0.8\textwidth]{FTCScheme.pdf} \caption{Interconnections of Plant, Controller, FDI and Reconfiguration.} \label{fig:ClosedLoop} \end{figure*} As illustrated in Figure \ref{fig:ClosedLoop}, a basic automotive A/C system is composed of four primary components: an evaporator, a compressor, a condenser, and an expansion valve.
The vapor compression cycle removes heat from the air flowing into the cabin through the evaporator, as the refrigerant evaporates from the two-phase (TP) status into the superheated (SH) status, and rejects heat to the air flowing through the condenser, as the refrigerant condenses from the superheated (SH) status into the sub-cooled (SC) status through the two-phase (TP) status. Enthalpy, mass flow rate, and pressure are exchanged among the four components. Basically, the two heat exchangers set the pressures of the system, while the compressor and expansion valve determine the mass flow rates at the inlet and outlet of the evaporator and condenser. From a system-level perspective, the interfaces of the A/C plant to the rest of the scheme are: \begin{enumerate} \item The controller sends commands to the A/C plant through the two controllable inputs, the compressor speed $N_c$ and the valve position $\alpha$; \item Two measurements are available to the controller and the FDI module, the evaporator pressure $p_e$ and the superheat temperature $SH$. \end{enumerate} The air mass flow rates and inlet temperatures on the evaporator side, $\dot{m}_{ea}, T_{ea,in}$, and the condenser side, $\dot{m}_{ca}, T_{ca,in}$, are measurable but noncontrollable disturbances. The air temperatures leaving the heat exchangers, $T_{eo},T_{co}$, and the tube wall temperatures, $T_{ew}, T_{cw}$, are calculated using a control-oriented A/C model. A controller is designed to track prescribed trajectories of two output variables, namely the evaporator pressure $p_e$ and the superheat temperature $SH$ \cite{10_zhang2014}. The reference values for the tracked variables are labeled as $p_{e,r}$ and $SH_r$, respectively, and are generated by higher level optimization algorithms developed for improving energy conversion efficiency.
Meanwhile, the controller is expected to reject disturbances caused by the variation of the air mass flow rate at the evaporator, $\dot m_{ea}$, which is manually set by the driver or intelligently regulated by the cabin control unit. At different cooling loads, the coupling between inputs and outputs changes significantly. Hence, the controller must be scheduled according to the system operating condition. An FTC scheme is targeted to achieve stability and performance, not only when all modules function normally, but also in cases when there are faults in sensors, actuators, or other system components \cite{10_zhang2008}. The faults of interest are one actuator fault (compressor speed) and one sensor fault (pressure reading). Both are assumed to enter the A/C system additively, meaning that 1) the actual compressor speed $N_{cmp,act}$ is the sum of the commanded compressor speed $N_{cmp}$ sent by the controller and a faulty compressor speed $f_N$; 2) the pressure measurement $p_{e,msr}$ available to both the controller and the FDI module is the actual pressure in the system $p_e$ plus a faulty reading $f_p$. If the FDI module and reconfiguration mechanism are absent, the FTC scheme is considered passive, as the controller is pre-determined in the design phase. Since it aims to be robust against a class of presumed faults, its fault-tolerant capabilities are limited. In contrast, an FTC scheme with both the FDI module and the reconfiguration mechanism is considered active, because it reacts to faults by reconfiguring control actions so that the stability and performance of the closed-loop system can be preserved. For a successful control reconfiguration, a real-time FDI module is required to provide precise information about the faulty components in the system. In the FDI module, both the commanded control inputs and the measurement outputs from the A/C plant are synthesized to generate residuals, signals essential for fault detection, isolation and estimation.
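The additive fault model described above can be illustrated with a minimal numerical sketch. The first-order pressure response, all numeric values, and the detection threshold below are illustrative assumptions, not the MBM plant or the $H_\infty$ filters developed later:

```python
import numpy as np

def simulate(n_steps, f_N=0.0, f_p=0.0, dt=0.1, tau=5.0, k=-0.01):
    """Toy first-order evaporator-pressure response to compressor speed.

    The actuator fault f_N adds to the commanded speed and the sensor
    fault f_p adds to the pressure reading, as in the additive model.
    """
    p_e = 300.0    # actual evaporator pressure [kPa] (illustrative)
    p_nom = 300.0  # fault-free model prediction used for residuals
    residuals = []
    for _ in range(n_steps):
        N_cmp = 1000.0       # commanded compressor speed [rpm]
        N_act = N_cmp + f_N  # actuator fault enters additively
        p_e += dt / tau * (k * N_act - (p_e - 310.0))
        p_nom += dt / tau * (k * N_cmp - (p_nom - 310.0))
        p_msr = p_e + f_p    # sensor fault enters additively
        residuals.append(p_msr - p_nom)
    return np.array(residuals)

THRESHOLD = 0.5  # kPa, set above the fault-free residual level

r_nom = simulate(200)             # no faults: residual stays at zero
r_sen = simulate(200, f_p=2.0)    # additive bias on the pressure reading
r_act = simulate(200, f_N=200.0)  # additive bias on the compressor speed
print(np.abs(r_nom).max() < THRESHOLD,
      np.abs(r_sen).max() > THRESHOLD,
      np.abs(r_act).max() > THRESHOLD)
```

Both fault types drive the residual away from zero, which is what the FDI filters designed below exploit; in the actual scheme the residual is generated dynamically rather than by a plain model-output comparison.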
\subsection{GIMC Structure} The general FTC scheme in Figure \ref{fig:ClosedLoop} relies on an FDI algorithm, followed by a fault accommodation into the nominal controller. The FDI module is expected to detect and isolate the occurrence of a fault in the closed-loop system, and provide an appropriate compensation signal to the controller in order to maintain the closed-loop performance. The general FTC scheme is realized through an integrated FDI design and reconfiguration mechanism referred to as the GIMC structure \cite{10_zhou2001}, as shown in Figure \ref{fig:GIMC_Scheme}, in which design methods for nominal conditions \cite{10_campos2008} are summarized below for reader's convenience. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{GIMCscheme1.pdf} \caption{Generalized Internal Model Control Structure (Adapted from \cite{10_zhou2001}).} \label{fig:GIMC_Scheme} \end{figure} Consider a linear system $P(s)$ affected by disturbances $d \in \mathbb{R}^r$ and possible faults $f \in \mathbb{R}^f$ described by \begin{equation} \left \{ \begin{array}{l} \dot{x} = Ax + Bu + F_1 f + E_1 d, \\ y = Cx + Du + F_2 f + E_2 d, \\ \end{array} \right. \end{equation} where $x \in \mathbb{R}^n$ represents the vector of states, $u \in \mathbb{R}^m$ is the vector of inputs, and $y \in \mathbb{R}^p$ represents the vector of outputs. The nominal system is considered to be controllable and observable. The system response $y$ can be analyzed in a transfer matrix form as follows: \begin{equation} y = P_{uy} u(s) + P_{fy} f(s) + P_{dy} d(s). \end{equation} A left coprime factorization for each transfer matrix can be derived as: \begin{equation} P_{uy} = \tilde{M}^{-1} \tilde {N},\, P_{dy} = \tilde{M}^{-1} \tilde {N}_d,\, P_{fy} = \tilde{M}^{-1} \tilde {N}_f, \end{equation} where $\tilde{M}, \tilde {N}, \tilde {N}_d, \tilde {N}_f \in RH_{\infty}$. 
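A left coprime factorization of this kind can be sketched numerically by choosing a stabilizing output-injection gain $L$ (so that $A+LC$ is Hurwitz) and forming state-space realizations of $\tilde{M}$ and $\tilde{N}$; the second-order system and pole locations below are illustrative assumptions, not the A/C plant:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative state-space data (not the A/C model).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Choose L so that A + L C is Hurwitz (pole placement on the dual system).
L = -place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T

# State-space realizations of the coprime factors:
#   M~(s) = I + C (sI - A - LC)^{-1} L
#   N~(s) = D + C (sI - A - LC)^{-1} (B + LD)
A_f = A + L @ C
M_tilde = (A_f, L, C, np.eye(C.shape[0]))
N_tilde = (A_f, B + L @ D, C, D)

def tf(Af, Bf, Cf, Df, s):
    """Evaluate a state-space transfer matrix at the complex frequency s."""
    return Cf @ np.linalg.solve(s * np.eye(Af.shape[0]) - Af, Bf) + Df

# Sanity check at one frequency: M~(jw)^{-1} N~(jw) equals P_uy(jw).
w = 1.0j * 0.7
P_val = tf(A, B, C, D, w)
fact_val = np.linalg.solve(tf(*M_tilde, w), tf(*N_tilde, w))
print(np.allclose(P_val, fact_val))
```

The same construction applied to $K$ (with a stabilizing state-feedback gain instead) yields the factors $\tilde{V}$ and $\tilde{U}$ used in the next step.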
Now suppose a nominal controller $K$ stabilizes the nominal plant $P_{uy}$ and provides a desired closed-loop performance in terms of robustness, transient, and steady state responses. The controller can be represented by a left coprime factorization, \begin{equation} K = \tilde{V}^{-1} \tilde{U}. \end{equation} The accommodation scheme adopted in this paper is motivated by a new implementation of the Youla parametrization referred to as GIMC \cite{10_zhou2001}. This configuration allows the system to perform FDI and fault accommodation in a unified structure, where the two processes are carried out by selecting two design parameters $Q, H \in RH_{\infty}$. Consequently, the residual $r$ is generated by the detection/isolation filter $H$, and the accommodation signal $q$ is generated by the compensator $Q$ from the filtered signal $f_e$, with the following criteria \cite{10_campos2008}. \begin{itemize} \item $H(s)$: the fault detection/isolation filter must diminish the effect of the disturbances or uncertainty on the residual signal, and maximize the effect of the faults. \item $Q(s)$: the robustification controller must provide robustness to the closed-loop system in order to maintain acceptable performance against faults. \end{itemize} The GIMC structure functions as follows: $r=0$ if there are no model uncertainties, external disturbances, or faults, and the control system is then solely governed by the high performance controller $K_0 = \tilde{V}^{-1} \tilde{U}$. On the other hand, the robustification controller $Q$ is only active when $r \neq 0$, i.e., when model uncertainties, external disturbances, or sensor/actuator faults are present.
The advantage of the GIMC structure is that if there is no uncertainty, the controller performs as well as the nominal controller does; if uncertainty exists, the controller implementation should in principle perform no worse than the standard robust controller implementation in terms of robustness and performance. \section{A/C Plant} Realization of the general FTC scheme using the GIMC structure requires a left coprime factorization of the A/C plant model in order to design the detection/isolation filter $H$ and the compensator $Q$. Here, the A/C plant described with the MBM is utilized to generate a control-oriented model that not only provides the $\tilde{M}$ and $\tilde{N}$ matrices for design purposes, but also serves as a nonlinear simulator for algorithm validation. In the MBM modeling framework, the compressor and the valve are modeled as static components. The dynamics related to the heat and mass transfer inside the heat exchangers are described using the MBM method \cite{01_he1997, 02_Li2010}, where the Reynolds transport theorem describing the mass and energy conservation for transient one-dimensional flow is applied to each phase region of the condenser and evaporator, with boundary conditions and refrigerant properties specified. After derivations detailed in \cite{50_zhang2014JDSMC} and not included here for brevity, the final mathematical equations describing the system dynamics are in the descriptor form, \begin{equation} \label{E:Descriptor_Form2} \begin{aligned} Z(x,f_a) \frac{dx}{dt} &= f(x,f_a,u,v,f_N), \\ y&=g(x,f_a,f_p), \end{aligned} \end{equation} where $f_N$ and $f_p$ are, respectively, the aforementioned compressor and pressure faults. The input vector $u$ includes the compressor rotation speed and expansion valve opening percentage, i.e., $u = \begin{bmatrix} N_{c} & \alpha \end{bmatrix} ^T$.
The boundary conditions are the variables describing the air side of the heat exchangers, and can be treated as unknown disturbances, $ v = \begin{bmatrix}\dot{m}_{ea} & T_{ea,in} \end{bmatrix} ^T $. The state vector $x_e$, describing the evaporator status, includes five variables, $ x_e= \begin{bmatrix} \zeta_{e1} & p_e & h_{e2} & T_{e1w} & T_{e2w} \end{bmatrix} ^T $. Finally, the output vector $y$ includes the evaporator pressure and superheat temperature, $y =\begin{bmatrix} p_e & SH \end{bmatrix}^T$, which are algebraic functions of the states, $g(x)$. The $Z$ matrix and $f$ vector are complex expressions of refrigerant properties, heat transfer coefficients and geometric parameters \cite{50_zhang2014JDSMC}. \subsection{Mathematical Model} The compressor and expansion valve are the two main actuators regulating the pressure difference and enthalpy distribution in the A/C loop. In the compressor, the mass flow rate $\dot m_c$ and outlet enthalpy $h_2$ are defined, respectively, as: \begin{equation} \label{E:CMP_E} \begin{aligned} \dot m_c &= \eta_v V_d \rho_1 \omega_c , \\ h_2 &= \frac {h_{2s} - h_1}{ \eta_s} + h_1, \end{aligned} \end{equation} where $V_d$ is the compressor displacement; $\rho_1, h_1$ are the refrigerant density and enthalpy at the compressor inlet, respectively; $\omega_c$ is the compressor speed; and $h_{2s}-h_1$ is the isentropic enthalpy difference. The first control input is the compressor rotation speed $N_c$ in the unit of $rpm$. The mass flow rate through the expansion valve is modeled by the orifice flow equation, approximated by assuming constant fluid density: \begin{equation} \label{E:TEVM} \dot m_v = C_{d,v} A_v \sqrt{2\rho_3\left(p_{3} - p_{4}\right)}, \end{equation} where $A_v$ is the valve curtain area and $C_{d,v}$ is the discharge coefficient. The outlet enthalpy is typically found by assuming an ideal throttling process, hence $h_4 = h_3$.
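As a quick numerical illustration, the static actuator models above can be evaluated directly; all efficiencies, geometric parameters, and refrigerant states in the sketch below are illustrative placeholders, not calibrated A/C values:

```python
import math

def compressor_flow(omega_c, rho_1, eta_v=0.9, V_d=33e-6):
    """Compressor refrigerant mass flow [kg/s], volumetric-efficiency model."""
    return eta_v * V_d * rho_1 * omega_c

def compressor_outlet_enthalpy(h_1, h_2s, eta_s=0.7):
    """Outlet enthalpy [J/kg] from the isentropic-efficiency relation."""
    return (h_2s - h_1) / eta_s + h_1

def valve_flow(alpha, p_3, p_4, rho_3, C_dv=0.65, A_max=2e-6):
    """Orifice-equation mass flow [kg/s]; alpha in [0, 1] scales the area."""
    A_v = alpha * A_max
    return C_dv * A_v * math.sqrt(2.0 * rho_3 * max(p_3 - p_4, 0.0))

# Rough medium-load point: compressor at 1000 rpm, valve 40% open
# (refrigerant densities, pressures, and enthalpies are assumed values).
omega_c = 1000.0 / 60.0 * 2.0 * math.pi   # rpm -> rad/s
m_c = compressor_flow(omega_c, rho_1=12.0)
m_v = valve_flow(0.40, p_3=1.2e6, p_4=251.2e3, rho_3=1.2e3)
h_2 = compressor_outlet_enthalpy(h_1=410e3, h_2s=445e3)
print(m_c > 0.0, m_v > 0.0, h_2 > 410e3)
```

Note that the compressor flow is linear in speed while the valve flow scales with the square root of the pressure drop, which is one source of the load-dependent input-output coupling mentioned earlier.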
The second control input is the valve position $\alpha$ in percentage, determining the effective flow area of the valve. The mass and energy balance equations for the two-phase and superheated regions of the evaporator are given in Equations \ref{E:evap_3} and \ref{E:evap_4}, respectively, \begin{figure*}[!hbt] \small \begin{equation} \label{E:evap_3} \begin{aligned} &\left(\frac{\rho_{e,TP}-\rho_g}{\rho_{e,TP}}\right)\frac{d\zeta_1}{dt} + \frac{1}{\rho_{e,TP}}\frac{\partial \rho_{e,TP}}{\partial p_e}\frac{dp_e}{dt}\cdot\zeta_1 + \frac{1}{\rho_{e,TP}}\frac{\partial \rho_{e,TP}}{\partial \bar\gamma_e}\frac{d\bar\gamma_e}{dt}\cdot\zeta_1\\ &= \frac{\dot m_v}{\rho_{e,TP}V_e} - \frac{\dot m_{12}}{\rho_{e,TP}V_e}, \\ &\frac{\rho_g \left(h_{e,TP}-h_g\right)}{\rho_{e,TP}}\frac{d\zeta_1}{dt} + \left(\frac{\partial h_{e,TP}}{\partial p_e}-\frac{1}{\rho_{e,TP}}\right)\frac{dp_e}{dt}\cdot\zeta_1 + \frac{\partial h_{e,TP}}{\partial \bar\gamma_e}\frac{d\bar\gamma_e}{dt}\cdot\zeta_1 \\ & = \frac{\dot m_v}{\rho_{e,TP}V_e} \left(h_4-h_{e,TP}\right) - \frac{\dot m_{12}}{\rho_{e,TP}V_e} \left(h_g-h_{e,TP}\right) +\frac{\dot{Q}_{TP}}{\rho_{e,TP}V_e}, \end{aligned} \end{equation} \normalsize \end{figure*} \begin{figure*}[!htb] \small \begin{equation} \label{E:evap_4} \begin{aligned} &-\left(\frac{\rho_{e,SH}-\rho_g}{\rho_{e,SH}}\right)\frac{d\zeta_1}{dt} + \frac{1}{\rho_{e,SH}}\frac{\partial \rho_{e,SH}}{\partial p_e}\frac{dp_e}{dt}\cdot\left(1-\zeta_1\right) + \frac{1}{\rho_{e,SH}}\frac{\partial \rho_{e,SH}}{\partial h_{e,SH}}\frac{dh_{e,SH}}{dt}\cdot\left(1-\zeta_1\right) \\ & = \frac{\dot m_{12}}{\rho_{e,SH}V_e} - \frac{\dot m_{c}}{\rho_{e,SH}V_e}, \\ &-\frac{\rho_g \left(h_{g}-h_{e,SH}\right)}{\rho_{e,SH}}\frac{d\zeta_1}{dt} + \frac{1}{\rho_{e,SH}}\frac{dp_e}{dt}\cdot\left(1-\zeta_1\right) - \frac{dh_{e,SH}}{dt}\cdot\left(1-\zeta_1\right) \\ & = \frac{\dot m_{12}}{\rho_{e,SH}V_e} \left(h_g-h_{e,SH}\right) - \frac{\dot m_{c}}{\rho_{e,SH}V_e} \left(h_1-h_{e,SH}\right) +\frac{\dot{Q}_{SH}}{\rho_{e,SH}V_e}, \end{aligned} \end{equation} \normalsize \end{figure*} where $p_e$ is the evaporator pressure; $\zeta_1$ is the normalized tube length of the two-phase region; $h_{e,SH}$ is the enthalpy of the refrigerant at the tube exit; $\dot{m}$ is the mass flow rate; $\dot{Q}$ is the heat transfer rate; and $\rho$ denotes the density. In Equations \ref{E:evap_3} and \ref{E:evap_4}, the left-hand sides represent the variation of the independent states of the refrigerant, and the right-hand sides the exchange of mass and energy at the inlet and outlet of each phase region, as well as the heat transfer along the wall of the corresponding region. The terms multiplying the state variations depend on the inherent thermodynamic properties of the refrigerant, and hence are state-dependent. The mass and energy balances for the sub-cooled, two-phase and superheated regions of the condenser are not shown here for brevity. \subsection{Coprime factorization of Plant Model} The A/C model was calibrated using data collected during tests in which the vehicle/engine speeds were maintained at a nominal steady state, and verified with reference to the SC03 Air Conditioning Cycle; the vehicle speed trace for this regulatory driving cycle is shown in Figure \ref{fig:Veh_SC03}. The calibration process requires applying multipliers correcting the values of the heat transfer coefficients predicted by the empirical correlations found in the literature. Figure \ref{fig: Val_SC03} also compares the outputs of the model with the corresponding experimental data. During the SC03 test, the compressor speed (related to the engine speed) changes considerably, causing significant variations in the refrigerant flow rate that affect the pressure dynamics in the heat exchangers. This is particularly evident by observing the fluctuations of the condenser pressure, as shown in Figure \ref{fig: Pc_SC03}.
From the Root Mean Square Error (RMSE) between the calculated and measured pressures, the condenser pressure error is within $6\%$ of its average value, and the evaporator pressure error is around $8\%$. Therefore, the model appears quite accurate in capturing the pressure dynamics at the condenser and evaporator. \begin{figure}[!htb] \centering \subfigure[Vehicle Speed Profile] {\includegraphics[width=0.45\textwidth]{VehSC032.pdf} \label{fig:Veh_SC03}} \subfigure[Condenser Pressure] {\includegraphics[width=0.45\textwidth,keepaspectratio=true]{SC03Pc.pdf} \label{fig: Pc_SC03}} \subfigure[Evaporator Pressure] {\includegraphics[width=0.45\textwidth,keepaspectratio=true]{SC03Pe.pdf} \label{fig: Pe_SC03}} \caption{Verification of MBM A/C Model for the SC03 driving cycle.} \label{fig: Val_SC03} \end{figure} The nonlinear A/C model is linearized at three operating conditions, corresponding to low, medium and high cooling loads, whose control inputs and steady-state refrigerant status vary with the inlet air temperature of the evaporator, as summarized in Table \ref{T:Equilibria}. \begin{table}[!h] \small \caption{A/C Operating Points.} \label{T:Equilibria} \centering \begin{tabular}{|c|c|c|c|c|} \hline \hline $\dot{Q}_a$ & $T_a$ ($^0C$) & $N_c$ (rpm) & $\alpha$ ($\%$) & $p_e$ (kPa)\\ \hline Low & $25$ & $450$ & $25$ & $302.2$ \\ \hline Med & $30$ & $1000$ & $40$ & $251.2$ \\ \hline High & $40$ & $2500$ & $55$ & $204.6$ \\ \hline \hline \end{tabular} \end{table} Note that, as seen from Figure \ref{fig:GIMC_Scheme}, the coprime factors $\tilde{M}$ and $\tilde{N}$, rather than the plant model $P_{uy}$ itself, are of interest because they are part of the fault accommodation block. \subsection{Coprime factorization of $ H_\infty $ Controller} $H_{\infty}$ control and estimation problems have been extensively studied in the literature \cite{7348666,C6,7355303,J1}, and are considered to be very promising for the automotive industry.
In this work, the general $ H_\infty $ synthesis problem is to find a controller $K$ such that the closed-loop system is asymptotically stable and the $ H_\infty $ norm of the transfer function between the disturbance $\omega$ and the controlled output $z$, $ \|T_{\omega z}\|_{\infty} $, is as small as possible \cite{10_zhou1998, 10_zhou1996}. In order to fit the $H_{\infty}$ synthesis framework, the performance criteria $z$ and the unknown disturbances $\omega$ should be defined for automotive A/C systems. Mathematically, the six elements of the weighted performance vector $z$ are selected as \begin{equation} z=\begin{bmatrix} e_{p_e} & e_{SH} & N_{cmp} & \alpha & p_e & SH \end{bmatrix} ^T, \end{equation} where $e_{p_e} = p_{e,r}- p_e$ and $e_{SH} = SH_r - SH$ are the errors on the output evaporator pressure and superheat temperature. $N_{cmp}$ is the compressor rotation speed and $\alpha$ is the valve opening percentage. $p_e$ and $SH$ are, respectively, the evaporator pressure and superheat temperature. The reference evaporator pressure $p_{e,r}$ and superheat temperature $SH_{r}$ are time-varying and regarded as additional disturbances, besides the unknown disturbance $\Delta\dot{m}_{ea}$ and the measurement noises. Therefore, the disturbance vector $\omega$ is defined as: \begin{equation} \omega=[\Delta \dot{m}_{ea}, p_{e,r}, SH_r ,n_1, n_2]^T. \end{equation} The original A/C model in state-space form is augmented with the output vector and disturbance vector defined above, and an $H_{\infty}$ controller is obtained by solving the Linear Matrix Inequalities (LMIs) associated with the augmented system according to the methods provided in \cite{10_zhou1998, 10_zhou1996}. The above design procedure is detailed in previous work \cite{50_zhang2014JDSMC}, where simulation results are provided to support the validity of the controller during output tracking and disturbance rejection.
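As a side note, the $H_\infty$ norm minimized by the synthesis is the peak of the largest singular value of the closed-loop frequency response. A minimal NumPy sketch (ours, not part of the cited design toolchain) approximates it for a state-space model by gridding the frequency axis; the first-order example system is purely illustrative.

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, w_grid):
    """Approximate the H-infinity norm of G(s) = C (sI - A)^{-1} B + D
    as the peak largest singular value over a frequency grid (rad/s)."""
    n = A.shape[0]
    peak = 0.0
    for w in w_grid:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

# First-order example G(s) = 1/(s + 1): the exact norm is 1, attained at w = 0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
w = np.logspace(-3, 3, 500)
norm = hinf_norm_grid(A, B, C, D, w)
```

Gridding only lower-bounds the true norm; dedicated bisection algorithms are used in practice, but the sketch conveys what the cost $\|T_{\omega z}\|_\infty$ measures.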
Note that as seen from Figure \ref{fig:GIMC_Scheme}, the coprime factors $\tilde{U}$ and $\tilde{V}$, instead of the controller model $K$, are of interest because the fault accommodation signal $q$ is added in between $\tilde{U}$ and $\tilde{V}$. Therefore, the system matrices of the coprime factors are provided below, \section{Fault Detection and Isolation} Residuals generated by the FDI module are required to activate the compensator $Q$ in the GIMC structure. The filters for residual generation are designed in the framework of $H_{\infty}$ optimization, and validated using the nonlinear MBM A/C model. \subsection{$H_\infty$ optimization} The isolation filter $H_l$ ($l \times p $ transfer matrix) is designed to isolate the fault vector $f$ and decouple the perturbation $d$. The trade-off between these two objectives is also formulated as an optimization problem: \begin{equation}\label{E:fault isolation} \min_{H_I \in RH_{\infty}} \| \left[ \begin{array}{cc} 0 & T \\ \end{array} \right] - H_I \left[ \begin{array}{cc} \tilde{N}_d & \tilde{N}_f \\ \end{array} \right] \|_{\infty}, \end{equation} where $T \in RH_{\infty}$ is a diagonal transfer matrix to be determined according to the frequency response of $\tilde{N}_f$, in order to achieve the isolation and decoupling objectives. The above optimization problem can be solved by a Linear Fractional Transformation (LFT): \begin{equation}\label{E:fault isolation} \min_{H_I \in RH_{\infty}} \| F_l(G_{H_I}, H_I) \|_{\infty}, \end{equation} where $G_{H_I}$ stands for the generalized plant associated to the LFT transformation given by \begin{equation} G_{H_I}(s) = \left( \begin{array}{ccc} 0 & T & -I \\ \tilde{N}_d & \tilde{N}_f & 0 \\ \end{array} \right). \end{equation} \subsection{Performance Evaluation} The fault isolation filter generates independent residuals corresponding to either an actuator fault or a sensor fault. 
Based on Equation \ref{E:fault isolation}, the performance of a fault isolation filter is determined by the selected weighting function. We next design the fault isolation filter $H_I$ by selecting the following weighting function \begin{equation} T(s) = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right]\cdot \frac{1}{10s + 1}. \end{equation} The fault isolation filter $H_I$, calculated by solving the $H_{\infty}$ optimization problem stated in Equation \ref{E:fault isolation}, is implemented in the nonlinear closed-loop simulation, where the plant is given by the nonlinear differential equations of the MBM A/C model. This tests the filter against the model discrepancies with respect to the linear plants used in the design process. An actuator fault of $40$ $rpm$ is added to the compressor rotation speed at $400$ seconds, and a sensor fault of $5$ $kPa$ is added to the pressure signal at $700$ seconds. When the external disturbance is not considered, the two residuals, corresponding to the actuator fault and the sensor fault respectively, are presented in Figure \ref{fig:Fault_Isolation_woDisturbance}. It is clear that each residual responds to its respective fault, and is decoupled from the other fault. When the external disturbance is added as a stepwise signal, however, the two residuals, as depicted in Figure \ref{fig:Fault_Isolation_wDisturbance}, are very sensitive to variations of the ambient conditions on the air side of the heat exchangers.
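The scalar weight $1/(10s+1)$ is a unity-DC-gain low-pass filter with corner frequency $0.1$ rad/s, so the isolation objective emphasizes slowly varying fault signatures over high-frequency noise. A short, purely illustrative sketch of its magnitude response:

```python
import numpy as np

def weight_mag(w):
    """Magnitude of T(jw) for the scalar weight T(s) = 1/(10 s + 1)."""
    return 1.0 / np.abs(10j * w + 1.0)

# Unity gain at DC, -3 dB at the corner frequency 0.1 rad/s,
# and roughly -20 dB/decade roll-off above it.
dc = weight_mag(0.0)       # 1.0
corner = weight_mag(0.1)   # 1/sqrt(2)
high = weight_mag(10.0)    # about 0.01
```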
\begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{FaultIsolationwoDisturbance.pdf} \caption{Fault Isolation without Disturbance.} \label{fig:Fault_Isolation_woDisturbance} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{FaultIsolationwDisturbance.pdf} \caption{Fault Isolation with Disturbance.} \label{fig:Fault_Isolation_wDisturbance} \end{figure} The sensitivity of the designed isolation filter to external disturbances is due to the fact that the fault observability condition is not met. The current hardware configuration, i.e., one pressure transducer and one thermocouple, is able to isolate two independent faults simultaneously, but not to decouple the external disturbance at the same time. In order to ensure the robustness of the isolation filter, it is required to increase the observability of the A/C system. Herein, an additional thermocouple is placed on the wall of the section of the heat exchanger encompassing vapor refrigerant. In other words, an additional state, the temperature of the wall in the superheated region, is available. Using the same weighting function as before, a new isolation filter $H_I$ is calculated by solving the $H_{\infty}$ optimization problem stated in Equation \ref{E:fault isolation}. Next, the robustness of the new filter $H_I$ to external disturbance is checked with a stepwise variation of the boundary conditions on the air side of the evaporator, as shown in Figure \ref{fig:Fault_Isolation_Augmented}. A manoeuver is performed that takes the A/C system slightly away from the equilibrium design point, raising the evaporator pressure by $5$ $kPa$ and lowering the superheat temperature by $2.5$ $^oC$. Due to the external disturbance, the tracking performance slightly deteriorates as the controller compensates for the disturbance during the transient.
For the same fault setting, however, it is seen that the filter residuals are able to detect and isolate the fault signature with a high degree of accuracy, without significant variation introduced by the external disturbance. Hence, it can be concluded that the detection and robustness capabilities of the filter are guaranteed under the current choice of the weighting function and computation scheme. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{FaultIsolationAugmented.pdf} \caption{Fault Isolation with Disturbance after Additional Sensor Installed.} \label{fig:Fault_Isolation_Augmented} \end{figure} \section{Fault Accommodation} The GIMC structure is an active FTC scheme since the compensator $Q$ is activated by the residual signals generated by the FDI module. From an analysis of the influence of a fault at different locations of the closed-loop system, the problem of designing a compensator $Q$ is reduced to an optimization problem that minimizes the influence of a fault at either the input or the output of the plant. The fault accommodation capability of the compensator $Q$ is demonstrated by comparing simulation results of a passive FTC scheme and an active FTC scheme. Moreover, the variation of the compensator $Q$ over the plant operating point is investigated as a preliminary step towards a gain-scheduled GIMC structure. \subsection{Theoretical Background} The following lemma, originally presented in \cite{10_campos2005}, characterizes the dynamic behavior of the control input $u$ and output $y$ of the closed-loop system. \emph{Lemma 1.
} In the GIMC configuration considering additive faults, the resulting closed-loop characteristics for the control signal $u$ and output $y$ are given by \begin{equation} \begin{aligned} u(s) &= S_i K r(s) - S_i \tilde{V}^{-1} (\tilde{U} \tilde{M}^{-1} + Q)\\ &\quad\times[\tilde{N}_d d(s) + \tilde{N}_f f(s)], \\ y(s) &= T_o r(s) + S_o \tilde{M}^{-1} (I-\tilde{N}\tilde{V}^{-1}Q)\\ &\quad\times[\tilde{N}_d d(s) + \tilde{N}_f f(s)], \end{aligned} \end{equation} where the input sensitivity is $S_i = (I + KP_{uy})^{-1}$, the output sensitivity is $S_o = (I + P_{uy}K)^{-1}$ and the complementary output sensitivity is $T_o = I - S_o = (I + P_{uy}K)^{-1} P_{uy}K$. If one desires to attenuate both faults and perturbations at the output $y$, the following optimization scheme is suggested: \begin{equation}\label{E:fault_actuator} \min_{Q \in RH_{\infty}} \| S_o \tilde{M}^{-1} (I - \tilde{N}\tilde{V}^{-1}Q) \left[ \begin{array}{cc} \alpha_d \tilde{N}_d & \alpha_f \tilde{N}_f\\ \end{array} \right] \|_{\infty}. \end{equation} The above optimization problem can be solved by a Linear Fractional Transformation (LFT): \begin{equation} \min_{Q \in RH_{\infty}} \| F_l(G_{Q}, Q) \|_{\infty}, \end{equation} where $G_{Q}$ is given by \begin{equation} G_{Q}(s) = \left( \begin{array}{ccc} \alpha_d S_o P_{dy} & \alpha_f S_o P_{fy} & - S_o P_{uy} \tilde{V}^{-1} \\ \alpha_d \tilde{N}_d & \alpha_f \tilde{N}_f & 0 \\ \end{array} \right).
\end{equation} If one wants to minimize the fault effects on the control signal while reducing the perturbation contribution at the output, the compensator $Q$ should be designed by the following optimization strategy: \begin{equation}\label{E:fault_sensor} \begin{aligned} &\min_{Q \in RH_{\infty}} \| \left[ \begin{array}{cc} \alpha_d S_o P_{dy} & 0 \\ 0 & -\alpha_f S_i K P_{fy} \end{array} \right] \\ &\quad+ \left[ \begin{array}{c} -\alpha_d S_o P_{uy} \tilde{V}^{-1} \\ -\alpha_f S_i \tilde{V}^{-1}\\ \end{array} \right] Q \left[ \begin{array}{cc} \tilde{N}_d & \tilde{N}_f\\ \end{array} \right] \|_{\infty}. \end{aligned} \end{equation} The above optimization problem can also be solved by a Linear Fractional Transformation (LFT): \begin{equation} \min_{Q \in RH_{\infty}} \| F_l(G_{Q}, Q) \|_{\infty}, \end{equation} where $G_{Q}$ represents the generalized plant given by \begin{equation} G_{Q}(s) = \left( \begin{array}{ccc} \alpha_d S_o P_{dy} & 0 & -\alpha_d S_o P_{uy} \tilde{V}^{-1} \\ 0 & -\alpha_f S_i K P_{fy} & -\alpha_f S_i \tilde{V}^{-1} \\ \tilde{N}_d & \tilde{N}_f & 0 \\ \end{array} \right), \end{equation} and $\alpha_d, \alpha_f \in [0,1]$ are two weighting factors to balance the trade-off between perturbation and fault reduction. The two $H_{\infty}$ optimization schemes given in Equations \ref{E:fault_actuator} and \ref{E:fault_sensor} are derived by attenuating the influences of faults on the outputs and inputs, respectively. The solutions to the above two problems usually yield a compensator $Q$ of high order, so it is necessary to perform a controller order reduction. One approach is standard model reduction, such as balanced truncation based on an analysis of the Hankel singular values of the system states. Another approach is to design a specific compensator for every studied fault by replacing the transfer functions $\tilde{N}_f$ and $P_{fy}$ with their corresponding parts.
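The balanced-truncation route just mentioned ranks states by their Hankel singular values, the square roots of the eigenvalues of the product of the controllability and observability Gramians. The following NumPy sketch illustrates the idea on a hypothetical two-state system (it is not the A/C compensator); the Lyapunov equations are solved by simple Kronecker vectorization, which is practical only for small dimensions.

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 (A Hurwitz, Q symmetric) by vectorization."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

def hankel_singular_values(A, B, C):
    """Hankel singular values: square roots of the eigenvalues of Wc @ Wo."""
    Wc = lyap(A, B @ B.T)    # controllability Gramian
    Wo = lyap(A.T, C.T @ C)  # observability Gramian
    ev = np.linalg.eigvals(Wc @ Wo)
    return np.sort(np.sqrt(np.abs(ev)))[::-1]

# Illustrative 2-state stable system: the second Hankel singular value is
# small, so the associated balanced state could be truncated with little
# effect on the input-output behavior.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
hsv = hankel_singular_values(A, B, C)
```

States whose Hankel singular values are negligible contribute little to the input-output map, which is exactly the criterion used when reducing a high-order compensator $Q$.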
\subsection{Actuator Fault Accommodation} When an actuator fault occurs, an active FTC scheme is expected to keep the system outputs unchanged, which allows the system inputs to maintain their original values provided that the steady-state input-output mapping is fixed. Since the sum of the commanded inputs and the faulty input signals is unchanged, the inputs commanded by the controller are modulated automatically to compensate for the faulty signals entering the system inputs. Hence, the $H_{\infty}$ optimization scheme given in Equation \ref{E:fault_actuator} is suitable for actuator fault accommodation. After the block replacement, the optimization problem for the actuator compensator design is defined as: \begin{equation} \min_{Q_{act} \in RH_{\infty}} \| S_o \tilde{M}^{-1} (I - \tilde{N}\tilde{V}^{-1}Q_{act}) \left[ \begin{array}{cc} \alpha_d \tilde{N}_d & \alpha_f \tilde{N}_f^{act}\\ \end{array} \right] \|_{\infty}. \end{equation} The designed compensator $Q_{act}$ is implemented in the FTC scheme shown in Figure \ref{fig:ClosedLoop}, where the plant is replaced with the MBM A/C model. The simulation results of the actuator accommodation are depicted in Figure \ref{fig:Fault_Accomodation_Actuator}, with the same manoeuver strategy as before and one actuator fault of $40$ $rpm$ added at $500$ seconds. The solid lines represent the inputs and the outputs of the A/C system with a passive FTC scheme, which refers to an output tracking controller designed using $H_{\infty}$ synthesis without a fault accommodation function. When an actuator fault occurs, a passive FTC scheme has some capability of fault compensation. In the top figures, the compressor speed is reduced by $20$ $rpm$ to ensure that the evaporator pressure does not deviate from the reference value. The dashed lines represent the inputs and the outputs of the A/C system with an active FTC scheme using the GIMC structure.
Still from the top figures, the compressor speed is reduced by $40$ $rpm$ by the actuator fault compensator $Q_{act}$, which is the exact amplitude of the actuator fault added. The output evaporator pressure, after a minor transient, returns to the reference value without any deviation. The simulation results are natural outcomes of the design process, since the GIMC structure activates a compensator to tolerate the faulty input. \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{FaultAccomodationActuator.pdf} \caption{Fault Accommodation of Actuator Fault.} \label{fig:Fault_Accomodation_Actuator} \end{figure} \subsection{Sensor Fault Accommodation} When a sensor fault occurs, an active FTC scheme is desired to keep the system inputs unchanged, which allows the actual system outputs to maintain their original values provided that the steady-state input-output mapping is preserved. Since the actual system outputs are unchanged, the sum of the actual outputs and the faulty output signals deviates from the original measured outputs. However, the controller disregards the deviation, and uses the original measured outputs as before. In other words, the sensor fault does not affect the closed-loop system performance significantly. Hence, the $H_{\infty}$ optimization scheme given in Equation \ref{E:fault_sensor} is suitable for sensor fault accommodation. After the block replacement, the optimization problem for the sensor compensator design is defined as \begin{equation} \begin{aligned} &\min_{Q_{sen} \in RH_{\infty}} \| \left[ \begin{array}{cc} \alpha_d S_o P_{dy} & 0 \\ 0 & -\alpha_f S_i K P_{fy}^{sen} \end{array} \right] \\ &\quad+ \left[ \begin{array}{c} -\alpha_d S_o P_{uy} \tilde{V}^{-1} \\ -\alpha_f S_i \tilde{V}^{-1}\\ \end{array} \right] Q_{sen} \left[ \begin{array}{cc} \tilde{N}_d & \tilde{N}_f^{sen}\\ \end{array} \right] \|_{\infty}.
\end{aligned} \end{equation} The designed compensator $Q_{sen}$ is implemented in the FTC scheme shown in Figure \ref{fig:ClosedLoop}, where the plant is replaced with the MBM A/C model. The simulation results of the sensor accommodation are depicted in Figure \ref{fig:Fault_Accomodation_Sensor}, with the same manoeuver strategy as before and one sensor fault of $5$ $kPa$ added at $500$ seconds. The solid lines represent the inputs and the outputs of the A/C system with a passive FTC scheme. For a sensor fault, a passive fault tolerant controller uses the faulty measurements to calculate the commanded inputs. In the top figures, the measured evaporator pressure is maintained, while the actual evaporator pressure is forced to track another reference value deviating from the nominal value by the amplitude of the faulty output, $5$ $kPa$. Hence, the compressor speed increases by $50$ $rpm$ in order to drive the actual evaporator pressure to the deviated reference value. The dashed lines represent the inputs and the outputs of the A/C system with an active FTC scheme using the GIMC structure. Still from the top figures, the compressor speed only increases by $10$ $rpm$ after the sensor fault compensator $Q_{sen}$ is activated. Since the actual evaporator output is almost unchanged, the measured evaporator pressure starts to deviate from the nominal reference value by $4$ $kPa$ after the faulty output is added. Although the total sensor fault of $5$ $kPa$ is not fully compensated, the active FTC scheme already achieves much better performance than the passive one. The simulation results are natural outcomes of the design process, since the GIMC structure activates a compensator that keeps the actual output almost unchanged while the measured output changes by the amplitude of the faulty signal.
\begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{FaultAccomodationSensor.pdf} \caption{Fault Accommodation of Sensor Fault.} \label{fig:Fault_Accomodation_Sensor} \end{figure} \subsection{Plant Variation} The variation of the compensators $Q_{act}$ and $Q_{sen}$ over the operating point, or cooling load for the A/C system, is examined to explore the possibility of a gain-scheduled GIMC structure. The nonlinear MBM A/C model is linearized at three operating conditions, corresponding to low, medium and high cooling loads. The cooling load is regulated by changing the inlet air temperature of the evaporator. For consistency, the superheat temperature is kept around $20^oC$ through the coordination of the compressor speed $N_c$ and the expansion valve position $\alpha$; however, the evaporator pressure $P_e$ is allowed to vary according to the cooling load as a gain-scheduling parameter. Specifically, the boundary conditions, controlled inputs and steady-state refrigerant conditions are summarized in Table \ref{T:Equilibria}. For every operating point, the actuator and sensor compensators are designed following the active FTC design procedure based on the GIMC structure. From Figures \ref{fig:Fault_Accomodation_Qu} and \ref{fig:Fault_Accomodation_Qy}, it is not difficult to see that the GIMC structure can also be made adaptive to the operating point. \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{FaultAccomodationQu.pdf} \caption{Actuator Compensator Variation over Working Points.} \label{fig:Fault_Accomodation_Qu} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{FaultAccomodationQy.pdf} \caption{Sensor Compensator Variation over Working Points.} \label{fig:Fault_Accomodation_Qy} \end{figure} \section{Conclusion} In this paper, the GIMC structure is applied to accommodate actuator and sensor faults of an automotive air conditioning system.
The air conditioning system is modeled using the moving boundary method to capture the mixed liquid/vapor flow of the refrigerant in the heat exchangers. The resulting high-order, nonlinear, control-oriented model is utilized to design an active fault tolerant controller. The designed fault isolation filter is able to isolate actuator and sensor faults in the presence of external disturbances once an additional thermocouple is instrumented on the tube wall. The accommodation performance of the active fault tolerant controller is examined by adding actuator and sensor faults separately. In terms of actuator faults, the FTC scheme using the GIMC structure outperforms the passive FTC scheme due to the compensation of the faulty input. As for sensor faults, the deviation of the actual output from the reference output is, though not completely eliminated, mitigated significantly. The possibility of gain-scheduled FTC using the GIMC scheme is also explored by investigating the variation of the compensators over operating points. Future work will include the introduction of model uncertainty into the scheme and the design of a gain-scheduling module. \section*{Acknowledgment} The work described in this paper is supported in part by the U.S. Department of Energy, through Chrysler, LLC as the prime contractor. The authors gratefully acknowledge Chrysler, LLC and Dr. Timothy C. Scott for providing the data to calibrate the model and for the useful discussions. \bibliographystyle{ieeetr}
\section{Introduction} For a sequence $\mathbf{s} = \{s_i\}_{i\ge 1}$ of positive integers, the $n$-dimensional \emph{$\mathbf{s}$-inversion sequences} are defined by \[\I_n^{(\mathbf{s})} = \{(e_1, \dotsc, e_n) \in \mathbb{Z}^n \mid 0\le e_i < s_i \;\;\text{for}\; 1\le i \le n\}.\] The \emph{ascent set} of an $\mathbf{s}$-inversion sequence $\mathbf{e} = (e_1, \dotsc, e_n) \in \I_n^{(\mathbf{s})}$ is the set \begin{equation} \A \mathbf{e} = \left\{i \in \{0,1, \ldots, n-1\} \Bigm| \frac{e_{i}}{s_{i}} < \frac{e_{i+1}}{s_{i+1}} \right\}, \label{Ascdef} \end{equation} with the convention that $e_0=0$ (and $s_0 = 1$). The \emph{ascent statistic} on $\mathbf{e} \in \I_n^{(\mathbf{s})}$ is \[ \asc \mathbf{e} = \left|\A\mathbf{e}\,\right|. \] When $\mathbf{s}=(1,2,3, \ldots )$, there are well-known bijections between $\I_n^{(\mathbf{s})}$, the set of inversion sequences, and $\Sn$, the set of permutations of $\{1,2, \ldots, n\}$. We use $\I_n^{(\mathbf{s})}$ to generalize results about the distribution of statistics on $\Sn$. The generating polynomial of the descent statistic is the {\em Eulerian polynomial} defined as \[A_n(x) \ := \ \sum_{\pi \in \Sn} x^{\des \pi}\,. \] Here, $\des \pi$ is the number of indices $i \in \{1,2, \ldots, n-1\}$ such that $\pi_i > \pi_{i+1}$. Our object of study is the generating polynomial of the ascent statistic over the set $\I_n^{(\mathbf{s})}$ of $\mathbf{s}$-inversion sequences of length $n$: \[ \E_n^{(\mathbf{s})}(x) = \sum_{\mathbf{e} \in \I_{n}^{(\mathbf{s})}} x^{\asc \mathbf{e}}. \] Since this ascent statistic over inversion sequences for $\mathbf{s}=(1,2,3, \ldots )$ is equidistributed with the descent statistic over permutations (see \cite[Lemma 1]{SS}), we call this generalized polynomial the \emph{$\mathbf{s}$-Eulerian polynomial}. In addition to its many remarkable properties, $A_n(x)$ is known to have {\em only real roots} \cite{Fro}, a property which implies that its coefficient sequence is unimodal and log-concave.
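The equidistribution with permutation descents cited above (Lemma 1 of \cite{SS}) is easy to verify computationally for small $n$; the following Python sketch (ours, for illustration only) compares the two distributions for $\mathbf{s}=(1,2,\dotsc,n)$.

```python
from itertools import permutations, product

def asc(e, s):
    """Number of ascents of an s-inversion sequence e (with e_0 = 0, s_0 = 1),
    i.e. indices i with e_i / s_i < e_{i+1} / s_{i+1} (cross-multiplied)."""
    ee, ss = (0,) + tuple(e), (1,) + tuple(s)
    return sum(1 for i in range(len(e)) if ee[i] * ss[i + 1] < ee[i + 1] * ss[i])

def des(pi):
    """Number of descents of a permutation pi."""
    return sum(1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1])

def distribution(values):
    hist = {}
    for v in values:
        hist[v] = hist.get(v, 0) + 1
    return hist

n = 4
s = tuple(range(1, n + 1))
asc_dist = distribution(asc(e, s) for e in product(*(range(si) for si in s)))
des_dist = distribution(des(p) for p in permutations(range(1, n + 1)))
# Both equal the Eulerian numbers {0: 1, 1: 11, 2: 11, 3: 1} for n = 4.
```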
\medskip Our main result is the following generalization. \begin{thm} Let $\mathbf{s}$ be any sequence of positive integers and $n$ a positive integer. Then the $\mathbf{s}$-Eulerian polynomial \[ \E_n^{(\mathbf{s})}(x) = \sum_{\mathbf{e} \in \I_{n}^{(\mathbf{s})}} x^{\asc \mathbf{e}} \] has only real roots. \label{thm:main} \end{thm} In Section~\ref{sec:main}, we prove Theorem~\ref{thm:main} by refining the $\mathbf{s}$-Eulerian polynomials in a way that allows for an easy recurrence. Using a result of Chudnovsky and Seymour \cite{ChudnovskySeymour}, we show that the recurrence preserves a stronger property---that these refined polynomials are \emph{compatible}---which in turn implies the theorem. Variations of the Eulerian polynomials arise as descent generating polynomials in combinatorial families other than permutations. As we show in Section~\ref{sec:applications}, Theorem~\ref{thm:main} generalizes many previous results concerning the real-rootedness of these Eulerian polynomials. It also implies some new results. Notably, by extending our main theorem we are able to settle a conjecture of Brenti on the Eulerian polynomials of Coxeter groups \cite{brenti94} and partially settle a related conjecture of Dilks, Petersen, and Stembridge for affine Eulerian polynomials of Weyl groups \cite{DPS09}. In Section~\ref{sec:geometry}, we discuss the geometric significance of Theorem~\ref{thm:main}. The above-mentioned Eulerian polynomials are known to be the $h$-polynomials of Coxeter complexes and the affine Eulerian polynomials are the $h$-polynomials of the reduced Steinberg tori. A different geometric connection can be obtained by considering the {\em $\mathbf{s}$-lecture hall polytope} $\PP_n^{(\mathbf{s})}$, which is defined by \[ \PP_n^{(\mathbf{s})} = \left\{ (\lambda_1, \lambda_2, \dotsc, \lambda_n) \in \reals^n \ \Big| \ 0 \leq \frac{\lambda_{1}}{s_{1}} \leq \frac{\lambda_{2}}{s_{2}} \leq \cdots \leq \frac{\lambda_{n}}{s_{n}} \leq 1 \right\}.
\] It is known (see \cite{SS}) that the $\mathbf{s}$-Eulerian polynomial is the $\h^*$-polynomial of $\PP_n^{(\mathbf{s})}$. In Section~\ref{sec:q-analogs}, we extend Theorem~\ref{thm:main} to a $(p,q)$-analog of $\E_n^{(\mathbf{s})}(x)$. With the help of this extension, we show, for the first time, that the MacMahon--Carlitz $q$-Eulerian polynomial has only real roots for positive real $q$, a result conjectured by Chow and Gessel in \cite{ChowGessel}. We further show that several other $q$-Eulerian polynomials for signed permutations, and the wreath products $\w$ (colored permutations, indexed permutations) are real-rooted for positive $q$. This includes the generating polynomial for the joint distribution of descent and flag-inversion number. We also study the generating polynomial for the joint distribution of descent and flag-major index on signed permutations and the wreath products. We prove that this $q$-analog also has all roots real for positive $q$, a result which was conjectured by Chow and Gessel \cite{ChowGessel} for signed permutations and by Chow and Mansour \cite{ChowMansour}, for $\w$. \section{The main result} \label{sec:main} In this section, we prove Theorem~\ref{thm:main} using the method of ``compatible polynomials'' in conjunction with a recurrence for $\E^{(\mathbf{s})}_n(x)$. We also discuss some connections of our results to previous work and the more familiar notion of interlacing (of roots). \subsection{A recurrence for the $\mathbf{s}$-Eulerian polynomial} Let $\chi(\varphi)$ be $1$ if the statement $\varphi$ is true and $0$ otherwise. In order to show that the $\mathbf{s}$-Eulerian polynomial has all real roots, consider a refinement: \begin{equation} \poly^{(\mathbf{s})}_{n,i}(x) := \sum_{\mathbf{e} \in \I_{n}^{(\mathbf{s})}} \chi(e_n = i)\, x^{\asc \mathbf{e}}\,. \label{eq:Pni} \end{equation} Clearly, \begin{equation} \E^{(\mathbf{s})}_{n}(x) = \sum_{i=0}^{s_n-1} \poly^{(\mathbf{s})}_{n,i}(x). 
\label{Esum} \end{equation} The benefit of introducing these polynomials is that they satisfy a simple recurrence, which we now prove. \begin{lem} Given a sequence $\mathbf{s} = \{s_i\}_{i \ge 1}$ of positive integers, let $n \geq 1$ and $0 \leq i < s_n$. Then, for all $n > 1$, we have the recurrence \begin{equation} \poly^{(\mathbf{s})}_{n,i}(x) \ = \ \sum_{h=0}^{t_i-1} x \poly^{(\mathbf{s})}_{n-1,h}(x) \ + \sum_{h=t_i}^{s_{n-1}-1} \poly^{(\mathbf{s})}_{n-1,h}(x), \label{eq:recurrencePni} \end{equation} where $t_i = \lceil i s_{n-1}/s_{n} \rceil,$ with initial conditions $\poly^{(\mathbf{s})}_{1,0}(x)=1$ and $\poly^{(\mathbf{s})}_{1,i}(x)=x$ for $0 < i < s_1$. \end{lem} \begin{proof} Consider an inversion sequence $\mathbf{e}=(e_1, \dotsc, e_{n-1}, e_{n}) \in \I_{n}^{(\mathbf{s})}$ with $e_n=i$. By definition (\ref{Ascdef}) of the ascent set, $n-1 \in \A \mathbf{e}$ if and only if $e_{n-1}/s_{n-1} < i/s_n$, or, equivalently, if and only if $0 \leq e_{n-1} \leq \lceil i s_{n-1}/s_{n} \rceil - 1$ holds. So, \[ \asc (e_1, \ldots,e_{n-1},i) = \asc (e_1, \ldots, e_{n-1})+ \chi(e_{n-1} \leq t_i -1), \] which proves (\ref{eq:recurrencePni}) by setting $h= e_{n-1}$. For the initial conditions, recall that $e_0/s_0 = 0$, by definition, and hence $0 \in \A \mathbf{e}$ if and only if $e_1 > 0$. \end{proof} \begin{rem} For the special case of the classical Eulerian polynomials $A_n(x) = \poly^{(1,2, \dots, n)}_n(x)$, essentially the same refinement $A_{n,i}(x)$ was considered in a geometric context in \cite[Section 4]{NPT11} under the name \emph{restricted Eulerian polynomials}. Thanks to Eran Nevo for bringing this to our attention. \end{rem} \subsection{Compatible polynomials} Polynomials $f_1(x), \dotsc, f_m(x)$ over $\reals$ are \emph{compatible} if, for all real $c_1, \dotsc, c_m \ge 0$, the polynomial \[\sum_{i=1}^m c_if_i(x)\] has only real roots. 
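As an aside, the recurrence \eqref{eq:recurrencePni} translates directly into a short dynamic program. The following Python sketch (ours, for illustration only) computes the coefficients of $\E_n^{(\mathbf{s})}(x)$ this way and cross-checks them against a brute-force count over $\I_n^{(\mathbf{s})}$.

```python
from math import ceil
from itertools import product

def s_eulerian(s):
    """Coefficients of E_n^{(s)}(x) via the recurrence for the refined
    polynomials E_{n,i}^{(s)}(x), with t_i = ceil(i * s_{n-1} / s_n)."""
    # n = 1: E_{1,0} = 1 and E_{1,i} = x for 0 < i < s_1.
    P = [[1]] + [[0, 1]] * (s[0] - 1)
    for n in range(1, len(s)):
        newP = []
        for i in range(s[n]):
            t = ceil(i * s[n - 1] / s[n])
            coeffs = [0] * (n + 2)
            for h, p in enumerate(P):
                shift = 1 if h < t else 0  # ascent contributed at position n-1
                for d, c in enumerate(p):
                    coeffs[d + shift] += c
            while len(coeffs) > 1 and coeffs[-1] == 0:
                coeffs.pop()
            newP.append(coeffs)
        P = newP
    total = [0] * max(len(p) for p in P)
    for p in P:
        for d, c in enumerate(p):
            total[d] += c
    return total

def s_eulerian_brute(s):
    """Direct computation from the definition of the ascent statistic."""
    n = len(s)
    counts = [0] * (n + 1)
    for e in product(*(range(si) for si in s)):
        ee, ss = (0,) + e, (1,) + tuple(s)
        counts[sum(1 for i in range(n)
                   if ee[i] * ss[i + 1] < ee[i + 1] * ss[i])] += 1
    while len(counts) > 1 and counts[-1] == 0:
        counts.pop()
    return counts
```

For $\mathbf{s}=(1,2,\dotsc,n)$ this recovers the classical Eulerian polynomials, e.g. $1+4x+x^2$ for $n=3$.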
We call such a weighted sum $\sum_{i=1}^m c_if_i(x)$ of polynomials, with nonnegative coefficients $c_1, \dotsc, c_m$ a {\em conic combination} of $f_1(x), \dotsc, f_m(x)$. A real-rooted polynomial (over $\reals$) is compatible with itself. The polynomials $f_1(x), \dotsc, f_m(x)$ are \emph{pairwise compatible} if for all $i,j \in \{1,2, \dots, m\}$, $f_i(x)$ and $f_j(x)$ are compatible. The following lemma is useful in proving that a collection of polynomials is compatible. \begin{lem}[Chudnovsky--Seymour \cite{ChudnovskySeymour}, 2.2] \label{lem:ChudnovskySeymour} The polynomials $f_1, \dotsc, f_m$ with positive leading coefficients are pairwise compatible if and only if they are compatible. \end{lem} \subsection{Proof of Theorem~\ref{thm:main}} We will prove Theorem~\ref{thm:main} by establishing the following---more general---theorem. Theorem~\ref{thm:main} then follows in view of (\ref{Esum}), (\ref{eq:recurrencePni}) and Lemma \ref{lem:ChudnovskySeymour}. \begin{thm} \label{thm:compatible} Given a set of polynomials $f_1, \dotsc, f_m \in \mathbb{R}[x]$ with positive leading coefficients satisfying for all $1\le i < j \le m$ that \begin{enumerate}[label=\emph{(\alph*)}, ref=(\alph*)] \item $f_i(x)$ and $f_j(x)$ are compatible, and \label{f1} \item $xf_i(x)$ and $f_j(x)$ are compatible \label{f2} \end{enumerate} define another set of polynomials $g_1, \dotsc, g_{m'}\in \mathbb{R}[x]$ by the equations \[g_k(x) = \sum_{\ell=0}^{t_k-1} xf_\ell(x) + \sum_{\ell=t_k}^{m}f_\ell(x),\quad \mathrm{for}\; 1\le k \le m'\] where $0\le t_0\le t_1 \le \dotso \le t_{m'} \le m$. Then, for all $1\le i < j \le m'$ \begin{enumerate}[label=\emph{(\alph*')}, ref=(\alph*')] \item $g_i(x)$ and $g_j(x)$ are compatible, and \label{g1} \item $xg_i(x)$ and $g_j(x)$ are compatible. 
\label{g2} \end{enumerate} \end{thm} \begin{proof} We first show \ref{g1}, i.e., that the polynomial $c_ig_i(x) + c_jg_j(x)$ has only real roots for all $c_i, c_j \ge 0.$ By the definition of $g_i(x)$, $g_j(x)$ and the assumption that $t_i \le t_j$, it is clear that \[c_ig_i(x) + c_jg_j(x) = \sum_{\alpha=0}^{t_i-1}(c_i + c_j)xf_\alpha(x) + \sum_{\beta = t_i}^{t_j-1} (c_i+c_jx)f_\beta(x) + \sum_{\gamma=t_j}^{m}(c_i + c_j)f_\gamma(x),\] that is, $c_ig_i(x) + c_jg_j(x)$ can be written as a conic combination of the following polynomials, which we group into three (possibly empty) sets: \[\left\{xf_\alpha(x)\right\}_{0\le \alpha < t_i} \ \cup \ \left\{(c_i+c_jx)f_\beta(x)\right\}_{t_i \le \beta < t_j } \ \cup \ \left\{f_\gamma(x)\right\}_{t_j \le \gamma \le m}\,.\] Therefore, it suffices to show that these polynomials are compatible. In fact, by Lemma~\ref{lem:ChudnovskySeymour}, it is equivalent to show that they are pairwise compatible. This is what we do next. First, two polynomials from the same set are compatible by \ref{f1}. Secondly, a polynomial from the first set is compatible with another from the third set by \ref{f2}, since $\alpha < \gamma$. To show compatibility between a polynomial from the first set and one from the second, we need that $a x f_\alpha(x) + b (c_i + c_jx) f_\beta(x)$ has only real roots for all $a,b,c_i,c_j \ge 0$ and $\alpha < \beta$. This expression is a conic combination of $xf_\alpha(x)$, $xf_\beta(x)$, and $f_\beta(x)$. Since $\alpha < \beta$, these three polynomials are again pairwise compatible by \ref{f1} and \ref{f2} (and the basic fact that $f(x)$ and $xf(x)$ are compatible), and hence compatible, by Lemma~\ref{lem:ChudnovskySeymour}. Finally, the compatibility of a polynomial in the second set and one in the third set follows by a similar argument, exploiting the fact that $xf_\beta(x)$, $f_\beta(x)$, and $f_\gamma(x)$ are pairwise compatible for $\beta < \gamma$.
Now we are left to show \ref{g2}, that $xg_i(x)$ and $g_j(x)$ are compatible for all $i < j$. This is done in a similar manner. In order to show that $c_ixg_i(x)+ c_jg_j(x)$ is real-rooted for all $c_i, c_j \ge 0$ we show that \[ \left\{x(c_ix+c_j)f_\alpha(x)\right\}_{0\le \alpha < t_i} \ \cup \ \left\{xf_\beta(x)\right\}_{t_i \le \beta < t_j} \ \cup \ \left\{(c_ix+c_j)f_\gamma(x)\right\}_{t_j \le \gamma \le m} \] is a set of compatible polynomials, which follows from analogous reasoning to the above. Two polynomials from the same subset are compatible by \ref{f1}. Considering one from the first and one from the third subset: $xf_\alpha(x)$ and $f_\gamma(x)$ are compatible by \ref{f2}, since $\alpha < \gamma$. Similarly, $x^2f_\alpha(x)$, $xf_\alpha(x)$, and $xf_\beta(x)$ are pairwise compatible, which settles the case when we have a polynomial from the first and one from the second subset. Finally, $xf_\beta(x)$, $xf_\gamma(x)$, and $f_\gamma(x)$ are pairwise compatible for $\beta < \gamma$, and hence compatible, settling the case of one polynomial from the second subset and one from the third. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}] We use induction on $n$. When $n=1$, for $0 \leq i \leq j < s_1$, \[(E^{(\mathbf{s})}_{1,i}(x),E^{(\mathbf{s})}_{1,j}(x)) \in \{(1,1), \ (1,x), \ (x,x)\}\] and thus \[(xE^{(\mathbf{s})}_{1,i}(x), E^{(\mathbf{s})}_{1,j}(x)) \in \{(x,1), \ (x,x), \ (x^2,x)\}\,.\] Clearly, each of the pairs of polynomials $(1,1)$, $(1,x)$, $(x,x)$, $(x^2,x)$ is compatible. From (\ref{eq:recurrencePni}) we see that the polynomials $E^{(\mathbf{s})}_{n,i}(x)$ satisfy a recurrence of the form required in Theorem~\ref{thm:compatible}. Hence, by induction, they are compatible for all $n$ and $0\le i < s_n$. In particular, $E^{(\mathbf{s})}_{n}(x)$ has only real roots for $n\ge 1$. \end{proof} \subsection{Connection to interlacing} We now make a small detour to discuss the connection of compatibility to interlacing, and mention some related work in this direction.
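Before making the connection to interlacing precise, we note that compatibility is easy to probe numerically. The sketch below is our own illustration (the helper names and the random-sampling check are ours, and floating-point root-finding is of course no substitute for the exact arguments above): it builds the polynomials $g_k$ of Theorem~\ref{thm:compatible} from the base-case inputs $1, x, x$ and tests conditions (a$'$) and (b$'$) on sampled conic combinations.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_rooted(coeffs, tol=1e-6):
    """coeffs: highest degree first; True if all roots are numerically real."""
    coeffs = np.trim_zeros(np.asarray(coeffs, dtype=float), 'f')
    if len(coeffs) <= 1:               # nonzero constants have no roots
        return True
    return bool(np.all(np.abs(np.roots(coeffs).imag) < tol))

def compatible(f, g, trials=200):
    """Sample conic combinations c1*f + c2*g (c1, c2 >= 0), test real-rootedness."""
    n = max(len(f), len(g))
    f, g = np.pad(f, (n - len(f), 0)), np.pad(g, (n - len(g), 0))
    return all(real_rooted(c1 * f + c2 * g) for c1, c2 in rng.random((trials, 2)))

def times_x(f):
    return np.append(f, 0.0)

# Base-case input: 1, x, x (the polynomials E_{1,i} of the proof when s_1 = 3).
fs = [np.array([1.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0])]

def g_poly(t):
    """g = sum_{l < t} x*f_l + sum_{l >= t} f_l, as in the theorem."""
    terms = [times_x(f) for f in fs[:t]] + fs[t:]
    n = max(len(f) for f in terms)
    return sum(np.pad(f, (n - len(f), 0)) for f in terms)

gs = [g_poly(t) for t in range(len(fs) + 1)]
for i in range(len(gs)):
    for j in range(i + 1, len(gs)):
        assert compatible(gs[i], gs[j])            # condition (a')
        assert compatible(times_x(gs[i]), gs[j])   # condition (b')
```

A randomized check of this kind cannot certify compatibility, but it catches non-examples quickly: for instance, `compatible([1.0], [1.0, 0.0, 0.0])` (the pair $1$ and $x^2$) fails almost immediately.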
Given $f(x) = \prod_{i=1}^{\deg f} (x -x_i)$ and $g(x) = \prod_{j=1}^{\deg g} (x- \xi_j)$, two real-rooted polynomials, we say that \emph{$f$ interlaces $g$} if their roots alternate in the following way \begin{equation} \dots \le x_2 \le \xi_2 \le x_1 \le \xi_1\,. \label{eq:roots-interlacing} \end{equation} Note that this requires the degrees of $f$ and $g$ to satisfy the following inequalities: $\deg f \le \deg g \le \deg f + 1.$ In particular, the order of the two polynomials is important. Interlacing of two polynomials implies the real-rootedness of all of their linear combinations, by the famous theorem of Obreschkoff. \begin{thm}[Satz 5.2 in \cite{obreschkoff}] Let $f,g \in \mathbb{R}[x]$ with $\deg f \le \deg g \le \deg f + 1$. Then $f$ interlaces $g$ if and only if every linear combination $c_1f(x) + c_2g(x)$, with $c_1,c_2 \in \mathbb{R}$, has only real roots. \end{thm} In Theorem~\ref{thm:compatible}, if we require that the polynomials $f_i$ have only nonnegative coefficients, we can simplify the conditions \ref{f1} and \ref{f2} using the notion of interlacing. (Note that this will not be much of a restriction for us as all polynomials considered in this paper have this property.) The following lemma is due to D.~G.~Wagner. This version appeared (without a proof) in \cite[Lemma~3.4]{Wag00} where it was also mentioned that it can be proved with the same techniques that were used to obtain \cite[Corollary~5.3]{Wag92}, the special case of the lemma when $\deg f = \deg g.$ \begin{lem} Let $f,g \in \mathbb{R}[x]$ be polynomials with nonnegative coefficients. Then the following two statements are equivalent: \begin{enumerate}[label=\emph{(\roman*)}] \item $f(x)$ and $g(x)$ are compatible, and $xf(x)$ and $g(x)$ are also compatible. \item $f(x)$ interlaces $g(x)$. \end{enumerate} \label{lem:interlacing} \end{lem} \begin{proof} Let $n_f(x_0)$ denote the number of roots of the polynomial $f$ in the interval $[x_0,\infty)$.
There exists an equivalent formulation for both compatibility and interlacing in terms of this notion. First, $f$ and $g$ are compatible if and only if $\left|n_f(x_0) - n_g(x_0)\right| \le 1$ for all $x_0 \in \mathbb{R}$ (see 3.5 in \cite{ChudnovskySeymour}; for a proof in the case $\deg f = \deg g$, see \cite[Theorem 2']{Fell} or \cite[Theorem~5.2]{Wag92}). Secondly, it is immediate from \eqref{eq:roots-interlacing} that $f$ interlaces $g$ if and only if $0 \le n_g(x_0) - n_f(x_0) \le 1$ for all $x_0 \in \mathbb{R}$. In addition, we also have that $n_{xf}(x_0) = n_{f}(x_0) + \chi(x_0 \le 0)$. Since all roots of $f$ and $g$ are nonpositive, we may assume that $x_0 \le 0$. Finally, it can easily be seen that the following two conditions are equivalent, which completes the proof. \begin{enumerate}[label={(\roman*)}] \item $\left|n_f(x_0) - n_g(x_0)\right| \le 1$ and $\left|(n_f(x_0)+1) - n_g(x_0)\right| \le 1.$ \item $0 \le n_g(x_0) - n_f(x_0) \le 1.$ \end{enumerate} \end{proof} \begin{rem} Some further results on the connection of interlacing and compatibility appeared recently in \cite{Liu12}. \end{rem} Lemma~\ref{lem:interlacing} together with Theorem~\ref{thm:compatible} implies the following result of Haglund, Ono, and Wagner \cite[Lemma~8]{HOW99}. \begin{cor} \label{cor:HOW} Let $f_1, \dotsc, f_m \in \reals[x]$ be real-rooted polynomials with nonnegative coefficients, and such that $f_i$ interlaces $f_j$ for all $1\le i < j \le m.$ Let $b_1, \dotsc, b_m \ge 0$ and $c_1, \dotsc, c_m \ge 0$ be such that $b_ic_{i+1} \leq c_ib_{i+1}$ for all $1 \le i \le m-1.$ Then $c_1f_1 + \dotsb + c_mf_m$ interlaces $b_1f_1 + \dotsb + b_mf_m.$ \end{cor} In fact, since the $\mathbf{s}$-Eulerian polynomials have all positive coefficients, this implies that the $n$th polynomial interlaces the $(n+1)$th.
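For the special case $\mathbf{s} = (1,2,\dotsc,n)$, where $\E_n^{(\mathbf{s})}(x)$ is the classical Eulerian polynomial (see the next section), this interlacing is easy to confirm numerically. The sketch below is our own; it assumes the well-known recurrence $A_{n+1}(x) = (1+nx)A_n(x) + x(1-x)A_n'(x)$, handles only the case $\deg g = \deg f + 1$ relevant here, and relies on floating-point roots.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def eulerian(n):
    """Classical Eulerian polynomial A_n(x) via A_{k+1} = (1+kx)A_k + x(1-x)A_k'."""
    a, x = P([1.0]), P([0.0, 1.0])          # A_1(x) = 1
    for k in range(1, n):
        a = (1 + k * x) * a + x * (1 - x) * a.deriv()
    return a

def interlaces(f, g, tol=1e-9):
    """Roots alternate as xi_{k+1} <= x_k <= ... <= x_1 <= xi_1 (deg g = deg f + 1)."""
    rf, rg = sorted(f.roots().real), sorted(g.roots().real)
    assert len(rg) == len(rf) + 1
    merged = [v for pair in zip(rg, rf) for v in pair] + [rg[-1]]
    return all(a <= b + tol for a, b in zip(merged, merged[1:]))

for n in range(1, 7):                       # A_n interlaces A_{n+1} for small n
    assert interlaces(eulerian(n), eulerian(n + 1))
```

For example, the single root $-1$ of $A_2(x)=1+x$ lies between the two roots $-2\pm\sqrt{3}$ of $A_3(x)=1+4x+x^2$.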
\begin{thm} For any sequence $\mathbf{s}$ of positive integers and any positive integer $n$, we have that \[ \poly^{(\mathbf{s})}_n(x) \quad \textrm{interlaces}\quad \poly^{(\mathbf{s})}_{n+1}(x). \] \end{thm} \begin{proof} By definition, $\poly^{(\mathbf{s})}_n(x)$ has only nonnegative coefficients, so we can apply Corollary~\ref{cor:HOW}. Set $m = s_{n+1}$, $c_1 = 1$, $c_2 = \dotso = c_{m} = 0$, $b_1 = \dotso = b_{m} = 1$ and $f_i = \poly^{(\mathbf{s})}_{n+1,i-1}(x)$ for all $1\le i \le m$ to get that $\poly^{(\mathbf{s})}_{n}(x) = \poly^{(\mathbf{s})}_{n+1,0}(x) = f_1$ interlaces $f_1 + \dotsb + f_m = \sum_{i=0}^{m-1} \poly^{(\mathbf{s})}_{n+1,i}(x) = \poly^{(\mathbf{s})}_{n+1}(x).$ \end{proof} \section{Applications} \label{sec:applications} In this section, we show that Theorem~\ref{thm:main} contains as special cases several existing real-rootedness results on (generalized) Eulerian polynomials, as well as some results which appear to be new. In particular, we prove the real-rootedness of the Eulerian polynomials of type $D$ in Subsection~\ref{subsec:coxeter}. \subsection{Permutations} We first show that Theorem~\ref{thm:main} implies the real-rootedness of the familiar Eulerian polynomials, a result known since Frobenius \cite{Fro}. For $\pi \in \Sn$, let $\D \pi$ be the descent set of $\pi$, \[ \D \pi = \{ i \in \{1, \ldots, n-1\} \mid \pi_i > \pi_{i+1}\}, \] and let $\inv \pi$ be the number of {\em inversions} of $\pi$: \[ \inv \pi \ = \left |\{(i,j) \mid 1 \leq i < j \leq n \ {\rm and} \ \pi_i > \pi_j \} \right |. \] We will make use of the following bijection between $\Sn$ and $\I_n^{(1,2, \ldots, n)}$ which was proved in \cite[Lemma 1]{SS} to have the properties claimed. 
\begin{lem} The mapping $\phi: \Sn \rightarrow \I_{n}^{(1,2, \ldots, n)}$ defined by $\phi(\pi) = \boldsymbol{t} = (t_1, t_2, \ldots, t_n)$ for $\pi = (\pi_1, \dotsc, \pi_n)$ as \[ t_i \ = \ \left|\{j \in \{1,2, \ldots, i-1\} \mid \pi_j>\pi_i\}\right| \] is a bijection satisfying both $\D \pi = \A \boldsymbol{t}$ and $\inv \pi = |\boldsymbol{t}| = t_1+t_2+ \cdots + t_n$. \label{Desinv} \end{lem} \begin{cor} For $n \geq 1$, the Eulerian polynomial, \[ A_n(x) =\sum_{\pi \in \Sn} x^{\des \pi} , \] has only real roots. \end{cor} \begin{proof} By Lemma \ref{Desinv}, $A_n(x) = \E_n^{(1,2, \ldots, n)}(x), $ which has all roots real by Theorem~\ref{thm:main} with $\mathbf{s}=(1,2, \ldots, n)$. \end{proof} \subsection{Signed permutations} \label{sec:signedpermutations} Let $\Bn$ denote the hyperoctahedral group, whose elements are the signed permutations of $\{1,2, \ldots, n\}$. Each $\sigma \in \Bn$ has the form $(\pm \pi_1, \pm \pi_2, \ldots, \pm \pi_n)$ where $\pi = (\pi_1, \dotsc, \pi_n) \in \Sn$. In defining the notion of a ``descent'' on $\Bn$, various orderings have been used in the literature. In this subsection, we will assume the \[-1 <_B -2 <_B \cdots <_B -n <_B 0 <_B 1 <_B 2 <_B \cdots <_B n\] ordering, since it generalizes naturally to the wreath products discussed in the next subsection. (For another ordering used for $\Bn$, see Subsection~\ref{subsec:coxeter}.) Let $\sigma$ be a signed permutation of length $n$. An index $i \in \{0,1, \ldots, n-1\}$ is a {\em descent} of $\sigma$ if $\sigma_i >_B \sigma_{i+1}$, where $\sigma_0:=0$. Let $\desB \sigma$ denote the number of descents of $\sigma \in \Bn$. There is a correspondence between statistics on signed permutations and statistics on inversion sequences $\I_n^{(\mathbf{s})}$ with $\mathbf{s} = (2,4,6, \dotsc)$. The following was shown in \cite[eq.~(26)]{SS}. \begin{lem} \begin{equation} \sum_{t \geq 0} (2t+1)^n x^t \ = \ \frac{\E_n^{(2,4,\ldots, 2n)}(x)} {(1-x)^{n+1}}\,.
\label{signed_perms} \end{equation} \label{lem:signed_perms} \end{lem} On the other hand, the infinite series in (\ref{signed_perms}) was shown by Brenti in \cite[Theorem~3.4]{brenti94} to satisfy: \begin{equation} \sum_{t \geq 0} (2t+1)^n x^t \ = \ \frac{\sum_{\sigma \in \Bn}x^{\desB(\sigma)}} {(1-x)^{n+1}}\,. \label{steingrimsson} \end{equation} So, we have the following result, originally due to Brenti \cite[Corollary~3.7]{brenti94}. \begin{cor} \label{cor:typeB} The descent polynomial for signed permutations, \[ B_n(x) \ := \ \sum_{\sigma \in \Bn}x^{\desB(\sigma)}, \] has all real roots. \end{cor} \begin{proof} Combining \eqref{signed_perms} and \eqref{steingrimsson}, $B_n(x) \ = \ \E_n^{(2,4,\ldots, 2n)}(x)$. The result follows with $\mathbf{s}=(2,4,\ldots, 2n)$ from Theorem~\ref{thm:main}. \end{proof} \subsection{$k$-colored permutations} For a positive integer $k$, the wreath product $\w$ of the cyclic group $\integers_k$ of order $k$ with the symmetric group $\Sn$ generalizes both $\Sn$ (the case $k=1$) and $\Bn$ ($k=2$). We regard $\w$ as the set of $k$-colored permutations, or as pairs $(\pi, \xi)$, written as \[ \pi^{\xi} \ = \ (\pi_1^{\xi_1}, \pi_2^{\xi_2}, \ldots, \pi_n^{\xi_n}), \] where $\pi=(\pi_1, \ldots, \pi_n) \in \Sn$ and $\xi = (\xi_1, \ldots, \xi_n) \in \{0,1, \ldots, k-1\}^n$. The {\em descent set} of $\pi^{\xi} \in \w$ is \begin{align} \D \pi^{\xi} & \ = \ \{i \in \{0, \ldots, n-1\} \mid \xi_i < \xi_{i+1}, \ {\rm or} \ \xi_i = \xi_{i+1} \ {\rm and} \ \pi_i > \pi_{i+1}\}, \label{des} \end{align} with the convention that $\pi_0=\xi_0=0$. Let $\des \pi^{\xi} =\left|\D \pi^{\xi}\right|$ denote the number of descents. Note that this definition of $\des$ agrees with $\des$ on $\Sn$ when $k=1$, and with $\desB$ on $\Bn$ when $k=2$. The descent polynomial for $G_{n,k} := \w$ is defined analogously as \[ G_{n,k}(x) \ := \ \sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi} }.
\] As we now describe, the statistics on $\w$ are related to statistics on $\mathbf{s}$-inversion sequences, $\I_n^{(\mathbf{s})}$, with $\mathbf{s}=(k,2k, \ldots, nk)$. The following bijection was proven in \cite[Theorem 3]{PS2} to map the descent set on colored permutations, $\w$, to the ascent set on inversion sequences $\I_n^{(k,2k, \ldots, nk)}$. \begin{lem} For each pair $(n,k)$ with $n \geq 1$, $k \geq 1$, define \[ \Theta: \ \ \w \ \longrightarrow \ \I_n^{(k,2k, \ldots, nk)} \] by \begin{equation} \mathbf{e} \ = \ \Theta(\pi_1^{\xi_1}, \pi_2^{\xi_2}, \ldots, \pi_n^{\xi_n}) \ = \ (\xi_1 + t_1,\ 2 \xi_2 + t_2,\ \ldots,\ n \xi_n + t_n), \end{equation} where $(t_1, t_2, \ldots, t_n) = \phi(\pi)$, for $\phi$ defined on $\Sn$ as in Lemma \ref{Desinv}. Then \[ \A \mathbf{e} \ = \ \D \pi^{\xi}. \] \label{lem:theta} \end{lem} The following result is originally due to Steingr\'{i}msson \cite[Theorem~3.19]{Stein}. \begin{cor} For each pair $(n,k)$ with $n \geq 1$, $k \geq 1$, the descent polynomial of $\w$ has all roots real. \end{cor} \begin{proof} By Lemma \ref{lem:theta}, $G_{n,k}(x) \ = \ \E_n^{(k,2k, \ldots, nk)}(x)$, so the result follows from Theorem~\ref{thm:main} with $\mathbf{s}=(k,2k, \ldots, nk)$. \end{proof} In Section~\ref{sec:q-analogs}, we will use the fact that the bijection $\Theta$ of Lemma \ref{lem:theta} relates other statistics of $\w$ and $\I_n^{(k,2k, \ldots, nk)}$, to show that several $q$-analogs of $G_{n,k}(x)$ are real-rooted for all positive $q$, settling some open questions. \subsection{Finite Coxeter groups} \label{subsec:coxeter} The symmetric group and the hyperoctahedral group are examples of finite Coxeter groups. The descent statistic can be extended to all such groups (with the appropriate choice of generators), and hence one can define the Eulerian polynomials for all finite Coxeter groups, sometimes called Coxeter systems (see \cite{BB05, brenti94}).
Brenti showed that these polynomials have only real roots for type $B$ (see Corollary~\ref{cor:typeB}) and for the exceptional groups, and conjectured that this is the case in general \cite[Conjecture~5.2]{brenti94}. \begin{conj} \label{conj:Coxeter} The Eulerian polynomials for all finite Coxeter groups have only real roots. \end{conj} Brenti also showed by a simple argument that it is enough to check this for irreducible finite Coxeter groups. Combining this with the above results reduces Conjecture~\ref{conj:Coxeter} to the case of even-signed permutations \cite[Conjecture~5.1]{brenti94}. \begin{conj} \label{conj:typeD} The Eulerian polynomials of type $D$ have only real roots. \end{conj} In this subsection, we give the first proof of Conjecture~\ref{conj:typeD}. To be precise, we view the Coxeter group of type $B$ (resp.~$D$) of rank $n$, denoted by $\Bn$ (resp.~$\Dn$), as the set of signed (resp. even-signed) permutations of the set $\{1, \dotsc, n\}$. The type $B$ and $D$ descents have the following simple combinatorial interpretation (see \cite{brenti94, BB05}). For a signed (resp. even-signed) permutation $\sigma$ given in its ``window notation'' $(\sigma_1, \dotsc, \sigma_n)$, let \begin{align} \D_B \sigma &= \{ i \in \{1,\dotsc, n-1\} \mid \sigma_i > \sigma_{i+1}\} \cup \{0 \mid \mathrm{if}\; \sigma_1 < 0\}, \label{def:B-des} \\ \D_D \sigma &= \{ i \in \{1,\dotsc, n-1\} \mid \sigma_i > \sigma_{i+1}\} \cup \{0 \mid \mathrm{if}\; \sigma_1+\sigma_2 < 0\}. \label{eq:Ddes} \end{align} Let us start with a simple observation. Note that the type $D$ descent statistic, $\D_D$, defined in (\ref{eq:Ddes}) can be extended to all signed permutations. Furthermore, $\D_D$ is equidistributed over even-signed and odd-signed permutations. In other words, we have the following equality. \begin{lem} For $n\ge 2,$ \[ \sum_{\sigma \in \Bn} x^{\des_D \sigma} = 2 \sum_{\sigma \in \Dn}x^{\des_D \sigma}.
\] \label{lem:involution} \end{lem} \begin{proof} The involution on $\Bn$ that swaps the values $1$ and $-1$ in (the window notation of) $\sigma\in\Bn$ is a bijection between $\Dn$ and $\Bn\setminus \Dn$ that preserves the type $D$ descent statistic whenever $n\ge 2$. \end{proof} Therefore, in order to avoid dealing with the parity of the signs and to allow for simpler recurrences, we will be working instead with the polynomial \begin{align} \label{Tdef} T_n(x) & = \sum_{\sigma \in \Bn}x^{\des_D \sigma}. \end{align} Clearly, $T_n(x)$ has all roots real if and only if $D_n(x)$ does (even for $n=1$, in the trivial case not covered by Lemma~\ref{lem:involution}, since $T_1(x) = x+1$ and $D_1(x)=1$). This observation allows us to focus our attention on signed permutations, with the goal of showing $T_n(x)$ has all real roots. We will prove the following inversion sequence representation of $T_n(x)$. For $\mathbf{e} = (e_1, \dotsc, e_n) \in \I^{(2,4,\dots, 2n)}_n$, the type $D$ ascent set of $\mathbf{e}$ is defined as \begin{equation} \A_D \mathbf{e} = \left\{i\in \{1,\dotsc, n-1\} \Bigm| \frac{e_i}{i} < \frac{e_{i+1}}{i+1}\right\} \cup \left\{0 \Bigm| {\rm if} \ e_1 + \frac{e_2}{2}\ge \frac{3}{2}\right\}. \label{eq:D-Asc} \end{equation} Let \[\asc_D \mathbf{e} = \left|\A_D \mathbf{e}\right|. \] \begin{lem} \label{lem:invseqD} For $n \ge 1,$ \[ T_n(x) = \sum_{\mathbf{e} \in \I^{(2,4,\dots, 2n)}_n} x^{\asc_D \mathbf{e}}\,. \] \label{lem:Tn-invseq} \end{lem} We will also make use of the following basic but practical observation. \begin{lem} \label{lem:observation} Let $a,b$ be nonnegative integers and let $p$ be a positive integer such that $0 \le a/p < 1$ and $0 \leq b/(p+1) < 1$. Then \[\frac{a}{p} < \frac{b}{p+1} \Longleftrightarrow a < b.\] \end{lem} \begin{proof} If $a < b$, then $a+1 \leq b$. Thus, since $a < p,$ $ (p+1)a = pa + a < pa + p = p(a+1) \leq p b. $ So, $(p+1)a < pb$. Conversely, if $a \geq b$, then $(p+1)a \geq (p+1)b \geq p b$, so $a/p \ge b/(p+1)$.
\end{proof} Clearly, the set of signed permutations, $\Bn$, has the same cardinality as the set of ``type $B$'' inversion sequences, $\I^{(2,4,\dots, 2n)}_n$. Next, we define a bijection $\psi$ between these sets that maps type $D$ descents in signed permutations to type $D$ ascents in the inversion sequences. We will prove several other properties of $\psi$ as well. Some will be used to establish the real-rootedness of $T_n(x)$---and hence $D_n(x)$---others will be needed in Section~\ref{subsec:affine} for the affine Eulerian polynomials. Throughout this subsection we will assume the natural ordering of integers, \[-n < \dotsb <-1 < 0 < 1 < \dotsb < n.\] For $\sigma=(\sigma_1, \ldots, \sigma_n) \in \Bn$, let $(t_1, \dotsc, t_n)= \phi(|\sigma_1|, \dotsc, |\sigma_n|)$ where $\phi$ is the map defined in Lemma~\ref{Desinv} and $(|\sigma_1|, \dotsc, |\sigma_n|)$ denotes the underlying permutation in $\Sn$. Define the map $\psi: \Bn \rightarrow \I^{(2,4,\dots, 2n)}_n$ as follows. Let $ \psi(\sigma) = (e_1, \ldots, e_n), $ where, for all $1\le i\le n$, \[e_i=\begin{cases} t_i & \text{if } \sigma_i > 0\,, \\ 2i-1-t_i & \text{if } \sigma_i < 0\,.\end{cases}\] \begin{thm} \label{bijection} The map $\psi: \Bn \rightarrow \I^{(2,4,\dots, 2n)}_n$ is a bijection satisfying the following properties. \begin{enumerate}[label=\emph{(\arabic*)}, ref=(\arabic*)] \item $\sigma_1 < 0$ if and only if $e_1 > 0$. \label{cond:typeB} \item $\sigma_n > 0$ if and only if $e_n < n$. \label{cond:typeBtilde} \item $\sigma_1 + \sigma_2 < 0$ if and only if $e_1 + e_2/2 \geq 3/2$. \label{cond:typeD} \item $\sigma_i > \sigma_{i+1}$ if and only if $e_i/i < e_{i+1}/(i+1)$, for $1\le i\le n-1$. \label{cond:typeA} \item $\sigma_{n-1} + \sigma_n > 0$ if and only if $e_{n-1}/(n-1) + e_n/n < (2n-1)/n$. \label{cond:typeDtilde} \end{enumerate} \end{thm} \begin{proof} Note that $\sigma_i < 0$ if and only if $e_i \geq i$, which proves \ref{cond:typeB} and \ref{cond:typeBtilde}.
Moreover, this shows that the map $\psi$ is a bijection since $\phi$ is. \begin{enumerate} \setcounter{enumi}{2} \item It is not too hard to see that it is sufficient to verify this claim for all $\sigma \in \mathfrak{B}_2$. See Table~\ref{table}. \begin{table}[ht] \begin{center} \begin{tabular}{|c||c|c|c|} \hline $\sigma\in\mathfrak{B}_2$ &$\mathbf{e} \in I_2^{(2,4)}$ & $\A_D \mathbf{e}$ & $\asc_D \mathbf{e}$ \\ \hline (1,2)&(0,0) & $\{ \ \}$ & 0 \\ \hline (-1,2)&(1,0) & $\{ \ \}$ & 0 \\ \hline (2,1) &(0,1) & $\{ 1 \}$ & 1 \\ \hline (-2,1)&(1,1) & $\{ 0 \}$ & 1 \\ \hline (2,-1)&(0,2) & $\{ 1 \}$ & 1 \\ \hline (-2,-1)&(1,2) & $\{ 0 \}$ & 1 \\ \hline (1,-2)&(0,3) & $\{ 0,1 \}$ & 2 \\ \hline (-1,-2)&(1,3) & $\{ 0,1 \}$ & 2 \\ \hline \end{tabular} \end{center} \caption{An example of the bijection for $n=2$.} \label{table} \end{table} \item To prove this claim, we consider four cases, based on the signs of $\sigma_i$ and $\sigma_{i+1}$. \begin{enumerate} \item{If $\sigma_i > 0$ and $\sigma_{i+1} > 0$}, then $e_i = t_i < i$ and $e_{i+1}=t_{i+1} < i+1$. By Lemma~\ref{Desinv}, $\sigma_i > \sigma_{i+1}$ if and only if $t_i < t_{i+1}$, i.e., if and only if $e_i < e_{i+1}$. By Lemma~\ref{lem:observation}, this is equivalent to $e_i/i < e_{i+1}/(i+1)$. \item{If $\sigma_i < 0$ and $\sigma_{i+1} < 0$}, then $e_i =2i-1- t_i $ and $e_{i+1}=2(i+1)-1-t_{i+1} $. Now $\sigma_i > \sigma_{i+1}$ if and only if $|\sigma_i| < |\sigma_{i+1}|$, which, applying Lemma~\ref{Desinv}, is equivalent to $t_i \geq t_{i+1}$. If $t_i \geq t_{i+1}$, \[ \frac{e_i}{i} = 2 - \frac{t_i+1}{i} \leq 2 - \frac{t_{i+1}+1}{i} < 2 - \frac{t_{i+1}+1}{i+1} = \frac{e_{i+1}}{i+1}. \] On the other hand, if $t_i < t_{i+1}$, then $t_i+1 \leq t_{i+1}$ and by Lemma~\ref{lem:observation}, $t_{i+1}/i < (t_{i+1}+1)/(i+1)$, so \[ \frac{e_i}{i} = 2 - \frac{t_i+1}{i} \geq 2 - \frac{t_{i+1}}{i} > 2 - \frac{t_{i+1}+1}{i+1} = \frac{e_{i+1}}{i+1}.
\] \item{If $\sigma_i < 0<\sigma_{i+1}$}, then $e_i =2i-1- t_i $ and $e_{i+1}=t_{i+1} \leq i$. Since $t_i \leq i-1$, $e_i \geq 2i-1-(i-1)=i.$ Thus we have \[ \frac{e_i}{i} \geq 1> \frac{i}{i+1} \geq \frac{e_{i+1}}{i+1}. \] \item{If $\sigma_i > 0>\sigma_{i+1}$}, then $e_i = t_i < i $ and $e_{i+1}= 2(i+1)-1-t_{i+1}.$ Since $t_{i+1} \leq i$, $e_{i+1} \geq 2(i+1)-1-(i)=i+1.$ Thus we have \[ \frac{e_i}{i} < 1 \leq \frac{e_{i+1}}{i+1}. \] \end{enumerate} \item Since $t_n = n-|\sigma_n|$, \[ e_n = \left \{ \begin{array}{ll} n-\sigma_n & {\rm if} \ \sigma_n > 0\\ n-1 + |\sigma_n| & {\rm if} \ \sigma_n < 0.\\ \end{array} \right. \] Note that $t_{n-1} = n-|\sigma_{n-1}| - \chi(|\sigma_n| > |\sigma_{n-1}|)$. Thus, \[ e_{n-1} = \left \{ \begin{array}{ll} n-\sigma_{n-1} - \chi(|\sigma_n| > |\sigma_{n-1}|) & {\rm if} \ \sigma_{n-1} > 0\\ n-3 + |\sigma_{n-1}| + \chi(|\sigma_n| > |\sigma_{n-1}|) & {\rm if} \ \sigma_{n-1} < 0.\\ \end{array} \right. \] First assume $\sigma_{n-1}+\sigma_{n} > 0$. Then either (i) $\sigma_{n-1} > 0$ and $\sigma_{n} > 0$, and so \[ \frac{e_{n-1}}{n-1}+ \frac{e_{n}}{n} \leq \frac{n-2}{n-1}+ \frac{n-1}{n} < \frac{2n-1}{n}\,, \] or (ii) $\sigma_{n-1} > 0$ and $\sigma_{n} < 0$ with $1 \leq |\sigma_{n}| < \sigma_{n-1} \leq n,$ in which case \begin{align*} \frac{e_{n-1}}{n-1}+ \frac{e_{n}}{n} &= \frac{n-\sigma_{n-1}}{n-1} + \frac{n-1+|\sigma_{n}|}{n}\\ &= \frac{2n-1}{n} + \left(\frac{|\sigma_{n}|}{n} - \frac{\sigma_{n-1}-1}{n-1} \right) < \frac{2n-1}{n}\,, \end{align*} or (iii) $\sigma_{n-1} < 0$ and $\sigma_{n} > 0$ with $ 1\le |\sigma_{n-1}| < \sigma_{n}\le n,$ so that \begin{align*} \frac{e_{n-1}}{n-1}+ \frac{e_{n}}{n} &= \frac{n-2+|\sigma_{n-1}|}{n-1} + \frac{n-\sigma_{n}}{n}\\ &= \frac{2n-1}{n} + \left(\frac{|\sigma_{n-1}|-1}{n-1} - \frac{\sigma_n-1}{n}\right) < \frac{2n-1}{n}\,. \end{align*} The last inequality in both cases (ii) and (iii) follows by Lemma~\ref{lem:observation}. Now assume $\sigma_{n-1}+\sigma_{n} < 0$. 
Then either (iv) $\sigma_{n-1} < 0$ and $\sigma_{n} < 0$, so \[ \frac{e_{n-1}}{n-1}+ \frac{e_{n}}{n} \ge 1 + 1 > \frac{2n-1}{n}\,,\] or (v) $\sigma_{n-1} < 0$ and $\sigma_{n} > 0$ with $1\le |\sigma_{n}| < |\sigma_{n-1}| \le n$, in which case \begin{align*} \frac{e_{n-1}}{n-1}+ \frac{e_{n}}{n} &=\frac{n-3 +|\sigma_{n-1}| }{n-1} + \frac{n-|\sigma_{n}|}{n}\\ &= \frac{2n-1}{n} + \left (\frac{|\sigma_{n-1}| - 2}{n-1} - \frac{|\sigma_{n}| - 1}{n} \right ) \ge \frac{2n-1}{n}\,, \end{align*} or (vi) $\sigma_{n-1} > 0$ and $\sigma_{n} < 0$ with $1\le |\sigma_{n-1}| < |\sigma_{n}| \le n$, in which case \begin{align*} \frac{e_{n-1}}{n-1}+ \frac{e_{n}}{n} &= \frac{n-1 -|\sigma_{n-1}| }{n-1} + \frac{n+|\sigma_{n}|-1}{n} \\&= \frac{2n-1}{n} + \left (\frac{|\sigma_{n}| }{n} - \frac{|\sigma_{n-1}| }{n-1} \right ) \ge \frac{2n-1}{n}\,. \end{align*} Again, we applied Lemma~\ref{lem:observation} in the last steps of (v) and (vi). \end{enumerate} \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:invseqD}] Follows from parts \ref{cond:typeD} and \ref{cond:typeA} of Theorem~\ref{bijection}. \end{proof} \begin{cor} Parts \ref{cond:typeB} and \ref{cond:typeA} of Theorem~\ref{bijection} can be used to give an alternative proof of Corollary~\ref{cor:typeB}. \end{cor} \begin{lem} For $n \geq 2$ and $0 \leq i < 2(n+1),$ \[T_{n+1,i}(x) = \sum_{\ell=0}^{\lceil ni/(n+1)\rceil - 1} xT_{n,\ell}(x) + \sum_{\ell=\lceil ni/(n+1)\rceil}^{2n-1}T_{n,\ell}(x),\] with initial conditions $T_{2,0}(x)=2$, $T_{2,1}(x)= T_{2,2}(x)=2x$, and $T_{2,3}(x)=2x^2$. \label{lem:T-recurrence} \end{lem} \begin{proof} The initial conditions can be checked from Table~\ref{table}. Now suppose $n \geq 2$ and $\mathbf{e}=(e_1, \ldots, e_{n+1}) \in \I_{n+1}^{(2,4,\dotsc,2n+2)}$ with $e_{n+1}=i$. Then, by the definition of the type $D$ ascent set, $n \in \A_D \mathbf{e}$ if and only if $e_n/n < i/(n+1)$ or, equivalently, if and only if $e_n \leq \lceil ni/(n+1)\rceil - 1$.
So, \[ \asc_D \mathbf{e} = \asc_D(e_1, \ldots, e_n) + \chi(e_n \leq \lceil ni/(n+1)\rceil - 1). \] We conclude the proof by letting $\ell = e_n$. \end{proof} Finally, we are in a position to prove Brenti's conjecture (Conjecture~\ref{conj:typeD}). \begin{thm} For $n\ge 2$, the polynomial $T_n(x)$ has only real roots. In fact, for $0 \leq i < 2n$, $T_{n,i}(x)$ has only real roots. \label{thm:typeD} \end{thm} \begin{proof} We prove the statement by induction for $n\ge 4$ using Theorem~\ref{thm:compatible}. Since the hypotheses of Theorem~\ref{thm:compatible} do not hold for $n=2$ and $n=3$, we check these two cases separately. Clearly, $T_2(x) = 2(x+1)^2$ has only real roots, but the polynomials $T_{2,0}(x)=2,T_{2,1}(x)=T_{2,2}(x)=2x,T_{2,3}(x)=2x^2$ fail to be compatible, since $T_{2,0}(x) + T_{2,3}(x)$ has no real roots. Using the recurrence given in Lemma~\ref{lem:T-recurrence} we can easily compute $T_{n,i}(x)$ for $n=3$. While $T_3(x) = 2(x^3+11x^2+11x+1)$ has only real roots, the polynomials $T_{3,0}(x) = 2(x+1)^2$, $T_{3,1}(x) = 2x(x+3)$, $T_{3,2}(x) = T_{3,3}(x) = 4x(x+1)$, $T_{3,4}(x) = 2x(3x+1)$, $T_{3,5}(x) = 2x(x+1)^2$ are not compatible, e.g., $T_{3,0}(x)+T_{3,4}(x)$ has no real roots. However, iterating one more time, we obtain the following eight polynomials (the approximate values of their roots are also given for the reader's convenience): \[ \begin{tabular}{rlll} $T_{4,0}(x) = $& $2(x+1)(x^2+10x+1)$ && $\{-9.899, -1, -0.101\}$\\ $T_{4,1}(x) = $& $4x(x+1)(x+5)$&& $\{-5,-1,0\}$ \\ $T_{4,2}(x) = $& $2x(3x^2+14x+7)$&&$\{-4.097, -0.569, 0\}$\\ $T_{4,3}(x) = $& $2x(5x^2+14x+5)$&&$\{-2.380,-0.420, 0\}$ \\ $T_{4,4}(x) = $& $2x(5x^2+14x+5)$&&$\{-2.380,-0.420, 0\}$ \\ $T_{4,5}(x) = $& $2x(7x^2+14x+3)$&&$\{-1.756, -0.244, 0\}$\\ $T_{4,6}(x) = $& $4x(x+1)(5x+1)$&&$\{-1,-0.2, 0\}$\\ $T_{4,7}(x) = $& $2x(x+1)(x^2+10x+1)$&&$\{-9.899, -1, -0.101,0\}$.
\end{tabular}\] We need to show that these eight polynomials are indeed pairwise compatible and also that $xT_{4,i}(x)$ and $T_{4,j}(x)$ are compatible for all $0\le i < j \le 7$. By Lemma~\ref{lem:interlacing}, this can be done by checking the roots explicitly to verify that $T_{4,i}(x)$ interlaces $T_{4,j}(x)$ for all $0 \le i < j \le 7$. Proceeding by induction on $n$, successive applications of Theorem~\ref{thm:compatible} give us that for all $n\ge 4$ the polynomials $T_{n,0}(x), \dotsc, T_{n, 2n-1}(x)$ are pairwise compatible and also that $xT_{n,i}(x)$ and $T_{n,j}(x)$ are compatible for all $0\le i < j \le 2n-1$. In particular, the former is equivalent to saying that these $2n$ polynomials are compatible. Therefore, their sum, $T_n(x)$, has only real roots for all $n\ge 4$ as well. \end{proof} \subsection{Affine descents in Weyl groups} \label{subsec:affine} Recently, Dilks, Petersen and Stembridge defined and studied Eulerian-like polynomials associated to irreducible affine Weyl groups. In \cite{DPS09}, they define these ``affine'' Eulerian polynomials as generating functions for ``affine descents'' over the corresponding finite Weyl group. An affine descent is similar to an ordinary descent in a Weyl group, except that the reflection corresponding to the highest root (in the underlying root system) may also contribute a descent, depending on its effect on length. Dilks, Petersen and Stembridge observed that these polynomials have interesting properties similar to their counterparts for the Coxeter groups and proposed a companion conjecture to Brenti's conjecture. \begin{conj}[Conjecture~4.1 in \cite{DPS09}] The affine Eulerian polynomials for all finite Weyl groups have only real roots. \label{conj:weyl} \end{conj} As they pointed out, the type $A$ and $C$ affine Eulerian polynomials were already known to be multiples of the classical Eulerian polynomial and hence have only real roots.
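Returning for a moment to the proof of Theorem~\ref{thm:typeD}: the recurrence of Lemma~\ref{lem:T-recurrence} makes the base-case computations easy to reproduce by machine. The sketch below is our own (a floating-point sanity check, not part of the proof); it rebuilds the polynomials $T_{4,i}(x)$ listed above and confirms two entries of the table, as well as the real-rootedness of $T_4(x)$.

```python
from math import ceil
import numpy as np
from numpy.polynomial import Polynomial as P

x = P([0, 1])

def T_refined(n):
    """List [T_{n,0}, ..., T_{n,2n-1}] from the recurrence in Lemma (T-recurrence)."""
    Ts = [P([2.0]), 2 * x, 2 * x, 2 * x**2]          # n = 2 initial conditions
    for m in range(2, n):                            # build T_{m+1,i} from T_{m,l}
        Ts = [sum((x * t for t in Ts[:ceil(m * i / (m + 1))]), P([0]))
              + sum(Ts[ceil(m * i / (m + 1)):], P([0]))
              for i in range(2 * (m + 1))]
    return Ts

T4 = T_refined(4)
assert T4[0] == 2 * (x + 1) * (x**2 + 10*x + 1)      # matches the table above
assert T4[7] == 2 * x * (x + 1) * (x**2 + 10*x + 1)
# T_4(x) is the sum of the refined pieces, and should be real-rooted
T4_sum = sum(T4, P([0]))
assert np.all(np.abs(T4_sum.roots().imag) < 1e-8)
```

This check confirms the displayed factorizations but, being numerical, it cannot replace the exact root comparisons invoked via Lemma~\ref{lem:interlacing}.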
In this section, we prove one of the remaining cases, for type $B$ (the type $D$ case remains open). The affine Eulerian polynomial of type $B$ is defined in \cite[Section 5.3]{DPS09} as the generating function of the ``affine descents'' over the corresponding finite Weyl group, $\Bn$, \[ \widetilde{B}_n(x) = \sum_{\sigma\in \Bn} x^{\widetilde{\des}_B \sigma}, \] where for a signed permutation $\sigma = (\sigma_1, \dotsc, \sigma_{n}) \in \Bn$ the affine descent statistic is computed as \[\widetilde{\des}_B \sigma = \chi(\sigma_1 <0) + \left | \{1\le i \le n-1 \mid \sigma_i > \sigma_{i+1}\}\right | + \chi(\sigma_{n-1} + \sigma_{n} > 0).\] Notice that the affine Eulerian polynomial of type $B$ is intimately related to the type $D$ Eulerian polynomial in the following way. \begin{thm} For $n \ge 2$, \[\widetilde{B}_n(x) = T_{n+1,n+1}(x)\,,\] where $T_{n,i}(x)$ is the refinement of the type $D$ Eulerian polynomial defined in (\ref{Tdef}). \end{thm} \begin{proof} It is easy to see that, under the involution $(\sigma_1, \ldots, \sigma_n) \mapsto (-\sigma_n, \ldots, -\sigma_1)$, $\widetilde{\des}_B$ has the same distribution over $\Bn$ as the statistic \[ \widetilde{\stat}_B \sigma = \chi(\sigma_n > 0) + \left | \{1\le i \le n-1 \mid \sigma_i > \sigma_{i+1}\}\right | + \chi(\sigma_{2} + \sigma_{1} < 0).\] From Theorem~\ref{bijection} part \ref{cond:typeD} it follows that $\sigma_{2} + \sigma_{1} < 0$ is equivalent to $e_1 + e_2/2 \ge 3/2$ and from part \ref{cond:typeBtilde} we have that $\sigma_{n} > 0$ if and only if $e_n < n$. Note $e_n < n$ is equivalent to $e_n/n < 1 = (n+1)/(n+1)$. So, $\widetilde{B}_n(x) = T_{n+1,n+1}(x)$. \end{proof} \begin{cor} For $n\ge 2$, $\widetilde{B}_n(x)$ has only real roots. \end{cor} \begin{proof} Follows from the fact that the polynomials $T_{n,i}(x)$ have only real roots (see Theorem~\ref{thm:typeD}). \end{proof} As we mentioned earlier, there is an analogous conjecture for type $D$ which remains unsolved.
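As a quick sanity check of the identity $\widetilde{B}_n(x) = T_{n+1,n+1}(x)$, one can compute both sides by brute force for small $n$; the sketch below (helper names are ours) confirms $\widetilde{B}_2(x) = 4x + 4x^2$, which agrees with $T_{3,3}(x) = 4x(x+1)$ as computed in the proof of Theorem~\ref{thm:typeD}.

```python
from itertools import permutations, product

def affine_desB(sigma):
    """Affine type-B descent statistic of Dilks-Petersen-Stembridge (n >= 2)."""
    n = len(sigma)
    return ((sigma[0] < 0)
            + sum(sigma[i] > sigma[i + 1] for i in range(n - 1))
            + (sigma[-2] + sigma[-1] > 0))

def signed_perms(n):
    """All 2^n * n! signed permutations of {1, ..., n}."""
    for p in permutations(range(1, n + 1)):
        for signs in product([1, -1], repeat=n):
            yield tuple(s * v for s, v in zip(signs, p))

def poly(stats):
    """Coefficient list (ascending) of sum_sigma x^{stat(sigma)}."""
    c = [0] * (max(stats) + 1)
    for s in stats:
        c[s] += 1
    return c

# widetilde{B}_2(x) = 4x + 4x^2, i.e. T_{3,3}(x) = 4x(x+1)
assert poly([affine_desB(s) for s in signed_perms(2)]) == [0, 4, 4]
```

The same brute-force comparison (against the recurrence of Lemma~\ref{lem:T-recurrence}) extends to $n=3,4$ before the $2^n n!$ growth becomes prohibitive.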
\begin{conj}[\cite{DPS09}] For $\sigma \in \Dn,$ let \[\widetilde{\des}_D \sigma = \chi(\sigma_1+\sigma_2 <0) + \left | \{1\le i \le n-1 \mid \sigma_i > \sigma_{i+1}\}\right | + \chi(\sigma_{n-1} + \sigma_{n} > 0)\,.\] Then the affine Eulerian polynomial of type $D$ \[ \sum_{\sigma \in \Dn} x^{\widetilde{\des}_D \sigma}\] has only real roots. \end{conj} By Theorem \ref{bijection} (parts \ref{cond:typeD}, \ref{cond:typeA} and \ref{cond:typeDtilde}) we can at least express the type $D$ affine Eulerian polynomial in terms of ascent statistics on inversion sequences. \begin{cor} The type $D$ affine Eulerian polynomial satisfies \[ 2 \sum_{\sigma \in \Dn} x^{\widetilde{\des}_D \sigma} = \sum_{\mathbf{e} \in {\I_n^{(2,4,\dotsc,2n)}}} x^{\widetilde{\asc}_D \mathbf{e}}, \] where the type $D$ affine ascent statistic for $\mathbf{e} \in{\I_n^{(2,4,\dots,2n)}}$ is given by \begin{align*} \widetilde{\asc}_D\, \mathbf{e} =& \;\chi(e_1 + e_2/2 \ge 3/2) + \left | \left\{1\le i\le n-1 \;\Big|\; \frac{e_i}{i} < \frac{e_{i+1}}{i+1}\right\} \right | \\ &+ \chi( e_{n-1}/(n-1) + e_n/n < (2n-1)/n). \end{align*} \end{cor} \subsection{$k$-ary words} The {\em $k$-ary words of length $n$} are the elements of the set $\{0,1, \ldots, k-1\}^n$. Define an ascent statistic for $w \in \{0,1, \ldots, k-1\}^n $ by \[ \asc w = |\left\{i \in \{0, 1, \ldots, n-1\} \mid w_i < w_{i+1}\right\}| , \] with the convention that $w_0=0$. \begin{cor} The ascent polynomial for $k$-ary words, \[ \sum_{w \in \{0,1, \ldots, k-1\}^n} x^{\asc w}, \] has all real roots. \end{cor} \begin{proof} Clearly, using the identity mapping from $\{0,1, \ldots, k-1\}^n $ to $ \I_{n}^{(k,k, \ldots, k)}$, $\sum_{w \in \{0,1, \ldots, k-1\}^n } x^{\asc w} \ = \ E_n^{(k,k,\dots, k)}(x)\,.$ So, the result follows by setting $\mathbf{s}=(k,k, \ldots, k)$ in Theorem~\ref{thm:main}.
\end{proof} It was shown in \cite[Corollary 8]{SS} using Ehrhart theory that \[ \frac{\sum_{\mathbf{e} \in \I_{n}^{(k,k, \ldots, k)}} x^{\asc \mathbf{e}}}{(1-x)^{n+1}} = \sum_{t \geq 0} \binom{n+kt}{n} x^t. \] \begin{rem} It was pointed out in \cite[Section 5]{DiaconisFulman} that the above series arises as the Hilbert series of the $k$th Veronese embedding of the coordinate ring of the full projective space $\mathbb{C}[x_1, \dotsc, x_{n+1}]$. \end{rem} \subsection{Excedances and number of cycles in permutations} For a permutation $\pi \in \Sn$, the {\em excedance} number of $\pi$, $\exc(\pi)$, is defined by \[ \exc(\pi) = |\left\{i \in \{1,2, \ldots, n\} \mid \pi(i) > i\right\}|,\] and the {\em cycle number} of $\pi$, $\cyc(\pi)$, is the number of cycles in the disjoint cycle representation of $\pi$. Let \[ A^{\exc,\cyc}_n(x,y) = \sum_{\pi \in \Sn} x^{\exc \pi} y^{\cyc \pi}. \] It was proven by Brenti in \cite[Theorem 7.5]{brenti2000} that $A^{\exc,\cyc}_n(x,y)$ has all roots real for every positive $y \in \reals$. This was extended by Br\"and\'en in \cite[Theorem 6.3]{MR2218995} to include values of $y$ for which $n+y \le 0$. In \cite{gopal}, $A^{\exc,\cyc}_n(x,1/k)$ was shown to be related to inversion sequences. This will allow us to deduce the real-rootedness in this special case (when $y = 1/k$) from Theorem~\ref{thm:main}. \begin{cor} For every positive integer $k$, the polynomial \[ A^{\exc,\cyc}_n(x,1/k) = \sum_{\pi \in \Sn} x^{\exc \pi} k^{-\cyc \pi} \] has only real roots. \end{cor} \begin{proof} Let $\mathbf{s}=(k+1,2k+1, \ldots,(n-1)k+1)$. It was shown in \cite[Theorems 3 and 6]{gopal} that for every positive integer $k$, \[ \sum_{\mathbf{e} \in \I_{n}^{(\mathbf{s})}} x^{\asc \mathbf{e}} = k^n A^{\exc,\cyc}_n(x,1/k).\] The corollary follows from Theorem~\ref{thm:main} with $\mathbf{s}=(k+1,2k+1, \ldots,(n-1)k+1)$.
\end{proof} \begin{rem} It is well-known that the pair of statistics $(\exc, \cyc)$ is equidistributed with $(\des, \mathrm{lrm})$, where $\mathrm{lrm}$ is the number of left-to-right minima in a permutation. \end{rem} \subsection{Multiset permutations} Simion \cite[Section 2]{MR728500} showed that for any $n$-element multiset, $M$, the descent polynomial for the set of permutations, $P(M)$, of $M$ has only real roots. A {\em descent} in a multiset permutation $\pi \in P(M)$ is an index $i \in \{1,2, \ldots, n-1\}$ such that $\pi_i > \pi_{i+1}$. When $M=\{1,1,2,2, \ldots, n,n\}$, there is a connection with inversion sequences. Let $\mathbf{s}$ be the sequence $\mathbf{s}=(1,1,3,2,5,3,7,4, \ldots)$, where for $i \geq 1$, $s_{2i}=i$ and $s_{2i-1}=2i-1$. Observe that the number of $\mathbf{s}$-inversion sequences of length $2n$ is the same as the number of permutations of $\{1,1,2,2, \ldots, n,n\}$: \[ \left |\I_{2n}^{(1,1,3,2,5,3,7,4, \ldots, 2n-1,n)} \right | \ = \ \frac{(2n)!}{2^n} = \ \left |P(\{1,1,2,2, \ldots, n,n\})\right |. \] We discovered that the distribution of ascents on the first set is equal to the distribution of descents on the second set. \begin{thm} \[ \sum_{\pi \in P(\{1,1,2,2, \ldots, n,n\})} x^{\des \pi} = \sum_{\mathbf{e} \in \I_{2n}^{(1,1,3,2,5,3, \ldots, 2n-1, n)}} x^{\asc \mathbf{e}}. \] \label{multiset_perms} \end{thm} \begin{proof} It was shown in \cite[Theorem 14]{SS} that \[ \sum_{t \geq 0}\left ( \frac{(t+1)(t+2)}{2} \right )^{ n} x^t \ = \ \frac{\sum_{\mathbf{e} \in \I_{2n}^{(1,1,3,2,5,3, \ldots, 2n-1,n)}} x^{\asc \mathbf{e}}} {(1-x)^{2n+1}}. \] MacMahon \cite[Volume 2, Chapter IV, p. 211, \S462]{MR2417935} showed that \[ \frac{\sum_{\pi \in P(\{1^{p_1}, \dotsc, n^{p_n}\})} x^{\des \pi}}{(1-x)^{1+\sum_i p_i}} = \sum_{t\ge 0}\prod_{i=1}^{n}\frac{(t+1)\dotsb (t+p_i)}{p_i!} \,x^t\,.
\] In particular, when $p_i = 2$ for all $i$, this implies \[ \frac{\sum_{\pi \in P(\{1,1,2,2, \ldots, n,n\})} x^{\des \pi}}{(1-x)^{2n+1}} \ = \ \sum_{t \geq 0}\left ( \frac{(t+1)(t+2)}{2} \right )^{ n} x^t. \] \end{proof} We thus obtain the following special case of Simion's result as a corollary of Theorem~\ref{thm:main}. \begin{cor} The polynomial \[ \sum_{\pi \in P(\{1,1,2,2, \ldots, n,n\})} x^{\des \pi} \] has only real roots. \end{cor} The sequence $\mathbf{s}=(1,1,3,2,5,3,7,4, \ldots)$ was studied in \cite{CSS}, where it was shown that the $\mathbf{s}$-lecture hall partitions lead to a new finite model for the {\em Little G\"ollnitz identities}. There was a companion sequence, $\mathbf{s}=(1,4,3,8,5,12, \ldots, 2n-1,4n)$, defined by $s_{2i}=4i$, $s_{2i+1}=2i+1$, which we now consider in the context of multiset permutations. Let $P^{\pm}(\{1,1,2,2, \ldots, n,n\})$ be the set of all \emph{signed} permutations of the multiset $\{1,1,2,2, \ldots, n,n\}$. The elements are those of the form $(\pm \pi_1, \pm \pi_2, \ldots, \pm \pi_{2n})$, where $( \pi_1, \pi_2, \ldots, \pi_{2n}) \in P(\{1,1,2,2, \ldots, n,n\})$. Note that \[ \left | P^{\pm}(\{1,1,2,2, \ldots, n,n\}) \right | \ = \ \frac{(2n)!}{2^n} 2^{2n} \ = \ 2^n(2n)! \ = \ \left | \I_{2n}^{(1,4,3,8, \ldots, 2n-1,4n)} \right | . \] From our experiments it appears that the distribution of descents on the first set is equal to the distribution of ascents on the second set, and we make that conjecture. \begin{conj} \[ \sum_{\pi \in P^{\pm}(\{1,1,2,2, \ldots, n,n\})} x^{\des \pi} = \sum_{\mathbf{e} \in \I_{2n}^{(1,4,3,8, \ldots, 2n-1, 4n)}} x^{\asc \mathbf{e}}. \] \end{conj} If this conjecture is true, it would follow as a corollary of Theorem~\ref{thm:main} that the descent polynomial for the signed permutations of the multiset $\{1,1,2,2, \ldots, n,n\}$ has all real roots.
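Theorem~\ref{multiset_perms} can be verified by brute force for small $n$ (and the conjecture above can be tested the same way once a descent convention for signed multiset permutations is fixed). A sketch of the unsigned check, with our own function names and the ascent convention $e_0 = 0$ used throughout the paper:

```python
from itertools import permutations, product
from fractions import Fraction

def desc_poly_multiset(n):
    """Descent distribution over the permutations of {1,1,2,2,...,n,n}."""
    base = [i for i in range(1, n + 1) for _ in range(2)]
    coeffs = [0] * (2 * n)
    for p in set(permutations(base)):      # dedupe repeated multiset rearrangements
        coeffs[sum(p[i] > p[i + 1] for i in range(2 * n - 1))] += 1
    return coeffs

def asc_poly_inv(n):
    """Ascent distribution over I_{2n}^{(1,1,3,2,...,2n-1,n)} (with e_0 = 0)."""
    s = [v for i in range(1, n + 1) for v in (2 * i - 1, i)]
    coeffs = [0] * (2 * n)
    for e in product(*(range(si) for si in s)):
        r = [Fraction(0)] + [Fraction(e[i], s[i]) for i in range(2 * n)]
        coeffs[sum(r[i] < r[i + 1] for i in range(2 * n))] += 1
    return coeffs
```

Both functions return `[1, 4, 1, 0]` for $n = 2$, matching the common distribution $1 + 4x + x^2$; exact rational comparisons (`Fraction`) avoid any floating-point ties in the ascent test.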
It was shown in \cite[Theorem 13]{SS} that \[ \sum_{t \geq 0}\left ( (t+1)(2t+1)\right )^{ n} x^t \ = \ \frac{\sum_{\mathbf{e} \in \I_{2n}^{(1,4,3,8,5,12, \ldots, 2n-1,4n)}} x^{\asc \mathbf{e}}} {(1-x)^{2n+1}}. \] It may be possible to show that $\sum_{\pi \in P^{\pm}(\{1,1,2,2, \ldots, n,n\})} x^{\des \pi}$ satisfies the same identity. Finally, it would be interesting to investigate $q$-analogs of the above identities and possibly the conjecture. \section{Geometric consequences} \label{sec:geometry} In this section, we describe some geometric consequences of Theorem~\ref{thm:main}. \subsection{The $h$-polynomials of finite Coxeter complexes and reduced Steinberg tori} There is a natural simplicial complex associated with a finite Coxeter group $W$ and its reflection representation. The Coxeter complex of $W$ is the simplicial complex $\Sigma = \Sigma(W)$ formed as the intersection of a unit sphere and the reflecting hyperplanes of $W$. The $f$-polynomial of a $(d-1)$-dimensional simplicial complex $\Delta$ is the generating function for the dimensions of the faces of the complex: \[f(\Delta,x) = \sum_{F \in \Delta} x^{\dim F + 1}. \] The $h$-polynomial of $\Delta$ is a transformation of the $f$-polynomial: \[h(\Delta,x) = (1-x)^df\left(\Delta,\frac{x}{1-x}\right). \] In fact, the $h$-polynomial of a Coxeter complex $\Sigma(W)$ is the Eulerian polynomial of type $W$ discussed in Subsection~\ref{subsec:coxeter}: \[h(\Sigma(W),x) = \sum_{\sigma\in W} x^{\des_W(\sigma)}. \] In \cite{DPS09}, Dilks, Petersen, and Stembridge defined a Boolean cell complex, called the \emph{reduced Steinberg torus} whose $h$-polynomial is the affine Eulerian polynomial of type $W$ discussed in Subsection~\ref{subsec:affine}. A curious property that implies unimodality and is implied by real-rootedness (under certain conditions) is called \emph{$\gamma$-nonnegativity}. 
Every polynomial $h(x)$ of degree $n$ that is palindromic, i.e., satisfies $h(x) = x^nh(1/x)$, can be written uniquely in the form \[h(x) = \sum_{i=0}^{\lfloor n/2\rfloor} \gamma_i x^i(1+x)^{n-2i}.\] If the coefficients $\gamma_i$ are nonnegative for $0\le i \le n/2$, then we say that $h(x)$ is $\gamma$-nonnegative. For a (palindromic) polynomial with only positive coefficients, real-rootedness implies $\gamma$-nonnegativity (see \cite[Lemma~4.1]{Br04}, \cite[Remark~3.1.1]{Gal05}). Since all the Eulerian and the affine Eulerian polynomials of type $W$ are palindromic with positive coefficients, the real-rootedness of all but one of these polynomials (recall that the case of the affine $D$-Eulerian polynomial is still open) implies their $\gamma$-nonnegativity. The $\gamma$-nonnegativity was established for various types combinatorially: for type $B$ in \cite[4.15]{Petersen}, for type $D$ in \cite[Theorem~6.9]{Chow} (see also \cite[Theorem~1.2]{Stembridge}); and for the affine Eulerian polynomials in \cite[Theorem~4.2]{DPS09}. \subsection{The $\h^*$-polynomials of $\mathbf{s}$-lecture hall polytopes} For background, the {\em Ehrhart series} of a polytope $\PP$ in $\reals^n$ is the series \[ \sum_{t \geq 0} |t\PP \cap \integers^n| x^t, \] where $t\PP$ is the $t$-fold {\em dilation} of $\PP$: \[ t\PP = \left\{(t\lambda_1, t \lambda_2, \ldots, t \lambda_n) \mid (\lambda_1, \lambda_2, \ldots, \lambda_n) \in \PP\right\}. \] So, $i(\PP,t) := |t\PP \cap \integers^n|$ is the number of points in $t\PP$, all of whose coordinates are integers. For the rest of this discussion assume that the polytope $\PP$ is integral, that is, all of its vertices have integer coordinates. Then $i(\PP,t)$ is a {\em polynomial} in $t$ and the Ehrhart series of $\PP$ has the form \[ \sum_{t \geq 0} i(\PP,t) x^t = \frac{\h(x)}{(1-x)^{n+1}}, \] for a polynomial $\h(x) = h_0 + h_1x + \cdots + h_dx^d$, where $h_d \not = 0$ and $d \leq n$.
The polynomial $\h(x)$ is known as the {\em $\h^*$-polynomial} of $\PP$ \cite{ehrhart1,ehrhart2}. By Stanley's {\em Nonnegativity Theorem} \cite[Theorem~2.1]{nonnegthm} the coefficients of the $\h^*$-polynomial are nonnegative. The sequence of coefficients $h_0,h_1, \ldots, h_d$ of $\h(x)$ is called the {\em $\h^*$-vector} of $\PP$. (Alternative names also appear in the literature, such as the Ehrhart $h$-vector or the $\delta$-vector.) \begin{example} Consider the triangle in the plane with vertices $(0,0)$, $(1,2)$, $(2,1)$; formally, let \begin{equation} \PP = \{(\lambda_1, \lambda_2) \in \reals^2 \mid \lambda_1 \leq 2 \lambda_2, \ \lambda_2 \leq 2 \lambda_1, \ {\rm and} \ \lambda_1 + \lambda_2 \leq 3\}. \label{polytope_ex} \end{equation} Then \[ i(\PP,t) \ = | t\PP \cap \integers^2 | \ = \ 1 + 3/2 t + 3/2 t^2, \] and the Ehrhart series of $\PP$ is \begin{equation} \sum_{t \geq 0}(1 + 3/2 t + 3/2 t^2) \, x^t \ = \ \frac{x^2+x+1}{(1-x)^3}. \label{polytope_es} \end{equation} So, the $\h^*$-polynomial of the polytope $\PP$ is $\h(x)=x^2+x+1$ and its $\h^*$-vector is $[1,1,1]$, which is nonnegative, symmetric, and unimodal. \end{example} The $\h^*$-vector of a convex polytope with integer vertices need not be symmetric or unimodal. Although there has been much progress in the direction of characterizing those polytopes whose $\h^*$-vector is unimodal (see, e.g., \cite{Athanasiadis,BrunsRomer,MustataPayne,Stanley1980}), this is still an open question. However, we can use Theorem~\ref{thm:main} to answer the question for the following class of polytopes associated with $\mathbf{s}$-inversion sequences. The {\em $\mathbf{s}$-lecture hall polytope} $\PP_n^{(\mathbf{s})}$ is defined by \[ \PP_n^{(\mathbf{s})} = \left\{ (\lambda_1, \lambda_2, \dotsc, \lambda_n) \in \reals^n \ \Big| \ 0 \leq \frac{\lambda_{1}}{s_{1}} \leq \frac{\lambda_{2}}{s_{2}} \leq \cdots \leq \frac{\lambda_{n}}{s_{n}} \leq 1 \right\}, \] where $\mathbf{s}$ is an arbitrary sequence of positive integers.
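The counts in the example above are easy to reproduce by direct enumeration; a minimal sketch (the loop bounds use the fact that the first two constraints force $0 \le \lambda_1, \lambda_2$, and the third bounds both by $3t$):

```python
def triangle_count(t):
    """Integer points in the t-fold dilation of the triangle of the example:
    l1 <= 2*l2, l2 <= 2*l1, l1 + l2 <= 3*t (the first two force l1, l2 >= 0)."""
    return sum(1
               for l1 in range(3 * t + 1)
               for l2 in range(3 * t + 1)
               if l1 <= 2 * l2 and l2 <= 2 * l1 and l1 + l2 <= 3 * t)

# t = 0, 1, 2, 3 gives 1, 4, 10, 19, matching i(P,t) = 1 + (3/2)t + (3/2)t^2.
```

The counts agree with $i(\PP,t)$ and hence with the Ehrhart series (\ref{polytope_es}).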
The following is a special case of Theorem 5 in \cite{SS}. \begin{lem} For any sequence $\mathbf{s}$ of positive integers, \[ \sum_{t \geq 0} i(\PP_n^{(\mathbf{s})},t) x^t = \frac{\E_n^{(\mathbf{s})}(x)}{(1-x)^{n+1}}. \] \label{SSlem} \end{lem} So combining Lemma~\ref{SSlem} with Theorem~\ref{thm:main} we have: \begin{cor} For any sequence $\mathbf{s}$ of positive integers, the $\h^*$-polynomial of the $\mathbf{s}$-lecture hall polytope has all roots real. \end{cor} The $\mathbf{s}$-lecture hall polytopes are special in this regard, even among lattice simplices. The polytope $\PP$ of the example (\ref{polytope_ex}) is a simplex in $\reals^2$ with integer vertices, but its $\h^*$-polynomial, $x^2 + x + 1$, does not have real roots. The sequence of coefficients of a real polynomial with only real roots is log-concave, and---if the coefficients are nonnegative---it is also unimodal. This is an easy corollary of a classic result, often referred to as Newton's inequality. Thus, we have the following. \begin{cor} For any sequence $\mathbf{s}$ of positive integers, the $\h^*$-vector of the $\mathbf{s}$-lecture hall polytope is unimodal and log-concave. \end{cor} \begin{rem} The $\h^*$-vector of an $\mathbf{s}$-lecture hall polytope need not be {\em symmetric}. For example, \[ \E_3^{(1,3,5)}(x) \ = \ 1 + 10x + 4x^2. \] \end{rem} \section{$(p,q)$-analogs of $\mathbf{s}$-Eulerian polynomials} \label{sec:q-analogs} In this section, we define $(p,q)$-analogs of the $\mathbf{s}$-Eulerian polynomials and show that they have real roots for every positive $p,q\in \reals$. In addition to the statistics $\A \mathbf{e}$, $\asc \mathbf{e}$ and $|\mathbf{e}| = \sum_i e_i$ on $\mathbf{s}$-inversion sequences, we define a new statistic, related to the major index on permutations. For $\mathbf{e} \in \I_{n}^{(\mathbf{s})}$, let \begin{align*} \amaj \mathbf{e} = & \sum_{j \in \A \mathbf{e}}(n-j).
\end{align*} For a sequence of positive integers $\mathbf{s}$, and a positive integer $n$, define a $(p,q)$-analog of the $\mathbf{s}$-Eulerian polynomials as \begin{equation} \E^{(\mathbf{s})}_{n}(x,p,q) = \sum_{\mathbf{e} \in \I_{n}^{(\mathbf{s})}} x^{\asc \mathbf{e}}q^{\amaj \mathbf{e}} p^{|\mathbf{e}|}\,, \label{eq:pq-analog} \end{equation} and for $0 \leq i < s_{n}$, define its refinement as before \[ \poly^{(\mathbf{s})}_{n,i}(x,p,q) = \sum_{ \mathbf{e} \in \I_{n}^{(\mathbf{s})}} \chi(e_n = i)\, x^{\asc \mathbf{e}}q^{\amaj \mathbf{e}} p^{|\mathbf{e}|}. \] \begin{lem} For $n \geq 1$ and $0 \leq i < s_{n+1}$, \[ \poly^{(\mathbf{s})}_{n+1,i}(x,p,q) = p^i \left ( \sum_{j=0}^{\ell-1} xq \poly^{(\mathbf{s})}_{n,j}(xq,p,q) + \sum_{j=\ell}^{s_{n}-1} \poly^{(\mathbf{s})}_{n,j}(xq,p,q) \right )\, \] where $\ell = \lceil i s_{n}/s_{n+1} \rceil$, and with initial conditions $\poly^{(\mathbf{s})}_{1,0}(x,p,q)=1$ and $\poly^{(\mathbf{s})}_{1,i}(x,p,q)=xqp^i$ for $i > 0$. \label{Pqzrec} \end{lem} \begin{proof} For $\mathbf{e}=(e_1, \dotsc, e_{n}, i) \in \I_{n+1}^{(\mathbf{s})}$ we have that $n \in \A \mathbf{e}$ if and only if $e_{n}/s_{n} < i/s_{n+1}$, that is, if $0 \leq e_{n} \leq \ell-1$. Thus, the statistics change accordingly: \begin{align*} |(e_1, \dotsc, e_{n}, i)| &= |(e_1, \dotsc, e_{n})| + i\,, \\ \asc (e_1, \dotsc, e_{n}, i) &=\asc (e_1, \dotsc, e_{n}) + \chi(e_{n} \leq \ell-1)\,,\\ \amaj(e_1, \dotsc, e_{n}, i) &= \amaj(e_1, \dotsc, e_{n}) + \asc(e_1, \dotsc, e_{n}) + \chi(e_{n} \leq \ell-1)\,. \end{align*} For the initial conditions, $0 \in \A \mathbf{e}$ if and only if $e_1 > 0$, in which case $\asc \mathbf{e} = 1$, $\amaj \mathbf{e} = n-0=1$ and $|\mathbf{e}|=e_1 = i$. \end{proof} Since $ \E_n^{(\mathbf{s})}(x,p,q) = \sum_{i=0}^{s_n-1} \poly^{(\mathbf{s})}_{n,i}(x,p,q),$ we have the following result. \begin{thm} Let $\mathbf{s} = (s_1, s_2, \dotsc)$ be a sequence of positive integers. 
For any positive integer $n$ and positive real numbers $p$ and $q$, the polynomial $\E^{(\mathbf{s})}_{n}(x,p,q)$, defined in \eqref{eq:pq-analog}, has only real roots. \label{thm:pq-analog} \end{thm} \begin{proof} By Lemma~\ref{Pqzrec}, the statement is a special case of the following theorem with $b_0 = 1$, $b_i = qp^i$ for $0 < i< s_1$, $c_{n,i} = q$, and $d_{n,i} = p^i$ for all $n\ge 1$ and $ 0\le i < s_{n+1}$. \end{proof} \begin{thm} Given a sequence of positive integers, $\mathbf{s} = \{s_i\}_{i=1}^\infty$, and positive numbers $\{b_i \mid 0\le i < s_1\}$, $\{c_{n,i} \mid n\ge 1,\, 0\le i < s_n\}$ and $\{d_{n,i} \mid n\ge 1,\, 0\le i < s_{n+1}\}$ define the polynomials $R^{(\mathbf{s})}_{n,i}(x)$ as follows. Let $R^{(\mathbf{s})}_{1,0}(x) = b_0$, $R^{(\mathbf{s})}_{1,i}(x) = b_ix$ for $1\le i < s_1$, and for $n \ge 1$ and $0 \leq i < s_{n+1}$, let \[ R^{(\mathbf{s})}_{n+1,i}(x) = d_{n,i} \left ( \sum_{j=0}^{\ell-1} c_{n,j}x R^{(\mathbf{s})}_{n,j}(c_{n,j}x) + \sum_{j=\ell}^{s_{n}-1} R^{(\mathbf{s})}_{n,j}(c_{n,j}x) \right ), \] with $\ell = \lceil i s_{n}/s_{n+1} \rceil$. Then for all $n\ge 1$, $\sum_{i} R^{(\mathbf{s})}_{n,i}(x)$ is real-rooted. \label{thm:qcompatible} \end{thm} \begin{proof} We apply the method of Section~\ref{sec:main}. The polynomials $\{R^{(\mathbf{s})}_{n,i}(x)\}_{ 0 \leq i < s_n }$ are compatible by Theorem~\ref{thm:compatible}. Note that the transformation $f(x) \mapsto c f(qx)$ preserves real-rootedness and the sign of the leading coefficient for $c,q > 0.$ \end{proof} \subsection{The MacMahon--Carlitz $q$-analog for $\Sn$} We now apply Theorem~\ref{thm:pq-analog} to the case of permutations. In particular, we show for the first time that the MacMahon--Carlitz $q$-Eulerian polynomial is real-rooted for $q>0$, a result conjectured by Chow and Gessel in \cite{ChowGessel}.
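Theorem~\ref{thm:pq-analog} lends itself to a quick numerical sanity check (a sketch with our own function names; \texttt{numpy} is assumed): enumerate the $\mathbf{s}$-inversion sequences directly and test real-rootedness for sample positive $p$ and $q$. For $\mathbf{s} = (1,2,\dotsc,n)$ and $p = q = 1$ this recovers the classical Eulerian polynomial.

```python
from itertools import product
from fractions import Fraction
import numpy as np

def E_coeffs(s, p, q):
    """Coefficients in x of E_n^{(s)}(x,p,q), by direct enumeration of the
    s-inversion sequences; p and q are numbers, not formal variables."""
    n = len(s)
    coeffs = [0.0] * (n + 1)
    for e in product(*(range(si) for si in s)):
        r = [Fraction(0)] + [Fraction(e[i], s[i]) for i in range(n)]
        A = [j for j in range(n) if r[j] < r[j + 1]]        # ascent set of e
        coeffs[len(A)] += (q ** sum(n - j for j in A)) * (p ** sum(e))
    return coeffs

def real_rooted(coeffs, tol=1e-6):
    """Numerical real-rootedness test (adequate for these small degrees)."""
    c = list(coeffs)
    while c and c[-1] == 0:
        c.pop()                                             # drop trailing zeros
    return bool(np.all(np.abs(np.roots(c[::-1]).imag) < tol))
```

For instance, `E_coeffs((1, 2, 3, 4), 1, 1)` gives the Eulerian numbers $1, 11, 11, 1$ of $\Sn[4]$.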
For $\pi \in \Sn$, let $\maj \pi = \sum_{j \in \D \pi} j$, and $\comaj \pi = \sum_{j \in \D \pi} (n-j).$ It was conjectured in \cite{ChowGessel}, that for any $q > 0$, the polynomial \[ A^{\maj}_n(x,q) = \sum_{\pi \in \Sn} x^{\des \pi}q^{\maj \pi}, \] has all roots real. We now settle this conjecture as part of the following theorem. \begin{thm} All of the following $q$-analogs of the Eulerian polynomials $A_n(x)= \sum_{\pi \in \Sn} x^{\des \pi}$, are real-rooted whenever $q>0$: \begin{align} A^{\inv}_n(x,q)=& \sum_{\pi \in \Sn} x^{\des \pi} q^{\inv \pi}, \label{qE1}\\ A^{\comaj}_n(x,q)=& \sum_{\pi \in \Sn} x^{\des \pi} q^{\comaj \pi}, \label{qE2}\\ A^{\maj}_n(x,q)=& \sum_{\pi \in \Sn} x^{\des \pi} q^{\maj \pi} \label{qE3}. \end{align} The last of these is known as the MacMahon--Carlitz $q$-Eulerian polynomial. \end{thm} \begin{proof} The bijection $\phi: \Sn \rightarrow \I_{n}^{(1,2, \ldots, n)}$ of Lemma \ref{Desinv} maps the pair of statistics $(\D, \inv)$ on permutations to the pair $(\A, | \ |)$ on inversion sequences, so we have \[ \sum_{\mathbf{e} \in \I_{n}^{(1,2, \ldots, n)}} x^{\asc \mathbf{e}} q^{\amaj \mathbf{e}} p^{|\mathbf{e}|} \ =\ \sum_{\pi \in \Sn} x^{\des \pi} q^{\comaj \pi} p^{\inv \pi}. \] In particular, \[\E_n^{(1,2, \ldots, n)}(x,p,1) = A^{\inv}_n(x,p) \quad \mathrm{and}\quad \E_n^{(1,2, \ldots, n)}(x,1,q) = A^{\comaj}_n(x,q)\] and by Theorem~\ref{thm:pq-analog} they are real-rooted for $p >0$ and $q > 0$, respectively. As for $A^{\maj}_n(x,q)$, observe that the mapping $\Sn \rightarrow \Sn$ defined by \[ \pi = (\pi_1, \pi_2, \ldots, \pi_n) \ \mapsto \ \pi' = (n+1-\pi_n, n+1- \pi_{n-1}, \ldots, n+1-\pi_1) \] satisfies $\D \pi' = \{n-j \mid j \in \D \pi \}.$ Thus \[ \sum_{\pi \in \Sn} x^{\des \pi} q^{\maj \pi} \ = \ \sum_{\pi' \in \Sn} x^{\des \pi'} q^{\comaj \pi'} \ = \ \sum_{\pi \in \Sn} x^{\des \pi} q^{\comaj \pi}, \] so $(\des,\maj)$ and $(\des, \comaj)$ have the same joint distribution on $\Sn$, i.e., $A^{\maj}_n(x,q) = A^{\comaj}_n(x,q)$. 
\end{proof} \begin{cor} For $q > 0$ the coefficients of the polynomials $A^{\maj}_n(x,q)$ defined in (\ref{qE3}) form a unimodal and log-concave sequence. \end{cor} Partial results on the unimodality of these coefficients were obtained in \cite[Proposition 9]{HJZ2013}. \subsection{Some $q$-analogs for $\Bn$ and $\w$} We can also apply Theorem~\ref{thm:pq-analog} to signed permutations and wreath products, making use of Lemma~\ref{lem:theta}. Extend $\maj$ and $\comaj$ to $\w$ by defining them for $\pi^{\xi} \in \w$ as $\maj \pi^{\xi} = \sum_{j \in \D \pi^{\xi}} j$ and $\comaj \pi^{\xi} = \sum_{j \in \D \pi^{\xi}} (n-j).$ The statistic {\em flag inversion number} is defined for $\pi^{\xi} \in \w$ by \[ \finv \pi^{\xi} \ = \inv \pi + \sum_{i=1}^n i \xi_i. \] Recall the bijection $\Theta$ of Lemma~\ref{lem:theta} that maps the descent set on $\w$ to the ascent set on $\I_n^{(k,2k, \ldots, nk)}$. The following additional property of $\Theta$ was shown in \cite[Theorem~3]{PS2}. \begin{lem} The bijection $\Theta: \w \ \longrightarrow \ \I_n^{(k,2k, \ldots, nk)}$ of Lemma~\ref{lem:theta} satisfies \[ \finv \pi^{\xi} \ = \left | \Theta(\pi^{\xi}) \right |. \] \label{thetafinv} \end{lem} \begin{thm} The following $q$-analogs of the descent polynomial $G_{n,k}(x)$ have all roots real for $q > 0$: \begin{align} G^{\finv}_{n,k}(x,q) \ = & \ \sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi} } q^{\finv \pi^{\xi}}, \label{Wq1}\\ G^{\comaj}_{n,k}(x,q) \ = & \ \sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi} } q^{\comaj \pi^{\xi}},\label{Wq2}\\ G^{\maj}_{n,k}(x,q) \ = & \ \sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi} } q^{\maj \pi^{\xi}}. \label{Wq3} \end{align} \end{thm} \begin{proof} By Lemma \ref{thetafinv}, \[\E_n^{(k,2k, \ldots, nk)}(x,p,1) = G^{\finv}_{n,k}(x,p)\quad \mathrm{and} \quad \E_n^{(k,2k, \ldots, nk)}(x,1,q) = G^{\comaj}_{n,k}(x,q),\] both of which are real-rooted by Theorem~\ref{thm:pq-analog}.
In contrast to the case for $\Sn$, when $k>1$, the polynomials (\ref{Wq2}) and (\ref{Wq3}) are not the same. However, \[\sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi} } q^{\maj \pi^{\xi}} \ = \ \sum_{\pi^{\xi} \in \w} (xq^n)^{\des \pi^{\xi} } q^{-\comaj \pi^{\xi}}\,.\] So, $G^{\maj}_{n,k}(x,q) = \E_n^{(k,2k, \ldots, nk)}(xq^n,1,1/q)$ and, by Theorem \ref{thm:pq-analog}, has all roots real. \end{proof} \begin{cor} For $q > 0$ the coefficients of the polynomials $G^{\maj}_{n,k}(x,q)$ defined in (\ref{Wq3}) form a unimodal and log-concave sequence. \end{cor} Partial results on the unimodality of these coefficients were obtained in \cite[Proposition 10]{HJZ2013}. \subsection{Euler--Mahonian $q$-analogs for $\Bn$ and $\w$} \label{subsec:euler-mahonian} In the case of $\Bn$ and $\w$, a different $q$-analog of the Eulerian polynomial is based on the {\em flag major index} statistic \cite{AdinRoichman}, $\fmaj$, which is defined for $\pi^{\xi} \in \w$ by \[ \fmaj \pi^{\xi} \ = k \, \comaj \pi^{\xi} - \sum_{i=1}^n \xi_i. \] This definition differs a bit from those appearing elsewhere because of the appearance of $\comaj$, but it was shown in \cite{PS2} to be equivalent, e.g., to the definition in \cite{ChowGessel,ChowMansour}. In contrast to $\comaj$ and $\maj$ from the previous section, $\fmaj$ is {\em Mahonian}, i.e., it has the same distribution as ``length'' on $\w$ \cite{AdinRoichman}. For that reason, the polynomials \[ G^{\fmaj}_{n,k}(x,q)= \sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi}}q^{\fmaj \pi^{\xi}} \] are referred to as the {\em Euler--Mahonian polynomials} for $\w$. It was conjectured by Chow and Mansour in \cite{ChowMansour} that $G^{\fmaj}_{n,k}(x,q)$ has all roots real for $q > 0$. In the special case $k=2$ of signed permutations, $\Bn$, the conjecture was made by Chow and Gessel in \cite{ChowGessel} that for any $q > 0$, the polynomial $B_n(x,q)$, defined by \[ B_n(x,q) = \sum_{\sigma \in \Bn} x^{\des \sigma}q^{\fmaj \sigma}, \] has all real roots.
We can now settle these conjectures. \begin{thm} For the wreath product groups, $\w$, the Euler--Mahonian polynomials \[ G^{\fmaj}_{n,k}(x,q)= \sum_{\pi^{\xi} \in \w} x^{\des \pi^{\xi}}q^{\fmaj \pi^{\xi}} \] have all real roots for $q > 0$. \label{wreathreal} \end{thm} To prove this theorem, we first relate $\fmaj$ to a statistic, $\Ifmaj$, on the inversion sequences $\I_n^{(k,2k, \ldots, nk)}$. For $\mathbf{e} = (e_1, \dotsc, e_n) \in \I_n^{(k,2k, \ldots, nk)}$, define \begin{align*} \Ifmaj \mathbf{e} & \ = \ k \, \amaj \mathbf{e} - \sum_{j=1}^n \floor{\frac{e_j}{j}}. \end{align*} It was shown in \cite[Theorem 3]{PS2} that the bijection $\Theta: \w \rightarrow \I_n^{(k,2k, \ldots, nk)}$ of Lemma \ref{lem:theta}, which maps the descent set on $\w$ to the ascent set on $\I_n^{(k,2k, \ldots, nk)}$, also satisfies \[ \fmaj \pi^{\xi} = \Ifmaj \Theta(\pi^{\xi}). \] We thus have \begin{equation} G^{\fmaj}_{n,k}(x,q) \ = \ \sum_{\mathbf{e} \in \I_n^{(k,2k, \ldots, nk)}} x^{\asc \mathbf{e}}q^{\Ifmaj \mathbf{e}}. \label{EMF} \end{equation} For $n,k \geq 1$ and $0 \leq i < nk$, define \[ G^{\fmaj}_{n,k,i}(x,q) = \sum_{\mathbf{e} \in \I_n^{(k,2k, \ldots, nk)}} \chi(e_n = i)\, x^{\asc \mathbf{e}}q^{\Ifmaj \mathbf{e}}. \] \begin{lem} For $n,k \geq 1$ and $0 \leq i < (n+1)k$, \[ G^{\fmaj}_{n+1,k,i}(x,q) = q^{-\lfloor i/(n+1) \rfloor} \left ( \sum_{j=0}^{\ell-1} xq^k G^{\fmaj}_{n,k,j}(xq^k,q) + \sum_{j=\ell}^{nk-1} G^{\fmaj}_{n,k,j}(xq^k,q) \right )\, \] where $\ell = \lceil i n/(n+1) \rceil$, and with initial conditions $G^{\fmaj}_{1,k,0}(x,q)=1$ and $G^{\fmaj}_{1,k,i}(x,q)=xq^{k-i}$ for $i > 0$. \label{Frec} \end{lem} \begin{proof} For $\mathbf{e}=(e_1, \dotsc, e_{n},i) \in \I_{n+1}^{(k,2k, \dotsc, (n+1)k)}$ we have that $n \in \A \mathbf{e}$ if and only if $e_n/n < i/(n+1)$, that is, $0 \leq e_n \leq \ell-1$.
Similarly to the proof of Lemma~\ref{Pqzrec}, we see that by appending the value $i$ in the $(n+1)$th position the statistics change as follows: \begin{align*} \asc (e_1, \dotsc, e_{n},i) &= \asc (e_1, \dotsc, e_{n}) + \chi(e_n \leq \ell-1)\,,\\ \Ifmaj (e_1, \dotsc, e_{n},i) & = \Ifmaj (e_1, \dotsc, e_{n}) + k \asc (e_1, \dotsc, e_{n},i) - \lfloor i/(n+1) \rfloor\,. \end{align*} For the initial conditions, $0 \in \A \mathbf{e}$ if and only if $e_1 > 0$, in which case $\asc \mathbf{e} = 1$ and $\Ifmaj \mathbf{e} = k \, \amaj \mathbf{e} - \lfloor i/1 \rfloor = k-i$, and both are zero otherwise. \end{proof} \begin{proof}[Proof of Theorem~\ref{wreathreal}] For a fixed positive integer $k$ and positive real number $q$, setting $s_n = nk$ for all $n\ge 1$, $b_0=1$, $b_i = q^{k-i}$ for $0 < i < k$, $c_{n,j} = q^k$ for $0 \le j < nk$ and $d_{n,i}=q^{-\lfloor i/(n+1) \rfloor}$ for $0 \le i < (n+1)k$ in Theorem~\ref{thm:qcompatible} gives the recurrence of Lemma~\ref{Frec}. Thus, by Theorem~\ref{thm:qcompatible}, \[ G^{\fmaj}_{n,k}(x,q) \ = \ \sum_{i=0}^{nk-1} G^{\fmaj}_{n,k,i}(x,q) \] has all real roots. \end{proof} \section*{Acknowledgments} We thank the National Science Foundation (grant \#1202691 supporting the Triangle Lectures in Combinatorics) and the Simons Foundation (grant \#244963) for travel funding that facilitated the research presented here. The second author was also supported by the Knut and Alice Wallenberg Foundation. Thanks to Thomas Pensyl for his contributions to Theorem~\ref{bijection}. Special thanks to Christian Krattenthaler for his comments on a partial result presented at the S\'eminaire Lotharingien de Combinatoire at Strobl that encouraged us to develop our methods for the type $D$ case. We also thank Petter Br\"and\'en for helpful suggestions. We are grateful to the referee who did a careful reading of the paper and offered many corrections and suggestions to improve the presentation. \bibliographystyle{plain}
\section{Introduction} Damped Ly$\alpha$ systems (DLAs) seen in QSO spectra are characterized by very large H~{\sc i} column densities, {\it N}(H~{\sc i})$\ga 10^{20}$ cm$^{-2}$, that are similar to those observed through gas-rich spiral galaxies. The importance of DLAs in the paradigm of hierarchical structure formation can be assessed from the fact that the mass density of baryonic matter in DLAs at {\it z}$_{abs}$ $\sim 3$ is similar to that of stars at present epochs (Wolfe 1995). Studies of Ly$\alpha$ and UV continuum emission from galaxies associated with DLAs usually reveal star formation rates (SFR) (or upper limits) of a few M$_\odot$ yr$^{-1}$ (Fynbo et al. 1999; Bunker et al. 1999; Kulkarni et al. 2001; M\"{o}ller et al. 2002 and 2004; Weatherley et al. 2005). The metallicity and depletion factor in DLAs are usually estimated from {\it N}(Zn~{\sc ii})/{\it N}(H~{\sc i}) and {\it N}(Fe~{\sc ii})/{\it N}(Zn~{\sc ii}) respectively (Lu et al. 1996; Pettini et al. 1997; Prochaska \& Wolfe 2002; Ledoux et al. 2002a and Khare et al. 2005). The inferred metallicities typically vary between log {\it Z} = $-$2.5 and 0 for $2\le$ {\it z}$_{abs}$ $\le 3$ with a median of $\simeq-1.3$ (Ledoux et al. 2003). The measured depletions range between 0 and $-1.6$ with a median value of $-0.3$. If dust content is defined as $\kappa~=~10^{\rm [Zn/H]}~(1-10^{\rm [Fe/Zn]})$, then the median dust content in a typical DLA is $\kappa=0.07$. This is less than 10 per cent of what is seen in the Galactic ISM for a similar neutral hydrogen column density. If the physical conditions in DLAs are similar to those in our Galaxy or the Magellanic clouds, then H$_2$ molecules should be detectable. There have been very few detections of H$_2$ in DLAs (with ${\rm 3\times10^{14}\le {\it N}(H_2)(cm^{-2})\le 3\times 10^{19}}$) despite extensive searches (Ge \& Bechtold 1999; Srianand, Petitjean \& Ledoux 2000; Petitjean, Srianand \& Ledoux 2000; Ledoux, Srianand \& Petitjean 2002b; Levshakov et al.
2002; Ledoux, Petitjean, \& Srianand 2003; Reimers et al. 2003). Roughly 80 per cent of DLAs do not have detectable H$_2$ (with ${\rm {\it N}(H_2)}\le10^{14}$ cm$^{-2}$). The physical conditions in the H~{\sc i}~ gas can be probed by using the fine-structure absorption lines produced by the excited atomic species and the 21 cm transition. Apart from a few rare cases (for example, Srianand \& Petitjean 2001), C~{\sc i} is detected only in systems that show H$_2$. The derived total hydrogen density ($n_{\rm H}$) based on the fine-structure level populations of the heavy elements is usually large ($\ge 20$ cm$^{-3}$) (Ledoux et al. 2002b; Srianand et al. 2005; Wolfe et al. 2004). Like C~{\sc i}, C~{\sc ii$^*$} absorption is detected in every case where one detects H$_2$. However, it has also been seen with lower column densities in a considerable fraction of DLAs without H$_2$ (Wolfe et al. 2003a; Srianand et al. 2005; Wolfe et al. 2004). The search for 21 cm absorption in DLAs at {\it z}$_{abs}$$\ge2$ has mostly resulted in null detections with typical spin temperatures $\ge10^3$ K (see Table 3 of Kanekar \& Chengalur (2003) and Table 1 of Curran et al. (2005) for a summary of all the available observations). There are 8 DLAs at {\it z}$_{abs}$$\ge$ 1.9 for which redshifted 21 cm observations are available. Redshifted 21 cm absorption is detected for only two systems ({\it z}$_{abs}$ = 1.944 toward PKS 1157$+$014 (Wolfe et al. 1981) and {\it z}$_{abs}$ = 2.0394 toward PKS 0458$-$02 (Briggs et al. 1989)). The measured spin temperatures are 865$\pm$190 K and 384$\pm$100 K. However, none of these systems show detectable H$_2$ (Ledoux et al. 2003; Ge \& Bechtold 1997). The DLA toward PKS 1157$+$014 is special as the QSO shows broad absorption lines and the {\it z}$_{abs}$~ of the DLA is close to the {\it z}$_{em}$~ of the QSO. The physical conditions in this system may not be representative of the general DLA population.
Interestingly, H$_2$ is seen at {\it z}$_{abs}$ = 2.8110 toward PKS 0528-2505, while no 21 cm absorption is detected in this system (Carilli et al. 1996; Srianand \& Petitjean 1998). The lower limit on the spin temperature derived for this system is 710 K. However, the excitation temperature derived from H$_2$ rotational levels is $\le 200$ K (Srianand \& Petitjean 1998; Srianand et al. 2005). This system is also special since {\it z}$_{abs}$$>${\it z}$_{em}$. The radiation field of the QSO has a much stronger influence on the physical conditions in this DLA (Srianand \& Petitjean 1998). The upper limits on the spin temperature derived for the rest of the systems are higher than 1000 K. The H$_2$ content of these systems is not known. Even though various properties of DLAs, listed above, have been investigated in detail (Matteucci et al. 1997; Prantzos \& Boissier 2000; Howk \& Sembach 1999; Izotov et al. 2001; Vladilo, 2001; Liszt 2002; Hirashita et al. 2003; Wolfe et al. 2003a,b; Calura et al. 2003; Wolfe et al. 2004), very few attempts have been made to investigate all of them in a single unified calculation. That is the main motivation of this paper. \section{Calculations} The main aim of our study is to investigate the physical conditions in high-{\it z} DLAs. In particular, our goal is to understand the equilibrium abundance of H$_2$, the excitations of H$_2$~and atomic fine-structure levels, the ionization state of the gas, and the 21 cm optical depth, self-consistently. In the Galactic interstellar medium (ISM) H$_2$ is usually detected either in a diffuse medium irradiated by the UV background radiation field or in high-density photo-dissociation regions (PDRs) near OB stars. One can anticipate this to also be the case in DLAs. At high redshift the diffuse UV background from QSOs will be an additional source of UV radiation. Recently there have been three attempts to model H$_2$ in DLAs. Liszt (2002) uses two-phase models similar to Wolfire et al.
(1995) for this purpose. The models consider dust-free gas, so only the slow gas-phase H$^{-}$ + H $\rightarrow$ H$_2$ + e formation process is important. The second attempt, by Hirashita et al. (2003), models DLAs as large protogalactic disks. The density and temperature in the gas are determined by ``non-linear hydrodynamic'' effects. The radiation field is assumed to have a negligible contribution to the temperature of the gas and is used only for destroying the H$_2$~molecules. Their model provides insight into the spatial distribution of H$_2$. The third attempt, by Hirashita \& Ferrara (2005), uses simple H$_2$ equilibrium formation models to study H$_2$ in DLAs. Unlike Liszt (2002), no attempt is made in the latter two studies to model the ionization conditions of the gas and the excitations of the fine-structure lines. The main aim of our paper is to use full self-consistent numerical calculations to understand (i) the physical conditions in DLAs with H$_2$ detections, (ii) the reason for the lack of H$_2$ in most of the DLAs, (iii) the origin of the C~{\sc ii$^*$} absorption frequently seen in DLAs, and (iv) the absence of detectable 21 cm absorption at high redshifts (i.e., z$\ge2$). \par The availability of good quality observational data allows us to estimate the metallicity, depletion, H$_2$ abundance, H$_2$ excitation, and populations of fine-structure levels in C~{\sc i} and C~{\sc ii} (Ledoux et al. 2003; Srianand et al. 2005). One can hope to build more realistic models with all these in hand. This forms the prime motivation of this work. We study the ionization state, chemical history, and temperature of the gas using version 96 of Cloudy, last described by Ferland et al. (1998), and available at {\bf http://www.nublado.org}. The details of the new H$_2$ model are given in Shaw et al. (2005; hereafter S05). A comparison between predictions of our code and several independent calculations of PDRs can be found at {\bf http://hera.ph1.uni-koeln.de/$\sim$roellig}.
A direct application to a PDR is given in Abel et al. (2004; hereafter A04). The calculations presented here take into account various heating (e.g., dust photoelectric heating, cosmic ray heating, etc.) and cooling processes (for details see Ferland et al. 1998 and S05). \subsection{The micro-physics of grains:} We use the improved grain physics and molecular network as described in van Hoof et al. (2004), Ferland et al. (1994; 2002), and A04. The grain size distribution is resolved, and the formalism described by Weingartner \& Draine (2001a) and van Hoof et al. (2004) is used to determine the grain charge, temperature, photoelectric heating, and drift velocity self-consistently with the local radiation field and gas temperature. The extinction curves of grains in DLAs are not well known. We use a grain size distribution that fits the extinction curve of the diffuse interstellar medium with R$_{\rm V}$ = 3.1 (Table~1 of Weingartner \& Draine 2001b). We emphasize that the physical treatment of grains, and their effects on the surrounding gas, is fully self-consistent, and does not rely on general fitting formulae, such as those in Bakes \& Tielens (1994). The grain charge and temperature are determined by the local gas conditions (mainly the electron density and temperature) and radiation field (including the attenuated incident continuum and emission by the surrounding gas, mainly Ly$\alpha$). The result is a grain temperature and charge that depend on grain size and material. The temperature then determines the rate of H$_2$ formation on grain surfaces; we adopt the temperature- and material-dependent rates given by Cazaux \& Tielens (2002). The charge establishes the floating potential of the grain, which then sets the grain photoelectric heating rate. \subsection{Molecular hydrogen:} The detailed treatment of the micro-physics of H$_2$ is described in S05. Here we briefly mention some of those processes.
\par Generally, H$_2$ forms via grain catalysis in a dusty cold-neutral gas. We use the total formation rate given by Cazaux \& Tielens (2002) along with the size- and temperature-resolved grain distribution described in van Hoof et al. (2004). This is an exothermic process and H$_2$ is formed in excited vibrotational levels, a process referred to as formation pumping. The results presented below use the state-specific formation distribution function given by Takahashi (2001) and Takahashi \& Uehara (2001). By contrast, in an equipartition distribution function 1/3 of the binding energy is statistically distributed as internal excitation (Black \& van Dishoeck 1987). Both produce an ortho-to-para ratio (OPR) that is nearly 3. \par H$_2$ is formed by associative detachment from H$^-$ in a dust-free gas. This process is important in the clouds with lower dust content considered below. This is also an exothermic process and we use the state-specific formation distribution function given by Launay et al. (1991). \par H$_2$ is destroyed mainly via the Solomon process, in which the H$_2$ molecule is irradiated by far-UV (912\AA $<$ $\lambda$ $<$ 1200\AA) radiation and is excited to higher electronic states. Approximately 10 per cent of these electronically excited H$_2$ decay to the ground-state continuum and are dissociated. The other 90 per cent populate the higher vibrotational levels of the ground electronic state. These cascade to lower vibrotational levels producing infrared emission lines, a process referred to as Solomon pumping. Formation pumping on grains is only 10 per cent as effective as Solomon pumping when Solomon destruction is balanced by grain formation.
Thus, the H$_2$ populations are non-thermal if the electronic lines are optically thin and the Solomon process is dominant (hereafter referred to as the optically thin case), while the H$_2$ level populations may be in LTE at the local gas kinetic temperature if the electronic lines are optically thick and the Solomon process is slow (hereafter referred to as the optically thick case). \par Radiative decays between ortho and para states are not possible because of the different nuclear spins. However, exchange collisions with H, H$^+$, H$_{3}$$^+$, and interactions on grain surfaces (below a certain critical temperature) can cause ortho-para conversion. The column density ratio of the {\it J}=1 and {\it J}=0 levels traces the kinetic temperature in a collisionally dominated gas but may fail to do so in a Solomon-process dominated region (Abgrall et al. 1992; Sternberg \& Neufeld 1999; Roy, Chengalur \& Srianand 2005). \par The formation rate on dust has the largest uncertainty among the many processes considered in our calculations. There are significant variations in this rate even in the case of the Galactic ISM (Browning et al. 2003). There are also substantial differences between collisional rates of H$_2$ computed by different groups at low temperatures (S05). These uncertainties should be kept in mind while comparing our results with observations. \subsection{H~{\sc i} spin temperature:} It is commonly assumed that the H~{\sc i} spin temperature, {\it T$_s$}, is equal to the gas kinetic temperature. The optical depth of the 21 cm transition is proportional to N(H$^0$) / {\it T$_s$}, and so is sensitive to the inverse of the gas kinetic temperature. The mean value of {\it T$_s$} we report here is the kinetic temperature T$_K$ weighted by N(H$^0$) / T, i.e., the N(H$^0$)-weighted harmonic mean of the kinetic temperature. A separate paper (Shaw et al. 2005) discusses our treatment of {\it T$_s$}, and the relationships between T, {\it T$_s$}, and the H$_2$ temperature indicators, in detail.
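\par As an illustration of this weighting (a toy sketch, not output of our calculations; the two-zone column densities and temperatures below are hypothetical), the N(H$^0$)-weighted harmonic mean can be computed as follows:

```python
# Toy sketch (not Cloudy output): the N(H^0)-weighted harmonic-mean
# kinetic temperature used in the text as a proxy for T_s.

def harmonic_mean_ts(columns, temperatures):
    """T_s proxy = sum(N_i) / sum(N_i / T_i); follows from the 21 cm
    optical depth of each zone being proportional to N(H^0)/T."""
    return sum(columns) / sum(n / t for n, t in zip(columns, temperatures))

# Hypothetical two-zone cloud: a warm and a cold layer.
N = [5e20, 5e20]      # N(H^0) per zone (cm^-2)
T = [8000.0, 100.0]   # kinetic temperature per zone (K)

print(round(harmonic_mean_ts(N, T)))  # ~198 K, far below the 4050 K arithmetic mean
```

Because the 21 cm optical depth scales as N(H$^0$)/{\it T}, the cold gas dominates the mean; even a modest cold fraction drives the inferred {\it T$_s$} well below the arithmetic-mean temperature.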
In DLAs, {\it T$_s$} is usually estimated using the integrated optical depth $\tau_v$ in the 21 cm absorption line and {\it N}(H~{\sc i}) measured from Ly$\alpha$ using \begin{equation} T_s = { N(\mbox{H~{\sc i}})\,f_c\over 1.823\times10^{18}\,\tau_v}. \label{eqn1} \end{equation} Here, $f_c$ is the fraction of the background radio source covered by the absorbing gas. Thus, low 21 cm optical depths could mean either high {\it T$_s$} or low $f_c$. Even for $f_c = 1$, the derived temperature will be high if the mean N(H~{\sc i}) covering the extended radio source is lower than that measured along the line-of-sight toward the optical point source (Wolfe et al. 2003a). Thus, observations will constrain either the physical conditions or the projected H~{\sc i} surface density distribution of the absorbing gas. \subsection{Fine-structure level population:} The ionization potential of C$^0$ is close to the energy of the photons responsible for the H$_2$ electronic band transitions. So, C~{\sc i} lines may be sensitive to the conditions in the H~{\sc i}~$-$ H$_2$ transition region. The fine-structure level populations of C~{\sc i} are sensitive to the gas pressure and the IR radiation field. Thus, the populations of the excited fine-structure levels of C~{\sc i} provide an independent probe of the quantities that control the physical conditions in the H~{\sc i}~$-$ H$_2$ transition region and of the temperature of the cosmic microwave background radiation (CMBR) (Srianand et al. 2000; Silva \& Viegas 2002). The column densities of the excited levels within the ground terms of C~{\sc i}, C~{\sc ii}, Si~{\sc ii}, and O~{\sc i} are all calculated as part of the gas cooling function. All excitation and deexcitation processes -- collisions, line trapping, destruction by background opacities, and pumping by the external continuum -- are included. At high {\it z} the IR pumping is predominantly due to the CMBR, although the diffuse IR radiation from grains also contributes.
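\par Returning to the spin-temperature estimate of equation (\ref{eqn1}), it can be evaluated with a short script; all the input numbers below are hypothetical, chosen only to illustrate the sensitivity to the covering factor:

```python
# Sketch of equation (1); all input numbers are hypothetical.

def spin_temperature(N_HI, tau_v, f_c=1.0):
    """T_s = N(HI) * f_c / (1.823e18 * tau_v), with N(HI) in cm^-2 and
    tau_v the integrated 21 cm optical depth; result in K."""
    return N_HI * f_c / (1.823e18 * tau_v)

# A DLA-like column density with a weak 21 cm absorption feature:
print(spin_temperature(N_HI=1e21, tau_v=0.5))          # ~1097 K
# If the true covering factor is below the assumed f_c = 1, the
# inferred T_s drops proportionally:
print(spin_temperature(N_HI=1e21, tau_v=0.5, f_c=0.5)) # ~549 K
```

This makes explicit the degeneracy noted in the text: a low integrated optical depth can be read either as a high {\it T$_s$} or as a covering factor below unity.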
\subsection{Cloud geometry:} \begin{figure*} \centerline{\epsfig{file=depth.ps,width=14cm,height=15cm}} \caption{ {\bf Pedagogical example:} Various physical quantities are shown as a function of depth in a cloud irradiated by the stellar and diffuse continua with log {\it N}(H~{\sc i}) = 20.7. The metallicity is 0.1 {\it Z}$_\odot$, $\it{ n_H}$ = 50 cm$^{-3}$, and log $\kappa$ is $-1.39$ (corresponding to a dust-to-metal ratio 0.4 times that seen in the Galactic ISM). In these panels the short-dashed curves are for models with the diffuse continua. In panel (a) the thick and thin curves are for silicate and graphite grains respectively. The assumed radiation field is roughly 4 times the Galactic mean UV field. In panel (b) the thick curves represent the electron density. Panels (c) and (d) show the ionization and fine-structure excitations as a function of depth in the cloud. \label{depth}} \end{figure*} We envision the region where the absorption lines form as a layer or cloud of ISM gas exposed to several radiation fields. In keeping with much of the PDR literature, we assume a plane-parallel geometry (Tielens \& Hollenbach 1985; Draine \& Bertoldi 1996; Wolfire et al. 1995, 2003; Wolfe et al. 2003a,b). We further assume, for simplicity, that the gas has constant density. \par In the absence of ongoing star formation, the meta-galactic background UV radiation field, dominated by QSOs (Haardt \& Madau 1996), will determine the physical conditions and the abundance of H$_2$. If there is ongoing star formation then a locally produced stellar radiation field will also contribute. OB stars are very short lived, and so do not move far from their birthplace before dying. So, newly formed OB stars will be close to their parent molecular cloud throughout most of their lives. This geometry is assumed in the PDR references cited above.
\par Our main goal is to understand the physical conditions in the components with H$_2$ and C~{\sc i}. Only the total H~{\sc i} column density is measurable in DLAs and the H~{\sc i} column density in the H$_2$ component is generally unknown. So, we consider clouds with three values of {\it N}(H~{\sc i}): 10$^{19}$, 10$^{20}$, and 10$^{21}$ cm$^{-2}$. We assume the gas metallicity to be 0.1 {\it Z}$_\odot$ and vary the dust-to-metal ratio in the range 0.001 to 0.1 of that in the Galactic ISM (this corresponds to a range in $\kappa$, as defined in Section 1, of $10^{-4}$ to $10^{-2}$). \par We consider three ionizing continua: the meta-galactic radiation field at {\it z} = 2, the direct radiation field from an O star, and an O star continuum that has been attenuated by intervening absorption. The first mimics the case in which there is no {\it in situ} star formation. The second is observed in Galactic star-forming regions -- the OB stars are close to the molecular cloud and an H~{\sc ii} region lies between them. This will be called the stellar case below. The third would be similar to a diffuse ISM exposed to the Galactic background starlight, and will be called the diffuse case from now on. \par Following the general practice in the PDR literature, we define the intensity of the incident UV radiation field using a dimensionless constant $\chi$ (as defined by Draine \& Bertoldi 1996), \begin{equation} \chi = {\int_{912\AA}^{1110\AA}h^{-1} \lambda u_\lambda\, d \lambda\over 1.22\times10^7}. \label{eqchi} \end{equation} Here, $\lambda u_\lambda$ is the energy density (ergs cm$^{-3}$) of the photons and $\chi$ = 1 for the Galactic UV field defined by Habing (1968). Thus $\chi$ gives the UV field strength in units of the Galactic mean UV field.
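\par For concreteness, equation (\ref{eqchi}) can be evaluated numerically. The sketch below takes the normalization constant at face value and uses a hypothetical flat $\lambda u_\lambda$ across the 912--1110\AA~band; its main purpose is to illustrate that $\chi$ scales linearly with the energy density of the field:

```python
import numpy as np

H_PLANCK = 6.626e-27  # Planck constant (erg s)

def chi(lam_cm, u_lambda):
    """Evaluate equation (2) by trapezoidal integration: the integral
    of (lambda * u_lambda / h) over 912-1110 A, divided by the
    normalization constant 1.22e7 quoted in the text (taken at face
    value here). lam_cm in cm, u_lambda in erg cm^-4."""
    integrand = lam_cm * u_lambda / H_PLANCK
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lam_cm))
    return integral / 1.22e7

# Hypothetical field: lambda*u_lambda constant across the band.
lam = np.linspace(912e-8, 1110e-8, 200)  # wavelength grid (cm)
u = np.full_like(lam, 1.0e-6)            # energy density per unit wavelength (illustrative)

# chi is linear in the field: doubling u_lambda doubles chi.
print(chi(lam, 2 * u) / chi(lam, u))  # → 2.0
```

The linearity is what lets $\chi$ serve as a single multiplicative field-strength parameter in the models that follow.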
\par We use the observed metallicity, depletion, H$_2$ abundance, fine-structure excitations of C~{\sc i} and C~{\sc ii}, and {\it N}(C~{\sc i})/{\it N}(C~{\sc ii}) to constrain either the particle density or the intensity of the radiation field. The 21 cm spin temperature and the level populations of H$_2$ are used for consistency checks. \subsubsection{Ionization and thermal structure:} { In this sub-section we use a pedagogical example to demonstrate the need for a composite self-consistent simulation of the gas in order to deduce the correct physical conditions. In Fig.~\ref{depth} we show the ionization and thermal structure of a cloud irradiated by the stellar and diffuse continua. \par Panel (a) plots the temperature of graphite and silicate grains (for the range of sizes considered in our calculations) as a function of depth from the illuminated side. All the calculations we present in this work use the self-consistently estimated grain temperature for a range of grain sizes, which is important for different processes such as photoelectric heating and the formation of H$_2$ on the grain surfaces. \par The kinetic temperature ($T_K$) and the electron density ($n_e$) are plotted in panel (b). {\bf In this work we present the H~{\sc i} weighted harmonic mean kinetic temperature as the spin temperature ($T_S$)}. A detailed investigation of the relationship between $T_S$ and $T_K$ under different physical conditions is given in Shaw et al. (2005). \par Panel (c) plots the densities of H$_2$, C$^0$, C$^+$, and C$^{++}$ as a function of cloud depth. The ratios of the carbon fine-structure levels are shown in panel (d). The electron temperature is high ($\sim$10$^4$ K) and the electron density is nearly equal to the H$^+$ density at the illuminated side of the gas for the stellar continua (panel b, solid line). A hydrogen ionization front occurs at a depth of 2.5$\times$10$^{18}$ cm, where {\it T} and {\it n$_e$} fall.
Across the PDR, {\it T} ranges between 300 and 800 K and the electrons are mainly donated by C$^+$. The short-dashed lines show the results for the diffuse case. There is no H~{\sc ii} region, and so the entire cloud is a PDR. The physical conditions are nearly constant across the cloud, which does not have enough grain opacity to attenuate the incident continuum significantly. \par The behavior of C$^0$ in the stellar case is as follows. In photoionization equilibrium, {\it n}(C$^0$) $\propto$ $n_e$ {\it n}(C$^+$) $\alpha_{rec}$ $\propto$ $n_e$ {\it n}(C$^+$) $T^{-0.6}$, where $\alpha_{rec}$ is the recombination coefficient. Here $n_e$ decreases by three dex across the ionization front, whereas the electron temperature changes by less than two dex. This leads to a decrease of two orders of magnitude in {\it n}(C$^0$) across the ionization front (see panel (c)). As the ionization potential of C$^0$ is very close to the energy of the photons that are responsible for the excitations of the electronic states of H$_2$, one expects both C~{\sc i} and H$_2$ to originate from the same part of the cloud. This happens in the diffuse case. But in the stellar case, a considerable fraction of the C~{\sc i} originates in warmer gas that does not possess H$_2$. \par The predicted ratio of the fine-structure level populations depends on the nature of the radiation field. The ratios are constant across the cloud in the diffuse case. However, they depend strongly on the radiation field in the stellar case. \par } \section{Ionization by the meta-galactic UV background:} \begin{figure*} \centerline{\epsfig{file=fig1.ps,width=18cm,height=15cm}} \caption {The results for various constant density clouds ionized by the meta-galactic UV background given by Haardt \& Madau (1996). The continuous, short-dashed and long-dashed curves are for $\kappa$ = 0.01, 0.001 and 0.0001 respectively. Panel (a) plots the mean gas pressure as a function of hydrogen density.
The density ranges of the cold and warm neutral medium are marked in this panel. The numbers near each line give the assumed log {\it N}(H~{\sc i}). This panel is useful for identifying the warm and cold components of the stable two-phase medium under pressure equilibrium. The shaded histogram gives the observed distribution in the systems with H$_2$ detections. The non-shaded histogram in panel (d) gives the results for the systems without H$_2$. In panel (f) the non-shaded histogram gives the distribution of upper limits. The observational data used are mainly from Ledoux et al. (2003) and Srianand et al. (2005). The horizontal short-dashed lines in panels (b), (e) and (f) are typical detection limits obtained in echelle spectra. The horizontal short-dashed line in panel (c) gives the expected value of the ratio {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) when the CMBR at {\it z} = 2 is the only source of excitation. \label{fig1}} \end{figure*} First we consider the case in which the only available radiation field is the meta-galactic UV background. We use the QSO-dominated meta-galactic UV radiation field (Bgr) computed by Haardt \& Madau (1996) and the cosmic microwave background radiation (CMBR) at {\it z} = 2 (assumed to be a black body with {\it T} = 8.1 K). The UV flux density in the Bgr at energies below 1 Ryd at {\it z} = 2 is roughly two orders of magnitude lower than the current mean Galactic UV field (i.e., $\chi = 1.44\times10^{-2}$ for the Bgr at z = 2). Some of the results from our calculations are presented in Figs.~\ref{fig1} and \ref{fig2}. \subsection{Gas pressure:} Panel (a) of Fig.~\ref{fig1} plots the mean pressure of the H~{\sc i} gas as a function of the total hydrogen density ($\it{ n_H}$) for three different values of {\it N}(H~{\sc i}). Although ionization and thermal gradients exist between the H~{\sc ii} and H~{\sc i} regions, the H~{\sc i} gas is fairly isothermal.
Thus, we use the neutral-hydrogen-weighted mean temperature to estimate the pressure. The continuous, short-dashed and long-dashed curves in all these panels are for dust-to-metal ratios of 0.1, 0.01, and 0.001 (i.e., $\kappa$ = 0.01, 0.001 and 0.0001) respectively. The regions of thermal stability occur for $d(\log P)/d(\log n)>0$. The warm neutral medium (WNM) and cold neutral medium (CNM) of the two-phase medium are shown for reference. As an example, gas with 0.3 $\le$ $\it{ n_H}$ $({\rm cm}^{-3})\le 1$ is thermally unstable for N(H~{\sc i})$\simeq10^{21}~{\rm cm}^{-2}$. The gas will be in the stable WNM phase for 0.03$\le$ $\it{ n_H}$ $({\rm cm}^{-3})\le 0.3$ and in the stable CNM phase for 1$\le$ $\it{ n_H}$ $({\rm cm}^{-3})\le 30$. For reference, Fig. 6 of Wolfire et al. (1995) shows a similar phase diagram for various metallicities and dust contents. The allowed minimum and maximum pressures in the two-phase medium are higher in our case than in the typical Galactic ISM for a given column density, mainly because of the low metallicity and low dust-to-gas ratio (see also Petitjean et al. 1992; Liszt 2002; Wolfe et al. 2003a,b; 2004). The main motivation for plotting the phase diagram from our calculations is to have a rough idea of the nature of the gas at different densities and to compare our work with published models. It is worth keeping in mind that the ISM is more complex than a set of phases in pressure equilibrium with one another. For example, magnetic fields, if present in DLAs, can provide confinement even if there is no thermal pressure equilibrium between the different phases. Thus, we do not make any serious attempt to model the DLAs as two-phase systems. \subsection{H$_2$ abundance:} In this section, we compare the predicted and measured {\it N}(H$_2$) values (Ledoux et al. 2003) to determine the physical conditions in clouds both with and without observed H$_2$.
\subsubsection{Systems without H$_2$ detections:} First we consider the cases where H$_2$ is not detected. The predicted H$_2$ column densities as a function of hydrogen density ($\it{ n_H}$) are shown in panel (b) of Fig.~\ref{fig1}. The horizontal short-dashed line gives the typical detection limit achieved in H$_2$ surveys (i.e., {\it N}(H$_2$)$ = 10^{14}~{\rm cm}^{-2}$). The observed {\it N}(H$_2$) is distributed uniformly between 10$^{14}$ and 10$^{19}$ cm$^{-2}$ (the histogram on the left-hand side of panel (b)). \par From this figure it is clear that, for a given {\it N}(H~{\sc i}), the column density of H$_2$ is independent of $\kappa$ when the density is low and the gas is mainly WNM. This is mainly because in the low-density, high-temperature gas, H$_2$ is formed predominantly through the H$^-$ process due to the low dust-to-gas ratio. It is also clear that the hydrogen density has to be higher than 0.1, 1.0, and 30 cm$^{-3}$ for {\it N}(H~{\sc i}) = 10$^{21}$, 10$^{20}$, and 10$^{19}$ cm$^{-2}$ respectively in order to detect H$_2$. In the presence of an additional local radiation field (perhaps generated by {\it in situ} star formation) these critical densities will be larger. { Thus, if the gas in DLAs is mainly a stable WNM in ionization equilibrium with the Bgr, then the equilibrium H$_2$ column density will be below the detection limit.} This inference is independent of $\kappa$ since the H$^-$ process dominates the H$_2$ formation at low densities. \par Fig.~\ref{fig1} suggests that if DLAs have a thermally stable CNM then the equilibrium abundance of H$_2$ is high enough for the molecule to be easily detectable whenever $\kappa$ is greater than 0.0001, or the dust-to-metal ratio greater than 0.001 (the long-dashed curves in panel (b)). The H$_2$ formation time-scale, which can be long, does not affect this result. A typical time-scale for forming H$_2$ with molecular fraction $f_{\rm H_2}$ is $\sim f_{\rm H_2}/2Rn({\rm H^0})$.
Here {\it n}(H$^0$) denotes the atomic hydrogen density. According to Jura (1975), R$\simeq 3\times10^{-17}$ cm$^{3}$ s$^{-1}$ in the Galactic interstellar medium. Scaling this value by $\kappa$ we have \begin{equation} {\rm t = {5.025 \times 10^8~f_{H_2}\over\kappa~n_H}~ yrs. } \end{equation} Assuming that all the hydrogen is H$^0$, we find a typical H$_2$ formation time-scale of $\sim 5 \times 10^5$ yrs with $\kappa$ = 0.0001 and ${\rm f_{H_2} = 10^{-6}}$ for $\it{ n_H}$~= 10 cm$^{-3}$. The age of the cloud has to be less than 10$^{5}$ yrs for us not to detect H$_2$ with {\it N}(H$_2$)$\le10^{14}$ cm$^{-2}$ and $\kappa=0.0001$. The hydrodynamical or pressure-readjustment time-scales in the cold gas are usually larger than this value (Hirashita et al. 2003) due to the low sound speeds. Hence the typical age of the clouds is expected to be larger. Thus H$_2$ should be detectable in a CNM with $\kappa$ greater than 0.0001 in the absence of any additional local radiation field. \subsubsection{Systems with H$_2$ detections:} Now we focus on the systems with detectable H$_2$. Ledoux et al. (2003) have shown that these systems usually have a high metallicity and dust content, i.e., {\it Z}$\ge0.1${\it Z}$_\odot$ and log~$\kappa\ge-2$. If the gas originates from a stable CNM, our calculations predict ${\rm {\it N}(H_2)\ge10^{19}}$ cm$^{-2}$ (panel (b) of Fig.~\ref{fig1}) for {\it N}(H~{\sc i}) $\ge 10^{20}$ cm$^{-2}$. The observed {\it N}(H$_2$) is always smaller. Interestingly, the observed {\it N}(H$_2$) values with $\kappa\ge0.01$ are reproduced only in a very narrow density range, one that is usually thermally unstable in the standard two-phase models. This signifies that, for a uniformly distributed range of densities, a cloud with a random choice of $\it{ n_H}$,~ with {\it N}(H~{\sc i}) in the range of 10$^{20}$ to 10$^{21}$ cm$^{-2}$, and with $\kappa\ge0.01$, will have either no detectable H$_2$ or {\it N}(H$_2$) $\ge 10^{19}$ cm$^{-2}$.
The probability of detecting H$_2$ in the observed column density range of 10$^{16}$ to 10$^{19}$ cm$^{-2}$ is very low. {Thus, in order to understand the relatively low column densities of H$_2$ in DLAs with clear detections, we need either the presence of an additional radiation field or for the cloud to be too young to have produced H$_2$.} The existence of a local radiation field with an intensity of the order of, or higher than, the Galactic mean field has been suggested by various authors while discussing H$_2$ in individual DLAs (Black et al. 1987; Ge \& Bechtold 1997; Srianand \& Petitjean 1998; Petitjean et al. 2000, 2002; Ge, Bechtold \& Kulkarni 2001; Ledoux et al. 2002b; Levshakov et al. 2002; Reimers et al. 2003). \subsection{Fine-structure excitations of C~{\sc i} and C~{\sc ii}:} In this section, we compare the column densities predicted for the C~{\sc i} and C~{\sc ii} fine-structure excitations with the observations. Many of these results will not agree, leading us to conclude that an additional source of radiation must exist in these systems. The time-scales to establish the ionization and the populations within the fine-structure levels of neutral atoms are shorter than the H$_2$ formation time-scale, so this provides an indicator which should be in steady state. \subsubsection{C~{\sc i} absorption: detectability and degree of ionization} The {\it N}(C~{\sc i})/{\it N}(C~{\sc ii}) ratio is a good tracer of the flux of photons driving the Solomon process in the cold neutral gas where H$_2$ forms, since the ionization potential of C$^0$ overlaps with the H$_2$ electronic bands. However, {\it N}(C~{\sc ii}) cannot be accurately determined in DLAs since the C~{\sc ii}$\lambda1334$ line is usually saturated. Unlike C~{\sc ii}, the column densities of Si~{\sc ii} and S~{\sc ii} (which usually trace the same region as C~{\sc ii}; see Fig.~\ref{depth}) are accurately measured using transitions with low oscillator strengths.
Si and S are usually not highly depleted in DLAs (but see Petitjean et al. 2002 for a unique counter-example). {\it N}(Si~{\sc ii}) can be used as a proxy to estimate {\it N}(C~{\sc ii}) (Srianand et al. 2000) by assuming the solar abundance ratio of Si/C. Now we compare the predicted {\it N}(C~{\sc i}) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) with the observations. We plot {\it N}(C~{\sc i}) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) as a function of $\it{ n_H}$~in panels (e) and (f) of Fig.~\ref{fig1} respectively. The predicted values of {\it N}(C~{\sc i}) are close to the detection limit only in a very low-density gas ($\it{ n_H}$$\le0.01$ cm$^{-3}$). As previously noted, our calculations are performed with Z = 0.1Z$_\odot$, and the systems that do not show C~{\sc i} absorption tend to have metallicities in the range ${\rm 0.01 Z_\odot\le Z\le0.1Z_\odot}$ (Srianand et al. 2005). Thus, the absence of C~{\sc i} is consistent with a cloud having $\it{ n_H}$$\le0.01$ cm$^{-3}$, for the metallicity and {\it N}(H~{\sc i}) typically measured in these systems. Such a cloud will have log~{\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) $\le-1.5$ (panel (f) of Fig.~\ref{fig1}). This is consistent with the measured upper limits, shown as the non-shaded histogram in panel (f), in most of the systems without detected H$_2$. But {\it N}(C~{\sc i}) is more than an order of magnitude larger than the typical detection limit for $\it{ n_H}$$\ge1$ cm$^{-3}$. Thus, C~{\sc i} should be detectable in a high-density gas, even at low metallicity, in the absence of any internal radiation field. The predicted values of {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) in the high-density gas are usually higher than the observed upper limits (see Liszt 2002 and Wolfe et al. 2003a). \par Next we concentrate on systems with detectable C~{\sc i} absorption.
As noted in Section 1, apart from one case ({\it z}$_{abs}$ = 2.139 toward Tol~1037$-$270), all the C~{\sc i} detections are from DLAs that also have H$_2$. These systems have metallicities higher than we assume. Based on the detection of H$_2$, we expect these systems to also have a higher density. Dense clouds ($\it{ n_H}$$\ge0.1$ cm$^{-3}$) produce a higher value of {\it N}(C~{\sc i}) than observed. The measured ratio of {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) in these components is much less than our predictions for a high-density gas. Clearly, a higher radiation field is needed to produce N(C~{\sc i})/N(Si~{\sc ii}) as measured in these systems. \par The fine-structure populations of C~{\sc i} and C~{\sc ii} can also test the high-density requirement for the components with H$_2$ detections. This will be discussed in the next subsection. \subsubsection{C~{\sc i} fine-structure excitation:} Here we use the observed {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) ratio to constrain $\it{ n_H}$~in the components with H$_2$. This ratio is regularly used to trace the pressure in a neutral gas (Jenkins \& Tripp 2001; Srianand et al. 2005). We plot {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) as a function of hydrogen density in panel (c) of Fig.~\ref{fig1}. The horizontal short-dashed line gives the expected value of the ratio if CMBR pumping at {\it z} = 2 is the only source of C~{\sc i} fine-structure excitation. The observed values of {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) (the histogram on the left-hand side) are much higher than this, suggesting that collisions and UV pumping also contribute to the excitation. Most of the observed ratios are consistent with the predictions for $\it{ n_H}$~$\ge~10~{\rm cm}^{-3}$ for the considered range of {\it N}(H~{\sc i}) and $\kappa$. {As noted above, for such a high-density gas, our calculations predict {\it N}(H$_2$) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) higher than observed.
We show below that the presence of an additional radiation field can reduce both of these.} \subsubsection{Excitations of the C~{\sc ii} fine-structure level:} Like C~{\sc i}, C~{\sc ii$^*$} is always detected whenever H$_2$ is present in DLAs. However, it is also seen in a considerable fraction of DLAs without C~{\sc i} and H$_2$ (Wolfe et al. 2003a; Srianand et al. 2005). The observed column density of C~{\sc ii$^*$} directly gives the cooling rate (when the optical depth of the [C~{\sc ii}]~158~$\mu$m line is negligible), which can be used to constrain the star-formation rate once the physical conditions in the gas are known (Wolfe et al. 2003a; 2003b; 2004). C~{\sc ii} can originate from the CNM, the WNM, and an ionized gas (i.e., H~{\sc ii} regions). Collisions with atoms are important for exciting C~{\sc i$^*$}, while electron collisions are important for the excitation of C~{\sc ii$^*$}. Thus, C~{\sc ii$^*$} is expected to be detectable in systems with a warm and/or ionized gas even if the high-density CNM is absent (see Lehner et al. (2004) and Srianand et al. (2005)). C~{\sc ii$^*$} is invariably detected in all the systems with log {\it N}(H~{\sc i})$\ge 21$ and log~{\it Z}$\ge-0.03${\it Z}$_\odot$ irrespective of the absorption redshift. The observed column density is in the range 12.7$\le$log {\it N}(C~{\sc ii}$^*$)$\le$14.0 for systems at 1.5$\le${\it z}$_{abs}$$\le$2.5. The typical upper limit is {\it N}(C~{\sc ii}$^*$) $\le$ 10$^{13}$ cm$^{-2}$ (Srianand et al. 2005). The calculated {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) ratio is shown in panel (d) of Fig.~\ref{fig1}. The shaded and non-shaded observed histograms in this panel are for the systems with and without H$_2$ detections respectively. {The observed range of {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) in systems with H$_2$ detections suggests that they originate in a high-density gas. This is consistent with our conclusion based on H$_2$ and {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i})}.
The observed {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) ratio tends to be smaller in systems without H$_2$. As we have mentioned above, most of these systems have total {\it N}(H~{\sc i}) higher than 10$^{21}$ cm$^{-2}$. {Thus, the systems that show C~{\sc ii$^*$} must originate in clouds with $\it{ n_H}$$\ge$0.1 cm$^{-3}$, and we will therefore require a radiation field in excess of the Bgr to suppress C~{\sc i} and H$_2$ in such a high-density gas}. \subsection{H~{\sc i} spin temperature:} \begin{figure} \centerline{\epsfig{file=fig3.ps,width=9cm,height=8cm}} \caption {The calculated H~{\sc i} spin temperature is shown as a function of density with the meta-galactic UV background dominated by the QSOs at $z\simeq2$. The results are presented for three different column densities of {\it N}(H~{\sc i}) (10$^{19}$, 10$^{20}$, 10$^{21}$ cm$^{-2}$) and three different values of $\kappa$ (continuous, short-dashed and long-dashed curves are for $\kappa$ = 0.01, 0.001, and 0.0001 respectively). \label{fig2}} \end{figure} The thermal state of H~{\sc i} gas can be probed with the 21 cm spin temperature ($T_s$). The harmonic weighted mean temperature, a proxy for $T_s$, is shown in panel (a) of Fig.~\ref{fig2}. It is clear that if DLAs originate in a low-density WNM gas, then the spin temperature will be $\simeq 8000$ K. Thus, systems with no H$_2$, C~{\sc i}, C~{\sc ii$^*$}, and 21 cm absorption are consistent with a low-density WNM in radiative equilibrium with the Bgr. The predicted spin temperatures are usually less than 100 K for clouds with N(H~{\sc i})$\ge 10^{20}$ cm$^{-2}$ in the CNM. Thus, we expect all DLAs to show detectable 21 cm absorption if they originate in a high-density CNM gas which covers the background radio source. Unlike C~{\sc i} or H$_2$, the presence of an additional radiation field with {\it h}$\nu$ $\le$ 13.6 eV may not reduce the 21 cm optical depth because it cannot ionize hydrogen.
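The harmonic weighted mean is the appropriate proxy because the 21 cm opacity weights each gas parcel by $n_{\rm HI}/T$; the single temperature that reproduces the integrated optical depth of a multiphase sight line is therefore
\[
T_s = \frac{N(\mbox{H~{\sc i}})}{\int \left[\,n_{\rm HI}(s)/T(s)\,\right]\,ds}.
\]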
The absence of 21 cm absorption in most DLAs suggests that they originate in a low-density warm medium. Our calculations also suggest that H~{\sc i}, H$_2$, and C~{\sc ii$^*$} should be found in systems with 21 cm absorption. The absence of C~{\sc i} and H$_2$ in the few systems with 21 cm absorption is inconsistent with this prediction, again suggesting a local source of star light. \subsection{Effects of micro-turbulence and cosmic rays} \begin{figure} \centerline{\epsfig{file=fig4.ps,width=9cm,height=9cm}} \caption {The effects of micro-turbulence \& cosmic ray ionization. The solid curves give the result without turbulence and cosmic ray ionization when Z = 0.1 Z$_\odot$ and $\kappa = 0.01$. The short-dashed curves show the effects of 3 km~s$^{-1}$~turbulence. The long-dashed curves show the effect of cosmic ray ionization (with a hydrogen cosmic ray ionization rate of 2.5 $\times10^{-17}$ s$^{-1}$). The results are presented for two values of {\it N}(H~{\sc i}) (10$^{20}$ and 10$^{21}$ cm$^{-2}$). \label{fig4}} \end{figure} We have neglected cosmic rays and turbulent motions in the models considered until now. In this section we investigate whether the inclusion of these processes helps bring the Bgr models into agreement with observations. The presence of micro-turbulence increases the mean free path for line photons and reduces the line-centre optical depth. We consider a turbulent velocity corresponding to a Doppler {\it b} parameter of 3 km~s$^{-1}$. This shows the greatest possible effect since the typical measured {\it b} parameters of H$_2$ components are usually $\le 3$ km~s$^{-1}$ (Srianand et al. 2005; Ledoux et al. 2003; Petitjean et al. 2002; Srianand et al. 2000). We consider two column densities, {\it N}(H~{\sc i}) = 10$^{21}$ cm$^{-2}$ and {\it N}(H~{\sc i}) = 10$^{20}$ cm$^{-2}$, along with $\kappa$ = 0.01, and {\it Z} = 0.1 {\it Z}$_\odot$. Some results are presented in Fig.~\ref{fig4}.
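The reduction of the line-centre optical depth by turbulence follows from the standard expression for a Doppler-broadened line,
\[
\tau_0 = \frac{\sqrt{\pi}\,e^2}{m_e c}\,\frac{f\,\lambda\,N}{b} \simeq 1.497\times10^{-15}\; \frac{N({\rm cm^{-2}})\,f\,\lambda({\rm \AA})}{b({\rm km~s^{-1}})},
\]
so raising {\it b} from 1 to 3 km~s$^{-1}$ lowers $\tau_0$ by a factor of 3 at fixed column density.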
{\it N}(H$_2$) changes very little for {\it N}(H~{\sc i}) = 10$^{21}$ cm$^{-2}$. However, the molecular column density is slightly lower in the case of {\it N}(H~{\sc i}) = 10$^{20}$ cm$^{-2}$. We would need {\it b} $\gg$ 3 km~s$^{-1}$~to produce a significant effect on {\it N}(H$_2$). The [C~{\sc i}] 610 $\mu$m line is optically thick at high column densities and the fine-structure level populations are influenced by line trapping. Turbulence increases the photon mean free path and reduces this trapping. The effect is clearly seen for the ratio {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) when {\it N}(C~{\sc i}$^*$)$\ge 3\times 10^{12}$ cm$^{-2}$, which occurs when $\it{ n_H}$ = 10 cm$^{-3}$ (Panel (c) of Fig.~\ref{fig4}) for {\it N}(H~{\sc i}) = 10$^{21}$cm$^{-2}$. The effect is not seen significantly in the case of {\it N}(H~{\sc i}) = 10$^{20}$ cm$^{-2}$ since large column densities of {\it N}(C~{\sc i$^*$}) only occur for densities above 100 cm$^{-3}$. As pointed out before, our calculations produce higher {\it N}(C~{\sc i}) than observed. C~{\sc i} absorption is not saturated in most DLAs, so 610 $\mu$m line trapping should not control the {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) ratio. Thus the inclusion of micro-turbulence cannot rectify the problems of the Bgr models in simultaneously reproducing the H$_2$ and C~{\sc i} observations. \par The column density of C~{\sc ii}$^*$ is also affected by the presence of turbulent motions for {\it N}(C~{\sc ii$^*$}) $\ge 3\times 10^{13}$ cm$^{-2}$ due to optical depth in the [C~{\sc ii}] 158 $\mu$m line. The observed {\it N}(C~{\sc ii$^*$}) is always higher than 3$\times10^{13}$ cm$^{-2}$ (Table 1 of Wolfe et al. 2003a; Srianand et al. 2005) in DLAs with {\it N}(H~{\sc i})$\ge 10^{21}$ cm$^{-2}$. Thus, line trapping effects may be important in producing the observed excitation of the C~{\sc ii} fine-structure level. Cosmic rays add heat to a highly ionized gas and produce secondary ionizations in a neutral gas.
H$^+$ produced by cosmic ray ionization can cause ortho-para conversion and thermalize this ratio (Flower et al. 1994). We consider a cosmic ray ionization rate equal to the Galactic background ionization rate ($\sim2.5\times10^{-17}~{\rm s}^{-1}$; Williams et al. 1998). These results are presented with long-dashed lines in Fig.~\ref{fig4}. The enhancement in the gas temperature increases the column densities of C~{\sc i$^*$} and C~{\sc ii$^*$} in the high-density gas. The pressure of the neutral gas increases due to cosmic ray heating, as expected. We also see a decrease in N(H$_2$) at a given $\it{ n_H}$~mainly because the increase in the gas temperature reduces the efficiency of H$_2$ formation. Background cosmic ray ionization does not produce drastic changes for the low-density gas where the Bgr dominates (Fig.~\ref{fig4}). A much larger cosmic ray ionization rate is needed to have the desired effect. \subsection{Summary:} The main results for a cloud irradiated by the meta-galactic UV background radiation are: \begin{itemize} \item{} The presence of the QSO dominated meta-galactic radiation field can maintain an H$_2$ abundance lower than the detection threshold for $\it{ n_H}$$\le0.1$ cm$^{-3}$, irrespective of the dust content and {\it N}(H~{\sc i}). The presence of any extra radiation field in addition to the meta-galactic radiation field, or a slower H$_2$ grain formation rate, increases this critical density. Thus the absence of H$_2$ in 85 per cent of DLAs is consistent with the low-density models. \begin{figure*} \centerline{\epsfig{file=bbfig1.ps,width=18cm,height=14cm}} \caption { {\bf The effect of density:} The results of calculations of a cloud in the radiation field of a star with a surface temperature of 40,000 K (stellar case). The cloud has log {\it N}(H~{\sc i}) = 20.7, {\it Z} = 0.1 {\it Z}$_\odot$. The long-dashed, continuous and short-dashed curves are for log $\it{ n_H}$ = 0.0, 1.0, and 1.7 respectively.
The labels 1, 2, 3, and 4 in the short-dashed curves are for log($\kappa$) = $-1.4,~-1.6,~-1.8$, and $-2.0$ respectively. We mark these numbers only for the short-dashed curves. The dependence on $\kappa$ has the same sense for other values of $\it{ n_H}$~as well. The results are presented as a function of $\chi$ (see Eq.\ref{eqchi}). In each panel, the observed distributions are given as histograms (see caption of Fig.~\ref{fig1} for details). \label{figbb1}} \end{figure*} \item{} The detection of C~{\sc i} absorption is inevitable whenever our line of sight passes through the CNM. However, the density range that produces the observed {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) ratio also produces {\it N}(C~{\sc i}) and {\it N}(H$_2$) higher than the observed values. An additional source of radiation with energies $\le 13.6$ eV is needed to account for the low values of {\it N}(H$_2$) and {\it N}(C~{\sc i}) in these systems. \item{} Like H$_2$, the absence of 21 cm absorption in most of the high-{\it z} DLAs can be naturally explained if DLAs originate mostly in the WNM. A low-density gas, corresponding to a WNM, has a very large spin temperature ({\it T}$_s$ $\ge$ 7000 K). If DLAs are dominated by such a gas then 21 cm absorption will not be detectable. \item{} A high-density gas has {\it T}$_s$ $\le$ 100 K. This produces strong 21 cm absorption along with high values of {\it N}(H$_2$) and {\it N}(C~{\sc i}). The fact that {\it N}(H$_2$) and {\it N}(C~{\sc i}) are not seen in the few systems that do show 21 cm absorption suggests that an additional radiation field is present in these systems as well. This is consistent with the results of detailed investigations of individual systems available in the literature (see references given in Section 3.3.1). \item{} The too-large {\it N}(H$_2$) and {\it N}(C~{\sc i}) column densities predicted at higher densities cannot be explained by micro-turbulent motions (up to 3 km~s$^{-1}$) or cosmic ray heating.
However, the non-thermal motions affect the fine-structure level populations when the infrared lines become optically thick. The observed log {\it N}(C~{\sc ii$^*$}) is $\ge 13.5$ for {\it N}(H~{\sc i})$\ge 10^{21}$ cm$^{-2}$. The effects of line trapping of [C~{\sc ii}] 158 $\mu$m become very important in these systems whenever the turbulent motions are small. \end{itemize} In the following section we explore the possibility of using {\it in situ} star formation to prevent the formation of too-large {\it N}(H$_2$) in high-density systems. We wish to point out that the inclusion of radiation from the Lyman Break Galaxies (LBGs) can increase the flux of Lyman-Werner band photons in the UV background by up to a factor of 10 (see Haardt \& Madau (2001)). This will make the Bgr radiation only roughly 5 times weaker than the Galactic mean field. However, Section 4 shows that the required radiation field is much higher than this enhanced Bgr. In addition, this enhanced UV field will also produce a bimodal distribution of N(H$_2$), contrary to what has been observed. \section {Ionization by young stars:} \begin{figure*} \centerline{\epsfig{file=bbfig1ext.ps,width=18cm,height=14cm}} \caption {The results of calculations of a cloud in the radiation field of a star with a surface temperature of 40,000 K and attenuated by {\it N}(H~{\sc i}) = 10$^{20}$ cm$^{-2}$ (diffuse case). The rest are the same as in Fig.~\ref{figbb1}. \label{figbb1e}} \end{figure*} \begin{figure*} \centerline{\epsfig{file=bbfig1a.ps,width=18cm,height=14cm}} \caption { {\bf Effects of column density:} The results of calculations of a cloud in the radiation field of a star with a surface temperature of 40,000 K (stellar case). The cloud has $\it{ n_H}$=50 cm$^{-3}$, {\it Z} = 0.1 {\it Z}$_\odot$. The long-dashed, continuous and short-dashed curves are for log {\it N}(H~{\sc i}) = 20.0, 20.7 and 21.0 respectively. The labels 1, 2, 3, and 4 in the short-dashed curves are for log($\kappa$) = $-1.4,~-1.6,~-1.8,$ and $-2.0$ respectively.
We mark these numbers only for the short-dashed curves. The dependence on $\kappa$ has the same sense for other values of {\it N}(H~{\sc i}) as well. The histograms are as explained in Fig.~\ref{fig1}. \label{figbb1a}} \end{figure*} \begin{figure*} \centerline{\epsfig{file=bbfig1aext.ps,width=18cm,height=14cm}} \caption { The results of calculations of a cloud in the radiation field of a star with a surface temperature of 40,000 K attenuated by {\it N}(H~{\sc i}) = 10$^{20}$ cm$^{-2}$, our diffuse case. The rest are the same as in Fig.~\ref{figbb1a}. \label{figbb1ae}} \end{figure*} We add a stellar radiation field using a 40,000 K Kurucz model atmosphere in addition to the meta-galactic radiation field discussed above. We consider direct stellar radiation and a stellar continuum attenuated by N(H~{\sc i}) = $10^{20}$ cm$^{-2}$. These two cases are denoted by ``stellar" and ``diffuse", as discussed in Section 2.5. We showed that the meta-galactic UV background is sufficient to suppress the formation of H$_2$ in a low-density gas (i.e., $\it{ n_H}$$\le0.1~{\rm cm}^{-3}$). Thus, in this section we mainly concentrate on the high-density gas needed to account for the observed {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) ratio. We consider clouds with three values of {\it N}(H~{\sc i}) ($10^{20}$, $10^{20.7}$, and 10$^{21}$ cm$^{-2}$), three values of $\it{ n_H}$ (1, 10, and 50 cm$^{-3}$), and four values of $\kappa$ ($10^{-2},$ $10^{-1.8}, $ $10^{-1.6}$, and $10^{-1.4}$). In all of these calculations we assume {\it Z} = 0.1 {\it Z}$_\odot$ and a turbulent velocity {\it b} = 3 km~s$^{-1}$. Cosmic-ray heating is not considered in the models described below. \par We present the results of model calculations as a function of $\chi$ (as defined in Eq.~\ref{eqchi}). The calculations for the stellar case will have more high-energy photons than the diffuse one for the same value of $\chi$.
Therefore, for a given {\it N}(H~{\sc i}), the gas will be hotter and more ionized at the illuminated side of the cloud for the stellar case (see Fig.~\ref{depth}). We present the results of our calculations for the stellar and diffuse case for {\it N}(H~{\sc i}) = 5 $\times10^{20}$ cm$^{-2}$ in Figs.~\ref{figbb1} and \ref{figbb1e} respectively. The effects of changing {\it N}(H~{\sc i}) are shown in Figs.~\ref{figbb1a} and \ref{figbb1ae}. \par \subsection{H$_2$ abundance:} \begin{table*} {\tiny \caption{Results for clouds irradiated by starlight. A-E are the stellar case and AE-EE are the diffuse case} \begin{tabular}{lcccccccccc} \hline &\multicolumn {10}{c}{Models} \\ Parameters & A & B & C &D&E& AE & BE &CE&DE&EE\\ \hline log N(H~{\sc i}) & 20.7 & 20.7 & 20.7 &20.0&21.0& 20.7 & 20.7 &20.7&20.0&21.0\\ $\it{ n_H}$(cm$^{-3}$) & 1 & 10 & 50 & 50& 50 & 1 & 10 &50. &50 & 50 \\ log N(H~{\sc i}(ext))& ....&.... & .... &.... &.... &20.0 &20.0& 20.0 & 20.0 &20.0\\ $\chi $ & $\le 3$& 1,30 & 3,100&0.5,35&4,400&$\le10$& 1,100 &3,200&0.3,22&10,475\\ T$_s$ (K) & 3150,5600& 100,1000 & 56,316&104,1260&48,275 &2290,5248&90,900&54,339&40,560&66,195\\ $\tau_v({\rm 21~cm})/f$&0.09,0.05 &2.74,0.27&4.90,0.87&0.53,0.04&11.42,2.00& 0.12,0.05& 3.04,0.30& 5.07,0.81&1.37,0.10 &0.83,0.28\\ log N(C~{\sc i})/N(Si~{\sc ii}) &$-2.6,-2.0$ &$-2.6,-2.0$&$-2.5,-1.8$&$-1.8,-1.0$&$-3.0,-2.5$&$-3.8,-2.5$&$-4.0,-2.04$&$-3.7,-2.0$&$-2.5,-1.0$&$-3.7,-2.0$\\ log {N(C~{\sc i}$^*$)/ N(C~{\sc i})}& $-0.76$,0.0 & $-0.3$,0.3 &$-0.2$,0.4 &$-0.1,0.44$&$0.0,0.40$&$\sim-0.76$&$-0.6,-0.5$&$-0.4,-0.1$&$-0.4,-0.1$&$-0.26,-0.44$\\ log N(C~{\sc ii$^*$})/N(Si~{\sc ii}) & $-1.8,-1.3$ & $-0.5,-1.0$& $-0.2,-0.8$ & $-0.5,0.0$&$-0.9,-0.1$&$\sim-1.9$&$-1.3,-1.1$ & $-0.9,-0.6$ &$-1.5,-0.5$&$-1.08,-0.66$\\ log N(C~{\sc i}(tot)) &12.0,13.4 &11.9,13.0 & 11.8,12.8& 11.0,13.4&11.8,13.0&11.0,13.0&11.0,13.0&11.0,13.0&11.2,13.0&10.6,12.0\\ log N(O~{\sc i$^*$}) &11.2,11.3 & 11.3,11.9 & 11.5,12.2 &11.0,11.3 & 
10.5,11.7&11.0,11.2&10.0,11.5&10.0,12.0&10.0,11.5&10.5,12.0\\ log N(O~{\sc i$^{**}$})&11.2,11.3 & 11.3,11.9 & 11.6,12.2 &11.0,11.3 & 10.5,11.5&11.0,11.3&11.0,11.53&10.0,11.5&10.0,11.3&10.0,11.7\\ log N(Si~{\sc ii$^*$}) & $\le10.2$& 10.5,11.0 & 10.8,11.7 &$\sim9.5$ & 9.6,10.2 &9.0,10.3& 9.9,10.0&9.0,10.5&9.0,10.6&9.0,10.5\\ \hline \end{tabular} \label{table1} } \end{table*} Panel (a) of Fig.~\ref{figbb1} shows the predicted column density of H$_2$ as a function of $\chi$ for the stellar case, log~{\it N}(H~{\sc i})=20.7, and three values of $\it{ n_H}$ (long-dashed, continuous and short-dashed curves are for $\it{ n_H}$ = 1, 10 and 50 cm$^{-3}$ respectively). For each value of $\it{ n_H}$~the results are presented for four different values of $\kappa$. For a given $\it{ n_H}$~and $\chi$, a higher dust content $\kappa$ produces a lower {\it N}(H$_2$). Naively we would expect {\it N}(H$_2$) to increase with increasing $\kappa$ due to an enhanced probability of H striking a grain, and increased shielding. However, the gas temperature increases for higher $\kappa$ due to additional grain photo-electric heating (see the next section). This reduces the H$_2$ formation rate as shown in Fig.~1 of Cazaux and Tielens (2002), so {\it N}(H$_2$) decreases. \par The calculations reproduce the observed range of {\it N}(H$_2$) for 1$\le\chi\le$100, $\it{ n_H}$~ $\sim$ 10$-$50 cm$^{-3}$, and log~{\it N}(H~{\sc i}) = 20.7 for the stellar case. The range becomes 1$\le\chi\le$300 for the diffuse case (Panel (a) of Fig.~\ref{figbb1e}). We find that with $\it{ n_H}$=1 cm$^{-3}$, H$_2$ should be detectable when $\chi\le 3$ and $\chi\le 10$ for the stellar and diffuse cases respectively. Clearly, for a moderate local radiation field, $\chi\simeq 1-10$, H$_2$ will be detectable for $\it{ n_H}$$\ge$1 cm$^{-3}$ and {\it N}(H~{\sc i}) $\ge5\times10^{20}$ cm$^{-2}$.
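The scaling of {\it N}(H$_2$) with $\it{ n_H}$~and $\chi$ described above follows from a simple formation--destruction balance. The sketch below is only an order-of-magnitude illustration; the rate coefficients {\tt R} and {\tt D0} are assumed Galactic-like values, not the rates used in our calculations:

```python
# Toy steady-state molecular fraction: grain formation of H2 balanced
# against photodissociation, R * n_H * n(HI) = D0 * chi * f_shield * n(H2).
# R and D0 are ASSUMED representative Galactic values (illustrative only).
R = 3.0e-17   # H2 formation rate on grains, cm^3 s^-1 (scales with kappa)
D0 = 4.0e-11  # unshielded photodissociation rate at chi = 1, s^-1

def h2_to_hi(n_H, chi, f_shield=1.0):
    """Equilibrium n(H2)/n(HI) for density n_H (cm^-3) and field strength chi."""
    return R * n_H / (D0 * chi * f_shield)

# The ratio scales as n_H/chi: raising the radiation field by the same
# factor as the density leaves the molecular fraction unchanged.
```

In this picture the molecular fraction scales as $\it{ n_H}$$/\chi$ before self-shielding sets in, which is why a moderate local field can keep H$_2$ below the detection threshold even at relatively high densities.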
Higher $\it{ n_H}$~ is needed to produce detectable {\it N}(H$_2$) for a lower {\it N}(H~{\sc i}) (Panels (a) in Figs.~\ref{figbb1a} and \ref{figbb1ae}). The range of $\chi$ that is consistent with the observed range of H$_2$ is summarised in Table~\ref{table1} for all the scenarios discussed in this work. Observations of atomic fine-structure lines will further narrow down this range. \par \subsection{Spin temperature and 21 cm optical depth:} Panels (b) of Figs.~\ref{figbb1} and \ref{figbb1e} show the predicted {\it T}$_s$ for log~N(H~{\sc i}) = 20.7, three values of $\it{ n_H}$ (long-dashed, continuous and short-dashed curves are for $\it{ n_H}$ = 1, 10 and 50 cm$^{-3}$ respectively) and four values of $\kappa$. It is clear from all these panels that for a given $\it{ n_H}$ (curves with similar line style), $T_s$ increases with increasing $\kappa$ mainly due to photoelectric heating by dust grains (labels 1, 2, 3 and 4 on the short-dashed curve show models with $\kappa$ in increasing order). We also notice that for a given $\kappa$ (say the top-most curve with a given line-style) and $\chi$ the models with higher $\it{ n_H}$ have lower {\it T}$_s$. This effect is very prominent in the stellar case. This is mainly because {\it T}$_s$ is very large at the illuminated side of the gas in the stellar case. Panels (b) of Figs.~\ref{figbb1a} and \ref{figbb1ae} show the results with $\it{ n_H}$ = 50 cm$^{-3}$ for three different values of N(H~{\sc i}) (long-dashed, continuous and short-dashed curves are respectively for log~{N(H~{\sc i})}= 20, 20.7 and 21) for the range of $\kappa$. It is clear that for a given $\kappa$ (say the top-most curves for different line-styles), clouds with lower {\it N}(H~{\sc i}) will have higher $T_s$ and hence lower $\tau_v({\rm 21~cm})$. In the diffuse case (from Figs.~\ref{figbb1e} and \ref{figbb1ae}) we notice that for a given $\it{ n_H}$ and $\kappa$, {\it T}$_s$ gradually increases with $\chi$ and saturates to a constant value at large values of $\chi$.
The increase in $T_s$ with $\chi$ at low values of $\chi$ is the effect of photo-electric heating. At larger $\chi$ the grains become highly charged and the total heating rate becomes independent of $\chi$ (Bakes \& Tielens 1994; Weingartner \& Draine 2001a). Thus $T_s$ becomes independent of $\chi$ at large $\chi$. \par The range of predicted $T_s$ over the range of ${\rm \chi}$ constrained by the H$_2$ observations is summarised in Table~\ref{table1}. This table also gives the expected 21 cm optical depth obtained using Eq.~\ref{eqn1}. {It is clear from the table that for a high-density gas (i.e., $\it{ n_H}$$\ge$10 cm$^{-3}$) and log~{\it N}(H~{\sc i})$\ge20.7$, 21 cm absorption should be detectable with an optical depth of $\tau_v({\rm 21~cm})/f\ge0.27$. For high $\kappa$, however, the 21 cm optical depth becomes as low as 0.05, even when $\it{ n_H}$=50 cm$^{-3}$, for log~{\it N}(H~{\sc i}) = 20. {Thus, the absence of 21 cm absorption in high-density (or low temperature) systems will either mean that the average N(H~{\sc i}) along the radio source is much less than the N(H~{\sc i}) seen along the optical sight-line (Wolfe et al. 2003a) or that the actual {\it N}(H~{\sc i}) in the high-density component is lower (Kanekar \& Chengalur 2003). The absence of H$_2$ in the systems that show 21 cm absorption with low $T_s$ will indicate a much higher radiation field (i.e., $\chi\gg1$). } Our calculations also suggest that it is possible to detect H$_2$ with low $\tau_v({\rm 21 cm})$ for moderate $\it{ n_H}$ (see Models A \& AE in Table~\ref{table1}). For example, when log {\it N}(H~{\sc i}) $=20.7$ and $\it{ n_H}$ $\simeq 1$ cm$^{-3}$, the expected spin temperature is high (2300$-$5600 K) and $0.05\le\tau_v({\rm 21 cm})/f\le 0.12$. Thus, 21 cm absorption will either be weak or undetected in cases where H$_2$ is detectable.
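As a numerical cross-check on Table~\ref{table1}, the tabulated optical depths are recovered from the spin temperatures with an Eq.~\ref{eqn1}-type relation; the snippet below assumes a line width of 1 km~s$^{-1}$ and unit covering factor (both assumptions, chosen only for illustration):

```python
# Peak 21 cm optical depth for an H I column N_HI (cm^-2), spin temperature
# T_s (K), and velocity width dv_kms (km/s); covering factor taken as unity.
def tau_21cm(N_HI, T_s, dv_kms=1.0):
    return N_HI / (1.823e18 * T_s * dv_kms)

# Model B of Table 1 (log N(HI) = 20.7, T_s = 100 K) gives tau/f ~ 2.7,
# and Model E (log N(HI) = 21.0, T_s = 48 K) gives tau/f ~ 11.4.
```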
Thus, the fine-structure excitation of C~{\sc i} or C~{\sc ii} in systems with detectable 21 cm absorption will lead to a better understanding of the physical conditions in the gas. This is detailed in the following sections.} \subsection{C~{\sc i} absorption: detectability and level of ionization:} \begin{figure*} \centerline{\epsfig{file=bbfig2.ps,width=18cm,height=15cm}} \caption {The ratios of column densities of H$_2$ in different rotational levels are shown as a function of the total H$_2$ column density for the stellar case. The points in the figures give the observed data (Ledoux et al. 2003). In all these calculations we assume the metallicity to be 0.1 Z$_\odot$, log~N(H~{\sc i})=20.7, and log $\kappa$ is varied between $-2.0$ and $-1.4$ (different curves with the same line-style). The long-dashed, continuous and short-dashed curves are for $\it{ n_H}$ = 1, 10 and 50 cm$^{-3}$ respectively. \label{figbb2}} \end{figure*} \begin{figure*} \centerline{\epsfig{file=bbfig2ext.ps,width=18cm,height=15cm}} \caption { Same as Fig.~\ref{figbb2} but with the diffuse ionizing radiation used in the calculations. \label{figbb2e}} \end{figure*} { Panels (e) and (f) of Figs.~\ref{figbb1} and~\ref{figbb1e} plot {\it N}(C~{\sc i}) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) as a function of $\chi$ for the stellar and diffuse cases respectively with log~{\it N}(H~{\sc i})=20.7. It is clear from the figure that the predictions for different $\kappa$ (curves with the same line-style) are nearly identical.} Both {\it N}(C~{\sc i}) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) are lower than those produced by the Bgr (Fig.~\ref{fig1}) due to the presence of additional ionizing photons. A minor reduction in {\it N}(C~{\sc i}) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) for a given $\it{ n_H}$~and $\kappa$ is noted for the diffuse case compared to the stellar case. This can be easily understood using Fig.~\ref{depth}. N(Si~{\sc ii}) is nearly identical for both radiation fields.
N(C~{\sc i}) is lower in the diffuse case because $n_e$ and $T$ are lower in these models than in the stellar case (see panel (b) of Fig.~\ref{depth}). \par Here we concentrate on systems with H$_2$ detections. The predicted {\it N}(C~{\sc i}) is well below the detection limit for log~{\it N}(H~{\sc i}) = 20.7 and $\chi\ge10$ for the range of $\it{ n_H}$~and the two stellar continua considered here. Thus, these models can explain the weak or non-detection of C~{\sc i} absorption in some of the DLAs that show strong H$_2$ absorption ({\it z}$_{abs}$ = 3.025 toward Q~0347$-$383 with log~{\it N}(H~{\sc i})= 20.56; {\it z}$_{abs}$ = 2.595 toward Q~0405$-$443 with log~{\it N}(H~{\sc i})= 20.90; and {\it z}$_{abs}$ = 2.811 toward Q~0528$-$250 with log~{\it N}(H~{\sc i})= 21.10). {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) is lower than the measured ratio (in DLAs that show both H$_2$ and C~{\sc i} absorption) for the range in $\chi$ allowed by H$_2$ for both the diffuse and stellar case, and log~{\it N}(H~{\sc i}) = 20.7. Based on the trend seen in Figs.~\ref{figbb1} and~\ref{figbb1e} we may need $\it{ n_H}$$\ge$ 50 cm$^{-3}$ and $\chi\le10$ to explain the observed range in {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) for H$_2$ components that show detectable C~{\sc i} absorption. However, such a model will overproduce {\it N}(H$_2$). This inconsistency can be solved by using a lower value of {\it N}(H~{\sc i}) and a higher $\it{ n_H}$ (see Panel (f) in Figs.~\ref{figbb1a} and \ref{figbb1ae}). {In these plots the long-dashed, continuous and short-dashed curves are the results for $\it{ n_H}$=50 cm$^{-3}$ with log~N(H~{\sc i}) = 20.0, 20.7, and 21.0 respectively. } The total {\it N}(H~{\sc i}) measured for {\it z}$_{abs}$ = 1.968 toward Q~0013$-$004 and {\it z}$_{abs}$ = 2.087 toward 1444+014 are consistent with log~{\it N}(H~{\sc i})$\le$20 in the H$_2$ (and C~{\sc i}) components.
The {\it z}$_{abs}$ = 1.962 system toward Q~0551-366 that shows three H$_2$ components has a total log~{\it N}(H~{\sc i})=20.5. The {\it z}$_{abs}$=1.973 system toward Q~0013-004 has 15 well-detached C~{\sc i} components with a total log~{\it N}(H~{\sc i})=20.8. Clearly a low N(H~{\sc i}) is probable in the components that show H$_2$ and C~{\sc i} absorption. A cloud with a moderate radiation field (i.e., $\chi\le10$), log~{\it N}(H~{\sc i})=20.0, and $\it{ n_H}$ = 50 cm$^{-3}$ reproduces the observed range of {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}), {\it N}(H$_2$) and {\it N}(C~{\sc i}). \subsection{C~{\sc i} fine-structure excitation:} In this section we consider the fine-structure excitation of C~{\sc i}. Panels (c) of Figs.~\ref{figbb1} and~\ref{figbb1e} show {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) as a function of $\chi$ for log~{\it N}(H~{\sc i})=20.7. This ratio should be independent of $\chi$ if UV pumping is negligible, because the gas temperature depends mainly on density and is roughly independent of $\chi$. This happens for the diffuse case (Panel (c) of Fig.~\ref{figbb1e}). {Also the effect of $\kappa$ is clearly evident in this case}. However, in the stellar case, we find that the predicted {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) increases with increasing $\chi$. We also notice that for a given $\it{ n_H}$~and $\chi$, the stellar case (Panel c in Fig.~\ref{figbb1a}) with a lower {\it N}(H~{\sc i}) produces a higher value of {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}). However, the dependence on {\it N}(H~{\sc i}) is very weak in clouds irradiated by the diffuse radiation field (see panel c in Fig.~\ref{figbb1ae}). {This implies that the ratio {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) will depend only on density for the diffuse case. However, the ratio {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) will depend on the strength of the radiation field (also see panel (d) in Fig.~\ref{depth}) in the stellar case.} \par Now we focus on systems that show detectable C~{\sc i} and H$_2$.
The predicted value of {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) is more sensitive to $\it{ n_H}$~and depends only weakly on {\it N}(H~{\sc i}) and $\chi$ for the diffuse case. The observed {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) is consistent with 10$\le$$\it{ n_H}$$\le100$ cm$^{-3}$. In the stellar case {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) depends on $\it{ n_H}$, N(H~{\sc i}) and $\chi$. Clouds with log~{\it N}(H~{\sc i}) = 20.7 reproduce the observed range in {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) for 1$\le$$\it{ n_H}$$\le$50 cm$^{-3}$ and the values of $\chi$ constrained by the H$_2$ observations. As noted before, however, these models fail to reproduce the observed {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}). Thus we require low {\it N}(H~{\sc i}) ($\simeq 10^{20}$ cm$^{-2}$), high $\it{ n_H}$ ($\simeq 50$ cm$^{-3}$), and low $\chi$ ($\le10$) components in order to be consistent with the observed {\it N}(C~{\sc i}) and {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) ratio. \subsection{C~{\sc ii} fine-structure excitation:} Here we discuss the predicted C~{\sc ii$^*$} in detail. Panels (d) of Figs.~\ref{figbb1} and \ref{figbb1e} show {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) for log~{\it N}(H~{\sc i}) = 20.7 as a function of $\chi$. The shaded histogram is the observed distribution of the systems with H$_2$ components and the non-shaded histogram represents those systems that do not show detectable H$_2$ and C~{\sc i}. \par {In the stellar case {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) is higher for higher $\chi$ (see panel (d) in Fig.~\ref{figbb1}), and for a given $\chi$ the excitation is higher for lower densities (long-dashed, continuous and short-dashed lines are for $\it{ n_H}$ = 1, 10, and 50 cm$^{-3}$ respectively). The ratio depends only weakly on $\kappa$ (different curves with the same line-style) in the stellar case.} { All these trends arise mainly because, for a fixed {\it N}(H~{\sc i}), a considerable fraction of C~{\sc ii} will originate from regions where hydrogen is ionized. The fraction of C~{\sc ii} originating from a hot ionized gas is higher in the case of lower $\it{ n_H}$~and so the ratio is higher. This happens for the lower {\it N}(H~{\sc i}) case also (see panel (d) in Fig.~\ref{figbb1a} where long-dashed, continuous and short-dashed curves are for log~N(H~{\sc i}) = 20.0, 20.7 and 21.0 respectively). } \par { Results for the diffuse case are summarised in panels (d) of Figs.~\ref{figbb1e} and \ref{figbb1ae}. It is clear that for a given $\it{ n_H}$, N(H~{\sc i}), and $\kappa$ the ratio increases with increasing $\chi$ when $\chi$ is small. However, at larger $\chi$ the ratio becomes independent of $\chi$. This is due to the saturation of grain heating for highly charged grains at high $\chi$ (see Weingartner \& Draine, 2001a). This is also the reason for the lack of dependence of the spin temperature on $\chi$ (see the previous section). Thus, at high $\chi$ the ratios mainly depend on $\it{ n_H}$~in the diffuse case. The models presented by Liszt (2002) use the fitting function given by Bakes \& Tielens (1994) and show a monotonic increase in N(C~{\sc ii$^*$})/N(C~{\sc ii}) with increasing $\chi$. Weingartner \& Draine (2001a) (see figure 15 in their paper) show that the fitting function given by Bakes \& Tielens (1994) overproduces the photo-electric heating at high $\chi$, and because of this there will be an increase in N(C~{\sc ii$^*$})/N(Si~{\sc ii}) with increasing $\chi$ even at large values of $\chi$. As our treatment is very close to that of Weingartner \& Draine (2001a), we clearly see the effect of the saturation of photo-electric heating by dust grains in our models. } \par First we will concentrate on the systems with H$_2$ detections.
In the diffuse case the range in $\it{ n_H}$~that is consistent with {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) also reproduces the observed range in {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}). In the stellar case with log~{\it N}(H~{\sc i}) = 20.7 the observed distribution of {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) is consistent with $\it{ n_H}$~in the range of 10$-$50 cm$^{-3}$. The models with lower {\it N}(H~{\sc i}) tend to produce higher {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) for a given $\it{ n_H}$~and $\kappa$. A cloud with the low {\it N}(H~{\sc i}) and high $\it{ n_H}$~that are required to reproduce N(C~{\sc i})/N(Si~{\sc ii}) will overproduce {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}). Thus the observed {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii}) seems to favor the diffuse radiation field. This yields a model that reproduces all the other observations. Now we concentrate on systems without H$_2$ detections but showing C~{\sc ii$^*$} absorption. {There are two possibilities for the absence of H$_2$ in these systems: either (i) the gas has a lower density and so is partially ionized with a higher temperature, or (ii) the gas is at a high density in a strong UV field.} { In the diffuse case the ratio N(C~{\sc ii$^*$})/N(Si~{\sc ii}) measured in systems without H$_2$ is consistent with $\it{ n_H}$ in the range of 1$-$10 cm$^{-3}$ (see panel (d) of Fig.~\ref{figbb1e}). } Srianand et al. (2005) pointed out that the Al~{\sc iii} absorption seen in these systems could be a useful indicator of the ionization of the gas. We will return to this issue while discussing the predicted {\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}) ratio (see Section 4.7). \subsection{Rotational excitation of H$_2$:} \begin{figure*} \centerline{\epsfig{file=bbfig3ext.ps,width=18cm,height=15cm}} \caption {The ratios of column densities of H$_2$ in different rotational levels are shown as a function of the total H$_2$ column density for the diffuse case.
The points in the figures give the observed data (Ledoux et al. 2003). In all these calculations, we assume the metallicity to be 0.1 {\it Z}$_\odot$, $\it{ n_H}$= 50 cm$^{-3}$, and log $\kappa$ is varied between $-2.0$ and $-1.4$. The labels 1, 2, 3 and 4 on the long-dashed curves are for log($\kappa$) = $-1.4$, $-1.6$, $-1.8$ and $-2.0$ respectively. The long-dashed, continuous and short-dashed curves are for log {\it N}(H~{\sc i}) = 20.0, 20.7, and 21.0 respectively. \label{figbb3e}} \end{figure*} Here we focus on the H$_2$ rotational excitation predicted by our calculations. \subsubsection{The ortho-para ratio (OPR):} The OPR indicates the kinetic temperature when the H$_2$ electronic bands are optically thick (i.e., log({\it N}(H$_2$))$\ge$ 16; Tumlinson et al. 2002). Srianand et al. (2005) have shown that the OPRs observed in DLAs are higher than those measured along Galactic ISM, LMC, and SMC sight lines. Here, we probe the reason for this difference. Panel (a) of Figs.~\ref{figbb2} and \ref{figbb2e} plots the OPR as a function of {\it N}(H$_2$) for the stellar and diffuse cases respectively with log~{\it N}(H~{\sc i}) = 20.7. For optically thin H$_2$, when the Solomon process dominates the excitation of H$_2$ (i.e., log~{\it N}(H$_2$)$\le$ 16), the predicted OPR is close to 3 for clouds with log~{\it N}(H~{\sc i})=20.7. However, the OPR is greater than 3 for log~{\it N}(H$_2$) in the range of 16 to 18. At {\it N}(H$_2$) $\ge10^{18}~{\rm cm}^{-2}$ the OPR traces the kinetic temperature of the gas. \par Sternberg \& Neufeld (1999) show that the high value of the OPR seen at intermediate {\it N}(H$_2$) arises because the electronic absorption lines of ortho-H$_2$ become self-shielded at smaller column densities than those of para-H$_2$. Thus, ortho-H$_2$ survives while para-H$_2$ is destroyed. We therefore expect the OPR to be larger in the case of a higher radiation field when 16$\le$log~{\it N}(H$_2$)$\le$18.
For a given {\it N}(H$_2$) (in the intermediate range), the predicted OPR is higher for higher $\it{ n_H}$~ in models with log~N(H~{\sc i})=20.7. We also notice that, for a given {\it N}(H$_2$) and $\it{ n_H}$, the models with lower {\it N}(H~{\sc i}) produce a lower value of the OPR (panel (a) in Fig.~\ref{figbb3e}). {Thus, the observed OPR with 16$\le$log~{\it N}(H$_2$)$\le$18 requires a lower {\it N}(H~{\sc i}) and higher $\it{ n_H}$. This is consistent with what we inferred based on the {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}) ratio.} Detailed observations of individual systems confirm that the components with low OPR are consistent with a low value of {\it N}(H~{\sc i}) (see Table 1 of Srianand et al. 2005). As an example, {\it z}$_{abs}$ = 1.96822 toward Q~0013-004 has the lowest OPR value measured in DLAs (0.64$\pm$0.09) and has log~{\it N}(H$_2$) = 16.77 and log~{\it N}(H~{\sc i})$\le19.43$. We notice that the kinetic temperature is in the range of $40-560$ K for the consistent models. This is slightly higher than the kinetic temperature range (60$-$300 K) derived from the OPR under LTE assumptions (Srianand et al. 2005). We notice that the OPR does not track the kinetic temperature well in the intermediate N(H$_2$) range. A careful investigation of this is presented elsewhere (Shaw et al. 2005). All of our calculations are in qualitative agreement with the OPR seen for log~{\it N}(H$_2$)$>$18 components. \par \subsubsection{N(J=4)/N(J=2) and the radiation field:} Panel (d) of Figs.~\ref{figbb2} and \ref{figbb2e} plots {\it N(J=4)/N(J=2)} as a function of {\it N}(H$_2$) for various log~{\it N}(H~{\sc i}) and $\kappa$. The Solomon process controls these populations since the energy separation between these levels is too large for collisional excitation to be efficient. This ratio indicates $\chi$ when the H$_2$ column density is low (Jura 1975).
For a given {\it N}(H$_2$) with log {\it N}(H$_2$)$\le$ 16.0, the {\it N(J=4)/N(J=2)} ratio is larger for larger (i) $\it{ n_H}$, (ii) $\kappa$, and (iii) {\it N}(H~{\sc i}) (panel (d) in Fig.~\ref{figbb3e}). Apart from the two systems in Ledoux et al. (2003), absorption from the {\it J}=4 level of H$_2$ is not detected. These two measurements and the upper limits for the optically thin systems are consistent with a radiation field as high as $\chi=30$. There is very little difference between the diffuse and stellar continua since the excitation is mainly by electronic line absorption. In the optically thick cases, the {\it J}=4 level is populated mainly by formation pumping. No clear trend is present since formation pumping depends on several quantities. \subsubsection{N(J=2)/N(J=0) and N(J=3)/N(J=1):} The {\it N(J=2)/N(J=0)} ratio is more sensitive to collisional excitation than the population ratios of higher rotational levels. The observed and predicted {\it N(J=2)/N(J=0)} and {\it N(J=3)/N(J=1)} are plotted as a function of {\it N}(H$_2$) in panels (b) and (c) of Figs.~\ref{figbb2}, \ref{figbb2e}, and \ref{figbb3e} respectively. For the intermediate range of {\it N}(H$_2$) the models that reproduce the OPR also roughly reproduce these two ratios. However, they do not explain the observed distribution for log~{\it N}(H$_2$)$\ge18$. It is important to note that the {\it J=2} and {\it J=3} levels are mainly populated by cascades from high {\it J} levels following formation and UV pumping. The grain formation distribution function and grain surface interactions can affect the excitation of these high {\it J} levels. Although fitting the observed results may shed light on gas with a metallicity and dust composition different from those of the Milky Way, such an exercise would divert us from our main theme and is left to future work.
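As an aside, the LTE conversion between the OPR and the kinetic temperature invoked above (e.g., the 60$-$300 K range quoted from Srianand et al. 2005) can be sketched numerically. The following minimal Python sketch is our illustration only; the adopted rotational constant ($B\simeq85.3$ K) and the truncation of the level sum are assumptions of this sketch, not values taken from this paper.

```python
import numpy as np

B_ROT_K = 85.3   # assumed H2 rotational constant, expressed in kelvin

def opr_lte(T, j_max=10):
    """LTE ortho-to-para ratio of H2 at kinetic temperature T (in K).

    Ortho levels (odd J) carry nuclear-spin weight 3, para levels
    (even J) weight 1; level energies are E_J = B * J * (J + 1).
    """
    J = np.arange(j_max + 1)
    w = (2 * J + 1) * np.exp(-B_ROT_K * J * (J + 1) / T)
    return 3.0 * w[1::2].sum() / w[0::2].sum()

# the LTE OPR rises from ~0 at low T toward the high-T value of ~3
print(opr_lte(100.0))
```

Under these assumptions the OPR is a monotonic thermometer at low temperatures, which is why a measured OPR translates into a kinetic temperature range when LTE holds.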
\subsection{{\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}):} \begin{figure} \centerline{\epsfig{file=bbal3.ps,width=9cm,height=8cm}} \caption { The ratio {\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}) as a function of $\chi$ for clouds with log~N(H~{\sc i}) = 20.7. The long-dashed, continuous and short-dashed curves are for $\it{ n_H}$ = 1, 10, and 50 cm$^{-3}$ respectively. As in the other figures, the labels 1, 2, 3, and 4 mark the models with log~$\kappa$ = $-$1.4, $-$1.6, $-$1.8 and $-$2.0 respectively. The thin and thick curves represent the results for the stellar and diffuse cases respectively. \label{bbal3}} \end{figure} { Here we focus on {{\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii})} produced with a stellar radiation field on top of the Bgr (Fig.~\ref{bbal3}). Al~{\sc ii} is mainly ionized by the high-energy photons from the Bgr. In the diffuse case the {{\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii})} ratio depends more on the density than on $\chi$. This is also true for the stellar case when $\chi$ is small. However, the ratio increases with $\chi$ at large values of $\chi$ (see thin curves in Fig.~\ref{bbal3}). A higher $\kappa$ produces a higher {\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}), especially in the case of a high-density gas. The ratio {\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}) is higher for lower {\it N}(H~{\sc i}) for a given $\kappa$ and $\it{ n_H}$. Our calculations predict log~{\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}) to be less than $-$1.5 for the ranges of $\chi$, $\it{ n_H}$, and $\kappa$ that reproduce the observed properties of the H$_2$ components. \par Srianand et al. (2005) have shown that most DLAs with log~{\it N}(H~{\sc i}) $\ge$ 21 show C~{\sc ii$^*$} absorption even when H$_2$ and C~{\sc i} are clearly absent. All these systems also show Al~{\sc iii} absorption with log~{\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}) higher than $-1.8$ (Table 6 of Srianand et al. 2005).
Our model calculations produce log~{\it N}(Al~{\sc iii})/{\it N}(Al~{\sc ii}) higher than $-1.6$ for log~{\it N}(H~{\sc i})$\ge$ 20.7 when $\it{ n_H}$ $\le$ 10 cm$^{-3}$. Clearly, the systems with only C~{\sc ii$^*$} absorption, without H$_2$ and C~{\sc i} absorption, need $\it{ n_H}$$\le$10 cm$^{-3}$ in order to have a {\it N}(Al~{\sc iii})/N(Al~{\sc ii}) ratio at the detected level, since log {\it N}(H~{\sc i}) $\ge$21 in most of these components. {This range in $\it{ n_H}$ will also explain the observed N(C~{\sc ii$^*$})/N(Si~{\sc ii}) in these systems (see Section 4.5).} This implies that C~{\sc ii$^*$} absorption originates in a region of lower density and higher ionization compared to the components that produce H$_2$ and C~{\sc i} absorption. { Lehner et al. (2004) show that a significant fraction of the C~{\sc ii$^*$} absorption detected toward high-latitude lines of sight in our Galaxy originates from the warm ionized medium (WIM). Using the profile coincidence between Al~{\sc ii} and Al~{\sc iii}, together with photoionization models (computed using Cloudy), Wolfe et al. (2004) argue that C~{\sc ii$^*$} in DLAs is unlikely to originate from WIM gas. They argue that a considerable fraction of Al~{\sc iii} can be produced in gas that is by and large neutral (much like the models considered here). For the {\it z}$_{abs}$ = 1.919 system toward Q~2206-19 that shows C~{\sc ii$^*$} without H$_2$ and C~{\sc i} absorption lines, Wolfe et al. (2004) derive a density of 1.6 cm$^{-3}$. Srianand et al. (2005) discuss the profiles of absorption lines from different ionization states (including the 21 cm absorption line) in the case of the {\it z}$_{abs}$ = 1.944 system toward Q~1157+014 and conclude that a considerable fraction of the C~{\sc ii$^*$} absorption originates from gas at lower densities (i.e., $\it{ n_H}$$\simeq$ 1 cm$^{-3}$).
All these are consistent with our conclusion that the density is lower (i.e., $\it{ n_H}$$\le10$ cm$^{-3}$) in the systems that show C~{\sc ii$^*$} absorption without H$_2$ and C~{\sc i} compared to the ones that show H$_2$ absorption (i.e., $\it{ n_H}$ = 10-100 cm$^{-3}$). } \subsection{Other fine-structure lines:} We predict the fine-structure level populations of O~{\sc i} and Si~{\sc ii} in addition to those of C~{\sc i} and C~{\sc ii}. The column densities of both O~{\sc i$^*$} and O~{\sc i$^{**}$} are in the range 2$\times10^{11} - 10^{12}$ cm$^{-2}$ for the range of $\chi$ suggested by the C~{\sc i} and H$_2$ observations (see Table~\ref{table1}). The oscillator strengths of the O~{\sc i$^*$} and O~{\sc i$^{**}$} lines are low ($\sim 4\times10^{-2}$) and these lines are never detected in DLAs. Our calculations also predict N(Si~{\sc ii$^*$}) $<$ 2$\times10^{11}$cm$^{-2}$, consistent with the fact that Si~{\sc ii$^*$} absorption is not detected in DLAs. Owing to their weakness, one may not detect these lines directly even if the absorbing gas has a higher density. However, it may be possible to detect them by co-adding a large number of DLA spectra. {Thus, one possible way to confirm the idea that most DLAs (with or without H$_2$) originate in a high-density gas with star formation is to detect the excited fine-structure lines of O~{\sc i} and Si~{\sc ii} by co-adding many spectra, as one does for metal lines in the Ly$\alpha$ forest}. \subsection{Summary} The main results for a high-density cloud in a stellar radiation field are: \begin{itemize} \item{} The observed properties of DLAs with H$_2$ are consistently reproduced by models with a local radiation field in addition to the QSO-dominated Bgr.
Most of the observations of the H$_2$ components (such as {\it N}(C~{\sc i})/{\it N}(Si~{\sc ii}), {\it N}(C~{\sc i}$^*$)/{\it N}(C~{\sc i}) and {\it N}(C~{\sc ii$^*$})/{\it N}(Si~{\sc ii})) are consistent with lower {\it N}(H~{\sc i}) (i.e., {\it N}(H~{\sc i}) $\simeq10^{20}$ cm$^{-2}$) and higher densities (10 $\le$ $\it{ n_H}$(cm$^{-3}$) $\le$100). The median {\it N}(H~{\sc i}) in DLAs with H$_2$ is $\sim$ 10$^{20.8}$ cm$^{-2}$, so in these systems only a fraction of the total {\it N}(H~{\sc i}) originates in regions with H$_2$. The typical kinetic temperature ranges between 40 and 560 K. \item{}We reproduce the observed range of the OPR in DLAs. The systems that are optically thin in the H$_2$ electronic bands have a lower OPR, suggesting log {\it N}(H~{\sc i})$\simeq$20, consistent with the constraints from atomic species. The OPR $>3$, seen in some of the components with intermediate H$_2$ electronic line optical depths, is produced by the different levels of self-shielding in ortho- and para-H$_2$. The absence of C~{\sc i} and of H$_2$ in {\it J} $\ge$ 4 levels in the case of a few optically thick H$_2$ components is consistent with a higher $\chi$ in these clouds. \item{} Our predictions, the measurements, and the upper limits on the {\it N(J=4)/N(J=2)} ratio in the optically thin H$_2$ components are consistent with a radiation field as high as $\chi=30$. This is consistent with the limits on the radiation field from the atomic species. The absence of {\it J=4} absorption in the optically thick H$_2$ components is consistent with a low rate of formation pumping in these systems. \item{} H$_2$ and C~{\sc i} are not detectable if the radiation field is much higher, irrespective of the model parameters. However, such clouds will be easily detectable in 21 cm absorption, with spin temperatures in the range of 100 to 1000 K. These clouds will also show very strong C~{\sc ii$^*$} absorption.
However, the column density of C~{\sc ii$^*$} will strongly depend on the amount of ionized gas along the line of sight. These systems will also show very strong Al~{\sc iii} absorption. \end{itemize} \section{Discussion and Conclusions:} \subsection{Nature of the radiation field:} Ledoux et al. (2003) show that detectable H$_2$ absorption ({\it N}(H$_2$)$\ge10^{14}~{\rm cm^{-2}}$) is seen in 15-20 per cent of DLAs. We show that the observed properties of these systems are inconsistent with a gas irradiated only by the meta-galactic UV radiation field. Our calculations suggest that these systems originate from a high-density gas ($\ge10$ cm$^{-3}$) irradiated by a moderate diffuse UV radiation field (1 to 30 times that of the Galactic ISM) and indicate ongoing star formation in these systems. The mean radiation field is determined by both the SFR and radiative transport. As the mean dust optical depth in DLAs will be smaller than that of the Galactic ISM, the typical SFR in DLAs with H$_2$ absorption cannot be much larger than that seen in our Galaxy. Even if only such moderate star formation exists in most DLAs, they will still contribute appreciably to the global star formation rate density at higher redshifts (see Wolfe et al. 2003a,b; Srianand et al. 2005; Hirashita \& Ferrara 2005). \par \subsection{Physical state of the H$_2$ gas:} Our calculations with a diffuse radiation field suggest high densities in the H$_2$ gas (i.e., 10 $\le$ $\it{ n_H}$(cm$^{-3}$) $\le100$). The typical temperatures of the clouds that are consistent with the observations are in the range 40 to 560 K. {Our calculations simultaneously explain the H$_2$ abundance and the fine-structure excitations of atomic species without invoking the enhanced H$_2$ formation rate on dust grains required by the analytical models of Hirashita \& Ferrara (2005). } The inferred ranges in temperature and density are consistent with the physical conditions in CNM gas.
We show that if the cloud is irradiated by a diffuse interstellar UV background then {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) can directly probe the density of the gas. { For radiation fields with $\chi\ge10$ the ratio N(C~{\sc ii$^*$})/N(Si~{\sc ii}) will also trace the density of the gas, as photo-heating saturates at higher values of $\chi$.} However, if the cloud is close to the ionizing source, {\it N}(C~{\sc i$^*$})/{\it N}(C~{\sc i}) depends also on $\chi$. The predicted values of N(C~{\sc ii$^*$})/N(Si~{\sc ii}) in the stellar case with low N(H~{\sc i}) (as required by other observations) are much higher than the observed values. Thus, our calculations require that the H$_2$ components in DLAs are ionized by a diffuse radiation field. {Most of the observations are consistently reproduced with N(H~{\sc i}) = 10$^{20}$ cm$^{-2}$. This suggests that only a fraction of the total measured N(H~{\sc i}) is present in the H$_2$ components. } \subsection{DLAs without H$_2$:} {The observations show that systems without detectable H$_2$ (i.e., $\sim 80-85$ per cent of the DLAs) do not show C~{\sc i} absorption and also have very small values of $\kappa$. Roughly 50 per cent of the DLAs show detectable C~{\sc ii$^*$} absorption. Our calculations suggest that the absence of H$_2$, C~{\sc i}, C~{\sc ii$^*$} and 21 cm absorption in a considerable fraction of DLAs could just be a consequence of a low gas density in a moderate radiation field (irrespective of the dust content of the gas). This more or less agrees with the conclusion of Wolfe et al. (2004) that the systems with upper limits on C~{\sc ii$^*$} absorption originate in a warm neutral medium (WNM) (see also Liszt 2002). } \par { The observed N(C~{\sc ii$^*$})/N(Si~{\sc ii}) in systems that show detectable C~{\sc ii$^*$} absorption without H$_2$ and C~{\sc i} is consistent with $\it{ n_H}$$\ge0.1$ cm$^{-3}$ (see Section 3.3.1).
The absence of C~{\sc i} and H$_2$ in these systems can be explained as a consequence of a higher radiation field. Wolfe et al. (2003a,b), in the framework of a stable two-phase medium, argue that most of the C~{\sc ii$^*$} absorption should originate from CNM gas in order to have a reasonable global star-formation rate density. We note that $\it{ n_H}$$<$10 cm$^{-3}$ is needed in systems that show C~{\sc ii$^*$} without H$_2$ and C~{\sc i}, so that the observed N(C~{\sc ii$^*$})/N(Si~{\sc ii}) as well as the ionization state of Al can be consistently reproduced (see Section 4.7). This range is consistent with the one measured by Wolfe et al. (2004) for the {\it z}$_{abs}$ = 1.919 system toward Q~2206-19. Interestingly, the inferred density in these systems is less than that typically required to explain the properties of the H$_2$-detected components (i.e., $\it{ n_H}$$\ge10~cm^{-3}$). Thus, it appears that the systems that show only C~{\sc ii$^*$} originate from lower density gas compared to the ones that also show H$_2$ and C~{\sc i} absorption. } \par { Unlike H$_2$ and C~{\sc i}, an additional radiation field (with h$\nu\le13.6$ eV) cannot suppress 21 cm absorption in the high-density gas. Our calculations with $\it{ n_H}$$\ge1~{\rm cm}^{-3}$ predict a spin temperature in the range of $100-1000$ K for the range of $\kappa$, $\chi$, and {\it N}(H~{\sc i}) typically seen in DLAs. Thus, 21~cm absorption is definitely detectable. Over a redshift range similar to that used by Ledoux et al. (2003) for H$_2$ searches (1.9$\le${\it z}$_{abs}$$\le 3.5$), only two out of eight DLAs show detectable 21 cm absorption (Kanekar \& Chengalur 2003). The rest of these systems have lower limits on the spin temperature in the range of 700-9000 K. Both of the systems with 21 cm absorption also show detectable C~{\sc ii$^*$} absorption.
A detailed investigation of one of these systems (the {\it z}$_{abs}$ = 1.944 system toward Q 1157+014) shows that C~{\sc ii$^*$} originates not only from the gas responsible for the 21 cm absorption but also from other components (Fig. 19 of Srianand et al. 2005). Clearly, C~{\sc ii$^*$} traces a wider range of physical conditions. There are a few systems (e.g., {\it z}$_{abs}$ = 3.387 toward Q~0201+11 and {\it z}$_{abs}$ = 3.063 toward Q 0336-01) that show detectable C~{\sc ii$^*$} absorption without 21 cm absorption. The derived lower limits on the spin temperatures in these systems would imply a very low CNM fraction along the sight lines if the gas covers the background radio source completely (Kanekar \& Chengalur, 2003). In the absence of VLBI observations, the interpretation of these systems will be very subjective (see Wolfe et al. 2003b for details). A careful analysis of Al~{\sc iii}/Al~{\sc ii} and N(C~{\sc ii$^*$})/N(Si~{\sc ii}) in individual components is needed to determine the contribution of ionized gas to the excitation of C~{\sc ii$^*$}. \par Alternatively, one can use the fine-structure state populations of O~{\sc i} and Si~{\sc ii}, which trace C~{\sc ii} very closely. Our calculations also compute the expected fine-structure excitations of O~{\sc i} and Si~{\sc ii}. The expected column densities of O~{\sc i$^*$}, O~{\sc i$^{**}$} and Si~{\sc ii$^*$} are in the range $10^{11}-10^{12}$ cm$^{-2}$ for the range of parameters considered in our calculations. It may be possible to detect these lines using the pixel optical depth techniques that are used to detect metals in the diffuse low-density IGM. Detection of such lines would put stringent constraints on the density in these systems. } \par \subsection{Conclusions:} { In this article, we present calculations that self-consistently determine the gas ionization, level populations (atomic fine-structure levels and rotational levels of H$_2$), grain physics, and chemistry.
We show that for a low-density gas ($\it{ n_H}$$\le$ 0.1 cm$^{-3}$) the meta-galactic UV background due to quasars is sufficient to maintain H$_2$ column densities below the detection limit (i.e., {\it N}(H$_2$)$\le10^{14}$ cm$^{-2}$) irrespective of the metallicity and dust content of the gas. Such a gas will have a 21 cm spin temperature in excess of 7000 K and very low C~{\sc i} and C~{\sc ii$^*$} column densities for the H~{\sc i} column densities typically observed in DLAs. \par Calculations with a high-density gas in the presence of a local radiation field reproduce most of the observations of the H$_2$ components in DLAs. Thus our study clearly confirms the presence of CNM in at least 15-20 per cent of the DLAs. We also show that only a fraction of the total N(H~{\sc i}) is in the H$_2$ components. \par Unlike for the components with H$_2$, the interpretation of systems that show only C~{\sc ii$^*$} is not clear without additional constraints. This is because the presence of free electrons can be more efficient in populating the fine-structure level of C~{\sc ii}. This can lead to a high value of the inferred $\it{ n_H}$~if the electron contribution is neglected. Using Al~{\sc iii} absorption we show that the gas that produces C~{\sc ii$^*$} in systems without H$_2$ has a lower density than the gas in systems with H$_2$ absorption. } \par\noindent \section{Acknowledgements} GJF acknowledges the support of the NSF through AST 03-07720 and of NASA with grant NAG5-65692. GJF and RS acknowledge support from DST/INT/US(NSF-RP0-115)/2002. GS would like to thank the CCS, University of Kentucky, for two years of support. The hospitality of IUCAA is gratefully acknowledged. RS and PPJ gratefully acknowledge support from the Indo-French Centre for the Promotion of Advanced Research (Centre Franco-Indien pour la Promotion de la Recherche Avanc\'ee) under contract No. 3004-3.
\section*{Prologue} \begin{quotation} \textit{What's in a name? That which we call a rose by any other name would smell as sweet.} \textit{Romeo and Juliet, Act 2, Scene 2 (W. Shakespeare)} \end{quotation} Take an egg --preferably a hard-boiled one-- and cut it in half along its middle using a very sharp knife. The surface of section will be roughly circular and have area $\pi r^{2}$. Next, take a new egg of the same size, and cut it this time along a line joining the egg's tops, again as shown in Fig.~1. This time we get an elliptic surface of section with area $\pi R^{2}$ larger than that of the disk we got previously. So far, so good. But if you now take two \emph{symplectic eggs}, and do the same thing, then both sections will have exactly the same area! Even \textquotedblleft worse\textquotedblright, no matter along which plane passing through the center of the egg you cut, you will always get sections having the same area! This is admittedly a very strange property, which you have probably never experienced (at least in a direct way) in everyday life. But what is a symplectic egg? The eggs we are cutting are metaphors for ellipsoids; an ellipsoid is a round ball that has been deformed by a linear transformation of space, \textit{i.e.} a transformation preserving the alignment of three, or more, points. In mathematics such transformations are represented by matrices. Thus the datum of an ellipsoid is the same thing as the datum of a ball and of a matrix. What we call a symplectic egg is an ellipsoid corresponding to the case where the matrix is symplectic (we'll define the concept in a moment). The reason for which the only symplectic egg you have seen on your breakfast table is flat --a fried egg!-- is that the number of rows and columns of a symplectic matrix must always be even. Since we are unable to visualize things in dimension three or more, the only symplectic eggs that are accessible to our perception are two-dimensional. But what is a symplectic matrix?
In the case of smallest dimension two, a matrix \begin{equation} S= \begin{pmatrix} a & b\\ c & d \end{pmatrix} \label{s1} \end{equation} is symplectic if it has determinant one: \begin{equation} ad-bc=1.\label{adbc} \end{equation} In higher dimensions, 4, 6, 8, etc., there are many more conditions: for instance 10 if the dimension is 4, 21 if it is 6, and $n(2n+1)$ if it is $2n$. We will write these conditions explicitly in section \ref{sec11}. So far, so good. But where do symplectic eggs come from, and what are they good for? Let me first tell you where symplectic matrices come from. They initially come from the study of the motion of celestial bodies, which is really rich in mathematical concepts, some of these going back to the observations of Tycho Brahe, and the work of Galileo Galilei and Johannes Kepler (these were the \textquotedblleft Giants\textquotedblright\ on the shoulders of which Isaac Newton stood). But the notion of symplectic matrix, or more generally that of symplectic transformation, really had a long time to wait before it appeared explicitly and was recognized as a fundamental concept. It was implicit in the work of Hamilton and Lagrange on classical and celestial mechanics, until the word \textquotedblleft symplectic\textquotedblright\ was finally coined by the mathematician Hermann Weyl in his book \emph{The Classical Groups}, edited in 1939, just before World War II. But still then, as Ian Stewart \cite{stew} reminds us, it was a rather baffling oddity which presumably existed for some purpose --but which? It was only later agreed that the purpose of symplectic transformations is dynamics, that is the study of \textit{motion}.
Let me explain this a little bit more in detail: if we have a physical system consisting of \textquotedblleft particles\textquotedblright\ (sand corns, planets, spacecraft, or quarks) it is economical from both a notational and computational point of view to describe their motion (that is, their instantaneous location and velocity) by specifying a phase space vector, which is a matrix having only one column. For instance, if we are dealing with one single particle with coordinates $(x,y,z)$ and momentum $(p_{x},p_{y},p_{z})$ (the momentum of a particle is just its velocity multiplied by its mass $m$) the phase space vector will be the column vector whose entries are $(x,y,z,p_{x},p_{y},p_{z})$. If we have a large number $N$ of particles with coordinates $(x_{i},y_{i},z_{i})$ and momenta $(p_{x_{i}},p_{y_{i}},p_{z_{i}})$ the phase space vector will be obtained by first writing all the position coordinates and thereafter, in corresponding order, the momenta. These vectors form the phase space of our system of particles. It turns out that the knowledge of a certain function, the Hamiltonian (or energy) function, allows us to both predict and retrodict the motion of our particles; this is done by solving (exactly, or numerically) the Hamilton equations of motion, which are in the case $n=1$ given by \begin{equation} \frac{dx}{dt}=\frac{\partial H}{\partial p}\text{ \ , \ }\frac{dp}{dt}=-\frac{\partial H}{\partial x}. \label{Ham1} \end{equation} Mathematically these equations are just a fancy way to write Newton's second law $F=ma$. That is, knowing exactly the positions and the momenta at some initial time, we are able to know what these are going to be at any future time (we can actually also calculate what they were in the past). The surprising, and for us very welcome, fact is that the transformation which takes the initial configuration to the final configuration is always a symplectic transformation!
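To make this concrete, here is a minimal numerical sketch (our illustration, not part of the original text) for a unit-mass harmonic oscillator with $H=(p^{2}+x^{2})/2$: Hamilton's equations (\ref{Ham1}) give $\dot{x}=p$, $\dot{p}=-x$, so the time-$t$ flow is a rotation of the phase plane, and one can check directly that this rotation is a symplectic matrix.

```python
import numpy as np

# standard symplectic matrix for one degree of freedom
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def oscillator_flow(t):
    """Time-t flow of H = (p**2 + x**2)/2, acting on (x, p) vectors.

    Solving dx/dt = p, dp/dt = -x gives a rotation of the phase plane:
    x(t) = x0 cos t + p0 sin t,  p(t) = -x0 sin t + p0 cos t.
    """
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

S = oscillator_flow(0.7)
# the flow satisfies S^T J S = J (in dimension 2: det S = 1)
print(np.allclose(S.T @ J @ S, J))   # True
```

The same check works for the flow of any quadratic Hamiltonian; only the explicit matrix changes.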
These act on the phase vectors, and once this action is known, we can determine the future of the whole system of particles, and this at any moment (mathematicians would say we are in the presence of a \textquotedblleft phase space flow\textquotedblright). The relation between symplectic transformations and symplectic matrices is that we can associate a symplectic matrix to every symplectic transformation: it is just the Jacobian matrix of that transformation. In the simplest cases, for instance when no external forces act on the particles, these matrices are themselves the symplectic transformations. The symplectic egg is a special case of a deep mathematical theorem discovered in 1985 by the mathematician Gromov \cite{Gromov}, who won the Abel Prize in 2009 for his discovery (the Abel Prize is the \textquotedblleft true\textquotedblright\ substitute for the Nobel Prize in mathematics, as opposed to the Fields medal, which is intended for mathematicians under 40). Gromov's theorem is nicknamed the \textquotedblleft principle of the symplectic camel\textquotedblright\ \cite{arnold,FP,stew}, and it tells us that it is impossible to squeeze a symplectic egg through a hole in a plane of \textquotedblleft conjugate coordinates\textquotedblright\ if its radius is larger than that of the hole. That one can do that with an ordinary (uncooked) egg is easy to demonstrate in your kitchen: put it into a cup of vinegar (Coca Cola will do as well) for 24 hours. You will then be able to squeeze that egg through the neck of a bottle without any effort! The marvelous thing with the symplectic egg is that it contains quantum mechanics in a nutshell... er ... an eggshell! Choose for radius the square root of Planck's constant $h$ divided by $2\pi$. Then each surface of section will have area $h/2$. In \cite{goletta,physlett,Birk,hileyfest} I have called such a tiny symplectic egg a \textit{quantum blob}.
It is possible --and in fact quite easy if you know the rules of the game-- to show that this is equivalent to the uncertainty principle of quantum mechanics. The thing to remember here is that a classical property (\emph{i.e.} a property involving usual motions, as that of planets for instance), here symbolized by the symplectic egg, contains as an imprint quantum mechanics! The analogy between \textquotedblleft classical\textquotedblright\ and \textquotedblleft quantum\textquotedblright\ can actually be pushed much further, as I have shown with Basil Hiley \cite{gohi}. But this, together with the notion of emergence \cite{gotriad}, is another story. Some of the ideas presented here are found in our \textit{Physics Reports} paper \cite{physreps} with F. Luef; they are developed and completed here in a different way more accessible to a general audience. \section{Notation and terminology} Position and momentum vectors will be written as column vectors \[ \begin{pmatrix} x_{1}\\ \vdots\\ x_{n} \end{pmatrix} \text{ \ \ and \ \ } \begin{pmatrix} p_{1}\\ \vdots\\ p_{n} \end{pmatrix} \] and the corresponding phase vector is thus \[ \begin{pmatrix} x\\ p \end{pmatrix} =(x,p)^{T}=(x_{1},...,x_{n};p_{1},...,p_{n})^{T} \] where the superscript $^{T}$ indicates transposition. The integer $n$ is unspecified; we will call it the number of degrees of freedom. If the vector $(x,p)^{T}$ denotes the phase vector of a system of $N$ particles, then $n=3N$ and the numbers $x_{1},x_{2},x_{3}$ (resp. $p_{1},p_{2},p_{3}$) can be identified with the positions $x,y,z$ (resp. the momenta $p_{x},p_{y},p_{z}$) of the first particle, $x_{4},x_{5},x_{6}$ (resp. $p_{4},p_{5},p_{6}$) with those of the second particle, and so on. This is not the only possible convention, but our choice has the advantage of making formulas involving symplectic matrices particularly simple and tractable.
For instance, the \textquotedblleft standard symplectic matrix\textquotedblright\ is here $J= \begin{pmatrix} 0 & I_{\mathrm{d}}\\ -I_{\mathrm{d}} & 0 \end{pmatrix} $ where $I_{\mathrm{d}}$ is the $n\times n$ identity matrix and $0$ the $n\times n$ zero matrix. Note that \begin{equation} J^{2}=-I_{\mathrm{d}}\text{ \ , \ }J^{T}=J^{-1}=-J. \label{J} \end{equation} \section{The Symplectic Egg\label{sec1}} \subsection{Symplectic matrices\label{sec11}} Let $S$ be a (real) matrix of size $2n$. We say that $S$ is a symplectic matrix if it satisfies the condition \begin{equation} S^{T}JS=J. \label{stjs} \end{equation} Clearly the standard symplectic matrix $J$ is itself a symplectic matrix. Assume that we write the matrix $S$ in block form \begin{equation} S= \begin{pmatrix} A & B\\ C & D \end{pmatrix} \label{sn} \end{equation} where $A,B,C,D$ are matrices of size $n$. It is a simple exercise in matrix algebra to show that condition (\ref{stjs}) is equivalent to the following constraints on the blocks $A,B,C,D$: \begin{equation} A^{T}C=C^{T}A\text{ \ , \ }B^{T}D=D^{T}B\text{ \textit{, and }}A^{T}D-C^{T}B=I_{\mathrm{d}}. \label{ABDC} \end{equation} Notice that the two first conditions mean that both $A^{T}C$ and $B^{T}D$ \textit{are symmetric}. Observe that these conditions collapse to (\ref{adbc}) when $n=1$: in this case $A,B,C,D$ are the numbers $a,b,c,d$, so that $A^{T}C=ac$ and $B^{T}D=bd$ are automatically symmetric; the condition $A^{T}D-C^{T}B=I_{\mathrm{d}}$ reduces to $ad-bc=1$. The product of two symplectic matrices is a symplectic matrix: if $S$ and $S^{\prime}$ satisfy (\ref{stjs}) then $(SS^{\prime})^{T}JSS^{\prime }=S^{\prime T}(S^{T}JS)S^{\prime}=S^{\prime T}JS^{\prime}=J$. Also, symplectic matrices are invertible, and their inverses are symplectic as well: first, taking the determinant of both sides of $S^{T}JS=J$ we get $\det(S^{T}JS)=\det J$; since $\det J=1$ this yields $(\det S)^{2}=1$, hence $S$ is indeed invertible.
Knowing this, we rewrite $S^{T}JS=J$ as $JS=(S^{-1})^{T}J$, from which it follows that $(S^{-1})^{T}JS^{-1}=JSS^{-1}=J$, hence $S^{-1}$ is symplectic. The symplectic matrices of same size thus form a group, called the symplectic group and denoted by $\operatorname*{Sp}(2n)$. An interesting property is that the symplectic group is closed under transposition: if $S$ is a symplectic matrix, then so is $S^{T}$ (to see this, just take the inverse of the equality $(S^{-1})^{T}JS^{-1}=J$). Since this means that a matrix is symplectic if and only if its transpose is, inserting $S^{T}$ in (\ref{stjs}) and noting that $(S^{T})^{T}=S$ we get the condition
\begin{equation}
SJS^{T}=J. \label{sjst}
\end{equation}
Replacing $S=\begin{pmatrix} A & B\\ C & D \end{pmatrix}$ with $S^{T}=\begin{pmatrix} A^{T} & C^{T}\\ B^{T} & D^{T} \end{pmatrix}$, the conditions (\ref{ABDC}) are thus equivalent to the set of conditions
\begin{equation}
AB^{T}=BA^{T}\text{ \ , \ }CD^{T}=DC^{T}\text{ \ , \ }AD^{T}-BC^{T}=I_{\mathrm{d}}. \label{ad}
\end{equation}
One can obtain other equivalent sets of conditions by using the fact that $S^{-1}$ and $(S^{-1})^{T}$ are symplectic (see \cite{Birk}). It is very interesting to note that the inverse of a symplectic matrix is
\begin{equation}
S^{-1}=
\begin{pmatrix}
D^{T} & -B^{T}\\
-C^{T} & A^{T}
\end{pmatrix}
. \label{sinv}
\end{equation}
It is interesting because this formula is very similar to the one giving the inverse $\begin{pmatrix} d & -b\\ -c & a \end{pmatrix}$ of a $2\times2$ matrix $\begin{pmatrix} a & b\\ c & d \end{pmatrix}$ with determinant one. The inversion formula (\ref{sinv}) suggests that in a sense symplectic matrices try very hard to mimic the behavior of $2\times2$ matrices. We will see that this is actually the essence of symplectic geometry, and at the origin of the symplectic egg property!
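The inversion formula (\ref{sinv}) can likewise be checked numerically; in this Python sketch the sample matrix $S=JT$ (the product of $J$ with a symplectic shear) and the helper functions are again ad hoc illustrative choices:

```python
# Check of the inversion formula (sinv): if S is symplectic with n x n blocks
# A, B, C, D, then S^{-1} = [[D^T, -B^T], [-C^T, A^T]].
# Illustrative example: S = J * T with T a symplectic shear (n = 2).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def neg(X):
    return [[-v for v in row] for row in X]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) <= tol
               for i in range(len(X)) for j in range(len(X[0])))

def block(A, B, C, D):
    return [a + b for a, b in zip(A, B)] + [c + d for c, d in zip(C, D)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]
J = block(Z2, I2, neg(I2), Z2)
T = block(I2, [[1.0, 2.0], [2.0, 3.0]], Z2, I2)  # shear with symmetric B
S = matmul(J, T)                                  # product of symplectic matrices

# extract the blocks of S and apply formula (sinv)
A = [row[:2] for row in S[:2]]; B = [row[2:] for row in S[:2]]
C = [row[:2] for row in S[2:]]; D = [row[2:] for row in S[2:]]
Sinv = block(transpose(D), neg(transpose(B)), neg(transpose(C)), transpose(A))

I4 = block(I2, Z2, Z2, I2)
assert close(matmul(S, Sinv), I4) and close(matmul(Sinv, S), I4)
```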
One last property of symplectic matrices: recall that when we wanted to show that a symplectic matrix is always invertible, we established the identity $(\det S)^{2}=1$. From this it follows that the determinant of a symplectic matrix is a priori either $1$ or $-1$. It turns out --but there is no really elementary proof of this-- that we always have $\det S=1$ (see for instance \S 2.1.1 in \cite{Birk}, where I give one proof of this property; Mackey and Mackey's online paper \cite{mack} gives a nice discussion of several distinct methods for proving that symplectic matrices have determinant one). Conversely, it is not true that any $2n\times2n$ matrix with determinant one is symplectic when $n>1$. Consider for instance
\begin{equation}
M=
\begin{pmatrix}
a & 0 & 0 & 0\\
0 & 1/a & 0 & 0\\
0 & 0 & a & 0\\
0 & 0 & 0 & 1/a
\end{pmatrix}
\label{konter0}
\end{equation}
where $a\neq0$; this matrix trivially has determinant one, but the condition $AD^{T}-BC^{T}=I_{\mathrm{d}}$ in (\ref{ad}) is clearly violated unless $a=\pm1$. Another simple example is provided by
\[
M=
\begin{pmatrix}
R(\alpha) & 0\\
0 & R(\beta)
\end{pmatrix}
\]
where $R(\alpha)$ and $R(\beta)$ are rotation matrices with angles $\alpha\neq\beta$ (this counterexample generalizes to an arbitrary number $2n$ of phase space dimensions).

\subsection{The first Poincar\'{e} invariant}

In what follows $\gamma(t)$, $0\leq t\leq2\pi$, is a loop in phase space: we have $\gamma(t)=\begin{pmatrix} x(t)\\ p(t) \end{pmatrix}$ where $x(0)=x(2\pi)$, $p(0)=p(2\pi)$; the functions $x(t)$ and $p(t)$ are supposed to be continuously differentiable. By definition, the first Poincar\'{e} invariant associated to $\gamma(t)$ is the integral
\begin{equation}
I(\gamma)=\oint\nolimits_{\gamma}pdx=\int_{0}^{2\pi}p(t)^{T}\dot{x}(t)dt. \label{firstpoinc}
\end{equation}
The fundamental property --from which almost everything else in this paper stems-- is that $I(\gamma)$ is a symplectic invariant.
By this we mean that if we replace the loop $\gamma(t)$ by a new loop $S\gamma(t)$, where $S$ is a symplectic matrix, the first Poincar\'{e} invariant will keep the same value: $I(S\gamma)=I(\gamma)$, that is
\begin{equation}
\oint\nolimits_{\gamma}pdx=\oint\nolimits_{S\gamma}pdx. \label{pdx}
\end{equation}
The proof is not very difficult if we carefully use the relations characterizing symplectic matrices (see Arnol'd \cite{Arnold}, \S 44, p.239, for a shorter but more abstract proof). We will first need a differentiation rule for vector-valued functions, generalizing the product formula $d(uv)/dt=u(dv/dt)+v(du/dt)$ from elementary calculus. Suppose that
\[
u(t)=
\begin{pmatrix}
u_{1}(t)\\
\vdots\\
u_{n}(t)
\end{pmatrix}
\text{ \ , \ }v(t)=
\begin{pmatrix}
v_{1}(t)\\
\vdots\\
v_{n}(t)
\end{pmatrix}
\]
are vectors depending on the variable $t$ and such that each component $u_{j}(t)$, $v_{j}(t)$ is differentiable. Let $M$ be a symmetric matrix of size $n$ and consider the real-valued function $u(t)^{T}Mv(t)$. That function is differentiable as well and its derivative is given by the formula
\begin{equation}
\frac{d}{dt}\left[ u(t)^{T}Mv(t)\right] =\dot{u}(t)^{T}Mv(t)+u(t)^{T}M\dot{v}(t) \label{diff}
\end{equation}
(we are writing $\dot{u},\dot{v}$ for $du/dt$, $dv/dt$, as is customary in mechanics); for a proof I refer you to your favorite calculus book. Let us now go back to the proof of the symplectic invariance of the first Poincar\'{e} invariant. We write as usual the symplectic matrix $S$ in block form $\begin{pmatrix} A & B\\ C & D \end{pmatrix}$, so that the loop $S\gamma(t)$ is parametrized by
\[
S\gamma(t)=
\begin{pmatrix}
Ax(t)+Bp(t)\\
Cx(t)+Dp(t)
\end{pmatrix}
\text{ \ , \ }0\leq t\leq2\pi.
\]
We thus have, by definition of the Poincar\'{e} invariant,
\[
I(S\gamma)=\int_{0}^{2\pi}(Cx(t)+Dp(t))^{T}(A\dot{x}(t)+B\dot{p}(t))dt;
\]
expanding the product in the integrand, we have $I(S\gamma)=I_{1}+I_{2}$ where
\begin{align*}
I_{1} & =\int_{0}^{2\pi}x(t)^{T}C^{T}A\dot{x}(t)dt+\int_{0}^{2\pi}p(t)^{T}D^{T}B\dot{p}(t)dt\\
I_{2} & =\int_{0}^{2\pi}x(t)^{T}C^{T}B\dot{p}(t)dt+\int_{0}^{2\pi}p(t)^{T}D^{T}A\dot{x}(t)dt.
\end{align*}
We claim that $I_{1}=0$. Recall that $C^{T}A$ and $D^{T}B$ are symmetric in view of the first two equalities in (\ref{ABDC}); applying the differentiation formula (\ref{diff}) with $u=v=x$ we have
\begin{align*}
\int_{0}^{2\pi}x(t)^{T}C^{T}A\dot{x}(t)dt & =\frac{1}{2}\int_{0}^{2\pi}\frac{d}{dt}(x(t)^{T}C^{T}Ax(t))dt\\
& =\frac{1}{2}\left[ x(2\pi)^{T}C^{T}Ax(2\pi)-x(0)^{T}C^{T}Ax(0)\right] \\
& =0
\end{align*}
because $x(0)=x(2\pi)$. Likewise, applying (\ref{diff}) with $u=v=p$ we get
\[
\int_{0}^{2\pi}p(t)^{T}D^{T}B\dot{p}(t)dt=0
\]
hence $I_{1}=0$ as claimed. We next consider the term $I_{2}$. Rewriting the integrand of the first integral as
\[
x(t)^{T}C^{T}B\dot{p}(t)=\dot{p}(t)^{T}B^{T}Cx(t)
\]
(because it is a number, and hence equal to its own transpose!) we have
\[
I_{2}=\int_{0}^{2\pi}\dot{p}(t)^{T}B^{T}Cx(t)dt+\int_{0}^{2\pi}p(t)^{T}D^{T}A\dot{x}(t)dt
\]
that is, since $D^{T}A=I_{\mathrm{d}}+B^{T}C$ by transposition of the third equality in (\ref{ABDC}),
\[
I_{2}=\int_{0}^{2\pi}p(t)^{T}\dot{x}(t)dt+\int_{0}^{2\pi}\left[ p(t)^{T}B^{T}C\dot{x}(t)+\dot{p}(t)^{T}B^{T}Cx(t)\right] dt.
\]
Using again the product rule and noting that the first integral is precisely $I(\gamma)$ we get
\[
I_{2}=I(\gamma)+\int_{0}^{2\pi}\frac{d}{dt}\left[ p(t)^{T}B^{T}Cx(t)\right] dt.
\]
The equality $I(S\gamma)=I(\gamma)$ follows noting that the integral in the right-hand side is
\[
p(2\pi)^{T}B^{T}Cx(2\pi)-p(0)^{T}B^{T}Cx(0)=0
\]
since $(x(2\pi),p(2\pi))=(x(0),p(0))$.
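Readers who prefer to see the invariance at work numerically can approximate both loop integrals by a discrete sum; the loop and the shear in the following Python sketch are arbitrary illustrative choices for $n=2$:

```python
# Numerical illustration of I(S gamma) = I(gamma): the loop integral of p dx
# over a closed curve equals that over its image by a symplectic shear.
import math

def loop_integral(loop, N=20000):
    # discrete (trapezoid-style) approximation of the closed integral of p . dx
    total = 0.0
    for k in range(N):
        x0, p0 = loop(2 * math.pi * k / N)
        x1, p1 = loop(2 * math.pi * (k + 1) / N)
        total += sum(0.5 * (a + b) * (d - c)
                     for a, b, c, d in zip(p0, p1, x0, x1))
    return total

def gamma(t):  # a closed loop: x(0) = x(2*pi), p(0) = p(2*pi)
    return [math.cos(t), 2.0 * math.cos(t)], [math.sin(t), 0.0]

Bmat = [[1.0, 2.0], [2.0, 3.0]]  # symmetric, so S = [[I, B], [0, I]] is symplectic

def S_gamma(t):  # image of the loop under the shear: x -> x + B p, p -> p
    x, p = gamma(t)
    return [x[i] + sum(Bmat[i][j] * p[j] for j in range(2)) for i in range(2)], p

I1 = loop_integral(gamma)
I2 = loop_integral(S_gamma)
assert abs(I1 - (-math.pi)) < 1e-6  # exact value of I(gamma) for this loop
assert abs(I1 - I2) < 1e-6          # symplectic invariance
```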
The observant reader will have noticed that we really needed all of the properties of a symplectic matrix contained in the set of conditions (\ref{ABDC}); this shows that the symplectic invariance of the first Poincar\'{e} invariant is a characteristic property of symplectic matrices.

\subsection{Proof of the symplectic egg property\label{subsec3}}

Let us denote by $B_{R}$ the phase space ball centered at the origin and having radius $R$. It is the set of all points $z=(x,p)$ such that $|z|^{2}=|x|^{2}+|p|^{2}\leq R^{2}$. What we call a \textquotedblleft symplectic egg\textquotedblright\ is the image $S(B_{R})$ of $B_{R}$ by a symplectic matrix $S$. It is thus an ellipsoid in phase space, consisting of all points $z$ such that $S^{-1}z$ is in the ball $B_{R}$, that is $|S^{-1}z|^{2}\leq R^{2}$. Using formula (\ref{sinv}) giving the inverse of $S=\begin{pmatrix} A & B\\ C & D \end{pmatrix}$, together with the relations $A^{T}C=C^{T}A$, $B^{T}D=D^{T}B$ in (\ref{ABDC}), we get after some easy calculations the explicit expression
\[
x^{T}(CC^{T}+DD^{T})x-2x^{T}(DB^{T}+CA^{T})p+p^{T}(AA^{T}+BB^{T})p\leq R^{2}
\]
(don't worry: we will not have to use this cumbersome inequality in what follows!). Let us now cut $S(B_{R})$ by a plane $\Pi_{j}$ of conjugate coordinates $x_{j},p_{j}$. We get an elliptic surface $\Gamma_{j}$, whose boundary is an ellipse denoted by $\gamma_{j}$. Since that ellipse lies in the plane $\Pi_{j}$ we can parametrize it by only specifying the coordinates $x_{j}(t)$, $p_{j}(t)$, all the others being identically zero; relabeling the coordinates if necessary, we may as well assume that $j=1$, so that the curve can be parametrized as follows:
\[
\gamma_{1}(t)=(x_{1}(t),0,\cdots,0;p_{1}(t),0,\cdots,0)^{T}
\]
for $0\leq t\leq2\pi$, with $x_{1}(0)=x_{1}(2\pi)$ and $p_{1}(0)=p_{1}(2\pi)$.
Since $x_{k}(t)=0$ and $p_{k}(t)=0$ for $k>1$ the area of the ellipse is given by the formula
\begin{align*}
\operatorname*{Area}(\Gamma_{1}) & =\int_{0}^{2\pi}p_{1}(t)\dot{x}_{1}(t)dt\\
& =\sum_{k=1}^{n}\int_{0}^{2\pi}p_{k}(t)\dot{x}_{k}(t)dt\\
& =\oint\nolimits_{\gamma_{1}}pdx
\end{align*}
hence $\operatorname*{Area}(\Gamma_{1})=I(\gamma_{1})$. Since the inverse matrix $S^{-1}$ is symplectic, we have $I(\gamma_{1})=I(S^{-1}\gamma_{1})$. But the loop $S^{-1}\gamma_{1}$ bounds a section of the ball $B_{R}$ by a plane (the plane $S^{-1}\Pi_{1}$) passing through its center. This loop is thus a great circle of $B_{R}$, and the area of the surface $S^{-1}\Gamma_{1}$ is exactly $\pi R^{2}$, which was to be proven. We urge the reader to notice that the assumption that we are cutting $S(B_{R})$ with a plane of \emph{conjugate} coordinates is \emph{essential}, because it is this assumption that allowed us to identify the area of the section with action. Here is a counterexample which shows that the property does not hold for arbitrary sections of $S(B_{R})$. Take, for instance
\begin{equation}
S=
\begin{pmatrix}
\sqrt{\lambda_{1}} & 0 & 0 & 0\\
0 & \sqrt{\lambda_{2}} & 0 & 0\\
0 & 0 & 1/\sqrt{\lambda_{1}} & 0\\
0 & 0 & 0 & 1/\sqrt{\lambda_{2}}
\end{pmatrix}
\text{ \ \ , \ }\lambda_{1}>0\text{, }\lambda_{2}>0,\text{ and }\lambda_{1}\neq\lambda_{2} \label{konter}
\end{equation}
so that $S(B_{R})$ is defined by the inequality
\[
\frac{1}{\lambda_{1}}x_{1}^{2}+\frac{1}{\lambda_{2}}x_{2}^{2}+\lambda_{1}p_{1}^{2}+\lambda_{2}p_{2}^{2}\leq R^{2}.
\]
The section of $S(B_{R})$ by the $x_{1},p_{1}$ plane is the ellipse
\[
\frac{1}{\lambda_{1}}x_{1}^{2}+\lambda_{1}p_{1}^{2}\leq R^{2}
\]
which has area $\pi R^{2}\sqrt{\lambda_{1}}\sqrt{1/\lambda_{1}}=\pi R^{2}$, as predicted; but its section with the $x_{1},p_{2}$ plane is the ellipse
\[
\frac{1}{\lambda_{1}}x_{1}^{2}+\lambda_{2}p_{2}^{2}\leq R^{2}
\]
which has area $\pi R^{2}\sqrt{\lambda_{1}/\lambda_{2}}$, which is different from $\pi R^{2}$ since $\lambda_{1}\neq\lambda_{2}$. The assumption that $S$ is symplectic is also essential. Assume that we scramble the diagonal entries of the matrix $S$ above in the following way:
\[
S^{\prime}=
\begin{pmatrix}
\sqrt{\lambda_{1}} & 0 & 0 & 0\\
0 & \sqrt{\lambda_{2}} & 0 & 0\\
0 & 0 & 1/\sqrt{\lambda_{2}} & 0\\
0 & 0 & 0 & 1/\sqrt{\lambda_{1}}
\end{pmatrix}
.
\]
The matrix $S^{\prime}$ still has determinant one, but it is not symplectic (cf. (\ref{konter0})). The section of $S^{\prime}(B_{R})$ by the $x_{1},p_{1}$ plane is now the ellipse
\[
\frac{1}{\lambda_{1}}x_{1}^{2}+\lambda_{2}p_{1}^{2}\leq R^{2}
\]
with area $\pi R^{2}\sqrt{\lambda_{1}/\lambda_{2}}\neq\pi R^{2}$.

\section{The Symplectic Camel}

The property of the symplectic camel is a generalization of the property of the symplectic egg to arbitrary canonical transformations; it reduces to the latter in the linear case.

\subsection{Gromov's non-squeezing theorem: static formulation}

As we mentioned in the Prologue, the property of the symplectic egg is related to the \textquotedblleft non-squeezing theorem\textquotedblright\ of Gromov \cite{Gromov} in 1985. To understand it fully we have to introduce the notion of canonical transformation \cite{Arnold,Goldstein}.
A canonical transformation is an invertible infinitely differentiable mapping
\[
f:
\begin{pmatrix}
x\\
p
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
x^{\prime}\\
p^{\prime}
\end{pmatrix}
\]
of phase space on itself whose inverse $f^{-1}$ is also infinitely differentiable and such that its Jacobian matrix
\[
f^{\prime}(x,p)=\frac{\partial(x^{\prime},p^{\prime})}{\partial(x,p)}
\]
is symplectic at every point $(x,p)$. A symplectic matrix $S=\begin{pmatrix} A & B\\ C & D \end{pmatrix}$ automatically generates a linear canonical transformation by letting it act on phase space vectors: $\begin{pmatrix} x\\ p \end{pmatrix}\longrightarrow S\begin{pmatrix} x\\ p \end{pmatrix}$ is an invertible transformation (because symplectic matrices are invertible), trivially infinitely differentiable, and its Jacobian matrix is the matrix $S$ itself. Phase space translations, that is mappings $\begin{pmatrix} x\\ p \end{pmatrix}\longrightarrow\begin{pmatrix} x+x_{0}\\ p+p_{0} \end{pmatrix}$, are also canonical: their Jacobian matrix is just the identity $\begin{pmatrix} I_{\mathrm{d}} & 0\\ 0 & I_{\mathrm{d}} \end{pmatrix}$. By composing linear canonical transformations and translations one obtains the class of all affine canonical transformations. Here is an example of a nonlinear canonical transformation: assume that $n=1$ and denote the phase space variables by $r$ and $\varphi$ instead of $x$ and $p$; the transformation defined by $(r,\varphi)\longrightarrow(x,p)$ with
\[
x=\sqrt{2r}\cos\varphi\text{ \ , \ }p=\sqrt{2r}\sin\varphi\text{ \ , \ }0\leq\varphi<2\pi,
\]
has Jacobian matrix
\[
f^{\prime}(r,\varphi)=
\begin{pmatrix}
\frac{1}{\sqrt{2r}}\cos\varphi & \frac{1}{\sqrt{2r}}\sin\varphi\\
-\sqrt{2r}\sin\varphi & \sqrt{2r}\cos\varphi
\end{pmatrix}
\]
which has determinant one for every choice of $r$ and $\varphi$.
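The determinant claim is easy to confirm with finite differences; in this Python sketch the step size and the sample points $(r,\varphi)$ are arbitrary illustrative choices:

```python
# Finite-difference check that the map (r, phi) -> (x, p) with
# x = sqrt(2r) cos(phi), p = sqrt(2r) sin(phi) has Jacobian determinant 1
# (for n = 1, symplectic is the same as determinant one).
import math

def f(r, phi):
    s = math.sqrt(2.0 * r)
    return s * math.cos(phi), s * math.sin(phi)

def jac_det(r, phi, h=1e-6):
    # central-difference columns of the Jacobian
    dr = [(a - b) / (2 * h) for a, b in zip(f(r + h, phi), f(r - h, phi))]
    dphi = [(a - b) / (2 * h) for a, b in zip(f(r, phi + h), f(r, phi - h))]
    # det of the 2x2 matrix [[dx/dr, dx/dphi], [dp/dr, dp/dphi]]
    return dr[0] * dphi[1] - dphi[0] * dr[1]

for r, phi in [(0.5, 0.3), (2.0, 1.1), (7.5, 5.0)]:
    assert abs(jac_det(r, phi) - 1.0) < 1e-6
```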
The transformation $f$ is thus canonical, and can be extended without difficulty to the multi-dimensional case by associating a similar transformation to each pair $(x_{j},p_{j})$. It is in fact a symplectic version of the usual passage to polar coordinates (the reader can verify that the latter is not canonical by calculating its Jacobian matrix); it can also be viewed as the simplest example of action-angle variables \cite{Arnold,Goldstein}; for instance, it reduces the isotropic harmonic oscillator Hamiltonian $H=\frac{1}{2}(p^{2}+x^{2})$ to $K=r$. We will see in a moment why canonical transformations play such an important role in Physics (and especially in classical mechanics), but let us first state Gromov's theorem:

\begin{description}
\item[Gromov's theorem:] \emph{No canonical transformation can squeeze a ball }$B_{R}$\emph{ through a circular hole in a plane }$\Pi_{j}$ \emph{of conjugate coordinates }$x_{j},p_{j}$ \emph{with smaller radius }$r<R$.
\end{description}

This statement is surprisingly simple, and one can wonder why it took so long to discover it. There are many possible answers. The most obvious is that all known proofs of Gromov's theorem are extremely difficult, and make use of highly non-trivial techniques from various parts of pure mathematics, so the result cannot be easily derived from elementary principles. Another reason is that it seems, as we will discuss below, to contradict the common conception of Liouville's theorem, and was therefore unsuspected! So, what is the relation of Gromov's theorem with our symplectic eggs, and where does its nickname \textquotedblleft principle of the symplectic camel\textquotedblright\ come from? The denomination apparently appeared for the first time in Arnol'd's paper \cite{arnold}. Recall that in \cite{Matthew} it is stated that:

\begin{quote}
`...\emph{Then Jesus said to his disciples, `Amen, I say to you, it will be hard for one who is rich to enter the kingdom of heaven.
Again I say to you, it is easier for a camel to pass through the eye of a needle than for one who is rich to enter the kingdom of God}'.
\end{quote}

\noindent The biblical camel is here the ball $B_{R}$, and the eye of the needle is the hole in the $x_{j},p_{j}$ plane! (For alternative interpretations of the word \textquotedblleft camel\textquotedblright, see the reader's comments following E. Samuel Reich's New Scientist paper \cite{Reich} about \cite{FP}.) Let us next show that the section property of the symplectic egg is indeed a linear (or affine) version of Gromov's theorem. It is equivalent to prove that no symplectic egg $S(B_{R})$ with radius $R$ larger than the radius $r$ of the hole in the $x_{j},p_{j}$ plane can be threaded through that hole. Passing $S(B_{R})$ through the hole means that the section of the symplectic egg by the $x_{j},p_{j}$ plane, which has area $\pi R^{2}$, is not larger than the area $\pi r^{2}$ of the hole; hence we must have $R\leq r$.

\subsection{Dynamical interpretation}

The reason canonical transformations play an essential role in Physics comes from the fact that Hamiltonian phase flows precisely consist of canonical transformations. Consider a particle with mass $m$ moving along the $x$-axis under the action of a scalar potential $V$. The particle is subject to a force $F=-\frac{d}{dx}V(x)$. Since $F=mdv/dt=dp/dt$ (Newton's second law), the equations of motion can be written
\begin{equation}
m\frac{dx}{dt}=p\text{ \ , \ }\frac{dp}{dt}=-\frac{dV}{dx}\text{.} \label{motion1}
\end{equation}
In terms of the Hamilton function
\[
H(x,p)=\frac{1}{2m}p^{2}+V(x)
\]
this system of differential equations is equivalent to Hamilton's equations of motion
\begin{equation}
\frac{dx}{dt}=\frac{\partial H}{\partial p}\text{ \ , \ }\frac{dp}{dt}=-\frac{\partial H}{\partial x}.
\label{motion2}
\end{equation}
We will more generally consider the $n$-dimensional version of (\ref{motion2}), which reads
\begin{equation}
\frac{dx_{j}}{dt}=\frac{\partial H}{\partial p_{j}}\text{ \ , \ }\frac{dp_{j}}{dt}=-\frac{\partial H}{\partial x_{j}}\text{ \ , \ }1\leq j\leq n. \label{motion3}
\end{equation}
(In mathematical treatments of Hamilton's equations \cite{Arnold,Goldstein,Birk} the function $H$ can be of a very general type, and even depend on time $t$.) In either case, these equations determine --as any system of differential equations does-- a \emph{flow}. By definition, the Hamiltonian flow is the infinite set of mappings $\phi_{t}^{H}$ defined as follows: suppose we solve the system (\ref{motion3}) after having chosen initial conditions $x_{1}(0),...,x_{n}(0)$ and $p_{1}(0),...,p_{n}(0)$ at time $t=0$ for the position and momentum coordinates. Denote the initial vector thus defined by $\begin{pmatrix} x(0)\\ p(0) \end{pmatrix}$. Assuming that the solution to Hamilton's equations at time $t$ exists (and is unique), we denote it by $\begin{pmatrix} x(t)\\ p(t) \end{pmatrix}$. By definition, $\phi_{t}^{H}$ is just the mapping that takes the initial vector to the final vector:
\begin{equation}
\begin{pmatrix}
x(t)\\
p(t)
\end{pmatrix}
=\phi_{t}^{H}
\begin{pmatrix}
x(0)\\
p(0)
\end{pmatrix}
. \label{fi}
\end{equation}
As time varies, this point describes a curve in phase space, called a \textquotedblleft flow curve\textquotedblright\ or a \textquotedblleft Hamiltonian trajectory\textquotedblright. The essential property to remember is that each mapping $\phi_{t}^{H}$ is a canonical transformation; Hamiltonian flows are therefore volume preserving: this is Liouville's theorem \cite{Arnold,Goldstein}. This easily follows from the fact that symplectic matrices have determinant one.
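A minimal illustration: for the harmonic oscillator with one degree of freedom and $m=\omega=1$ (an arbitrary special case chosen because the flow can be written down exactly as a rotation), the following Python sketch checks that the exact flow satisfies Hamilton's equations and that its Jacobian has determinant one:

```python
# For H = (x^2 + p^2)/2, Hamilton's equations are dx/dt = p, dp/dt = -x,
# with exact flow phi_t(x0, p0) = (x0 cos t + p0 sin t, -x0 sin t + p0 cos t).
# We check (i) that this flow solves Hamilton's equations (finite differences)
# and (ii) that its Jacobian, a rotation matrix, has determinant one.
import math

def flow(t, x0, p0):
    c, s = math.cos(t), math.sin(t)
    return x0 * c + p0 * s, -x0 * s + p0 * c

x0, p0, t, h = 0.7, -1.3, 2.0, 1e-6
x, p = flow(t, x0, p0)
xdot = (flow(t + h, x0, p0)[0] - flow(t - h, x0, p0)[0]) / (2 * h)
pdot = (flow(t + h, x0, p0)[1] - flow(t - h, x0, p0)[1]) / (2 * h)
assert abs(xdot - p) < 1e-6   # dx/dt = dH/dp = p
assert abs(pdot + x) < 1e-6   # dp/dt = -dH/dx = -x

c, s = math.cos(t), math.sin(t)
assert abs(c * c + s * s - 1.0) < 1e-12   # det of the Jacobian is 1
```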
Since it is not true that every matrix with determinant one is symplectic, as soon as $n>1$ volume preservation also holds for other transformations, and is therefore not a characteristic property of Hamiltonian flows; see Arnol'd \cite{Arnold}, Ch.3, \S 16 for a discussion of this fact. The thing to observe is that volume preservation does not imply conservation of shape, and one could therefore imagine that under the action of a Hamiltonian flow a subset of phase space can be stretched in all directions, and eventually get very thinly spread out over huge regions of phase space, so that the projections on any plane could \textit{a priori} become arbitrarily small after some (admittedly, very long) time $t$. In addition, one may very well envisage that the larger the number $n$ of degrees of freedom, the more that spreading will occur, since there are more directions in which the ball is likely to spread! This possibility, which is ruled out by the symplectic camel as we will explain below, has led to many philosophical speculations about Hamiltonian systems. For instance, in his 1989 book Roger Penrose (\cite{penrose}, p.174--184) comes to the conclusion that phase space spreading suggests that \textquotedblleft\textit{classical mechanics cannot actually be true of our world}\textquotedblright\ (p.183, l.--3). In fact, our discussion of Gromov's theorem shows that Hamiltonian evolution is much less disorderly than Penrose thought. Consider again our phase space ball $B_{R}$. Its orthogonal projection (or \textquotedblleft shadow\textquotedblright) on any two-dimensional subspace $\Pi$ of phase space is a circular surface with area $\pi R^{2}$. Suppose now that we move the ball $B_{R}$ using a Hamiltonian flow $\phi_{t}^{H}$ and choose for $\Pi$ the plane $\Pi_{j}$ of conjugate coordinates $x_{j},p_{j}$. The ball will slowly get deformed, while keeping the same volume.
But, as a consequence of the principle of the symplectic camel, its \textquotedblleft shadow\textquotedblright\ on any plane $\Pi_{j}$ will never decrease below its original value $\pi R^{2}$! Why is it so? First, it is clear that if the area of the projection of $f(B_{R})$ on a plane $x_{j},p_{j}$ ($f$ a canonical transformation) never becomes smaller than $\pi R^{2}$, then $f(B_{R})$ can never lie inside a cylinder $(p_{j}-a_{j})^{2}+(x_{j}-b_{j})^{2}=r^{2}$ with $r<R$. So is the \textquotedblleft principle of the symplectic camel\textquotedblright\ stronger than Gromov's theorem? Not at all, it is equivalent to it! Let us prove this. We assume as in section \ref{subsec3} that $j=1$; this does not restrict the generality of the argument. Let $\gamma_{1}$ be the boundary of the projection of $f(B_{R})$ on the $x_{1},p_{1}$ plane; it is a loop encircling a surface $\Gamma_{1}$ with area at least $\pi R^{2}$. That surface $\Gamma_{1}$ can be deformed into a circle with the same area using an area-preserving mapping of the $x_{1},p_{1}$ plane; call that mapping $f_{1}$ and define a global phase space transformation $\widetilde{f}$ by the formula
\[
\widetilde{f}(x_{1},p_{1},x_{2},p_{2},...,x_{n},p_{n})=(f_{1}(x_{1},p_{1}),x_{2},p_{2},...,x_{n},p_{n})\text{.}
\]
Calculating its Jacobian matrix, it is easy to check that $\widetilde{f}$ is a canonical transformation, hence our claim. For a more detailed discussion of this and related topics see \cite{FP,physreps}.

\section{Quantum Blobs}

By definition, a quantum blob is a symplectic egg with radius $R=\sqrt{\hbar}$. The section of a quantum blob by a plane of conjugate coordinates thus has area $\pi\hbar=\frac{1}{2}h$. We will see that quantum blobs qualify as the smallest units of phase space allowed by the uncertainty principle of quantum mechanics.
We begin with a very simple example illustrating the basic idea, which is that a closed (phase space) trajectory cannot be carried by an energy shell smaller (in a sense to be made precise) than a quantum blob. As simple as this example is, it allows us to recover the ground energy of the anisotropic quantum harmonic oscillator.

\subsection{The harmonic oscillator}

The fact that the ground energy level of a one-dimensional harmonic oscillator
\[
H=\frac{p_{x}^{2}}{2m}+\frac{1}{2}m\omega^{2}x^{2}
\]
is different from zero is heuristically justified in the physical literature by the following observation: since Heisenberg's uncertainty relation $\Delta p_{x}\Delta x\geq\frac{1}{2}\hbar$ prevents us from assigning simultaneously a precise value to both position and momentum, the oscillator cannot be at rest. To show that the lowest energy has the value $\frac{1}{2}\hbar\omega$ predicted by quantum mechanics, one can then argue as follows: since we cannot distinguish the origin $(x=0,p=0)$ of phase space from a phase plane trajectory lying inside the double hyperbola $|p_{x}x|<\frac{1}{2}\hbar$, we must require that the points $(x,p)$ of that trajectory are such that $|p_{x}x|\geq\frac{1}{2}\hbar$; multiplying both sides of the trivial inequality
\[
\frac{p_{x}^{2}}{m\omega}+m\omega x^{2}\geq2|p_{x}x|\geq\hbar
\]
by $\omega/2$ we then get
\[
E=\frac{p_{x}^{2}}{2m}+\frac{1}{2}m\omega^{2}x^{2}\geq\frac{1}{2}\hbar\omega
\]
which is the correct lower bound for the quantum energy. This argument can be reversed: since the lowest energy of an oscillator with frequency $\omega$ and mass $m$ is $\frac{1}{2}\hbar\omega$, the minimal phase space trajectory will be the ellipse
\[
\frac{p_{x}^{2}}{m\hbar\omega}+\frac{x^{2}}{(\hbar/m\omega)}=1
\]
which encloses a surface with area $\frac{1}{2}h$.
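The two computations in this argument (the lower bound along the hyperbola, and the area of the minimal ellipse) can be replayed numerically; the values of $\hbar$, $m$, $\omega$ in this Python sketch are arbitrary illustrative choices:

```python
# (i) On the hyperbola |p x| = hbar/2, the energy E = p^2/(2m) + m w^2 x^2 / 2
#     is always >= hbar*w/2.
# (ii) The minimal ellipse p^2/(m hbar w) + x^2/(hbar/(m w)) = 1 encloses
#      area pi*hbar = h/2.
import math, random

hbar, m, w = 1.0, 2.0, 3.0   # illustrative values
random.seed(0)
for _ in range(1000):
    x = random.uniform(0.01, 10.0)
    p = hbar / (2 * x)                      # point on |p x| = hbar/2
    E = p * p / (2 * m) + 0.5 * m * w * w * x * x
    assert E >= 0.5 * hbar * w - 1e-12

# area of the ellipse p^2/a^2 + x^2/b^2 = 1 is pi*a*b
a = math.sqrt(m * hbar * w)
b = math.sqrt(hbar / (m * w))
assert abs(math.pi * a * b - math.pi * hbar) < 1e-12
```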
Everything in this discussion immediately extends to the generalized anisotropic $n$-dimensional oscillator
\[
H=\sum_{j=1}^{n}\frac{p_{j}^{2}}{2m_{j}}+\frac{1}{2}m_{j}\omega_{j}^{2}x_{j}^{2}
\]
and one concludes that the smallest possible trajectories in $x_{j},p_{j}$ space are the ellipses
\[
\frac{p_{j}^{2}}{m_{j}\hbar\omega_{j}}+\frac{x_{j}^{2}}{(\hbar/m_{j}\omega_{j})}=1\text{.}
\]
By the same argument as above, using each of the Heisenberg uncertainty relations
\begin{equation}
\Delta p_{j}\Delta x_{j}\geq\frac{1}{2}\hbar \label{hup}
\end{equation}
we recover the correct ground energy level
\[
E=\frac{1}{2}\hbar\omega_{1}+\frac{1}{2}\hbar\omega_{2}+\cdots+\frac{1}{2}\hbar\omega_{n}
\]
as predicted by standard quantum theory \cite{Messiah}. In addition, one finds that the projection of the motion on any plane of conjugate variables $x_{j},p_{j}$ will always enclose a surface having an area at least equal to $\frac{1}{2}h$. In other words, the motions corresponding to the lowest possible energy must lie on a quantum blob!

\subsection{Quantum blobs and uncertainty}

The Heisenberg inequalities (\ref{hup}) are a weak form of the quantum uncertainty principle; they are a particular case of the more accurate Robertson--Schr\"{o}dinger \cite{Rob,Schr} inequalities
\begin{equation}
(\Delta p_{j})^{2}(\Delta x_{j})^{2}\geq\Delta(x_{j},p_{j})^{2}+\tfrac{1}{4}\hbar^{2} \label{robup}
\end{equation}
(see Messiah \cite{Messiah} for a simple derivation). Here, in addition to the standard deviations $\Delta x_{j}$, $\Delta p_{j}$, we have the covariances $\Delta(x_{j},p_{j})$, which are a measure of how much the two variables $x_{j},p_{j}$ change together. (We take the opportunity to note that the interpretation of quantum uncertainty in terms of standard deviations goes back to Kennard \cite{kennard}; Heisenberg's \cite{heisenberg} own interpretation was much more heuristic.)
Contrary to what is often believed, the Heisenberg inequalities (\ref{hup}) and the Robertson--Schr\"{o}dinger inequalities (\ref{robup}) are not statements about the accuracy of our measurements; their derivation assumes on the contrary perfect instruments; see the discussion in Peres \cite{Peres}, p.93. Their meaning is that if the same preparation procedure is repeated a large number of times on an ensemble of systems, and is followed either by a measurement of $x_{j}$ or by a measurement of $p_{j}$, then the results obtained have standard deviations $\Delta x_{j}$, $\Delta p_{j}$; in addition these measurements need not be independent: this is expressed by the statistical covariances $\Delta(x_{j},p_{j})$ appearing in the inequalities (\ref{robup}). It turns out that quantum blobs can be used to give a purely geometric and intuitive idea of quantum uncertainty. Let us first consider the case $n=1$, and define the covariance matrix by
\begin{equation}
\Sigma=
\begin{pmatrix}
\Delta x^{2} & \Delta(x,p)\\
\Delta(p,x) & \Delta p^{2}
\end{pmatrix}
. \label{covma1}
\end{equation}
Its determinant is $\det\Sigma=(\Delta p)^{2}(\Delta x)^{2}-\Delta(x,p)^{2}$, so in this case the Robertson--Schr\"{o}dinger inequality is the same thing as $\det\Sigma\geq\tfrac{1}{4}\hbar^{2}$. Now to the geometric interpretation. In statistics it is customary to associate to $\Sigma$ the so-called covariance ellipse: it is the set $\Omega_{\Sigma}$ of points $(x,p)$ in the phase plane satisfying
\begin{equation}
\frac{1}{2}(x,p)\Sigma^{-1}
\begin{pmatrix}
x\\
p
\end{pmatrix}
\leq1. \label{covell1}
\end{equation}
Its area is $2\pi\sqrt{\det\Sigma}$, that is
\[
\operatorname{Area}(\Omega_{\Sigma})=2\pi\left[ (\Delta p)^{2}(\Delta x)^{2}-\Delta(x,p)^{2}\right] ^{1/2}
\]
and the inequality $\det\Sigma\geq\tfrac{1}{4}\hbar^{2}$ is thus equivalent to $\operatorname{Area}(\Omega_{\Sigma})\geq\pi\hbar=\frac{1}{2}h$.
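A Monte Carlo estimate confirms the area formula, and hence the equivalence just stated; the covariance data in this Python sketch are arbitrary illustrative values (with $\hbar=1$):

```python
# Monte Carlo check of Area = 2*pi*sqrt(det Sigma) for the covariance ellipse
# (1/2) z^T Sigma^{-1} z <= 1, with an illustrative 2x2 covariance matrix.
import math, random

dx2, dp2, cov = 2.0, 1.5, 0.8          # Delta x^2, Delta p^2, Delta(x,p)
det = dx2 * dp2 - cov * cov
inv = [[dp2 / det, -cov / det], [-cov / det, dx2 / det]]   # Sigma^{-1}

def inside(x, p):
    q = x * (inv[0][0] * x + inv[0][1] * p) + p * (inv[1][0] * x + inv[1][1] * p)
    return 0.5 * q <= 1.0

random.seed(1)
L = 6.0                                 # the ellipse fits in the box [-L, L]^2
N = 200000
hits = sum(inside(random.uniform(-L, L), random.uniform(-L, L))
           for _ in range(N))
area_mc = hits / N * (2 * L) ** 2
area_exact = 2 * math.pi * math.sqrt(det)
assert abs(area_mc - area_exact) / area_exact < 0.05

# Robertson-Schrodinger for this Sigma (hbar = 1): det Sigma >= hbar^2/4,
# equivalently Area >= pi*hbar
assert det >= 0.25 and area_exact >= math.pi
```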
We have thus succeeded in expressing the rather complicated Robertson--Schr\"{o}dinger inequality (\ref{robup}) in terms of the area of a certain ellipse. In higher dimensions the same argument applies, but contrary to what common intuition suggests, the Robertson--Schr\"{o}dinger inequalities are not expressed in terms of volume (which is the generalization of area to higher dimensions), but again in terms of \emph{areas} --namely those of the intersections of the conjugate planes $x_{j},p_{j}$ with the covariance ellipsoid associated with
\begin{equation}
\Sigma=
\begin{pmatrix}
\Delta(x,x) & \Delta(x,p)\\
\Delta(p,x) & \Delta(p,p)
\end{pmatrix}
. \label{covma2}
\end{equation}
Here $\Delta(x,x),\Delta(x,p)$, etc. are the $n\times n$ block-matrices $\left( \Delta(x_{i},x_{j})\right) _{1\leq i,j\leq n}$, $\left( \Delta(x_{i},p_{j})\right) _{1\leq i,j\leq n}$, etc. Notice that the diagonal terms of $\Sigma$ are just the variances $\Delta x_{1}^{2},...,\Delta x_{n}^{2};\Delta p_{1}^{2},...,\Delta p_{n}^{2}$, so that (\ref{covma2}) reduces to (\ref{covma1}) for $n=1$. Defining the covariance ellipsoid $\Omega_{\Sigma}$ as above, one then proves that the inequalities (\ref{robup}) are equivalent to the property that the area of the intersection of $\Omega_{\Sigma}$ with each of the planes $x_{j},p_{j}$ is at least $\frac{1}{2}h$. These inequalities are saturated (\textit{i.e.} they become equalities) if and only if these intersections have area exactly $\frac{1}{2}h$, that is, if and only if $\Omega_{\Sigma}$ is a quantum blob!
The proof goes as follows (for a detailed argument see \cite{FP,physreps}): one first remarks, using a simple algebraic argument, that the Robertson--Schr\"{o}dinger inequalities are equivalent to the following condition on the covariance matrix, well-known in quantum optics, see \textit{e.g.} \cite{SMD,SSM} and the references therein:

\begin{quotation}
\textit{The eigenvalues of the Hermitian matrix} $\Sigma+\frac{i\hbar}{2}J$ \textit{are non-negative}: $\Sigma+\frac{i\hbar}{2}J\geq0$.
\end{quotation}

\noindent The next step consists in noting that, in view of Sylvester's theorem from linear algebra, the leading principal minors of
\[
\Sigma+\frac{i\hbar}{2}J=
\begin{pmatrix}
\Delta(x,x) & \Delta(x,p)+\frac{i\hbar}{2}I\\
\Delta(p,x)-\frac{i\hbar}{2}I & \Delta(p,p)
\end{pmatrix}
\]
are non-negative. This applies in particular to the minors of order $2$, so that we must have
\[
\begin{vmatrix}
\Delta x_{j}^{2} & \Delta(x_{j},p_{j})+\frac{i\hbar}{2}\\
\Delta(p_{j},x_{j})-\frac{i\hbar}{2} & \Delta p_{j}^{2}
\end{vmatrix}
\geq0
\]
and this condition is precisely the Robertson--Schr\"{o}dinger inequality (\ref{robup}). As we have seen, the fact that the covariance ellipsoid is cut by the conjugate coordinate planes along ellipses with area $\geq\frac{1}{2}h$ implies the Robertson--Schr\"{o}dinger inequalities. This is thus a geometric statement --and a strong one-- of the quantum uncertainty principle, which can be rephrased as follows:

\begin{quotation}
\textit{Every quantum covariance ellipsoid contains a quantum blob, i.e. a symplectic egg with radius} $\sqrt{\hbar}$. \textit{When this ellipsoid is a quantum blob, the Robertson--Schr\"{o}dinger inequalities are saturated.}
\end{quotation}

This statement can be extended in various ways; in a very recent paper \cite{sat} we have applied this geometric approach to the quantum uncertainty principle to the study of partial saturation of the Robertson--Schr\"{o}dinger inequalities for mixed quantum states.
We show, in particular, that partial saturation corresponds to the case where some (but not all) planes of conjugate coordinates cut the covariance ellipsoid along an ellipse with area exactly $\frac{1}{2}h$; this allows us to characterize those states for which this occurs (they are generalized Gaussians). Another important point, which we will unfortunately not be able to discuss in detail because of length limitations, is the following: everything we have said above still holds true if we replace the sentence \textquotedblleft planes of conjugate coordinates $x_{j},p_{j}$\textquotedblright\ with the sentence \textquotedblleft symplectic planes\textquotedblright. A symplectic plane is a two-dimensional subspace of phase space which has the property that if we restrict the symplectic form to it, then we obtain a new symplectic form, defined on this two-dimensional space. For instance, it is easy to check that the $x_{j},p_{j}$ planes are symplectic planes (but those with coordinates $x_{j},p_{k}$, $j\neq k$, or $x_{j},x_{k}$, $p_{j},p_{k}$ are not). One proves \cite{Arnold,Birk} that every symplectic plane can be obtained from any of the $x_{j},p_{j}$ planes using a symplectic transformation. This property implies, in particular, that the Robertson--Schr\"{o}dinger inequalities (\ref{robup}) are covariant under symplectic transformations: if one defines new coordinates $x^{\prime},p^{\prime}$ by $(x^{\prime},p^{\prime})^{T}=S(x,p)^{T}$, $S$ a symplectic matrix, then if \[ (\Delta p_{j})^{2}(\Delta x_{j})^{2}\geq\Delta(x_{j},p_{j})^{2}+\tfrac{1}{4}\hbar^{2} \] we also have \[ (\Delta p_{j}^{\prime})^{2}(\Delta x_{j}^{\prime})^{2}\geq\Delta(x_{j}^{\prime},p_{j}^{\prime})^{2}+\tfrac{1}{4}\hbar^{2}. \] Also, there are possible non-trivial generalizations of the uncertainty principle, using new results in symplectic topology, for instance \cite{Abbo}, which extends Gromov's theorem (in the linear case) to projections on symplectic subspaces with dimension greater than $2$.
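As a quick numerical illustration of this covariance (a sketch of ours for one degree of freedom, with arbitrarily chosen matrices): a $2\times2$ matrix $S$ is symplectic precisely when $\det S=1$, and the Robertson--Schr\"{o}dinger combination $(\Delta p)^{2}(\Delta x)^{2}-\Delta(x,p)^{2}$ is just $\det\Sigma$, which the map $\Sigma\mapsto S\Sigma S^{T}$ leaves unchanged:

```python
# Numerical sketch: for n = 1 a symplectic matrix S satisfies S^T J S = J
# (equivalently det S = 1), and Sigma -> S Sigma S^T preserves det Sigma,
# i.e. the Robertson-Schrodinger combination. All values are arbitrary.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # standard symplectic form, n = 1

a, b, c = 1.3, 0.7, -0.4                  # arbitrary entries
S = np.array([[a, b], [c, (1.0 + b * c) / a]])   # det S = 1 by construction
assert np.allclose(S.T @ J @ S, J)        # S is indeed symplectic

Sigma = np.array([[2.0, 0.3], [0.3, 1.5]])  # an arbitrary covariance matrix
Sigma_p = S @ Sigma @ S.T

print(np.linalg.det(Sigma), np.linalg.det(Sigma_p))  # both 2.91
```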
In \cite{gossang} we have shown how the result of \cite{Abbo} leads to \textquotedblleft quantum universal invariants\textquotedblright. \section{Conclusion} Quoting the great mathematician Hermann Weyl: \begin{quotation} `\textit{In these days the angel of topology and the devil of abstract algebra fight for the soul of each individual mathematical domain' (H. Weyl, 1939)} \end{quotation} This quotation goes straight to the point, and applies to Physics as well: while algebra (in the large) has dominated the scene of quantum mechanics for a very long time (in fact, from its beginning), we are witnessing a slow but steady emergence of geometric ideas. Not only do these geometric ideas add clarity to many concepts, but they also lead to new insights (see e.g. \cite{gossang}). This is what we had in mind while writing the present paper. \begin{acknowledgement} The present work has been supported by the Austrian Research Agency FWF (Projektnummer P20442-N13). \end{acknowledgement} \begin{acknowledgement} I wish to express my gratitude to my son Sven for having drawn the pictures in this paper. \end{acknowledgement}
\section{Introduction} Recently the ATLAS and the CMS Collaborations, using the combined 7~TeV and 8~TeV data, found a signal for a boson, with ATLAS finding a signal at $126.0 \pm0.4 ({\rm stat})\pm 0.4({\rm sys})~{\rm GeV}$ at the $5.0\sigma$ level~\cite{:2012gk} and CMS finding a signal at $125.3\pm 0.4 ({\rm stat})\pm 0.5({\rm sys})~{\rm GeV}$ at the $5.0\sigma$ level~\cite{:2012gu}. While the properties of this boson still need to be fully established, there is a general belief that it is indeed the long sought-after Higgs boson~\cite{Englert:1964et,Higgs:1964pj,Guralnik:1964eu} of the electroweak theory~\cite{Weinberg:1967tq,salam}. In the analysis below we will assume that the observed boson is indeed the Higgs particle that is the remnant of electroweak symmetry breaking. It is pertinent to observe that the results of the ATLAS and CMS Collaborations are remarkably consistent with the predictions of supergravity grand unified models~\cite{Chamseddine:1982jx,Nath:1983aw,Hall:1983iz,Arnowitt:1992aq} with radiative electroweak symmetry breaking (for a review see~\cite{Ibanez:2007pf}), which predict the Higgs boson mass to lie below around $130$ GeV~\cite{Akula:2011aa,Akula:2012kk,Arbey:2012dq,Ellis:2012aa,Baer:2012mv} (for a recent review of Higgs and supersymmetry see~\cite{Nath:2012nh}). However, the fact that the Higgs mass lies close to the upper limit of the prediction of supergravity unification within the Minimal Supersymmetric Standard Model (MSSM) indicates that the loop correction to the Higgs boson mass is rather large, which in turn implies the existence of a high scale of supersymmetry, specifically a high scale for the squarks. However, corrections of the order of a few GeV from a source external to the MSSM can significantly lower the scale of supersymmetry. Here we investigate this possibility by considering an extension of the MSSM with vector-like leptonic supermultiplets.
The assumption of additional vector-like leptonic supermultiplets will not alter the Higgs production cross section and is not strongly constrained by the electroweak data. \\ Aside from the relative heaviness of the Higgs boson is the issue of any possible deviations of the Higgs boson couplings from the ones predicted in the Standard Model. If a significant deviation from the Standard Model prediction is seen, it would indicate the existence of new physics. However, it would take a considerable amount of luminosity, i.e., as much as 3000 fb$^{-1}$ at LHC14 to achieve an accuracy of 10-20\%~\cite{Peskin:2012we} in the determination of the Higgs couplings with fermions and with dibosons. An exception to the above is the diphoton channel for which the background is remarkably small and it was the discovery channel for the Higgs boson. Here the current data gives some hint of a possible deviation from the Standard Model prediction. The ATLAS and the CMS Collaborations give~\cite{:2012gu,:2012gk}: \begin{eqnarray} R_{\gamma \gamma} \equiv \frac{\sigma(pp\to h)_{\rm obs}}{\sigma(pp\to h)_{\rm SM}} \cdot \frac{\Gamma(h\to\gamma\gamma)_{\rm obs}}{\Gamma(h\to \gamma\gamma)_{\rm SM}} = 1.8\pm 0.5 ~({\rm ATLAS }), ~1.6\pm 0.4~({\rm CMS}), \label{RGG} \end{eqnarray} where \begin{eqnarray} \frac{\sigma(pp\to h)_{\rm obs}}{\sigma(pp\to h)_{\rm SM}} = 1.4\pm 0.3 {\rm ~(ATLAS)}, ~0.87 \pm 0.23 {\rm ~(CMS)}. \end{eqnarray} In the Standard Model the largest contribution to the $h\to\gamma\gamma$ mode arises from the $W^+W^-$ in the loop and this contribution is partially cancelled by the contribution arising from the top quark in the loop. If this observed enhancement is not due to QCD uncertainties~\cite{Baglio:2012fu}, one needs new contributions beyond the standard model to increase the diphoton rate. 
There are many works which have investigated this possibility, and an enhancement of the diphoton rate can be achieved in many ways: from light staus with large mixing~\cite{Carena:2011aa,Giudice:2012pf,Sato:2012bf,Basso:2012tr}, from extra vector-like leptons~\cite{Carena:2012xa,An:2012vp,Joglekar:2012vc,ArkaniHamed:2012kq,Almeida:2012bq,Davoudiasl:2012ig,Davoudiasl:2012tu} and through other mechanisms~\cite{Wang:2012zv,Draper:2012xt,Abe:2012fb,Haba:2012zt,Delgado:2012sm,SchmidtHoberg:2012yy,Urbano:2012tx,Moreau:2012da, Chala:2012af,Picek:2012ei,Dawson:2012mk,Choi:2012he,SchmidtHoberg:2012ip,Huo:2012tw,Cheung:2012pq,Basso:2012nh}. Additional papers where vector-like fermions have been discussed are~\cite{Dawson:2012di,Bonne:2012im,Kearney:2012zi,Voloshin:2012tv,Carmona:2013cq}. Most of these works are within a non-supersymmetric framework. However, with a 125 GeV Higgs mass, vacuum stability is a serious problem in most models. Thus, for example, in the Standard Model vacuum stability up to the Planck scale may not be achievable, since analyses using next-to-next-to-leading-order corrections require that $m_h>129.4$ GeV for the vacuum to be absolutely stable up to the Planck scale~\cite{Degrassi:2012ry} (see, however, \cite{Alekhin:2012py,Masina:2012tz}). For this reason we consider supersymmetric models, which are less problematic with regard to vacuum stability (see e.g.,~\cite{ArkaniHamed:2012kq,Hisano:2010re,Kitahara:2012pb,Carena:2012mw}). Additionally, supersymmetric theories avoid the well-known fine-tuning problems of non-supersymmetric theories. An analysis to determine whether a significant diphoton enhancement can be achieved in the MSSM was carried out in~\cite{Desai:2012qy,Cao:2013ur}. \\ In this work, we consider effects from additional vector-like leptonic multiplets in loops both to the Higgs diphoton rate and to the Higgs mass in a supersymmetric framework.
Vector-like multiplets appear in a variety of grand unified models~\cite{Georgi:1979md,Wilczek:1981iz,Babu:2005gx} as well as in string and brane models. Higgs mass enhancement via vector-like supermultiplets has been considered in previous works, see, e.g.,~\cite{Babu:2008ge,Martin:2009bg,Martin:2010dc}. New particles with couplings to the Higgs are constrained by the electroweak precision tests and such constraints have been discussed in~\cite{Cynolter:2008ea,Joglekar:2012vc,Almeida:2012bq,ArkaniHamed:2012kq} and the detection of such particles was discussed in~\cite{Giddings:2013gh}. The outline of the rest of the paper is as follows: In Section~\ref{Sec2} we give a general analysis of the diphoton rate in the Standard Model as well as in supersymmetric extensions. In Section~\ref{Sec3} we discuss the details of the model. In Section~\ref{Sec4} we give an analysis of the enhancement of the diphoton rate for the model discussed in the previous section. In Section~\ref{Sec5} we give an analysis of the correction to the Higgs boson mass from radiative corrections arising from the exchange of the vector-like supermultiplets. A numerical analysis of the corrections to the Higgs diphoton rate and to the Higgs boson mass is given in Section~\ref{Sec6} and conclusions are given in Section~\ref{Sec7}. Further details are given in Appendix~\ref{AppA} and \ref{AppB}. \section{A general analysis of the diphoton rate}\label{Sec2} We first consider the Standard Model case with the Higgs doublet $H^T=(H^+, H^0)$. 
The decay width of the Higgs $h$ into two photons (where $H^0= (v+ h)/\sqrt 2$ and $v=246\GeV$) at the one-loop level, involving the exchange of spin 1, spin 1/2 and spin 0 particles in the loops, is given by \begin{equation} \Gamma(h\to\gamma\gamma)=\frac{\alpha^{2}m_{h}^{3}}{1024\pi^{3}}\Big|\frac{g_{hVV}}{m_{V}^{2}}Q_{V}^{2}A_{1}(\tau_{V})+\frac{2g_{hff}}{m_{f}}N_{c,f}Q_{f}^{2}A_{\frac{1}{2}}(\tau_{f})+\frac{g_{hSS}}{m_{S}^{2}}N_{c,S}Q_{S}^{2}A_{0}(\tau_{S})\Big|^{2}\,, \label{SMdip} \end{equation} where $V,f,S$ denote vectors, fermions, and scalars, $Q,N$ are their charges and numbers (colors), the $A$'s are the loop functions defined in~\cite{Djouadi:2005gj} and given in Appendix~\ref{AppA}, and $\tau_i = 4m^2_i/m^2_h$. The couplings $g_{hVV}$, etc.\ are defined by the interaction Lagrangian so that \begin{eqnarray} -\mathcal{L}_{\rm int} = g_{hVV} hV_{\mu}^+V^{-\mu}+ g_{hff} hf\bar f+ g_{hSS} hS\bar S \,. \end{eqnarray} For the case of the Standard Model one has $g_{hWW}= g_2 M_{W}$ and $g_{hff}= g_2 m_f\big/(2M_W)$, where $g_2$ is the $SU(2)$ gauge coupling. Here it is easily seen that $g_{hWW}/M_W^2= 2g_{hff}/m_f= 2/v$. In the Standard Model, the largest contribution to the diphoton rate is from the $W$ boson exchange and this contribution is partially cancelled by the contribution from the top quark exchange.
Thus for the Standard Model Eq.~\eqref{SMdip} reduces to \begin{equation} \Gamma_{\rm SM}(h\to\gamma\gamma) \approx \frac{\alpha_{em}^{2}m_{h}^{3}}{256v^2\pi^{3}}\Big|A_{1}(\tau_{W})+N_{c}Q_{t}^{2}A_{\frac{1}{2}}(\tau_{t})\Big|^2 \to \frac{\alpha^{2}_{em}m_{h}^{3}}{256v^2\pi^{3}} |\mathcal{A}_{\rm SM}|^2\,, \label{SMdipR} \end{equation} where $\mathcal{A}_{\rm SM}\approx -6.49$.\\ If the masses of the particles running in the loops which give rise to the decay of the Higgs to diphotons are much heavier than the Higgs boson, the decay $h\to \gamma\gamma$ is governed by an $h\gamma\gamma$ effective coupling which can be calculated through the photon self-energy corrections~\cite{Ellis:1975ap,Shifman:1979eb} and reads \begin{equation} \mathcal{L}_{h\gamma\gamma}=\frac{\alpha_{em}}{16\pi}h\Big[\sum_{i}b_{i} Q_i^2 \frac{\partial}{\partial v}\log m_{i}^{2}(v)\Big]F_{\mu\nu}F^{\mu\nu}\,, \label{Lhgg} \end{equation} where the $b_{i}$ are: \begin{align} \label{2} b_{1}=-7\,, & \qquad{\rm for\ a\ vector\ boson},\\ \label{3} b_{\frac{1}{2}}=\tfrac{4}{3}\,, & \qquad{\rm for\ a\ Dirac\ fermion},\\ \label{4} b_{0}=\tfrac{1}{3}\,, & \qquad{\rm for\ a\ charged\ scalar}. \end{align} In the large mass limit, the exact one-loop result of Eq.~\eqref{SMdip} agrees with Eq.~\eqref{Lhgg}. For relatively light particles with mass $m$ running in the loop, $b_i$ receives finite mass corrections of order $m^2_h\big/4m^2$. When there are multiple particles carrying the same electric charge circulating in the loops, one can write a more general expression by replacing $\log m_{i}^{2}$ by $\log\big(\det M^{2}\big)\,$, where $M^2$ is the mass-squared matrix of the particles circulating in the loops.
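These statements are easy to check numerically. The sketch below (ours, with assumed round input masses and the loop-function conventions of Djouadi cited above, valid for $\tau\geq1$) reproduces the quoted value $\mathcal{A}_{\rm SM}\approx-6.49$, the heavy-mass limits $b_{1}=-7$ and $b_{1/2}=\tfrac{4}{3}$ of Eqs.~(\ref{2})--(\ref{3}), and a rough estimate of the Standard Model width:

```python
# Sketch (ours): loop amplitudes for h -> gamma gamma in the conventions of
# Djouadi cited in the text, valid for tau = 4 m^2 / m_h^2 >= 1.  The input
# masses are assumed round values (m_h = 125, m_W = 80.4, m_t = 173 GeV).
import numpy as np

def f(tau):
    return np.arcsin(1.0 / np.sqrt(tau)) ** 2

def A_1(tau):        # spin-1 (W boson) loop amplitude
    return -(2.0 + 3.0 * tau + 3.0 * tau * (2.0 - tau) * f(tau))

def A_half(tau):     # spin-1/2 (fermion) loop amplitude
    return 2.0 * tau * (1.0 + (1.0 - tau) * f(tau))

m_h, m_W, m_t = 125.0, 80.4, 173.0
tau_W, tau_t = 4 * m_W**2 / m_h**2, 4 * m_t**2 / m_h**2

# N_c Q_t^2 = 3 * (2/3)^2 = 4/3 for the top quark
A_SM = A_1(tau_W) + 3 * (2.0 / 3.0) ** 2 * A_half(tau_t)
print(A_SM)                    # approximately -6.49, as quoted

# heavy-mass limits reproduce the b_i coefficients: -7 and 4/3
print(A_1(1e6), A_half(1e6))

# crude estimate of the SM width from the formula above, with alpha_em(0)
alpha_em, v = 1.0 / 137.036, 246.0
Gamma_keV = alpha_em**2 * m_h**3 / (256 * v**2 * np.pi**3) * A_SM**2 * 1e6
print(Gamma_keV)               # roughly 9 keV
```

The heavy-mass limits tie the exact one-loop amplitudes to the effective coupling of Eq.~\eqref{Lhgg}; the last number is only a ballpark figure obtained with $\alpha_{em}(0)$.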
\\ For MSSM one has two Higgs doublets: \begin{eqnarray} H_d = \left(\begin{matrix} H_d^0\cr H_d^-\end{matrix}\right) = \left(\begin{matrix} \frac{1}{\sqrt 2}(v_d+\phi_1)\cr H_d^-\end{matrix}\right)\,,\qquad H_u= \left(\begin{matrix}H_u^+\cr H_u^0\end{matrix}\right) =\left(\begin{matrix}H_u^+ \cr \frac{1}{\sqrt 2}(v_u+\phi_2) \end{matrix}\right)\,. \end{eqnarray} where $v_d$ and $v_u$ are the VEVs of $H^0_d$ and $H^0_u$. Extension of Eq.~\eqref{Lhgg} to the supersymmetric case is straightforward and we have \begin{equation} \mathcal{L}_{h\gamma\gamma}^{{\rm SUSY}}=\frac{\alpha_{em}}{16\pi}h\sum_{i}b_{i}Q_i^2\Big[\cos\alpha\frac{\partial}{\partial v_{u}}\log m_{i}^{2}(v_u)-\sin\alpha\frac{\partial}{\partial v_{d}}\log m_{i}^{2}(v_d)\Big]F_{\mu\nu}F^{\mu\nu}\,\,, \end{equation} where $\alpha$ is the mixing angle between the two CP-even Higgs in the MSSM. Eq.~\eqref{SMdip} is also modified in the supersymmetric case as we identify the lighter CP-even Higgs with the Standard Model Higgs: \begin{align} \Gamma_{\rm SUSY}(h\to\gamma\gamma)&\approx \frac{\alpha_{em}^{2}m_{h}^{3}}{256 v^2 \pi^{3}}\bigg| \sin(\beta-\alpha) Q_{W}^{2} A_1(\tau_W) +\frac{\cos \alpha}{\sin \beta} N_t Q^2_t A_{\frac{1}{2}}(\tau_t)\nonumber\\ &\qquad\qquad\quad +\frac{b_{\frac{1}{2}} v}{2} N_f Q_{f}^{2} \Big(\cos\alpha \frac{\partial}{\partial v_u} \log m_{f}^{2} -\sin\alpha \frac{\partial}{\partial v_d} \log m_{f}^{2} \Big) \nonumber\\ &\qquad\qquad\quad +\frac{b_0 v}{2} N_{c,S} Q_{S}^{2} \Big(\cos\alpha \frac{\partial}{\partial v_u} \log m_{S}^{2} -\sin\alpha \frac{\partial}{\partial v_d} \log m_{S}^{2} \Big) \bigg|^{2}\,,\label{SUSYdip} \end{align} where $\tan\beta= v_u/v_d$. Compared to the Standard Model case, the Higgs couplings to the $W$ boson and to the top quark are modified by factors $\sin(\beta-\alpha)$ and $\frac{\cos \alpha}{\sin \beta}$ (see Eq. \ref{SUSYdip}). 
Now the fermionic contribution also comes from the chargino exchange while the scalar contribution includes contributions from the exchange of the sleptons, the squarks and the charged Higgs fields. \section{The Model}\label{Sec3} To enhance the Higgs diphoton decay rate we focus on the contribution of the vector-like leptonic supermultiplets, since relatively light vector-like quark supermultiplets would affect the Higgs production cross sections while leptonic supermultiplets would not. Specifically, we consider an extra vector-like leptonic generation $F$ consisting of $L,L^c,E,E^c$ with $SU(3)_C\times SU(2)_L\times U(1)_Y$ quantum numbers:\footnote{ Gauge coupling unification can be achieved with a full generation of vector-like multiplets including a vector-like quark sector. We assume relatively large masses and negligible Yukawa couplings for the quark sector and thus these additions would not contribute to the diphoton rate or to the Higgs mass enhancement.} \begin{equation} F:\quad \begin{array}{cc} L=(\mathbf{1},\mathbf{2},-\tfrac{1}{2})\,, & \qquad E^{c}=(\mathbf{1},\mathbf{1},1)\,,\\ L^{c}=(\mathbf{1},\mathbf{2},+\tfrac{1}{2})\,, & \qquad E=(\mathbf{1},\mathbf{1},-1)\,. \end{array} \end{equation} Noting that the Higgs doublets in the MSSM have quantum numbers \begin{gather} H_d=(\mathbf{1},\mathbf{2},-\tfrac{1}{2})\,,\qquad H_u=(\mathbf{1},\mathbf{2},+\tfrac{1}{2})\,, \end{gather} the superpotential for the vector-like leptonic supermultiplets is given by \begin{equation} W = yLH_{d}E^{c}+y'L^{c}H_{u}E+M_{L}LL^{c}+M_{E}EE^{c}+y_{1}^{(m)}L_{3}H_{d}E^{c}+y_{2}^{(m)}LH_{d}E_{3}^{c}\,, \end{equation} where $M_{L}$ and $M_{E}$ are the vector-like masses. We assume that the extra leptons can decay only through the third generation particles; the corresponding couplings $y_{1,2}^{(m)}$ are assumed to be very small, so that they do not have any significant effect on the analysis here.\footnote{ The new leptons could mix with other generations as well.
The reason for allowing for the mixings is to make the new leptons unstable. This instability can be accomplished with very small mixing angles, e.g., $\mathcal{O}(10^{-4})$ or even smaller. Because of this there is no tangible effect on any analyses involving the three generations of leptons. There is one area, however, where lepton flavor violation (LFV) could manifest itself, and that is the decay $\tau'\to \tau + \gamma$, very much like the possibility of the decay $\tau\to \mu + \gamma$~\cite{Ibrahim:2012ds}. The mixings can also lead to the electric dipole moment (EDM) of the tau lepton~\cite{Ibrahim:2010va} in the presence of CP phases.} Neglecting these small terms, the fermionic mass matrix now reduces to \begin{equation} M_{F}=\left(\begin{array}{cc} M_{L} & \tfrac{1}{\sqrt{2}}yv_{d}\\ \tfrac{1}{\sqrt{2}}y'v_{u} & M_{E} \end{array}\right)\,, \label{fermimass} \end{equation} where the off-diagonal elements are the masses generated by Yukawa interactions while the diagonal elements are the vector masses. The two squared-mass eigenvalues arising from Eq.~\eqref{fermimass} are given by \begin{align} m^2_{1,2} & =\frac{1}{4}\Big[2M^2_L+2M^2_E+y'^2 v^2_u+y^2 v^2_d \nonumber\\ &\qquad \pm \sqrt{(2M^2_L+2M^2_E+y'^2 v^2_u+y^2 v^2_d)^2-4(2M_L M_E-y y' v_u v_d)^2 }\Big]\,. \label{m1m2} \end{align} We call the heavier one $\tau'_1$ and the lighter one $\tau'_2$. We note that the neutral components of the $SU(2)$ doublets $L,L^c$ do not play any role here, as they enter neither the analysis of the diphoton rate nor that of the Higgs mass enhancement. \section{Enhancement of the diphoton decay rate of the Higgs boson}\label{Sec4} Inclusion of the vector-like supermultiplet affects the diphoton rate.
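Before evaluating the rate, a quick numerical cross-check of the spectrum (ours; the parameter values below are arbitrary illustrative numbers, not fits): the closed-form squared-mass eigenvalues of Eq.~\eqref{m1m2} agree with a direct singular-value decomposition of $M_F$ in Eq.~\eqref{fermimass}:

```python
# Cross-check (ours; all parameter values are arbitrary sample numbers):
# the closed-form squared-mass eigenvalues of Eq. (m1m2) agree with the
# squared singular values of the fermion mass matrix M_F of Eq. (fermimass).
import numpy as np

v, tan_beta = 246.0, 10.0
v_u = v * tan_beta / np.hypot(1.0, tan_beta)   # v_u = v sin(beta)
v_d = v / np.hypot(1.0, tan_beta)              # v_d = v cos(beta)
y, yp, M_L, M_E = 1.0, 0.9, 210.0, 210.0       # assumed sample point

M_F = np.array([[M_L,                     y * v_d / np.sqrt(2.0)],
                [yp * v_u / np.sqrt(2.0), M_E                   ]])

# closed form of Eq. (m1m2): m1 (heavier, tau'_1), m2 (lighter, tau'_2)
T = 2 * M_L**2 + 2 * M_E**2 + yp**2 * v_u**2 + y**2 * v_d**2
D = 2 * M_L * M_E - y * yp * v_u * v_d
m2_closed = 0.25 * (T + np.array([1.0, -1.0]) * np.sqrt(T**2 - 4 * D**2))

m2_svd = np.linalg.svd(M_F, compute_uv=False) ** 2   # descending order
print(np.allclose(m2_svd, m2_closed))                # True
```

The two squared singular values of $M_F$ coincide with $m^2_{1,2}$, confirming the identification of $\tau'_{1}$ as the heavier and $\tau'_{2}$ as the lighter state.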
Using Eqs.~\eqref{SMdipR} and \eqref{SUSYdip}, the ratio of the decay width of the lighter CP-even Higgs to two photons and the Standard Model prediction can be written as \begin{align} \frac{\Gamma(h\to\gamma\gamma)}{\Gamma(h\to\gamma\gamma)_{{\rm SM}}} &\approx \frac{1}{|\mathcal{A}_{\rm SM}|^2} \bigg|\sin(\beta-\alpha)Q^2_W A_1(\tau_W) +\frac{\cos \alpha}{\sin \beta} N_t Q^2_t A_{\frac{1}{2}}(\tau_t)\nonumber\\ &\qquad\qquad\quad +\frac{b_{\frac{1}{2}} v}{2} N_F Q_{F}^{2} \Big(\cos\alpha \frac{\partial}{\partial v_u} \log M_{F}^{2} -\sin\alpha \frac{\partial}{\partial v_d} \log M_{F}^{2} \Big) \nonumber\\ &\qquad\qquad\quad +\frac{b_0 v}{2} N_{S}Q_{S}^{2} \Big(\cos\alpha \frac{\partial}{\partial v_u} \log M_{S}^{2} -\sin\alpha \frac{\partial}{\partial v_d} \log M_{S}^{2} \Big) \bigg|^{2} \,,\label{ratiotoSM} \end{align} where on the second line we have fermionic contribution from the vector-like fermions and on the third line the scalar contribution from the super-partners of the vector-like fermions. In the analysis here we focus only on the extra contributions arising from the exchange of the leptonic vector-like sector, and do not include other possible corrections to the diphoton rate such as from the exchange of staus, charginos and charged Higgs in the loops. \\ The computation of the vector-like fermion contribution is straightforward, and we find \begin{equation} \sum_i \big[\cos\alpha\frac{\partial}{\partial v_{u}}\log m^2_{i}-\sin\alpha\frac{\partial}{\partial v_{d}}\log m^2_{i}\big] =-\frac{yy'v}{m_{1}m_{2}}\cos(\alpha+\beta)\,, \end{equation} where \begin{equation} m_1 m_2 = M_L M_E - \frac{1}{2} y y' v_u v_d\,. \end{equation} For the case when $M_L=M_E=0$, the fermionic contribution to the diphoton rate is negative. However, for the case when $M_L, M_E\neq 0$ the fermionic contribution can turn positive when $M_L M_E > \frac{1}{2} y y' v_u v_d$. 
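Once the fermionic loop is normalized to the Standard Model amplitude, its overall size is set by the combination $-b_{1/2}Q_f^2/(2\mathcal{A}_{\rm SM})$; a one-line arithmetic check (ours) of this prefactor:

```python
# One-line check (ours) of the numerical prefactor in the fermionic term:
# -b_{1/2} Q_f^2 / (2 A_SM) with b_{1/2} = 4/3, Q_f = 1 and A_SM = -6.49.
b_half, Q_f, A_SM = 4.0 / 3.0, 1.0, -6.49
coeff = -b_half * Q_f**2 / (2.0 * A_SM)
print(round(coeff, 3))   # 0.103, i.e. the coefficient ~0.1 quoted in the text
```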
If the contribution is only from the vector-like fermions, the Higgs diphoton rate is enhanced by a factor of: \begin{align} \frac{\Gamma(h\to\gamma\gamma)}{\Gamma(h\to\gamma\gamma)_{{\rm SM}}} & \approx \Big|1+\frac{1}{\mathcal{A}_{{\rm SM}}}b_{\tfrac{1}{2}}N_{f}Q_{f}^{2}\frac{-v^{2}yy'}{2m_{1}m_{2}}\cos(\alpha+\beta)\Big|^{2}\nonumber \\ & \approx\Big|1+0.1 N_f \frac{v^{2}yy'}{m_{1}m_{2}}\cos(\alpha+\beta)\Big|^{2} \equiv |1+r_f|^2 \,. \label{fermionCon} \end{align} \\ To determine the contribution from the four super-partner fields of the vector-like fermions, one needs to find the mass eigenvalues of a $4\times 4$ mass mixing matrix. In the basis $(\tilde\tau_L', \tilde\tau_R', \tilde\tau_{L}'', \tilde\tau_R'')$ it is given by \begin{equation} \frac{1}{\sqrt{2}} \left(\begin{array}{cc|cc} \multicolumn{2}{c|}{\multirow{2}*{$\sqrt{2}(M_{\tilde{\tau}'}^2)_{2\times 2}$}} & y' v_u M_L + y v_d M_E & 0\\ && 0 & y' v_u M_E + y v_d M_L \\ \hline y' v_u M_L + y v_d M_E & 0 & \multicolumn{2}{c}{\multirow{2}*{$\sqrt{2} (M_{\tilde{\tau}''}^2)_{2\times 2}$}}\\ 0 & y' v_u M_E + y v_d M_L &&\\ \end{array}\right)_{4\times 4}\,, \label{slep1} \end{equation} where $(M^2_{\tilde \tau'})_{2\times 2}$ is given by \begin{eqnarray} (M^2_{\tilde \tau'})_{2\times 2}=\left(\begin{array}{cc} M_{1}^{2}+\tfrac{1}{2}y^2 v^2_d +M_{L}^{2} +\frac{(g_1^2-g_2^2)}{8} (v_d^2 - v_u^2) & \frac{1}{\sqrt 2} y (A_{\tau'} v_d - \mu v_u)\\ \frac{1}{\sqrt 2} y (A_{\tau'} v_d - \mu v_u) & M_1^2 +\tfrac{1}{2}y^2 v^2_d+ M_E^2 - \frac{g_1^2}{4} (v_d^2 - v_u^2) \end{array}\right)\,, \label{slep2} \end{eqnarray} and $(M^2_{\tilde \tau''})_{2\times 2}$ is given by \begin{eqnarray} (M^2_{\tilde \tau''})_{2\times 2} =\left(\begin{array}{cc} M_{2}^{2}+\tfrac{1}{2}y'^2 v^2_u+M_{L}^{2} -\frac{(g_1^2-g_2^2)}{8} (v_d^2 - v_u^2) & \frac{1}{\sqrt 2} y' (A_{\tau''} v_u - \mu v_d)\\ \frac{1}{\sqrt 2} y' (A_{\tau''} v_u - \mu v_d) & M_2^2 +\tfrac{1}{2}y'^2 v^2_u+ M_E^2 +\frac{g_1^2}{4} (v_d^2 - v_u^2) \end{array}\right)\,, 
\label{slep3} \end{eqnarray} where $M_1, M_2$ are soft scalar masses. For further convenience, we define $M^2 \equiv M_{\tilde{\tau}'}^2$ and $M'^2 \equiv M_{\tilde{\tau}''}^2$. As an approximation, we consider the case when the soft squared-masses ($M_{1,2}^2$) are much larger than the vector squared-masses ($M_{L,E}^2$). In this case, the $4\times4$ matrix becomes approximately block diagonal with the diagonal elements consisting of two $2\times 2$ matrices. As the two mass-squared matrices are decoupled, we denote the super-partners of $\tau'$ by $\tilde{\tau}'_{1,2}$ and the super-partners of $\tau''$ by $\tilde{\tau}''_{1,2}$. The contributions from the two decoupled matrices can be obtained straightforwardly. The total bosonic contribution is measured by $r_b$, which reads \begin{equation} r_b = r_1 + r_2 \equiv \frac{1}{\mathcal{A}_{\rm SM}} \frac{b_0 v}{2} Q_S^2 (\Xi_{1} + \Xi_2)\,, \label{rb} \end{equation} where we define \begin{align} \Xi_1 &= \cos \alpha \frac{\partial}{\partial v_{u}} \log (\det M^2) - \sin\alpha \frac{\partial}{\partial v_{d}} \log (\det M^2)\,,\\ \Xi_2 &= \cos \alpha \frac{\partial}{\partial v_{u}} \log (\det M'^2) - \sin\alpha \frac{\partial}{\partial v_{d}} \log (\det M'^2)\,. \end{align} We first focus on $\Xi_1$. Using the $\tilde\tau'$ mass-squared matrix, a direct computation gives \begin{eqnarray} \Xi_1 = \frac{1}{ m_{\tilde\tau_1'}^2 m_{\tilde \tau_2'}^2} \left\{ \Big[ \frac{1}{2} g_1^2 M_{11}^2 - \frac{(g_1^2-g_2^2)}{4} M_{22}^2\Big] v \sin(\alpha+\beta) +\sqrt 2 M_{12}^2 y (A_{\tau'} \sin\alpha + \mu \cos\alpha ) \right\} \,. \label{xi1} \end{eqnarray} For the computation of $\Xi_2$ we need the $\tilde{\tau}''$ mass-squared matrix, and a similar analysis gives \begin{eqnarray} \Xi_2 = \frac{1}{ m_{\tilde\tau_1''}^2 m_{\tilde \tau_2''}^2} \left\{ \Big[ -\frac{1}{2} g_1^2 M'^2_{11} + \frac{(g_1^2-g_2^2)}{4} M'^2_{22} \Big] v \sin(\alpha+\beta) - \sqrt 2 M'^2_{12} y' (A_{\tau''} \cos\alpha + \mu \sin\alpha) \right\} \,.
\label{xi2} \end{eqnarray} \\ Thus the total Higgs diphoton decay rate is enhanced by a factor \begin{eqnarray} R_{\gamma\gamma}=\big|1+ r_f + r_b\big|^2\,. \label{RGG2} \end{eqnarray} A numerical analysis of the size of the diphoton rate enhancement using the result of this section is discussed in Section~\ref{Sec6}. For the numerical analysis, we make the same approximation as above, i.e., we choose the soft squared-masses to be much larger than the vector squared-masses. \section{Higgs mass enhancement}\label{Sec5} Extra particles beyond those in the MSSM can make contributions to the mass of the Higgs boson. In our case, contributions arise from the exchange of both bosonic and fermionic particles in the vector-like supermultiplets. The techniques for the computation of these corrections are well-known (see, e.g., \cite{Ibrahim:2000qj,Ibrahim:2002zk}) and are described in Appendix~\ref{AppB}. Effectively the corrections are encoded in elements $\Delta_{ij}$ which are corrections to the elements of the tree-level mass-squared matrix, as defined in Appendix~\ref{AppB}. The correction to the lighter CP-even Higgs mass is then given by \begin{eqnarray} (\Delta m_h)_{F}=(2m_h^0)^{-1} (\Delta_{11} \sin^2\alpha +\Delta_{22} \cos^2\alpha - \Delta_{12} \sin2\alpha)\,, \label{HiggsMassCor} \end{eqnarray} where $\alpha$ is the mixing angle between the two CP-even Higgs in the MSSM. Thus, one can write the Higgs boson mass in the form \begin{equation} m_h= m_h^{\rm MSSM} + (\Delta m_h)_{F}\,, \end{equation} where $m_h^{\rm MSSM}$ is the Higgs boson mass in the MSSM and $(\Delta m_h)_{F}$ is the correction from the new sector given by Eq.~\eqref{HiggsMassCor}. In the following we will first discuss the contribution to the lightest Higgs boson mass from the bosonic sector and then from the fermionic sector of the vector-like supermultiplets.
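To make Eq.~\eqref{HiggsMassCor} concrete before computing the $\Delta_{ij}$, here is a minimal numerical sketch; every input value below is made up purely for illustration:

```python
# Minimal numerical sketch (all inputs are made-up illustrative values,
# not fit results) of assembling the light-Higgs mass shift of
# Eq. (HiggsMassCor) from the mass-matrix corrections Delta_ij.
import numpy as np

m_h0 = 125.0                            # reference Higgs mass, GeV (assumed)
alpha = -0.11                           # CP-even mixing angle (assumed)
D11, D22, D12 = 80.0, 900.0, -150.0     # hypothetical corrections, GeV^2

dm_h = (D11 * np.sin(alpha)**2 + D22 * np.cos(alpha)**2
        - D12 * np.sin(2.0 * alpha)) / (2.0 * m_h0)
print(dm_h)   # shift in GeV, about 3.4 for these inputs
```

For small $|\alpha|$ the shift is dominated by $\Delta_{22}$, i.e. by the corrections on the $v_u$ side, as the numbers above illustrate.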
The total contribution to the Higgs mass is the sum of bosonic and fermionic contributions, and we have \begin{eqnarray} \Delta_{ij} = \Delta_{ij}^b + \Delta_{ij}^f\,. \label{deltaij} \end{eqnarray} We note that the coupling between the $\tau'$ and the $\tau''$ sectors is characterized by $M_L$ and $M_E$. For the case when $M_L=M_E=0$ the $\tau'$ and the $\tau''$ sectors (both bosonic and fermionic sectors) totally decouple. In this circumstance one can calculate $\Delta_{ij}$ analytically. \subsection{Higgs mass correction from the bosonic sector} The mass-squared matrix in the bosonic sector is given by Eqs.~\eqref{slep1}-\eqref{slep3}. Here again, we choose the soft squared-masses to be much larger than $M_L^2$ and $M_E^2$. In this case the $4\times 4$ mass-squared matrix of Eq.~\eqref{slep1} becomes approximately block diagonal and one can obtain the results for Higgs mass enhancement from the super-partners of the vector-like fermions ($\tilde{\tau}'_{1,2}$ and $\tilde{\tau}''_{1,2}$). We first compute the corrections from $\tilde{\tau}'_{1,2}$. The computation of the corrections uses the Coleman-Weinberg one-loop effective potential~\cite{Coleman:1973jx,Arnowitt:1992qp} (see Appendix~\ref{AppB}). The contribution to this one-loop effective potential from $\tilde{\tau}'_{1,2}$ exchanges is given by \begin{eqnarray} \Delta V_{\tilde{\tau}'}^b=\frac{1}{64\pi^2} \sum_{i=1,2} 2 m_{\tilde{\tau}'_i}^4 \Big(\ln \frac{m_{\tilde{\tau}'_i}^2}{Q^2}-\frac{3}{2}\Big) \,, \end{eqnarray} where $Q$ is the running renormalization group scale.
Our computation of $\Delta_{ij}^{\tilde{\tau}'}$ following the prescription in Appendix~\ref{AppB} (further details can be found in~\cite{Ibrahim:2000qj,Ibrahim:2002zk}) gives \begin{align} \Delta_{11}^{\tilde{\tau}'} &= \beta y^4 v^2_d \ln \frac{m_{\tilde{\tau}'_1}^2 m_{\tilde{\tau}'_2}^2}{Q^4} -\beta y^4 v^2_d A_{\tau'}^2 \frac{(A_{\tau'} -\mu\tan\beta)^2} {(m_{\tilde{\tau}'_1}^2-m_{\tilde{\tau}'_2}^2)^2} f(m_{\tilde{\tau}'_1}^2, m_{\tilde{\tau}'_2}^2) \nonumber\\ &\,\quad + 2 \beta y^4 v^2_d A_{\tau'} \frac{A_{\tau'} -\mu\tan\beta} {m_{\tilde{\tau}'_1}^2-m_{\tilde{\tau}'_2}^2} \ln \frac{m_{\tilde{\tau}'_1}^2}{m_{\tilde{\tau}'_2}^2} \,,\label{Db'11}\\ \Delta_{22}^{\tilde{\tau}'} &= - \beta y^4 v^2_d \mu^2 \frac{(A_{\tau'}-\mu\tan\beta)^2} {(m_{\tilde{\tau}'_1}^2-m_{\tilde{\tau}'_2}^2)^2} f(m_{\tilde{\tau}'_1}^2, m_{\tilde{\tau}'_2}^2)\,,\\ \Delta_{12}^{\tilde{\tau}'} &= \beta y^4 v^2_d \mu A_{\tau'} \frac{(A_{\tau'} -\mu\tan\beta)^2} {(m_{\tilde{\tau}'_1}^2-m_{\tilde{\tau}'_2}^2)^2} f(m_{\tilde{\tau}'_1}^2, m_{\tilde{\tau}'_2}^2) - \beta y^4 v^2_d \mu \frac{A_{\tau'} -\mu\tan\beta} {m_{\tilde{\tau}'_1}^2-m_{\tilde{\tau}'_2}^2} \ln \frac{m_{\tilde{\tau}'_1}^2}{m_{\tilde{\tau}'_2}^2}\,,\label{Db'12} \end{align} where $\beta=1/16\pi^2$, and $f(x,y)$ is given by \begin{eqnarray} f(x,y)= -2 + \frac{y+x}{y-x} \ln \frac{y}{x}\,. \end{eqnarray} The contribution to the one-loop effective potential from $\tilde{\tau}''_{1,2}$ exchanges is given by \begin{eqnarray} \Delta V_{\tilde{\tau}''}^b=\frac{1}{64\pi^2} \sum_{i=1,2} 2 m_{\tilde{\tau}''_i}^4\Big(\ln \frac{m_{\tilde{\tau}''_i}^2}{Q^2}-\frac{3}{2}\Big) \,. 
\end{eqnarray} A similar computation gives the result for $\Delta_{ij}^{\tilde{\tau}''}$: \begin{align} \Delta_{11}^{\tilde{\tau}''} &=- \beta y'^4 v_u^2 \mu^2 \frac{(A_{\tau''}-\mu\cot\beta)^2} {(m_{\tilde{\tau}''_1}^2-m_{\tilde{\tau}''_2}^2)^2} f(m_{\tilde{\tau}''_1}^2, m_{\tilde{\tau}''_2}^2)\,,\label{Db''11}\\ \Delta_{22}^{\tilde{\tau}''} &= \beta y'^4 v_u^2 \ln \frac{m_{\tilde{\tau}''_1}^2 m_{\tilde{\tau}''_2}^2}{Q^4} - \beta y'^4 v_u^2 A_{\tau''}^2 \frac{(A_{\tau''} -\mu\cot\beta)^2} {(m_{\tilde{\tau}''_1}^2-m_{\tilde{\tau}''_2}^2)^2} f(m_{\tilde{\tau}''_1}^2, m_{\tilde{\tau}''_2}^2) \nonumber\\ & \,\quad + 2 \beta y'^4 v_u^2 A_{\tau''} \frac{A_{\tau''} -\mu\cot\beta} {m_{\tilde{\tau}''_1}^2-m_{\tilde{\tau}''_2}^2} \ln \frac{m_{\tilde{\tau}''_1}^2}{m_{\tilde{\tau}''_2}^2}\,, \label{Db''22}\\ \Delta_{12}^{\tilde{\tau}''} &=-\beta y'^4 v_u^2 \mu \frac{A_{\tau''} -\mu\cot\beta} {m_{\tilde{\tau}''_1}^2-m_{\tilde{\tau}''_2}^2} \ln \frac{m_{\tilde{\tau}''_1}^2}{m_{\tilde{\tau}''_2}^2} + \beta y'^4 v_u^2 \mu A_{\tau''} \frac{(A_{\tau''} -\mu\cot\beta)^2} {(m_{\tilde{\tau}''_1}^2-m_{\tilde{\tau}''_2}^2)^2} f(m_{\tilde{\tau}''_1}^2, m_{\tilde{\tau}''_2}^2)\,.\label{Db''12} \end{align} The total contribution from the bosonic sector of the vector-like supermultiplets $\Delta_{ij}^b$ is then \begin{eqnarray} \Delta_{ij}^b= \Delta_{ij}^{\tilde{\tau}'} + \Delta_{ij}^{\tilde{\tau}''}\,. \label{bosonicdelta} \end{eqnarray} \subsection{Corrections to the Higgs boson mass from the fermionic sector} We now turn to a discussion of the contribution from the fermionic sector of the vector-like supermultiplet. Here, in contrast to the bosonic sector, there are no soft terms and further the vector masses can be comparable to the masses arising from Yukawa couplings. As a consequence $M_L, M_E$ should be included in the analysis for a reliable estimate of the contribution from the fermionic sector to the Higgs mass correction. 
The contribution to the one-loop effective potential from the vector-like fermions is given by \begin{equation} \Delta V_{\tau'_{1,2}}^f= -\frac{1}{64\pi^2} \sum_{i=1,2} 4 m_{i}^4 \Big(\ln\frac{m_{i}^2}{Q^2}-\frac{3}{2}\Big)\,, \end{equation} where $m_{1,2}$ are the mass eigenvalues of the vector-like fermions which are given in Eq.~\eqref{m1m2}. A straightforward analysis gives \begin{align} \Delta_{11}^f & =-\beta \Big[\big(\frac{1}{2}y^{4}v_{d}^{2}-\frac{1}{2}N_{1}\sqrt{R}+\frac{R_{d}^{\prime2}}{8R}\big)\ln\frac{m_{1}^{2}m_{2}^{2}}{Q^{4}}+\big(\frac{y^{2}v_{d}R'_{d}}{2\sqrt{R}}-\frac{1}{2}N_{1}T\big)\ln\frac{m_{1}^{2}}{m_{2}^{2}}+N_{1}\sqrt{R}\Big]\,, \label{Df11nz}\\ \Delta_{22}^f & =-\beta \Big[\big(\frac{1}{2}y'^{4}v_{u}^{2}-\frac{1}{2}N_{2}\sqrt{R}+\frac{R_{u}^{\prime2}}{8R}\big)\ln\frac{m_{1}^{2}m_{2}^{2}}{Q^{4}}+\big(\frac{y'^{2}v_{u}R'_{u}}{2\sqrt{R}}-\frac{1}{2}N_{2}T\big)\ln\frac{m_{1}^{2}}{m_{2}^{2}}+N_{2}\sqrt{R}\Big]\,, \label{Df22nz}\\ \Delta_{12}^f & =-\beta \Big[\big(\frac{1}{2}y^{2}y'^{2}v_{u}v_{d}-\frac{1}{2}N\sqrt{R}+\frac{R'_{u}R'_{d}}{8R}\big)\ln\frac{m_{1}^{2}m_{2}^{2}}{Q^{4}}\nonumber\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+\big(\frac{y^{2}v_{d}R'_{u}+y'^{2}v_{u}R'_{d}}{4\sqrt{R}}-\frac{1}{2}NT\big)\ln\frac{m_{1}^{2}}{m_{2}^{2}}+N\sqrt{R}\Big]\,, \label{Df12nz} \end{align} where \begin{align} T & =M_{L}^{2}+M_{E}^{2}+\frac{1}{2}y'^{2}v_{u}^{2}+\frac{1}{2}y^{2}v_{d}^{2}\,,\\ R & =T^{2}-(2M_{L}M_{E}-yy'v_{u}v_{d})^{2}\,,\\ N_{1} & =\frac{R'_{d}}{2v_{d}\sqrt{R}}+\frac{R_{d}^{\prime2}}{4\sqrt{R^{3}}}-\frac{R''_{d}}{2\sqrt{R}}\,,\\ N_{2} & =\frac{R'_{u}}{2v_{u}\sqrt{R}}+\frac{R_{u}^{\prime2}}{4\sqrt{R^{3}}}-\frac{R''_{u}}{2\sqrt{R}}\,,\\ N & =\frac{R'_{u}R'_{d}}{4\sqrt{R^{3}}}-\frac{R''_{ud}}{2\sqrt{R}}\,, \end{align} and \begin{gather} R'_{d}=\frac{\partial R}{\partial v_{d}}\,,\qquad R'_{u}=\frac{\partial R}{\partial v_{u}}\,,\\ R''_{d}=\frac{\partial^{2}R}{\partial v_{d}^{2}}\,,\qquad R''_{u}=\frac{\partial^{2}R}{\partial
v_{u}^{2}}\,,\qquad R''_{ud}=\frac{\partial^{2}R}{\partial v_{u}\partial v_{d}}\,. \end{gather} As a check, we consider the limit $M_L=M_E=0$. In this limit $m_1\to \frac{1}{\sqrt{2}} y' v_u$ and $m_2\to \frac{1}{\sqrt{2}} y v_d$ and Eqs.~\eqref{Df11nz}-\eqref{Df12nz} reduce to \begin{align} \Delta_{11}^f & =- \beta y^4 v_d^2 \ln \frac{y^4 v_d^4}{4Q^{4}}\,, \label{Df11z}\\ \Delta_{22}^f & =- \beta y'^4 v_u^2 \ln\frac{y'^4 v_u^4}{4Q^{4}}\,, \label{Df22z}\\ \Delta_{12}^f & =0\,. \label{Df12z} \end{align} These are precisely the results that we expect in the decoupled limit. In this limit, combining Eq.~\eqref{Db'11} with Eq.~\eqref{Df11z}, and Eq.~\eqref{Db''22} with Eq.~\eqref{Df22z}, we find that the $Q$ dependence cancels out and the entire one-loop correction is independent of $Q$ (e.g., the $\ln Q^4$ pieces of Eqs.~\eqref{Db''22} and \eqref{Df22z} carry the coefficient $\beta y'^4 v_u^2$ with opposite signs and cancel in the sum). For the case that $M_L,M_E\neq 0$, one also expects the $Q$ dependence to drop out when the bosonic and the fermionic contributions are combined. However, the analysis in the bosonic sector is only approximate, and thus there may be a small residual $Q$ dependence in the total bosonic and fermionic contribution. In the next section we check the $Q$ dependence numerically and show that it is extremely small, validating our approximation in the computation of the bosonic contribution. \section{Numerical analysis}\label{Sec6} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.33]{LE0_DpR_st1.pdf} \includegraphics[scale=0.33]{LE0_DpR_st2.pdf} \\~\\ \includegraphics[scale=0.33]{LE0_Higgs_st1.pdf} ~\includegraphics[scale=0.325]{LE0_Higgs_st2.pdf} \caption{An analysis of the diphoton rate enhancement (top panels) and enhancement of the Higgs boson mass (bottom panels) for the case when the vector masses vanish, i.e., $M_L=M_E=0$. 
Left top: A plot of the diphoton rate enhancement $r_1$ (from $\tilde{\tau}'_{1,2}$) vs $A_{\tau'}$; Right top: A plot of the diphoton rate enhancement $r_2$ (from $\tilde{\tau}''_{1,2}$) vs $A_{\tau''}$. Left bottom: A plot of the Higgs mass enhancement from the $\hat{\tau}'$ sector (GeV) vs $A_{\tau'}$; Right bottom: A plot of the Higgs mass enhancement from the $\hat{\tau}''$ sector (GeV) vs $A_{\tau''}$. } \label{fig1} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.33]{LE0_DpRvsHiggs.pdf} \includegraphics[scale=0.33]{LE_DpRvsHiggs.pdf} \caption{ Left panel: A display of the correlation between the Higgs diphoton rate enhancement and the Higgs mass enhancement in the decoupled limit where $M_L=M_E=0$ as in Fig.~\ref{fig1}. Right panel: A display of the correlation between the Higgs diphoton rate enhancement and the Higgs mass enhancement for the case when the vector masses are non-vanishing where $M_L=M_E=210\GeV$. The two branches shown in each of the two plots are due to the rise and fall of the Higgs mass enhancement as exhibited in the lower panels of Figs.~\ref{fig1} and \ref{fig3}.} \label{fig2} \end{center} \end{figure} Before carrying out the numerical analysis, let us summarize the results of the analysis given in Section~\ref{Sec4} and Section~\ref{Sec5}. In Section~\ref{Sec4} the correction to the Higgs diphoton rate from the fermionic sector $r_f$ was computed in Eq.~\eqref{fermionCon}, and the correction from the bosonic sector $r_b$ was computed in Eq.~\eqref{rb}, while the total diphoton rate enhancement $R_{\gamma\gamma}$ is given in Eq.~\eqref{RGG2}. The Higgs boson mass enhancement from the exchange of the vector-like supermultiplets is given in Section~\ref{Sec5}, where the bosonic contribution is given by Eqs.~\eqref{Db'11}-\eqref{Db'12} and \eqref{Db''11}-\eqref{Db''12}, while the fermionic contribution is given by Eqs.~\eqref{Df11nz}-\eqref{Df12nz}. 
In this section, for the numerical analysis we impose the constraint that the masses of the new particles be consistent with the experimental lower limits~\cite{Beringer:1900zz}.\\ First, we discuss the decoupled limit where $M_L=M_E=0$. In this case, both the fermionic sector and the bosonic sector of the vector-like supermultiplets are totally decoupled, and we label the two sectors as the $\hat{\tau}'$ and $\hat{\tau}''$ sectors, where $\hat{\tau}'$ denotes contributions from both $\tau'$ and its super-partners $\tilde\tau'_{1,2}$, and similarly for $\hat{\tau}''$. Here we choose the following parameters: $M_1=M_2=500\GeV,\,\mu=1\TeV,\,\tan\beta=1.4,\,\alpha=\beta-\pi/2,\, y=y'=1$ and $m_h^{\rm MSSM}=120\GeV$. Using the above parameters and Eq.~\eqref{fermionCon} we find that the fermionic contribution to the Higgs diphoton rate $r_f$ is roughly $-0.4$ in this case, which is a large negative effect. However, this is compensated by the contribution from the bosonic sector, and this contribution is displayed in the upper two panels of Fig.~\ref{fig1}. The upper left panel displays the diphoton rate enhancement from the exchange of $\tilde{\tau}'_{1,2}$ in the loop versus $A_{\tau'}$ while the upper right panel displays the diphoton rate enhancement from the exchange of $\tilde{\tau}''_{1,2}$ versus $A_{\tau''}$. As expected, in each case we find that the contribution from scalar loops enhances the diphoton rate. The total contribution arising from the sum of the fermionic and the bosonic sectors will be given when we discuss Fig.~\ref{fig2}. \\ \begin{figure}[t!] \begin{center} \includegraphics[scale=0.33]{LE_DpR_st1.pdf} \includegraphics[scale=0.33]{LE_DpR_st2.pdf} \\~\\ \includegraphics[scale=0.33]{LE_Higgs_st2.pdf} \includegraphics[scale=0.33]{Qdep.pdf} \caption{ An analysis of the diphoton rate enhancement (top panels) and enhancement of the Higgs boson mass (bottom panels) for the case when the vector masses are non-vanishing where $M_L=M_E=210\GeV$. 
Left top: A plot of the Higgs diphoton rate enhancement $r_1$ (from $\tilde{\tau}'_{1,2}$) vs $A_{\tau'}$; Right top: A plot of the Higgs diphoton rate enhancement $r_2$ (from $\tilde{\tau}''_{1,2}$) vs $A_{\tau''}$. Left bottom: A plot of the total Higgs mass enhancement vs $A_{\tau''}$; Right bottom: A plot of the total Higgs mass enhancement vs the renormalization group scale $Q$. The three horizontal lines correspond to values of $A_{\tau''}=500\GeV$ (bottom), $1000\GeV$ (middle), $1500\GeV$ (top). } \label{fig3} \end{center} \end{figure} An analysis of the enhancement of the Higgs boson mass in the decoupled case ($M_L=M_E=0$) is given in the lower two panels of Fig.~\ref{fig1}. The lower left panel of Fig.~\ref{fig1} gives a display of the Higgs mass enhancement from the $\hat{\tau}'$ sector (including bosonic and fermionic contributions) versus $A_{\tau'}$. Here the contribution to the Higgs boson mass is rather modest, not exceeding much more than 2~GeV over the entire range of $A_{\tau'}$. A similar analysis for the mirror sector ($\hat{\tau}''$) is given in the lower right panel of Fig.~\ref{fig1}, where the Higgs boson mass enhancement is plotted against $A_{\tau''}$. Here large contributions are seen to arise. We turn now to a display of the combined diphoton rate from the fermionic and the bosonic sectors versus the combined Higgs boson mass enhancement from the fermionic and bosonic sectors. This analysis is presented in the left panel of Fig.~\ref{fig2}, where we display the total diphoton rate enhancement $R_{\gamma\gamma}$ as defined in Eq.~\eqref{RGG2} versus the total Higgs mass correction (here we chose the maximum value of the diphoton rate enhancement from the $\tilde{\tau}'$ sector, which corresponds to $A_{\tau'}=3100\GeV$). 
While a simultaneous enhancement in both sectors does occur, one finds that in this case the sizes are rather modest; e.g., one has a 3-4~GeV enhancement in the Higgs boson mass together with a 30\% enhancement in the diphoton rate. \\ Next we discuss the case when $M_L,M_E$ are non-vanishing. Here we choose the following parameters: $M_L=M_E=210\GeV,\,M_1=M_2=600\GeV,\,Q=\mu=1\TeV,\,\tan\beta=3,\,\alpha=\beta-\pi/2,\,y=y'=1$, and $m_h^{\rm MSSM}=120\GeV$. This time, the contribution to the diphoton rate from the fermionic sector is positive and gives $r_f \approx +0.1$ on using Eq.~\eqref{fermionCon}. The bosonic contribution is exhibited in the upper two panels of Fig.~\ref{fig3}, where the upper left panel displays the contribution from the exchange of $\tilde{\tau}'_{1,2}$ in the loop versus $A_{\tau'}$ while the upper right panel displays the contribution from the exchange of $\tilde{\tau}''_{1,2}$ in the loop versus $A_{\tau''}$. Here essentially all of the bosonic sector enhancement comes from the $\tau''$ sector. \\ In the lower left panel of Fig.~\ref{fig3} we display the {\em total} Higgs mass enhancement (adding up both the bosonic and fermionic contributions) versus $A_{\tau''}$, where we choose $A_{\tau'}=1000\GeV$. Similar to the diphoton enhancement, the major contribution to the Higgs boson mass enhancement is also from the exchange of $\tilde{\tau}''_{1,2}$. In the lower right panel of Fig.~\ref{fig3}, we display the total Higgs mass enhancement versus the renormalization group scale $Q$. Again we choose $A_{\tau'}=1000\GeV$, and three specific values of $A_{\tau''}$, which correspond to three different values of the Higgs mass enhancement, are chosen as shown in the plot. The values for the scale $Q$ cover a large range from 500~GeV to 10~TeV, and we see three almost straight horizontal lines for the Higgs mass enhancement as a function of $Q$. 
This plot shows that the Higgs mass enhancement has almost no dependence on the scale $Q$, which verifies that our approximation in computing the bosonic contribution to the Higgs mass is valid. Combining the diphoton rate from both the bosonic and the fermionic sectors of the vector-like supermultiplets, we display in the right panel of Fig.~\ref{fig2} the total diphoton rate enhancement $R_{\gamma\gamma}$ versus the total Higgs mass correction (where again we fix the contribution from $\tilde{\tau}'_{1,2}$ by choosing $A_{\tau'}=1000\GeV$). Here we find that, including the vector masses, one can easily achieve both a diphoton rate enhancement and a Higgs mass enhancement of substantial size. In Fig.~\ref{fig4} we give a display of the slepton masses. Here one finds that the slepton masses from the new sector are typically in the few hundred GeV range except near the end points, and lie substantially above the experimental lower limits~\cite{Beringer:1900zz}. These mass ranges are consistent with the electroweak constraints which have been discussed in a number of works~\cite{Cynolter:2008ea,Martin:2009bg,Martin:2010dc,ArkaniHamed:2012kq,Kearney:2012zi,Joglekar:2012vc}. \\ \begin{figure}[t!] \begin{center} \includegraphics[scale=0.32]{SleptonA1.pdf} \includegraphics[scale=0.32]{SleptonA2.pdf} \caption{A display of the slepton masses versus the trilinear couplings in the case $M_L=M_E=210\GeV$. Left panel: A plot of the $\tilde{\tau}'_{1,2}$ masses vs $A_{\tau'}$. Right panel: A plot of the $\tilde{\tau}''_{1,2}$ masses vs $A_{\tau''}$. We note that the slepton masses over most of the parameter space lie significantly above the experimental lower limits~\cite{Beringer:1900zz}.} \label{fig4} \end{center} \end{figure} Finally, we comment on the vacuum stability constraints. 
These constraints on the $\hat\tau'$ and $\hat\tau''$ sectors are similar to those discussed for the stau sector of the MSSM and arise from the left-right mixing of the staus~\cite{Hisano:2010re,Kitahara:2012pb,Carena:2012mw}. The mixings lead to cubic terms in the Higgs potential, expanded around the electroweak symmetry breaking vacuum, of the type $- y \mu h \tilde \tau'_L \tilde \tau'_R$ and $-y' \mu h \tilde \tau''_L \tilde\tau''_R$. Such terms can in some cases generate new global minima. The parameter that controls the instability is $\mu\tan\beta$. Without going into details, we note that because of the smallness of $\mu\tan\beta$ for the analysis given in Figs.~\ref{fig1}-\ref{fig4}, the solutions we present are consistent with the vacuum stability constraints. \section{Conclusion}\label{Sec7} In this work we consider an extension of the MSSM with vector-like leptonic supermultiplets and its possible implications for the Higgs diphoton rate and the Higgs boson mass. Specifically, we compute one-loop corrections to the diphoton rate of the Higgs boson via the exchange of the new leptons and their super-partners as well as their mirrors. A similar analysis is carried out for the Higgs boson mass, where we compute corrections to its mass using the renormalization group improved Coleman-Weinberg effective potential with contributions arising also from these new particles. It is found that an enhancement of the diphoton rate as large as 1.8 can occur, and simultaneously a positive correction of 4-10 GeV to the Higgs boson mass can be obtained due to the exchange of the vector-like supermultiplets. A correction of this size can have a significant effect in relieving the constraints on weak scale supersymmetry. \\ In the supergravity unified model with universal boundary conditions at the GUT scale, one finds that for a Higgs mass in the 125-126 GeV region, the squark masses are rather heavy (see Fig.~1 of~\cite{Akula:2011aa}) and would be difficult to access at the LHC. 
However, a 5-10 GeV contribution to the Higgs mass from the new sector would put the MSSM component of the Higgs mass in the 116-120 GeV range, which allows a significant lowering of the universal scalar mass (see Fig.~1 of~\cite{Akula:2011aa}). Thus a Higgs mass correction of the size discussed in this work not only gives a significant correction to the diphoton rate but also lowers the scale of supersymmetry, making sparticles more accessible in the next round of experiments at the LHC~\cite{Baer:2009dn}. We also note that in the right panel of Fig.~\ref{fig4} one finds that one of the scalar mass eigenvalues can lie close to the current experimental lower limit, and thus such states could be accessible at the LHC and at the ILC.\\ The vector-like leptons can be produced at the LHC via processes such as $pp\to Z \to \tau'\bar\tau'$. The charged vector-like leptons will likely decay inside the detector via their gauge interactions similar to any heavy lepton, e.g., $\tau' \to \tau \nu_{\tau'}\bar\nu_{\tau}$ with the subsequent decay of the $\nu_{\tau'}$. The decay of $\nu_{\tau'}$ would depend on mixings and is model dependent, but in the end it could produce $l^+l^- \nu_{\tau}$. In this case we have as many as three charged leptons and missing $E_T$. However, an accurate analysis of the background is needed to quantify the size of the signal, which is outside the scope of this work. Of course, the best chance of seeing these particles would be at the ILC through the process $e^+e^- \to Z \to \tau'\bar \tau'$ if sufficient center of mass energy is available.\\ \noindent {\it Acknowledgments:} WZF is grateful to HaiPeng An and Tao Liu for helpful discussions. The work of PN is supported in part by the U.S. National Science Foundation (NSF) grants PHY-0757959 and PHY-070467. WZF is supported by funds from The Hong Kong University of Science and Technology.
\section{Introduction} In some pulsars, the pulsed radio emission often abruptly stops for several periods. This phenomenon, called ``pulse nulling'', was discovered by Backer in 1970 (Backer 1970). The fraction of pulses with no detectable emission is known as the nulling fraction (NF) and is a measure of the degree of nulling in a pulsar. Several attempts have been made to correlate pulsar NF with various pulsar parameters, but to date no strong correlation has been found. Ritchings (1976) concluded that NF in general increases with pulsar period, and that pulsars close to the ``death line'' in the $P-\dot{P}$ diagram are more likely to null. When pulsars were grouped into different classes based on average profile morphology, no correlation between NF and age was found within a given morphological class, implying that stars which null more belong to profile classes that are systematically older (Rankin 1986). A detailed study of 72 nulling pulsars suggested a correlation between NF and pulsar period (Biggs 1992). In the recent past, with this motivation, more sensitive studies of nulling, aimed at estimating pulsar NFs more accurately, have been carried out on larger samples of pulsars (Vivekanand 1995; Wang et al. 2007). As the pulsar population has more than doubled after the recent Parkes multi-beam pulsar survey (Manchester et al. 2001), a 325-MHz survey of probable nulling pulsars, discovered in the Parkes multi-beam pulsar survey, with the Giant Metrewave Radio Telescope (GMRT; Swarup et al. 1991) is being carried out by us to estimate their NFs. This paper presents the results on the peculiar nulling behaviour of PSR J1738$-$2330 observed in our survey. The GMRT observations are described in Section 2 along with the analysis procedure used. In Section 3, our results are presented and these are discussed in Section 4. 
\begin{figure} \centering \psfig{figure=fig1.ps,width=4in,angle=0} \caption{ON-pulse and OFF-pulse energy distributions are shown in the left panel for PSR B0809+74 and in the right panel for PSR B1112+50} \label{fig1} \end{figure} \section{GMRT Observations} For the initial phase of this survey, probable nulling pulsars were selected from those discovered in the Parkes multi-beam pulsar survey. A few well known nulling pulsars, such as PSR B0809+74 and PSR B1112+50, were also included as control pulsars. Observations of 20 sources, for about 2 hours each, were carried out with the GMRT in a phased array mode at 325 MHz with 16 MHz of bandwidth. The two hands of circularly polarized voltages from 15 GMRT antennas, including the compact central square and 3 arm antennas, were added after compensating for phase delays, forming a coherent sum. The sum of the detected polarized powers was then recorded on a hard disk with a sampling time of 1 ms and 256 spectral channels across the band. The minimum detectable flux with a signal-to-noise ratio (SNR) of 6$\sigma$ was estimated to be around 7.4 mJy. Thus, these observations are more sensitive to single pulses than some of the previous observations (Wang et al. 2007). \begin{figure} \centering \psfig{figure=fig2.ps,width=4in,angle=-90} \caption{The top plot shows the modulation of pulsar energy for successive 16 period subintegrations as a contour plot for bins 1 to 42 of the data dedispersed to 128 bins across the period. Bursts near subintegrations 15, 38, 70, 75, 105, 118, 145, 178 and 210 are seen in the bins corresponding to the pulse window in the average profile, shown in the lower plot, interspersed with nulls.} \label{fig3} \end{figure} The data for each pulsar were dedispersed using programs in the publicly available package SIGPROC (http://sigproc.sourceforge.net). The resultant time series was folded every period to 128 phase bins across the pulse period. 
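The folding step just described can be sketched in a few lines of Python. This is a minimal illustration of phase folding (our own sketch, not the SIGPROC implementation), assuming a regularly sampled, already dedispersed time series with a constant period:

```python
def fold(series, dt, period, nbins=128):
    """Fold a dedispersed time series modulo the pulse period.

    series : intensity samples (arbitrary units)
    dt     : sampling time in seconds (1 ms in our observations)
    period : pulse period in seconds
    Returns the mean intensity in each of nbins phase bins.
    """
    totals = [0.0] * nbins
    counts = [0] * nbins
    for i, x in enumerate(series):
        phase = (i * dt) % period / period      # pulse phase in [0, 1)
        b = min(int(phase * nbins), nbins - 1)  # guard against rounding
        totals[b] += x
        counts[b] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]

# Ten periods of a synthetic 1.9 s pulsar with a pulse occupying the
# first 5 per cent of each period, sampled at 1 ms.
dt, period = 0.001, 1.9
series = [1.0 if (i * dt) % period < 0.05 * period else 0.0
          for i in range(19000)]
profile = fold(series, dt, period)
```

Averaging many rotations in this way builds up the integrated profile even when individual pulses are weak; an actual pipeline would also account for the slowly varying topocentric period, which this sketch ignores.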
A baseline, estimated using bins 25 to 120 away from the pulse, was subtracted from the data for each period. Then, two sequences were formed by averaging the energies in bins 13 to 17 (ON-pulse energy) and bins 42 to 46 (OFF-pulse energy). The two sequences were then normalized by the mean pulse energy. The energies in the scaled sequences were also binned to 100 bins to form the ON-pulse and OFF-pulse energy distributions. An excess at zero energy in the ON-pulse energy distribution indicates the fraction of nulled pulses, or NF, of the pulsar. This can be estimated by removing a scaled version of the OFF-pulse energy distribution at zero energy from the ON-pulse distribution. The procedure is similar to that used for detecting pulse nulling in single pulse sequences (see Ritchings 1976; Vivekanand 1995). The procedure was first applied to data on two well known nulling pulsars, PSR B0809+74 and PSR B1112+50. The ON-pulse and OFF-pulse energy distributions for PSR B0809+74 are shown in the left panel of Figure \ref{fig1}. This prominently nulling pulsar has a clear bimodal ON-pulse energy distribution, with two peaks: one around the mean pulse energy and the other around zero pulse energy. The zero energy excess represents the nulled pulses, and the ratio of their number to the total number of pulses gives an estimate of its NF. This was found to be 1 percent, consistent with previous studies (Lyne \& Ashworth 1983). In contrast, PSR B1112+50 is known to exhibit a large number of nulled pulses and this is evident from its ON-pulse and OFF-pulse energy distributions shown in the right panel of Figure \ref{fig1}. These distributions provide an estimate for this pulsar's NF of 61 percent, comparable to previously known results (Ritchings 1976). \section{Nulling in PSR J1738$-$2330} PSR J1738$-$2330 was discovered in the Parkes multi-beam pulsar survey (Manchester et al. 2001; Lorimer et al. 2006). It has a period of about 1.9 s and a moderate DM (99.3~cm$^{-3}$~pc). 
It was observed on November 22, 2008 for about two hours with the GMRT. The dedispersed data were folded every 16 periods to 128 bins and these are shown in Figure \ref{fig3} along with its average profile. The pulsar seems to have periodic bursts, with an average duration of about 50 periods, interspersed with nulls of about 510 periods. Work is currently in progress to check this periodic behaviour using Fourier analysis. Recently, evidence for such periodic nulling has been reported in PSR B1133+16 (Herfindal \& Rankin 2007) and PSR J1752+2359 (Lewandowski et al. 2004). If the periodic feature is confirmed, PSR J1738$-$2330 joins this class of pulsars. It is also evident from Figure \ref{fig3} that the pulsar has a high NF. The null periods were visually identified from Figure \ref{fig3}. Average profiles of nulled pulses and burst pulses ({\it i.e.,} periods with detectable emission in the pulse window), culled from this single pulse analysis, were formed and are shown in Figure \ref{fig4}. It is clear from the average profile of all nulled pulses that there is no detectable weak emission during the pulse window. The average flux density in the pulsed emission during burst pulses is 94 times higher than that during nulled pulses, similar to results on other pulsars where this has been studied (Lyne \& Ashworth 1983; Vivekanand \& Joshi 1997). \begin{figure} \centering \psfig{figure=fig3.ps,width=4in,angle=0} \caption{Average profile of 3140 null pulses (top panel) and 293 burst pulses (bottom panel) for PSR J1738$-$2330} \label{fig4} \end{figure} The ON-pulse and OFF-pulse energy distributions for this pulsar were obtained in a manner similar to PSR B0809+74 and B1112+50 and these are shown in Figure \ref{fig5}. These distributions are similar to those of PSR B1112+50, confirming a high NF for this pulsar. Our preliminary estimate for the upper limit to the NF from this analysis is about 90 percent. 
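The zero-energy-excess idea underlying these NF estimates can be illustrated with a deliberately simplified variant of the histogram method of Section 2 (our sketch with synthetic data, not the analysis code used here): since nulled pulses contribute only noise to the ON-pulse window, the fraction of ON-pulse energies at or below zero, compared with the same fraction for the OFF-pulse window, estimates the NF.

```python
import random

def nulling_fraction(on, off, zero=0.0):
    """Estimate the NF from ON/OFF pulse-energy sequences.

    Nulled pulses contribute only noise to the ON-pulse window, so the
    fraction of ON energies at or below zero should be NF times the
    corresponding OFF fraction (about 0.5 for zero-mean noise).
    """
    on_frac = sum(e <= zero for e in on) / len(on)
    off_frac = sum(e <= zero for e in off) / len(off)
    if off_frac == 0:
        raise ValueError("OFF-pulse energies never fall at or below zero")
    return min(1.0, on_frac / off_frac)

# Synthetic pulsar: 60 per cent nulls (pure noise) and 40 per cent
# bursts well above the noise, with matched OFF-pulse noise.
random.seed(1)
on = [random.gauss(0, 1) for _ in range(600)] + \
     [random.gauss(5, 1) for _ in range(400)]
off = [random.gauss(0, 1) for _ in range(1000)]
print(round(nulling_fraction(on, off), 2))
```

Our actual estimates come from fitting the full ON- and OFF-pulse energy histograms, which is more robust than this two-number comparison, particularly when burst pulses are weak.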
\begin{figure} \centering \psfig{figure=fig4.ps,width=4in,angle=0} \caption{ON-pulse and OFF-pulse energy distributions for PSR J1738$-$2330 } \label{fig5} \end{figure} \section{Discussion and future work} PSR J1738$-$2330 seems to show a periodic null-burst cycle. An upper limit to its NF of about 90 percent was obtained for the first time. The pulsed flux density declines by a factor of 94 during the nulled pulses in this pulsar. Nulling is a poorly understood phenomenon. It could be due to a cessation of pair production in the polar gap (Ruderman \& Sutherland 1975). In this framework, the nulling behaviour of PSR J1738$-$2330 suggests a periodic instability in the pair cascade in the polar gap. Another interesting possibility has been suggested recently by Herfindal and Rankin (2007), where a periodicity in nulling could be caused by a partially ignited sub-beam carousel. If this is indeed the case, a large number of sub-beams must remain unignited in any carousel model proposed for this pulsar. Alternatively, the nulled pulses could be caused by refraction effects or precession of the star. However, multi-frequency observations and a polarization study of this pulsar are required to test these models and such studies are planned in future with the GMRT. \acknowledgements The Giant Metrewave Radio Telescope is a project of the National Centre for Radio Astrophysics, which is funded by the Tata Institute of Fundamental Research and the Department of Atomic Energy.
\section{Abstract} In microarray experiments, it is often of interest to identify genes which have a pre-specified gene expression profile with respect to time. Methods available in the literature are, however, typically not stringent enough in identifying such genes, particularly when the profile requires equivalence of gene expression levels at certain time points. In this paper, the authors introduce a new methodology, called gene profiling, that uses simultaneous differential and equivalent gene expression level testing to rank genes according to a pre-specified gene expression profile. Gene profiling treats the vector of true gene expression levels as a linear combination of appropriate vectors, i.e., vectors that give the required criteria for the profile. This gene-profile model is fitted to the data and the resultant parameter estimates are summarized in a single test statistic that is then used to rank the genes. The theoretical underpinnings of gene profiling (equivalence testing, intersection-union tests) are discussed in this paper, and the gene profiling methodology is applied to our motivating stem cell experiment. \\\\ \noindent{\it Keywords}: Gene expression; Gene profiling; Linear model; Microarray; Pluripotency; Stem cell; Time course experiment. \section{Introduction} \label{sec:introduction} Microarray technology enables researchers to examine the expression levels for many thousands of genes simultaneously \citep[see, for example,][]{Nguyen:2002,Smyth:2003}. Increasingly, information on gene expression is used to infer cell protein levels and thus cellular behaviour \citep{Nguyen:2002,Smyth:2003,Ahnert:2006,McLachlan:2006}. A further major area of interest is in investigating changes in gene expression levels over time in a population of cells \citep{Dudoit:2002,Bar-Joseph:2003,Glonek:2004,Tai:2005,Ernst:2005,Brown:2006,Ahnert:2006} and this is the subject of the present paper. 
We refer to the gene expression levels over time as a gene expression profile, or profile for short. \par Several methods of analysing gene expression profiles fall into the class of techniques known as unsupervised learning methods. These methods seek to group genes into a number of classes based upon their observed profiles. Some of the methodologies discussed in the recent microarray literature are hierarchical classification \citep{Eisen:1998}, self-organizing maps \citep{Tamayo:1999}, the $K$-means algorithm \citep{Tavazoie:1999}, multivariate Gaussian mixtures \citep{Ghosh:2002,Yeung:2001}, and mixtures of linear mixed models \citep{Celeux:2005}. A related problem that arises in applications of microarray time course experiments is to specify, in advance, a gene expression profile of interest and then to identify the genes with matching expression profiles. However, unsupervised methods do not address this problem and various alternative approaches have been proposed. \par One such method is Pareto optimization, proposed by \cite{Fleury:2002} and \cite{Hero:2004}, in which a set of functions, each measuring the association of a gene to a pre-specified profile, is chosen. Genes found to be Pareto-optimal with respect to these criteria are identified as matching the pre-specified profile. The main disadvantage of Pareto optimization is that some genes will be selected as Pareto-optimal whilst matching the pre-specified profile on only a subset of its criteria. \par In an unpublished paper, \cite{Lonnstedt:2006} describe a different method for ranking genes, based on the inner product between the vector of observed log ratios and a pre-specified profile. This method works well for some profiles, but did not provide useful outcomes in our application. 
\par Gene profiling is a new approach developed by the present authors, which aims to identify genes that match a pre-specified gene expression profile, with greater specificity than the previously described approaches. Gene profiling entails treating the vector of true gene expression levels for each gene as a linear combination of linearly independent vectors chosen to represent the pre-specified profile. The gene-profile model is fitted to the observed log ratios, and the genes are ranked by a single test statistic which incorporates simultaneous differential and equivalent gene expression testing. \par In Section \ref{sec:motivation}, our motivation for gene profiling is presented. Section \ref{sec:design} sets out the details of the experimental design for a pluripotent (stem cell) time course experiment which provided our initial motivation for the ensuing methodological development. The theoretical underpinnings of gene profiling are described in Section \ref{sec:profile}, which entails a review of equivalence testing (Section \ref{sec:equiv}) and intersection-union tests (Section \ref{sec:IUT}). The gene profiling methodology is set out in Section \ref{sec:method}, and the results obtained from our application to a stem cell experiment are presented in Section \ref{sec:results}. In Section \ref{sec:discussion}, some further work and the application of the methods in {\tt limma} are briefly discussed. \section{Motivation: pluripotency} \label{sec:motivation} Our motivating example is a stem cell experiment originally conducted by the Rathjen laboratory, formerly of the University of Adelaide. The aim of the experiment was to identify genes associated with pluripotency in mouse embryonic stem cells \citep{DAmour:2003,Ramalho-Santos:2002}. Early stem cells have the potential to differentiate into any body cell: a property known as pluripotency. This ability is present in mouse stem cells up to and including day three. 
After this, the stem cells become multipotent: they still have the ability to differentiate into different types of cells, but now only into a limited number. For example, haemopoietic stem cells can differentiate into blood cells but not nerve cells. As pluripotency is restricted to the early stem cells, day 3 or earlier, genes that have high expression levels in cells up to day 3, but low, or monotonically decreasing, expression levels thereafter are likely to be associated with the biochemical pathways involved in the pluripotency ability of these cells (personal communication, Dr Chris Wilkinson). \subsection{Pluripotency example: experimental design} \label{sec:design} Stem cells were isolated from the early embryo and grown in culture dishes. The cells were allowed to replicate and grow over the medium in the dish. Once the cells had crowded the plate, they were removed, separated and plated onto new plates. This cycle of growth and re-plating is called a {\it passage}. The Rathjen laboratory isolated mouse embryonic stem cells, and for this experiment, used cells from passages 21, 22, 23 and 24. The cells were stimulated to differentiate into multipotent cells, and on days 0, 3, 6 and 9 after stimulation, samples were taken and the messenger RNA (mRNA) obtained. \par The gene expressions of the 16 samples of stem cell mRNA for the four days (0, 3, 6, 9) and four passages were measured. Within each passage, five comparisons were made, namely, day 0 to day 3, day 0 to day 9, day 3 to day 6, day 3 to day 9, and day 6 to day 9. The experimental design in terms of the true gene expression levels, $\boldsymbol{\mu}$ (see Section \ref{sec:profile}), is summarized in Figure \ref{fig:design}, while the experimental design in terms of the gene profiling parameters, $\boldsymbol{\gamma}$ (Section \ref{sec:profile}), is compared to the design in terms of the true expression levels in Table \ref{tab:parameter}. 
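The five within-passage comparisons can be encoded as a contrast matrix acting on $\boldsymbol{\mu}=(\mu_0,\mu_3,\mu_6,\mu_9)'$. The short pure-Python sketch below (ours, for illustration only) confirms that this matrix has rank 3: two-colour log ratios determine only differences of the $\mu$'s, so that $\gamma_1$, $\gamma_2$ and $\gamma_3$ of Table \ref{tab:parameter} are estimable while the overall level $\gamma_0$ is not.

```python
from fractions import Fraction

# Rows: the five within-passage comparisons of Figure 1, acting on
# mu = (mu0, mu3, mu6, mu9): day0-day3, day0-day9, day3-day6,
# day3-day9 and day6-day9.
C = [
    [1, -1, 0, 0],
    [1, 0, 0, -1],
    [0, 1, -1, 0],
    [0, 1, 0, -1],
    [0, 0, 1, -1],
]

def rank(rows):
    """Rank by exact Gauss-Jordan elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank(C))  # 3
```

Since all five rows are orthogonal to the constant vector, the rank can be at most 3; the computation shows it is exactly 3, so every contrast of the $\mu$'s is estimable from the design.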
The clone library used in the experiment was the Compugen 22,000 mouse oligonucleotide library (http://www.microarray.adelaide.edu.au/libraries/). In total, 20 arrays were hybridized on two-colour long-oligonucleotide microarrays, with five arrays within each passage. The five arrays consisted of the five comparisons detailed above. In this analysis, the stem cells from each passage were treated as independent biological replicates. \begin{figure}[htbp] \begin{center} \psset{xunit=1cm,yunit=1cm} \begin{pspicture} (0,0)(4,4) \rput(0,4){$\mu_0$} \rput(0,0){$\mu_9$} \rput(4,4){$\mu_3$} \rput(4,0){$\mu_6$} \psline{->}(0.3,-0.05)(3.75,-0.05) \psline[linestyle=dashed]{<-}(0.25,0.05)(3.7,0.05) \psline[linestyle=dashed]{->}(0.3,4.05)(3.75,4.05) \psline{<-}(0.25,3.95)(3.7,3.95) \psline[linestyle=dashed]{->}(-0.05,0.3)(-0.05,3.75) \psline{<-}(0.05,0.25)(0.05,3.7) \psline{->}(3.95,0.3)(3.95,3.75) \psline[linestyle=dashed]{<-}(4.05,0.25)(4.05,3.7) \psline[linestyle=dashed]{->}(0.25,0.3)(3.75,3.8) \psline{<-}(0.25,0.2)(3.75,3.7) \end{pspicture} \caption{Microarray comparisons made within each passage for the pluripotency stem cell experiment. Each arrow represents two arrays, one for each passage (passage 21/22 continuous arrow, passage 23/24 dashed arrow), with the arrow head pointing to the sample labeled with cy5, and the sample at the arrow tail labeled with cy3. 
Days 0, 3, 6, and 9 are represented by $\mu_0,\mu_3,\mu_6$, and $\mu_9$ respectively.} \label{fig:design} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{ccc} \hline Day & Parameterization & Parameterization\\ & in terms of $\mu$'s & in terms of $\gamma$'s\\ \hline 0 & $\mu_0$ & $\gamma_0+\gamma_1+\gamma_2+\frac12\gamma_3$\\ 3 & $\mu_3$ & $\gamma_0+\gamma_1+\gamma_2-\frac12\gamma_3$\\ 6 & $\mu_6$ & $\gamma_0+\gamma_2$\\ 9 & $\mu_9$ & $\gamma_0$\\ \hline \end{tabular} \end{center} \caption{Parameterization of stem cell experiment in terms of absolute mean gene expressions ($\mu_i, i=0,3,6,9$) and in terms of the gene profile coefficients ($\gamma_i, i=0,1,2,3$).} \label{tab:parameter} \end{table} \section{Gene profiling methodology} \label{sec:profile} \subsection{Development of method for stem cell experiment} \label{sec:stem_cell} The expression criteria over time required for a pluripotent gene are: \begin{itemize} \item equal gene expression levels for days 0 and 3, \item higher gene expression levels for days 0 and 3 compared to day 9, and \item the gene expression level for day 6 to lie between the gene expression levels for day 0 and day 3, and the gene expression level for day 9. \end{itemize} The requisite (hypothetical) profile is illustrated in Figure \ref{fig:pluri}. \par Consider the vector of true mean gene expression levels, $\boldsymbol{\mu}=\left(\mu_0,\mu_3,\mu_6,\mu_9\right)'$, where $\mu_i, i=0,3,6,9$ is the mean gene expression level on day $i$ as shown in Figure \ref{fig:design}. Since this is a vector in $\mathbb{R}^4$, it can be expressed as a linear combination of four linearly independent vectors. The first step in gene profiling is to choose vectors that represent the criteria for pluripotency. 
In the present example, this corresponds to \begin{eqnarray} \boldsymbol{\mu}=\gamma_0\left(\begin{array}{r}1 \\1 \\1 \\1\end{array}\right)+\gamma_1\left(\begin{array}{r}1 \\1 \\ 0 \\ 0\end{array}\right)+\gamma_2\left(\begin{array}{r}1 \\1 \\1 \\0\end{array}\right)+\gamma_3\left(\begin{array}{r}1/2 \\\mbox{-}1/2 \\0 \\0 \end{array}\right).\label{eq:mu_gamma} \end{eqnarray} With this choice of model, it follows that $\gamma_0=\mu_9, \gamma_1=(\mu_0+\mu_3)/2-\mu_6, \gamma_2=\mu_6-\mu_9,$ and $\gamma_3=\mu_0-\mu_3.$ Therefore, the pluripotent profile requires that $\gamma_1>0, \gamma_2>0, \gamma_3=0,$ but does not constrain $\gamma_0$. To find genes that achieve these criteria requires tests for equivalence as well as (simultaneous) tests for differential gene expression. In the next section, equivalence testing is discussed. We then describe how to simultaneously test for both differential and equivalent gene expression in a time course experiment. \begin{figure}[!ht] \begin{center} \includegraphics{pluri_profile_anno} \caption{The pre-specified gene expression profile for pluripotent genes. For each day, the log ratio with respect to day 0 is plotted.} \label{fig:pluri} \end{center} \end{figure} \subsection{Statistical Equivalence} \label{sec:equiv} To determine pluripotency, it is necessary to demonstrate that $\gamma_3=0$. Conventional hypothesis testing is not applicable to this situation, but the equivalence testing approach discussed in \cite{Wellek:2002} is. 
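To make the parameterization concrete, the closed-form relations between $\boldsymbol{\gamma}$ and $\boldsymbol{\mu}$ given above, together with an interval-inclusion style equivalence check of the kind developed in this section, can be sketched in a few lines of Python. All numerical values (expression levels, standard error, critical value $t^*$) are hypothetical, and the sketch is purely illustrative; the analysis in the paper itself was carried out with {\tt limma} in {\tt R}:

```python
# Check of the parameterization in equation (eq:mu_gamma), plus an illustrative
# confidence-interval-inclusion check for equivalence of gamma3 to zero.
# All numbers are hypothetical, chosen only to exercise the formulas.

def mu_to_gamma(mu0, mu3, mu6, mu9):
    """Closed-form inverse of the parameterization: gamma from mu."""
    return {"gamma0": mu9,
            "gamma1": (mu0 + mu3) / 2 - mu6,
            "gamma2": mu6 - mu9,
            "gamma3": mu0 - mu3}

def gamma_to_mu(g):
    """Forward map: the four basis vectors of equation (eq:mu_gamma)."""
    g0, g1, g2, g3 = (g[k] for k in ("gamma0", "gamma1", "gamma2", "gamma3"))
    return (g0 + g1 + g2 + g3 / 2,   # day 0
            g0 + g1 + g2 - g3 / 2,   # day 3
            g0 + g2,                 # day 6
            g0)                      # day 9

mu = (5.0, 4.8, 3.1, 1.2)            # hypothetical mean log expression levels
g = mu_to_gamma(*mu)
assert all(abs(a - b) < 1e-9 for a, b in zip(gamma_to_mu(g), mu))

def cii_equivalent(gamma_hat, se, t_star, eps):
    """Confidence interval inclusion: conclude equivalence to zero iff the
    interval (gamma_hat - t*SE, gamma_hat + t*SE) lies inside (-eps, eps)."""
    return -eps < gamma_hat - t_star * se and gamma_hat + t_star * se < eps

print(cii_equivalent(0.2, 0.3, 2.1, 1.0))  # True: (-0.43, 0.83) inside (-1, 1)
```

The round-trip assertion verifies the stated identities $\gamma_0=\mu_9$, $\gamma_1=(\mu_0+\mu_3)/2-\mu_6$, $\gamma_2=\mu_6-\mu_9$ and $\gamma_3=\mu_0-\mu_3$.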
\par If $\boldsymbol{X}$ is a random vector whose probability distribution depends on a real-valued parameter $\theta$, then to test if $\theta$ is equivalent to zero, a neighbourhood around zero is constructed and the following null and alternative hypotheses are tested: \begin{eqnarray} H_0:&\left|\theta\right|\geq \epsilon,&\epsilon>0,\label{eq:equiv}\\ H_a:&\left|\theta\right|<\epsilon.&\nonumber \end{eqnarray} The neighbourhood defined by $\epsilon$ is the maximum amount by which the parameter can vary and still be considered equivalent to zero. This neighbourhood is necessary to ensure that the power of the statistical test is greater than its significance level \citep{Wellek:2002}. \par For the gene profiling model, the parameter $\epsilon$ is taken to be the largest amount by which a gene's mean log ratio can vary around zero without representing ``significant'' gene expression, according to biologists. In practice, a working understanding of equivalent gene expression should be decided upon in advance in consultation with biologists. Unfortunately, however, relatively little is known about gene-specific variation per se: information that could of course be used to decide on an appropriate value of $\epsilon$. We discuss potentially suitable choices of $\epsilon$ in Section \ref{sec:results}, but for the present we will assume an appropriate $\epsilon$ to be available. Using such a value of $\epsilon$, the simplest and most common way to test the hypotheses in (\ref{eq:equiv}) is via {\it Confidence Interval Inclusion} (CII). \par Consider the null and alternative hypotheses specified in (\ref{eq:equiv}). 
We calculate a confidence interval, $R_\alpha(\boldsymbol{X})$, from the observed data $\boldsymbol{X}$, where \begin{eqnarray} R_\alpha(\boldsymbol{X})=\left(L_\alpha(\boldsymbol{X}),U_\alpha(\boldsymbol{X})\right); \label{eq:region} \end{eqnarray} $L_\alpha(\boldsymbol{X})$ and $U_\alpha(\boldsymbol{X})$ are random variables, such that \begin{eqnarray*} P\left( \theta\in\left( L_\alpha(\boldsymbol{X}),\infty\right)\right)=P\left(\theta\in\left(\mbox{-}\infty,U_\alpha(\boldsymbol{X})\right)\right)=1-\alpha. \end{eqnarray*} We reject the null hypothesis in favour of equivalence if and only if \begin{eqnarray*} R_\alpha(\boldsymbol{X})\subset(\mbox{-}\epsilon,\epsilon), \end{eqnarray*} i.e., the confidence interval is contained entirely within the interval $(\mbox{-}\epsilon,\epsilon)$. This is an $\alpha$-level test. \par The equivalence formulation can be used to test that $\gamma_3$ in equation (\ref{eq:mu_gamma}) is equivalent to zero with the following null and alternative hypotheses: \begin{eqnarray} H_0:\left|\gamma_3\right|\geq \epsilon &\mbox{ vs. } & H_a:\left|\gamma_3\right|< \epsilon.\label{eq:gamma} \end{eqnarray} For example, to test the hypotheses in (\ref{eq:gamma}), the confidence interval $$ \left(\hat{\gamma}_3-t^*\mbox{SE}(\hat{\gamma}_3),\hat{\gamma}_3+t^*\mbox{SE}(\hat{\gamma}_3)\right), $$ is calculated and $\gamma_3$ is concluded to be equivalent to zero if this confidence interval lies within $\left(\mbox{-}\epsilon,\epsilon\right)$. In this confidence interval, $t^*$ is chosen such that $P(T>t^*)=\alpha$, where $T$ has a $t$-distribution with the appropriate degrees of freedom for $\gamma_3$. \par Confidence interval inclusion can also be used to (separately) test whether $\gamma_1$ and $\gamma_{2}$ are significantly positive. The null and alternative composite hypotheses for $\gamma_{1}$ are \begin{eqnarray*} H_0:\gamma_1\leq 0 &\mbox{ vs. 
} & H_a:\gamma_1>0, \end{eqnarray*} and for $\gamma_{2}$ are \begin{eqnarray*} H_0:\gamma_2\leq 0 &\mbox{ vs. } & H_a:\gamma_2>0. \end{eqnarray*} \par For an $\alpha$-level test here, a one-sided $(1-\alpha)100\%$ confidence interval for $\gamma_1$ is calculated: \begin{eqnarray*} \left(\hat{\gamma}_1-t^*\mbox{SE}(\hat{\gamma}_1),\infty\right), \end{eqnarray*} and if this interval is contained in $(0,\infty)$, $\gamma_1$ is concluded to be significantly positive. Similarly for $\gamma_{2}$. \par These methods allow testing of each criterion separately, but for pluripotency all three criteria need to be valid simultaneously. The authors' method to simultaneously test for both equivalence of parameters to zero and significant departures of parameters from zero is described in the next section. \subsection{Intersection-Union test} \label{sec:IUT} The test for each criterion discussed in Section \ref{sec:equiv} can be incorporated simultaneously into a single null and a single alternative hypothesis as follows: \begin{eqnarray} H_0:&\left(\gamma_1\leq0\right) \;\bigcup\; \left(\gamma_2\leq0\right) \;\bigcup\; \left(\left|\gamma_3\right|\geq\epsilon\right),&\epsilon>0,\label{eq:null}\\ \mbox{versus } H_a:&\left(\gamma_1>0\right) \;\bigcap\; \left(\gamma_2>0\right) \;\bigcap\; \left(\left|\gamma_3\right|<\epsilon\right)\label{eq:alt}.& \end{eqnarray} \par The hypotheses in (\ref{eq:null}) and (\ref{eq:alt}) represent an {\it intersection-union test} (IUT)\citep{Berger:1982}. To review, in an IUT, the null hypothesis is expressed as a union, $$ H_0:\theta\in\bigcup_{\gamma\in\Gamma}\Theta_\gamma, $$ where $\Theta_\gamma$ is a subset of the parameter space indexed by $\gamma$. The rejection region $R$ of this IUT is of the form $R=\bigcap_{\gamma\in \Gamma}R_\gamma$, where $R_\gamma$ is the rejection region for a test of $H_{0 \gamma}:\theta\in\Theta_{\gamma}$ versus $H_{1 \gamma}:\theta\in\Theta_{\gamma}^{c}$. 
This is an $\alpha$-level test, where $\alpha=\mbox{sup}_{\gamma\in\Gamma}\alpha_\gamma$ and $\alpha_\gamma$ is the size of the test of $H_{0 \gamma}$ with rejection region $R_\gamma$. \par Thus for each $\gamma_i, i=1,2,3$, in the null hypothesis statement (\ref{eq:null}), a test of size $\alpha_i$ is found, and the overall IUT will be of level $\mbox{sup } \alpha_i$. Using the confidence interval inclusion method discussed in the previous section to test each $\gamma_i$ separately, each test being of level $\alpha$, gives an overall $\alpha$-level test. \par Our main aim is to rank the genes in our motivating example according to their match with the pluripotent profile. The testing methodology described can be modified to give a quantitative measure of how closely each gene matches the desired profile. Considering each gene separately, confidence interval inclusion is used to test the null hypothesis associated with each parameter $\gamma_i, i=1,2,3$. Rather than using a fixed significance level, the smallest significance level $\alpha_i$ at which the null hypothesis for $\gamma_i$ is rejected is found. The supremum of $\alpha_i, i=1,2,3$ is used as the test statistic to rank the genes. In fact, in the stem cell experiment, rather than calculate $\alpha_{i}$ for each $\gamma_{i}, i=1,2,3$, the width, $U_{i}$, of the largest confidence interval for each $\gamma_{i}$ that was contained within the rejection region was used. The infimum, $U,$ of the $U_{i}$ was then used to rank the genes (it should be noted that this is equivalent to ranking based on $\sup \alpha_i$). \par To further elucidate the method, consider Figure \ref{fig:rej_region}. This illustrates a two-dimensional example where the criteria are $\gamma_1$ equivalent to zero and $\gamma_2>0$. The rejection region is indicated by the rectangular shaded region. The point $\left(\hat{\gamma}_1,\hat{\gamma}_2\right)$ is the estimate of $\left(\gamma_1,\gamma_2\right)$. 
The distance to the nearest boundary of the rejection region is calculated in standard errors of the estimate and this distance is used to rank the genes, with larger values indicative of association with pluripotency. Genes whose profiles do not lie within the rejection region are excluded from the ranking. \par The above development leads to the general methodology for determining pluripotency described in the next section. \begin{figure}[htbp] \begin{center} \includegraphics[width=3in]{rej_region3} \caption{Illustration: for each gene, the distance from $\left(\hat{\gamma}_1,\hat{\gamma}_2\right)$ to the nearest boundary of the rejection region is used to rank the genes.} \label{fig:rej_region} \end{center} \end{figure} \subsection{Gene profiling for pluripotency} \label{sec:method} The scanned images for each hybridized microarray slide were analysed using {\tt SPOT} \citep{Yang:2001} to give the cy3 and cy5 intensities for each gene \citep{Yang:2001,Adams:1994}. The data were then normalized by within-array print-tip loess, and the gene profile model was fitted to the normalized data using {\tt limma} \citep{Smyth:2005a} in {\tt R} \citep{R-Development-Core-Team:2006}. For each gene, the model parameter estimates and standard errors obtained by {\tt limma} were used to calculate the $U$ statistic (see below) using C code embedded in {\tt R} code. The genes were then ranked using the $U$ statistic. \par The vector of observed log ratios $\boldsymbol{M}$ was expressed as a linear model of the true gene expression levels $\boldsymbol{\mu}$ as follows: $$ \boldsymbol{M}=X^*\boldsymbol{\mu}+\boldsymbol{E}, $$ where $X^*$ is the design matrix representing the mRNA comparisons made on each array, and $\boldsymbol{E}$ is assumed to be distributed as $N_{20}(\boldsymbol{0},\sigma^2I)$. 
Substituting for $\boldsymbol{\mu}$ using equation (\ref{eq:mu_gamma}) gives \begin{eqnarray*} \boldsymbol{M}&=&X^* \left(\begin{array}{rrrr}1 & 1 & 1 & 1/2 \\1 & 1 & 1 & \mbox{-}1/2 \\1 & 0 & 1 & 0 \\1 & 0 & 0 & 0\end{array}\right) \left(\begin{array}{c}\gamma_0 \\\gamma_1 \\\gamma_2 \\\gamma_3\end{array}\right) +\boldsymbol{E}\\ &=&X\boldsymbol{\gamma}+\boldsymbol{E}. \end{eqnarray*} In the stem cell experiment, the microarray platform used was two-colour long oligonucleotide which, as for cDNA microarrays, measures relative rather than absolute gene expression levels. Therefore, the overall gene expression level, $\gamma_{0}$, could not be estimated and was removed from the model by changing the parameter vector to $(\gamma_1,\gamma_2,\gamma_3)$ and removing the first column of $X$. \par Estimates of $\boldsymbol{\gamma}$ were calculated via least squares, and the estimate of $\sigma^2$ was obtained using the empirical Bayes method utilized in {\tt limma}; this gives a robust posterior estimate of $\sigma^2$ based on a prior which ``borrows'' information from the observed variance of all the genes on the array. \par For each gene, three test statistics, $U_1,U_2$ and $U_3$, were calculated as follows: \begin{eqnarray*} U_1=\frac{\hat{\gamma}_1}{\mbox{SE}(\hat{\gamma}_1)},\ U_2=\frac{\hat{\gamma}_2}{\mbox{SE}(\hat{\gamma}_2)},\ U_3=\frac{\epsilon-\left|\hat{\gamma}_3\right|}{\mbox{SE}(\hat{\gamma}_3)}, \end{eqnarray*} where $\mbox{SE}(\hat{\gamma}_i)=s\sqrt{\left[(X'X)^{\mbox{-}1}\right]_{ii}}$, with $\left[(X'X)^{\mbox{-}1}\right]_{ii}$ the $i$th diagonal element of $(X'X)^{\mbox{-}1}$ and $s$ the posterior estimate of $\sigma$. The minimum, $U$, of $U_i, i=1,2,3$, is used to rank the genes; genes with larger values of $U$ are more likely to be associated with pluripotency. \par Genes whose estimate $(\hat{\gamma}_1,\hat{\gamma}_2,\hat{\gamma}_3)$ of $(\gamma_1,\gamma_2,\gamma_3)$ did not lie within the rejection region, i.e. 
those genes for which at least one $U_i, i=1,2,3$ was negative, were excluded from the ranking. \section{Application: determining genes associated with pluripotency using gene profiling} \label{sec:results} The model (\ref{eq:mu_gamma}) was fitted to the stem cell data with $\epsilon=1$. In addition, the test statistics were changed to test for $\gamma_2>1.5$, i.e., $U_2=\frac{\hat{\gamma}_2-1.5}{\mbox{SE}(\hat{\gamma}_2)}$. The value of $1.5$ was chosen to ensure a large difference between the gene expression levels on days 0, 3 and 6 compared with the gene expression level on day 9. \par The ranked genes are given in Table \ref{tab:pluri}, and the fitted profiles for these 15 genes are shown in Figure \ref{fig:pluri10}. Figure \ref{fig:pluri10} shows the fitted log ratios with respect to day 0 for the four time points: day 0, day 3, day 6, and day 9. Therefore, all of the profiles will pass through zero on day 0. The profiles demonstrate the required trajectory: equal expression for day 0 and day 3, higher gene expression levels for days 0 and 3 compared to day 9, and the gene expression level for day 6 lying between the gene expression levels for days 0 and 3 and that for day 9. \input{./Tables/Tab_1_pluri_anno.txt} \par The top-ranked gene, Oct4, is well-known to be associated with pluripotency \citep{Rodda:2005,Loh:2006} and would therefore be expected to appear amongst the top-ranked genes for pluripotency in this experiment. Other genes of note in the ranked genes in Table \ref{tab:pluri} are Utf1 (rank 2) which is associated with undifferentiated embryonic cell transcription \citep{Nishimoto:2005fj}, and Nanog (rank 11) which is central to embryonic stem cell pluripotency \citep{Wang:2006}. \par The recent article by \cite{Wang:2006} isolated proteins associated with the protein Nanog and thus with pluripotency. 
Of the 38 proteins discussed in \cite{Wang:2006}, Oct4 and Nanog appeared in our list of ranked genes using model (\ref{eq:mu_gamma}): ranks 1 and 11 respectively. The remaining proteins were not in the ranked genes as the profiles of the associated mRNAs are not consistent with profile (\ref{eq:mu_gamma}). \par \begin{figure}[!ht] \begin{center} \includegraphics{Rplot1_pluri_anno} \caption{Fitted log ratios with respect to day 0 for the ranked genes for the pluripotency profile (\ref{eq:mu_gamma}).} \label{fig:pluri10} \end{center} \end{figure} \par {\it Sensitivity analysis}: As stressed previously, the choice of the neighbourhood around zero assumed for equivalence (i.e., $\epsilon$) should be decided upon in consultation with biologists. However, this is problematic since biologists still have relatively little explicit knowledge of gene-wise expression variability, and therefore of what, precisely and quantitatively, may represent equivalence of gene expression. \par To investigate the potential effects of altering the neighbourhood defined by $\epsilon$, the primary analysis was repeated for each of the values $\epsilon= 0.5, 1, 1.5, \mbox{ and }2.$ In Figure \ref{fig:sensitivity}, the profiles of the genes whose observed profiles lie within the rejection region are plotted for each choice of equivalence neighbourhood. As the equivalence neighbourhood width ($\epsilon$) increases, more genes have profiles that lie within the rejection region, but there is greater variation between the gene expression levels for day 0 and day 3. Nevertheless, gene profiling in this application has been demonstrated to be reasonably robust: for $\epsilon$=0.5, 1 and 1.5, Oct4 was ranked as the top gene, while for $\epsilon$=2, it dropped only to rank 2. 
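The mechanics of this sensitivity analysis can be mimicked with the $U$ statistics of Section \ref{sec:method}: as $\epsilon$ grows, $U_3=(\epsilon-|\hat{\gamma}_3|)/\mbox{SE}(\hat{\gamma}_3)$ grows, so more genes clear the rejection region. A hypothetical three-gene Python sketch (gene names, estimates and standard errors are invented for illustration; the $\gamma_2>1.5$ threshold follows the analysis above, and the real analysis used {\tt limma} in {\tt R}):

```python
# Ranking by U = min(U1, U2, U3), repeated for several equivalence widths eps,
# as in the sensitivity analysis. All gene values below are hypothetical.

def u_statistic(g1, g2, g3, se1, se2, se3, eps):
    u1 = g1 / se1                    # gamma1 significantly positive
    u2 = (g2 - 1.5) / se2            # gamma2 > 1.5, the threshold used here
    u3 = (eps - abs(g3)) / se3       # gamma3 equivalent to zero
    return min(u1, u2, u3)

# (gamma1_hat, gamma2_hat, gamma3_hat, SE1, SE2, SE3) per hypothetical gene
genes = {
    "geneA": (2.0, 3.0, 0.1, 0.3, 0.4, 0.2),
    "geneB": (1.5, 2.5, 0.6, 0.3, 0.4, 0.2),   # enters only once eps > 0.6
    "geneC": (0.9, 1.4, 0.1, 0.3, 0.4, 0.2),   # gamma2 below 1.5: never kept
}

for eps in (0.5, 1.0, 1.5, 2.0):
    ranked = sorted(((name, u_statistic(*v, eps)) for name, v in genes.items()),
                    key=lambda kv: kv[1], reverse=True)
    kept = [(n, round(u, 2)) for n, u in ranked if u > 0]  # in rejection region
    print(eps, kept)
```

With these numbers, only geneA is kept at $\epsilon=0.5$, while geneB joins the ranking once $\epsilon\geq 1$, echoing the pattern described for Figure \ref{fig:sensitivity}.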
\par \begin{figure}[!ht] \begin{center} \includegraphics{Rplot2_SenAnal_anno} \caption{Fitted log ratios with respect to day 0 for the ranked genes with (a) $\epsilon=$ 0.5, (b) $\epsilon=$ 1, (c) $\epsilon=$ 1.5, and (d) $\epsilon=$ 2.} \label{fig:sensitivity} \end{center} \end{figure} \par {\it Profiling for Sox2}: It is well known that the gene Sox2 is commonly associated with pluripotency \citep{Rodda:2005}, but it was not in the ranked genes using the gene expression profile (\ref{eq:mu_gamma}): the fitted gene expression profile for Sox2 is very different from the pluripotent profile used in the analysis. The criteria for the profile of Sox2 are: higher gene expression level on day 0 compared to the gene expression levels for days 6 and 9; equivalent gene expression levels on days 6 and 9; and the gene expression level for day 3 to lie between the gene expression level for day 0 and the levels for days 6 and 9. Gene profiling can be used to rank the genes according to these alternative criteria. An appropriate model for Sox2 is: \begin{eqnarray*} \boldsymbol{\mu} =\left(\begin{array}{rrrr}1 & 1 & 1 & 0 \\1 & 0 & 1 & 0 \\1 & 0 & 0 & \frac12 \\1 & 0 & 0 & \mbox{-}\frac12\end{array}\right) \boldsymbol{\gamma}, \end{eqnarray*} in which $\gamma_0$ is unrestrained, $\gamma_1>0$, $\gamma_2>0$, and $\gamma_3$ is equivalent to zero. This model was fitted to the data and the ranked genes are shown in Figure \ref{fig:Sox2_results}. The ranked genes were Cpt1a, 1200014E20Rik, 2210409E12Rik, Sox-2, Np-1, Birc5, 5730419I09Rik, MGI:1922156, retSDR3, and clone RP21-505L19 on chromosome 5. Sox2 was ranked at position 4. Of note is retSDR3. This gene has the required form with a larger difference in gene expression between day 0 and days 6 and 9 compared to the other genes. Even with this large difference, retSDR3 is low down in the ranking at rank 9. 
This low ranking is because retSDR3 has a large gene expression variance (0.181) compared to the other ranked genes (average gene expression variance of 0.054). This illustrates that if two genes have the same coefficient values, gene profiling will rank the gene with the larger variance, and thus more uncertainty about its true profile, lower. \begin{figure}[!ht] \begin{center} \includegraphics{Rplot_Sox2_analysis_anno} \caption{Fitted log ratios with respect to day 0 for the top 10 ranked genes for the Sox2 profile.} \label{fig:Sox2_results} \end{center} \end{figure} \section{Discussion of further work} \label{sec:discussion} In general, gene profiles of interest to molecular biologists often consist of two types of criteria: equal gene expression at different time points and differential gene expression at different time points. Gene profiling provides a straightforward methodology to filter genes which satisfy these two types of criteria simultaneously. We believe that this has not been accomplished using previously available techniques. By simultaneously testing for all criteria, gene profiling effectively excludes genes that are only partially consistent with the required profile. We now touch on some areas requiring further work. \par {\it Choice of $\epsilon$}: As noted in Section \ref{sec:IUT}, to test for a parameter being equivalent to zero, a neighbourhood of half-width $\epsilon$ is defined. This neighbourhood is the amount by which the parameter could vary and still be considered equivalent to zero. In this paper, the choice of $\epsilon$ was made by plotting the profiles obtained for various values of $\epsilon$ and choosing the value that best produced the required pre-specified profiles. Ideally, the choice of $\epsilon$ should be based on consultation with biologists, to the extent that such knowledge is available. 
One would anticipate that such requisite knowledge will gradually accrue over time, as microarray and other new genomics technologies are more widely applied in molecular biology and genetics. \par {\it Invariance of parameterization}: Another area requiring further research is the invariance (or otherwise) of reparameterization. In his (2002) book, Wellek notes: \begin{quotation} ``... in contrast to the corresponding conventional testing problems with the common boundary of null and alternative hypothesis [{\it sic}] being given by zero, equivalence problems remain generally not invariant under redefinitions of the main parameter.'' \end{quotation} To illustrate this point, consider the problem of finding marker genes for day 3 in the stem cell experiment. The criteria for such genes are: high gene expression level on day 3, as well as equal and low gene expression levels on day 0, day 6 and day 9. The requisite profile is illustrated in Figure \ref{fig:marker3}. Examination of the profile reveals three possible models: $$ \boldsymbol{\mu}=\left(\begin{array}{rrrr}1 & 0 & 0 & 0 \\1 & 1 & \mbox{-}\frac13 & \mbox{-}\frac13 \\1 & 0 & 0 & \mbox{-}1 \\1 & 0 & \mbox{-}1 & 0\end{array}\right)\boldsymbol{\gamma}, \boldsymbol{\mu}=\left(\begin{array}{rrrr}1 & 0 & 0 & 0 \\1 & 1 & \mbox{-}\frac23 & \frac13 \\1 & 0 & \mbox{-}1 & 1 \\1 & 0 & \mbox{-}1 & 0\end{array}\right)\boldsymbol{\gamma}, \boldsymbol{\mu}=\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\1 & 1 & \mbox{-}\frac23 & \mbox{-}\frac13 \\1 & 0 & \mbox{-}1 & 0 \\1 & 0 & \mbox{-}1 & \mbox{-}1\end{array}\right)\boldsymbol{\gamma}, $$ where $\boldsymbol{\gamma}=(\gamma_0,\gamma_1,\gamma_2,\gamma_3)'$ with $\gamma_0$ unrestrained, $\gamma_1$ significantly positive, $\gamma_2$ equivalent to zero, and $\gamma_3$ equivalent to zero. \par The three models may not necessarily give the same results. 
This is because equivalence is not transitive, i.e., if $\mu_{0}$ is equivalent to $\mu_{6}$, and $\mu_{6}$ is equivalent to $\mu_{9}$, it is not necessarily true that $\mu_{0}$ is equivalent to $\mu_{9}$. Equivalence is defined only up to a neighbourhood, and so a ``drift'' can occur that leaves $\mu_{0}$ and $\mu_{9}$ too far apart to be considered equivalent. Methods to impose invariance are currently under investigation by the authors. Although this is an interesting area of research, the lack of invariance under reparameterization does not limit the use of gene profiling. There are many pre-existing statistical tests, e.g., the Wald test, that are not invariant under reparameterization. In many cases, the research hypotheses will dictate the optimal model to use. \par {\it Applying gene profiling using {\tt limma}}: Gene profiling is easily implemented by fitting the model to the data using {\tt limma} and then calculating the $U$ statistics. The calculation of the $U$ statistics was written in C to decrease the run time, but it is also easy to implement directly in {\tt R}. \par To conclude, gene profiling introduces a flexible method to select genes for a pre-specified time-course profile. Gene profiling is straightforward to implement in practice, requiring only small modifications to the {\tt R} package {\tt limma}, and can be used to select for most profiles of interest to biologists. The application of gene profiling in this article has been to two-colour microarrays, but it could readily be modified for use with other microarray platforms, such as Affymetrix GeneChip \citep{Lockhart:1996}, and for other technologies where it is required to rank observations by correspondence with a pre-specified profile. 
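As a final aside, the non-transitivity of equivalence noted in the discussion can be made concrete with a three-number Python sketch (the values and the neighbourhood $\epsilon=1$ are hypothetical): each adjacent pair of means lies within the equivalence neighbourhood, yet the accumulated drift between the endpoints does not.

```python
# Non-transitivity of equivalence: with eps = 1, mu0 ~ mu6 and mu6 ~ mu9
# do not imply mu0 ~ mu9, because each comparison only bounds a pairwise drift.

def equivalent(a, b, eps=1.0):
    return abs(a - b) < eps

mu0, mu6, mu9 = 0.0, 0.8, 1.6
print(equivalent(mu0, mu6))  # True
print(equivalent(mu6, mu9))  # True
print(equivalent(mu0, mu9))  # False: the drift has accumulated to 1.6
```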
\begin{figure}[htbp] \begin{center} \includegraphics{marker_anno} \caption{Hypothetical gene expression profile for a day 3 marker gene.} \label{fig:marker3} \end{center} \end{figure} \section{Acknowledgements} We thank the Rathjen group for the use of the stem cell data, and Dr Chris Wilkinson and Professor Terry Speed for useful information about stem cell experiments. The second- and third-named authors are grateful to the Australian Research Council for research support through a Discovery Project Grant. The first-named author was supported by a George Fraser PhD scholarship at the University of Adelaide.
\section*{INTRODUCTION} The DUMAND Collaboration is building a neutrino observatory with the aim of studying both diffuse and point sources of astrophysical neutrinos in the TeV range. Fluxes of charged primary cosmic ray particles have been measured as a function of energy and spectra have been presented to us at this conference showing the characteristic knee and ankle features whose origins are at present only subject to speculation. Because neutrinos are produced by some of the same processes that produce charged particles, but are neutral and also only weakly interacting, they have the potential of revealing the spatial origins of the cosmic rays. Neutrino observations have helped elucidate the origins of solar energy production and the mechanism of Supernova 1987A. We expect DUMAND to elucidate the production of high energy cosmic particles and their origins. The Dumand II neutrino observatory\cite{roberts} is an array of 216 photomultiplier tubes deployed in nine vertical strings, in an octagonal pattern with 40m sides and one string in the center. (DUMAND I designates a ship-suspended single prototype string experiment successfully conducted in 1987.) The array will be moored on the ocean floor at depth 4800m, 25 km from the Island of Hawaii, and connected to a shore laboratory by a cable combining electrical and fiber optic elements, terminating in an underwater junction box. The underwater site places no inherent limitation on possibilities for future expansion of the detector. DUMAND II when completed will have an effective detection area of 20,000 m$^2$, instrumenting a column of water which has the height of the Eiffel tower and its width at the base. \section*{SUMMARY OF DUMAND II CONSTRUCTION PROGRESS} The basic infrastructure of DUMAND, comprising the underwater junction box, 30 km data and power cable to shore, and the shore station facility are completed. 
Environmental monitoring equipment and the site-defining navigational sonar array have been laid out and used in the 12/93 deployment operation. One of the optical module strings was deployed and used to record the muon events. Unfortunately, a hairline fracture in one of over 100 penetrators for the pressure vessels produced a small water leak. Seawater eventually engulfed the electronics, disabling further observation of muons after about 10 hours of operation. The disabled string was remotely released and recovered at sea in 1/94, and returned to Honolulu for diagnosis and repair. The fault was found and ways to avoid future recurrences have been identified. Besides the refurbished first string, two further strings are currently undergoing final assembly and testing. We plan to make extensive deep water tests of these three strings before deploying them at the DUMAND site. The earliest time that we can obtain the ship and underwater vehicle resources needed to carry out deployment and interconnection operations is around December, 1994. Signals from the optical modules are digitized locally providing time of arrival (to 1 ns accuracy) and pulse height . Signals from the 24 OMs on each mooring are serialized and sent to shore via an optical fiber at 0.5 GHz rate. Technological innovations in this system include the design and production of a 27 channel monolithic GaAs TDC chip with high reliability and compact size. This chip has been implemented to include all digitizer, buffer memory and multiplexing functions. This system has been built to cope with the background rate from radioactivity in the water and bioluminescence and still generate very little dead-time for recording cosmic events. The same optical fiber link will carry environmental and acoustical ranging information which are used to measure the geometry of the array. The raw information is sent to the shore station 25 km away. 
The trigger system looks for patterns in time, space and pulse height of the OMs consistent with the passage of charged particles through the array. Events satisfying the trigger are recorded for further off-line analysis. Our studies of the trigger system predict that the system will be $>90$\% efficient for events that penetrate the array from the lower hemisphere. Since 1992, DUMAND crews have been preparing the site and testing underwater assembly operations. DUMAND II requires a reasonably flat site with suitable soil properties. The chosen site has been marked with acoustical transponders which have been accurately surveyed in geocentric coordinates. The suitability of this site was verified remotely by acoustical means, film camera and video recordings; in addition, DUMAND personnel have cruised the area in a manned submarine, the US Navy's {\it DSV Sea Cliff}, to be certain that the site is fairly flat and free of any features that would interfere with a successful deployment and operation. We have verified the exceptional clarity of the water. After the deployment of the junction box with its single string of OMs and the shore cable, each successive string will be moored in a ring at a radius of 40 m. Strings will be connected to the junction box by an umbilical cable and wet-mateable electrical/fiber-optic connector. Since this operation must be carried out at a depth of 4800m, specialized underwater vehicles must be used. Using a mock junction box and string mooring, we used the US Navy's Advanced Tethered Vehicle (ATV) to maneuver within the array and to carry out the connecting operation. The ATV successfully maneuvered into position, unholstered the connector plug and cable, carried it in a predetermined path and plugged it home in the socket. We proved that tethered vehicles (which are cheaper and more readily available than manned submersibles) are capable of carrying out this operation with control from the surface. 
The {\it DSV Sea Cliff} and ATV operations in late 1992 required integration of our acoustical transponder system with surface GPS navigation equipment and demonstrated that we can locate and work at an ocean bottom site in a routine fashion. We need to be able to point reconstructed muon tracks onto the celestial sphere with an accuracy better than 1$^o$ (the median angle between primary $\nu$ and secondary $\mu$ at 1 TeV). The global positioning satellite (GPS) system provides accurate geographical coordinates for stations on the surface. The transfer from the coordinates of satellite receiver antennas on the surface to the underwater array is accomplished via an acoustical positioning system of our own design and construction\cite{berns93}. In order to achieve reliable positioning, we created a system that could have a high signal to noise ratio in spite of the long distances involved, the ocean noise, and the multi-path interference of the sound waves. The system measures acoustical transit times with 10 $\mu$sec precision and utilizes frequency modulated chirps and matched filtering via DSPs to recover the signal. We have achieved 1 cm accuracy in positioning in real time in short base line tests and have positioned the site transponders to the accuracy of the GPS system in the site survey. In the final survey of the system, we plan to use phase sensitive techniques to survey the actual OMs to an accuracy of $< 10$ cm. The position of the OMs will be thereafter be monitored continually. \section*{DEPLOYMENT OPERATIONS} In December of 1994, a DUMAND scientific team and the crew of the University of Washington oceanographic ship {\it RV Thomas G. Thompson} were able to successfully deploy all the elements of one string and the infrastructure for eight more strings, including the junction box, the environmental module, and the shore cable. Other DUMAND scientific crews prepared the shore station for operation. 
The procedures for the lowering and cable laying operations had been worked out in practice runs. The cable laying equipment was leased and mounted on the ship. Last minute adjustment and assembly of string components were completed, and final testing was accomplished in refrigerated truck containers in a completely connected configuration. The containers were then loaded onto the {\it RV Thomas G. Thompson}, which was well equipped to handle all of this work. At the time that DUMAND deployment operations began, the weather was quite favorable and the seas reasonably calm. The practice and planning paid off as we found that the complicated operation required to lower the string and junction box went very well. The navigation was excellent and the string and junction box were landed well within the target areas. Fig.~\ref{deploy} shows the procedure used in carrying out the deployment. The string top was attached to a sacrificial line and anchor which was lowered first. The string followed, then came the junction box. The junction box was lowered on the shore cable to the bottom, leaving the string in an arched configuration between the junction box and the sacrificial anchor. After touchdown the cable was paid out and laid on the bottom as we headed for shore. To avoid laying the cable on the rocky shore which is pounded by surf, divers threaded the cable through a previously prepared slant-drilled tunnel which was bored from near the shore station to an appropriate location offshore. By the end of the day, we had hooked the shore cable up to power and control cables and were able to exercise controls on the environmental module and acquire data. This was an exciting day, the culmination of years of planning and preparation. \section*{RESULTS FROM THE DUMAND ARRAY} We logged data from the DUMAND array as it was being lowered, when it touched down on the bottom, on shipboard during the cable laying operation, and then from the shore station. 
In all, we recorded about 10 hours of data. The results are described in the following sections. We set a minimum threshold trigger of single photoelectron hits on single OMs, in effect opening up the DAQ system to record singles on all tubes. We recorded data with this trigger for about 10 hours. Because of the 60 kHz singles rates, the DAQ created quite a lot of dead time, primarily due to the time required to dump a buffer of data to disk. The live time recorded was therefore approximately 2 minutes. These data were then filtered offline for track candidates. We have 10 candidate events with 6 or more OMs firing within a 100 nsec interval. With a 60 kHz random singles rate, the expected number of events from pure chance is about $10^{-5}$. The calculated rate of downgoing muons is $2\times 10^6$/yr or 12 in the two-minute interval. These data are thus well within expectation. Fig.~\ref{timing} shows the timing diagram for pulses from 7 OMs in our most striking candidate. The leading edges of the pulses are the arrival times, and the pulse width gives the time over threshold (TOT), which is proportional to the log of the integrated charge collected by the OM. The space-time hit pattern roughly agrees with the hypothesis of a particle normal to the string. The brightness peaks at the intersection point and falls off rapidly, in agreement with this hypothesis. The best-fit hypothesis is that there are two downgoing parallel particles. (Fig.~\ref{tracks}.) Earlier investigations\cite{learned93} have suggested that a very large-volume, inexpensive detector of high energy neutrinos is possible via acoustical detection. The deposition of energy into the water by the particles generates a low-level characteristic bipolar sound pulse with a frequency range of about 30 to 60 kHz.
Our simulation studies suggest that by using noise cancellation and signal coherence techniques (i.e., treating our set of hydrophones as a phased array), we will be able to systematically enhance noise rejection and detect high energy particles. The DUMAND II array is equipped to observe coincidences of OM and acoustical signals, and this will provide the first direct practical test of acoustical detection. \section*{FUTURE PLANS} Although the success of the DUMAND deployment was marred by the failure of a single penetrator, we learnt enough from the limited period of live operation to be confident that we can complete and operate the whole DUMAND array. We also gained confidence in the ability of our group to recover faulty equipment from the sea, an essential task for long-term operation. We are hoping that resources for the deployment of the three strings will be available this winter. The capabilities of the three-string array are discussed below. A demonstration of the viability of the three-string configuration will allow us to complete the deployment of the following six strings in the next year. The capabilities of the full DUMAND II array have been reviewed in previous reports\cite{icrc93}. Here I will summarize some of the expected observations from the 3-string array. Monte Carlo simulations of the response of the 3-string array (Triad) show that it will have an effective detection area for muons above 3 TeV that exceeds those of previous and existing underground detectors. The median pointing accuracy at this energy will be about 3 degrees. Thus the Triad will be able to search for astronomical sources of very high energy neutrinos at a greater level of sensitivity than has so far been achieved in other experiments. Our planned trigger scheme, while keeping the trigger rate at a reasonable level, has the unintended consequence of a strong energy dependence. For 10 TeV muons, the Triad effective area will be 3100 m$^2$, which scales roughly with log(energy).
Using the neutrino fluxes calculated by several authors for Active Galactic Nuclei (AGNs), the following table of projected event rates is obtained: \begin{table}[ph] \caption{Expected AGN event rates in DUMAND for several models.} \begin{tabular}{|l|l|} \hline Model\cite{HENA} & Event Rate (year$^{-1}$)\\ \hline Biermann & 72 \\ Stecker et al. & 97 \\ Sikora and Begelman & 71 \\ Protheroe and Szabo & 21 \\ \hline \end{tabular} \end{table} The rates calculated are the integral flux from all AGNs. Except for the prediction of Protheroe and Szabo, all models predict diffuse event rates which exceed the atmospheric background rates. Thus the Triad will have substantial capability for detecting UHE cascades at distances of several hundred meters, and could provide the first evidence for diffuse AGN fluxes. \section*{CONCLUSIONS} We have demonstrated the viability of the DUMAND detector, including successful deployment and operation of components required for a large-scale underwater neutrino observatory. The December 1993 deployment operation was in effect a full-up test of the DUMAND design for hardware, software, system integration and analysis procedures, and results were remarkably favorable, given that DUMAND is one of the most complex oceanographic projects ever undertaken. Furthermore, tests with the acoustical system of DUMAND show that we have the capability of detecting PeV cascades in the ocean with our present system of hydrophones. We look forward to completing the full DUMAND II array. We welcome you to find out more about DUMAND with text and pictures accessed via the DUMAND Home Page on the World Wide Web. The URL address is \noindent \begin{verbatim} http://web.phys.washington.edu/local_web/dumand/aaa_dumand_home.html \end{verbatim} You'll find colour photos, videos from the underwater camera, and other DUMAND news.
Agencies providing the funds for construction include the US DOE, HEP Division; the Japanese Mombusho, from several funds; the Swiss NSF; the US NSF; all participating institutions, and the State of Hawaii. We would particularly like to thank Vincent Z. Peterson and Syo Tanaka, who retired recently, for their many contributions over the years.
\section{Introduction} Let $X$ be a non-singular quasi-projective variety of dimension $n$ defined over a field of arbitrary characteristic, and let $\mathcal E$ be a vector bundle of rank $r$ over $X$. Let ${\mathbb G_X({d}, \mathcal E)}$ be the Grassmann bundle of $\mathcal E$ over $X$ parametrizing corank $d$ subbundles of $\mathcal E$ with projection $\pi : {\mathbb G_X({d}, \mathcal E)} \to X$, and let $\mathcal Q \gets \pi^*\mathcal E$ be the universal quotient bundle of rank ${d}$ on ${\mathbb G_X({d}, \mathcal E)}$. We denote by $\theta$ the first Chern class $c_1(\det\mathcal Q)= c_1(\mathcal Q)$ of $\mathcal Q$, and call $\theta$ the {\it Pl\"ucker class} of ${\mathbb G_X({d}, \mathcal E)}$: In fact, the determinant bundle $\det \mathcal Q$ is isomorphic to the pull-back of the tautological line bundle $\mathcal O_{\mathbb P_X(\wedge^{d} \mathcal E)}(1)$ of $\mathbb P_X(\wedge^{d} \mathcal E)$ by the relative Pl\"ucker embedding over $X$. The purpose of this article is to study the push-forward of powers of the Pl\"ucker class to $X$ by $\pi$, namely, $\pi_*(\theta^N)$, where $\pi_{*} : A^{*+d( r - d )}({\mathbb G_X({d}, \mathcal E)})\to A^{*}(X)$ is the push-forward by $\pi$ between the Chow rings. The main result is a closed formula for the push-forward of $\ch (\det\mathcal Q) :=\exp \theta = \sum_{N\ge 0} \frac{1}{N!}\theta^{N}$, the Chern character of $\det \mathcal Q$, in terms of the Segre classes of $\mathcal E$, as follows: \begin{theorem}\label{theorem:main_theorem} We have \begin{equation*} \pi_* \ch (\det \mathcal Q) = \sum_{k} \frac{ \prod_{0 \le i < j \le d-1}(k_i - k_j -i + j)} {\prod_{0 \le i \le d-1} (r - 1 + k_i -i )!} \prod_{0 \le i \le d-1} s_{k_i}(\mathcal E) \end{equation*} in $A^{*}(X)\otimes \mathbb Q$, where $k = (k_0, \dots , k_{d-1}) \in \mathbb Z_{\ge 0}^d$, and $s_i(\mathcal E)$ is the $i$-th Segre class of $\mathcal E$.
\end{theorem} The Segre classes $s_i(\mathcal E)$ here are the ones satisfying $s(\mathcal E, t)c(\mathcal E, -t)=1$ as in \cite{fujita}, \cite{laksov}, \cite{laksov-thorup}, where $s(\mathcal E, t)$ and $c(\mathcal E, t)$ are respectively the Segre series and the Chern polynomial of $\mathcal E$ in $t$. Note that our Segre class $s_i(\mathcal E)$ differs by the sign $(-1)^i$ from the one in \cite{fulton}. Theorem \ref{theorem:main_theorem} yields \begin{corollary}[Degree Formula for Grassmann Bundles]% \label{corollary:degree_formula} If $X$ is projective and $\wedge^{d} \mathcal E$ is very ample, then ${\mathbb G_X({d}, \mathcal E)}$ is embedded in the projective space $\mathbb P(H^0(X, \wedge^{d} \mathcal E))$ by the tautological line bundle $\mathcal O_{{\mathbb G_X({d}, \mathcal E)}}(1)$, and its degree is given by $$ \deg {\mathbb G_X({d}, \mathcal E)} ={(d(r-d)+n)!} \sum_{\vert k\vert = n} \frac{ \prod_{0 \le i < j \le d-1}(k_i - k_j -i + j)} {\prod_{0 \le i \le d-1} (r - 1 + k_i -i )!} \int_{X} \prod_{0 \le i \le d-1} s_{k_i}(\mathcal E) , $$ where $\vert k \vert := \sum_i k_i$. \end{corollary} Here a vector bundle $\mathcal F$ over $X$ is said to be {\it very ample} if the tautological line bundle $\mathcal O_{\mathbb P_X(\mathcal F)}(1)$ of $\mathbb P_X(\mathcal F)$ is very ample.
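As a quick sanity check of the degree formula (a sketch we add here, not part of the original text), one can take $X$ to be a point, so that $n=0$ and only $k=(0,\dots,0)$ contributes, and compare with the classical degrees of Pl\"ucker-embedded Grassmannians; the factorial weights are written as $(r-1+k_i-i)!$, the normalization for which the case $d=1$ reduces to $\varpi_*\xi^N = s_{N-r+1}(\mathcal E)$. The function name below is ours.

```python
from math import factorial

def grassmannian_degree(d, r):
    # Degree of the Pluecker-embedded Grassmannian G(d, r), i.e. the
    # degree formula specialized to X = point: n = 0, so only
    # k = (0, ..., 0) survives in the sum.
    dim = d * (r - d)                    # dimension of G(d, r)
    num = 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= j - i                 # (k_i - k_j - i + j) at k = 0
    den = 1
    for i in range(d):
        den *= factorial(r - 1 - i)      # (r - 1 + k_i - i)! at k = 0
    return factorial(dim) * num // den

# Classical values: G(2,4) is a quadric in P^5 (degree 2),
# deg G(2,5) = 5 and deg G(3,6) = 42.
print(grassmannian_degree(2, 4), grassmannian_degree(2, 5), grassmannian_degree(3, 6))
```

The printed values agree with the classical formula $\deg \mathbb G(d,r) = (d(r-d))!\,\prod_{i=0}^{d-1} i!/(r-d+i)!$.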
We also give a proof for the following: \begin{theorem} [\cite{kaji-terasoma}, \cite{manivel}] \label{theorem:another_formula} We have $$ \pi_*\ch (\det \mathcal Q) = \sum_{\lambda} \frac{1}{\vert \lambda +\varepsilon \vert !} f^{\lambda+\varepsilon} \varDelta_{\lambda} (s(\mathcal E)) $$ in $A^{*}(X)\otimes \mathbb Q$, where $\lambda =(\lambda_1 , \dots, \lambda_d)$ is a partition with $\vert \lambda \vert := \sum_i \lambda _i$, $\varepsilon := (r-d)^d =(r-d,\dots , r-d)$, $f^{\lambda+\varepsilon}$ is the number of standard Young tableaux with shape $\lambda+\varepsilon$, and $\varDelta_{\lambda}(s(\mathcal E)):= \det[s_{\lambda_i+j-i}(\mathcal E)]_{1 \le i,j\le d}$ is the Schur polynomial in the Segre classes of $\mathcal E$ corresponding to $\lambda$. \end{theorem} Note that our proofs for Theorem \ref{theorem:another_formula} as well as Theorem \ref{theorem:main_theorem} do not use the push-forward formula of J\'ozefiak-Lascoux-Pragacz \cite{jlp}, while the proofs given in \cite{kaji-terasoma}, \cite{manivel} do. We establish instead a new push-forward formula, as follows: Let $\X{d}$ be the partial flag bundle of $\mathcal E$ on $X$, parametrizing flags of subbundles of corank $1$ up to $d$ in $\mathcal E$, let $p : \X{d}\to X$ be the projection, and denote by $p_{*}: A^{*+c}(\X{d}) \to A^{*}(X)$ the push-forward by $p$, where $c$ is the relative dimension of $\X{d}/X$. Let $\xi_{0}, \dots , \xi_{d-1}$ be the set of Chern roots of $\mathcal Q$. It turns out (see \S1) that one may consider $A^{*+c}(\X{d})$ as an $A^{*}(X)$-algebra generated by the $\xi_{i}$. 
Then \begin{theorem}[Push-Forward Formula] \label{theorem:general_push_forward_formula} For any polynomial $F \in A^{*}(X)[T_0, \dots, T_{d-1}]$, we have $$ p_{*} F(\underline{\xi}) = \const_{\underline t} \Big( \Delta(\underline t) \prod_{i=0}^{d-1} t_i^{r -d} F(1/\underline{t}) \prod_{i=0}^{d-1} s(\mathcal E, t_i) \Big) $$ in $A^{*}(X)$, where $\underline{\xi}:=({\xi_0}, \dots, {\xi_{d-1}})$, $\const_{\underline{t}}(\cdots)$ denotes the constant term in the Laurent expansion of $\cdots$ in $\underline{t} := (t_0 , \dots, t_{d-1})$, $\Delta(\underline t):=\prod_{0 \le i < j \le d-1}(t_i - t_j)$ and $F(1/\underline t) := F(1/t_0, \dots, 1/t_{d-1})$. \end{theorem} The contents of this article are organized as follows: The general theories \cite[\S6]{laksov-thorup}, \cite[\S\S0--1]{scott} on the structure of the Chow ring of certain partial flag bundles are reviewed in \S1. Then, Theorem \ref{theorem:general_push_forward_formula} is proved in \S2, by which it is shown that $\pi_*\ch(\det \mathcal Q)$ is given as the constant term of a certain Laurent series with coefficients in the Chow ring $\HH{X}$ of $X$, denoted by $P(\underline{t})$ (Proposition \ref{prop:Laurent_series}). To evaluate the constant term of $P(\underline{t})$, in \S3, a linear form on the Laurent polynomial ring, denoted by $\Phi$, is introduced (Definition \ref{definition:linear_form}), and an evaluation formula is proved (Proposition \ref{prop:evaluation_formula}): The evaluation formula is the key in the final step to prove Theorems \ref{theorem:main_theorem} and \ref{theorem:another_formula}. In \S4, a generalization of the Cauchy determinant formula is given (Proposition \ref{prop:generalization_of_Cauchy_identity}). This yields another proof of a push-forward formula for monomials of the $\xi_{i}$ (Lemma \ref{lemma:monomial}).
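For orientation, the case $d=1$ of the Push-Forward Formula is the classical computation for a projective bundle: here $\underline t = (t_0)$ and $\Delta(\underline t) = 1$, so the formula reads $$ \varpi_* F(\xi) = \const_t \big( t^{r-1} F(1/t) s(\mathcal E, t) \big) , $$ and for a monomial $F(T) = T^{p}$ one recovers $\varpi_* \xi^{p} = \const_t \big( t^{-p+r-1} s(\mathcal E, t) \big) = s_{p-r+1}(\mathcal E)$, the usual expression of the Segre classes as push-forwards of powers of the hyperplane class.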
\section{Set-up} Let $X$ be a non-singular quasi-projective variety of dimension $n$ defined over a field $k$, let $\mathcal E$ be a vector bundle of rank $r$ on $X$, and let $\varpi : \mathbb P(\mathcal E) \to X$ be the projection. Denote by $\xi$ the first Chern class of the tautological line bundle $\mathcal O_{\mathbb P(\mathcal E)}(1)$, and define a polynomial $P_{\mathcal E} \in A^*(X)[T]$ associated to $\mathcal E$ by setting $$ P_{\mathcal E} (T):= T^r - c_1(\mathcal E)T^{r-1} + \cdots + (-1)^r c_r(\mathcal E) , $$ where $A^*(X)$ is the Chow ring of $X$. Then, $P_{\mathcal E}(\xi)=0$ by definition of the Chern classes (\cite[Remark 3.2.4]{fulton}), and \begin{equation}\label{equation:Chow_ring_proj_space_bundle} A^*(\mathbb P(\mathcal E)) =\bigoplus_{0 \le i \le r-1} A^*(X) \xi^i \simeq A^*(X)[T]/(P_{\mathcal E}(T)) \end{equation} (\cite[Theorem 3.3 (b); Example 8.3.4]{fulton}). Let $\varpi_* : A^{*+r-1}(\mathbb P(\mathcal E)) \to A^*(X)$ be the push-forward by $\varpi$. Then $\varpi_* \alpha$ is equal to the coefficient of $\xi^{r-1}$ in $\alpha$, denoted by $\coeff_{\xi}(\alpha)$, with respect to the decomposition \eqref{equation:Chow_ring_proj_space_bundle} for $\alpha \in A^{*+r-1}(\mathbb P(\mathcal E))$ (\cite[Proposition 3.1]{fulton}): \begin{equation}\label{equation:push_forward_for_projective_space_bundle} \varpi_* \alpha = \coeff_{\xi}(\alpha) \end{equation} Denote by $\X{d}$ the partial flag bundle of $\mathcal E$ on $X$, parametrizing flags of subbundles of corank $1$ up to $d$ in $\mathcal E$, and let $p : \X{d}\to X$ be the projection. Set $\mathcal E_0 := \mathcal E$, and let $\mathcal E_{i+1}$ be the kernel of the canonical surjection from the pull-back of $\mathcal E_{i}$ to $\mathbb P(\mathcal E_{i})$, to the tautological line bundle $\mathcal O_{\mathbb P(\mathcal E_{i})}(1)$, with $\rk \mathcal E_{i} = r-i$ $(i \ge 0)$. Set $\xi_i := c_1(\mathcal O_{\mathbb P(\mathcal E_{i})}(1))$.
We have an exact sequence on $\mathbb P(\mathcal E_{i})$, $$ 0 \to \mathcal E_{i+1} \to \mathcal E_{i} \to \mathcal O_{\mathbb P(\mathcal E_{i})}(1) \to 0, $$ and an equation of Chern polynomials, \begin{equation}\label{equation:Chern_polynomials} c(\mathcal E_i,t) = c(\mathcal E_{i+1},t) (1 + \xi_i t) , \end{equation} where we omit the symbol of the pull-back by the projection $\mathbb P_{\mathbb P(\mathcal E_{i})}(\mathcal E_{i+1}) \to \mathbb P(\mathcal E_{i})$. It is easily shown that the projection $p : \X{d} \to X$ decomposes as a successive composition of projective space bundles, $\mathbb P_{\mathbb P(\mathcal E_{i})}(\mathcal E_{i+1}) \to \mathbb P(\mathcal E_{i})$ $(i \ge 0)$: $$ p : \X{d} = \mathbb P(\mathcal E_{d-1}) \to \mathbb P(\mathcal E_{d-2}) \to \cdots \to \mathbb P(\mathcal E_{1}) \to \mathbb P(\mathcal E_{0}) \to X . $$ In fact, $\mathbb P(\mathcal E_i) \simeq \X{i+1}$ $(0\le i \le d-1)$. Using \eqref{equation:Chow_ring_proj_space_bundle} repeatedly, we see that the Chow ring of $\X{d}$ is given as follows: \begin{equation}\label{equation:cohomology_ring} \HH{\X{{d}}} = \bigoplus_{\substack{0\le i_l \le {r} -l-1\\(0\le l \le {d} -1)}} \HH{X} \xi_0^{i_0} \xi_1^{i_1} \cdots \xi_{{d}-1}^{i_{{d} -1}} = \frac{\HH{X}[T_0, T_1, \dots ,T_{{d} -1}]} {( \{ P_{\mathcal E_i}(T_i)\vert 0 \le i \le d-1 \} )} . \end{equation} Denote by $p_* : A^{*+c}(\X{d}) \to A^{*}(X)$ the push-forward by $p$, where $c := \sum_{0 \le i \le d-1} (r-i-1)$ is the relative dimension of $\X{d}/X$. Then, using \eqref{equation:push_forward_for_projective_space_bundle} repeatedly, we see that \begin{equation}\label{equation:trace} p_*\alpha = \coeff_{\underline{\xi}}(\alpha) \end{equation} for $\alpha \in \HH{\X{{d}}}$, where $\coeff_{\underline{\xi}}(\alpha)$ denotes the coefficient of $\xi_0^{r-1}\xi_1^{r-2}\cdots\xi_{d-1}^{r-d}$ in $\alpha$ with respect to the decomposition \eqref{equation:cohomology_ring}.
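The one-variable case \eqref{equation:push_forward_for_projective_space_bundle} already encodes the identity $\varpi_*\xi^{p} = s_{p-r+1}(\mathcal E)$, which can be checked numerically by specializing the Chern classes of $\mathcal E$ to integers and reducing $\xi^{p}$ modulo $P_{\mathcal E}$; a short self-contained sketch, added here for illustration (the function names are ours):

```python
def segre(cs, k):
    # Segre class s_k from s(E, t) * c(E, -t) = 1, with the Chern
    # classes specialized to integers cs = [c_1, ..., c_r].
    if k < 0:
        return 0
    s = [1] + [0] * k
    for m in range(1, k + 1):
        s[m] = sum((-1) ** (i + 1) * cs[i - 1] * s[m - i]
                   for i in range(1, min(m, len(cs)) + 1))
    return s[k]

def coeff_top(cs, p):
    # Coefficient of xi^(r-1) in xi^p, reducing repeatedly by the
    # relation P_E(xi) = 0, i.e. xi^r = sum_i (-1)^(i+1) c_i xi^(r-i).
    r = len(cs)
    v = [0] * r
    v[0] = 1                      # start with the class 1 = xi^0
    for _ in range(p):            # multiply by xi, then reduce the top term
        top = v[r - 1]
        v = [0] + v[: r - 1]
        for i in range(1, r + 1):
            v[r - i] += (-1) ** (i + 1) * cs[i - 1] * top
    return v[r - 1]

cs = [1, 2, 3]                    # pretend c_1, c_2, c_3 of a rank-3 bundle
print([coeff_top(cs, p) == segre(cs, p - 2) for p in range(1, 8)])
```

Here $r=3$, so $\varpi_*\xi^{p}$ should equal $s_{p-2}$, and the printed list records the comparison for $p=1,\dots,7$.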
Let ${G}:={\mathbb G_X({d}, \mathcal E)}$ be the Grassmann bundle of corank $d$ subbundles of $\mathcal E$ on $X$, and let $\mathcal Q \gets \pi^*\mathcal E$ be the universal quotient bundle of rank ${d}$. Consider the flag bundle $\G{d-1}$ of $\mathcal Q$ on ${G}$, parametrizing flags of subbundles of corank $1$ up to $d-1$ in $\mathcal Q$. Then, as in the case of $\X{d}$, the projection $\G{d-1}\to G$ decomposes as a successive composition of projective space bundles, $ \mathbb P_{\mathbb P(\mathcal Q_{i})}(\mathcal Q_{i+1})\to\mathbb P(\mathcal Q_{i})$ $(i \ge 0)$: $$ q : \G{d-1} = \mathbb P(\mathcal Q_{d-2}) \to \mathbb P(\mathcal Q_{d-3}) \to \cdots \to \mathbb P(\mathcal Q_{1}) \to \mathbb P(\mathcal Q_{0}) \to G , $$ where $\mathcal Q_0 := \mathcal Q$, and $\mathcal Q_{i+1}$ is the kernel of the canonical surjection from the pull-back of $\mathcal Q_{i}$ to $\mathbb P(\mathcal Q_{i})$, to the tautological line bundle $\mathcal O_{\mathbb P(\mathcal Q_{i})}(1)$, with $\rk \mathcal Q_{i} = d-i$ $(i \ge 0)$: In fact, $\mathbb P(\mathcal Q_i) \simeq \G{i+1}$ $(0\le i \le d-2)$ and $\mathbb P_{\mathbb P(\mathcal Q_{d-2})}(\mathcal Q_{d-1}) \simeq \mathbb P(\mathcal Q_{d-2}) \simeq \G{d-1}= \G{d}$. It follows from the construction of the $\mathcal Q_i$ that the Pl\"ucker class $\theta:=c_1(\det\mathcal Q)=c_1(\mathcal Q)$ is equal to the sum of the first Chern classes $c_1(\mathcal O_{\mathbb P(\mathcal Q_{i})}(1))$ $(0 \le i \le d-1)$ in $\HH{\G{{d}-1}}$, where $\mathcal O_{\mathbb P(\mathcal Q_{d-1})}(1) = \mathcal Q_{d-1}$ via $\mathbb P_{\mathbb P(\mathcal Q_{d-2})}(\mathcal Q_{d-1})\simeq \mathbb P(\mathcal Q_{d-2})$. It follows from the construction of the $\mathcal E_i$ that $\mathcal E_d$ is a corank $d$ subbundle of $p^*\mathcal E$ on $\X{d}$, which induces a morphism, $r : \X{d} \to G$ over $X$ by the universal property of the Grassmann bundle $G$.
Then it turns out that $\G{d-1}$ is naturally isomorphic to $\X{d}$ over $G$ via $r$, as is easily verified by using the universal property of flag bundles: We identify them via the natural isomorphism $\G{d-1} \simeq \X{d}$. Under this identification, it follows that $p=\pi \circ q$ and $\xi_i = c_1(\mathcal O_{\mathbb P(\mathcal E_{i})}(1)) = c_1(\mathcal O_{\mathbb P(\mathcal Q_{i})}(1)) $ in $\HH{\X{{d}}}=\HH{\G{{d}-1}}$ $(0 \le i \le d-1)$, where the symbol of pull-back to $\X{d}=\G{d-1}$ is omitted, as before. Thus we have \begin{equation}\label{equation:theta} q^*\theta = \xi_0 + \cdots + \xi_{d-1} \end{equation} in $\HH{\X{d}}=\HH{\G{d-1}}$. For details, we refer to \cite[\S6]{laksov-thorup}, \cite[\S\S0--1]{scott}. \section{Laurent series} We keep the same notation as in \S1. \begin{lemma}\label{lemma:xi} For any non-negative integer ${p}$, $$ \coeff_{\xi}(\xi^{p}) = \const_t( t^{-{p}+r-1}s(\mathcal E,t)) , $$ where $\const_t(\cdots)$ denotes the constant term in the Laurent expansion of $\cdots$ in $t$. \end{lemma} \begin{proof} Set $R_{{p}}(x_{{p}} , \dots , x_{{{p}}-r}):=\sum_{i=0}^{r}(-1)^i c_i(\mathcal E) x_{{{p}}-i}$, and consider a recurring relation, $R_{{p}}(x_{{p}} , \dots , x_{{{p}}-r})=0$ $({{p}} \ge r)$ for $\{ x_i \} \subseteq A^*(X)$. If $a_{{p}}:= \coeff_{\xi}(\xi^{{p}})$, then $$ R_{{p}}(a_{{p}} , \dots , a_{{{p}}-r}) =\coeff_{\xi}\Big(\sum_{i=0}^{r}(-1)^i c_i(\mathcal E)\xi^{{p}-i}\Big) = 0 $$ by $P_{\mathcal E}(\xi)=0$. On the other hand, if $b_{{p}}:= \const_t( t^{-{{p}}-1+r}s(\mathcal E,t))$, then $$ R_{{p}}(b_{{p}} , \dots , b_{{{p}}-r}) =\const_t\Big(\sum_{i=0}^{r} c_i(\mathcal E) (-t)^{i}t^{-{{p}}-1+r}s(\mathcal E,t)\Big) =\const_t(t^{-{{p}}-1+r})= 0 $$ by $c(\mathcal E,-t)s(\mathcal E,t)=1$. 
Thus both of $\{a_{{p}}\}$ and $\{b_{{p}}\}$ satisfy the recurring relation $R_{{p}} = 0$, so that $a_{{p}} = b_{{p}}$ for all ${{p}}$: Indeed, $a_{r}=b_{r}=c_1(\mathcal E)$, $a_{r-1}=b_{r-1}=1$, and $a_{{p}} =b_{{p}} = 0$ if $0\le {{p}} \le r-2$. We note here that $x_{{p}}$ is determined by $x_{{{p}}-1}, \dots, x_{{{p}}-r}$ if $R_{{p}}(x_{{p}}, \dots, x_{{{p}}-r})=0$. \end{proof} \begin{lemma}\label{lemma:monomial} For any non-negative integers $p_0, \dots , p_{d-1}$, we have $$ \coeff_{\underline{\xi}}(\xi_{0}^{p_{0}} \cdots \xi_{d-1}^{p_{d-1}} ) = \const_{\underline t} \Big( \Delta(\underline{t}) \prod_{i=0}^{d-1} t_i^{-p_i + r -d} s(\mathcal E, t_i) \Big) , $$ where $\const_{\underline{t}}(\cdots)$ denotes the constant term in the Laurent expansion of $\cdots$ in $\underline{t} := (t_0 , \dots, t_{d-1})$, and $\Delta(\underline{t}):= \prod_{0 \le i < j \le d-1}(t_i - t_j)$ is the Vandermonde polynomial of $\underline t$. \end{lemma} \begin{proof} Since $s(\mathcal E_{d-1}, t_{d-1})= (1-\xi_{d-2}t_{d-1}) s(\mathcal E_{d-2}, t_{d-1}) $ by \eqref{equation:Chern_polynomials}, it follows from Lemma \ref{lemma:xi} that $$ \coeff_{{\xi_{d-1}}} (\xi_{d-1}^{p_{d-1}}) = \const_{t_{d-1}}(t_{d-1}^{-p_{d-1}+r -d} (1-\xi_{d-2}{t_{d-1}} )s(\mathcal E_{d-2}, t_{d-1}) ) $$ in $A^{*}(\mathbb P(\mathcal E_{d-2}))$, where $\coeff_{{\xi_{d-1}}} (\cdots)$ denotes the coefficient of $\xi_{d-1}^{r -d}$ in $\cdots$.
Therefore, using Lemma \ref{lemma:xi} again, we have {\allowdisplaybreaks % \begin{align*} \coeff&_{{\xi_{d-2}}, {\xi_{d-1}}} (\xi_{d-2}^{p_{d-2}} \xi_{d-1}^{p_{d-1}}) \\=& \coeff_{{\xi_{d-2}}} ( \xi_{d-2}^{p_{d-2}} \const_{t_{d-1}}(t_{d-1}^{-p_{d-1}+r -d} (1-\xi_{d-2}{t_{d-1}} )s(\mathcal E_{d-2}, t_{d-1}) ) ) \\=& \coeff_{{\xi_{d-2}}} ( \xi_{d-2}^{p_{d-2}} \const_{t_{d-1}}(t_{d-1}^{-p_{d-1}+r -d} s(\mathcal E_{d-2}, t_{d-1}) ) ) \\& \hskip 3pt + \coeff_{{\xi_{d-2}}} ( \xi_{d-2}^{{p_{d-2}}+1} \const_{t_{d-1}}(t_{d-1}^{-p_{d-1}+r -d} (-t_{d-1} )s(\mathcal E_{d-2}, t_{d-1}) ) ) \\=& \const_{t_{d-2}}(t_{d-2}^{-p_{d-2}+r -d+1} s(\mathcal E_{d-2}, t_{d-2}) ) \const_{t_{d-1}}(t_{d-1}^{-p_{d-1}+r -d} s(\mathcal E_{d-2}, t_{d-1}) ) \\& \hskip 3pt + \const_{t_{d-2}}(t_{d-2}^{-(p_{d-2}+1)+r -d+1} s(\mathcal E_{d-2}, t_{d-2}) ) \const_{t_{d-1}}(t_{d-1}^{-p_{d-1}+r -d} (-t_{d-1} )s(\mathcal E_{d-2}, t_{d-1}) ) \\=& \const_{t_{d-2}, t_{d-1}} (t_{d-2}^{-p_{d-2}+r -d+1} s(\mathcal E_{d-2}, t_{d-2}) t_{d-1}^{-p_{d-1}+r -d} s(\mathcal E_{d-2}, t_{d-1}) ) \\& \hskip 3pt + \const_{t_{d-2}, t_{d-1}} (t_{d-2}^{-p_{d-2}+r -d} s(\mathcal E_{d-2}, t_{d-2}) t_{d-1}^{-p_{d-1}+r -d} (-t_{d-1} )s(\mathcal E_{d-2}, t_{d-1}) ) \\=& \const_{t_{d-2}, t_{d-1}} \Big( (t_{d-2} - t_{d-1} ) \prod_{i=d-2}^{d-1} t_{i}^{-p_{i}+r -d} s(\mathcal E_{d-2}, t_{i}) \Big) \end{align*} }% in $A^{*}(\mathbb P(\mathcal E_{d-3}))$, where $\coeff_{{\xi_{d-2}}, {\xi_{d-1}}} (\cdots)$ denotes the coefficient of $\xi_{d-2}^{r-d+1}\xi_{d-1}^{r-d}$ in $\cdots$, and $\coeff_{{\xi_{d-2}}} (\cdots)$ the coefficient of $\xi_{d-2}^{r -d+1}$ in $\cdots$. Repeating this procedure, we obtain the conclusion.
\end{proof} \begin{remark} Expanding the Vandermonde polynomial $\Delta(\underline t)$ on the right-hand side of Lemma \ref{lemma:monomial}, using \eqref{equation:trace}, we obtain a formula, $p_*(\xi_{0}^{p_{0}}\cdots\xi_{d-1}^{p_{d-1}}) = \det [s_{p_i+j-r+1}(\mathcal E)]_{0 \le i , j \le d-1}$ in terms of the Schur polynomials in Segre classes of $\mathcal E$, which is equivalent to the determinantal formula \cite[8.1 Theorem]{laksov} with $f_i(\xi_i):= \xi_i^{p_i}$ $(0 \le i \le d-1)$. \end{remark} \begin{proposition}\label{prop:general_push_forward_formula} For any polynomial $F \in A^{*}(X)[T_0, \dots, T_{d-1}]$, we have $$ \coeff_{\underline {\xi}} (F(\underline{\xi})) = \const_{\underline t} \Big( \Delta(\underline t) \prod_{i=0}^{d-1} t_i^{r -d} F(1/\underline{t}) \prod_{i=0}^{d-1} s(\mathcal E, t_i) \Big) , $$ where $\underline{\xi}:=({\xi_0}, \dots, {\xi_{d-1}})$, $\const_{\underline{t}}(\cdots)$ denotes the constant term in the Laurent expansion of $\cdots$ in $\underline{t} := (t_0 , \dots, t_{d-1})$, $\Delta(\underline t):=\prod_{0 \le i < j \le d-1}(t_i - t_j)$, and $F(1/\underline t) := F(1/t_0, \dots, 1/t_{d-1})$. \end{proposition} \begin{proof} This follows from Lemma \ref{lemma:monomial}. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:general_push_forward_formula}] The assertion follows from \eqref{equation:trace} and Proposition \ref{prop:general_push_forward_formula}.
\end{proof} \begin{proposition}\label{prop:Laurent_series} With the same notation as in \S1, we have $$ \pi_* \ch(\det \mathcal Q) = \const_{\underline{t}}(P(\underline{t})) , $$ where $\pi_* : A^{*+d( r - d )}({\mathbb G_X({d}, \mathcal E)}) \otimes \mathbb Q \to A^{*}(X) \otimes \mathbb Q$ is the push-forward by $\pi$, $\ch(\det \mathcal Q)$ is the Chern character of $\det \mathcal Q$, $\const_{\underline{t}}(\cdots)$ denotes the constant term in the Laurent expansion of $\cdots$ in $\underline{t} := (t_0 , \dots, t_{d-1})$, and $$ P(\underline{t}) := \Delta(\underline t) \prod_{i=0}^{d-1} t_i^{r-d-(d-1-i)} \exp\Big( \sum_{i=0}^{d-1} \frac{1}{t_i} \Big) \prod_{i=0}^{d-1} s(\mathcal E, t_i) . $$ \end{proposition} Note that, though $\exp\big( \sum_{i=0}^{d-1} \frac{1}{t_i} \big)$ is an element in $\mathbb Q[[\{ \frac{1}{t_i} \}_{0 \le i \le d-1}]]$, $\const_{\underline{t}}(P(\underline{t}))$ is well defined since the Segre series are polynomials in $\underline t$. \begin{proof} Since $\G{i+1} \to \G{i}$ is a $\mathbb P^{d-1-i}$-bundle, using \cite[Proposition 3.1]{fulton} repeatedly, for a non-negative integer $N$, we have $$ \theta^N= q_*(\xi_0^{d-1} \xi_1^{d-2}\cdots \xi_{d-2}q^*\theta^N) , $$ where $q$ is the composition of the projections, $\G{d-1}\to \cdots \to \G{1} \to G$. It follows from \eqref{equation:theta} and the commutativity $p=\pi \circ q$ via the identification $\G{d-1} = \X{d}$ that {\allowdisplaybreaks % \begin{align*} \pi_*(\theta^N) =& \pi_* q_*( \xi_0^{d-1} \xi_1^{d-2}\cdots \xi_{d-2}q^*\theta^N) \\ =& \pi_* q_* \Big( \prod_{i=0}^{d-1} \xi_i^{d-1-i} \Big( \sum_{i=0}^{d-1} \xi_i \Big)^N \Big) = p_*\Big( \prod_{i=0}^{d-1} \xi_i^{d-1-i} \Big( \sum_{i=0}^{d-1} \xi_i \Big)^N \Big) , \end{align*} }% where $p$ is the composition of the projections, $\X{d}\to \cdots \to \X{1} \to X$. Now, apply Theorem \ref{theorem:general_push_forward_formula} with $F:= \prod_{i=0}^{d-1} T_i^{d-1-i} \Big( \sum_{i=0}^{d-1} T_i \Big)^N $.
Then, $$ p_* \Big( \prod_{i=0}^{d-1} \xi_i^{d-1-i} \Big( \sum_{i=0}^{d-1} \xi_i \Big)^N \Big) = \const_{\underline t} \Big( \Delta(\underline t) \prod_{i=0}^{d-1} t_i^{r -d-(d-1-i)} \Big( \sum_{i=0}^{d-1} t_i^{-1} \Big) ^N \prod_{i=0}^{d-1} s(\mathcal E, t_i) \Big) . $$ Thus the conclusion follows with $\ch(\det \mathcal Q) = \exp (\theta)$. \end{proof} \section{A linear form on the Laurent polynomial ring} \begin{definition}\label{definition:linear_form} Let $A$ be a $\mathbb Q$-algebra. We define a linear form $\Phi:A[\{t_i,\frac{1}{t_i}\}_{0 \le i \le d-1}] \to A$ on the Laurent polynomial ring $A[\{t_i,\frac{1}{t_i}\}_{0 \le i \le d-1}]$ by \begin{equation*} \Phi(f):= \const_{\underline{t}} \Big( \Delta(\underline{t}) \exp\Big( \sum_{i=0}^{d-1} \frac{1}{t_i} \Big) f (\underline t) \Big) \qquad \Big(f \in A\Big[\Big\{t_i,\frac{1}{t_i} \Big\}_{0 \le i \le d-1} \Big]\Big), \end{equation*} where $\underline t:= (t_0, \dots,t_{d-1})$. \end{definition} \begin{lemma}\label{lemma:constant_part_linear_form} \begin{enumerate} \item \label{lemma:constant_part_linear_form_1} Consider the natural action of the permutation group $\mathfrak S_d$ on \linebreak $A[\{t_i,\frac{1}{t_i}\}_{0 \le i \le d-1}]$ with $\sigma(t_i):= t_{\sigma(i)}\; (\sigma \in \mathfrak S_d)$. Then we have $\Phi(\sigma(f))=\sgn(\sigma)\Phi(f)$. As a consequence, we have $$ \Phi\Big(\prod_{i=0}^{d-1}t_i^{-(d-1-i)}f(\underline{t})\Big) =(-1)^{d(d-1)/2}\Phi\Big(\prod_{i=0}^{d-1}t_i^{-i}f(\underline{t})\Big) $$ for a symmetric function $f(\underline{t})$. \item \label{lemma:constant_part_linear_form_3} For a Schur polynomial $s_{\lambda}(\underline{t})$ and a symmetric function $f(\underline{t})$, we have $$ \Phi\Big(\prod_{i=0}^{d-1}t_i^{-i}f(\underline{t})s_{\lambda}(\underline{t})\Big) = \Phi\Big(\prod_{i=0}^{d-1} t_i^{-i+\lambda_{i+1}} f(\underline{t}) \Big) . 
$$ \end{enumerate} \end{lemma} Here the {\it Schur polynomial} $s_{\lambda}(\underline t)$ in $\underline t=(t_0 , \dots, t_{d-1})$ for a partition ${\lambda} = (\lambda_1, \dots , \lambda_d)$ is the polynomial defined by $$ s_{\lambda}(\underline t) := \frac{\det[t_j^{\lambda_{i}+d-i}] } {\det [ t_{j} ^{d-i} ] } = \frac{\det[t_j^{\lambda_{i}+d-i}] } {\Delta(\underline t) } , $$ where $1 \le i\le d$, $0 \le j \le d-1$ (see, {\it e.g.}, \cite[14.5 and A.9]{fulton}, \cite[Chapter I, \S3]{macdonald}). \begin{proof} \eqref{lemma:constant_part_linear_form_1}. The assertion is a direct consequence from the definition of $\Phi$ and a property of $\Delta(\underline{t})$. \eqref{lemma:constant_part_linear_form_3}. Using \eqref{lemma:constant_part_linear_form_1}, we have {\allowdisplaybreaks % \begin{align*} \Phi \Big( \prod_{i=0}^{d-1} t_i^{-i}f(\underline{t})s_{\lambda}(\underline{t}) \Big) &= \frac{1}{d!} \Phi \Big( \prod_{i=0}^{d-1}t_i^{-(d-1)} f(\underline{t})s_{\lambda}(\underline{t}) \sum_{\sigma \in \mathfrak{S}_d} \sgn(\sigma) \prod_{i=0}^{d-1}t_{\sigma(i)}^{d-1-i} \Big) \\ &= \frac{1}{d!} \Phi \Big( \prod_{i=0}^{d-1}t_i^{-(d-1)} f(\underline{t})s_{\lambda}(\underline{t}) \Delta(\underline{t}) \Big) \\ &= \frac{1}{d!} \Phi \Big( \prod_{i=0}^{d-1}t_i^{-(d-1)} f(\underline{t}) \det[t_j^{\lambda_l+d-l}]_{1 \le l \le d, 0 \le j \le d-1} \Big) \\ &= \frac{1}{d!} \sum_{\sigma \in \mathfrak{S}_d} \sgn(\sigma) \Phi \Big( \prod_{i=0}^{d-1} t_{\sigma(i)}^{-i+\lambda_{i+1}} f(\underline{t}) \Big) = \Phi \Big( \prod_{i=0}^{d-1}t_i^{-i+\lambda_{i+1}} f(\underline{t}) \Big) . \qedhere \end{align*} }% \end{proof} To simplify the notation, for a finite set of integers $\{ a_i \}_{0 \le i \le d-1}$, set $$ \pr{a_i} := \prod_{0 \le i \le d-1} a_i ! , \quad \Delta(a_i) := \prod_{0 \le i< j \le d-1}(a_i - a_j) . $$ Setting $m! := \Gamma(m+1)$ for $m \in \mathbb Z$, we have $1/m! = 0$ if $m < 0$. 
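The factorial determinant evaluation that drives the next step (recorded below from \cite[Example A.9.3]{fulton}) can be checked with exact rational arithmetic; a small sketch, added here for illustration only (function names ours):

```python
from fractions import Fraction
from math import factorial

def det(m):
    # exact determinant over Fraction via Gaussian elimination
    m = [row[:] for row in m]
    n = len(m)
    sign = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            sign = -sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    out = Fraction(sign)
    for i in range(n):
        out *= m[i][i]
    return out

def lhs(xs):
    # det[ 1 / (x_i + j)! ] for 0 <= i, j <= d-1
    d = len(xs)
    return det([[Fraction(1, factorial(x + j)) for j in range(d)] for x in xs])

def rhs(xs):
    # Delta(x_i) / prod_i (x_i + d - 1)!
    d = len(xs)
    num, den = 1, 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= xs[i] - xs[j]
    for x in xs:
        den *= factorial(x + d - 1)
    return Fraction(num, den)

print(lhs([5, 3, 1]) == rhs([5, 3, 1]), lhs([4, 2, 1, 0]) == rhs([4, 2, 1, 0]))
```

Both comparisons are exact, since every entry is a rational number.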
\begin{proposition}[Evaluation Formula] \label{prop:evaluation_formula} For $k = (k_0 , \dots, k_{d-1}) \in \mathbb Z_{\ge 0}^d$, we have $$ \Phi\Big(\prod_{i=0}^{d-1}t_i^{k_i}\Big) = \frac{(-1)^{d(d-1)/2} \Delta(k_i)} {\{k_i+d-1\}!}. $$ \end{proposition} \begin{proof} We have {\allowdisplaybreaks % \begin{align*} \Phi\Big(\prod_{i=0}^{d-1}t_i^{k_i}\Big) &= \const_{\underline{t}} \Big( \sum_{\sigma\in \mathfrak S_d} \sgn(\sigma)\prod_{i=0}^{d-1}\Big(t_i^{k_i+d-1-\sigma(i)} \exp \Big(\frac{1}{t_i}\Big) \Big) \Big) \\ &= \sum_{\sigma\in \mathfrak S_d} \sgn(\sigma)\prod_{i=0}^{d-1} \const_{{t_i}} \Big(t_i^{k_i+d-1-\sigma(i)} \exp \Big(\frac{1}{t_i}\Big) \Big) \\ &= \sum_{\sigma\in \mathfrak S_d} \frac{\sgn(\sigma)} {\{k_i+d-1-\sigma(i)\}!} = \det \begin{bmatrix} \dfrac{1} {(k_i+d-1-j)!} \end{bmatrix} _{0 \le i,j \le d-1} \\ &= \frac{(-1)^{d(d-1)/2} \Delta(k_i)} {\{k_i+d-1\}!}. \end{align*} } The last equality follows from the lemma below. \end{proof} \begin{lemma} [{\cite[Example A.9.3]{fulton}}] \label{lemma:det} $$ \det \begin{bmatrix} \dfrac{1}{(x_i + j)! } \end{bmatrix} _{0 \le i,j \le d-1} = \frac{\Delta(x_i)}{\pr{x_i + d-1} } . $$ \end{lemma} \begin{proof}[Proof of Theorem \ref{theorem:main_theorem}] By Proposition \ref{prop:Laurent_series} and Lemma \ref{lemma:constant_part_linear_form} \eqref{lemma:constant_part_linear_form_1} with $A:= A^{*}(X)\otimes \mathbb Q$, we have {\setlength{\multlinegap}{36pt} \begin{multline} \label{last computation} \pi_* \ch(\det \mathcal Q) = \Phi \Big( \prod_{i=0}^{d-1} t_i^{-(d-1-i)} \prod_{i=0}^{d-1} \big( t_i^{r -d}s(\mathcal E, t_i) \big) \Big) \\= (-1)^{d(d-1)/2} \Phi \Big( \prod_{i=0}^{d-1} t_i^{-i} \prod_{i=0}^{d-1}\big( t_i^{r-d}s(\mathcal E, t_i) \big) \Big) . 
\end{multline} }% Since $$ \prod_{i=0}^{d-1}s(\mathcal E, t_i) = \sum_{k} \prod_{i=0}^{d-1} s_{k_i}(\mathcal E) t_i^{k_i} , $$ it follows from Proposition \ref{prop:evaluation_formula} that the rightmost side of (\ref{last computation}) is equal to $$ (-1)^{d(d-1)/2} \sum_k \Phi \Big( \prod_{i=0}^{d-1} t_i^{r -d+k_i-i} \prod_{i=0}^{d-1} s_{k_i}(\mathcal E) \Big) = \sum_k \frac{\Delta(k_{i}-i)}{ \{r +k_{i}-i-1\}!} \prod_{i=0}^{d-1}s_{k_i}(\mathcal E) , $$ where $k = (k_0, \dots , k_{d-1}) \in \mathbb Z_{\ge 0}^d$. Thus we obtain the conclusion. \end{proof} \begin{proof}[Proof of Corollary \ref{corollary:degree_formula}] By assumption ${\mathbb G_X({d}, \mathcal E)}$ is projective, and the tautological line bundle $\mathcal O_{\mathbb P_X(\wedge^{d} \mathcal E)}(1)$ defines an embedding $\mathbb P_X(\wedge^{d} \mathcal E) \hookrightarrow \mathbb P(H^0(X, \wedge^{d} \mathcal E))$. Therefore ${\mathbb G_X({d}, \mathcal E)}$ can be regarded as a projective variety in $\mathbb P(H^0(X, \wedge^{d} \mathcal E))$ via the relative Pl\"ucker embedding ${\mathbb G_X({d}, \mathcal E)} \hookrightarrow \mathbb P_X(\wedge^{d} \mathcal E)$ over $X$ defined by the quotient $\wedge^d\pi^*\mathcal E \to \wedge^d \mathcal Q=\det \mathcal Q$. Since the hyperplane section class of ${\mathbb G_X({d}, \mathcal E)}$ is equal to the Pl\"ucker class $\theta$, we obtain the conclusion, taking the degree of the equality in Theorem \ref{theorem:main_theorem}. 
\end{proof} \begin{proof}[Proof of Theorem \ref{theorem:another_formula}] By Lemmas \ref{lemma:Cauchy_formula} below, \ref{lemma:constant_part_linear_form} \eqref{lemma:constant_part_linear_form_3} and Proposition \ref{prop:evaluation_formula}, the rightmost side of (\ref{last computation}) is equal to {\allowdisplaybreaks % \begin{align*} (-1)^{d(d-1)/2} & \sum_{\lambda} \Phi \Big( \prod_{i=0}^{d-1}t_i^{r-d-i} s_{\lambda}(\underline{t}) \Big) \varDelta_{\lambda}(s(\mathcal E)) \\ &= (-1)^{d(d-1)/2} \sum_{\lambda} \Phi \Big( \prod_{i=0}^{d-1}t_i^{r-d-i+\lambda_{i+1}} \Big) \varDelta_{\lambda}(s(\mathcal E)) \\ &= \sum_{\lambda} \frac{\Delta(r-d-i+\lambda_{i+1})} {\{ r-d-i+\lambda_{i+1} + (d-1) \}!} \varDelta_{\lambda}(s(\mathcal E)) \\ &= \sum_{\lambda} \frac{\Delta(\lambda_{i+1}-(i+1))}{ \{\lambda_{i+1}+r-(i+1)\}!} \varDelta_{\lambda}(s(\mathcal E)) = \sum_{\lambda} \frac{f^{\lambda + \varepsilon}}{\vert \lambda + \varepsilon \vert !} \varDelta_{\lambda}(s(\mathcal E)) . \qedhere \end{align*} }% \end{proof} \begin{lemma} \label{lemma:Cauchy_formula} $$ \prod_{i=0}^{d-1} s(\mathcal E,t_i) = \sum_{\lambda} \varDelta_{\lambda}(s(\mathcal E)) s_{\lambda}(\underline{t}) . $$ \end{lemma} \begin{proof} Using the Cauchy identity \cite[Chapter I, (4.3)]{macdonald} and the Jacobi-Trudi identity \cite[Lemma A.9.3]{fulton}, we have $$ \prod_{i=0}^{d-1} s(\mathcal E,t_i)= \prod_{i=0}^{d-1}\frac{1}{c(\mathcal E,-t_i)} =\prod_{i=0}^{d-1}\prod_{j=1}^r \frac{1}{1-\alpha_jt_i} = \sum_{\lambda}s_{\lambda}(\underline{\alpha})s_{\lambda}(\underline{t}) = \sum_{\lambda} \varDelta_{\lambda}(s(\mathcal E)) s_{\lambda}(\underline{t}) , $$ where $\underline \alpha = \{ \alpha_1, \dots , \alpha_{r} \}$ are the Chern roots of the vector bundle $\mathcal E$. \end{proof} \section{Appendix: A generalization of {Cauchy Determinant Formula}} Consider a polynomial ring $R_1:=A[\xi_0, \dots, \xi_{r-1}]$ in $r$ variables over a $\mathbb Q$-algebra $A$. 
Denote by $c''_i$ the $i$-th elementary symmetric polynomial in $\xi_d,\dots, \xi_{r-1}$, and by $c_i$ the $i$-th elementary symmetric polynomial in $\xi_0,\dots, \xi_{r-1}$. We define the Segre series $s(t)$ by $$ s(t):=\frac{1}{\prod_{i=0}^{r-1}(1-\xi_it)}. $$ Set $R_2:=A[\xi_0, \dots, \xi_{d-1},c''_1, \dots, c''_{r-d}]$, and $R_3:=A[c_1, \dots, c_{r}]$. Then, $R_1 \supset R_2 \supset R_3$, and $R_1$ (resp. $R_2$) is a free $R_3$-module generated by $\{\xi_0^{i_0}\cdots \xi_{r-1}^{i_{r-1}}\}$ (resp. $\{\xi_0^{i_0}\cdots \xi_{d-1}^{i_{d-1}}\}$), where $0\leq i_l \leq r-l-1$ (see, {\it e.g.}, \cite[Chapitre 4, \S6]{bourbaki}, \cite[\S\S2--3]{laksov}). In particular, we have a decomposition, \begin{equation}\label{equation:cohomology_ring_genral_setting} R_2 = \bigoplus_{\substack{0\le i_l \le {r} -l-1\\(0\le l \le {d} -1)}} R_3 \cdot \xi_0^{i_0} \xi_1^{i_1} \cdots \xi_{{d}-1}^{i_{{d} -1}} . \end{equation} For $\alpha \in R_2$, we denote by $\coeff_{\underline{\xi}}(\alpha)$ the coefficient of $\xi_0^{r-1}\cdots \xi_{d-1}^{r-d}$ in $\alpha$ with respect to the decomposition \eqref{equation:cohomology_ring_genral_setting}. Let ${\mathcal A}$ (resp. ${\mathcal A}'$, ${\mathcal A}''$) be the anti-symmetrizer for the variables $\{ \xi_0, \dots,$ $\xi_{r-1} \}$ (resp. $\{ \xi_0,\dots, \xi_{d-1} \}$, $\{ \xi_{d} , \dots,$ $\xi_{r-1} \}$), that is, ${\mathcal A}(\alpha):=\sum_{\sigma \in \mathfrak S_r} \sgn(\sigma)\sigma(\alpha)$ $(\alpha \in R_1)$, for instance. \begin{proposition}[Generalization of {Cauchy Determinant Formula}] \label{prop:generalization_of_Cauchy_identity} We have an equality $$ {\mathcal A} \Big( \frac{\Delta(\xi_0, \dots,\xi_{d-1})\Delta(\xi_d,\dots,\xi_{r-1})} {\prod_{0\leq i,j\leq d-1}(\tau_j-\xi_i)} \Big) = \frac{\Delta(\xi_0, \dots,\xi_{r-1})} {\prod_{0\leq i\leq r-1,0\leq j\leq d-1}(\tau_j-\xi_i)} . 
$$ By setting $\tau_i:=\dfrac{1}{t_i}$, we have $$ {\mathcal A} \Big( \frac{\Delta(\xi_0,\dots, \xi_{d-1}) \cdot \Delta(\xi_d, \dots,\xi_{r-1})}{\prod_{0\leq i,j \leq d-1}(1-\xi_it_j)} \Big)=\frac{\Delta(\xi_0,\dots, \xi_{r-1})\prod_{i=0}^{d-1}t_i^{r-d}} {\prod_{0\leq i \leq r-1, 0 \leq j\leq d-1}(1-\xi_it_j)} . $$ \end{proposition} \begin{proof} The fractional expression, $$ {\mathcal A} \Big( \frac{\Delta(\xi_0, \dots,\xi_{d-1})\Delta(\xi_d, \dots,\xi_{r-1})} {\prod_{0\leq i,j\leq d-1}(\tau_j-\xi_i)} \Big) \prod_{0\leq i\leq r-1,0\leq j \leq d-1}(\tau_j-\xi_i) $$ is actually a homogeneous polynomial in the variables $\xi_0, \dots, \xi_{r-1}$, $\tau_0, \dots, \tau_{d-1}$, of degree ${d(d-1)}/{2}+{(r-d)(r-d-1)}/{2}-d^2+rd={r(r-1)}/{2}$, and anti-symmetric with respect to the $\xi_i$. Therefore it is a multiple of $\Delta(\xi_0, \dots, \xi_{r-1})$. By comparing the coefficient of $\xi_0^{r-1}\cdots \xi_{r-1}^0$, we see that those polynomials are equal to each other, and we obtain the first equality. The second equality follows from the first one. \end{proof} \begin{proof}[Another Proof of Lemma \ref{lemma:monomial}] Let $G(\underline t)$ be the generating function of $\coeff_{\underline{\xi}} (\xi_0^{p_0}\cdots \xi_{d-1}^{p_{d-1}})$, that is, $$ G(\underline t) := \sum_{p_0, \dots, p_{d-1}\geq 0} \coeff_{\underline{\xi}}(\xi_{0}^{p_{0}} \cdots \xi_{d-1}^{p_{d-1}} ) t_0^{p_0} \cdots t_{d-1}^{p_{d-1}} . $$ For $0\leq i_l \leq r-l-1$, we have $$ {\mathcal A}(\xi_0^{i_0}\cdots \xi_{r-1}^{i_{r-1}})= \begin{cases} \Delta(\xi_0, \dots, \xi_{r-1}) , & (i_0,\dots, i_{r-1})= (r-1, \dots, 0) , \\ 0 , & (i_0,\dots, i_{r-1})\neq (r-1, \dots, 0). \\ \end{cases} $$ Since ${\mathcal A}$ is $R_3$-linear, we have an equality, $$ {\mathcal A}(\alpha\cdot\xi_d^{r-d-1}\cdots \xi_{r-1}^0) = \coeff_{\underline{\xi}} (\alpha) \Delta(\xi_0,\dots, \xi_{r-1}) $$ in $R_1$ for $\alpha\in R_2$. 
Therefore, {\allowdisplaybreaks % \begin{align*} \begin{split} {\Delta(\xi_0,\dots, \xi_{r-1})} {G(\underline t)} &= \sum_{p_0, \dots, p_{d-1}\geq 0}{\mathcal A}(\xi_0^{p_0}\cdots \xi_{d-1}^{p_{d-1}} \cdot\xi_d^{r-d-1}\cdots \xi_{r-1}^0)\ t_0^{p_0}\cdots t_{d-1}^{p_{d-1}} \\ &= {\mathcal A} \Big( \frac{\xi_d^{r-d-1}\cdots \xi_{r-1}^0} {(1-\xi_0t_0)\cdots (1-\xi_{d-1}t_{d-1})} \Big) \\ &= {\mathcal A} \Big( {\mathcal A}' \Big( \frac{1}{(1-\xi_0t_0)\cdots (1-\xi_{d-1}t_{d-1})} \Big) {\mathcal A}''(\xi_d^{r-d-1}\cdots \xi_{r-1}^0) \Big) \\ &= {\mathcal A} \Big( \frac{ \Delta(t_0,\dots, t_{d-1}) \Delta(\xi_0,\dots, \xi_{d-1}) \Delta(\xi_d, \dots,\xi_{r-1})}{\prod_{0\leq i,j \leq d-1}(1-\xi_it_j)} \Big) \\ &= {\mathcal A} \Big( \frac{ \Delta(\xi_0,\dots, \xi_{d-1}) \Delta(\xi_d, \dots,\xi_{r-1})}{\prod_{0\leq i,j \leq d-1}(1-\xi_it_j)} \Big) \Delta(t_0,\dots, t_{d-1}) . \end{split} \end{align*} }% Here we used the equality, $$ {\mathcal A}(f(\xi_0, \dots, \xi_{d-1}) g(\xi_d,\dots, \xi_{r-1})) = {\mathcal A}({\mathcal A}'(f(\xi_0, \dots, \xi_{d-1})) {\mathcal A}''(g(\xi_d,\dots, \xi_{r-1}))) $$ and the Cauchy determinant formula (\cite[p.67, I.4, Example 6]{macdonald}). Finally, using Proposition \ref{prop:generalization_of_Cauchy_identity}, we see that $$ {G(\underline t)} = \frac{\Delta(t_0,\dots, t_{d-1})\prod_{i=0}^{d-1}t_i^{r-d}} {\prod_{0\leq i \leq r-1, 0 \leq j\leq d-1}(1-\xi_it_j)} = \Delta(t_0,\dots, t_{d-1})\prod_{i=0}^{d-1}t_i^{r-d} s(t_i) , $$ and this proves Lemma \ref{lemma:monomial} with $R_{1}:= A^{*}(X)$ and $R_{2}:= A^{*}(\X{d})= A^{*}(\G{d-1})$. \end{proof} \smallskip \noindent% {\it Acknowledgments.} The authors also thank Professor Hiroshi Naruse and Professor Takeshi Ikeda for useful discussions and kind advice. The first author is supported by JSPS KAKENHI Grant Number 25400053. The second author is supported by JSPS KAKENHI Grant Number 15H02048.
\section*{Appendix} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.2\textwidth} \includegraphics[scale=1.2]{figures/drawing_convnet.pdf} \caption{\mbox{\textit{ConvNet}~}} \label{fig:drawing_convnet} \end{subfigure} ~ \begin{subfigure}[b]{0.2\textwidth} \includegraphics[scale=1.2]{figures/drawing_small_convnet.pdf} \caption{\mbox{\textit{SmallConvNet}~}} \label{fig:drawing_small_convnet} \end{subfigure} \caption{The \mbox{\textit{ConvNet}~} and \mbox{\textit{SmallConvNet}~} architectures. White boxes denote convolutional layers, black thick lines stand for fully connected layers. Arrows show information flow, dashed lines indicate a max-pooling operation.} \end{figure} \begin{figure*}[ht] \centering \includegraphics[scale=1.0]{figures/drawing_unet.pdf} \caption{A schematic drawing of the \mbox{\textit{AUNet}~} architecture. White boxes denote convolutional layers, their width corresponds to the number of convolutional kernels, their height corresponds to the size of the resulting feature maps. Arrows show information flow, dashed lines indicate either a max-pooling operation, if the line goes from a higher to a lower box, or an upscaling operation if the line goes from a lower to a higher box. Grey boxes next to white boxes denote a concatenation of feature maps. We made two adaptations to the original UNet architecture. The first is the use of upscaling operations instead of deconvolutions, and the second is that the last layer uses convolutions with a large kernel width in the frequency direction. \label{fig:drawing_unet}} \end{figure*} \input{figures/architectures.tex} \section{Introduction} The problem of polyphonic transcription can be formally described as the transformation of a time-ordered sequence of (audio) samples $\mathbf{X} = (\mathbf{x}_t)_{t=0}^{T}, \mathbf{x}_t \in \mathcal{X}$ into a set of tuples $(t_{s}, t_{e}, F_0, A)$, describing start, end, fundamental frequency or pitch and optionally amplitude of the notes that were played. 
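To make the target representation concrete, the following sketch converts such note tuples into a framewise piano-roll matrix; the frame rate here is an arbitrary illustrative choice, not a value used in the experiments below.

```python
# Illustrative sketch: convert note tuples (t_s, t_e, pitch), given in seconds,
# into a framewise piano-roll target. Frame rate and tonal range are assumptions.
FPS = 100          # frames per second (hypothetical choice)
K = 88             # tonal range of the piano, MIDI pitches 21..108
MIDI_MIN = 21

def to_piano_roll(notes, n_frames):
    """notes: iterable of (t_s, t_e, midi_pitch); returns an n_frames x K 0/1 matrix."""
    y = [[0] * K for _ in range(n_frames)]
    for t_s, t_e, pitch in notes:
        start, end = round(t_s * FPS), min(round(t_e * FPS), n_frames)
        for t in range(start, end):
            y[t][pitch - MIDI_MIN] = 1
    return y

# Two overlapping notes: C4 for half a second, E4 from 0.25 s to 1.0 s.
roll = to_piano_roll([(0.0, 0.5, 60), (0.25, 1.0, 64)], n_frames=100)
```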
A slightly easier problem is framewise transcription, or tone-quantized multi-$\mathrm{F}_0$ estimation, where the output is a time-ordered sequence $\mathbf{Y} = (\mathbf{y}_t)_{t=0}^{T}, \mathbf{y}_t \in \mathcal{Y}$, with $\mathbf{y}_t \in \{0, 1\}^{\mathrm{K}}$ being a vector of indicator variables and $\mathrm{K}$ denoting the tonal range. In other words, the $\mathbf{y}$ vectors specify the note pitches believed to be active in a given audio frame $\mathbf{x}$. Another simplifying assumption is usually the presence of only a single instrument, which more often than not turns out to be the piano, having a tonal range of $\mathrm{K} = 88$. We will focus on framewise transcription systems only, as they turn out to be a crucial stage in the full transcription process, especially in so-called \textit{hybrid systems} that post-process the framewise output with dynamic probabilistic models to extract the aforementioned tuples describing musical notes, such as \cite{Sigtia_Benetos_Boulanger_Weyde_Avila_Dixon_2015, Sigtia_Benetos_Dixon_2016}. A diverse set of methods has been employed to tackle the framewise transcription problem, with non-negative matrix factorization being one of the more prominent methods. In their seminal paper using non-negative matrix factorization (NMF) for polyphonic transcription, Smaragdis and Brown \cite{Smaragdis_Brown_2003} already identified an undesirable property of the technique. NMF seeks to minimize the reconstruction error $\|\mathbf{X} - \mathbf{W}\mathbf{H}\|_{\mathrm{N}}$, where $\mathbf{X} \in \mathbb{R}_{+}^{D \times T}$ is the vector valued signal to reconstruct, $\mathbf{W} \in \mathbb{R}_{+}^{D \times d}$ is the dictionary, $\mathbf{H} \in \mathbb{R}_{+}^{d \times T}$ are the activations in time of the bases and $\mathrm{N}(\cdot)$ is a matrix norm. 
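This objective can be minimized with the classic multiplicative updates of Lee and Seung; the sketch below runs them for the Frobenius norm on random non-negative data standing in for a spectrogram, and is not the exact NMF variant used in the systems cited here.

```python
import numpy as np

# Minimal NMF via Lee-Seung multiplicative updates for the Frobenius norm.
# Random non-negative data stands in for a spectrogram X (D x T); d is the
# dictionary size. Dimensions are illustrative.
rng = np.random.default_rng(0)
D, T, d = 40, 60, 5
X = rng.random((D, T))
W = rng.random((D, d)) + 0.1   # dictionary
H = rng.random((d, T)) + 0.1   # activations in time

eps = 1e-9                     # avoids division by zero
err = [np.linalg.norm(X - W @ H)]
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    err.append(np.linalg.norm(X - W @ H))
```

The updates preserve non-negativity by construction and monotonically decrease the reconstruction error.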
If no additional constraints are applied and no a priori knowledge is exploited, Smaragdis and Brown \cite{Smaragdis_Brown_2003} note that the method learns a dictionary of \textit{unique events}, rather than individual notes. Two remedies for this problem are also named: either choose sets of notes in such a way that from their intersection single notes can be identified, or present all individual notes in isolation, so a meaningful dictionary can be learned first. A similar effect is achievable if the dictionary matrix is harmonically constrained. The latter two methods seem to be popular choices in the literature \cite{Smaragdis_Brown_2003, Benetos_Ewert_Weyde_2014, Bertin_Badeau_Richard_2007, Bertin_Badeau_Vincent_2009, Dessein_Cont_Lemaitre_2010, Grindlay_Ellis_2009, OHanlon_Plumbley_2014, Vincent_Bertin_Badeau_2010, Weninger_Kirst_Schuller_Bungartz_2013, Khlif_Sethu_2015} to solve this problem for NMF. We conduct a simple experiment to examine whether neural networks trained for a piano transcription task suffer from the same \textit{disentanglement} problems, followed by an analysis of two very different neural network architectures and the extent to which they exhibit this behavior. \section{Methods} Lacking proper theoretical analytic tools for the model class of neural networks, we resort to empirical tools, namely computational experiments. We train several deep neural networks in a supervised fashion for a framewise piano transcription task and analyze their error behavior. Adhering very closely to already established model architectures, as exemplified in \cite{Sigtia_Benetos_Dixon_2016, Kelz_Dorfer_Korzeniowski_Boeck_Arzt_Widmer_2016}, we deviate only in very few aspects, mostly concerning hyperparameter choices that affect training time but have little effect on performance. 
The parametrized functions we learn are of the following form: $f_{net}: \mathcal{X} \rightarrow \mathcal{Y}$, with $f_{net}$ in turn being composed of multiple simpler functions, commonly referred to as \textit{layers} in the neural network literature. An example of a network with an input, hidden and output layer would be $f_{net}(\mathbf{x}) = f_3(f_2(f_1(\mathbf{x}; \theta_1); \theta_2); \theta_3)$, where $f_i(\mathbf{z}_{i-1};\theta_i) = \sigma(\mathbf{W}_i \mathbf{z}_{i-1} + \mathbf{b}_i)$ with $\theta_i = \{\mathbf{W}_i, \mathbf{b}_i\}$ having matching dimensions to fit the output $\mathbf{z}_{i-1}$ of the previous layer. $\sigma(\cdot)$ is a nonlinear function applied elementwise. We note here that the functions $f_i$ may actually have more than one input $\textbf{z}$, which may also come from layers other than the directly previous one. We do not explicitly model convolution as it can be expressed as a matrix-matrix product, given $\textbf{W}$ and $\textbf{z}$ have the right shapes. We choose neural network architectures already established to work well for framewise transcription. Our first choice is exactly the \mbox{\textit{ConvNet}~} architecture as proposed in \cite{Kelz_Dorfer_Korzeniowski_Boeck_Arzt_Widmer_2016}, which achieves state-of-the-art results for framewise transcription on a popular benchmark dataset. We also designed a much smaller version of this network, which will be referred to as \mbox{\textit{SmallConvNet}}. Additionally, we borrow an architecture originally employed for medical image segmentation, called the UNet \cite{Ronneberger_Fischer_Brox_2015}, and make two small modifications to adapt it for our purposes. We call the adapted architecture \mbox{\textit{AUNet}}. It is able to directly integrate information at different scales, which is beneficial for smoothing in the temporal direction, and identifying groups of partials and their distance in the frequency dimension. 
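The layer composition described above can be sketched in a few lines; the layer sizes and the logistic nonlinearity are illustrative assumptions, not the architectures analyzed in this paper.

```python
import numpy as np

# Sketch of the composition f_net(x) = f3(f2(f1(x; th1); th2); th3), where each
# layer computes sigma(W z + b). Shapes are illustrative: 229 spectrogram bins
# in, K = 23 note activations out.
rng = np.random.default_rng(1)

def layer(z, W, b):
    return 1.0 / (1.0 + np.exp(-(W @ z + b)))    # elementwise logistic sigma

sizes = [229, 64, 32, 23]                        # input, two hidden, output
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

def f_net(x):
    z = x
    for W, b in params:                          # feed-forward chain of layers
        z = layer(z, W, b)
    return z                                     # per-note activation in (0, 1)

y = f_net(rng.standard_normal(229))
```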
The precise definitions for all networks, as well as schematic drawings of the architectures for the \mbox{\textit{ConvNet}}, the \mbox{\textit{SmallConvNet}~} and the \mbox{\textit{AUNet}~} can be found in the appendix. The definitions are listed in tables \ref{table:convnet}, \ref{table:small_convnet} and \ref{table:aunet} whereas the schemata are depicted in figures \ref{fig:drawing_convnet}, \ref{fig:drawing_small_convnet} and \ref{fig:drawing_unet} respectively. \section{Datasets} We use a synthetic dataset to conduct small scale experiments with the \mbox{\textit{SmallConvNet}}. A subset of the MAPS dataset \cite{Emiya_Badeau_David_2010} is used to train and test the \mbox{\textit{ConvNet}~} and the \mbox{\textit{AUNet}~} models. The MAPS dataset consists of several classical piano pieces, along with isolated notes and common chords, rendered with 7 different software synthesizers (samplers) in addition to 2 Disklavier piano recordings, one with the microphone close to the piano, and one with the microphone farther away and thus containing room acoustics. We now describe each subset in turn: \subsection{FLUID} For focused computational experiments we synthesize two-note combinations and isolated notes. We only use notes within an $11$ semitone range around a reference pitch (C4/MIDI60), creating $\binom{23}{2} = 253$ two-note intervals. The onset and offset of the two notes are exactly synchronous. We use the free software sampler Fluidsynth\footnote{\url{www.fluidsynth.org}} together with the freely available Fluid-R3-GM\footnote{\url{http://www.musescore.org/download/fluid-soundfont.tar.gz}} soundfont to render a dataset FLUID-COMBI where the train and validation sets both consist of the aforementioned intervals, whereas the test set contains individual notes only. For FLUID-ISOL, the individual notes are in the train and validation sets, whereas the test set contains the intervals. 
So for both datasets the intersection of unique events in train and test sets is the empty set. The error behavior of the \mbox{\textit{SmallConvNet}~} on this dataset is discussed in section \ref{sec:results_fluid}. \subsection{MAPS-MUS} This subset consists only of the rendered classical piano pieces in the MAPS dataset. We adopt the more realistic train-test scenario described in \cite{Sigtia_Benetos_Dixon_2016}, which is referred to as \textit{Configuration-II}. It is more realistic because it trains only on synthetic renderings, and tests on the real piano recordings. We select the training set as all pieces from 6 synthesizers, the validation set comprises all renderings from a randomly selected 7th, and the test set is made up of all Disklavier recordings. We will refer to this dataset as MAPS-MUS from now on. The respective error behaviors of the two larger models, the \mbox{\textit{ConvNet}~} and the \mbox{\textit{AUNet}}, on this dataset are discussed in section \ref{sec:results_maps_mus}. We did not use the test set for conducting any error analysis, other than measuring final performance after model selection, to make sure that both models actually achieve state-of-the-art results. The rationale behind this is explained in detail in section \ref{sec:results_maps_mus}. \section{Results} \subsection{\mbox{\textit{SmallConvNet}~} and FLUID} \label{sec:results_fluid} We start with a controlled empirical analysis of the \textit{disentanglement} problem using our synthetic datasets. We train the \mbox{\textit{SmallConvNet}~} for framewise transcription on logarithmically filtered, log-magnitude spectrograms with $229$ bins, as proposed in \cite{Kelz_Dorfer_Korzeniowski_Boeck_Arzt_Widmer_2016}. The output size of the network is limited to $23$ notes, and it has only $5327$ parameters, to make it approximately comparable to NMF with a dictionary matrix $\textbf{W} \in \mathbb{R}^{229 \times 23}$ having $5267$ parameters. 
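A quick sanity check of the counts quoted above:

```python
from itertools import combinations
from math import comb

# 23 pitches in an 11-semitone range around C4 (MIDI 60) give 253 two-note intervals.
pitches = range(60 - 11, 60 + 11 + 1)   # MIDI 49..71
intervals = list(combinations(pitches, 2))
assert len(pitches) == 23
assert len(intervals) == comb(23, 2) == 253

# The NMF dictionary W in R^{229 x 23} has 229 * 23 = 5267 parameters.
assert 229 * 23 == 5267
```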
We note that overfitting, i.e., fitting noise in the data, is not the real problem here, as the acoustic properties of the sound sources are the same for train and test set. The general idea of this experiment is to discover to what extent the network is capable of detecting isolated notes, if all it has ever seen were combinations, and vice versa. We can find a partial answer to this question in figures \ref{fig:smallconv_fluid-combi} and \ref{fig:smallconv_fluid-isol}. The figures all show the proportion of frames where all notes have been exactly identified, and contrast it with the proportion of frames in which notes have been added or omitted. This means that the three quantities do not necessarily sum to one, because notes could have been added \textit{and} some others omitted in a frame. In figure \ref{fig:smallconv_fluid-combi} we can observe that after seeing only \mbox{two-note} intervals, the network is able to generalize to isolated notes to some extent. While a surprising number of individual notes are transcribed perfectly, some notes are still not recognized properly. For these notes, their companion notes from the train set are predicted as simultaneously sounding, indicating a failure to disentangle note combinations during training. \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/tei/smallconv_fluid-combi-c4/bw_isolated_frames.pdf} \caption{For isolated notes present \textbf{only} in the test set, this is the proportion of exactly transcribed frames, along with the proportions of frames that had notes added or omitted, respectively. Transcriptions stem from the \mbox{\textit{SmallConvNet}~} trained on FLUID-COMBI. 
\label{fig:smallconv_fluid-combi}} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/tei/smallconv_fluid-isol-c4/bw_intervals_frames.pdf} \caption{For a selection of the $23$ best transcribed intervals present \textbf{only} in the test set, this is the proportion of exactly transcribed frames, along with the proportions of frames that had notes added or omitted, respectively. Transcriptions stem from the \mbox{\textit{SmallConvNet}~} trained on FLUID-ISOL. \label{fig:smallconv_fluid-isol}} \end{figure} In figure \ref{fig:smallconv_fluid-isol} we see that the network utterly fails to generalize from isolated notes to note combinations, with only two exceptions. We plotted only the $23$ best transcribed note combinations, as for the remaining $230$ intervals the proportion of omission errors is very close to or even exactly $1.0$. The network does manage to transcribe two of the intervals with acceptable accuracy, however an explanation of why exactly these two intervals could be recognized eludes us at the moment. We might draw a preliminary conclusion from these results: the strategy most successful for alleviating the \textit{disentanglement} problem for NMF, namely learning the dictionary $\textbf{W}$ from isolated notes, does not work for neural transcription systems. The NMF of spectrograms is a linear system, and therefore has the superposition property. Its response to multiple inputs is the sum of the responses for individual inputs. This is not necessarily true for neural networks, as they \textit{may} learn to approximate a linear function, but do not \textit{have} to. The other strategy mentioned in \cite{Smaragdis_Brown_2003}, namely showing combinations of notes to the networks, seems to work fairly well for the majority of isolated notes, as can be observed in figure \ref{fig:smallconv_fluid-combi}. Unfortunately, the number of combinations for the tonal range of the piano grows large very quickly. 
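How quickly the number of combinations grows can be quantified directly:

```python
from math import comb

# Number of note combinations on an 88-key piano for each maximum polyphony level.
def n_combinations(max_polyphony, keys=88):
    return sum(comb(keys, i) for i in range(2, max_polyphony + 1))

counts = {p: n_combinations(p) for p in range(2, 7)}
# Already for a maximum polyphony of six: counts[6] == 583_552_442.
```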
Even when assuming a maximum polyphony of only $6$, we would already need to show $\sum_{i=2}^{6} \binom{88}{i} = 583{,}552{,}442$ combinations to the network. \subsection{\mbox{\textit{ConvNet}}, \mbox{\textit{AUNet}~} and MAPS-MUS} \label{sec:results_maps_mus} We now turn our attention to a more musically relevant dataset. We train several instances of both a \mbox{\textit{ConvNet}~} and an \mbox{\textit{AUNet}}, closely adhering to the training procedure described in \cite{Kelz_Dorfer_Korzeniowski_Boeck_Arzt_Widmer_2016}, and select the model for analysis that achieves the highest framewise f-measure on the validation set. Our analysis of error behavior is restricted to the validation set as well, simply because we want to avoid learning too much about the composition of the MAPS test set. The scenario is the same, as the validation set consists of pieces rendered by an unseen synthesizer. We feel that this also lends some additional strength to our argument, as we conduct our analysis on the best performing model for this set. Two different scenarios are considered. The first scenario looks at the transcription results for notes and note combinations that are present in both the train and validation set, referred to as ``shared'' combinations. A low proportion of additions will tell us that there were a sufficient number of examples for this particular combination, so it could not be overshadowed by combinations containing additional notes. A high proportion of omissions will indicate issues with generalization to different acoustic properties. If both proportions are high, this indicates that one or more notes in the combination have been mistaken for others. The second scenario examines the transcription results for notes and note combinations that are present only in the validation set, referred to as ``unshared''. 
If the proportion of exactly transcribed frames is high, the network must have learned to disentangle individual notes from different combinations shown to it, and be able to recognize these disentangled parts in new, unseen combinations. A high proportion of additions will mainly tell us that the network has failed to disentangle parts, but still tries to combine the ones it knows about. A high proportion of omissions points to either a failure to simultaneously disentangle and recombine, a failure to generalize to different acoustic properties, or more probably both. \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/tei/convnet_maps_mus/bw_shared_frames.pdf} \caption{For the most common note combinations present \textbf{both} in the train set and validation set, this is the proportion of exactly transcribed frames, along with the proportion of frames that had notes added or omitted, respectively. Transcriptions stem from the \mbox{\textit{ConvNet}~} trained on MAPS-MUS. \label{fig:convnet-maps-mus-shared}} \end{figure} In figure \ref{fig:convnet-maps-mus-shared} we can see two things: the most common note combinations present in both train and validation set are actually isolated notes, and the relative frequency of exactly transcribed notes is comparatively high. Unfortunately, we can also see that the proportion of frames in which additional notes were erroneously transcribed is much higher than we would prefer, pointing to both a lack of examples for these individual notes at train time and the failure to generalize from combinations. They are all confused with combinations every so often. The low proportion of omission errors for isolated notes indicates only mild difficulties in generalizing to different acoustical properties. Looking at figure \ref{fig:convnet-maps-mus-unshared}, we can see the error behavior of the network for the most common note combinations that are only present in the validation set. 
We notice a large number of omission errors, which also indicates a failure to generalize to unseen note combinations. A few combinations, such as (G3, A3, C4, D4), nevertheless stand out as being transcribed with great accuracy. We could find no satisfactory explanation for this so far, other than the suspicion that it has to do with their low polyphony. \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/tei/convnet_maps_mus/bw_unshared_frames.pdf} \caption{The most common note combinations present \textbf{only} in the validation set, and the proportion of exactly transcribed frames, along with the proportion of frames that had notes added or omitted, respectively. Transcriptions stem from the \mbox{\textit{ConvNet}~} trained on MAPS-MUS. \label{fig:convnet-maps-mus-unshared}} \end{figure} If we compare the results of the \mbox{\textit{ConvNet}~} (\mbox{figure \ref{fig:convnet-maps-mus-shared}}) and the \mbox{\textit{AUNet}~} (\mbox{figure \ref{fig:unet-maps-mus-shared}}) for the most common note combinations which are shared by the train and validation set, we can observe that the \mbox{\textit{AUNet}~} achieves marginally better exact transcription results across the board. In some cases, the proportion of added notes is reduced; however, this happens at the expense of a slightly increased amount of omitted note combinations. Likewise, the results for the \mbox{\textit{ConvNet}~} (\mbox{figure \ref{fig:convnet-maps-mus-unshared}}) and \mbox{\textit{AUNet}~} transcriptions (\mbox{figure \ref{fig:unet-maps-mus-unshared}}) for the ``unshared'' case appear to be very similar, indicating a comparable error behavior across very different architectures. 
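The per-frame proportions reported in the figures of this section can be computed as in the following sketch; it mirrors the description given earlier (a frame can count as both an addition and an omission, so the three proportions need not sum to one), but it is not the authors' exact evaluation code.

```python
# Frame-level statistics: exact, added, and omitted proportions over all frames.
def frame_stats(y_true, y_pred):
    """y_true, y_pred: equal-length lists of 0/1 indicator vectors."""
    exact = added = omitted = 0
    for t, p in zip(y_true, y_pred):
        true_set = {i for i, v in enumerate(t) if v}
        pred_set = {i for i, v in enumerate(p) if v}
        exact += true_set == pred_set
        added += bool(pred_set - true_set)     # spurious notes in this frame
        omitted += bool(true_set - pred_set)   # missed notes in this frame
    n = len(y_true)
    return exact / n, added / n, omitted / n

# One perfect frame, one frame that both adds a note and omits one.
stats = frame_stats([[1, 0, 1], [1, 0, 0]], [[1, 0, 1], [0, 1, 0]])
```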
\begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/tei/unet_maps_mus/bw_shared_frames.pdf} \caption{The most common note combinations present \textbf{both} in the train set and validation set, and proportion of exactly transcribed frames, along with the proportion of frames that had notes added or omitted, respectively. Transcriptions stem from the \mbox{\textit{AUNet}~} trained on MAPS-MUS. \label{fig:unet-maps-mus-shared}} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{figures/tei/unet_maps_mus/bw_unshared_frames.pdf} \caption{The most common note combinations present \textbf{only} in the validation set, and the proportion of exactly transcribed frames, along with the proportion of frames that had notes added or omitted, respectively. Transcriptions stem from the \mbox{\textit{AUNet}~} trained on \mbox{MAPS-MUS}. \label{fig:unet-maps-mus-unshared}} \end{figure} Concluding this section, we would like to emphasize that both architectures achieve the same framewise transcription results (or even slightly exceed them) on the MAPS dataset as reported in \cite{Kelz_Dorfer_Korzeniowski_Boeck_Arzt_Widmer_2016}, which currently defines the state of the art. In other words, it is unlikely that the problematic results reported above are due to the fact that we made poor hyperparameter choices. \section{Summary} We have experimentally shown that certain neural network architectures have difficulties \textit{disentangling} inputs which are superpositions or mixtures of individual parts, as discussed in section \ref{sec:results_maps_mus}. They learn to do so only if they are shown a large number of combinations whose constituent parts overlap, and they utterly fail to generalize to combinations when trained on individual parts of the mixture alone, as we determined in a small experiment described in section \ref{sec:results_fluid}. 
Any approach that tries to learn from a fixed set of combinations, for example one defined by a set of music pieces, without incorporating additional constraints or prior knowledge, as is done in \cite{Sigtia_Benetos_Boulanger_Weyde_Avila_Dixon_2015, Sigtia_Benetos_Dixon_2016, Kelz_Dorfer_Korzeniowski_Boeck_Arzt_Widmer_2016}, will suffer from this problem. The brute-force approach to solving the \textit{disentanglement} problem would be to show all possible combinations to the network. Unfortunately, this solution is intractable, due to the large tonal range and maximum polyphony of certain instruments. Arguably, this approach would also not necessarily force the networks to learn how to \textit{disentangle}, as they could, in principle, simply memorize all combinations. Learning a different note detector for each note, as done in \cite{Marolt_2004, Nam_Ngiam_Lee_Slaney_2011}, suffers from the same problems if the combinations shown to each detector are not diverse enough. Depending on the expressiveness of the model class, ``diverse enough'' could easily mean ``all combinations''. A partial solution to this problem might involve a modification of the network's loss function: an additional objective must explicitly specify the need to \textit{disentangle} individual notes. The network needs to learn to decompose a (nonlinear) mixture of signals into its constituent parts, a task commonly known as ``source separation''. Finding a formulation of a joint objective combining multi-label losses with a separation-encouraging penalty that solves this \textit{disentanglement problem} is the topic of ongoing research. \section*{Acknowledgements} This work is supported by the European Research Council (ERC Grant Agreement 670035, project \mbox{CON ESPRESSIONE}). The Tesla K40 used for this research was donated by the NVIDIA Corporation.
\section{Introduction} Model checking is a verification technique that performs an exhaustive search among the states of a given finite-state machine to verify that this model satisfies a given property, expressed in temporal or richer modal logics~\cite{Clarke-Grumberg-others-99}. Some of these logics, such as CTL~\cite{Clarke-Emerson-82}, are branching: they express facts about the computation tree of the model. This branching aspect is even more critical when dealing with multi-modal logics. The most common example is CTLK, a temporal-epistemic logic reasoning about time and knowledge in multi-agent systems~\cite{Penczek-Lomuscio-03}. Both temporal and epistemic information are captured as different relations over the states of the model, and properties express facts about all these relations. A major benefit of model checking is the capability to generate a counter-example when a property is not satisfied. Unfortunately, most current state-of-the-art model checkers only return linear counter-examples while, in general, branching logics need branching counter-examples~\cite{Buccafurri-Eiter-others-01}. Consider the example of Alice and Bob, where Alice randomly picks a number $N$ between $10$ and $100$ and Bob has to guess whether the number is prime or not. At each step, Bob can ask Alice whether $N$ is divisible by another number $m$. Based on Alice's answers, Bob has to say whether $N$ is prime or not. This problem can be modeled as a multi-agent system with Alice and Bob as agents; in such a model, $N$ would be undisclosed to Bob. The model can then be checked to verify that Bob always finally knows whether $N$ is prime or not; this property can be expressed in CTLK as $\AF{(\Kk{Bob}{P_N} \vee \Kk{Bob}{\neg P_N})}$, where $P_N$ is true in a state in which $N$ is prime. We say that \emph{Bob knows p}, written $\Kk{Bob}{p}$, in a state $s$ if $p$ is true in all states that are indistinguishable from $s$ for Bob.
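In a finite model, the knowledge check $\Kk{Bob}{p}$ reduces to testing $p$ on every reachable state that Bob cannot distinguish from the current one. The following minimal sketch uses a hypothetical toy model of the guessing game (the states, relation and situation are our own illustration, not an implementation from this paper) to show why Bob knows neither $P_N$ nor $\neg P_N$ before he has asked enough questions:

```python
# Toy sketch of the epistemic operator: an agent knows p in state s iff
# p holds in every reachable state indistinguishable from s for that agent.

def knows(p, s, indist, reachable):
    return all(p(t) for t in indist[s] & reachable)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

# Hypothetical situation after Bob learned that N is not divisible by 2 or 3:
# states are values of N; Bob cannot distinguish 19 from 23 (prime) or 25 (not).
indist = {19: {19, 23, 25}}
reachable = {19, 23, 25}

print(knows(is_prime, 19, indist, reachable))                    # False
print(knows(lambda n: not is_prime(n), 19, indist, reachable))   # False
```

Both checks fail: an indistinguishable non-prime state (25) refutes $\Kk{Bob}{P_N}$ and an indistinguishable prime state (23) refutes $\Kk{Bob}{\neg P_N}$, which is exactly what an adequate counter-example must exhibit at every state of its main path.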
This property is not verified by a model that allows Bob to ask only three questions, and a model checker checking this property would return a counter-example. An adequate counter-example for this property is not a single computation path: it has to show a path composed of states in which Bob does not know whether $N$ is prime, i.e. it also has to show, for each state of this path, another reachable indistinguishable state where $N$ is prime and another one where $N$ is not. Figure~\ref{figure:scheduler-tlace} gives a tree-like annotated counter-example explaining why the given property is violated. It corresponds to a scenario where $N=19$, Bob asks whether $N$ is divisible by the first three prime numbers ($2$, $3$ and $5$) and Alice answers negatively each time. The state labels are the values of $N$, which Bob does not know. The memory of Bob, used to remember Alice's answers, is not shown. The wavy transitions link together states that are indistinguishable by Bob, and arrowed transitions are temporal ones. States are annotated with the properties they satisfy and transitions are annotated with the properties they explain. The main branch, composed of bold states, explains how Bob asks his three questions and does not know whether $N$ is prime or not. For each state of this main path, two states that are indistinguishable by Bob from the main state are given, showing that Bob does not know that $N$ is prime (right state) nor that it is not (left state). Furthermore, dashed states show that each of these states is reachable from an initial one. The highlighted path corresponds to the linear counter-example that a standard model checker would provide.
\begin{figure}[!ht] \centering \scalebox{0.85}{\includegraphics{figures/AliceBob.pdf}} \caption{A tree-like annotated counter-example for $\AF{(\Kk{Bob}{P_N} \vee \Kk{Bob}{\neg P_N})}$.} \label{figure:scheduler-tlace} \end{figure} This paper proposes branching structures, called \emph{tree-like annotated counter-examples} (TLACEs), that are suitable for explaining violations of branching logic properties. Furthermore, each state of these counter-examples is annotated with the part of the (negated) property that it satisfies. These counter-examples are defined in the framework of Action-Restricted CTL (ARCTL), a branching-time logic with action-labelled transitions~\cite{Pecheur-Raimondi-07}, and their utility is illustrated in the framework of CTLK, a temporal-epistemic logic that can be reduced to ARCTL~\cite{Lomuscio-Pecheur-others-07}. Tree-like annotated counter-examples are built upon tree-like counter-examples as defined by Clarke et al.~\cite{Clarke-Jha-others-02}, which provide full counter-examples for the universal fragment of $\omega$-regular logics. These counter-examples combine cycles and finite paths in a specific, tree-like structure. Our tree-like annotated counter-examples extend the notion of tree-like counter-examples to ARCTL and annotate the states with the formulas they satisfy to give a better understanding of the violation. TLACEs take inspiration from the work of Rasse~\cite{Rasse-92}. Rasse presents branching counter-examples for CTL interpreted over states of LTSs. Furthermore, the states of the counter-examples are annotated with the sub-formulas they explain. This notion of counter-examples is close to tree-like annotated counter-examples. Nevertheless, Rasse does not provide a way to generate counter-examples, and TLACEs for ARCTL are more general since they are applicable to richer modal logics.
We have extended the state-of-the-art symbolic model checker NuSMV~\cite{Cimatti-Clarke-others-02} to generate and export tree-like annotated counter-examples for ARCTL properties. One of the drawbacks of these richer counter-examples is their size and complexity, which can be polynomial in the number of states of the system and exponential in the length of the checked formula in the worst case. We have developed a tool that takes TLACEs generated by our extended NuSMV and provides a graphical interface to visualize and browse them. These tools, designed to generate and visualize TLACEs, have been used to provide a first assessment of the approach. Thanks to their parameters and functionalities, they allow the user to limit and manage the size of the counter-example. The contributions of this paper are: \begin{itemize} \item the definition of tree-like annotated counter-examples; \item the design of an algorithm generating these counter-examples; \item the implementation of this algorithm in NuSMV; \item the design and implementation of an interactive visualization tool for browsing these counter-examples. \end{itemize} This paper is structured as follows. Section~\ref{section:branching-logics} recalls the syntax and semantics of ARCTL and CTLK. Section~\ref{section:TLACE} defines tree-like annotated counter-examples and explains how to generate them. Section~\ref{section:example} illustrates the approach with the example of the dining cryptographers. Section~\ref{section:tools} describes the extension of NuSMV and the visualization tool. Section~\ref{section:evaluation} presents the evaluation of these tools. Finally, Section~\ref{section:related-work} presents related work. \section{Temporal and Epistemic Logics} \label{section:branching-logics} This section presents the logics used in this work, ARCTL and CTLK.
It first presents the syntax and semantics of ARCTL~\cite{Pecheur-Raimondi-07}, then presents CTLK, a temporal-epistemic logic, and describes a reduction of CTLK to ARCTL. \subsection{Action-Restricted CTL} Action-Restricted CTL is an extension of CTL applied to systems with labelled states and actions, where temporal operators are augmented with propositional expressions over actions, expressing properties of particular paths of the system. In addition to the usual logical connectives ($\neg$, $\vee$, $\wedge$, $\implies$ and $\iff$), ARCTL provides temporal operators, composed of an action-restricted path quantifier $\Ea$ or $\Aa$ immediately followed by a path operator ($\X$, $\G$, $\F$, $\U$ and $\W$). Path operators define path formulas while path quantifiers and logical connectives define state formulas. Action expressions $\alpha$ are composed of actions and logical connectives. For example, $\EaX{a}{\phi}$ means that there exists a successor reachable through the action $a$ that satisfies $\phi$; $\AaG{b}{\psi}$ means that all states reachable through $b$ actions satisfy $\psi$. ARCTL properties are interpreted over the states of a Mixed Transition System (MTS). An MTS is a tuple $\mathcal{M} = (S, S_{0}, A, T, \mathcal{V}_S, \mathcal{V}_A)$ over two sets of atomic propositions $P_S$ and $P_A$, where $S$ is a set of states, $S_{0} \subseteq S$ are initial states, $A$ is a set of actions, $T \subseteq S \times A \times S$ is a transition relation, and $\mathcal{V}_S : S \rightarrow 2^{P_S}$ and $\mathcal{V}_A : A \rightarrow 2^{P_A}$ are two functions labeling states with subsets of $P_S$, and labeling actions with subsets of $P_A$, respectively. These two functions represent the propositions that are interpreted over states and actions, respectively. We write $s \xrightarrow{a} s'$ for $(s, a, s') \in T$. A path of $\mathcal{M}$ starting at $s_0$ is a (finite or infinite) sequence of states and actions $w = \langle s_0, a_1, s_1, a_2, s_2,...
\rangle$ such that $s_i \xrightarrow{a_{i+1}} s_{i+1}$; $w(i)$ denotes $s_i$. $\Pi(\mathcal{M}, s)$ is the set of maximal paths in $\mathcal{M}$ starting at $s$. $\mathcal{M}|_{\alpha} = (S, S_{0}, A, T|_{\alpha}, \mathcal{V}_S, \mathcal{V}_A)$ is the $\alpha$-restriction of $\mathcal{M}$ where $T|_{\alpha} = \{(s,a,s') \in T ~|~ \mathcal{M}, a \models \alpha\}$ is the transition relation $T$ where only actions $a$ satisfying $\alpha$ are considered. We write $\mathcal{M}, s \models \phi$ when a state $s$ of an MTS $\mathcal{M}$ satisfies an ARCTL property $\phi$. Logical connectives are interpreted in the natural way. For temporal operators, $s$ satisfies $\Ea{\alpha}{\pi}$ (resp. $\Aa{\alpha}{\pi}$), where $\pi$ is a path formula, if and only if there exists a path (resp. all paths) in $\Pi(\mathcal{M}|_{\alpha}, s)$ satisfying $\pi$. A path $w$ of $\mathcal{M}$ satisfies $\X{\phi}$ if and only if $w(1)$ satisfies $\phi$; $w$ satisfies $\F{\phi}$ (resp. $\G{\phi}$) iff $w(i)$ satisfies $\phi$ for some (resp. for all) $i$. Finally, $w$ satisfies $\U{\phi}{\psi}$ iff $w(i)$ satisfies $\psi$ for some $i$ and $w(j)$ satisfies $\phi$ for all $j < i$. $\W{\phi}{\psi}$ is equivalent to $(\U{\phi}{\psi}) \lor (\G{\phi})$. In the remainder of this paper, all given ARCTL formulas are assumed to be reduced to their \emph{negative normal form}: negations are distributed over all operators so that they are only applied to atomic propositions. Furthermore, equivalences can be applied to reduce formulas to the following base cases: $b$, $\neg b$ (for atomic propositions $b$), $\phi \wedge \psi$, $\phi \vee \psi$, $\EaX{\alpha}{\phi}$, $\EaG{\alpha}{\phi}$, $\EaU{\alpha}{\phi}{\psi}$ and $\Aa{\alpha}{\pi}$. This allows a more concise presentation of concepts without loss of generality. \subsection{CTLK} CTLK is a branching-time epistemic logic mixing knowledge relations and temporal ones~\cite{Penczek-Lomuscio-03}.
This logic is designed to express facts about time and knowledge of agents in a multi-agent system. This section presents the syntax and semantics of CTLK. CTLK provides the usual logical connectives together with CTL operators ($\EX$, $\AG$, $\EF$, etc.) and the knowledge operator $\Kk$ where $ag$ is an agent. It also provides some other epistemic operators for the \emph{group knowledge} $\Ek$, the \emph{distributed knowledge} $\Dk$ and the \emph{common knowledge} $\Ck$, where $g$ is a group of agents, but they are not developed here. Nevertheless, they can also be reduced to ARCTL~\cite{Lomuscio-Pecheur-others-07}. CTLK is interpreted over multi-agent systems, where each agent is aware of the possible behaviors of the system and of its own local state, but not of the local states of the other agents. Formally, a multi-agent system composed of $n$ agents is a Kripke structure $\mathcal{M_A} = (S, S_0, T, \sim_1, ..., \sim_n, \mathcal{V})$ where $T \subseteq S \times S$ is a (temporal) transition relation and $\sim_i\; \subseteq S \times S$ are epistemic relations. $(s,s') \in\; \sim_i$, written $s \sim_i s'$, iff $s$ and $s'$ are reachable states that share the same local state for agent $ag_i$. An agent $ag_i$ knows $\phi$ in a state $s$ iff $\phi$ holds in all reachable states that are indistinguishable from $s$ by $ag_i$. Formally, $\mathcal{M_A}, s \models \Kk{ag_i}{\phi}$ if and only if $\forall s' \in S : s' \sim_i s \Rightarrow \mathcal{M_A}, s' \models \phi$. Note that $\sim_i$ must be restricted to \emph{reachable} states (i.e. $T^*(S_0)$), capturing the fact that $ag_i$ knows the global system behavior. A witness for a reachable state is thus a reverse execution path back to an initial state. \subsection{From CTLK to ARCTL} Some multi-modal branching logics, i.e. logics dealing with more than one transition relation, can be reduced to ARCTL.
CTLK is such a logic: a multi-agent system and a CTLK formula can be reduced to an MTS and an ARCTL formula, respectively~\cite{Lomuscio-Pecheur-others-07}. Given a multi-agent system, the corresponding MTS has the same set of states. The set of actions contains actions $RUN$ and $BACK$, used to label temporal and reverse temporal transitions, and one action per agent, to label epistemic transitions. The transition relation is an aggregation of the temporal relation, the reverse temporal relation and the epistemic ones, using corresponding actions. The labeling of states is augmented with the proposition $Init$ to label initial states. $Init$ is used to express the reachability of states: a state is reachable from an initial state iff it satisfies $\EaF{\{BACK\}}{Init}$. Formally, given a multi-agent system $\mathcal{M_A} = (S, S_0, T, \sim_1, ..., \sim_n, \mathcal{V})$ composed of $n$ agents, the corresponding MTS is given by $\mathcal{M} = (S, S_0, A, T', \mathcal{V}_S, \mathcal{V}_A)$ where \begin{itemize} \item $A = 2^{\{RUN, BACK, Agt_1, ..., Agt_n\}}$\footnote{The use of subsets of labels is needed to handle distributed knowledge. See~\cite{Lomuscio-Pecheur-others-07} for details.} \item for all states $s, s' \in S$ : (i) $(s, \{RUN\}, s') \in T'$ iff $(s, s') \in T$; (ii) $(s, \{BACK\}, s') \in T'$ iff $(s', s) \in T$; (iii) $(s, \{Agt_i\}, s') \in T'$ iff $s \sim_i s'$; (iv) $(s, \{Agt_i ~|~ ag_i \in g\}, s') \in T'$ iff $\forall ag_i \in g : s \sim_i s'$. \item $\mathcal{V}_S(s) = \mathcal{V}(s) \cup \{Init\} \textrm{ if } s \in S_0, \mathcal{V}(s) \textrm{ otherwise}$; $\mathcal{V}_A$ is the identity function. \end{itemize} To reduce a CTLK formula into an ARCTL one, we use the labels $RUN$, $BACK$ and $Agt_i$ to represent a temporal transition, a reverse temporal transition and an epistemic transition of agent $ag_i$, respectively. 
Formally, let the function $R$ reduce CTLK formulas into their ARCTL form; $R$ is inductively defined as \begin{itemize} \item $R(b) = b$ if $b$ is a propositional formula; \item $R(\EX{\phi}) = \EaX{\{RUN\}}{R(\phi)}$; $R(\EG{\phi}) = \EaG{\{RUN\}}{R(\phi)}$; $R(\EU{\phi}{\psi}) = \EaU{\{RUN\}}{R(\phi)}{R(\psi)}$; \item $R(\Kk{ag_i}{\phi}) = \AaX{\{Agt_i\}}{(\reachable \implies R(\phi))}$, where $\reachable$ is a shortcut for $\EaF{\{BACK\}}{Init}$. \end{itemize} \section{Tree-Like Annotated Counter-Examples} \label{section:TLACE} This section presents generic structures called \emph{tree-like annotated counter-examples}, or TLACEs for short, for explaining why a state $s$ of a system does not satisfy an ARCTL property $\phi$. A \emph{counter-example} explaining a violation of $\phi$ in $s$ amounts to a \emph{witness} explaining the satisfaction of $\neg\phi$ in $s$. From now on, this paper will discuss TLACEs as witnesses of properties rather than counter-examples, to avoid carrying negations throughout. A TLACE witnessing $\phi$ is a \emph{node} composed of a state annotated with the direct sub-formulas of $\phi$ that it satisfies. Furthermore, for each existential temporal sub-formula it satisfies (i.e. $\Ea$ formulas), the node contains a branch explaining the formula. A branch is a list of nodes and actions representing a path in the model and witnessing the temporal formula. Formally, tree-like annotated counter-examples are defined based on the following grammar of nodes $n$ and paths $p$: \begin{align*} n & ::= node(s, \{(b ~|~ \neg b)^{*}\}, \{(\Ea{\alpha}{\pi} : p)^{*}\}, \{(\Aa{\alpha}{\pi})^{*}\}) \\ p & ::= \langle n, (a, n)^* \rangle ~|~ \langle n, (a, n)^*, a, loop(n) \rangle \end{align*} where $s$ are states, $b$ are atomic propositions, $a$ are actions, $\alpha$ are boolean expressions over actions and $\pi$ are ARCTL path formulas. The $loop$ marker is used to represent a looping path; the marked node is the first one of the loop.
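The grammar of nodes and paths above maps directly onto a small set of record types. The following sketch (Python dataclasses; an illustration of the structure only, not the representation used in our implementation) mirrors nodes, paths and the $loop$ marker:

```python
from dataclasses import dataclass

# A TLACE node: a state annotated with satisfied literals, one witness path
# per existential branch, and bare annotations for universal sub-formulas.
@dataclass(frozen=True)
class Node:
    state: str
    aps: frozenset = frozenset()   # satisfied literals b / not-b
    ebs: tuple = ()                # (existential formula, Path) branches
    abs_: frozenset = frozenset()  # universal formulas, kept as annotations
                                   # ('abs_' avoids shadowing Python's abs)

@dataclass(frozen=True)
class Loop:
    node: 'Node'                   # marks the first node of the loop

@dataclass(frozen=True)
class Path:
    items: tuple                   # alternating n0, a1, n1, ..., maybe a Loop

def first(p: Path) -> 'Node':
    return p.items[0]

# A looping path <n0, a, n1, a, loop(n1)>, as used to witness an EaG formula:
n0, n1 = Node('s0'), Node('s1')
p = Path((n0, 'a', n1, 'a', Loop(n1)))
print(first(p).state)   # s0
```

Consistency, in these terms, asks that each branch's path start at the node's own state and that a `Loop` marker refer to a node occurring earlier in the same path.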
Given a node $node(s, aps, \{\Ea{\alpha_i}{\pi_i} : p_i\}, abs)$, each $\Ea{\alpha_i}{\pi_i} : p_i$ is called a \textit{branch}; $aps$, $\{\Ea{\alpha_i}{\pi_i}\}$ and $abs$ are \textit{annotations}. Let $State(node(s, aps, ebs, abs)) = s$ and $First(\langle n_0, a_1, ..., n_m \rangle) = First(\langle n_0, a_1, ..., n_m, a_{m+1}, loop(n') \rangle) = n_0$.\\A TLACE node $node(s, aps, \{\Ea{\alpha_i}{\pi_i} : p_i\}, abs)$ is \emph{consistent} iff all its paths $p_i$ are \emph{consistent} and satisfy $s = State(First(p_i))$; a TLACE path $\langle n_0, a_1, ..., n_m \rangle$ is \emph{consistent} iff all its nodes are \emph{consistent}; a TLACE path $\langle n_0, a_1, ..., n_m, a_{m+1}, loop(n') \rangle$ is \emph{consistent} iff $\langle n_0, a_1, ..., n_m, a_{m+1}, n' \rangle$ is \emph{consistent} and $n' = n_j$ for some $0 \leq j \leq m$. In the sequel, we only consider consistent TLACEs, and call a consistent TLACE node simply a TLACE, or a witness. \subsection{Adequate Witnesses} \label{section:adequate} TLACEs ought to be \emph{adequate} witnesses for a formula $\phi$ in a state $s$ of a model $\mathcal{M}$, in a precisely defined sense. Given a tree-like annotated witness $n = node(s, aps, ebs, abs)$, the witness has to satisfy the following conditions to be adequate: \begin{itemize} \item The witness represents a part of the computation tree of $\mathcal{M}$. Its paths are execution paths in $\mathcal{M}$. \item The atomic propositions annotating nodes of the witness are satisfied in the corresponding states of $\mathcal{M}$ and the actions of its paths satisfy the action formulas of $\phi$. \item The witness is effectively a witness for $\phi$. It represents (generally partially) a computation tree ensuring $\phi$. \item The annotations of the witness are coherent with $\phi$. Branch annotations are sub-formulas of $\phi$.
\end{itemize} The first condition above is formally expressed as $n \ensuremath{\ matches\ } \mathcal{M}$---the witness is part of the model---while the last three are expressed as $n \ensuremath{\ explains\ } (\mathcal{M}, \phi)$---the witness explains the property in the model. An \emph{adequate} witness for $\phi$ in $s$ of $\mathcal{M}$ is a witness in $s$ that matches $\mathcal{M}$ and explains $\phi$ in $\mathcal{M}$. The witness $n$ matches $\mathcal{M}$ if $s$ is a state of $\mathcal{M}$ and each path in $ebs$ corresponds to a path in $\mathcal{M}$ such that the nodes recursively match their respective states. Formally, let $\mathcal{M} = (S, S_0, A, T, \mathcal{V}_S, \mathcal{V}_A)$. A node $n = (s, aps, ebs, abs) \ensuremath{\ matches\ } \mathcal{M}$ iff (i) $s \in S$ and (ii) $\forall (\Ea{\alpha_i}{\pi_i} : p_i) \in ebs : p_i \ensuremath{\ matches\ } \mathcal{M}$. A path $p = \langle n_0, a_1, ..., n_m \rangle \ensuremath{\ matches\ } \mathcal{M}$ iff (i) $\forall i, 0 \leq i \leq m : n_i \ensuremath{\ matches\ } \mathcal{M}$ and (ii) $\forall i, 0 \leq i < m : State(n_i) \xrightarrow{a_{i+1}} State(n_{i+1})$. A looping path $p = \langle n_0, a_1, ..., $ $ n_m, a_{m+1}, loop(n') \rangle \ensuremath{\ matches\ } \mathcal{M}$ iff $\langle n_0, a_1, ..., n_m, a_{m+1}, n' \rangle$ $\ensuremath{\ matches\ } \mathcal{M}$. The witness $n$ explains $\phi$ in $\mathcal{M}$ if it has the shape of a witness for $\phi$. This highly depends on the structure of $\phi$. For example, a witness for $\phi_1 \land \phi_2$ is a node composed of the annotations and branches of two nodes explaining $\phi_1$ and $\phi_2$, respectively. Formally, $n \ensuremath{\ explains\ } (\mathcal{M}, \phi)$ is defined recursively over the structure of $\phi$. This definition is given for state formulas $\phi$ by the following two tables.
\begin{center} \begin{tabularx}{\textwidth}{|p{2cm}|X|} \hline $\phi$ & $n \ensuremath{\ explains\ } (\mathcal{M}, \phi)$ iff\dots \\ \hline $true$ & $n = node(s, \{\}, \{\}, \{\})$ \\ $b$ or $\neg b$ & $n = node(s, \{\phi\}, \{\}, \{\})$ and $\mathcal{M}, s \models \phi$ \\ $\phi_1 \vee \phi_2$ & $n \ensuremath{\ explains\ } (\mathcal{M}, \phi_1)$ or $n \ensuremath{\ explains\ } (\mathcal{M}, \phi_2)$ \\ $\phi_1 \wedge \phi_2$ & $n = node(s, aps_1 \cup aps_2, ebs_1 \cup ebs_2, abs_1 \cup abs_2)$ \\ & and $node(s, aps_i, ebs_i, abs_i) \ensuremath{\ explains\ } (\mathcal{M}, \phi_i)$ \\ \end{tabularx}\vspace{-1pt} \begin{tabularx}{\textwidth}{|p{2cm}|X|} $\Aa{\alpha}{\pi}$ & $n = node(s, \{\}, \{\}, \{\Aa{\alpha}{\pi}\})$ \\ $\Ea{\alpha}{\pi}$ & $n = node(s, \{\}, \{\Ea{\alpha}{\pi} : p\}, \{\})$ and $p \ensuremath{\ explains\ } (\mathcal{M}, \phi)$ \\ \hline \end{tabularx} \begin{tabularx}{\textwidth}{|p{2cm}|X|} \hline $\phi$ & $p \ensuremath{\ explains\ } (\mathcal{M}, \phi)$ iff\dots \\ \hline $\EaX{\alpha}{\phi}$ & $p = \langle node(s, \{\}, \{\}, \{\}), a, n_1 \rangle$, $n_1 \ensuremath{\ explains\ } (\mathcal{M}, \phi)$ and $\mathcal{M}, a \models \alpha$ \\ $\EaU{\alpha}{\phi_1}{\phi_2}$ & $p = \langle n_0, a_1, ..., n_m \rangle$, $\forall i, 0 \leq i < m : n_i \ensuremath{\ explains\ } (\mathcal{M}, \phi_1)$, $n_m \ensuremath{\ explains\ } (\mathcal{M}, \phi_2)$ \\ & and $\forall i, 1 \leq i \leq m : \mathcal{M}, a_i \models \alpha$ \\ $\EaG{\alpha}{\phi}$ & $p = \langle n_0, a_1, ..., n_m, a_{m+1}, loop(n') \rangle$, $\forall i : n_i \ensuremath{\ explains\ } (\mathcal{M}, \phi)$ \\ & and $\forall i, 1 \leq i \leq m+1 : \mathcal{M}, a_i \models \alpha$ \\ \hline \end{tabularx} \end{center} Finally, $n$ is an adequate witness for $\mathcal{M}, s \models \phi$ iff $State(n) = s$, $n \ensuremath{\ matches\ } \mathcal{M}$ and $n \ensuremath{\ explains\ } (\mathcal{M}, \phi)$. Note that universal temporal sub-formulas (i.e. 
$\Aa$ formulas) are not explained: elements of $abs$ are just annotations, i.e. ARCTL formulas. While explaining an $\Ea$ formula needs only one path, the whole system is potentially needed to explain an $\Aa$ formula. There is no inherent difficulty in explaining them, but it would usually result in a huge, unmanageable structure. Tree-like annotated witnesses are full witnesses for the existential fragment of ARCTL. In this fragment, only existential path quantifiers are allowed and negations are only applicable to atomic propositions. TLACEs are full witnesses in the sense that there exists an \emph{adequate} witness for $\mathcal{M}, s \models \phi$ if and only if $\phi$ is satisfied in the state $s$ of $\mathcal{M}$. They adequately describe the satisfaction because they provide all the reasonably useful information. On the other hand, because tree-like annotated witnesses do not explain universal operators, they are not full witnesses for full ARCTL. \subsection{Generating Counter-Examples} \label{section:generating} This section gives an algorithm to generate tree-like annotated witnesses. This algorithm is described by the function $explain$ given below, which takes as arguments a mixed transition system $\mathcal{M} = (S, S_0, A, T, \mathcal{V}_S, \mathcal{V}_A)$, a state $s \in S$, and a property $\phi$ such that $\mathcal{M}, s \models \phi$, and returns a consistent and adequate tree-like annotated witness for $\mathcal{M}, s \models \phi$. To perform this computation, it works recursively on the structure of $\phi$. The algorithm uses the sub-algorithms $EaGexplain$, $EaUexplain$ and $EaXexplain$, which return paths in $\mathcal{M}$ satisfying $\EaG$, $\EaU$ and $\EaX$ operators, respectively. 
More precisely, $EaGexplain(\mathcal{M}, s, \phi, \alpha)$ returns a path $\langle s_0, a_1, ..., s_m \rangle$ where $s = s_0$, $\forall i, 0 \leq i \leq m : \mathcal{M}, s_i \models \phi$, $\exists k, 0 \leq k < m : s_k = s_m$ and $\forall i, 1 \leq i \leq m : \mathcal{M}, a_i \models \alpha$. $EaUexplain(\mathcal{M}, s, \phi, \psi, \alpha)$ returns a path $\langle s_0, a_1, ..., s_m \rangle$ where $s = s_0$, $\forall i, 0 \leq i < m :\mathcal{M}, s_i \models \phi$, $\mathcal{M}, s_m \models \psi$ and $\forall i, 1 \leq i \leq m : \mathcal{M}, a_i \models \alpha$. $EaXexplain(\mathcal{M}, s, \phi, \alpha)$ returns a path $\langle s_0, a_1, s_1 \rangle$ where $s = s_0$, $\mathcal{M}, s_1 \models \phi$ and $\mathcal{M}, a_1 \models \alpha$. \begin{function}[!ht] \DontPrintSemicolon \KwData{$\mathcal{M}$ a Mixed Transition System, $s$ a state of $\mathcal{M}$, $\phi$ an ARCTL property, s.t. $\mathcal{M}, s \models \phi$.} \KwResult{a tree-like annotated witness $n$ s.t. $State(n) = s$, $n \ensuremath{\ matches\ } \mathcal{M}$ and $n \ensuremath{\ explains\ } (\mathcal{M}, \phi)$.} \BlankLine \Switch{$\phi$}{ \Case{$true$}{ \Return{$node(s, \{\}, \{\}, \{\})$} } \Case{$b$, $\neg b$}{ \Return{$node(s, \{\phi\}, \{\}, \{\})$} } \Case{$\psi_1 \vee \psi_2$}{ \lIf{$\mathcal{M}, s \models \psi_1$}{\Return{$explain(\mathcal{M}, s, \psi_1)$}}\; \lElse{\Return{$explain(\mathcal{M}, s, \psi_2)$}} } \Case{$\psi_1 \wedge \psi_2$}{ $node(s, aps_1, ebs_1, abs_1) \leftarrow explain(\mathcal{M}, s, \psi_1)$\; $node(s, aps_2, ebs_2, abs_2) \leftarrow explain(\mathcal{M}, s, \psi_2)$\; \Return{$node(s, aps_1 \cup aps_2, ebs_1 \cup ebs_2, abs_1 \cup abs_2)$} } \Case{$\Aa{\alpha}{\pi}$}{ \Return{$node(s, \{\}, \{\}, \{\Aa{\alpha}{\pi}\})$} } \Case{$\EaX{\alpha}{\psi}$}{ $\langle s_0, a_1, s_1 \rangle \leftarrow EaXexplain(\mathcal{M}, s, \psi, \alpha)$\; \Return{$node(s, \{\}, \{\EaX{\alpha}{\psi} : \langle node(s_0, \{\}, \{\}, \{\}), a_1, explain(\mathcal{M}, s_1, \psi) \rangle\}, 
\{\})$} } \Case{$\EaU{\alpha}{\psi_1}{\psi_2}$}{ $\langle s_0, a_1, ..., s_m \rangle \leftarrow EaUexplain(\mathcal{M}, s, \psi_1, \psi_2, \alpha)$\; $p \leftarrow \langle \rangle$\; \For{$i \in 0 .. m-1$}{ $p \leftarrow p + \langle explain(\mathcal{M}, s_i, \psi_1), a_{i+1}\rangle$ } \Return{$node(s, \{\}, \{\EaU{\alpha}{\psi_1}{\psi_2} : p + \langle explain(\mathcal{M}, s_m, \psi_2)\rangle\}, \{\})$} } \Case{$\EaG{\alpha}{\psi}$}{ $\langle s_0, a_1, ..., s_m \rangle \leftarrow EaGexplain(\mathcal{M}, s, \psi, \alpha)$\; $p \leftarrow \langle \rangle$\; \For{$i \in 0 .. m-1$}{ $n_i \leftarrow explain(\mathcal{M}, s_i, \psi)$\; \lIf{$s_i = s_m$}{$n' \leftarrow n_i$}\; $p \leftarrow p + \langle n_i, a_{i+1} \rangle$ } \Return{$node(s, \{\}, \{\EaG{\alpha}{\psi} : p + \langle loop(n')\rangle\}, \{\})$} } } \caption{explain($\mathcal{M}$, $s$, $\phi$)} \label{function:explain} \end{function} This algorithm is correct, i.e. if $\mathcal{M}, s \models \phi$, then $explain(\mathcal{M}, s, \phi)$ returns a \emph{consistent} and \emph{adequate} tree-like annotated witness for $\mathcal{M}, s \models \phi$. Its correctness can be proved by induction over the structure of $\phi$. Due to space limits, the proof is not developed here, but the intuition is given for the $\EaU$ case. First, $EaUexplain$ returns a witness path for $\EaU{\alpha}{\psi_1}{\psi_2}$, so $\mathcal{M}, s_m \models \psi_2$, $\mathcal{M}, s_i \models \psi_1$ for $i < m$ and $\mathcal{M}, a_i \models \alpha$ for all $i$. The construction of $p$ in the \emph{for} loop and the following instruction build a path composed of nodes witnessing $\psi_1$ with a last node witnessing $\psi_2$; thus, altogether, $p$ correctly explains $\EaU{\alpha}{\psi_1}{\psi_2}$, and so does the result. The result clearly belongs to $\mathcal{M}$ since $EaUexplain$ returns a path in $\mathcal{M}$. Finally, by construction, the state of the result is $s$.
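The recursive structure of $explain$ can be sketched compactly. In the sketch below (our own simplification, not NuSMV's code), formulas in negative normal form are nested tuples, and the satisfaction oracle together with an $EaUexplain$-style path finder are passed in as plain functions:

```python
# Sketch of explain(M, s, phi): formulas are tuples such as ('lit', 'p'),
# ('or', f, g), ('and', f, g), ('A', alpha, pi) and ('EU', alpha, f, g).
# 'sat(s, f)' and 'eu_path(s, f)' stand in for the model checker's
# satisfaction oracle and the EaUexplain sub-algorithm.

def explain(s, phi, sat, eu_path):
    op = phi[0]
    if op == 'lit':                        # b or not-b: annotate the state
        return {'state': s, 'aps': {phi}, 'ebs': [], 'abs': set()}
    if op == 'or':                         # explain whichever disjunct holds
        return explain(s, phi[1] if sat(s, phi[1]) else phi[2], sat, eu_path)
    if op == 'and':                        # merge the two sub-witnesses
        n1 = explain(s, phi[1], sat, eu_path)
        n2 = explain(s, phi[2], sat, eu_path)
        return {'state': s, 'aps': n1['aps'] | n2['aps'],
                'ebs': n1['ebs'] + n2['ebs'], 'abs': n1['abs'] | n2['abs']}
    if op == 'A':                          # universal: annotation only
        return {'state': s, 'aps': set(), 'ebs': [], 'abs': {phi}}
    if op == 'EU':                         # one witness node per path state
        states, actions = eu_path(s, phi)
        nodes = [explain(t, phi[3] if i == len(states) - 1 else phi[2],
                         sat, eu_path)
                 for i, t in enumerate(states)]
        return {'state': s, 'aps': set(),
                'ebs': [(phi, list(zip(nodes, actions + [None])))],
                'abs': set()}
    raise ValueError(f'unhandled operator {op}')

# Toy run: E[p U q] restricted to alpha, witnessed by the path s0 -a-> s1.
sat = lambda s, f: True                    # unused by this toy formula
eu_path = lambda s, f: (['s0', 's1'], ['a'])
w = explain('s0', ('EU', 'alpha', ('lit', 'p'), ('lit', 'q')), sat, eu_path)
print(w['ebs'][0][1][0][0]['aps'])         # {('lit', 'p')}
```

The $\EaX$ and $\EaG$ cases follow the same pattern, with the $loop$ marker added for $\EaG$; consistency of the result follows because every recursive call is anchored at a state of the path returned by the sub-algorithm.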
\newpage \section{Example: the Dining Cryptographers Protocol} \label{section:example} This section uses the dining cryptographers protocol~\cite{Chaum-88}, a well-known example in temporal epistemic logic, to illustrate the applicability of tree-like annotated counter-examples to CTLK and multi-agent systems. A model of the protocol is given, a CTLK property violated by the system is presented and the corresponding counter-example is described. Summarizing~\cite{Chaum-88}, the protocol of the dining cryptographers can be described as follows: \begin{quote} Three cryptographers at a restaurant made an arrangement with the waiter for the bill to be paid anonymously. One of them might be the payer, or it might be the NSA. The three cryptographers wonder whether the NSA is paying, but nobody wants to reveal whether she herself pays. To resolve this problem, the following protocol is performed. Each cryptographer flips a coin behind her menu such that only she and her right neighbor can see the result. Each one then claims aloud whether the two coins that she saw are equal or different, stating the opposite if she is the payer. An odd number of claimed differences indicates a liar, and hence a payer; an even number indicates that the NSA is paying, assuming that the dinner was paid for exactly once. No non-paying cryptographer can tell which one of the others is the payer. \end{quote} We consider a model composed of three agents $a$, $b$ and $c$, representing the three cryptographers. Each agent knows whether she paid, the results of the coin flips to her left and right, and the claims of all agents. The protocol is executed in three steps. The initial step determines, for each agent, whether she is the payer, making sure that at most one of them is the payer. Then each agent flips her coin. Finally, each agent makes her claim, depending on the results of the coin flips and on whether she is the payer. We consider the CTLK property $\phi \equiv \neg a.payer \implies \AF{(\Kk{a}{b.payer} \vee \Kk{a}{c.payer})}$.
It expresses that if $a$ is not the payer, then she will eventually either know that $b$ is the payer or that $c$ is. This property is obviously violated by the system as the protocol ensures anonymity of the payer. The violation of this property is explained by the tree-like annotated counter-example presented in Figure \ref{figure:crypto-tlace}. It is a witness for $\neg \phi \equiv \neg a.payer \wedge \EG{(\neg\Kk{a}{b.payer} \wedge \neg\Kk{a}{c.payer})}$, with a path ending with a loop (the gray states) composed of states satisfying $\neg\Kk{a}{b.payer} \wedge \neg\Kk{a}{c.payer}$. Each state explains $\neg\Kk{a}{b.payer}$ by giving an equivalent state for $a$ satisfying $\neg b.payer$ and with a backward path to an initial state (and similarly for $\neg\Kk{a}{c.payer}$). \begin{figure}[!ht] \centering \scalebox{0.83}{\includegraphics{figures/diningCrypto.pdf}} \caption{A counter-example for $\neg a.payer \implies \protect\AF{(\protect\Kk{a}{b.payer} \vee \protect\Kk{a}{c.payer})}$ in the model of the dining cryptographers. Straight arrows are temporal transitions, wavy arrows are epistemic equivalences for $a$. } \label{figure:crypto-tlace} \end{figure} This counter-example clearly illustrates the need for rich counter-examples. A linear counter-example would only give the gray part of the presented counter-example, without the annotations, omitting a lot of information and making the violation very hard for the user to understand. This counter-example is a good representative of CTLK properties mixing temporal and epistemic operators. \section{Implementation} \label{section:tools} The principles presented in the previous sections have been implemented in two distinct parts. First, the well-known open-source symbolic model checker NuSMV~\cite{Cimatti-Clarke-others-02} has been modified to generate tree-like annotated counter-examples for ARCTL properties.
Second, a new tool called {TLACE Visualizer}{} has been implemented to visualize and manipulate these counter-examples\footnote{These tools are available at http://lvl.info.ucl.ac.be/Tools/NuSMV-ARCTL-TLACE.}. An XML format has been designed as a transfer syntax between the two tools. The modified version of NuSMV implements the generation algorithm presented in Section \ref{section:generating}. The implementation is based on the $EXexplain$, $EUexplain$ and $EGexplain$ algorithms already implemented in NuSMV and modified to take actions into account. It generates tree-like annotated counter-examples and can export them into a custom XML format. Technically, the algorithm has been implemented slightly differently from the presentation in this paper, but the result is equivalent. The implemented algorithm generates a witness for every ARCTL formula, without prior reduction to normal form. Negations are handled within the recursive traversal of sub-formulas. The implementation supports two kinds of parameters to limit the amount of generated information. The first set of parameters allows branches to be generated selectively, only for some temporal operators (e.g. for $\EaX$ but not for $\EaU$ nor $\EaG$). The second parameter limits the maximum depth of generated branches, in terms of the number of nested temporal operators. In terms of output, the original version of NuSMV returns only linear (looping) paths as counter-examples; on the other hand, the modified version of NuSMV returns richer information that becomes difficult to display in a text format. We developed {TLACE Visualizer}{}, an interactive graphical interface application for displaying and browsing tree-like annotated counter-examples. The counter-examples are loaded from XML files produced by the modified NuSMV and pictured as a graph in the main area of the interface. The tool also provides different means to arrange the layout of the graph and explore the detailed information associated with each node.
A snapshot of the interface is given in Figure~\ref{figure:snapshot}. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{images/dincry-screenshot.png} \caption{A snapshot of the interface illustrating the different features of the tool.} \label{figure:snapshot} \end{figure} The tool automatically lays out the counter-example upon loading, according to a custom layout algorithm that takes into account the semantic structure of the counter-example. This representation presents the general structure of the counter-example, showing branches and loops. Single states or entire subtrees can be dragged around for better readability. To support browsing of larger graphs, branches can be folded and unfolded to reduce clutter and selectively show relevant information. A side panel displays the values of all variables and annotations along a selected path in the graph, in a collapsible hierarchical presentation. All variable values can also be accessed as a pop-up menu on each node in the main panel, and variables can be selected for display as part of the node's label, giving immediate visibility for a few variables of interest. Both our extended NuSMV and the visualization tool currently only support ARCTL logic natively. That means that epistemic relations are shown in their ARCTL-reduced form. A desirable future extension is to display counter-examples according to their original logic notations. This requires some additional engineering but poses no major technical challenge. \section{Evaluation} \label{section:evaluation} This section first assesses the benefits of the provided browsing facilities to manage complex counter-examples in the context of multi-agent systems and CTLK. It then discusses how the approach could be extended to handle larger counter-examples and universal witnesses using interactive, incremental generation.
\subsection{Richness and Complexity of TLACEs} The need for branching counter-examples for CTLK properties has already been illustrated in Section~\ref{section:example} with a property violated by the protocol of the dining cryptographers. This example showed that a linear counter-example was not enough to fully understand why the model (or the property) was wrong. To illustrate the increasing richness and complexity of counter-examples, let us consider the following property on the dining cryptographers: cryptographer $a$ will eventually know whether cryptographer $b$ knows whether $a$ is paying or not. Let $\Kp{ag}{\phi}$ be a shortcut for $\Kk{ag}{\phi} \vee \Kk{ag}{\neg \phi}$, meaning that agent $ag$ knows whether $\phi$ is true or not. The above property can then be expressed as $\AF{\Kp{a}{\Kp{b}{a.payer}}}$, meaning that $a$ always eventually knows whether $b$ knows if $a$ is the payer or not. This property is violated by the model since, if $b$ or $c$ is the payer, then $a$ cannot say whether $b$ knows whether $a$ paid or not (if $b$ paid, $b$ knows; if $c$ paid, $b$ does not). A screenshot of a counter-example for this property is given in Figure~\ref{figure:knowsknows}. The counter-example features many different branches, due to the nested operators and the disjunction resulting from $\Kp{ag}{\phi}$. This complexity increases when the number of nested epistemic operators increases: the counter-example for the property $\AF{\Kp{b}{\Kp{a}{\Kp{b}{a.payer}}}}$ contains $75$ TLACE nodes and $37$ branches; the counter-example for the same property with $8$ nested $\Kp$ operators would contain $195$ TLACE nodes and $97$ branches. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{images/dincry-knowsknows-deco.png} \caption{A counter-example for the property $\AF{\Kp{a}{\Kp{b}{a.payer}}}$ violated by the protocol of the dining cryptographers, as presented by the tool {TLACE Visualizer}{}. The counter-example is expressed in terms of MTS and ARCTL.
The inner values of states show who is the payer. The transition labels show the transition types: $RUN=1$ for temporal transitions, $PAST=1$ for reverse temporal transitions and $ag.me=1$ for the epistemic transitions of agent $ag$.} \label{figure:knowsknows} \end{figure} While this increase still remains linear in terms of the length of the property, theory predicts (and experiments confirm) that the number of nodes of a tree-like counter-example for $\mathcal{M}, s \models \phi$ can be $O(|S|^{|\phi|})$ in the worst case, where $|S|$ is the number of states of the model $\mathcal{M}$ and $|\phi|$ the length of the violated property. Indeed, for an ARCTL formula of depth $D$ (for example $\EaG{true}{\EaG{true}{...~\EaG{true}{b}}}$), the top-level branch in the counter-example may have up to $O(|S|)$ nodes, each of which carries a branch with a counter-example of depth $D-1$, giving a total of $O(|S|^D)$ nodes. Consequently, the time needed to generate a tree-like annotated counter-example also grows as $O(|S|^{|\phi|})$ in the worst case, since the tool has to generate all nodes of the counter-example. The exact performance depends on the complexity of BDD-based reconstruction of the execution tree of the counter-example, amortized through the use of memoizing, which is beyond the scope of this analysis. \subsection{Towards Interactive Witness Generation} \label{section:interactive} As the size of the model and the complexity of the property grow, the generation of a counter-example may become intractable. A proposed solution, still to be investigated, is to generate the counter-examples in a lazy manner: instead of generating all the information in one batch, the tool outputs an initial state or an initial prefix of the counter-example and the user can ask the system to extend the parts of the counter-example that are most relevant to his understanding of the reported situation.
This interactive, incremental approach can also handle witnesses of universal operators: the user will be invited to ask for the expansion of selected branches, rather than being provided with the expansion of all branches. In such an approach, the Tool plays a game with the User where the Tool tries to show that the property is violated while the User tries to show that it is satisfied. The Tool will be responsible for showing witnesses for existential operators---by giving adequate branches---while the User will attempt to refute witnesses for universal operators (and fail, if the universal property indeed holds). This approach will require a two-way interaction between the visualizer and the model checker (NuSMV), through which the visualizer will drive the incremental witness generation in the model checker. The model checker will obviously have to be extended to support those incremental capabilities. This approach is related to game-based model checking, as developed e.g. in~\cite{Stevens-Stirling-98,Alur-Henzinger-others-98}. \section{Related Work} \label{section:related-work} Other authors propose structures similar to tree-like annotated counter-examples to provide useful information about a violation. For example, Gurfinkel and Chechik generate proof-like counter-examples for CTL properties violated by Kripke structures~\cite{Gurfinkel-Chechik-03}. These counter-examples are based on a proof of the violation and are composed of states to which parts of the proof are linked. The proof steps are mechanically derived from the structure of the property and the counter-example, so a similar result could be produced on TLACEs by the visualization tool. Note that this would invert the process, since Gurfinkel and Chechik generate the counter-example from the proof and not the other way round. Shoham and Grumberg propose a game-based framework for CTL counter-example generation~\cite{Shoham-Grumberg-03}.
Counter-examples are sub-graphs of the game-graph used to perform model checking. Each node of this graph is composed of a state of the model and a sub-formula of the property that it violates. This approach, similar to TLACEs and the proof-like counter-examples of~\cite{Gurfinkel-Chechik-03}, is applied to the context of incremental abstraction-refinement. The structure of these counter-examples is similar to TLACEs. Nevertheless, due to the granularity of the explanation, the number of steps to illustrate the violation---and hence the number of nodes of the counter-example---is larger than for a TLACE. Such a counter-example for a given property $\phi$ is thus larger than the corresponding TLACE for $\phi$, while giving the same information. Dong et al.\ define a framework to explore rich witness structures for the modal $\mu$-calculus, called \emph{evidences}~\cite{Dong-Ramakrishnan-others-03}. An evidence is a graph with nodes composed of a state of the system and a sub-formula of the property. As these evidences can be large, they develop a relational graph algebra to manipulate them, and provide an implementation. Evidences have a structure similar to the counter-examples of Shoham and Grumberg~\cite{Shoham-Grumberg-03}. This framework could be adapted to explore counter-examples of multi-modal logics, like TLACEs. Meolic et al.\ propose another model for richer counter-examples~\cite{Meolic-Fantechi-others-04}. These counter-examples are automata accepting all finite linear traces of a given LTS violating a given ACTL (Action-based Computation Tree Logic) property. The problem tackled is not the same as the one addressed in this paper, since Meolic et al.\ only consider linear traces. Furthermore, they do not annotate their counter-examples. Some authors propose complementary approaches to analyze counter-examples and to extract their useful information.
For example, Jin et al.\ partition a linear counter-example into \emph{fated} and \emph{free-will} segments, representing the parts of the path where the environment can force the system to go to the error (fated segments) or where the system performs mistaken behavior and could avoid it (free-will segments)~\cite{Jin-Ravi-others-04}. This approach is complementary to any counter-example generation and could be applied to tree-like annotated counter-examples. Other authors like Groce and Visser~\cite{Groce-Visser-03} and Copty et al.~\cite{Copty-Irron-others-03} generate and check variations of a found (linear) counter-example to identify the critical parts that cause the violation. This approach is also complementary and could in principle be applied to tree-like counter-examples. SAT solving has also been applied to the verification of CTL properties. For example, Penczek et al.\ describe an algorithm to transform an ACTL (the universal fragment of CTL) model checking problem into a SAT problem~\cite{penczek-wozna-others-02a}. The idea is to create a set of paths of length $k$ in the model connected together to form a branching counter-example. A solution to the SAT problem represents a viable counter-example. Nevertheless, they do not explicitly describe how to provide the counter-example to the user. In the domain of epistemic logics, MCK~\cite{Gammie-Meyden-04} and MCMAS~\cite{Lomuscio-Raimondi-06} are two tools that perform CTLK model checking. The first one, MCK, provides some debugging features like the export of the full graph of the system or the counter-example resulting from bounded model checking of a property. MCMAS also offers some debugging features: it presents a branching counter-example for a violated CTLK property, similar to TLACEs, but these counter-examples are not annotated. It also displays state information that can be filtered per agent, but does not offer browsing features like the ones provided by {TLACE Visualizer}{}.
\section{Conclusion and Perspectives} \label{section:conclusion} This paper presents a structure, an algorithm and an implementation to represent, generate and explore \emph{tree-like annotated counter-examples} (TLACEs) for Action-Restricted CTL. These counter-examples are branching, explaining why an ARCTL property is violated by a model, but also recursively explaining why sub-formulas of the negation of the property are satisfied by the model. Furthermore, elements of these counter-examples are annotated to help their understanding. While these counter-examples explain violations of ARCTL formulas, they become particularly useful to explain violations of richer branching logics like CTLK, which can be reduced to ARCTL. The algorithm uses sub-algorithms to generate paths in the model satisfying particular temporal operators and works recursively to explain why sub-formulas are satisfied by states of these paths. The implementation combines an extension of the NuSMV model checker for generating TLACEs and a graphical tool for displaying and inspecting them interactively. These counter-examples give more information about the violation than linear counter-examples and provide a better understanding of the system. The annotations help the user to understand the structure of the counter-example. By nature, such branching counter-examples can become very large and their generation is computationally more costly than generating linear counter-examples. The provided visualization tool is essential for conveniently and productively inspecting such large structures. To be able to handle larger and more complex counter-examples, we are investigating an interactive incremental approach, where the tool provides an initial state of the model violating the property and the user can ask to expand branches of interest. This approach would also make it possible to provide useful witnesses for universal operators, by allowing the user to choose and simulate only selected branches.
\bibliographystyle{eptcs}
\section{Introduction} Indoor localization is one of the most demanding applications in both business and public safety. Commercially, it can be used to track children and people with special needs, assist blind people in navigation, and identify equipment and mobile robots, among other things \cite{ref1}. Aside from making navigation easier for users, an indoor positioning system ensures a pleasant user experience and offers the option of heat mapping, which allows us to see how people move within a space. Some indoor positioning systems are based on time-related data/estimation, such as Time of Arrival (ToA), Time of Flight (ToF), and Time Difference of Arrival (TDoA) \cite{ref2}, \cite{ref3}. Either appropriate time synchronization or an antenna array is required for the ToA, ToF, and TDoA positioning systems, which may raise the system cost. In contrast, a Received Signal Strength Indicator (RSSI) based positioning system relies on the characteristics of wireless signal strength over time and does not require time synchronization or angle measurement \cite{ref4}. In an RSSI-based positioning system, the Access Point (AP) collects the RSSI values from Reference Points (RPs) to make the fingerprint feature vectors of the location grids in indoor localization systems, known as RSSI-based or fingerprinting datasets. One of the main challenges of implementing indoor positioning systems that use fingerprinting datasets is to find an appropriate supervised learning algorithm to classify locations based on their labels. In recent years, various kinds of shallow learning algorithms, for instance, k-Nearest Neighbors (k-NN) \cite{ref5}, Support Vector Machine (SVM) \cite{ref6}, Logistic Regression, Gradient Boosted Decision Tree (GBDT) \cite{ref16}, and Extreme Gradient Boosting (XGBoost) \cite{ref9}, have been applied to RSSI fingerprinting data.
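As a concrete illustration of how such shallow learners consume fingerprinting data, the following sketch locates a query RSSI vector with a plain k-NN majority vote. It is a didactic toy, not any of the cited systems; the fingerprints, labels, and query values are made up for the example.

```python
import math
from collections import Counter

def knn_locate(fingerprints, labels, query, k=3):
    """Classify a query RSSI vector (dBm values, one per reference point)
    by majority vote among its k nearest fingerprints (Euclidean distance)."""
    ranked = sorted(range(len(fingerprints)),
                    key=lambda i: math.dist(fingerprints[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical fingerprints from two reference points, labeled by grid cell.
fp = [(-40, -70), (-42, -68), (-75, -35), (-73, -38)]
lab = ['A', 'A', 'B', 'B']
```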
Although GBDT and XGBoost have achieved acceptable results, they could not reduce the significant effect of signal interference in the data, which is one of the most challenging problems in indoor localization systems. Hence, some deep neural networks (DNNs) have been recently introduced to deal with the noise issue \cite{ref11, ref12, ref13}. Deep learning is a convenient machine learning technique for feature augmentation algorithms which increase the statistical dependencies between the predictions of the individual base models \cite{ref10}. To build higher-level representations of inputs that are noisy due to signal fluctuations, the feature augmentation method extracts features from the fingerprinting data. Transfer learning is a suitable machine learning approach for improving the feature augmentation algorithm's outcome. The idea behind transfer learning is to freeze the first few layers of a pre-trained Artificial Neural Network (ANN), then retrain the remainder of the layers on new data \cite{ref20}. Because the new task is comparable to the previous task, we presume that the embedding will be beneficial for the new task. In this paper, we first gather the fingerprinting dataset in a BLE network. Then, we use a state-of-the-art algorithm named AugBoost-ANN, which was introduced in \cite{ref10}. To the best of our knowledge, this study is the first to use AugBoost-ANN for indoor positioning systems in support of IoT services. Our contributions in this paper are summarized as follows: \begin{itemize} \item We first prepare a fingerprinting dataset by collecting the RSSI values from a few BLE nodes as RPs, using a Raspberry Pi as an AP. Then, we propose an indoor positioning algorithm called AugBoost-ANN, which implements GBDT's stage-wise additive expansions with a neural-network-based feature augmentation method. Therefore, in each iteration of making a decision tree (DT), the algorithm combines the DT with an ANN.
\item We compare our proposed technique with existing deep learning and gradient boosting algorithms recently proposed in the literature, in terms of mean and standard deviation of location accuracy. Our proposed technique is 27\%, 70\%, and 19\% more accurate, in terms of mean accuracy, than \cite{ref11}, \cite{ref12}, and \cite{ref9}, respectively. Moreover, in terms of standard deviation, the localization error of our proposed technique is 11\% better than that of \cite{ref9}. \end{itemize} The rest of this paper is organized as follows. Section II reviews the background and related literature. Our proposed indoor localization technique is discussed in detail in Section III. Section IV presents the experimental results for the indoor positioning system. Finally, Section V presents the conclusions and future work. \section{BACKGROUND and LITERATURE REVIEW} In this section, an overview of indoor localization is first provided, then the application of the GBDT algorithm to indoor localization is discussed, and the related works are reviewed. \subsection{An Overview of Indoor Localization} The process of monitoring an indoor place using data obtained from various sources, such as wired or wireless networks, is known as an indoor localization system \cite{ref7}. The majority of indoor localization systems have been suggested using Zigbee, Bluetooth, Wi-Fi, or cellular technologies, with various degrees of implementation complexity and accuracy. BLE modules are more suitable for indoor localization, since they are a low-cost technology and we only need to install cheap, battery-operated BLE devices in the monitoring area. A general architecture of an indoor positioning system consisting of BLE modules and a Raspberry Pi is presented in Fig. \ref{fig:BLE network artichectrue}. The presented network architecture consists of three main subsystems, including endpoints, coordinator, and cloud server.
Endpoints consist of BLE modules, located at different coordinates of the map, that provide RSSI data. The coordinator's main component is a Raspberry Pi that collects the RSSI values of the endpoints, and it is in charge of communication between the cloud server and the endpoints. The coordinator is the center node of the star network topology, and it can gather and handle data without the need for contact with the cloud server. In the event that the server's communication link is broken (e.g. the internet connection is down), the Raspberry Pi can detect this situation and will execute and train the learning model by itself \cite{ref15}. Also, the cloud server receives data from the coordinator, performs the training, and stores the data in a database. \begin{figure} \centering \includegraphics[width=0.9\linewidth, height=4.0cm]{ble.PNG} \caption{The general network architecture for the indoor positioning system.} \label{fig:BLE network artichectrue} \end{figure} \subsection{Gradient Boosted Decision Tree} The concept of boosting arises from combining weak learners to get a model with significantly improved performance. Gradient Boosting is a machine learning approach to tackle regression and classification problems. It creates a prediction model using an ensemble of weak prediction models, such as decision trees \cite{ref16}. Bias error and variance error are the two types of errors that can occur in machine learning systems. Gradient boosting is one of the boosting methods used to reduce the model's bias error. Gradient boosting includes three major components: a loss function, a weak learner, and an additive model. The loss function is responsible for optimization, and the weak learner's task is to make predictions. Moreover, the additive model is utilized for appending weak learners to reduce the loss value.
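These three components can be made concrete with a minimal sketch: squared loss, one-split regression stumps as the weak learners, and an additive model that fits each new stump to the residuals (the negative gradient of the squared loss). This is a didactic toy under those assumptions, not the GBDT implementation evaluated later.

```python
# Minimal gradient boosting sketch: squared loss + regression stumps.
def fit_stump(x, r):
    """Best single-threshold split of 1-D inputs x against residuals r."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for cut in range(1, len(x)):
        left = [r[order[i]] for i in range(cut)]
        right = [r[order[i]] for i in range(cut, len(x))]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - lm) ** 2 for v in left)
               + sum((v - rm) ** 2 for v in right))
        if best is None or sse < best[0]:
            thr = (x[order[cut - 1]] + x[order[cut]]) / 2
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda v, thr=thr, lm=lm, rm=rm: lm if v <= thr else rm

def gbdt_fit(x, y, n_trees=20, lr=0.5):
    f0 = sum(y) / len(y)                 # initial constant model
    stumps, pred = [], [f0] * len(x)
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]  # -gradient of MSE
        s = fit_stump(x, resid)
        stumps.append(s)
        pred = [p + lr * s(v) for p, v in zip(pred, x)]
    return lambda v: f0 + lr * sum(s(v) for s in stumps)
```

Each stage appends one stump to the additive model, so the training loss shrinks monotonically, which is exactly the role of the three components described above.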
The Gradient Boosted Decision Tree (GBDT) is a gradient boosting method based on decision trees, which comprises several decision trees in practical applications \cite{ref17}. The GBDT is based on a regression decision tree and is capable of adapting to non-linear features. It can be used to process various data like fingerprinting datasets. It has been successfully utilized in a variety of contexts because of its solid theoretical foundation, precise prediction, and simplicity. It is especially well suited to huge data applications, such as matrix and vector computations \cite{ref16}. \subsection{Related Work} Many of the indoor localization problems have either been solved or alleviated by the deployment of machine learning algorithms; however, each has its own strengths and weaknesses. For example, various kinds of Recurrent Neural Networks (RNNs), including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and bidirectional LSTM (BiLSTM), have been implemented and evaluated in \cite{ref13}. Although the model presented in \cite{ref13} achieves a valuable mean location accuracy in meters, the standard deviation of the location accuracy obtained by their model is 0.64 m, which makes their indoor positioning system unreliable in some real indoor environments. Some GBDT methods have been recently utilized to classify the locations based on voting between decision tree models. For instance, in \cite{ref16}, the authors present a new multiple fingerprints method utilizing the GBDT and characteristics of RSSI. In \cite{ref18}, a Wi-Fi based indoor positioning system is proposed to obtain an RSSI-based dataset, and an auto-encoder neural network is implemented as a feature extraction algorithm to deal with the noise in the fingerprinting dataset. Then, a novel GBDT algorithm called LightGBM is used in \cite{ref18} to classify the locations.
Furthermore, a method for indoor positioning based on a semi-supervised deep reinforcement learning model is proposed in \cite{ref5}, which makes use of the enormous amount of unlabeled collected data. In \cite{ref14}, a deep learning technique is utilized to build a database by employing channel measurements of spatial beam signal-to-noise ratios (SNRs), as described in the IEEE 802.11ad/ay standard. \section{The Proposed Indoor Localization Technique} \label{sec:proposed technique} In this section, we present our proposed indoor localization technique. We first describe the indoor environment for the data collection and the collected dataset. Then, the model's feature augmentation algorithm, which implements a deep neural network, is presented, followed by an introduction of the AugBoost-ANN method for the indoor positioning system. The general architecture of the indoor localization technique is shown in Fig. \ref{fig:augboost}. \begin{figure} \centering \includegraphics[width=1.0\linewidth, height=10.5cm]{AugBoost-ANN.png} \caption{The general architecture of the proposed indoor localization technique.} \label{fig:augboost} \end{figure} \subsection{Data collection} The dataset is collected in an indoor environment. At first, we partition the map into grids and then locate and fix a few BLE modules with a predetermined distance from each other. Then, we implement a scenario in which a Raspberry Pi moves between different locations on the map over different periods and collects data from the BLE nodes, which advertise iBeacon or Eddystone BLE profiles, every second, storing the RSSI values of the transmitted packets in a dataset. The dataset is in CSV format with $m+1$ columns and $N$ rows, where $m$ is the number of features (the number of BLE nodes) and $N$ is the number of samples (seconds).
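Assuming the CSV layout just described ($m$ RSSI columns plus one location label per one-second sample), loading the file into $(R_i, y_i)$ pairs can be sketched as follows; the column names and the values in the excerpt are hypothetical, not the collected data.

```python
import csv, io

# Hypothetical excerpt: m = 3 BLE reference points plus a location label.
raw = """rssi1,rssi2,rssi3,label
-63,-71,-80,grid_A
-65,-70,-79,grid_A
-81,-62,-58,grid_B
"""

rows = list(csv.DictReader(io.StringIO(raw)))
# S = [(R_1, y_1), ..., (R_N, y_N)]: each R_i is an m-dimensional RSSI vector.
S = [([int(r[f"rssi{j}"]) for j in (1, 2, 3)], r["label"]) for r in rows]
```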
Therefore, in the collected dataset, there are $m$ columns that contain the RSSI values of the $m$ BLE modules and a column that shows the label of the location of the Raspberry Pi for every second. Let the collected dataset be denoted by $S=\{(R_1,y_1),(R_2,y_2),(R_3,y_3),\dots,(R_N,y_N)\}$, where $R_i=[{r_i^1},{r_i^2},{r_i^3},...,{r_i^m}]$ is the $m$-dimensional vector of collected RSSI values, and $y$ is the vector of labels of the AP's locations. So, we have $N$ samples with $m$ features. Furthermore, we presume that $\left\{R_i\right\}_{i=1}^N$ are the original features and $y$ is the target in our proposed technique. An example of an indoor environment for a parking lot map with 10 BLE modules is illustrated in Fig. \ref{fig:map}. \begin{figure} \centering \includegraphics[width=1.0\linewidth, height=3.5cm]{map.jpg} \caption{The illustration of a parking lot map and 10 BLE nodes.} \label{fig:map} \end{figure} \subsection{Feature Augmentation with ANN} \label{sec:feature augmentation} The original features do not change before each iteration of making a Decision Tree (DT) during the training phase of the GBDT. Therefore, for indoor positioning tasks, the GBDT will not succeed in achieving an accurate model to classify the locations in the presence of signal interference. Feature augmentation, a common technique for tackling Multi-Dimensional Classification (MDC) issues, manipulates the feature space by incorporating the label information \cite{ref21}. According to \cite{ref10}, a state-of-the-art method called feature augmentation with an Artificial Neural Network (ANN) has been proposed, which aims to train an ANN, with the original features and the updated target, until the loss stops improving. The ANN's architecture includes three fully connected hidden layers, each with a Rectified Linear Unit ($ReLU$) activation function, which is defined as: \begin{equation} \label{eq1} ReLU(x) = \max(0,x).
\end{equation} The number of neurons in each hidden layer is equal to the number of input neurons. Furthermore, the batch size is set between 300 and $\frac{1}{15}$ of the number of samples in the dataset. In order to extract features from the ANN, the activations of the 3rd hidden layer are taken as the augmented features, using the transfer learning technique. Thus, we freeze the first few layers of a trained ANN and retrain the remainder of the layers on new data \cite{ref20}. Since the new task is comparable to the previous task, we presume that the embedding will be beneficial for the new task. This is the case when both tasks are similar, and it is thus prudent to retain as many layers as feasible from the pretrained ANN. This implies that we simply drop the ANN's final layer \cite{ref10}. The structure of the proposed feature augmentation with ANN is presented in Fig. \ref{fig:ANN}. \begin{figure} \centering \includegraphics[width=0.9\linewidth, height=4.9cm]{ANN.PNG} \caption{Representation of the proposed ANN structure.} \label{fig:ANN} \end{figure} \subsection{Indoor localization via AugBoost-ANN} \label{sec:augboost} As mentioned in the previous subsection, one of the main weaknesses of GBDT is that the original features do not change before each iteration. Therefore, feature augmentation algorithms have been proposed to increase the accuracy of the model in the presence of signal fluctuations in the data. As shown in Fig. \ref{fig:augboost}, gradient boosting enhanced with step-wise feature augmentation using an artificial neural network (AugBoost-ANN) uses the proposed ANN as the feature augmentation algorithm for the fingerprinting data before each iteration of creating a DT. At the end, AugBoost-ANN votes between the predictions of all DTs with their predetermined weights to find the best model. The main idea of AugBoost-ANN was first proposed in \cite{ref10}; the training procedure is organized in Algorithm \ref{Alg:augboost}.
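The extraction step just described, dropping the output layer and reading off the 3rd hidden layer, amounts to a forward pass through the frozen hidden layers only. The sketch below illustrates this with placeholder weights; it is not the trained network and the weight values are made up.

```python
def relu(v):
    """Element-wise ReLU(x) = max(0, x)."""
    return [x if x > 0 else 0.0 for x in v]

def dense(v, W, b):
    """One fully connected layer: W is a list of weight rows, b a bias vector."""
    return [sum(w * x for w, x in zip(row, v)) + bi for row, bi in zip(W, b)]

def augmented_features(x, hidden_layers):
    """Forward pass through the (frozen) hidden layers, ReLU after each;
    the ANN's final output layer is dropped, so the last activations
    returned are the augmented features."""
    h = x
    for W, b in hidden_layers:
        h = relu(dense(h, W, b))
    return h
```

With identity weights the negative RSSI component is zeroed by the ReLU, which shows how the hidden layers re-encode the raw inputs before they are handed to the boosting stage.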
\newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}} \SetCommentSty{mycommfont} \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \begin{algorithm} \caption{AugBoost-ANN Training Procedure} \label{Alg:augboost} \SetAlgoLined \textbf{Input}: \begin{small} $S=\{\left.\begin{aligned}(R_1,y_1),(R_2,y_2),(R_3,y_3),...,(R_N,y_N)\end{aligned}\right\}$ \\ \end{small} \textbf{Output}: $M_T(R)$ \\ $M_0(R)=\underset{\rho}{\arg\min} \sum_{i=1}^N (\mathcal{L}(y_i, \rho))$ \\ \For{$i=1:N$}{ Initialize the target for each sample: $\tilde{y}_{i}={y}_{i}$} \For{$t=1:T$}{ \uIf{$t-1$ is divisible by $c_{BA}$}{ Split the features of $R_i$ for distinct $i \in \{1,\dotsc,N\}$ into $J$ random subsets. \\ \For{$j=1:J$}{ Apply the feature augmentation method to the $j^{\text{\tiny th}}$ subset using the proposed ANN, which retrains the model for the subset and is represented in Fig.~\ref{fig:ANN}. } Finally, extract the augmented features from the $3$rd hidden layer of the proposed ANN. \\ } \Else{ Set the feature augmentation method and subsets to those of the previous iteration. \\ } \For{$i=1:N$}{ Obtain $LGL$ from (\ref{eq3}) and update the target: $\tilde{y}_{i} =LGL$} Train $D_t$ on the augmented features and targets. Set the value of the weight ($\rho_t$) using (\ref{eq4})\\ $\mathit{M_t(R)}=M_{t-1}(R)+\rho_t\cdot D_t(R')$ } \end{algorithm} The algorithm takes $S$ as input and returns a model named $M_T(R)$, which is based on a vote among the predictions of $T$ decision trees with their corresponding weights. The algorithm uses $\mathcal{L}$ as a loss function, namely the mean squared error ($\mathit{MSE}$), which is given as follows, \begin{equation} \label{eq2} \mathit{MSE}=\frac{1}{n} \sum_{i=1}^n (z_{i}-\hat{z}_{i})^2, \end{equation} \noindent where $n$ is the number of data points, $z_{i}$ are the observed values, and $\hat{z}_{i}$ are the predicted values.
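For intuition, Algorithm \ref{Alg:augboost} without the ANN augmentation step reduces to plain GBDT for the $\mathit{MSE}$ loss: the negative gradient $LGL$ of (\ref{eq3}) is simply the residual $y_i - M_{t-1}(R_i)$, and the weight of (\ref{eq4}) has a closed form. The following sketch uses one-split regression stumps in place of full DTs and is only an illustration of the boosting loop, not the full AugBoost-ANN.

```python
import numpy as np

def fit_stump(X, r):
    # Best single-split regression stump for residual targets r (MSE).
    best_sse, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = ((r - pred) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, t, r[left].mean(), r[~left].mean())
    j, t, lv, rv = best
    return lambda Z: np.where(Z[:, j] <= t, lv, rv)

def gbdt_mse(X, y, T=10):
    # M_0(R): the constant rho minimizing sum_i L(y_i, rho) is the mean.
    M = np.full(len(y), y.mean())
    model = []
    for _ in range(T):
        lgl = y - M                    # Eq. (3): negative gradient for MSE
        D = fit_stump(X, lgl)
        d = D(X)
        denom = d @ d
        rho = (lgl @ d) / denom if denom > 0 else 0.0  # Eq. (4), closed form
        M = M + rho * d
        model.append((rho, D))
    return model, M

X = np.arange(20, dtype=float).reshape(-1, 1)
y = (X[:, 0] >= 10).astype(float)      # toy regression target
model, M = gbdt_mse(X, y, T=5)
```

Each stage fits the residuals of the current ensemble and adds its weighted prediction to $M$, so the training error is non-increasing over iterations.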
In the first stage of training, we initialize $M_{0}(R)$ with the value of $\rho$ that minimizes $\sum_{i=1}^N \mathcal{L}(y_i, \rho)$. Moreover, the $\tilde{y}_{i}$ for distinct $i \in \{1,\dotsc,N\}$ are the targets for the algorithm, and before starting the iterations, we initialize each $\tilde{y}_{i}$ with $y_{i}$ as the early target. The main purpose of GBDT is to build a decision tree in each of $T$ iterations and vote among the predictions of the $T$ decision trees with their predetermined weights to find the best model. The entire algorithm of AugBoost-ANN is similar to the GBDT algorithm except for the feature augmentation part, which takes place in each iteration before creating a decision tree. However, we do not retrain the model used for feature augmentation in every iteration of building decision trees. Instead, starting with the first iteration, we retrain the model every $c_{\textrm {BA}}$ iterations (BA is an acronym for 'Between Augmentations'). The model from the previous iteration is replicated in the subsequent iterations. This is designed to allow the boosting process to exploit the information in each set of new features, because each decision tree may only be able to use a portion of the information in each set of new features. Therefore, for the $t^{\text{\tiny th}}$ iteration, if $t-1$ is divisible by $c_{\textrm {BA}}$, we split the features of $R_{i}$ for distinct $i \in \{1,\dotsc,N\}$ into $J$ random subsets, and we apply the feature augmentation algorithm to each subset by means of the transfer learning technique. On the other hand, if $t-1$ is not divisible by $c_{BA}$, we keep the feature augmentation method and subsets of the previous iteration. The reason for using $\mathit{MSE}$ as the loss function is to measure the quality of each split.
Next, we update the targets to account for the ensemble's errors from the previous iteration using the last gradient of the loss function, denoted by $LGL$, for each sample of the original features, which is given as: \begin{equation} \label{eq3} \mathit{LGL}=-\left[ \frac{\partial }{\partial M(R_i)}{\mathcal{L}(y_i, M(R_i))} \right]_{M(R)=M_{t-1}(R)}. \end{equation} Afterward, we train a decision tree using the output of the feature augmentation algorithm and the updated targets. We set the weight of the model generated from the decision tree in the $t^{\text{\tiny th}}$ iteration, denoted by $\rho_t$, so as to optimize the loss function: \begin{equation} \label{eq4} \mathit{\rho_t}=\underset{\rho}{\arg\min} \sum_{i=1}^N (\mathcal{L}(y_i, M_{t-1}(R_i)+\rho\cdot D_t(R_i^\prime))), \end{equation} where $D_t$ is the decision tree in the $t^{\text{\tiny th}}$ iteration, $M_{t-1}(R)$ denotes the ensemble of models generated in the previous iterations, and $R_i^\prime$ is the $i^{\text{\tiny th}}$ augmented feature vector that is given to $D_t$ as input. Hence, we set $\rho_t$ to the value of $\rho$ for which the loss function attains its minimum. Finally, $M_{t}(R)$ is obtained as the ensemble of the previous models. \section{EXPERIMENTAL RESULTS} \label{sec:result} In this section, we evaluate feature augmentation and supervised learning methods in terms of location accuracy over different numbers of iterations. First, we collect the fingerprinting dataset in a parking lot of a building with a size of $12.5\times18$ square meters and partition the map into grids of size $1.25\times1.5$ square meters. Next, we place 10 BLE modules ($m=10$) at distances of approximately 4 meters. Finally, we move a Raspberry Pi 4 B+ on the map during 1090 seconds ($N=1090$) in different periods to gather the data. The full documentation and dataset are accessible in \cite{ref19}.
For our experiments, all of the results are reported as mean accuracy and standard deviation in terms of meters, using the Euclidean distance between locations. The standard deviation is defined as: \begin{equation} \label{eq5} \sigma=\sqrt{\frac{\sum_{i=1}^{n}(x_i-\mu)^2}{n}}, \end{equation} \noindent where $\sigma$ is the accuracy standard deviation, $n$ is the number of training processes, $x_i$ is a value from the list of all accuracies over all training processes, and $\mu$ is the mean accuracy. \subsection{Feature Augmentation} A feature augmentation algorithm called Random Projection (RP) is suggested in \cite{ref10}, where, instead of projecting all of the features with the same random projection, the algorithm applies the projections independently to each of the randomly chosen feature subsets. Then, rather than simply employing the new features, it concatenates the original and new features. Moreover, it re-extracts the features only once per fixed number of iterations \cite{ref10}. As one of the primary contributions of the suggested indoor localization methodology, we compare the proposed ANN approach against RP in order to augment the most important features in the algorithm, which play a significant role in an accurate location classifier. For this purpose, we apply AugBoost-ANN, AugBoost-RP, and XGBoost on a one-fold test with 70\% of the dataset as random training trajectory samples. Fig. \ref{fig:ANN_vs_RP} shows the mean accuracy of the AugBoost algorithm using ANN and RP for different numbers of iterations. It can be seen that AugBoost-ANN is more accurate than AugBoost-RP in 85\% of the iterations. In particular, AugBoost-ANN achieves its best mean accuracy of 0.64 meters in the $150^{\text{\tiny th}}$ iteration, whereas AugBoost-RP obtains a mean accuracy of 0.72 meters in the $135^{\text{\tiny th}}$ iteration.
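The mean/standard-deviation computation used throughout can be checked with a tiny numpy example; the per-run accuracy values below are made up for illustration.

```python
import numpy as np

acc = np.array([0.70, 0.80, 0.75, 0.83])  # hypothetical per-run accuracies (m)
mu = acc.mean()                            # mean accuracy
sigma = np.sqrt(((acc - mu) ** 2).sum() / len(acc))  # population std
# This agrees with numpy's population standard deviation, acc.std().
```

Note that this is the population standard deviation (divisor $n$), not the sample standard deviation (divisor $n-1$), matching `numpy`'s default `ddof=0`.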
XGBoost is one of the most common gradient boosting methods and also belongs to the GBDT family; it is implemented in \cite{ref9} for indoor positioning systems and outperforms several shallow learning algorithms in terms of mean accuracy. As Fig. \ref{fig:ANN_vs_RP} shows, AugBoost-ANN outperforms XGBoost with regard to mean accuracy in all iterations. However, the mean accuracy of XGBoost is better than that of AugBoost-RP in the first few iterations. \begin{figure} \centering \includegraphics[width=0.8\linewidth, height=4.2cm]{mean_acc.png} \caption{The comparison of mean accuracy (m) of AugBoost-ANN, AugBoost-RP, and XGBoost on one fold for different iterations.} \label{fig:ANN_vs_RP} \end{figure} \subsection{Localization Performance} In the second experiment, we evaluate location accuracy in terms of mean accuracy and standard deviation for different supervised learning algorithms. For implementing the AugBoost-ANN algorithm, we use 150 iterations and set $c_{BA}$ to 15; hence, over the whole procedure, we train 150 DTs and retrain the proposed ANN 15 times. Moreover, we choose the Adam optimizer as the optimization function and set the number of epochs and the learning rate to 30 and 0.1, respectively. All of the results are presented after 8-fold tests with 70\% of the dataset as random training trajectory samples. In our research, the Keras package is utilized for implementing the deep neural network on TensorFlow \cite{ref22}. To evaluate our proposed deep learning approach, we compare several supervised learning algorithms that were presented in recent publications in the field of indoor localization systems; the results are demonstrated in Table \ref{table:comparison}.
\begin{table} \begin{center} \caption{\label{table:comparison}The comparison of our proposed technique after 8-fold tests with the existing related works.} \begin{tabular}{||c c c c||} \hline Algorithm & Labels & Grid size & Location accuracy \\ [0.3ex] \hline\hline MLP \cite{ref11} & 50 & 1.7 m & $2.8 \pm 0.1$ m \\ [0.3ex] \hline MLNN \cite{ref12} & 20 & 1.5 m & $1.1 \pm 1.2$ m \\ [0.3ex] \hline XGBoost \cite{ref9} & 1401 & 1.5 m & $3.99 \pm 2.81$ m \\ [0.3ex] \hline Proposed AugBoost-ANN & 54 & 1.25-1.5 m & $0.77 \pm 0.3$ m \\ [0.3ex] \hline \end{tabular} \end{center} \end{table} According to Table \ref{table:comparison}, we compare our proposed method with learning algorithms named Multi-Layer Perceptron (MLP) \cite{ref11}, Multi-Layer Neural Network (MLNN) \cite{ref12}, and Extreme Gradient Boosting (XGBoost) \cite{ref9}, each of which has its strengths and weaknesses. For example, MLP obtains a standard deviation of 0.1 m, which is 0.2 m better than the standard deviation of AugBoost-ANN. However, its mean accuracy is 2.8 m, which means that our proposed method is more reliable in this respect. Also, despite the fact that XGBoost and AugBoost-ANN are both based on GBDT, XGBoost achieves a mean accuracy and standard deviation of 3.99 m and 2.81 m, which indicates that applying the feature augmentation method improves accuracy significantly. Furthermore, compared with MLNN, AugBoost-ANN performs better in all respects. Finally, we investigate the performance of the deep neural network by evaluating the mean and standard deviation (std) of the base-10 logarithm of the loss function (log loss) for different numbers of iterations. The loss function is cross-entropy, since our neural network augments features. Also, from Fig. \ref{fig:mea_log_std}, we observe that from the first iteration to the $45^{\text{\tiny th}}$ iteration the mean log loss decreases rapidly from 1.8 to 0.7, while the std of the log loss rises significantly by 0.5.
From the $45^{\text{\tiny th}}$ iteration to the $180^{\text{\tiny th}}$ iteration, these values change smoothly, stabilizing around the $150^{\text{\tiny th}}$ iteration. The code repository is accessible in \cite{ref23}. \begin{figure} \centering \includegraphics[width=0.8\linewidth, height=4.2cm]{mean_loss_std.png} \caption{The learning curve of AugBoost-ANN on one fold for different iterations with respect to the log loss.} \label{fig:mea_log_std} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we proposed a novel deep learning solution for classifying locations in an indoor environment using BLE networks. The current study is the first to use AugBoost-ANN for indoor positioning systems. For this purpose, we utilized an IoT architecture with a star network topology, with 10 BLE modules and a Raspberry Pi to gather fingerprinting data. Then, we used a novel deep learning approach called AugBoost-ANN, which augments the features in each iteration of building a decision tree using a deep neural network and the transfer learning technique. Our proposed algorithm outperforms the gradient boosting and deep learning algorithms for indoor localization recently proposed in related works. As future work, we will investigate new deep learning approaches such as Auto-Encoder networks for feature extraction. \bibliographystyle{IEEEtran}
\section{Introduction} Arrangements of lines and, in general, arrangements of hyperplanes are paramount data structures in computational geometry whose combinatorial properties have been extensively studied, partially motivated by the point-hyperplane duality. Pseudo-line arrangements are a combinatorial generalization of line arrangements. They were defined by Levi in 1926, and their full potential was first exploited by Goodman and Pollack. While pseudo-lines can be considered either as combinatorial or as geometric objects, they lack certain geometric properties that may be needed in proofs. The following example motivated the research presented in this paper. Consider a finite set of lines that are either red or blue, no two of them parallel and no three of them passing through the same point. Every such arrangement has a bichromatic triangle, i.e., an empty triangular cell bounded by red and blue lines. This can be shown using a distance argument similar to Kelly's proof of the Sylvester-Gallai theorem (see, e.g.,~\cite[p.~73]{proofs_book}). We sketch another nice proof. Think of the arrangement as the union of two monochromatic arrangements, one blue and one red. Continuously translate the red arrangement in positive $y$-direction while keeping the blue arrangement in place. Eventually the combinatorics of the union arrangement will change with a triangle flip, i.e., with a crossing passing a line. The area of monochromatic triangles is not affected by the motion. Therefore, the first triangle that flips is a bichromatic triangle in the original arrangement. See \figurename~\ref{fig_proof_triangle}~(left). \begin{figure} \centering \includegraphics{proof_triangle} \caption{Vertical translation of the red lines shows that there is always a bichromatic triangle in a bichromatic line arrangement (left). For pseudo-line arrangements, a vertical translation may result in a structure that is no longer a valid pseudo-line arrangement (right).
} \label{fig_proof_triangle} \end{figure} This argument does not generalize to pseudo-line arrangements; see \figurename~\ref{fig_proof_triangle}~(right). In fact, the question whether all simple bichromatic pseudo-line arrangements have bichromatic triangles has now been open for several years. The crucial property of lines used in the above argument is that shifting a subset of the lines vertically again yields an arrangement, i.e., the shift does not introduce multiple crossings. We were wondering whether any pseudo-line arrangement can be drawn s.t.\ this property holds. In this paper, we show that this is not true and that the arrangements for which this is possible constitute an interesting class of pseudo-line arrangements. Define an \emph{arrangement of pseudo-lines} as a finite family of $x$-monotone bi-infinite connected curves (called \emph{pseudo-lines}) in the Euclidean plane s.t.\ each pair of pseudo-lines intersects in exactly one point, at which they cross. For simplicity, we consider the $n$ pseudo-lines $\{\ell_1, \dots, \ell_n\}$ to be indexed from $1$ to $n$ in top-bottom order at left infinity.% \footnote{Pseudo-line arrangements are often studied in the real projective plane, with pseudo-lines being simple closed curves that do not separate the projective plane. All arrangements can be represented by $x$-monotone arrangements~\cite{semispaces}.
As $x$-monotonicity is crucial for our setting and the line at infinity plays a special role, we use the above definition.} A pseudo-line arrangement is \emph{simple} if no three pseudo-lines meet in one point; if in addition no two pairs of pseudo-lines cross at the same $x$-coordinate, we call it \emph{$x$-simple.} An \emph{arrangement of approaching pseudo-lines} is an arrangement of pseudo-lines in which each pseudo-line~$\ell_i$ is represented by the function-graph of $f_i(x)$, defined for all $x \in \mathbb{R}$, s.t., for any two pseudo-lines $\ell_i$ and $\ell_j$ with $i < j$, the function $x \mapsto f_i(x) - f_j(x)$ is monotonically decreasing and surjective. This implies that the pseudo-lines approach each other until they cross, and then they move away from each other. It exactly captures our objective to vertically translate pseudo-lines in an arbitrary way while maintaining the invariant that the collection of curves is a valid pseudo-line arrangement. (If $f_i-f_j$ is not surjective, the crossing of pseudo-lines $i$ and $j$ may be lost upon vertical translations.) For most of our results, we consider the pseudo-lines to be \emph{strictly approaching}, i.e., the function is strictly decreasing. For simplicity, we may sloppily call arrangements of approaching pseudo-lines \emph{approaching arrangements}. In this paper, we identify various notable properties of approaching arrangements. In Section~\ref{sec_manipulating}, we show how to modify approaching arrangements and how to decide in polynomial time whether an arrangement is $x$-isomorphic to an approaching arrangement. Then, we show a specialization of Levi's enlargement lemma for approaching pseudo-lines and use it to show that arrangements of approaching pseudo-lines are dual to generalized configurations of points with an underlying arrangement of approaching pseudo-lines. In Section~\ref{sec_properties}, we describe arrangements that have no realization as an approaching arrangement.
We also show that asymptotically there are as many approaching arrangements as pseudo-line arrangements. We conclude in Section~\ref{sec_higher} with a generalization of the notion of being approaching to three dimensions; it turns out that arrangements of approaching pseudo-planes are characterized by the combinatorial structure of the family of their normal vectors at all points. \paragraph{Related work.} Restricted representations of Euclidean pseudo-line arrangements have been considered already in early work about pseudo-line arrangements. Goodman~\cite{goodman_proof} shows that every arrangement has a representation as a \emph{wiring diagram}. More recently there have been results on drawing arrangements as convex polygonal chains with few bends~\cite{convex_arc_drawings} and on small grids~\cite{small_grids}. Goodman and Pollack~\cite{polynomial_realization} consider arrangements whose pseudo-lines are the function-graphs of polynomial functions with bounded degree. In particular, they give bounds on the degree necessary to represent all isomorphism classes of pseudo-line arrangements. Generalizing the setting to higher dimensions (by requiring that any pseudo-hyperplane can be translated vertically while maintaining that the family of hyperplanes is an arrangement) we found that such approaching arrangements are representations of \emph{Euclidean oriented matroids}, which are studied in the context of pivot rules for oriented matroid programming (see~\cite[Chapter~10]{oriented_matroids}). \section{Manipulating approaching arrangements} \label{sec_manipulating} Lemma~\ref{lem_polygonal} shows that we can make the pseudo-lines of approaching arrangements piecewise linear. This is similar to the transformation of Euclidean pseudo-line arrangements to equivalent wiring diagrams. Before stating the lemma it is appropriate to briefly discuss notions of isomorphism for arrangements of pseudo-lines. 
Since we have defined pseudo-lines as $x$-monotone curves, there are two faces of the arrangement containing the points at $\pm$infinity of vertical lines. These two faces are the \emph{north-face} and the \emph{south-face}. A \emph{marked arrangement} is an arrangement together with a distinguished unbounded face, the north-face. Pseudo-lines of marked arrangements are oriented such that the north-face is to the left of the pseudo-line. We think of pseudo-line arrangements, and in particular of approaching arrangements, as marked arrangements. Two pseudo-line arrangements are \emph{isomorphic} iff there is an isomorphism of the induced cell complexes which maps north-face to north-face and respects the induced orientation of the pseudo-lines. Two pseudo-line arrangements are \emph{$x$-isomorphic} iff a sweep with a vertical line meets the crossings in the same order. Both notions can be described in terms of allowable sequences. An \emph{allowable sequence} is a sequence of permutations starting with the identity permutation ${\sf id} = (1, \dots, n)$ in which (i) a permutation is obtained from the previous one by the reversal of one or more non-overlapping substrings, and (ii) each pair is reversed exactly once. An allowable sequence is \emph{simple} if two adjacent permutations differ by the reversal of exactly two adjacent elements. Note that the permutations in which a vertical sweep line intersects the pseudo-lines of an arrangement form an allowable sequence. We refer to this as \emph{the allowable sequence} of the arrangement and say that the arrangement \emph{realizes} the allowable sequence. Clearly two arrangements are $x$-isomorphic if they realize the same allowable sequence. Replacing the vertical line for the sweep by a moving curve (a vertical pseudo-line) which joins north-face and south-face and intersects each pseudo-line of the arrangement exactly once, we get the notion of a pseudo-sweep.
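For straight lines, the allowable sequence can be computed directly by sorting the $\binom{n}{2}$ crossings by $x$-coordinate. A small Python sketch, using 0-based indices and arbitrary illustrative slopes and intercepts, assuming the arrangement is $x$-simple so that each crossing reverses two adjacent elements:

```python
from itertools import combinations

def allowable_sequence(lines):
    # lines[i] = (a, b) for y = a*x + b, indexed 0..n-1 in top-bottom
    # order at left infinity (i.e., sorted by increasing slope).
    crossings = sorted(
        ((b2 - b1) / (a1 - a2), i, j)
        for (i, (a1, b1)), (j, (a2, b2)) in combinations(enumerate(lines), 2)
    )
    perm = list(range(len(lines)))
    seq = [tuple(perm)]
    for _, i, j in crossings:
        p, q = sorted((perm.index(i), perm.index(j)))
        perm[p], perm[q] = perm[q], perm[p]  # reversal of two adjacent elements
        seq.append(tuple(perm))
    return seq

# Three pairwise non-parallel lines; no two crossings share an x-coordinate.
seq = allowable_sequence([(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)])
```

As expected, the sequence starts with the identity permutation, each pair is reversed exactly once, and the final permutation is the full reversal.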
A pseudo-sweep typically has various options for making progress, i.e., for passing a crossing of the arrangement. Each pseudo-sweep also produces an allowable sequence. Two arrangements are isomorphic if their pseudo-sweeps yield the same collection of allowable sequences or, equivalently, if there are pseudo-sweeps on the two arrangements which produce the same allowable sequence. \begin{lemma}\label{lem_polygonal} For any arrangement of approaching pseudo-lines, there is an $x$-isomorphic arrangement of approaching polygonal curves (starting and ending with a ray). If the allowable sequence of the arrangement is simple, then there exists such an arrangement without crossings at the bends of the polygonal curves. \end{lemma} \begin{proof} Consider the approaching pseudo-lines and add a vertical `helper-line' at every crossing. Connect the intersection points of each pseudo-line with adjacent helper-lines by segments. This results in an arrangement of polygonal curves between the leftmost and the rightmost helper-line. See \fig{fig_polygonal}. Since the original pseudo-lines were approaching, these curves are approaching as well; the signed distance between the intersection points with the vertical lines is decreasing, and this property is maintained by the linear interpolations between the points. To complete the construction, we add rays in negative $x$-direction starting at the intersection points at the first helper-line; the slopes of the rays are to be chosen s.t.\ their order reflects the order of the original pseudo-lines at left infinity. After applying the analogous construction at the rightmost helper-line, we obtain the $x$-isomorphic arrangement. If the allowable sequence of the arrangement is simple, we may choose the helper-lines between the crossings and use a corresponding construction. This avoids an incidence of a bend with a crossing.
\end{proof} \begin{figure}[ht] \centering \includegraphics[scale=.8]{polygonal} \caption{Transforming an arrangement of approaching pseudo-lines into an isomorphic one of approaching polygonal pseudo-lines.} \label{fig_polygonal} \end{figure} The construction used in the proof yields pseudo-lines represented by polygonal curves with a quadratic number of bends. It might be interesting to consider the problem of minimizing the number of bends in such polygonal representations of arrangements. Two simple operations which can help to reduce the number of bends are \emph{horizontal stretching}, i.e., a change of the $x$-coordinates of the helper-lines which preserves their left-to-right order, and \emph{vertical shifts}, which can be applied to a helper-line and all the points on it. Both operations preserve the $x$-isomorphism class. The two operations are crucial for our next result, where we show that the intersection points with the helper-lines can be obtained by a linear program. Asinowski~\cite{sub_allowable} defines a \emph{suballowable sequence} as a sequence obtained from an allowable sequence by removing an arbitrary number of permutations from it. An arrangement thus realizes a suballowable sequence if we can obtain this suballowable sequence from its allowable sequence. \begin{theorem}\label{thm_realizability} Given a suballowable sequence, we can decide in polynomial time whether there is an arrangement of approaching pseudo-lines with such a sequence. \end{theorem} \begin{proof} We attempt to construct a polygonal pseudo-line arrangement for the given suballowable sequence. As discussed in the proof of Lemma~\ref{lem_polygonal}, we only need to obtain the points in which the pseudo-lines intersect the vertical helper-lines through crossings. The allowable sequence of the arrangement is exactly the description of the relative positions of these points.
We can consider the $y$-coordinates of pseudo-line $\ell_i$ at a vertical helper-line $v_c$ as a variable $y_{i,c}$ and by this encode the suballowable sequence as a set of linear inequalities on those variables, e.g., to express that $\ell_i$ is above $\ell_j$ at $v_c$ we use the inequality $y_{i,c} \geq y_{j,c} +1$. Further, the curves are approaching iff $y_{i, c} - y_{j,c} \geq y_{i,c+1} - y_{j, c+1}$ for all $1\leq i<j \leq n$ and~$c$. These constraints yield a polyhedron (linear program) that is non-empty (feasible) iff there exists such an arrangement. Since the allowable sequence of an arrangement of $n$ pseudo-lines consists of $\binom{n}{2}+1$ permutations, the linear program has $O(n^4)$ inequalities in $O(n^3)$ variables. Note that it is actually sufficient to have constraints only for neighboring points along the helper-lines; this shows that $O(n^3)$ inequalities are sufficient. \end{proof} Let us emphasize that deciding whether an allowable sequence is realizable by a line arrangement is an $\exists\mathbb{R}$-hard problem~\cite{allowable_er}, and thus not even known to be in NP. While we do not have a polynomial-time algorithm for deciding whether there is an isomorphic approaching arrangement for a given pseudo-line arrangement, Theorem~\ref{thm_realizability} tells us that the problem is in NP, as we can give the order of the crossings encountered by a sweep as a certificate for a realization. The corresponding problem for lines is also $\exists\mathbb{R}$-hard~\cite{mnev}. The following observation is the main property that makes approaching pseudo-lines interesting. \begin{obs}\label{obs:main} Given an arrangement $A$ of strictly approaching pseudo-lines and a pseudo-line $\ell\in A$, any vertical translation of $\ell$ in $A$ results again in an arrangement of strictly approaching pseudo-lines. \end{obs} Doing an arbitrary translation, we may run into trouble when the pseudo-lines are not strictly approaching.
In this case it can happen that two pseudo-lines share an infinite number of points. The following lemma allows us to ignore non-strictly approaching arrangements for many aspects. \begin{lemma}\label{lem_strictly} Any simple arrangement of approaching pseudo-lines is homeomorphic to an $x$-isomorphic arrangement of strictly approaching pseudo-lines. \end{lemma} \begin{proof} Given an arrangement $A$, construct a polygonal arrangement $A'$ as described for Lemma~\ref{lem_polygonal}. If the resulting pseudo-lines are strictly approaching, we are done. Otherwise, consider the rays that emanate to the left. We may change their slopes s.t.\ all the slopes are different and their relative order remains the same. Consider the first vertical slab defined by two neighboring vertical lines $v$ and $w$ that contains two segments that are parallel (if there are none, the arrangement is strictly approaching). Choose a vertical line $v'$ slightly to the left of the slab and use $v'$ and $w$ as helper-lines to redraw the pseudo-lines in the slab. Since the arrangement is simple, the resulting arrangement is $x$-isomorphic and has fewer parallel segments. Iterating this process yields the desired result. \end{proof} \begin{lemma}\label{lem_simple_sweep} If $A$ is an approaching arrangement with a non-simple allowable sequence, then there exists an approaching arrangement $A'$ whose allowable sequence is a refinement of the allowable sequence of $A$, i.e., the sequence of $A'$ may have additional permutations between consecutive pairs $\pi,\pi'$ in the sequence of $A$. \end{lemma} \begin{proof} Since its allowable sequence is non-simple, arrangement $A$ has a crossing point where more than two pseudo-lines cross, or $A$ has several crossings with the same $x$-coordinate. Let $\ell$ be a pseudo-line participating in such a degeneracy. By translating $\ell$ slightly in vertical direction, a degeneracy is removed and the allowable sequence is refined.
\end{proof} Ringel's homotopy theorem~\cite[Thm.~6.4.1]{oriented_matroids} tells us that given a pair $A$, $B$ of pseudo-line arrangements, $A$ can be transformed to $B$ by homeomorphisms of the plane and so-called \emph{triangle flips}, where a pseudo-line is moved over a crossing. Within the subset of arrangements of approaching pseudo-lines, the result still holds. We first show a specialization of Ringel's isotopy result~\cite[Prop.~6.4.2]{oriented_matroids}: \begin{lemma}\label{lem_transform_sweep_equivalent} Two $x$-isomorphic arrangements of approaching pseudo-lines can be transformed into each other by a homeomorphism of the plane s.t.\ all intermediate arrangements are $x$-isomorphic and approaching. \end{lemma} \begin{proof} Given an arrangement~$A$ of approaching pseudo-lines, we construct a corresponding polygonal arrangement $A'$. Linearly transforming a point $f_i(x)$ on a pseudo-line $\ell_i$ in $A$ to the point $f'_i(x)$ on the corresponding line $\ell'_i$ in $A'$ gives a homeomorphism from $A$ to $A'$ which can be extended to the plane. Given two $x$-isomorphic arrangements $A'$ and $B$ of polygonal approaching pseudo-lines, we may shift helper-lines horizontally, so that the $\binom{n}{2}+1$ helper-lines of the two arrangements become aligned, i.e., are at the same $x$-coordinates; again there is a corresponding homeomorphism of the plane. Now recall that these arrangements can be obtained from solutions of linear programs. Since $A'$ and $B$ have the same combinatorial structure, their defining inequalities are the same. Thus, a convex combination of the variables defining the two arrangements is also in the solution space, which continuously takes us from $A'$ to $B$ and thus completes the proof. \end{proof} \begin{theorem}\label{thm_transform} Given two simple arrangements of approaching pseudo-lines, one can be transformed to the other by homeomorphisms of the plane and triangle flips s.t.\ all intermediate arrangements are approaching.
\end{theorem} \begin{proof} Let $A_0$ be a fixed simple arrangement of~$n$ lines. We show that any approaching arrangement $A$ can be transformed into $A_0$ with the given operations. Since the operations are invertible, this is enough to prove that we can get from any arrangement $A$ to any other arrangement $B$ via $A_0$. Consider a vertical line $v$ in $A$ such that all the crossings of $A$ are to the right of $v$ and replace the part of the pseudo-lines of $A$ left of $v$ by rays with the slopes of the corresponding lines of $A_0$. This replacement is covered by Lemma~\ref{lem_transform_sweep_equivalent}. Let $v_0$ be a vertical line in $A_0$ which has all the crossings of~$A_0$ to the left. Now we vertically shift the pseudo-lines of $A$ to make their intersections with $v$ an identical copy of their intersections with $v_0$. During the shifting we have a continuous family of approaching arrangements which can be described by homeomorphisms of the plane and triangle flips. At the end the order of the intersections on $v$ is completely reversed, and all the crossings are left of $v$, where the pseudo-lines are straight and use the slopes of $A_0$. It remains to replace the part of the pseudo-lines of $A$ to the right of $v$ by rays with the slopes of the corresponding lines of $A_0$. \end{proof} Note that the proof requires the arrangement to be simple. Vertical translations of pseudo-lines now allow us to prove a restricted version of our motivating question. \begin{theorem}\label{thm_bichromatic} An arrangement of approaching red and blue pseudo-lines contains a triangular cell that is bounded by both a red and a blue pseudo-line unless it is a pencil, i.e., all the pseudo-lines cross in a single point. \end{theorem} \begin{proof} By symmetry in color and direction we may assume that there is a crossing of two blue pseudo-lines above a red pseudo-line. Translate all the red pseudo-lines upwards with the same speed. Consider the first moment $t>0$ when the isomorphism class changes.
This happens when a red pseudo-line moves over a blue crossing, or a red crossing is moved over a blue pseudo-line. In both cases the three pseudo-lines have determined a bichromatic triangular cell of the original arrangement. Now consider the case that at time $t$ parallel segments of different color are concurrent. In this case we argue as follows. Consider the situation at time $\varepsilon>0$ right after the start of the motion. Now every multiple crossing is monochromatic and we can use an argument as in the proof of Lemma~\ref{lem_strictly} to get rid of parallel segments of different colors. Continuing the translation after the modification reveals a bichromatic triangle as before. \end{proof} \section{Levi's lemma for approaching arrangements} Proofs for showing that well-known properties of line arrangements generalize to pseudo-line arrangements often use Levi's enlargement lemma. (For example, Goodman and Pollack~\cite{helly_type} give generalizations of Radon's theorem, Helly's theorem, etc.) Levi's lemma states that a pseudo-line arrangement can be augmented by a pseudo-line through any pair of points. In this section, we show that we can add a pseudo-line while maintaining the property that all pseudo-lines of the arrangement are approaching. \begin{lemma}\label{lem_combination} Given an arrangement of approaching pseudo-lines containing two pseudo-lines $l_i$ and $l_{i+1}$ (each a function $\mathbb{R} \mapsto \mathbb{R}$), consider $l' = l'(x) = \lambda l_i(x) + (1-\lambda) l_{i+1}(x)$, for some $0 \leq \lambda \leq 1$. The arrangement augmented by $l'$ is still an arrangement of approaching pseudo-lines. \end{lemma} \begin{proof} Consider any pseudo-line $l_j$ of the arrangement, $j \leq i$. We know that for $x_2 > x_1$, $l_j(x_1) - l_i(x_1) \geq l_j(x_2) - l_i(x_2)$, whence $\lambda l_j(x_1) - \lambda l_i(x_1) \geq \lambda l_j(x_2) - \lambda l_i(x_2)$.
Similarly, we have $(1-\lambda)l_j(x_1) - (1-\lambda) l_{i+1}(x_1) \geq (1-\lambda) l_j(x_2) - (1-\lambda) l_{i+1}(x_2)$. Adding these two inequalities, we get \[ l_j(x_1) - l'(x_1) \geq l_j(x_2) -l'(x_2) \enspace . \] An analogous argument works for any $j \geq i+1$. \end{proof} The lemma gives us a means of producing a convex combination of two approaching pseudo-lines with adjacent slopes. Note that the adjacency of the slopes was necessary in the above proof. \begin{lemma}\label{lem_above} Given an arrangement of $n$ approaching pseudo-lines, we can add a pseudo-line $l_{n+1} = l_{n+1}(x) = l_n(x) + \delta (l_{n}(x) - l_{n-1}(x))$ for any $\delta > 0$ and still have an approaching arrangement. \end{lemma} \begin{proof} For $x_2 > x_1$ we have \[ l_n(x_1) - l_{n+1}(x_1) = l_n(x_1) - l_n(x_1) - \delta (l_{n}(x_1) - l_{n-1}(x_1)) = \delta(l_{n-1}(x_1) - l_{n}(x_1)) \]\vskip-6mm \[ \geq \delta(l_{n-1}(x_2) - l_{n}(x_2)) = l_n(x_2) - l_{n+1}(x_2)\enspace . \] With $l_j(x_1) - l_{n}(x_1) \geq l_j(x_2) - l_{n}(x_2)$ we also get $l_j(x_1) - l_{n+1}(x_1) \geq l_j(x_2) - l_{n+1}(x_2)$ for all $1\leq j < n$. \end{proof} \begin{theorem}\label{thm_approaching_levi} Given an arrangement of strictly approaching pseudo-lines and two points $p$ and $q$ with different $x$-coordinates, the arrangement can be augmented by a pseudo-line $l'$ containing $p$ and $q$ to an arrangement of approaching pseudo-lines. Further, if $p$ and $q$ do not have the same vertical distance to a pseudo-line of the initial arrangement, then the resulting arrangement is strictly approaching. \end{theorem} \begin{proof} Let $p$ have smaller $x$-coordinate than $q$. Vertically translate all pseudo-lines such that they pass through $p$ (the pseudo-lines remain strictly approaching, forming a pencil through~$p$). If there is a pseudo-line that also passes through $q$, we add a copy $l'$ of it.
If $q$ is between $l_i$ and $l_{i+1}$, then we find some $0<\lambda<1$ such that $l'(x) = \lambda l_i(x) + (1-\lambda) l_{i+1}(x)$ contains $p$ and $q$. By Lemma~\ref{lem_combination} we can add $l'$ to the arrangement. If $q$ is above or below all pseudo-lines in the arrangement, we can use Lemma~\ref{lem_above} to add a pseudo-line; we choose $\delta$ large enough such that the new pseudo-line contains $q$. Finally translate all pseudo-lines back to their initial position. This yields an approaching extension of the original arrangement with a pseudo-line containing $p$ and $q$. Observe that the arrangement is strictly approaching unless the new pseudo-line $l'$ was chosen as a copy of an existing pseudo-line. \end{proof} Following Goodman et al.~\cite{spread}, a \emph{spread of pseudo-lines} in the Euclidean plane is an infinite family of simple curves such that \begin{enumerate} \item each curve is asymptotic to some line at both ends, \item every two curves intersect at one point, at which they cross, and \item there is a bijection $L$ from the unit circle $C$ to the family of curves such that $L(p)$ is a continuous function (under the Hausdorff metric) of $p \in C$. \end{enumerate} It is known that every projective arrangement of pseudo-lines can be extended to a spread~\cite{spread} (see also \cite{topological_plane}). For Euclidean arrangements this is not true because condition 1 may fail (for an example take the parabolas $(x-i)^2$ as pseudo-lines). However, given a Euclidean arrangement $A$ we can choose two vertical lines $v_-$ and $v_+$ such that all the crossings are between $v_-$ and $v_+$ and replace the extensions beyond the vertical lines by appropriate rays. The result of this procedure is called the \emph{truncation} of $A$. Note that the truncation of $A$ and $A$ are $x$-isomorphic, and if $A$ is approaching then so is the truncation. We use Lemma~\ref{lem_combination} to show the following.
\begin{theorem} The truncation of every approaching arrangement of pseudo-lines can be extended to a spread of pseudo-lines and a single vertical line such that the non-vertical pseudo-lines of that spread are approaching. \end{theorem} \begin{proof} Let $l_1, \dots, l_n$ be the pseudo-lines of the truncation of an approaching arrangement. Add two almost vertical straight lines $l_0$ and $l_{n+1}$ such that the slope of the line connecting two points on a pseudo-line $l_i$ is between the slopes of $l_0$ and $l_{n+1}$. The arrangement with pseudo-lines $l_0,l_1, \dots, l_n,l_{n+1}$ is still approaching. Initialize $S$ with these $n+2$ pseudo-lines. For each $0\leq i \leq n$ and each $\lambda \in (0,1)$ add the pseudo-line $\lambda l_i(x) + (1-\lambda) l_{i+1}(x)$ to $S$. The proof of Lemma~\ref{lem_combination} implies that any two pseudo-lines in $S$ are approaching. Finally, let~$p$ be the intersection point of $l_0$ and $l_{n+1}$ and add all the lines containing $p$ and some point above these two lines to $S$. This completes the construction of the spread $S$. \end{proof} \section{Approaching generalized configurations} Levi's lemma is the workhorse in the proofs of many properties of pseudo-line arrangements. Among these, there is the so-called \emph{double dualization} by Goodman and Pollack~\cite{semispaces} that creates, for any arrangement of pseudo-lines, a corresponding primal generalized configuration of points. A \emph{generalized configuration of points} is an arrangement of pseudo-lines with a specified set of $n$ vertices, called \emph{points}, such that any pseudo-line passes through two points, and, at each point, $n-1$ pseudo-lines cross. We assume for simplicity that there are no other vertices in which more than two pseudo-lines of the arrangement cross.
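For illustration, a set of points in general position together with its straight connecting lines is a generalized configuration of points: for $P=\{p_1,\ldots,p_n\}$ we take \[ \mathcal{A}=\{\,p_{ij} : 1\leq i<j\leq n\,\}, \qquad p_{ij}=\text{the line through $p_i$ and $p_j$}, \qquad |\mathcal{A}|=\binom{n}{2}\enspace . \] Every line passes through exactly two points of $P$, at each point $p_k$ the $n-1$ lines $p_{kj}$, $j\neq k$, cross, and general position guarantees that no further vertex lies on more than two lines.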
Let $\mathcal{C} = (\mathcal{A}, P)$ be a generalized configuration of points consisting of an approaching arrangement $\mathcal{A}$, and a set of points $P = \{p_1, \dots, p_n\}$, which are labeled by increasing $x$-coordinate. We denote the pseudo-line of $\mathcal{A}$ connecting points $p_i, p_j \in P$ by $p_{ij}$. Consider a point moving from top to bottom at left infinity. This point traverses all the pseudo-lines of $\mathcal{A}$ in some order. We claim that if we start at the top with the identity permutation $\pi = (1, \dots, n)$, then, when passing $p_{ij}$ we can apply the (adjacent) transposition $(i,j)$ to $\pi$. Moreover, by recording all the permutations generated during the move of the point we obtain an allowable sequence $\Pi_{\mathcal{C}}$. Consider the complete graph $K_P$ on the set $P$. Let $c$ be an unbounded cell of the arrangement $\mathcal{A}$; choosing $c$ as the north-face of $\mathcal{A}$, we get a left-to-right orientation on each $p_{ij}$. Let this induce the orientation of the edge $\{i,j\}$ of $K_P$. These orientations constitute a tournament on $P$. It is easy to verify that this tournament is acyclic, i.e., it induces a permutation $\pi_c$ on $P$. \begin{itemize} \item The order $\pi$ corresponding to the top cell equals the left-to-right order on $P$. Since we have labeled the points by increasing $x$-coordinate this is the identity. \item When traversing $p_{ij}$ to get from a cell $c$ to an adjacent cell $c'$ the two orientations of the complete graph only differ in the orientation of the edge $\{i,j\}$. Hence, $\pi_c$ and $\pi_{c'}$ are related by the adjacent transposition $(i,j)$. \end{itemize} The allowable sequence $\Pi_{\mathcal{C}}$ and the allowable sequence of $\mathcal{A}$ are different objects; they differ even in the length of the permutations. We say that an arrangement of pseudo-lines is \emph{dual} to a (\emph{primal}) generalized configuration of points if they have the same allowable sequence.
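To illustrate the construction of $\Pi_{\mathcal{C}}$, consider three points $p_1,p_2,p_3$ in general position, labeled by increasing $x$-coordinate, together with the three connecting pseudo-lines $p_{12}$, $p_{13}$, $p_{23}$. If the sweeping point traverses them in the order $p_{12}$, $p_{13}$, $p_{23}$ (the other possible order passes through the permutation $(1,3,2)$ instead), we record \[ \Pi_{\mathcal{C}}:\quad (1,2,3)\;\xrightarrow{(1,2)}\;(2,1,3)\;\xrightarrow{(1,3)}\;(2,3,1)\;\xrightarrow{(2,3)}\;(3,2,1)\enspace . \] Each transposition is adjacent in the permutation it is applied to, and the sequence ends in the reversed permutation, since every pair of points is swapped exactly once.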
Goodman and Pollack~\cite{semispaces} showed that for every pseudo-line arrangement there is a primal generalized configuration of points, and vice versa. We prove the same for the sub-class of approaching arrangements. \begin{lemma}\label{lem_dualize} For every generalized configuration $\mathcal{C} = (\mathcal{A}, P)$ of points on an approaching arrangement $\mathcal{A}$, there is an approaching arrangement ${A}^*$ with allowable sequence~$\Pi_{\mathcal{C}}$. \end{lemma} \begin{proof} Let $\Pi_{\mathcal{C}} = \pi_0,\pi_1,\ldots,\pi_h$. We call $(i,j)$ the adjacent \emph{transposition at $g$} when $\pi_g=(i,j)\circ\pi_{g-1}$. To produce a polygonal approaching arrangement~${A}^*$ we define the $y$-coordinates of the pseudo-lines $\ell_1,\ldots,\ell_n$ at the $x$-coordinates $g\in[h]$. Let $(i,j)$ be the transposition at $g$. Consider the pseudo-line $p_{ij}$ of $\mathcal{C}$. Since $p_{ij}$ is $x$-monotone we can evaluate $p_{ij}(x)$. The $y$-coordinate of the pseudo-line $\ell_k$ dual to the point $p_k=(x_k,y_k)$ at $x=g$ is obtained as $y_{g}(k) = p_{ij}(x_k)$. We argue that the resulting pseudo-line arrangement is approaching. Let $(i,j)$ and $(s,t)$ be transpositions at $g$ and $g'$, respectively, and assume $g < g'$. We have to show that $y_{g}(a) - y_{g}(b) \geq y_{g'}(a) - y_{g'}(b)$, for all $1\leq a < b \leq n$. From $a < b$ it follows that $p_a$ is left of $p_b$, i.e., $x_a < x_b$. Since $p_{ij}$ and $p_{st}$ are approaching, we get $p_{ij}(x_a) - p_{st}(x_a) \geq p_{ij}(x_b) - p_{st}(x_b)$, i.e., $p_{ij}(x_a) - p_{ij}(x_b) \geq p_{st}(x_a) - p_{st}(x_b)$, which translates to $y_{g}(a) - y_{g}(b) \geq y_{g'}(a) - y_{g'}(b)$. This completes the proof. \end{proof} Goodman and Pollack use the so-called \emph{double dualization} to show how to obtain a primal generalized configuration of points for a given arrangement ${A}$ of pseudo-lines. In this process, they add a pseudo-line through each pair of crossings in ${A}$, using Levi's enlargement lemma.
This results in a generalized configuration $\mathcal{C}'$ of points, where the points are the crossings of ${A}$. From this, they produce the dual pseudo-line arrangement $\mathcal{A}'$. Then, they repeat the previous process for $\mathcal{A}'$ (that is, adding a line through all pairs of crossings of~$\mathcal{A}'$). The result is a generalized configuration $\mathcal{C}$ of points, which they show to be the primal generalized configuration of~$\mathcal{A}$. With Theorem~\ref{thm_approaching_levi} and Lemma~\ref{lem_dualize}, we know that both the augmentation through pairs of crossings and the dualization process can be done such that we again have approaching arrangements, yielding the following result. \begin{lemma}\label{lem_primalize} For every arrangement of approaching pseudo-lines, there is a primal generalized configuration of points whose arrangement is also approaching. \end{lemma} Combining Lemmas~\ref{lem_dualize} and~\ref{lem_primalize}, we obtain the main result of this section. \begin{theorem} An allowable sequence is the allowable sequence of an approaching generalized configuration of points if and only if it is the allowable sequence of an approaching arrangement. \end{theorem} \section{Realizability and counting}\label{sec_properties} Considering the freedom one has in constructing approaching arrangements, one may wonder whether actually all pseudo-line arrangements are $x$-isomorphic to approaching arrangements. As we will see in this section, this is not the case. We use the following lemma, which can easily be shown using the construction for Lemma~\ref{lem_polygonal}. \begin{lemma}\label{lem_three_like_lines} Given a simple suballowable sequence of permutations $({\sf id}, \pi_1, \pi_2)$, where ${\sf id}$ is the identity permutation, the suballowable sequence is realizable with an arrangement of approaching pseudo-lines if and only if it is realizable as a line arrangement.
\end{lemma} \begin{proof} Consider any realization~$A$ of the simple suballowable sequence with an arrangement of approaching pseudo-lines. Since the arrangement is simple, we can consider the pseudo-lines as being strictly approaching, due to Lemma~\ref{lem_strictly}. There exist two vertical lines $v_1$ and $v_2$ s.t.\ the order of intersections of the pseudo-lines with them corresponds to $\pi_1$ and $\pi_2$, respectively. We claim that by replacing each pseudo-line $p_i\in A$ by the line $\ell_i$ connecting the points $(v_1,p_i(v_1))$ and $(v_2,p_i(v_2))$, we obtain a line arrangement representing the suballowable sequence $({\sf id}, \pi_1, \pi_2)$. To prove the claim we verify that for $i < j$ the slope of $\ell_i$ is less than the slope of $\ell_j$. Since $A$ is approaching we have $p_i(v_1) - p_j(v_1) \geq p_i(v_2) - p_j(v_2)$, i.e., $p_i(v_1) - p_i(v_2) \geq p_j(v_1) - p_j(v_2)$. The slopes of $\ell_i$ and $\ell_j$ are obtained by dividing both sides of this inequality by $v_1-v_2$, which is negative. \end{proof} Asinowski~\cite{sub_allowable} identified a suballowable sequence $({\sf id}, \pi_1, \pi_2)$ with permutations of six elements which is not realizable with an arrangement of lines. \begin{cor}\label{cor_no_suballowable} There exist simple suballowable sequences that are not realizable by arrangements of approaching pseudo-lines. \end{cor} With the modification of Asinowski's example shown in \figurename~\ref{fig_non_realizable}, we obtain an arrangement that has no isomorphic approaching arrangement. The modification adds two almost-vertical lines crossing in the north-cell s.t.\ they form a wedge crossed by the lines of Asinowski's example in the order of $\pi_1$. We do the same for $\pi_2$. The resulting object is a simple pseudo-line arrangement, and each isomorphic arrangement contains Asinowski's sequence.
\begin{figure} \centering \includegraphics{non_realizable} \caption{A part of a six-element pseudo-line arrangement (bold) whose suballowable sequence (indicated by the vertical lines) is non-realizable (adapted from~\cite[Fig.~4]{sub_allowable}). Adding the two thin pseudo-lines crossing in the vicinity of the vertical line crossed by the pseudo-lines in the order of $\pi_1$ and doing the same for $\pi_2$ enforces that the allowable sequence of any isomorphic arrangement contains the subsequence $({\sf id}, \pi_1, \pi_2)$.} \label{fig_non_realizable} \end{figure} \begin{cor} There are pseudo-line arrangements for which there exists no isomorphic arrangement of approaching pseudo-lines. \end{cor} Aichholzer et al.~\cite{monotone_paths} construct a suballowable sequence $({\sf id}, \pi_1, \pi_2)$ on $n$ lines s.t.\ all line arrangements realizing it require slope values that are exponential in the number of lines. Thus, the vertex coordinates in a polygonal representation as an approaching arrangement are also exponential in $n$. Ringel's Non-Pappus arrangement~\cite{ringel} shows that there are allowable sequences that are not realizable by straight lines. It is not hard to show that the Non-Pappus arrangement has a realization with approaching pseudo-lines. We will show that in fact the number of approaching arrangements is asymptotically larger than the number of arrangements of lines. \begin{theorem}\label{thm_number} There exist $2^{\Theta(n^2)}$ isomorphism classes of simple arrangements of $n$ approaching pseudo-lines. \end{theorem} \begin{proof} The upper bound follows from the number of non-isomorphic arrangements of pseudo-lines. Our lower-bound construction is an adaptation of the construction presented by Matou\v{s}ek~\cite[p.~134]{Matousek} for general pseudo-line arrangements. See the left part of \fig{fig_lower_bound} for a sketch of the construction. We start with a construction containing parallel lines that we will later perturb.
Consider a set $V$ of vertical lines $v_i : x = i$, for $i \in [n]$. Add horizontal pseudo-lines $h_i : y = i^2$, for $i \in [n]$. Finally, add parabolic curves $p_i : y = (x + i)^2 - \varepsilon$, defined for $x \geq 0$, some $0 < \varepsilon \ll 1$, and $i \in [n]$ (we will add the missing part towards left infinity later). Now, $p_i$ passes slightly below the crossing of $h_{i+j}$ and $v_j$ at $(j,(i+j)^2)$. We may modify $p_i$ to pass above the crossing at $(j,(i+j)^2)$ by replacing a piece of the curve near this point by a line segment with slope $2(i+j)$; see the right part of \fig{fig_lower_bound}. Since the derivatives of the parabolas are increasing and the derivatives of $p_{i+1}$ at $j - 1$ and of $p_{i-1}$ at $j + 1$ are both $2(j+i)$, the vertical distances from the modified $p_i$ to $p_{i+1}$ and $p_{i-1}$ remain increasing, i.e., the arrangement remains approaching. For each crossing $(j,(i+j)^2)$, we may now independently decide whether we want $p_i$ to pass above or below the crossing. The resulting arrangement contains parallel and vertical lines, but no three pseudo-lines pass through a common point. This means that we can slightly perturb the horizontal and vertical lines s.t.\ the crossings of a horizontal and a vertical line remain in the vicinity of the original crossings, but no two lines are parallel, and no line is vertical. To finish the construction, we add rays from the points on $p_i$ with $x=0$, each having the slope of $p_i$ at $x=0$. Each arrangement of the resulting class of arrangements is approaching. We have $\Theta(n^2)$ crossings for which we make independent binary decisions. Hence the class consists of $2^{\Theta(n^2)}$ approaching arrangements of $3n$ pseudo-lines.
\end{proof} \begin{figure} \centering \includegraphics{lower_bound} \caption{A construction for a $2^{\Omega(n^2)}$ lower bound on the number of isomorphism classes of approaching arrangements.} \label{fig_lower_bound} \end{figure} As there are only $2^{\Theta(n \log n)}$ isomorphism classes of simple line arrangements~\cite{upper_bounds_configurations}, we see that there are considerably more arrangements of approaching pseudo-lines. The number of allowable sequences is $2^{\Theta(n^2 \log n)}$~\cite{stanley}. We show next that despite the existence of nonrealizable suballowable sequences (Corollary~\ref{cor_no_suballowable}), the number of allowable sequences for approaching arrangements, i.e., the number of $x$-isomorphism classes of these arrangements, is asymptotically the same as the number of all allowable sequences. \begin{theorem} There are $2^{\Theta(n^2 \log n)}$ allowable sequences realizable as arrangements of approaching pseudo-lines. \end{theorem} \begin{proof} The upper bound follows from the number of allowable sequences. For the lower bound, we use the construction in the proof of Theorem~\ref{thm_number}, but omit the vertical lines. Hence, we have the horizontal pseudo-lines $h_i : y = i^2$ and the parabolic curves $p_i : y = (x + i)^2 - \varepsilon$, defined for $x \geq 0$ and $0 < \varepsilon \ll 1$. For a parabolic curve $p_i$ and a horizontal line $h_{i+j}$, consider the neighborhood of the point $(j,(i+j)^2)$. Given a small value $\alpha$ we can replace a piece of $p_i$ by the appropriate line segment of slope $2(i+j)$ such that the crossing of $h_{i+j}$ and the modified $p_i$ has $x$-coordinate $j-\alpha$. For fixed $j$ and any permutation $\pi$ of $[n-j]$ we can define values $\alpha_i$ for $i \in [n-j]$ such that $\alpha_{\pi(1)} < \alpha_{\pi(2)} < \ldots < \alpha_{\pi(n-j)}$.
Choosing the offset values $\alpha_i$ according to different permutations $\pi$ yields different vertical permutations in the neighborhood of $x=j$, i.e., the allowable sequences of the arrangements differ. Hence, the number of allowable sequences of approaching arrangements is at least the superfactorial $\prod_{j=1}^{n} j!$, which is in $2^{\Omega(n^2 \log n)}$. \end{proof} We have seen that some properties of arrangements of lines are inherited by approaching arrangements. It is known that every simple arrangement of $n$ pseudo-lines has at least $n-2$ triangles, and the same is true for non-simple non-trivial arrangements of lines; however, there are non-simple non-trivial arrangements of pseudo-lines with fewer triangles, see~\cite{felsner_kriegel}. We conjecture that in this context approaching arrangements behave like line arrangements. \begin{conjecture} Every non-trivial arrangement of $n$ approaching pseudo-lines has at least $n-2$ triangles. \end{conjecture} \section{Higher dimensions}\label{sec_higher} An \emph{arrangement of pseudo-hyperplanes} in $\mathbb{R}^d$ is a finite set $A$ of hypersurfaces, each homeomorphic to $\mathbb{R}^{d-1}$, with the property that any $k\leq d$ of them intersect as $k$ hyperplanes (no two of them parallel) do. More formally, for any $h_1,\ldots,h_k\in A$, $k\leq d$, the cell complex induced by $h_1,\ldots,h_k$ is isomorphic to the cell complex of ${\bf e}^\perp_1,\ldots,{\bf e}^\perp_k$ where ${\bf e}^\perp_i$ is the hyperplane whose normal is the $i$th vector ${\bf e}_i$ of the standard basis. We focus on arrangements of \emph{pseudo-planes} in $\mathbb{R}^3$. We define arrangements of approaching pseudo-planes via one of the key properties observed for arrangements of approaching pseudo-lines (Observation~\ref{obs:main}).
An \emph{arrangement of approaching pseudo-planes} in $\mathbb{R}^3$ is an arrangement of pseudo-planes $h_1,\ldots,h_n$ where each pseudo-plane $h_i$ is the graph of a continuously differentiable function $f_i: \mathbb{R}^2\mapsto \mathbb{R}$ such that for any $c_1,\ldots,c_n\in \mathbb{R}$, the graphs of $f_1+c_1,\ldots, f_n+c_n$ form a valid arrangement of pseudo-planes. This means that we can move the pseudo-planes up and down along the $z$-axis while maintaining the properties of a pseudo-plane arrangement. Clearly, arrangements of planes (without parallels) are approaching. Consider an arrangement of approaching pseudo-lines with pseudo-lines given by continuously differentiable functions. The condition that $f_1(x)-f_2(x)$ is strictly monotonically decreasing implies that for any $x$ the slope of $f_1(x)$ is at most the slope of $f_2(x)$, where at isolated points they might be equal, e.g., at $x=0$ for $f_1(x)=0$ and $f_2(x)=x^3$. In other words, for each $x$ the identity permutation is the sorted order of the slopes of the tangents at $x$. Note that we may think of permutations as labeled Euclidean order types in one dimension. In this section we show an analogous characterization of approaching arrangements of pseudo-planes: the two-dimensional order type associated to the tangent planes above a point $(x,y)$ is the same except for a sparse set of exceptional points where the order type may degenerate. Let $G$ be a collection of graphs of continuously differentiable functions $f_i: \mathbb{R}^2\mapsto \mathbb{R}$. For any point $(x,y)$ in $\mathbb{R}^2$, let $n_i(x,y)$ be the upwards normal vector of the tangent plane of $f_i$ above $(x,y)$. We consider the vectors $n_i(x,y)$ as points $p_i(x,y)$ in the plane with homogeneous coordinates. (That is, for each vector we consider the intersection of its ray with the plane $z=1$.) We call $p_i(x,y)$ a \emph{characteristic point} and let $P_G(x,y)$ be the set of characteristic points.
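For instance, if each $f_i(x,y)=a_ix+b_iy+c_i$ is an affine function, then the tangent plane above every point is the graph of $f_i$ itself, and \[ n_i(x,y)=(-a_i,\,-b_i,\,1),\qquad p_i(x,y)=(-a_i,\,-b_i)\enspace , \] so all characteristic points are independent of $(x,y)$: for an arrangement of planes the order type of the characteristic points is the same above every point of $\mathbb{R}^2$.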
The Euclidean order type of the point multiset $P_G(x,y)$ is the \emph{characteristic order type} of $G$ at $(x,y)$; it is denoted by $\chi_G(x,y)$. We denote by $\chi_G$ the set of characteristic order types of $G$ on the whole plane, that is, $\chi_G=\{\chi_G(x,y) \mid (x,y)\in\mathbb{R}^2\}$. We say that $\chi_G$ is \emph{admissible} if the following conditions hold: \begin{enumerate} \item[(1)] for any two points $(x_1,y_1)$ and $(x_2,y_2)$ in the plane, we have that if an ordered triple of characteristic points in $P_G(x_1, y_1)$ is positively oriented, then the corresponding triple in $P_G(x_2,y_2)$ is either positively oriented or collinear; \item[(2)] for any triple $p_1,p_2,p_3$ of characteristic points, the set of points in the plane for which $p_1, p_2, p_3$ are collinear is either the whole plane or a discrete set of points (i.e., for each $(x,y)$ in this set there is some $\varepsilon >0$ such that the $\varepsilon$-disc around $(x,y)$ contains no further point of the set); \item[(3)] for any pair $p_1,p_2$ of characteristic points, the set of points in the plane for which $p_1=p_2$ has dimension 0 or 1 (this implies that for each $(x,y)$ in this set and each $\varepsilon >0$ the $\varepsilon$-disc around $(x,y)$ contains points which are not in the set). \end{enumerate} From the above conditions, we deduce another technical but useful property of admissible characteristic order types. \begin{lemma}\label{lem:2b} Let $\chi_G$ be admissible and $|G|\geq 3$. For any pair $p_1,p_2$ of characteristic points and for every point $(x_0,y_0)$ in the plane for which $p_1=p_2$ there is a neighborhood $N$ such that for $V =\{p_2(x,y)-p_1(x,y) : (x,y)\in N\}$, the positive hull of $V$ contains no line. \end{lemma} \begin{proof} Choose $p_3$ such that $p_3(x_0,y_0) \neq p_1(x_0,y_0) = p_2(x_0,y_0)$. In a small neighborhood $N$ of $(x_0,y_0)$ the point $p_3$ will stay away from the line spanned by $p_1$ and $p_2$ (continuity).
If in $N$ the positive hull of $V$ contains a line, then the orientation of $p_1,p_2,p_3$ changes from positive to negative in $N$, contradicting condition~(1) of admissible characteristic order types. \end{proof} \begin{theorem} Let $G$ be a collection of graphs of continuously differentiable functions $f_i: \mathbb{R}^2\mapsto \mathbb{R}$. Then $G$ is an arrangement of approaching pseudo-planes if and only if $\chi_G$ is admissible and all the differences between two functions are surjective. \end{theorem} \begin{proof} Note that being surjective is a necessary condition for the difference of two functions, as otherwise we can translate them until they do not intersect. Thus, in the following, we will assume that all the differences between two functions are surjective. We first show that if $\chi_G$ is admissible then $G$ is an arrangement of approaching pseudo-planes. Suppose $G$ is not an arrangement of approaching pseudo-planes. Suppose first that there are two functions $f_1$ and $f_2$ in $G$ whose graphs do not intersect in a single pseudo-line. Assume without loss of generality that $f_1=0$, i.e., $f_1$ is the constant zero function. Let $f_1\cap f_2$ denote the intersection of the graphs of $f_1$ and $f_2$. If the intersection has a two-dimensional component, the normal vectors of the two functions are the same for any point in the relative interior of this component, which contradicts condition~(3), so from now on, we assume that $f_1\cap f_2$ is at most one-dimensional. Also, note that due to the surjectivity of $f_2-f_1$, the intersection $f_1\cap f_2$ is not empty. Note that if $f_1\cap f_2$ is a single pseudo-line then for every $r\in f_1\cap f_2$ there exists a neighborhood $N$ in $f_1$ such that $f_1\cap f_2\cap N$ is a pseudo-segment. Further, on one side of the pseudo-segment, $f_1$ is below $f_2$, and above on the other, as otherwise we would get a contradiction to Lemma~\ref{lem:2b}.
In the next two paragraphs we argue that indeed $f_1\cap f_2$ is a single pseudo-line. In paragraph (a) we show that for every $r\in f_1\cap f_2$ the intersection locally is a pseudo-segment; in (b) we show that $f_1\cap f_2$ contains no cycle and that $f_1\cap f_2$ has a single connected component. (a) Suppose for the sake of contradiction that $f_1\cap f_2$ contains a point $r$ such that for every neighborhood $N$ of $r$ in $f_1$ we have that $f_1\cap f_2\cap N$ is not a pseudo-segment. For $\varepsilon >0$ let $N_\varepsilon$ be the $\varepsilon$-disc around $r$. Consider $\varepsilon$ small enough such that $f_1\cap f_2\cap N_\varepsilon$ consists of a single connected component. Further, let $\varepsilon$ be small enough such that whenever we walk away from $r$ in a component where $f_2$ is above (below) $f_1$, the difference $f_2-f_1$ is monotonically increasing (decreasing). The existence of such an $\varepsilon$ follows from the fact that $f_1$ and $f_2$ are graphs of continuously differentiable functions. Then $f_1\cap f_2$ partitions $N_\varepsilon$ into several connected components $C_1,\ldots, C_m$, ordered in clockwise order around $r$. In each of these components, $f_2$ is either above or below $f_1$, and this sidedness is different for any two neighboring components. In particular, the number of components is even, that is, $m=2k$, for some natural number $k$. We will distinguish the cases where $k$ is even and odd, and in both cases we will first show that at $r$ we have $p_1=p_2$ and then apply Lemma~\ref{lem:2b}. We start with the case where $k$ is even. Consider a differentiable path $\gamma$ starting in $C_i$, passing through $r$ and ending in $C_{k+i}$. As $k$ is even, $f_2$ is above $f_1$ in $C_i$ if and only if $f_2$ is also above $f_1$ in $C_{k+i}$. In particular, the directional derivative of $f_2-f_1$ for $\gamma$ at $r$ is $0$. This holds for every choice of $i$ and $\gamma$, thus at $r$ all directional derivatives of $f_2-f_1$ vanish. 
This implies that at $r$ the normal vectors of $f_1$ and $f_2$ coincide; hence $p_1=p_2$. Now, consider the boundary of $C_i$. Walking along this boundary, $f_2-f_1$ is the constant zero function, and thus the directional derivatives vanish. Hence, at any point on this boundary, $p_2-p_1$ must be orthogonal to the boundary, pointing away from $C_i$ if $f_2$ is above $f_1$ in $C_i$, and into $C_i$ otherwise. Let now $a$ and $b$ be the intersections of the boundary of $C_i$ with the boundary of $N_\varepsilon$. The above argument gives us two directions of vectors, $p_2(a)-p_1(a)$ and $p_2(b)-p_1(b)$, and a set of possible directions of vectors $p_2(c)-p_1(c)$, $c\in C_i$, between them. By continuity, all of these directions must be taken somewhere in $C_i$ (see Figure \ref{fig_directions_existence} for an illustration). Let now $C_+$ be the set of all components where $f_2$ is above $f_1$, and let $D_+$ be the set of all directions of vectors $p_2(c)-p_1(c)$, $c\in C_+$. Further, let $V_+$ be the set of rays emanating from $r$ which are completely contained in $C_+$. By continuity, for every small enough $\varepsilon$, there are two rays in $V_+$ which together span a line. It now follows from the above arguments that for these $\varepsilon$, the directions in $D_+$ also positively span a line. This is a contradiction to Lemma~\ref{lem:2b}. \begin{figure} \centering \includegraphics{directions_existence} \caption{A component $C_i$ induces many directions of $p_2-p_1$.} \label{fig_directions_existence} \end{figure} Let us now consider the case where $k$ is odd. Consider the boundary between $C_{2k}$ and $C_1$ and denote it by $\gamma_1$. Similarly, let $\gamma_2$ be the boundary between $C_k$ and $C_{k+1}$. Let now $\gamma$ be the path defined by the union of $\gamma_1$ and $\gamma_2$ and consider the vectors $p_2-p_1$ when walking along $\gamma$. Assume without loss of generality that $C_1\in C_+$, and thus $C_{2k}, C_{k+1}\in C_-$ and $C_{k}\in C_+$.
Analogously to the arguments in the above case, along $\gamma$ the vectors $p_2-p_1$ are orthogonal to $\gamma$, pointing from $C_+$ into $C_-$. In particular, they always point to the same side of $\gamma$. However, at $r$ the path $\gamma$ is also incident to $C_2\in C_-$ and to $C_{k+2}\in C_+$. The same argument now shows that at $r$, the vector $p_2(r)-p_1(r)$ must point from $C_{k+2}$ into $C_2$, that is, into the other side of $\gamma$. This is only possible if $p_2(r)-p_1(r)=0$, and thus, as claimed, we again have $p_1=p_2$ at $r$. We can now again consider the set of directions $D_+$, and this time, for every small enough $\varepsilon$, the set $D_+$ is the set of all possible directions (see Figure \ref{fig_directions_span} for an illustration), which is again a contradiction to Lemma~\ref{lem:2b}. This concludes the proof of claim (a). \begin{figure} \centering \includegraphics{directions_span} \caption{$D_+$ spans a line for $k$ even (left) and contains all directions for $k$ odd (right).} \label{fig_directions_span} \end{figure} (b) Suppose that the intersection $f_1\cap f_2$ contains a cycle. In the interior of the cycle, one function is above the other, so we can vertically translate it until the cycle contracts to a point, which again leads to a contradiction to Lemma~\ref{lem:2b}. Now suppose that the intersection contains two disjoint pseudo-lines. Between the pseudo-lines, one function is above the other, so we can vertically translate it until the pseudo-lines cross or coincide. If they cross, we are again in the case discussed in (a) and get a contradiction to Lemma \ref{lem:2b}. If they coincide, $f_2-f_1$ has the same sign on both sides of the resulting pseudo-line, which again leads to a contradiction to Lemma \ref{lem:2b}. Thus, we have shown that if $\chi_G$ is admissible then any two pseudo-planes in $G$ intersect in a single pseudo-line.
Now consider three functions $f_1,f_2,f_3$ such that any two intersect in a pseudo-line but the three do not form a pseudo-hyperplane arrangement. Then in one of the three functions, say $f_1$, the two pseudo-lines defined by the intersections with the other two functions do not form an arrangement of two pseudo-lines; after translation, we can assume that they touch at a point or intersect in an interval. First assume that they touch at a point. At this touching point, one of the normal vectors of the tangent planes is a linear combination of the other two: assume again without loss of generality that $f_1=0$. Further assume without loss of generality that the curves $f_2\cap f_1$ and $f_3\cap f_1$ touch at the point $(0,0)$ and that the $x$-axis is tangent to $f_2\cap f_1$ at this point. Then, as the two curves touch, the $x$-axis is also tangent to $f_3\cap f_1$. In particular, the normal vectors to both $f_2$ and $f_3$ lie in the $y$-$z$-plane. As the normal vector to $f_1$ lies on the $z$-axis, the three normal vectors are indeed linearly dependent. For the order type, this now means that one characteristic point is an affine combination of the other two, i.e., the three points are collinear. Further, on one side of the touching point the three points are positively oriented, on the other side they are negatively oriented, which is a contradiction to condition~(1). On the other hand, if they intersect in an interval, then the set of points where the characteristic points are collinear has dimension greater than 0 but is not the whole plane, which is a contradiction to condition~(2). This concludes the proof that if $\chi_G$ is admissible then $G$ is an arrangement of approaching pseudo-planes. For the other direction consider an approaching arrangement of pseudo-planes and assume that $\chi_G$ is not admissible. First, assume that condition~(1) is violated, that is, there are three pseudo-planes $f_1,f_2,f_3$ whose characteristic points $p_1,p_2,p_3$ change their orientation from positive to negative.
In particular, they are collinear at some point. Assume without loss of generality that $f_2$ and $f_3$ are planes containing the origin whose characteristic points are thus constant, and assume without loss of generality that they are $p_2=(0,1)$ and $p_3=(0,-1)$. In particular, the intersection of $f_2$ and $f_3$ is the $x$-axis in $\mathbb{R}^3$. Consider now an $\varepsilon$-disc $B$ around the origin in $\mathbb{R}^2$ and let $B_<$, $B_0$ and $B_>$ be the subsets of $B$ with $x<0$, $x=0$ and $x>0$, respectively. Assume without loss of generality that in $B$ the characteristic point $p_1$ is to the left of the $y$-axis in $B_<$, to the right in $B_>$, and on the $y$-axis in $B_0$. Also, assume that $f_1$ contains the origin in $\mathbb{R}^3$. But then, $f_1$ is below the $(x,y)$-plane everywhere in $B$. In particular, $f_1$ touches $f_2\cap f_3$ in a single point, namely the origin. Hence, $f_1\cap f_3$ and $f_2\cap f_3$ do not form an arrangement of two pseudo-lines in $f_3$. Similar arguments show that \begin{enumerate} \item if condition (2) is violated, then after some translation the intersection of some two pseudo-planes in a third one is an interval; \item if condition (3) is violated, then after some translation the intersection of some two pseudo-planes has a two-dimensional component. \end{enumerate} In both cases, this contradicts the assumption that $G$ is an arrangement of approaching pseudo-planes. \end{proof} On the other hand, from the above it does not follow to what extent an arrangement of approaching pseudo-planes is determined by its admissible family of characteristic order types. In particular, we would like to understand which admissible families of order types correspond to families of characteristic order types. To that end, note that for every graph in an arrangement of approaching pseudo-planes, the characteristic points define a vector field $F_i: \mathbb{R}^2\to \mathbb{R}^2$, namely its gradient vector field (a normal vector can be written as $(\partial f/\partial x, \partial f/\partial y, -1)$).
In particular, the set of all graphs defines a map $\phi(i,x,y)$ with the property that $\phi(i,\cdot,\cdot)=F_i$ and the order type of $\phi(\cdot,x,y)$ is $\chi_G(x,y)$. We call the family of vector fields obtained by this map the \emph{characteristic field} of $G$. A classic result from vector analysis states that a vector field on a simply connected domain, such as $\mathbb{R}^2$, is the gradient vector field of a scalar function if and only if it has no curl. We thus get the following result: \begin{cor}\label{characteristic_field} Let $(F_1,\ldots,F_n)$ be a family of vector fields. Then $(F_1,\ldots,F_n)$ is the characteristic field of an arrangement of approaching pseudo-planes if and only if each $F_i$ is curl-free and for each $(x,y)\in\mathbb{R}^2$, the set of order types defined by $F_1(x,y),\ldots,F_n(x,y)$ is admissible. \end{cor} Let now $G=(g_1,\ldots,g_n)$ be an arrangement of approaching pseudo-planes. A natural question is whether $G$ can be extended, that is, whether we can find a pseudo-plane $g_{n+1}$ such that $(g_1,\ldots,g_n,g_{n+1})$ is again an arrangement of approaching pseudo-planes. Consider the realization of $\chi_G(x,y)$ for some $(x,y)\in\mathbb{R}^2$. Any two points in this realization define a line. Let $\mathcal{A}(x,y)$ be the line arrangement defined by all of these lines. Note that even if $\chi_G(x,y)$ is the same order type for every $(x,y)\in\mathbb{R}^2$, the realization might be different and thus there might be a point $(x',y')\in\mathbb{R}^2$ such that $\mathcal{A}(x',y')$ is not isomorphic to $\mathcal{A}(x,y)$. For an illustration of this issue, see Figure \ref{fig_admissible_cell}. (This issue also comes up in the problem of extension of order types, e.g. in \cite{wheels}, where the authors count the number of order types with exactly one point in the interior of the convex hull.)
\begin{figure} \centering \includegraphics{admissible_cell} \caption{Two different arrangements induced by the same order type.} \label{fig_admissible_cell} \end{figure} We call a cell of $\mathcal{A}(x,y)$ \emph{admissible} if its closure is not empty in $\mathcal{A}(x',y')$ for every $(x',y')\in\mathbb{R}^2$. Clearly, if we can extend $G$ with a pseudo-plane $g_{n+1}$, then the characteristic point $p$ of the normal vector $n_{n+1}(x,y)$ must lie in an admissible cell $c$. On the other hand, as $c$ is admissible, it is possible to move $p$ continuously in $c$, and if all the vector fields $(F_1,\ldots,F_n)$ are curl-free, then so is the vector field $F_{n+1}$ obtained this way. Thus, $F_{n+1}$ is the gradient vector field of a differentiable function $f_{n+1}$ and by Corollary \ref{characteristic_field}, its graph $g_{n+1}$ extends $G$. In particular, $G$ can be extended if and only if $\mathcal{A}(x,y)$ contains an admissible cell. As the cells incident to a characteristic point are always admissible, we get that every arrangement of approaching pseudo-planes can be extended. Furthermore, by the properties of approaching pseudo-planes, $g_{n+1}$ can be chosen to go through any given point $p$ in $\mathbb{R}^3$. In conclusion, we get the following: \begin{theorem} Let $G=(g_1,\ldots,g_n)$ be an arrangement of approaching pseudo-planes and let $p$ be a point in $\mathbb{R}^3$. Then there exists a pseudo-plane $g_{n+1}$ such that $(g_1,\ldots,g_n,g_{n+1})$ is an arrangement of approaching pseudo-planes and $p$ lies on $g_{n+1}$. \end{theorem} On the other hand, it could be that no cell other than the ones incident to a characteristic point is admissible, heavily restricting the choices for $g_{n+1}$. In this case, every pseudo-plane that extends $G$ is essentially a copy of one of the pseudo-planes of $G$. For some order types, there are cells that are not incident to a characteristic point but still appear in every possible realization, e.g.
the unique $5$-gon defined by $5$ points in convex position. It is an interesting open problem to characterize the cells which appear in every realization of an order type. \section{Conclusion} In this paper, we introduced a type of pseudo-line arrangements that generalize line arrangements, but still retain certain geometric properties. One of the main algorithmic open problems is deciding the realizability of a pseudo-line arrangement as an isomorphic approaching arrangement. Further, we do not know how projective transformations influence this realizability. The concept can be generalized to higher dimensions. Apart from the properties we already mentioned in the introduction, we are not aware of further non-trivial observations. We hope that this concept will eventually shed more light on the differences between pseudo-line arrangements and line arrangements. For higher dimensions, we gave some insight into the structure of approaching hyperplane arrangements via the order type defined by their normal vectors. It would be interesting to obtain further properties of this setting. \bibliographystyle{abbrv}
\section{Introduction} The present program {\tt mFOAM}, and the {\tt FOAM} program of Refs.~\cite{Jadach:2002kn,Jadach:1999sf}, from which {\tt mFOAM} is derived, are both examples of a general-purpose self-adapting Monte Carlo simulator/integrator. Let us briefly recapitulate the main features of {\tt FOAM}, which are shared with the present project. In the cellular algorithm of {\tt FOAM}, points are generated randomly in the multidimensional space according to an arbitrary, user-defined, unnormalized probability distribution function (PDF) $\rho(x)$. The algorithm works in two stages: {\em exploration} and {\em generation}. In the exploration stage the shape of the distribution function is explored using MC methods, dividing the integration domain into a system of cells referred to as ``foam''. The foam of cells is produced in a recursive process of binary splittings of the cells starting from the root cell, which can be a single $k$-dim hyperrectangle, an $n$-dim simplex or a Cartesian product of both. In {\tt mFOAM} we restrict ourselves to hyperrectangles. The PDF $\rho(x)$ is approximated by another PDF $\rho'(x)$, which is equal to a constant within each cell. The main aim of the process of the foam evolution through binary splittings is to minimize either the ratio of the variance of the weight distribution to the average weight $\sigma/\langle w \rangle$, or the ratio of the maximum weight to the average weight $w_{\max}/ \langle w \rangle$, where $w=\rho (x) / \rho^{'} (x)$ is the Monte Carlo weight. In the generation stage every single weighted MC event is generated as follows: first a cell is chosen randomly and next, within this cell, a point (MC event) is generated according to a uniform distribution equal to $\rho'$ and finally the MC weight $w=\rho(x)/\rho'(x)$ is evaluated. As usual, the rejection method may turn these weighted events into weight-one events, with a certain rejection rate (inefficiency).
The main aim of the rather sophisticated cell-splitting algorithm of {\tt FOAM} (exploration phase) is the reduction of $w_{\max}/ \langle w \rangle$, assuring a low rejection rate. Another option is the variance reduction, providing for a self-adapting MC method of precise evaluation of the integrals. In either case, the value of the integrand is already known approximately from the exploration stage and can be estimated with even better precision in the generation phase. It is instructive to compare the cellular algorithm of {\tt FOAM} to the algorithms used by two older programs in the family of self-adapting MC tools: VEGAS \cite{Lepage:1978sw} and MISER \cite{Press:1989vk}. VEGAS primarily implements the so-called importance sampling (variance-reducing) method. It approximates the exact distribution by a multidimensional sampling function $g$. The function $g$ is separable by construction, i.e. $g(x_1,x_2, \ldots, x_n)= g_1(x_1)\, g_2(x_2) \cdots g_n(x_n)$. Owing to this feature, the function $g$ can be stored effectively in the computer memory as a collection of $n$ (one for each dimension) histograms with $K$ bins, without an explosion in the total number of bins, which would in general grow like $K^n$. The sampling distribution is constructed iteratively, step by step, by means of making a number of Monte Carlo explorations over the integration region, while inspecting $n$ 1-dimensional histograms of the projections of the distribution function, one for each dimension. These histograms are used to define the new improved functions $g_i$, which in turn are used to generate MC points in the next iteration. In principle, after a few iterations, one obtains the reference distribution $g$ approximating the PDF. An estimate of the integral of the PDF is also obtained. In practice the performance of VEGAS depends heavily on the goodness of the factorizability assumption for a given PDF.
Generally, VEGAS turns out to be quite efficient for many distributions (integrands) featuring a single well localized peak. The MISER program%
\footnote{Unfortunately, the MISER algorithm was overlooked in the previous papers on the {\tt FOAM} project.} is based on the idea of ``recursive stratified sampling'' and employs a technique of variance reduction similar to that in {\tt FOAM}. It explores the PDF until a fixed maximal number of available function evaluations $N$ is exhausted. In the very beginning, $N$ is allocated to the root cell, which is a hypercube, and later on redistributed among the daughter cells. In the simplest variant the starting hypercube is divided by bisecting it across one of the edges into two sub-cells of equal volume%
\footnote{A quite similar 2-dimensional algorithm is also present in the MC program LESKO, of ref.~\cite{Jadach:1991ty}, and in other programs; see ref.~\cite{Jadach:1999sf} for more references.}. The division plane is chosen by examining all possible $n$ bisections of the $n$-dimensional cell and selecting the one that minimizes the resulting total variance of the two cells. As in {\tt FOAM}, the variances are estimated cell by cell during a short MC survey with a small fraction of the events ``allocated'' to this cell. The remaining pool of unexploited function calls is allocated to the resulting sub-cells in a proportion that fulfills the condition for minimum variance. The whole procedure is repeated for each of the two sub-cells and continues recursively until the number of ``allocated function calls'' in a given cell falls below some predefined limit. In each cell the estimation of the integral is obtained by means of the plain MC method. At the end, the results for all cells are combined together to obtain the final value of the integral and the error estimate. {\tt FOAM} employs a combination of both techniques: importance and stratified sampling.
Contrary to VEGAS, there is no assumption in the {\tt FOAM} algorithm about the factorizability of the distribution (integrand). In the variance reduction mode {\tt FOAM} resembles MISER, but it employs a different, far more sophisticated cell division algorithm; the division plane of the cell is not at the mid-point of the edge, but is optimized. The algorithm of {\tt FOAM} has passed many practical tests and proved its efficiency in several problems in high-energy physics; see for instance~\cite{Placzek:2003zg,Jadach:2005bf}. The foundations of the {\tt FOAM} algorithm are well consolidated and our current work concentrates mainly on updates of the earlier implementations and improvements of the efficiency and functionality. For a detailed description of the algorithm of {\tt FOAM} version 2.05 we refer the interested reader to Refs.~\cite{Jadach:2002kn} and~\cite{Jadach:1999sf}. The use of the original {\tt FOAM} program \cite{Jadach:1999sf} has been limited mainly by its memory consumption. {\tt FOAM}~v.2.05 divides the $n$-dimensional parameter space into hyperrectangular or simplicial cells. The final MC efficiency increases mainly with the requested maximum number of cells $N_c$, so it is very important to economize on the memory used by a single cell in order to reach a higher number of cells. For the hyperrectangular grid of cells a memory-saving algorithm for coding cells in memory was found~\cite{Jadach:2002kn}. It reduces memory consumption down to a mere 80 bytes/cell, independently of the space dimension $n$. The present version, limited to hyperrectangles, profits from this memory-saving algorithm of recording the cell parameters. We would like to mention that in the meantime a similar memory-saving algorithm has also been found and implemented for simplices. It will be included in the forthcoming version 2.06 of {\tt FOAM}~\cite{foam2-6}. The unspoken assumption in {\tt mFOAM} is that the calculation of the PDF is cheap in terms of CPU time.
This is often true in practice. If not, then {\tt mFOAM} may be used to model the main features of the singularities in the PDF, and the fine details, which can be CPU-costly, are then added by an extra MC weight during the MC run, after the exploration. However, in order to deal better with the cases of PDFs which are costly in terms of CPU and feature relatively mild peaks, one should introduce in the future development of {\tt mFOAM} the possibility to limit the total number of PDF calls, in addition to limiting the number of cells. The paper is organized as follows: Section 2 describes changes in the basic classes and their functionality. Section 3 describes the configuration of {\tt mFOAM}. Section 4 discusses the usage of the {\tt mFOAM} classes under the ROOT system. Conclusions follow. \section{Description of {\tt mFOAM} code} {\tt mFOAM} (mini FOAM) is a new version of {\tt FOAM} with slightly limited functionality, well integrated with ROOT~\cite{root:1997}. Our principal aim is to provide a compact and easy-to-use tool for numerical Monte Carlo generation and integration of PDFs with an arbitrarily complicated structure of peaks, in a number of dimensions limited to, say, 20. With the increasing popularity of ROOT in the high-energy physics community, we believe that this implementation tied up with ROOT will attract the interest of new users who already exploit ROOT in their daily work. Let us comment on our decision of removing the simplicial cells from the {\tt mFOAM} algorithm and the code. It was done because of an empirical observation (based on practical experience with a wide range of distributions) that the use of simplicial cells usually gives rise to worse MC efficiency than that of hyperrectangular cells. In addition, maintaining simplicial cells increases the complexity of the source code.
The main motivation for the closer integration of {\tt mFOAM} with the ROOT system was to profit fully from the {\em persistency mechanism} for its objects and help users who already use ROOT daily. Also, thanks to the closer integration with ROOT, the code of {\tt mFOAM} gets more compact, since the internal histogramming and other low-level structures are replaced by the well tested ROOT facilities. Altogether, we have managed to reduce significantly the total size of the code (by about 50\%) and its complexity as well, with respect to the original {\tt FOAM}, at the same time improving its stability. Obviously, the above improvements and gains are purely technical; nevertheless, they are very important if objects of the {\tt mFOAM} class are to be used as ``rock solid'' building blocks in any more complex, large-scale Monte Carlo projects. \begin{table}[!bt] \centering \begin{small} \begin{tabular}{|l|p{12.0cm}|} \hline Class & Short description \\ \hline \hline {\tt TFoamIntegrand} & Abstract class for the integrand function \\ {\tt TFoamVect } & Utility class of vectors with dynamic allocation of memory \\ {\tt TFoamCell } & Class representing the single-cell object \\ {\tt TFoam } & Main class of mFOAM. The entire MC generator\\ {\tt TFoamMaxwt }& Monitors MC weight, measures performance of the MC run \\ \hline \end{tabular} \end{small} \caption{\sf Summary of the C++ classes of {\tt mFOAM}.} \label{tab:classes} \end{table} {\tt mFOAM}, like its ancestor, is written fully in the object-oriented programming (OOP) style in the C++ programming language. The classes of the {\tt mFOAM} program are listed in Table~\ref{tab:classes}. Some classes present in {\tt FOAM-2.05} have been removed, because they are needed only for the simplicial cells. The remaining classes changed their names to comply with the ROOT naming conventions. For the same reason, names of preserved data members now begin with the letter ``f''.
Two basic classes, {\tt TFoam} and {\tt TFoamCell}, are greatly simplified by the removal of all the simplicial structure. All other remaining classes have the same functionality as in {\tt FOAM} version 2.05. In particular, an abstract base class {\tt TFoamIntegrand} provides the user interface to any user-provided PDF. Classes {\tt TFoamVect} and {\tt TFoamMaxwt} are unmodified auxiliary utility classes. In {\tt mFOAM} we use the library of random number generators of ROOT; the {\tt TPSEMAR} class of {\tt FOAM} is removed. All classes of {\tt mFOAM} inherit I/O capabilities from ROOT's {\tt TObject} class. As already advertised, we have paid special attention to the persistency issue. Generally, it is not trivial to get full persistency for the {\tt mFOAM} and {\tt FOAM} classes, mainly because of the intensive use of pointers in the coding of the linked binary trees of the foam cells. All these problems are now solved efficiently with the help of the ROOT pointer classes. Consequently, any object of the {\tt mFOAM} class can be easily written at any time to disk and restored later on, with the help of the ``automatic streamers'' generated by ROOT. In this way, generation of the MC events can be easily stopped and resumed. When the MC generation of the series of events is resumed, the MC generation continues as if there had been no disk-read and disk-write in the meantime. A simple persistent abstract class (interface) representing any user-defined PDF is available. We refer the reader to Section~\ref{sec:usage} for a number of explicit examples/templates of how to exploit it. Let us now characterize briefly the role of the most important classes in the implementation of the {\tt mFOAM} algorithm.
\subsection{{\tt TFoam} class} \begin{table}[!ht] \centering \begin{small} \begin{tabular}{|l|p{11.0cm}|} \hline {\tt TFoam} member & Short description \\ \hline\hline TString fVersion$^g$ & Actual version of the {\tt mFOAM} (like 1.02m)\\ TString fDate & Release date of the {\tt mFOAM}\\ TString fName & Name of a given instance of the {\tt TFoam} class\\ Int\_t fDim$^{s,g}$ & Dimension of the integration space\\ Int\_t fNCells$^s$ & Maximum number of cells\\ Int\_t fRNmax & Maximum number of random numbers generated at once\\ \hline Int\_t fOptDrive$^s$ & Optimization =1,2 for variance or maximum weight reduction\\ Int\_t fChat$^s$ & =0,1,2 chat level in output; =1 for normal output\\ Int\_t fOptRej$^s$ & =0 for weighted events; =1 for unweighted events in MC generation\\ \hline Int\_t fNBin$^s$ & No. of bins in edge histogram for cell MC exploration\\ Int\_t fNSampl$^s$ & No. of MC events, when dividing (exploring) cell\\ Int\_t fEvPerBin$^s$ & Maximum number of effective ($w=1$) events per bin\\ Double\_t fMaxWtRej$^s$; &Maximum weight in rejection for getting $w=1$ events\\ \hline \end{tabular} \end{small} \caption{\sf Data members of the {\tt TFoam} class. 
Associated setters and getters marked as superscripts $s$ and $g$.} \label{tab:TmFOAMmembers1} \end{table} \begin{table}[!ht] \centering \begin{small} \begin{tabular}{|l|p{90mm}|} \hline {\tt TFoam} member & Short description \\ \hline \multicolumn{ 2}{|c|}{{ Provision for the multibranching } }\\ \hline Int\_t *fMaskDiv &![fDim] Dynamic mask for cell division\\ Int\_t *fInhiDiv &![fDim] Flags inhibiting cell division \\ Int\_t fOptPRD &Option switch for predefined division, for quick check\\ TFoamVect **fXdivPRD &!Lists of division values encoded in one vector per direction\\ \hline \multicolumn{ 2}{|c|}{{ Geometry of cells } }\\ \hline Int\_t fNoAct &Number of active cells\\ Int\_t fLastCe &Index of the last cell\\ TFoamCell **fCells &[fNCells] Array of ALL cells\\ \hline \multicolumn{ 2}{|c|}{{ Monte Carlo generation } }\\ \hline TFoamMaxwt *fMCMonit; &Monitor of the MC weight for measuring MC efficiency\\ TRefArray *fCellsAct &Array of pointers to active cells. \\ Double\_t *fPrimAcu &[fNoAct] Array of cumulative $\sum_{i=1}^k R'_i$ \\ TObjArray *fHistEdg &Histograms of $w$, one for each edge \\ TObjArray *fHistDbg &Histograms for debug \\ TH1D *fHistWt; &Histograms of MC weight \\ \hline \multicolumn{ 2}{|c|}{{ Externals } }\\ \hline TMethodCall* fMethodCall$^s$ & !ROOT's pointer to global distribution function \\ TFoamIntegrand *fRho$^{g,s}$ & Pointer to class with distribution function \\ TRandom *fPseRan$^{g,s}$ &Generator of the uniform pseudo-random numbers\\ \hline \multicolumn{ 2}{|c|}{{ Statistics and MC results } }\\ \hline Long\_t fNCalls$^g$ &Number of function calls\\ Long\_t fNEffev$^g$ &Total No. of effective $w=1$ events in build-up\\ Double\_t fSumOve &Sum of overweighted events \\ Double\_t fSumWt, fSumWt2&Sum of weight $w$ and squares $w^2$\\ Double\_t fNevGen &No. 
of MC events\\ Double\_t *fMCvect &[fDim] MC vector \\ Double\_t fMCwt &MC weight \\ Double\_t fWtMax, fWtMin &Maximum/Minimum weight (absolute)\\ Double\_t fPrime$^g$ &Primary integral $R'$, ($R=R' \langle w \rangle$)\\ Double\_t fMCresult &True integral $R$ from the cell exploration MC\\ Double\_t fMCerror &and its error\\ Double\_t *fRvec &[fRNmax] random number vector \\ \hline \multicolumn{ 2}{|c|}{{ Working space for cell exploration } }\\ \hline Double\_t *fAlpha &[fDim] Internal parameters of the h-rectangle: $0<\alpha_i<1$\\ \hline \end{tabular} \end{small} \caption{\sf Data members of the {\tt TFoam} class. Cont.} \label{tab:TmFOAMmembers2} \end{table} \begin{table}[hp] \centering \begin{small} \begin{tabular}{|l|p{80mm}|} \hline {\tt TFoam} method & Short description \\ \hline \multicolumn{ 2}{|c|}{ Constructors and destructors }\\ \hline TFoam() & Default constructor (for ROOT streamer)\\ TFoam(const Char\_t *) & User constructor\\ $\tilde{\mbox{}}$ TFoam() & Explicit destructor\\ TFoam(const TFoam\&) & Copy Constructor NOT USED\\ TFoam\& operator=(const TFoam\& )& Substitution NOT USED \\ \hline \multicolumn{2}{|c|}{ Initialization, foam build-up }\\ \hline void Initialize() & Initialization, allocation of memory\\ void SetRho(TFoamIntegrand *) & Sets the pointer to distribution function\\ void ResetRho(TFoamIntegrand *) & Resets the pointer to distribution function\\ void SetRhoInt(void *) & Sets the pointer to user-defined global function \\ void SetPseRan(TRandom*) & Sets the pointer to r.n.g. \\ void ResetPseRan(TRandom*) & Resets the pointer to r.n.g. 
\\ void InitCells(void) & Initializes memory for cells and starts exploration\\ void Grow(void) & Adds new cells to foam, until buffer is full\\ Int\_t Divide(TFoamCell *) & Divides cell into two daughters\\ void Explore(TFoamCell *Cell) & MC exploration of cell main subprogram\\ void Carver(Int\_t\&,Double\_t\&,Double\_t\&)& Determines the best edge, $w_{\max}$ reduction\\ void Varedu (Double\_t[~], & \\ Int\_t\&,Double\_t\&,Double\_t\&)& Determines the best edge, $\sigma$ reduction\\ Long\_t PeekMax(void) & Chooses one active cell, used in {\tt Grow}\\ void MakeAlpha(void) & Generates rand. point inside h-rectangle\\ Int\_t CellFill(Int\_t, TFoamCell*) & Fills next cell and returns its index\\ void MakeActiveList(void) & Creates table of all active cells\\ void SetInhiDiv(Int\_t, Int\_t ) & Sets inhibition of cell division along certain edge \\ void SetXdivPRD(Int\_t, Int\_t, Double\_t[]); & Sets predefined division points\\ Double\_t Eval(Double\_t *) & Evaluates value of the distribution function \\ \hline \multicolumn{ 2}{|c|}{ Generation }\\ \hline void MakeEvent(void) & Makes (generates) single MC event\\ void GetMCvect(Double\_t *) & Provides generated random MC vector\\ Double\_t GetMCwt(void) & Provides MC weight\\ Double\_t MCgenerate(Double\_t *MCvect)& All the above in single method\\ void GenerCel2(TFoamCell *\&) & Chooses one cell with probability $\sim R'_j$\\ \hline \multicolumn{ 2}{|c|}{ Finalization, reinitialization }\\ \hline void Finalize(Double\_t\&, Double\_t\&) & Prints summary of MC integration\\ void GetIntegMC(Double\_t\&, Double\_t\&)& Provides MC integral\\ void GetIntNorm(Double\_t\&, Double\_t\&)& Provides normalization\\ void GetWtParams(const Double\_t, & \\ ~~~~Double\_t\&, Double\_t\&, Double\_t\&) & Provides MC weight parameters\\ \hline \multicolumn{ 2}{|c|}{ Debug }\\ \hline void CheckAll(const Int\_t) & Checks correctness of the data structure\\ void PrintCells(void) & Prints all cells\\ \hline \end{tabular} \end{small} 
\caption{\sf Methods of the {\tt TFoam} class.} \label{tab:TmFOAMmethods1} \end{table} {\tt TFoam} is the main class. Each instance of the {\tt TFoam} class is a separate, independent MC generator. In Tables~\ref{tab:TmFOAMmembers1} and \ref{tab:TmFOAMmembers2}, we provide a full list of the data members of the class {\tt TFoam} and their short description. Most of the methods (procedures) of the class {\tt TFoam} are listed in Table~\ref{tab:TmFOAMmethods1}. We omitted in this table ``setters'' and ``getters'', which provide access to some data members, and simple inline functions, such as {\tt sqr} for squaring a {\tt Double\_t} variable. Data members that are served by the setters and getters are marked in Tables~\ref{tab:TmFOAMmembers1} and \ref{tab:TmFOAMmembers2} by the superscripts ``$s$'' and/or ``$g$''. We followed closely the ROOT naming conventions and decided to use appropriate ROOT types instead of raw C number types. In this way we assure the portability of our code to the forthcoming generation of inexpensive 64-bit processors. Below we briefly describe the functionality of the most important methods in the {\tt TFoam} class. \subsubsection{Constructor} The {\tt TFoam(const Char\_t *)} constructor creates the {\tt TFoam} object whose name is given by its argument. For example, the following line of code creates an instance of the {\tt mFOAM} generator named {\tt FoamX}:
\begin{verbatim}
TFoam *FoamX = new TFoam("FoamX"); // Create Simulator
\end{verbatim}
The main role of the constructor is to initialize data members to their default values -- no memory allocation is done at this stage. The principal configuration parameters can be optionally changed by using setter methods (this is described in Sect.~\ref{configuring}). \subsubsection{Setting the distribution function and random number generator} The user should also provide her/his own unintegrated non-negative probability distribution function (PDF). Note that the PDF may be discontinuous.
{\tt mFOAM} can cope with integrable infinite singularities in the PDF. However, we do not really recommend using it for such cases. Two methods are available for providing a PDF object to an {\tt mFOAM} object: {\tt SetRho(TFoamIntegrand *)} sets the pointer to the PDF object through the abstract-class {\tt TFoamIntegrand} pointer (interface). The user can also provide a global PDF, making it available to the {\tt mFOAM} object by calling the method {\tt SetRhoInt(void *)}. A detailed description of how to implement all kinds of PDFs is given in Sect.~\ref{example}. The random-number generator (RNG) object is created by the user and set as a pointer with the {\tt SetPseRan(TRandom *)} method; see explicit examples in Sect.~\ref{sec:usage}. How to organize the interrelation between the RNG and PDF objects of the {\tt TRandom} and {\tt TFoamIntegrand} classes, so that they serve several objects of the {\tt mFOAM} class without destroying persistency, is discussed in Sect.~\ref{sec:extern}. \subsubsection{Initialization step methods} To begin the process of the foam build-up, the user should invoke the {\tt Initialize()} method. The method {\tt InitCells} initializes the memory storage for cells and begins the exploration process, starting from the root cell. The empty cells are allocated/filled using {\tt CellFill}. The procedure {\tt Grow}, which loops over cells, picks the cell with the biggest ``driver integral'' (see Ref.~\cite{Jadach:2002kn} for explanations) with the help of the {\tt PeekMax} procedure. The chosen cell is split using the {\tt Divide} procedure. Subsequently, the procedure {\tt Explore}, called by {\tt Divide} (and by {\tt InitCells} for the root cell), does the most important job in the {\tt mFOAM} build-up: it performs a low-statistics MC exploration run for each newly allocated daughter cell.
The {\tt Explore} procedure calculates how profitable the future split of the cell will be and defines the optimal cell-division geometry with the help of the {\tt Carver} or {\tt Varedu} procedures, for maximum-weight or variance optimization, respectively. All essential results of the exploration are written into the explored cell object. At the very end of the foam build-up, {\tt MakeActiveList} is invoked to create a list of pointers to all active cells, for quick access during the MC generation. The procedure {\tt Explore} uses {\tt MakeAlpha}, which provides random coordinates inside a given cell with a uniform distribution. The above sequence of procedure calls is depicted in Fig.~\ref{fig:initialize}. \begin{figure}[!ht] \begin{center} \epsfig{file=initialize.eps,width=80mm,angle=270} \end{center} \caption{\sf Calling sequence of the {\tt mFOAM} procedures during the foam build-up (initialization). } \label{fig:initialize} \end{figure} \subsubsection{MC event generation step methods} The MC generation of a single MC event is done by invoking {\tt MakeEvent}, which randomly chooses a cell with the help of the method {\tt GenerCel2} and, next, the internal coordinates of the point within the cell using {\tt MakeAlpha}. The absolute coordinates of the MC event are calculated and stored in the double-precision vector data member {\tt fMCvect}. The MC weight is calculated using the procedure {\tt Eval}, which provides the density distribution $\rho(x)$. The MC event (double-precision vector) and its weight are available through the getters {\tt GetMCvect} and {\tt GetMCwt}. The user may alternatively call {\tt MCgenerate}, which invokes {\tt MakeEvent} and provides an MC event and its weight simultaneously. \subsubsection{Finalize step methods} The use of the method {\tt Finalize} is not mandatory. It prints statistics and calculates the estimate of the integral using the average weight from the MC run.
The amount of printed information depends on the value of {\tt fChat}. For the normalization of the plots and integrals, the user needs to know the exact value of $R'=\int \rho'(x) dx$, which is provided by the method {\tt GetIntNorm} or {\tt Finalize}. The actual value of the integral from the MC series is provided by {\tt GetIntegMC}. Note that, for the convenience of the user, {\tt GetIntNorm} provides $R'$ or an MC estimate of $R=\int \rho(x) dx$, depending on whether the MC run was with variable-weight or weight $=1$ events. Another useful finalization procedure \begin{center} \small \begin{verbatim}
GetWtParams(const Double_t eps, Double_t &AveWt,
            Double_t &WtMax, Double_t &Sigma)
\end{verbatim} \end{center} \noindent provides three parameters that characterize the MC weight distribution: the average weight {\tt AveWt}; the ``intelligent'' maximum weight%
\footnote{The $\varepsilon$-dependent maximum weight is defined such that events with $w>w^{\varepsilon}_{\max}$ contribute an $\varepsilon$-fraction to the total integral. It is numerically more stable than the one defined as the largest weight in the MC run.} {\tt WtMax}~$=w^\varepsilon_{\max}$, for a given value of {\tt eps}~$=\varepsilon$; and the variance {\tt Sigma}~$=\sigma$. In particular, in the case of $w=1$ events, $w^\varepsilon_{\max}$ can be used as an input for the next MC run. \subsubsection{Debug facility} The {\tt TFoam} class includes the method {\tt CheckAll} for debugging purposes. It checks the correctness of the pointers in the doubly linked tree of cells (this can take time for large $N_c$). Another debugging method, {\tt PrintCells}, can be used at any stage of the calculation in order to print the list of all cells. \begin{table}[!ht] \centering \begin{small} \begin{tabular}{|l|p{11.0cm}|} \hline TFoamCell member & Short description \\ \hline\hline \hline \multicolumn{ 2}{|c|}{ ``Static'' member, the same for all cells!
}\\ \hline Short\_t fDim & Dimension of integration space\\ \hline \multicolumn{ 2}{|c|}{ Linked tree organization}\\ \hline Int\_t fSerial & Serial number (index in fCells from TFoam class)\\ Int\_t fStatus & Status (active or inactive)\\ TRef fParent & Pointer to parent cell\\ TRef fDaught0 & Pointer to daughter 1\\ TRef fDaught1 & Pointer to daughter 2\\ \hline \multicolumn{ 2}{|c|}{The best split geometry from the MC exploration}\\ \hline Double\_t fXdiv & Factor $x$ of the cell split\\ Int\_t fBest & The best edge candidate for the cell split\\ \hline \multicolumn{ 2}{|c|}{Integrals of all kinds}\\ \hline Double\_t fVolume & Cartesian volume of this cell\\ Double\_t fIntegral & Integral over cell (estimate from exploration)\\ Double\_t fDrive & Driver integral $R_{\rm loss}$ for cell build-up\\ Double\_t fPrimary & Primary integral $R'$ for MC generation\\ \hline \end{tabular} \end{small} \caption{\sf Data members of the {\tt TFoamCell} class.} \label{tab:TFCELLmembers} \end{table} \subsection{TFoamCell class} The {\tt TFoamCell} class contains data and methods relevant to a single cell object. Data members of the class are listed in Table~\ref{tab:TFCELLmembers}. In comparison with {\tt FOAM} the number of data members is significantly reduced. Most of the methods of the {\tt TFoamCell} class are setters and getters. The non-trivial methods are {\tt GetHcub} and {\tt GetHSize}, which calculate the absolute position and size of hyperrectangles, and {\tt CalcVolume}, which calculates the Cartesian volume of the cell. 
The linked tree structure of {\tt TFoamCell} objects was not properly treated by the ROOT automatic streamers; hence, in the previous version of {\tt FOAM}, persistency was achieved with the help of some workarounds -- namely, pointers to cells in the linked list of cells were replaced by integer indexes%
\footnote{This workaround will be unnecessary after certain bugs have been corrected in the future implementation of the ROOT streamers.}. In {\tt mFOAM} we go back to the pointers, but instead of raw C++ pointers we employ objects of {\tt TRef}, ROOT's special class of persistent pointers. This solution works very well, and as a consequence the method {\tt LinkCells}%
\footnote{{\tt LinkCells} and integer pointers in the {\tt TFoamCell} class were introduced in {\tt FOAM} as a ``workaround'' solution for certain problems with the persistency of pointers in ROOT. It is still implemented in {\tt FOAM} as a void function for the purpose of backward compatibility in user applications.} from the {\tt TFoam} class became obsolete. However, in the present implementation the memory consumption is increased with respect to integer indexing; one cell now occupies 116 bytes of memory, simply because objects of the {\tt TRef} class are composite objects. \subsection{TRandom -- ROOT's collection of random-number generators} \label{TRandom} The full version 2.05 of {\tt FOAM} uses its own internal random-number generator called \mbox{RANMAR~\cite{Marsaglia:1990ig}}. In {\tt mFOAM} it is replaced by the {\tt TRandom} class, interfacing to ROOT's internal library of three random-number generators. Two of them are rather simple generators, and we do not recommend their use in any serious applications. We recommend using the Mersenne Twister generator {\tt TRandom3}, which has a huge period of $2^{19937}-1$ and generally very good quality \cite{mtwistor}.
At present, the {\tt TRandom} package does not include any random-number generator with perfect (controllable) ``randomness'', such as RANLUX \cite{Luscher:1993dy,James:1993vv}, which is necessary for certain applications%
\footnote{However, the authors of ROOT are planning to include RANLUX in the near future.}. Generally, we have decided to use {\tt TRandom} because it meets our set of minimal requirements for a library of random-number generators, which can be characterized as follows: \begin{itemize} \item Possibility to set (and reset) the initial ``seed'' in the form of just one integer. \item Availability of a method generating a single uniform random number. \item Presence of a method generating a series of uniform random numbers in a single call. \item Possibility to record (disk-write) the complete status of the random-number generator and restart it using this record. (This, of course, is assured by the persistency mechanism of ROOT.) \end{itemize} An advanced user of ROOT can also easily add her/his favourite random-number generator with the same standardized interface (using inheritance from {\tt TRandom}). The use of {\tt TRandom} is rather simple. As an example, let us show the following line of code: \begin{verbatim}
TRandom *PseRan = new TRandom3(4357); // Create random number generator
\end{verbatim} which creates an instance of the Mersenne Twister generator with the seed $4357$. Note that the {\tt TRandom} class includes many ``utility methods''; however, only a small subset of them is used in {\tt mFOAM}. For a detailed description of the {\tt TRandom} class, we refer the interested reader to the online ROOT documentation. How to use a single {\tt TRandom} object for serving several objects of the {\tt TFoam} class is described in Sect.~\ref{sec:extern}. \section{Configuring {\tt mFOAM}} \label{configuring} \begin{table}[ht!] \centering \begin{small} \begin{tabular}{|l|l|p{12.0cm}|} \hline Param.
& Value & Meaning \\ \hline\hline kDim & 0$^*$ & Dimension of the integration space\\ nCells & 1000$^*$ & Maximum number of cells\\ nSampl & 200$^*$ & No. of MC events in the cell MC exploration\\ nBin & 8$^*$ & No. of bins in edge-histogram in cell exploration\\ OptRej & 1$^*$ & OptRej = 0, weighted; =1, $w=1$ MC events\\ OptDrive & 2$^*$ & Maximum weight reduction\\ & 1 & or variance reduction\\ EvPerBin & 25$^*$ & Maximum number of effective $w=1$ events/bin\\ & 0 & or counting of number of effective events/bin is inactive\\ Chat & 1$^*$ & =0,1,2 is the ``chat level'' in the standard output\\ MaxWtRej & 1.1$^*$ & Maximum weight used to get $w=1$ MC events\\ \hline \end{tabular} \end{small} \caption{\sf Nine principal configuration parameters and switches of the {\tt mFOAM} program. The default values are marked with the superscript star.} \label{tab:TFCELLparams} \end{table} At present {\tt mFOAM} has {\em nine principal configuration parameters}. In addition, the user may optionally (re)define certain internal configuration parameters of {\tt mFOAM} in order to inhibit and/or predefine the division geometry in the cell split. All of the nine principal parameters are listed in Table~\ref{tab:TFCELLparams}. They control all essential features of the program and are preset to meaningful default values, appropriate for the generation of unweighted events. A new, inexperienced user of {\tt mFOAM} usually does not need to reset them. The only exception is the dimension of the integration space, {\tt kDim}. It is mandatory to set {\tt kDim} to a non-zero integer value before invoking {\tt Initialize}. In comparison with {\tt FOAM-2.05}, two steering parameters were completely removed: {\tt nDim} and {\tt OptOrd}, as they are relevant only for simplicial cells. Three others are hidden from the user's eyes, because their usefulness is rather limited. The functionality of the program was frozen for the following choice: {\tt OptPeek=0}, {\tt OptEdge=0} and {\tt OptMCell=1}.
Finally, the default value of the optional {\tt OptRej} switch is now set to 1 (weight $=1$ events) instead of 0. If the user wants to redefine the configuration parameters according to her/his needs, then the relevant piece of code will look as follows: \begin{verbatim}
FoamX->SetkDim( kDim);
FoamX->SetnCells( nCells);
FoamX->SetnSampl( nSampl);
FoamX->SetnBin( nBin);
FoamX->SetOptRej( OptRej);
FoamX->SetOptDrive( OptDrive);
FoamX->SetEvPerBin( EvPerBin);
FoamX->SetMaxWtRej( MaxWtRej);
FoamX->SetChat( Chat);
\end{verbatim} The user of {\tt mFOAM} can decide to inhibit the division in some variables. This can be done with the method {\tt SetInhiDiv(Int\_t iDim, Int\_t InhiDiv)} of the class {\tt TFoam}, where {\tt iDim} is the index of the variable for which the inhibition is done and {\tt InhiDiv} is the inhibition switch. This method should be used before invoking {\tt Initialize}, after setting {\tt kDim}. The relevant code may look as follows: \begin{verbatim}
FoamX->SetInhiDiv(0, 1);   //Inhibit division of x_1
FoamX->SetInhiDiv(1, 1);   //Inhibit division of x_2
\end{verbatim} The allowed values are {\tt InhiDiv=0,1} and the default value is {\tt InhiDiv=0}. Note that the numbering of integration variables with the index {\tt iDim} starts from zero. The inhibited variables are generated uniformly. The user may also predefine divisions of the root cell in certain variables using the method {\tt SetXdivPRD(Int\_t iDim, Int\_t len, Double\_t xDiv[])}. The relevant piece of the user code may look as follows: \begin{verbatim}
Double_t xDiv[3];
xDiv[0]=0.30;
xDiv[1]=0.40;
xDiv[2]=0.65;
FoamX->SetXdivPRD(0, 3, xDiv);
\end{verbatim} Again, this should be done before invoking {\tt Initialize}, after setting {\tt kDim}. \section{Usage of the mFOAM package} \label{sec:usage} To begin work with the {\tt mFOAM} package, a user should have basic knowledge of ROOT and the CINT interpreter. Very good documentation of ROOT is available.
{\tt mFOAM} is now included in the standard ROOT distribution (beginning with version 4.04). The ROOT package can be obtained from ROOT's web page%
\footnote{See http://root.cern.ch for more information.}. Precompiled binaries are also available as tar archive files for many major platforms: PC computers with both Linux and MS Windows systems, and workstations under UNIX. All supported operating systems can be found on ROOT's home page. The installation process is straightforward and on most UNIX-like systems amounts to unpacking the tarball file and setting two environment variables: {\tt ROOTSYS}, which should point to the ROOT main directory, and {\tt LD\_LIBRARY\_PATH}, which locates the ROOT libraries. We strongly recommend using binaries that exactly match the user's operating system. If precompiled binaries for the user's system are not available, then a direct installation from source code is necessary. The source code can be obtained as a tarball or through the CVS repository. A detailed description of the configuration and compilation of the ROOT package is beyond the scope of this article; therefore, we refer the interested user to ROOT's online documentation. After successful installation, the shared library {\tt libFoam.so} is present in the {\tt \$ROOTSYS/lib} directory. This library can be loaded directly into ROOT by issuing the following command from the CINT command line%
\footnote{Explicit loading of the {\tt mFOAM} library is needed only in rare cases, when a valid {\tt system.rootmap} file was not created after the compilation of the source code with the help of the {\tt make map} command.}: \begin{flushleft} \hspace{0.8cm} \tt root [0] .L \$ROOTSYS/lib/libFoam.so \end{flushleft} From now on, the user will get access to all {\tt mFOAM} classes while interpreting/executing C++ scripts/programs under the {\tt CINT} interpreter of {\tt ROOT}, or simply working interactively from the command line.
\subsection{Demonstration programs} \label{example} The user application program can be compiled/run using one of the following three methods: \begin{enumerate} \item The user program is interpreted by {\tt CINT} of {\tt ROOT}. This simple method might be too slow in execution and inhibits the use of the persistency of the {\tt mFOAM} class. \item The user program is compiled/linked on the fly, employing the Automatic Compiler of Libraries (ACLiC) facility of {\tt CINT}. This automates the process of compilation and linking, and the persistency of the {\tt mFOAM} class is available. It is the preferred mode of work for medium- and small-size applications. \item The standard compile-link-run method. This method is well suited for large MC projects, which are run in batch mode. \end{enumerate} We tried to provide the user with examples of all possible compile/run methods. Demonstration scripts in the {\tt \$ROOTSYS/tutorials} directory cover the first two methods and show the basic features of {\tt mFOAM}. In addition, there is a collection of simple programs showing how to build and run stand-alone applications. They are distributed as a {\tt mFoam-examples-1.2.tar} file, which is available from the authors' web page. \subsubsection{Examples in \$ROOTSYS/tutorials directory} \label{tutorial} Let us now describe in more detail some demonstration scripts in the {\tt tutorials} subdirectory of the ROOT distribution directory. There are three demonstration programs there. The first of them, {\tt foam\_demo.C}, demonstrates the full power of {\tt mFOAM} compiled by the ACLiC facility (scenario 2 above), showing all essential phases of its usage: initialization, and the setting up of the random-number generator and the distribution to be generated/integrated. Examples of setting optional input parameters are also shown. Finally, MC generation and the retrieval of the value of the integral and other parameters after the MC run are also demonstrated.
This example is a slightly modified version of the analogous program in the {\tt FOAM} distribution~\cite{Jadach:2002kn}. Let us explain the content of the {\tt foam\_demo.C} script. After the collection of headers, we see the definition of the distribution to be generated/integrated: \begin{verbatim}
class TFDISTR: public TFoamIntegrand {
public:
  TFDISTR();
  Double_t Density(Int_t, Double_t *){
  ......................
  }
  ClassDef (TFDISTR,1) //Class of testing functions for FOAM
};
ClassImp(TFDISTR)
.....................
TFoamIntegrand *rho= new TFDISTR();
FoamX->SetRho(rho);
\end{verbatim} Class {\tt TFDISTR} inherits from the abstract class {\tt TFoamIntegrand}. Note the presence of the {\tt ClassImp} and {\tt ClassDef} statements, which tell ROOT to create an automatic streamer for this class. The subsequent piece of the code creates the objects of the random-number generator, the integrand distribution and the {\tt mFOAM} object itself: \begin{verbatim}
TRandom *PseRan = new TRandom3();    // Create random number generator
PseRan->SetSeed(4357);               // Set seed
TFoamIntegrand *rho= new TFDISTR();  // Create integrand distribution
TFoam *FoamX = new TFoam("FoamX");   // Create MC simulator/generator
\end{verbatim} Next, some configuration parameters of the {\tt TFoam} object {\tt FoamX} are redefined before it is initialized (exploration): \begin{verbatim}
FoamX->SetkDim(kDim);        // mandatory!
FoamX->SetnCells(nCells);    // optional
FoamX->SetRho(rho);          // mandatory!
FoamX->SetPseRan(PseRan);    // mandatory!
FoamX->Initialize();         // Initialize MC simulator/generator
\end{verbatim} At this point, attention should be paid to the fact that just {\em after the exploration phase} the object of the {\tt mFOAM} class is written to the file {\tt rdemo.root}: \begin{verbatim}
TFile RootFile("rdemo.root","RECREATE","histograms");
.............
FoamX->Write("FoamX");       // Writing mFOAM object on the disk
.............
RootFile.Write();
RootFile.Close();
\end{verbatim} Finally, a series of MC events is generated: \begin{verbatim}
for(loop=0; loop<NevTot; loop++) {
  FoamX->MakeEvent();           // generate MC event
  FoamX->GetMCvect( MCvect);    // get MC point
  MCwt=FoamX->GetMCwt();        // get MC weight
  ..........
}
\end{verbatim} The code ends with printouts of the value of the integral over the PDF and some other statistics concerning the MC run. The user is invited to manipulate the configuration parameters of {\tt mFOAM}. In particular, we recommend switching to weighted events ({\tt OptRej=0}) and changing the number of cells {\tt nCells} in the initialization. The {\tt foam\_demo.C} program is compiled, linked and executed from the CINT shell by issuing the following commands: \begin{flushleft} \hspace{0.8cm} \tt \$ root \\ \hspace{0.8cm} root [0] .L ../lib/libFoam.so \\ \hspace{0.8cm} root [1] .x foam\_demo.C+ \\ \end{flushleft} Note that the suffix ``+'' instructs CINT to use the Automatic Compiler of Libraries (ACLiC) facility. In such a case the process of compilation and linking is completely automated. During the compilation phase the shared library {\tt foam\_demo\_C.so} is created, which contains the definition of the {\tt TFDISTR} class, together with its automatic streamers. This is exactly what we need for testing persistency. In a stand-alone application, the class of the PDF would have to be directly compiled and put in a shared library for further use; here it is done in a simplified way. The second small program, {\tt foam\_demopers.C}, demonstrates the use of the persistency of the {\tt mFOAM} class. It reads the {\tt mFOAM} object from the disk, checks its consistency, prints out the geometry of the cells and starts the generation of events.
It can be interpreted directly by {\tt CINT}: \begin{flushleft} \hspace{0.8cm} \tt \$ root \\ \hspace{0.8cm} \tt root [0] .x foam\_demopers.C\\ \end{flushleft} The {\tt foam\_demo\_C.so} library, defining the {\tt TFDISTR} class, is loaded at run-time with the help of \begin{verbatim}
gROOT->ProcessLine(".L foam_demo.C+")
\end{verbatim} in the code. The user may verify that the output from it is {\em exactly the same} as the analogous output of {\tt foam\_demo.C}. This illustrates the fact that the {\tt mFOAM} object, the MC simulator, can be dumped to disk at any moment and resumes its functioning after being reloaded from the disk, as if there had been no disk-write and disk-read at all. The other macro, {\tt foam\_kanwa.C}, is a simplified, shorter version of {\tt foam\_demo.C}, without any unnecessary modification of the configuration parameters of {\tt mFOAM} (they are internally set to sensible default values). This macro might be useful for the first-time user of {\tt mFOAM}. On the other hand, this program adds a simple example of graphics using {\tt ROOT}; the 2-dimensional distribution of the produced MC events is shown dynamically on the screen, as the accumulated MC statistics grows. Notice the use of the {\tt TApplication} object, in order to stabilize the picture on the screen during the execution. This macro can be executed/interpreted (scenario 1) directly by typing: \begin{flushleft} \hspace{0.8cm} \tt \$ root \\ \hspace{0.8cm} \tt root [0] .x foam\_kanwa.C \\ \end{flushleft} The example output from running {\tt foam\_kanwa.C} is reproduced in the appendix. The simulation will start and then a plot of the distribution function will pop up on the graphical {\em canvas} on the screen. The execution is noticeably slower, as is always the case for interpreted programs. The main difference with respect to {\tt foam\_demo.C} is in the distribution function, which is now defined simply as a global function, {\tt Camel2}.
It is made accessible to the {\tt mFOAM} object {\tt FoamX} in the following line of code: \begin{verbatim}
FoamX->SetRhoInt(Camel2);
\end{verbatim} Another difference is that the shared library of {\tt mFOAM} is loaded with the following explicit instruction: \begin{verbatim}
gSystem->Load("libFoam.so");
\end{verbatim} instead of the linking procedure. This instruction is not really needed if ROOT is already aware of the location of the {\tt mFOAM} library. In some of the above examples we could not exploit the persistency of the ROOT objects. This is because of restrictions in CINT, which does not allow an interpreted function to inherit from the {\tt TObject} class. This is the reason why, in the examples where the PDF is a global function, the automatic streamer cannot be generated. Even if one wrote the {\tt mFOAM} object to disk, the information about the PDF would be lost. Of course, the user may always go back to one of the compilation methods and enjoy full persistency of the {\tt mFOAM} objects. In addition to better persistency, the compiled applications have the advantage of being significantly faster in execution. \subsubsection{More advanced examples of the use of {\tt mFOAM}} \label{examps} Let us now describe in more detail some examples of the use of the {\tt mFOAM} classes in stand-alone applications (scenario 3). This may be of interest to more advanced users, who plan to use {\tt mFOAM} as part of their large-scale Monte Carlo projects. It is assumed that {\tt ROOT} is installed and the environment variable {\tt ROOTSYS} is properly set. After unpacking the distribution file {\tt mFoam-examples-1.2.tar}, one should execute the {\tt configure} script: \begin{flushleft} \hspace{0.8cm} \tt \$ cd mFoam-examples-1.2 \\ \hspace{0.8cm} \tt \$ ./configure \\ \end{flushleft} which inspects the system configuration, looks for the {\tt ROOT} library, and then generates the {\tt Makefile}. Version 4.04 of {\tt ROOT} or later is required.
The {\tt configure} script can fail for many reasons. In that case, the user should first check whether the {\tt ROOTSYS} environment variable indeed points to the {\tt ROOT} installation location. The default behaviour of the {\tt configure} script can be changed by additional command-line parameters and environment variables. This may be useful if the computer is equipped with a compiler other than {\tt gcc}. A full list of available options is displayed by the {\tt configure -h} command. The {\tt configure} script and the accompanying configuration files were generated%
\footnote{The distribution directory with the {\tt configure} script was created with the command sequence {\tt (autoreconf -i; ./configure; make dist)}, activating the directive {\tt AM\_MAINTAINER\_MODE} in {\tt config.in}.} using {\tt automake} tools version 1.91. In case the user wants to re-create the {\tt configure} script and the accompanying files, version 1.61 or later of {\tt automake} is needed. To compile and link these codes, one should type the following: \begin{flushleft} \hspace{0.8cm} \tt \$ make \\ \hspace{0.8cm} \tt \$ make install \\ \end{flushleft} We have successfully tested the installation of {\tt mFOAM-examples} on computers with several variants of the Linux operating system: CERN Scientific Linux SLC3, Red Hat Linux 7.3 and Fedora Linux FC3. The code is highly portable and we think that it should compile without any problems on all other systems supported by the ROOT developers. In rare cases, certain minor modifications of the source code may be necessary. After successful compilation one can run the demonstration programs with the following commands: \begin{verbatim}
make kanwa
make demo
make testpers
\end{verbatim} The content and functionality of the programs {\tt demo.cxx} and {\tt kanwa.cxx} are the same as those of their macro counterparts {\tt foam\_demo.C} and {\tt foam\_kanwa.C} described above. The code of these programs can serve as a useful template for user applications.
The command {\tt make testpers} runs an advanced test of persistency with two generator objects served by one central random-number generator. In this example two classes of MC event generators, {\tt TGenMC} and {\tt TGenMC2}, are defined and the corresponding library {\tt libTGenMC.so} is created. Each MC event generator object uses its own object of the {\tt mFOAM} class and one external object of the class {\tt TRandom} -- the central RNG. In the program {\tt Main.cxx}, two objects of the classes {\tt TGenMC} and {\tt TGenMC2} are created. Also, a single central RNG object is allocated and made available to both MC generators. All three objects are written into a disk file and used to generate 200k MC events, using each of the two MC generators. The other program, {\tt MainW.cxx}, reads all three objects from the disk file and reassigns the central RNG to the {\tt mFOAM} objects inside the two MC event generators; again, 200k MC events are generated, using each of the two MC generators. Since the disk-write in {\tt Main} was done after initialization and before MC generation, the MC series of events from {\tt MainW} should be the same as that from {\tt Main}. This is checked by ``diffing'' two files which record the first 15 events from {\tt Main} and {\tt MainW}, respectively. We find that their content is identical, and this provides an empirical proof that this complicated setup of the two MC event generators, using two {\tt mFOAM} objects and a single central RNG, survives the disk-write and disk-read operations without any loss of its functionality. The compile--link--execute chain for the tandem of the {\tt Main} and {\tt MainW} programs and the ``diffing'' of the output files is realized by the single command `{\tt make testpers}'. The above organization with a single central RNG is well suited for large Monte Carlo projects with many {\tt mFOAM} objects and many Monte Carlo sub-generators served by a single central RNG.
Another interesting feature of the above examples is the implementation of the PDF as the {\tt Density} method of the {\tt TGenMC2} class. In our example the {\tt TGenMC2} class inherits from {\tt TFoamIntegrand}. Consequently, the {\tt Density} function is provided to the {\tt mFOAM} object (which is a member of the {\tt TGenMC2} class) as {\tt this}. In the other MC generator, of the class {\tt TGenMC}, the PDF is defined in an object of the separate class {\tt TFDISTR}; this PDF object is allocated and its pointer assigned to the {\tt mFOAM} object inside the {\tt TGenMC} object during its initialization. The above test demonstrates a few fairly complicated examples of how to organize the relations between several {\tt mFOAM} objects, RNGs and PDFs within an MC project. However, it does not cover all possible situations. In the next section we shall discuss this issue in the general case and we shall argue that objects of the {\tt mFOAM} class are able to cope with all possible scenarios in an efficient and transparent way. \subsection{External RNG and PDF objects and\\ the implementation of persistency} \label{sec:extern} Persistency is undoubtedly a very valuable feature of the objects of the class {\tt TFoam}, and of ROOT objects in general. It is therefore worthwhile to clarify certain features of its implementation, which the user should know and consider before attempting to exploit the persistency of {\tt mFOAM} objects in any advanced/sophisticated applications. As we have seen in the explicit examples of the previous section, the critical issue in this context is the treatment of the two external objects that every object of the class {\tt TFoam} needs in order to function properly: the random-number generator (RNG) object and the object providing the probability distribution function (PDF). These two objects have to be provided to the object of the class {\tt TFoam}.
In the previous section we have shown the most typical case, in which one deals with only one instance (object) of each of the above classes -- this was quite straightforward to organize. In more advanced applications we have to be prepared for situations in which we deal with many (hundreds of) objects of the class {\tt TFoam}, all of them using a single {\em central} RNG object (or a few of them) and possibly using many different PDFs. In this case, if one wants to profit fully from persistency, such a complicated set of interrelated objects of the three types has to emerge fully operational after the disk-write and disk-read operations. This turns out to be a nontrivial task to realize in practice. We claim that the way we interface an object of the class {\tt TFoam} with the two ``satellite'' RNG and PDF objects of the {\tt TRandom} and {\tt TFoamIntegrand} classes allows us to deal with an arbitrarily complicated set of interrelated objects, while correctly implementing persistency in {\em all} such situations. First of all, the two external objects, RNG and PDF, are external in the sense that the {\tt new} operator allocating them is placed outside the {\tt TFoam} code and the object of the class {\tt TFoam} knows only their pointers. Hence, an important question related to the persistency implementation using ROOT (the problem is, however, more general) can be immediately formulated: whose responsibility is it to {\em re-create} these two objects in the process of the disk-read? The first possible solution is that this task is handled by the automatic streamer of the object of the class {\tt TFoam}, which would re-create the RNG and PDF objects% \footnote{With the help of their own streamers.}. Their actual pointers should then be exported to any other objects which legitimately need access to them. The second possibility is to inhibit the re-creation of the RNG and PDF objects by the streamers of the class {\tt TFoam}.
ROOT allows this to be done% \footnote{It is done by means of adding the comment {\tt //!} at the end of the line in which the pointer to an object is declared.}. In the latter case it would be the sole responsibility of the user to store the two external RNG and PDF objects on disk separately, read them separately, and provide their pointers to the object of the class {\tt TFoam} after the disk-read operation. The first option looks attractive because of its simplicity. It is definitely the optimal one in the most common case of just three objects -- hence we would like to implement this scenario as the basic one. However, this solution fails when several objects of the {\tt TFoam} class are served by a single RNG object, a quite common case in bigger MC projects. In this case, the disk-read operation (done by ROOT streamers) will clone many independent identical RNG objects, one for every object of the class {\tt TFoam}. This is clearly undesirable. The situation with a set of several PDF objects serving one or more {\tt TFoam} objects is even more subtle. On the one hand, one may argue that since the distribution of a given PDF object is essentially memorized inside a given {\tt TFoam} object, a genuine one-to-one association between them should be maintained. Hence, the PDF object should be ``owned'' by the {\tt TFoam} object during the disk-write and disk-read, as in the first scenario. On the other hand, we shall sometimes deal with situations in which a single PDF object serves several {\tt TFoam} objects; either because it needs huge memory, or it is very slow in execution (its execution is a two-step process), or it is not a genuine C++ object but rather a ``wrapper'' around another non-OOP (Fortran) program. In such a case it is better to handle PDF objects outside the {\tt TFoam} object, as in the second scenario.
Summarizing, the treatment of the RNG and PDF objects should be quite similar, and the possibility of keeping/controlling both of them outside the {\tt TFoam} object should optionally be available. In other words, we ideally need both of the above solutions, for both the RNG and PDF objects: the first for simple applications and the second for advanced applications. The actual method of handling the external RNG and PDF objects in the {\tt TFoam} class allows the user to implement both scenarios. It is done in the following way: the RNG and PDF objects are always created for the first time outside the {\tt TFoam} object, as already described. Their pointers are transferred into the {\tt TFoam} object as the arguments of {\tt Initialize(RNG,PDF)}. Alternatively, this is done with the help of the two dedicated setters {\tt SetPseRan(RNG)} and {\tt SetRho(PDF)}, before invoking {\tt Initialize()}. At first sight it seems that we follow the first solution, especially since we do not inhibit the re-creation of the ``private copy'' of the RNG and PDF objects by the {\tt TFoam} object (by its streamer) during the disk-read. Indeed, the first solution is available in this way. Restricting the discussion to RNG objects, the second scenario can be implemented as follows: first, the disk-write and disk-read of the RNG object is done by the user; then, after the disk-read of all {\tt TFoam} objects, the pointers to the RNG object inside the {\tt TFoam} objects are reassigned to this RNG object, using a dedicated setter method; see also the examples of Section \ref{examps}. In order to avoid a memory leak, the setter which is used to reassign the pointer to the external RNG has to destroy the existing ``ghost'' RNG object, which has been unnecessarily created during the disk-read operation of every {\tt TFoam} object. The method {\tt ResetPseRan(RNG)} is introduced exactly for this purpose.
The analogous setter method destroying the existing PDF object and reassigning its pointer is the method {\tt ResetRho(PDF)} of the {\tt TFoam} class. Obviously, the RNG and PDF objects are treated in the same way. The above solution is efficient, transparent and useful in almost all cases. It will not be satisfactory in the case where creating and destroying a PDF object takes an extremely long time and/or huge memory (no such problem arises with the RNG objects). In such a case a simple modification of the source code of the {\tt TFoam} class (inhibiting the storage of the PDF object) will be a more economic solution; however, it requires recompiling the {\tt TFoam} library. \section{Conclusions} We present all users of the {\tt FOAM} package with its new version, {\tt mFOAM}. We have paid special attention to making it more user-friendly, so that it provides, with less effort, solutions to many everyday problems in MC simulation. This may, hopefully, attract new users, especially those who already use ROOT in their work. We also hope for feedback from them, to be used in further improvements of the user interface to both {\tt FOAM} and {\tt mFOAM}. \section*{Acknowledgements} We are very grateful to R. Brun and F. Rademakers for their help in achieving better integration of the {\tt mFOAM} code with ROOT, and for many related discussions. We warmly acknowledge the help of Piotr Golonka in setting up the example distribution directory. We thank the ACK Cyfronet AGH Computer Center for granting us access to their supercomputers and PC clusters funded by computational grants: MNiI/SGI2800/IFJ/009/2004, MNiI/HP\_K460-XP/IFJ/009/2004, EU project CrossGrid IST-2001-32243, and KBN Grant SPUB-M 620/E-77/SPB/5PR UE/DZ 224/2002-2004, which were helpful while testing {\tt mFOAM}.
\section{Introduction}\label{sec:introduction} In the last decades, cosmological observations have reached a remarkable level of precision, and have been shown to be compatible with a rather simple picture of the universe: a flat, (almost) Friedmann-Robertson-Walker (FRW) solution of General Relativity\xspace (GR\xspace) which includes a positive cosmological constant $\Lambda$ and a cold dark matter component ($\Lambda$CDM paradigm) \cite{Planck:2018vyg}. Despite the simplicity of the resulting picture, the success of modern cosmology\xspace is due to the interplay between different physical ingredients, each highly non-trivial \cite{Maggiore:2018sht,gorbunov1,gorbunov2,dodelson2020modern, Baumann:2009ds}: (i) GR\xspace, describing how the fluids modelling the matter content of the universe\xspace interact with gravity, in particular providing a framework in which the anisotropies and inhomogeneities that we observe today in the Cosmic Microwave Background (CMB) radiation and the large scale structures can be traced back to primordial ones \cite{Seljak:1996is}; (ii) some detailed microscopic physics, describing the interactions (or lack thereof) among the cosmological fluids (e.g.\ through the relativistic Boltzmann equation \cite{Cercignani2002}, itself a combination of the first two points); (iii) finally, inflation \cite{guth,sato,LINDE1982389} (or some alternative model of the very early universe \cite{Brandenberger:2018wbg}), providing a natural argument for the assignment of the initial conditions and a physical mechanism for the generation of the primordial perturbations acting as seeds of cosmic structure formation (see e.g.\ \cite{Riotto:2002yw} for a review). It is then clear that, in order to extract cosmology in all of its glory from quantum gravity\xspace (QG\xspace), the theory needs to answer questions of very different physical nature.
Any successful candidate QG\xspace theory should be able to reproduce these results, in some approximation, and possibly help to clarify the nature of the exotic forms of matter and energy (e.g.\ the aforementioned cosmological constant and cold dark matter) which, surprisingly enough, compose the vast majority of our universe\xspace. These represent established (in their observational consequences) but still rather mysterious (in their fundamental physical nature) ingredients of modern cosmology. Moreover, QG\xspace is also expected to shed light on the very early phase of the universe, associated to the cosmological singularity predicted by GR\xspace, where the whole semi-classical framework on which modern cosmology is based is likely to break down, and which is simply not accounted for in current models. This is all the more important since cosmology\xspace could be one of the best observational testing grounds for fundamental theories of QG\xspace. To extract cosmology from fundamental QG\xspace formalisms is, in fact, a difficult challenge. This is especially true in QG\xspace approaches based on structures that are not immediately related to continuum fields and that are formulated in a manifestly background independent manner. Even just reproducing the purely GR\xspace part of cosmological models (item (i) above) is far from being a simple task, for the following reasons. First of all, it requires the determination of appropriate continuum \emph{and} classical limits of the theory, two limits that are in general distinct and independent in QG\xspace \cite{Oriti:2018tym}. The continuum limit, in particular, requires control over the collective quantum dynamics and appropriate coarse-graining of the fundamental, microscopic degrees of freedom \cite{Oriti:2018dsg}, both being highly non-trivial tasks, as experience with any quantum many-body system shows.
Second, due to the absence of any manifold or spacetime structure in background independent QG\xspace (a feature inherited directly from the classical background independence of GR\xspace), time evolution and spatial localization in QG\xspace cannot be defined in terms of the usual manifold structures on which effective field theory relies, and can only be intended in a relational sense, i.e., they can only be defined with respect to physical degrees of freedom. This relational strategy, already subtle at the classical level \cite{rovelliobservables}, becomes even trickier at the quantum level (see e.g.\ \cite{Hoehn:2019owq} for a detailed discussion). For quantum gravity theories suggesting an emergent spacetime scenario, this is even more true \cite{Marchetti:2020umh}. Classically, a relational framework can be implemented through the use of \emph{relational observables} \cite{rovelliobservables, Dittrich:2004cb,Dittrich:2005kc} (see \cite{Tambornino:2011vg} for a review), gauge invariant extensions of phase space functions (associated to some physical quantities) which encode their relative change with respect to other phase space functions (associated to other physical quantities). For instance, a common choice to describe relational time evolution consists in minimally coupling the gravitational theory to a massless scalar clock (models of type II in \cite{Giesel:2012rb}), which is in fact well behaved enough to allow, for instance, for a reduced loop quantization of the system \cite{Domagala:2010bm}. When inhomogeneities, arguably the most important quantities for precision cosmology\xspace measurements, are included in the picture, spatial rods also need to be employed. Again, this is commonly achieved by introducing simple matter degrees of freedom to be used as reference fields; in particular, one usually chooses models with four pairs of scalar degrees of freedom and second class constraints (models of type I in the notation of \cite{Giesel:2012rb}).
Once the second class constraints are solved, four degrees of freedom are eliminated and the remaining ones are used as a relational frame \cite{Giesel:2012rb}. An example of such models is the Brown-Kucha\v{r} dust introduced in \cite{Brown:1994py,Bicak:1997bx,Kuchar:1995xn}. It was in fact used in \cite{Giesel:2007wi,Giesel:2007wk} to define a cosmological perturbation theory in terms of relational quantities. The very definition of relational evolution and localization in an emergent QG\xspace theory (whose fundamental degrees of freedom are only indirectly related to continuum and classical quantities), however, is complicated by the fact that the quantities that we would classically manipulate in order to recast their relative change as relational evolution and localization are simply absent in the fundamental description. They are expected to become available only in the continuum limit, after an appropriate coarse graining of the microscopic degrees of freedom \cite{Marchetti:2020umh}. These challenges are obviously not only technical, but also conceptual. In order to overcome them, one needs both guidance from physical non-QG\xspace experience and a flexible QG\xspace formalism. Group Field Theories (GFTs\xspace) \cite{Krajewski:2012aw,Oriti:2011jm} offer both, and are thus promising candidate QG\xspace frameworks in which the possibility of extracting continuum cosmological physics from the fundamental theory is actually very concrete. They are quantum and statistical field theories defined on a group manifold (not interpreted as a \virgolette{spacetime} manifold), typically characterized by non-local and combinatorial interactions.
In this respect, they are generalizations of matrix models \cite{DiFrancesco:1993cyw,David:1992jw} and, together with e.g.\ random tensor models \cite{Gurau:2011xp,Gurau:2016cjo,guraubook}, examples of models defined within a broader Tensorial Group Field Theory (TGFT\xspace) formalism, i.e.\ tensorial field theory models which share the same non-local combinatorial pattern of interactions. More precisely, GFTs\xspace are TGFT\xspace models characterized by tensors whose data can be given a \virgolette{quantum geometric} interpretation. As a result of this structure, GFTs\xspace offer interesting connections to other QG\xspace approaches, like LQG \cite{Rovelli:2004tv,Thiemann:2007pyv,Ashtekar:2004eh}, spin foam models \cite{Perez:2003vx,Perez:2012wv,Finocchiaro:2018hks}, simplicial gravity models \cite{Finocchiaro:2018hks,Reisenberger:1997sk,Freidel:1998pt,Baratin:2011hp} and dynamical triangulations \cite{Ambjorn:2001cv,gorlich2013,Ambjorn2014,Loll:2019rdj}. Because of their field theoretic nature, GFTs\xspace offer tools and techniques that may prove helpful to tackle the above challenges. For instance, renormalization group techniques can be employed to study the continuum limit of the theory and the possible presence of phase transitions \cite{Carrozza:2016vsq,Finocchiaro:2020fhl, Pithis:2020kio,Pithis:2020sxm}. Alternatively, one can employ a mean-field approach \cite{Oriti:2016qtz} to effectively describe the macroscopic dynamics (and also the critical behavior \cite{Pithis:2018eaq,Marchetti:2020xvf}) of the microscopic quantum gravitational many-body system. This perspective, which we will also adopt below, guides the extraction of cosmological physics from the hydrodynamics of GFTs\xspace \cite{Oriti:2016acw}.
In particular, this has been achieved in the recent literature by considering the mean-field dynamics of condensate states \cite{Gielen:2013naa,Gielen:2014ila,Gielen:2014uga,Oriti:2015qva,Gielen:2016dss,Oriti:2016qtz,Pithis:2016cxg,Pithis:2019tvp}, i.e.\ states characterized by the simplest possible collective behavior of the fundamental GFT\xspace quanta (see however \cite{Gielen:2021vdd} for a more \virgolette{state agnostic} approach). Due to their macroscopic properties, these states have also been used to implement an effective notion of relational evolution with respect to a massless scalar field clock in \cite{Marchetti:2020umh}. This effective notion of relationality, being defined only for emergent (and averaged) quantities, bypasses several technical and conceptual difficulties related to its definition for microscopic QG\xspace degrees of freedom. Many intriguing results have been obtained from the effective relational GFT\xspace condensate cosmology framework by making use of an EPRL-like GFT\xspace model (see \cite{Oriti:2016qtz} and Section \ref{sec:gftmodels} for more details). In particular, two regimes of the resulting emergent relational dynamics are worth mentioning \cite{Oriti:2016qtz,Marchetti:2020umh,Marchetti:2020qsq}. First, a continuum classical regime, characterized by a large number of GFT\xspace quanta, which matches the flat Friedmann cosmological dynamics. Second, a bouncing regime, characterized by a possible (depending on the impact of quantum fluctuations and on initial conditions) averaged resolution of the cosmological singularity into a quantum bounce. Moreover, phenomenological studies of the GFT\xspace interactions have connected them to geometric inflation \cite{deCesare:2016rsf} and phantom dark energy \cite{Oriti:2021rvm}.
These results have also been obtained recently using an extended Barrett-Crane (BC) model, which suggests that the emergent behavior of these theories may in fact be universal (at least at this level of approximation and for the few observables that have been considered so far) \cite{Jercher:2021bie}. Motivated by the success of these homogeneous and isotropic results, some pioneering works have taken the first steps towards the study of small inhomogeneities \cite{Gielen:2017eco,Gielen:2018xph,Gerhardt:2018byq}. In particular, in \cite{Gielen:2017eco,Gielen:2018xph} the possibility of producing primordial perturbations from quantum fluctuations of operators was explored. However, the operators studied in \cite{Gielen:2017eco,Gielen:2018xph} did not yet have a solid relational interpretation. In \cite{Gerhardt:2018byq}, instead, the evolution of long wavelength perturbations was studied through the separate universe framework, already applied successfully to Loop Quantum Gravity (LQG\xspace) \cite{Wilson-Ewing:2015sfx}. Here, we aim to generalize the results obtained in \cite{Gerhardt:2018byq} to smaller wavelengths as well, with an effective localization of the perturbations in terms of a proper relational matter frame consisting of four minimally coupled scalar fields. This kind of matter can be seen as the model of type II corresponding to the above model of type I, and it was shown in \cite{Giesel:2016gxq} not to allow for a reduced loop quantization of the system. This, however, is not a restriction for our purposes, since we are only aiming for an effective relational description of the kinematic quantum gravity system, and not for a reduced phase space quantization.
The main objective of this work is to attempt to reproduce part (i) of the above list of ingredients making up modern cosmology, also as concerns inhomogeneous cosmological perturbations (in both the geometry and matter sectors), leaving e.g.\ the (equally important) task of reproducing part (iii) (which was instead the one considered by \cite{Gielen:2017eco,Gielen:2018xph}) to future work. More precisely, in Section \ref{sec:kinematics}, we will review the kinematics of the GFT\xspace models we are interested in (i.e.\ EPRL-like and extended BC) and we will in particular introduce coherent states which are peaked in the \virgolette{pre-matter} variables associated to the minimally coupled massless scalar fields we want to use as a physical frame. In Section \ref{sec:dynamics}, instead, we will specify the classical system we want to reproduce, whose matter content will be characterized by five minimally coupled massless scalar fields, four of which will make up the matter reference frame and will be assumed to give a negligible contribution to the energy-momentum budget of the universe\xspace. The remaining field, whose interplay with geometry dominates the resulting evolution of the universe\xspace, will be assumed to include small inhomogeneities with respect to the matter frame. Moreover, in Section \ref{sec:dynamics}, we will also show how dynamical equations for the macroscopic quantities determining the condensate state can be obtained from a mean-field quantum GFT\xspace dynamics. In Section \ref{sec:evophysicalquantities}, we will study how geometric and matter physical quantities evolve with respect to the matter fields frame, and we will discuss in particular the possibility of matching the results with GR\xspace (in harmonic gauge) in an appropriate limit. The results will be discussed in Section \ref{sec:conclusions}, where we will also point out future research directions.
Finally, in Appendix \ref{app:harmonicgauge} we review how a first order harmonic gauge can be imposed classically, while in Appendix \ref{sec:redwfunctiondynamics} we report the detailed computations leading to the results on the dynamics of Section \ref{sec:dynamics}. \section{GFT\xspace effective relational cosmology: kinematics}\label{sec:kinematics} In this section, we will review the basic notions of the GFT\xspace formalism necessary for the extraction of effective relational cosmological dynamics. More precisely, in Section \ref{sec:gftmodels} we will first briefly review the definition of two models used in the literature for cosmological applications, i.e.\ the EPRL-like and the extended Barrett-Crane (BC) models. Then, we will discuss the Fock structure of these theories, in particular when minimally coupled massless scalar fields are included as additional degrees of freedom. We will then continue in Section \ref{sec:condensates} by introducing a certain class of states which can be associated, at least in some limit, to continuum geometries, and which can in fact also be used to define an effective notion of relationality, thus paving the way to the study of small relational cosmological inhomogeneities. \subsection{GFT models and their Fock structure}\label{sec:gftmodels} As mentioned in Section \ref{sec:introduction}, GFTs\xspace are field theories describing a field $\gftfield:G^d\to \mathbb{C}$. The specific choice of the group manifold $G$, of the dimension $d$ and of the (combinatorial) action $S_{\text{GFT}}$, together with additional restrictions on the fields, characterizes a given GFT model, as we will see explicitly below with two examples. These data are chosen so that the perturbative expansion of the partition function of the theory around the Fock vacuum can be matched with spinfoam or lattice gravity models.
The amplitudes of such an expansion, therefore, can be seen as discretized $\dimension$-dimensional spacetimes and geometries, with the group theoretic data characterizing the GFT being associated to discretized gravitational quantities. As a consequence of this construction, the boundary states, and thus the fundamental quanta, of the theory can be seen as $(\dimension-1)$-simplices. When $\dimension=4$, as we will consider from now on, these states can be seen as quantum tetrahedra whose geometric properties are encoded in the group-theoretic data. In this sense, in the GFT approach to QG the classical spacetime is expected to \virgolette{emerge} from the collective behavior of the fundamental \virgolette{pre-geometric quanta} of the theory. \paragraph{EPRL-like and extended BC. } The extraction of continuum cosmological physics from GFTs\xspace was first obtained by considering an EPRL-like GFT model (see e.g.\ \cite{Oriti:2016qtz}). However, it has been recently shown \cite{Jercher:2021bie} that the vast majority of the results obtained within the EPRL-like model can be similarly obtained in an extended BC model. This suggests that while the two models differ\footnote{For instance, the EPRL model, contrary to the BC model, incorporates explicitly the Barbero-Immirzi parameter.} e.g.\ in the implementation of the simplicity constraint (see below), they still belong to the same continuum universality class, a feature already emphasized in \cite{Dittrich:2021kzs}. Here we will briefly review the kinematic structure of these models. By kinematic structure we mean the kind of additional restrictions that are imposed on the GFT field in order to satisfy the so-called closure and simplicity constraints. Geometrically, these constraints represent the fact that the bivectors associated to the faces of the fundamental tetrahedra sum to zero and are simple, respectively.
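Schematically (in one common convention, with normalizations and the role of the Hodge dual varying across models), for the bivectors $B_I$ associated to the four faces of a tetrahedron with timelike normal $X$, these two conditions read
\begin{equation*}
\sum_{I=1}^{4}B_I=0\,,\qquad X_\mu\,(\star B_I)^{\mu\nu}=0\quad\forall\, I\,,
\end{equation*}
i.e.\ the faces close, and each bivector lies in the hypersurface orthogonal to $X$, so that it can be written as a wedge product of edge vectors.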
These are nothing but the discrete counterparts of the imposition of gauge invariance and of the Plebanski geometricity condition in the continuum, respectively. How these conditions are imposed in a GFT characterizes the specific GFT model one is constructing. \begin{description}[font=\itshape] \item[{Extended BC model}:] The extended BC model considered in \cite{Jercher:2021bie} is a Lorentzian version of the model defined in \cite{Baratin:2011tx}, which is in turn a generalization of the original BC model \cite{Barrett:1999qw,DePietri:1999bx,Perez:2000ec} which allows for a non-ambiguous (i.e.\ commuting) imposition of the closure and simplicity constraints. The group domain of the field is given by four copies of $\text{SL}(2,\mathbb{C})$, but it is extended to include a timelike normal $X\in \hyperbolic$, where $\hyperbolic=\text{SL}(2,\mathbb{C})/\text{SU}(2)$ is the $3$-hyperboloid, so $\gftfield(G_I)\to \gftfield(G_I;X)$, where $\gftfield(G_I;X)\equiv \gftfield(G_1,\dots, G_4;X)$, with each $G_I$ being an element of $\text{SL}(2,\mathbb{C})$. Simplicity and closure are then defined with respect to the normal $X$ as follows: \begin{subequations}\label{equations:closuresimplbc} \begin{align}\label{eqn:simplicitybc} \gftfield(G_I;X)&=\gftfield(G_1g_1,\dots, G_4g_4;X)\,,\quad &\forall&\, g_I\in \text{SU}(2)_X\,,\\\label{eqn:closurebc} \gftfield(G_I;X)&=\gftfield(G_I h^{-1};h\cdot X)\,,\quad&\forall&\, h\in\text{SL}(2,\mathbb{C})\,, \end{align} \end{subequations} where $\text{SU}(2)_X$ is the $\text{SU}(2)$ subgroup of $\text{SL}(2,\mathbb{C})$ stabilizing $X$. From the above expressions it is clear that the normal $X$ serves only to define the above constraints consistently. As such, as we will mention below, it is not a dynamical variable. \item[{EPRL-like model}:] For cosmological applications, an EPRL-like GFT model has been implemented by following slightly different steps from the above extended BC model.
Indeed, in the approach followed by \cite{Gielen:2013naa,Oriti:2016qtz}, one starts from a GFT defined on $\groupdomain=\text{SU}(2)$, with the details of the embedding of this $\text{SU}(2)$ subgroup inside $\text{SL}(2,\mathbb{C})$ (characterizing the appropriate imposition of the simplicity constraint of the model) being encoded in general in the kinetic and interaction terms of the action\footnote{In principle, the simplicity constraint can be imposed at the level of the kinetic term, of the interactions, or both \cite{Gielen:2013naa}. Each of these choices will in general result in a different quantum theory. As we will see below, however, in this paper the precise details of the interaction and kinetic kernels will not be important, so our results encompass all the above choices.}. This still guarantees that the amplitudes of the perturbative expansion of the partition function of the model match those of the EPRL spinfoam model, but it allows the use of kinematic structures which are easier to handle and, importantly, which offer a more direct geometric interpretation. Indeed, when the closure constraint is imposed similarly to equation \eqref{eqn:closurebc} (but without an explicit notion of normal): \begin{equation}\label{eqn:closureeprl} \gftfield(\gvariables)=\gftfield(\gvariables h)\,,\qquad \forall h\in \text{SU}(2)\,, \end{equation} where $\gftfield(\gvariables)\equiv \gftfield(g_1,\dots, g_4)$ with each $g_I\in \text{SU}(2)$, the resulting boundary states and fundamental quanta of the theory can be seen as open spin-networks, i.e.\ nodes from which four links emanate, decorated with the equivalence class of geometrical data $[\{\gvariables \}]=\{\{\gvariables h\},h\in \text{SU}(2)\}$. This correspondence between the fundamental structures of the theory and spin-networks allows for a straightforward connection with LQG, which may in particular prove helpful to gain insights for the extraction of continuum physics \cite{Oriti:2017ave}.
\end{description} As mentioned above, we will from here on explain the basic ideas underlying the extraction of cosmology from GFTs\xspace by working within an EPRL-like model, for simplicity. However, we will emphasize similarities and differences between the two models where important, and we will resort to a unified notation where useful. \paragraph{Group representation basis.} The interpretation we have provided above of the boundary states of the theory as spin-network states becomes even clearer when working in the spin representation. This can be done by expanding the field satisfying \eqref{eqn:closureeprl} on a basis of functions on $L^2(\groupdomain^4/\groupdomain)$, with $\groupdomain=\text{SU}(2)$. By denoting $\vspinrep=\{\spin_I,m_I,\iota\}$ the labels characterizing these basis functions, we have \begin{equation}\label{eqn:gftfieldspinexpansion} \gftfield(\gvariables)=\sum_{\iota}\sum_{\spin_I}\sum_{m_I,n_I}\gftfield^{\spin_1,\dots,\spin_4;\iota}_{m_1,\dots,m_4}\left[\prod_{i=1}^4\sqrt{d(\spin_i)}D^{\spin_i}_{m_in_i}(g_i)\right]\itwiner^{\spin_1,\dots, \spin_4;\iota}_{n_1,\dots,n_4}\equiv \sum_{\vspinrep} \gftfield_{\vspinrep}\psi_{\vspinrep}(\gvariables)\,, \end{equation} where $\itwiner^{\spin_1,\dots, \spin_4;\iota}_{n_1,\dots,n_4}$ is an $\text{SU}(2)$ intertwiner obtained from the right-diagonal invariance of the GFT field. Precisely because of the choice $\groupdomain=\text{SU}(2)$, the boundary states are in fact clearly decorated with spin-network vertex data $\vspinrep$. More precisely, $\spin_I$ and $m_I$ are respectively the spin and angular momentum projection associated to the open edges of a given vertex, while $\iota$ represents the intertwiner quantum number associated to the vertex itself. Of course, a similar decomposition can be performed for the GFT field operator of the extended BC model, the difference in that case being the field domain itself.
The representation theory of $\groupdomain=\text{SL}(2,\mathbb{C})$ is clearly more involved than that of $\text{SU}(2)$. However, for the purposes of this paper, we will only need some basic facts. Unitary irreducible representations of $\text{SL}(2,\mathbb{C})$ are labelled by $(\rho,\nu)$, with\footnote{Strictly speaking, this is only true for the \emph{principal series}, to which we will restrict here. For more details see \cite{Jercher:2021bie} and references therein. } $\rho\in\mathbb{R}$ and $\nu\in \mathbb{Z}/2$. The imposition of the simplicity constraint \eqref{eqn:simplicitybc} then forces $\nu=0$ \cite{Jercher:2021bie}. As a result, once the normal is integrated away, one finds \cite{Jercher:2021bie} \begin{equation}\label{eqn:integratedbcexpansion} \gftfield(G_I)\equiv \int_{\hyperbolic}\diff X \gftfield(G_I;X)=\int\diff\rho_I\sum_{\spin_I, l_I}\sum_{m_I,n_I}\gftfield^{\rho_I}_{\spin_I m_I}\left[\prod_{i=1}^4(4\rho_i^2)D^{(\rho_i,0)}_{\spin_i m_i l_in_i}(\gvariables)\right]B^{\rho_I}_{l_I n_I}\,, \end{equation} where $\diff\rho_I\equiv \prod_{i=1}^4\diff\rho_i$, $D^{(\rho_i,0)}_{\spin_i m_i l_in_i}(\gvariables)$ are representation matrices, with $\spin_i$ and $l_i$ being positive half-integers and $m_i\in \{-\spin_i,\dots, \spin_i\}$, $n_i\in \{-l_i,\dots,l_i\}$. Finally, $B^{\rho_I}_{l_I n_I}$ is the Barrett-Crane intertwiner, \begin{equation} B^{\rho_I}_{l_I n_I}\equiv \int_{\hyperbolic}\diff X \prod_{i=1}^4D^{(\rho_i,0)}_{l_i n_i00}(X)\,. \end{equation} Notice that, since the intertwiner space is one-dimensional, there is no intertwiner label $\iota$ in this case, contrary to what we have seen in equation \eqref{eqn:gftfieldspinexpansion}. Despite this difference, equations \eqref{eqn:gftfieldspinexpansion} and \eqref{eqn:integratedbcexpansion} share many obvious similarities. \paragraph{Fock structure.} GFTs\xspace can naturally be formulated in the language of second quantization.
To this purpose, one defines the field operators $\gftfieldop$, $\gftfieldop^\dagger$ satisfying the commutation relations: \begin{subequations} \begin{align}\label{eqn:basiccommutator} [\gftfieldop(\gvariables),\gftfieldop^\dagger(\gvariables')]&=\mathbb{I}_{\groupdomain}(\gvariables,\gvariables')\,,\\ [\gftfieldop(\gvariables),\gftfieldop(\gvariables')]&=[\gftfieldop^\dagger(\gvariables),\gftfieldop^\dagger(\gvariables')]=0\,, \end{align} \end{subequations} where $\mathbb{I}_\groupdomain(\gvariables,\gvariables')$ is a Dirac delta distribution on the space $\groupdomain^4/\groupdomain$, with $\groupdomain=\text{SU}(2)$. For the extended BC model, instead, the commutation relations read \begin{subequations} \begin{align}\label{eqn:basiccommutatorbc} [\gftfieldop(G_I;X),\gftfieldop^\dagger(G_I';X')]&=\mathbb{I}_{\text{BC}}(G_I;X,G_I';X')\,,\\ [\gftfieldop(G_I,X),\gftfieldop(G_I',X')]&=[\gftfieldop^\dagger(G_I,X),\gftfieldop^\dagger(G_I',X')]=0\,, \end{align} \end{subequations} where, similarly as before, $\mathbb{I}_{\text{BC}}(G_I;X,G_I';X')$ is the identity on the space $L^2(\text{SL}(2,\mathbb{C})^4\times \hyperbolic)$ which preserves the symmetries \eqref{equations:closuresimplbc}. Upon quantization, the modes $\gftfield_{\vspinrep}$ of the decomposition \eqref{eqn:gftfieldspinexpansion} become annihilation and creation operators $\gftfieldop_{\vspinrep}$ and $\gftfieldop^\dagger_{\vspinrep}$ for spin-network vertices, which satisfy \begin{equation} [\gftfieldop_{\vspinrep},\gftfieldop^\dagger_{\vspinrep'}]=\delta_{\vspinrep,\vspinrep'}\,,\qquad [\gftfieldop_{\vspinrep},\gftfieldop_{\vspinrep'}]=[\gftfieldop^\dagger_{\vspinrep},\gftfieldop^\dagger_{\vspinrep'}]=0\,.
\end{equation} The Fock space is then constructed as usual from the repeated action of the creation operators on the vacuum state $\ket{0}$, annihilated by all the $\gftfieldop_{\vspinrep}$; its $n$-particle states satisfy \begin{align*} \gftfieldop_{\vspinrep}\ket{n_{\vspinrep}}&=\sqrt{n_{\vspinrep}}\ket{n_{\vspinrep}-1}\,,\\ \gftfieldop^\dagger_{\vspinrep}\ket{n_{\vspinrep}}&=\sqrt{n_{\vspinrep}+1}\ket{n_{\vspinrep}+1}\,. \end{align*} It is possible to show that the Fock space constructed in this way shares many similarities with the kinematical Hilbert space of LQG \cite{Oriti:2013aqa}, since it encodes very similar degrees of freedom. As usual in the second quantized approach, one can construct quantum observables out of the quantized field operators which act on states of the Fock space. The simplest example of such operators is the \emph{number operator}\footnote{For the extended BC case, as a consequence of the fact that the GFT field operator depends on the normal $X$, which is however non-dynamical, equation \eqref{eqn:numberoperator} becomes \begin{equation*} \op{N}\equiv \int_{\hyperbolic}\diff X\int\intmeasure{G_I}\gftfieldop^\dagger(G_I;X)\gftfieldop(G_I;X)\mathperiod \end{equation*} } \begin{equation}\label{eqn:numberoperator} \op{N}\equiv \int\intmeasure{\gvariables}\gftfieldop^\dagger(\gvariables)\gftfieldop(\gvariables)\,, \end{equation} whose eigenvalues characterize different sectors of the GFT Fock space, since they count the number of quanta present in a given state.
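In the spin representation, the number operator \eqref{eqn:numberoperator} takes a simple diagonal form (a standard second-quantization statement, which we spell out for later convenience):
\begin{equation*}
\op{N}=\sum_{\vspinrep}\gftfieldop^\dagger_{\vspinrep}\gftfieldop_{\vspinrep}\,,\qquad \op{N}\ket{n_{\vspinrep}}=n_{\vspinrep}\ket{n_{\vspinrep}}\,,
\end{equation*}
so that its eigenvalues indeed count the number of spin-network vertices excited over the Fock vacuum.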
A general second quantized operator then reads \begin{equation}\label{eqn:secondquantizationoperator} \op{O}_{m+n}\equiv \int (\diff \gvariables)^m(\diff h_{I})^n\,O_{m+n}(\gvariables^1,\dots, \gvariables^m,h_{I}^1,\dots, h_{I}^n)\prod_{i=1}^m\gftfieldop^\dagger(\gvariables^i)\prod_{j=1}^n\gftfieldop(h_{I}^j)\,, \end{equation} where the matrix elements $O_{m+n}$ can be obtained either from quantum simplicial considerations, or, in the case $\groupdomain=\text{SU}(2)$ that we are considering here, from the LQG matrix elements between spin-network vertex states. This construction is of course independent of the specific representation of the Hilbert space one chooses to work with. For instance, a generic one-body operator, characterized by one creation and one annihilation operator, can be written as \begin{equation}\label{eqn:twobodyspin} \op{O}_2=\sum_{\vspinrep\vspinrep'}O_2(\vspinrep,\vspinrep')\gftfieldop^\dagger_{\vspinrep}\gftfieldop_{\vspinrep'}\,. \end{equation} One-body operators, typically associated to macroscopic observables, are clearly of great interest for the extraction of emergent continuum physics, and will thus be our focus from here on. \paragraph*{Coupling massless scalar fields.} As discussed in Section \ref{sec:introduction}, in this paper we aim to describe relational cosmological inhomogeneities. To this purpose, it is necessary to identify a set of relational rods and a clock to use as a physical frame. A very simple choice of such a physical frame consists of four minimally coupled free massless scalar fields\footnote{Indeed, as we will see in Section \ref{sec:dynamics}, this choice remarkably simplifies the quantum dynamics.}. Here, therefore, we introduce \virgolette{pre-matter} data alongside the purely geometric data discussed above, which by construction can be associated to $n$ scalar fields.
Indeed, the inclusion of such pre-matter degrees of freedom into the GFT is performed in such a way that the perturbative expansion of the GFT partition function matches the discrete path-integral of the simplicial gravity model minimally coupled with the massless scalar fields one wants to reproduce (see \cite{Li:2017uao} for more details). This clearly changes the precise form of the GFT action, which now has to take into account the discretized matter-geometry coupling, but it also impacts the kinematics of the GFT model. Indeed, consider as an example the case in which we want to include one single scalar field. Since the scalar field is \virgolette{discretized} on the simplicial structures corresponding to the GFT boundary states, the GFT field domain itself has to be enlarged in order to account for the additional matter data $\framefield\in\mathbb{R}$: \begin{equation}\label{eqn:includingonescalarfield} \gftfieldop(\gvariables)\quad\longrightarrow\quad\gftfieldop(\gvariables,\framefield)\,, \qquad \mathcal{H}_1=L^2(\text{SU}(2)^4/\text{SU}(2))\to L^2(\text{SU}(2)^4/\text{SU}(2)\times \mathbb{R})\,, \end{equation} where $\mathcal{H}_1$ is the one-particle Hilbert space of the model. Notice that both these modifications to the dynamics and kinematics happen regardless of the precise GFT model (e.g.\ EPRL-like or extended BC). In particular, the arguments presented here in the exemplifying case of the EPRL-like model apply identically to the extended BC one. Clearly, if one wants to introduce more than one (say $n$) minimally coupled massless scalar fields, the group field operator becomes $\gftfieldop(\gvariables,\framefield^a)\equiv \gftfieldop(\gvariables,\framefield^1,\dots,\framefield^n)$, with $a=1,\dots,n$.
Of course, the commutation relation in \eqref{eqn:basiccommutator} has to be changed consistently, so that \begin{equation} \left[\gftfieldop(\gvariables,\framefield^a),\gftfieldop^\dagger\left(h_I,(\framefield')^a\right)\right]=\mathbb{I}_G(\gvariables,h_I)\delta^{(n)}\left(\framefield^a-(\framefield')^a\right)\,. \end{equation} Importantly, this change in the kinematic structure of the Fock space is also reflected in the second quantized operators, which now involve integrals over all the possible values of $\framefield^a\in\mathbb{R}^n$. For instance, the number operator reads \begin{subequations} \begin{equation} \op{N}=\int\diff^n\framefield\int\intmeasure[]{\gvariables}\gftfieldop^\dagger(\gvariables,\framefield^a)\gftfieldop(\gvariables,\framefield^a)\,. \end{equation} A crucial quantity for describing cosmological geometries is the volume operator \begin{equation}\label{eqn:volumeoperator} \op{V}=\int\diff^n\framefield\int\diff \gvariables\intmeasure[]{\gvariables'}\gftfieldop^\dagger(\gvariables,\framefield^a)V(\gvariables,\gvariables')\gftfieldop(\gvariables',\framefield^a)\,, \end{equation} whose matrix elements $V(\gvariables,\gvariables')$ are defined from those of the first quantized volume operator in the group representation\footnote{Such an operator is diagonal in the spin representation, with eigenvalues $\sim \spin^{3/2}$ for the EPRL-like model we are considering here and $\sim \rho^{3/2}$ for the extended BC model.}. The presence of \virgolette{pre-matter} data allows for the construction of a set of observables naturally related to them, built from polynomials in, and derivatives with respect to, $\chi^a$ for each $a=1,\dots, n$.
In particular, the two fundamental, self-adjoint ones that can be obtained in this way are the \virgolette{scalar field operator} and the \virgolette{momentum operator} \cite{Oriti:2016qtz}: \begin{align} \label{eqn:scalarfieldoperator} \framefieldop^b&\equiv \int\diff^n\framefield\int\intmeasure[]{\gvariables}\framefield^b\gftfieldop^\dagger(\gvariables,\framefield^a)\gftfieldop(\gvariables,\framefield^a)\,,\\ \label{eqn:momentumoperator} \framemomop_b&=\frac{1}{i}\int\diff^n\framefield\int\intmeasure[]{\gvariables}\left[\gftfieldop^\dagger(\gvariables,\framefield^a)\left(\frac{\partial}{\partial\framefield^b}\gftfieldop(\gvariables,\framefield^a)\right)\right], \end{align} \end{subequations} whose expectation values on appropriate semi-classical and continuum states should be associated, respectively, to the scalar fields themselves and to their momenta, which are at the core of a relational definition of dynamics and evolution \cite{Marchetti:2020umh}, as we will briefly review below. \subsection{Continuum geometries, effective relationality and GFT condensates}\label{sec:condensates} In order to describe the relational evolution of small cosmological inhomogeneities, one necessary step is to identify a class of quantum states which admit some \virgolette{proto-geometric} interpretation in terms of approximate continuum geometries. This allows one to define an effective notion of relational evolution, whose general definition in a \virgolette{pre-geometric} sector of an emergent quantum gravity theory (such as a GFT) is instead technically and conceptually very complicated \cite{Marchetti:2020umh}, as we have discussed in Section \ref{sec:introduction}. Such \virgolette{proto-geometric} states are expected to be the result of some form of coarse-graining over the fundamental, microscopic degrees of freedom, and thus to show some form of collective behavior. In a sense, they are associated to a hydrodynamic description of the underlying quantum gravity model.
The simplest form of such collective behavior is shown by \emph{coherent} (or, more commonly, \emph{condensate}) states, where each fundamental quantum is associated to the same condensate wavefunction:\label{asspage:KS1a} \begin{subequations} \begin{align}\label{eqn:coherentstates} \ket{\wfunction}&=\normalcoeff_{\wfunction}\exp\left[\int \diff^n \framefield\int\intmeasure[]{\gvariables}\wfunction(\gvariables,\framefield^a)\gftfieldop^\dagger(\gvariables,\framefield^a)\right]\ket{0}\\ &=\normalcoeff_\wfunction\exp\left[\int \diff^n\framefield\sum_{\vspinrep}\wfunction_{\vspinrep}(\framefield^a)\gftfieldop_{\vspinrep}^\dagger(\framefield^a)\right]\ket{0}\,,\nonumber \end{align} \end{subequations} where \begin{subequations} \begin{align} \normalcoeff_{\wfunction}&\equiv e^{-\Vert \wfunction\Vert^2/2}\,,\\ \Vert\wfunction\Vert^2&=\int \diff^n \framefield\int\intmeasure[]{\gvariables}\vert\wfunction(\gvariables,\framefield^a)\vert^2\equiv \braket{ \op{N}}_{\wfunction}\,. \end{align} \end{subequations} By definition, such coherent states are eigenstates of the field operator: \begin{equation}\label{eqn:eigenstateannihilation} \gftfieldop(\gvariables,\framefield^a)\ket{\wfunction}=\wfunction(\gvariables,\framefield^a)\ket{\wfunction}\,,\qquad \gftfieldop_{\vspinrep}(\framefield^a)\ket{\wfunction}=\wfunction_{\vspinrep}(\framefield^a)\ket{\wfunction}\,. \end{equation} States of the form \eqref{eqn:coherentstates} have been used in past literature to derive the intriguing results, mentioned in Section \ref{sec:introduction}, about the extraction of homogeneous and isotropic cosmological physics from GFTs\xspace. Moreover, they allow for a simple implementation of an effective description of relational quantities, as we explain below. \paragraph{Symmetries of the condensate wavefunction.} Before discussing how an effective relational framework can be implemented, let us mention some important symmetry assumptions that are often made on the condensate wavefunction.
Let us also emphasize that the imposition of symmetry properties on the condensate wavefunction is conceptually different from a symmetry reduction procedure. Indeed, the former is a condition on a collective macroscopic quantity, while the latter acts on the fundamental microscopic degrees of freedom (though, technically, in the case of a coherent state like the one in \eqref{eqn:coherentstates} the collective wavefunction is also the wavefunction of each microscopic tetrahedron). A first important symmetry that is imposed on the condensate wavefunction is a diagonal left-invariance: \begin{equation} \wfunction(\gvariables,\framefield^a)=\wfunction(h\gvariables,\framefield^a)\,,\qquad \forall h\in\text{SU}(2)\,. \end{equation} This condition can be seen as an average over the relative embedding of the tetrahedron in $\mathfrak{su}(2)$ \cite{Oriti:2016qtz}. As a consequence of this imposition, the domain of the condensate wavefunction is isomorphic to the space of all the spatial metrics at a point, or, equivalently, to minisuperspace \cite{Gielen:2014ila}. The very same result also holds in the case of the extended BC model, with a similar averaging procedure (now over all configurations involving a preferred hypersurface normal, and thus only for the condensate wavefunction integrated with respect to the normal $X$) \cite{Jercher:2021bie}. An additional assumption that is often imposed on the condensate wavefunction is its isotropy \cite{Oriti:2016qtz} (assumption \ref{ass:ks2})\label{asspage:ks2a}.
This drastically simplifies the continuum dynamics, since the condensate wavefunction effectively turns out to depend on only one spin label $\spin$: \begin{equation}\label{eqn:isotropycond} \wfunction(\gvariables,\framefield^a)=\sum_{\spin=0}^\infty\wfunction_\spin(\framefield^a){\itwiner^*}^{\spin\spin\spin\spin,\iota_+}_{m_1m_2m_3m_4}\itwiner^{\spin\spin\spin\spin,\iota_+}_{n_1n_2n_3n_4}\sqrt{\spinrepdim^4}\prod_{i=1}^4\wmatrix^\spin_{m_in_i}(\gvariables)\,, \end{equation} where $\spinrepdim=2\spin+1$ and $\iota_+$ labels the intertwiner with the largest volume eigenvalue compatible with $\spin$. Therefore, the condensate wavefunction in the spin representation reads \begin{equation}\label{eqn:sigmachi} \wfunction_{\vspinrep}(\framefield^a)\equiv \wfunction_{\{\spin,\vec{m},\iota_+\}}(\framefield^a)=\wfunction_{\spin}(\framefield^a)\overline{\mathcal{I}}^{\spin\spin\spin\spin,\iota_+}_{m_1m_2m_3m_4}\,. \end{equation} Importantly, a similar result also holds for the extended BC model, with the dynamical part of the condensate wavefunction $\wfunction_{\rho}(\framefield^a)$ effectively depending on the continuous representation label $\rho$ \cite{Jercher:2021bie}. As a consequence, it is useful to define a label $\upsilon$ which can be identified with $\rho$ or $\spin$ depending on the specific model chosen (extended BC or EPRL-like, respectively). For instance, in terms of this new label, we can write the expectation values of the number and volume operators on an isotropic coherent state as \begin{equation}\label{eqn:volumenumberspin} \braket{\op{N}}_{\wfunction}=\SumInt_{\upsilon} \vert \wfunction_{\upsilon}(\framefield^a)\vert^2\,,\qquad \braket{\op{V}}_{\wfunction}=\SumInt_{\upsilon} V_{\upsilon}\vert \wfunction_{\upsilon}(\framefield^a)\vert^2\,, \end{equation} with the $\SumInt$ symbol indicating that, depending on whether $\upsilon=\rho$ or $\upsilon=\spin$, the right-hand sides of the above equations will involve an integral or a sum, respectively.
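As a simple numerical illustration (ours, not taken from the literature) of how these sums are evaluated in practice, consider the EPRL-like case $\upsilon=\spin$, where $\SumInt$ is a sum over half-integer spins. The $\spin^{3/2}$ scaling of the volume eigenvalues is the one quoted above; the overall constant `v0` and the condensate profile `sigma` below are purely illustrative choices:

```python
import numpy as np

# Toy evaluation of <N> and <V> for an isotropic condensate, EPRL-like
# case (upsilon = j, so SumInt is a discrete sum). V_j ~ v0 * j**1.5 is
# the scaling quoted in the text; v0 and sigma_j are illustrative.
v0 = 1.0
j = np.arange(0.5, 30.0, 0.5)              # half-integer spins
sigma = np.exp(-(j - 5.0)**2 / 4.0)        # toy profile |sigma_j|, peaked at j = 5

N = np.sum(np.abs(sigma)**2)               # <N> = sum_j |sigma_j|^2
V = np.sum(v0 * j**1.5 * np.abs(sigma)**2) # <V> = sum_j V_j |sigma_j|^2
print(N, V, V / N)                         # V/N is a weighted average of j^{3/2}
```

For a profile peaked around $\spin\simeq 5$, the ratio $\braket{\op{V}}/\braket{\op{N}}$ is close to $v_0\, 5^{3/2}$, i.e.\ the average volume per quantum.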
\paragraph{Effective relationality in GFT: CPSs.}\label{asspage:ks3a} Let us now discuss a way to implement an effective relational description of physical quantities in the GFT formalism. As we have mentioned above, in \cite{Marchetti:2020umh} an effective framework for the relational evolution of geometric observables with respect to a scalar field clock was constructed by making use of Coherent Peaked States (CPSs). As the name suggests, these are coherent states of the form \eqref{eqn:coherentstates} whose wavefunction, however, has strong peaking properties on the scalar field variables. For instance, for a single scalar field clock, we would have \begin{equation}\label{eqn:wavefunctioncps} \wfunction_{\cpeakwidth}(\gvariables,\framefield^0)\equiv \peakfunc_{\cpeakwidth}(\gvariables;\framefield^0-\peakvalue^0,\cpeakphase)\redwfunction(\gvariables,\framefield^0)\,, \end{equation} where the peaking properties around $\peakvalue^{0}$ are encoded in the \emph{peaking function} $\peakfunc_{\cpeakwidth}$ with a typical width given by $\cpeakwidth$. Of course, in order for the peaking properties to be effective, one wants $\cpeakwidth$ to be very small, $\cpeakwidth\ll 1$. However, one cannot just take $\cpeakwidth\to 0$, because, as a consequence of the Heisenberg uncertainty principle, the fluctuations of the operator $\framemomop_0$ defined in equation \eqref{eqn:momentumoperator} would become arbitrarily large, which is certainly not ideal if one wants to interpret the scalar field as a classical clock, at least in some appropriate limit. In order to guarantee the existence of such a classical clock regime, in \cite{Marchetti:2020umh,Marchetti:2020qsq} the condensate wavefunction \eqref{eqn:wavefunctioncps} was also assumed to depend on the parameter $\cpeakphase$, satisfying $\cpeakwidth\cpeakphase^2\gg 1 $.
As a concrete example of the above peaking function, one can consider a Gaussian \cite{Marchetti:2020umh,Marchetti:2020qsq}: \begin{equation}\label{eqn:peakingfunction} \peakfunc_{\cpeakwidth}(\framefield^0-\peakvalue^{0},\cpeakphase)\equiv \normalcoeff_{\cpeakwidth}\exp\left[-\frac{(\framefield^0-\peakvalue^{0})^2}{2\cpeakwidth}\right]\exp[i\cpeakphase(\framefield^0-\peakvalue^{0})]\,, \end{equation} with a normalization constant $\normalcoeff_{\cpeakwidth}$ and where, as a first ansatz, it was assumed that the peaking function is independent of the group variables $\gvariables$. Therefore, the \emph{reduced wavefunction} $\redwfunction$ (which is assumed not to spoil the peaking properties of $\peakfunc_{\cpeakwidth}$) encodes all the geometric properties of the state, and in particular exhibits the symmetries discussed above. Its specific form will be determined by dynamical considerations in Section \ref{sec:dynamics}. By construction, these states satisfy, in the limit of small $\cpeakwidth$ (see \cite{Marchetti:2020umh,Marchetti:2020qsq} for more details): \begin{equation} \braket{\hat{\framefield}^0}_{\sigma}\equiv \frac{\braket{\framefieldop^0}_{\sigma}}{\braket{\hat{N}}_{\sigma}}\simeq \peakvalue^0\,, \end{equation} where the expectation value in the above equation is computed with respect to states with condensate wavefunction given by \eqref{eqn:wavefunctioncps}. Notice that we are defining a scalar field operator here through the intensive version of the second quantized operator $\framefieldop^0$ in equation \eqref{eqn:scalarfieldoperator}. As a consequence of the above equation, changes with respect to $\peakvalue^0$ can be associated to evolution with respect to the \emph{averaged} clock.
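Both the peaking property $\braket{\hat{\framefield}^0}_\sigma\simeq \peakvalue^0$ and the role of the condition $\cpeakwidth\cpeakphase^2\gg 1$ can be illustrated numerically in a single-mode toy model (our sketch: the Gaussian below is the peaking function above, while the slowly varying reduced wavefunction and all parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Toy single-mode check of the CPS construction. The Gaussian is the
# peaking function of the text; the reduced wavefunction "red" and all
# parameter values are illustrative choices.
eps = 1e-3            # peaking width, eps << 1
pi0 = 200.0           # phase parameter, eps * pi0**2 = 40 >> 1
x0 = 1.7              # peak value of the clock

chi = np.linspace(x0 - 1.0, x0 + 1.0, 400001)
dchi = chi[1] - chi[0]
peak = np.exp(-(chi - x0)**2 / (2 * eps)) * np.exp(1j * pi0 * (chi - x0))
red = 1.0 + 0.3 * chi                 # toy slowly varying reduced wavefunction
sigma = peak * red
dens = np.abs(sigma)**2

# <chi^0> ~ x0, up to corrections of order eps
mean_chi = np.sum(chi * dens) / np.sum(dens)

# relative momentum fluctuation ~ 1/sqrt(2 * eps * pi0**2), small for eps*pi0**2 >> 1
sigma /= np.sqrt(np.sum(dens) * dchi)
dsigma = np.gradient(sigma, dchi)
p_mean = np.real(np.sum(np.conj(sigma) * (-1j) * dsigma) * dchi)
dp = np.sqrt(np.sum(np.abs(dsigma)**2) * dchi - p_mean**2)
print(mean_chi, p_mean, dp / p_mean)
```

With these values one finds $\langle\chi^0\rangle$ very close to $x^0=1.7$ and a relative momentum fluctuation of about $1/\sqrt{2\cpeakwidth\cpeakphase^2}\approx 0.11$, which shrinks as $\cpeakwidth\cpeakphase^2$ is increased, consistently with the classical clock regime discussed above.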
In this sense, the CPSs realize a notion of effective relational evolution of averaged geometric quantities with respect to the physical clock ($\peakvalue^0$), with the effective approach becoming increasingly accurate the smaller $\cpeakwidth$ is taken and the larger $\braket{\op{N}}_{\sigma}$ grows. Indeed, it has been shown in \cite{Marchetti:2020qsq} that this latter condition, typically associated to an emergent regime and dynamically related to a very large value of the relational clock $\vert \peakvalue^0\vert$, is what ultimately allows for a suppression of fluctuations of both geometric and clock variables. In this large $\braket{\op{N}}_{\sigma}$ regime, the evolution of expectation values of geometric quantities with respect to $\peakvalue^0$ can thus be interpreted as classical relational dynamics\footnote{\label{footnote:fluctuations}On the other hand, when $\braket{\op{N}}_{\sigma}$ cannot be taken to be large, quantum fluctuations of both geometric and clock variables may become large, thus suggesting that the averaged evolution at the core of the effective relational approach may not be reliable anymore \cite{Marchetti:2020qsq}.}. The physical interpretation of the CPSs is then clear: they assign a distribution of spatial geometries to each value of the peaking parameter $\peakvalue^0\in\mathbb{R}$, i.e., to each value the scalar field clock takes on average. As such, in the emergent limit of a large number of particles (and $\cpeakwidth\ll 1$), one can see them as the quantum equivalent of the leaves of a $\framefield$-foliation of spacetime. A similar construction can be performed if one is also interested in describing relational inhomogeneities of physical quantities. Assume that the spacetime dimension is $\dimension$, with $\dimension\le n$.
Then, the condensate state with condensate wavefunction given by \begin{equation}\label{eqn:patchstates} \wfunction_{\cpeakwidth^{\mu},\peakphase_\mu;\peakvalue^\mu}(\gvariables,\framefield^a)=\left[\prod_{\mu=0}^{\dimension-1}\peakfunc_{\cpeakwidth^\mu}(\framefield^\mu-\peakvalue^\mu,\peakphase_\mu)\right]\redwfunction(\gvariables,\framefield^a)\,, \end{equation} where the peaking function\footnote{Here we are assuming, as for the single scalar field case, that $\cpeakwidth^\mu\ll 1$ and $\cpeakwidth^\mu \peakphase_\mu^2\gg 1$ for each $\mu=0,\dots,\dimension-1$.} $\peakfunc_{\cpeakwidth^\mu}(\framefield^\mu-\peakvalue^\mu,\peakphase_\mu)$ can be taken to be a Gaussian as in equation \eqref{eqn:peakingfunction} for each $\mu=0,\dots,\dimension-1$, would encode the distribution of spatial geometric data for each point $\peakvalue^\mu$ of the physical manifold coordinatized by the frame fields $\framefield^\mu$. By construction, the expectation value of the intensive version of the second quantized field operators $\framefieldop^\mu$ in equation \eqref{eqn:scalarfieldoperator} on the above states is approximately given by \begin{equation} \braket{\hat{\framefield}^\mu}_{\sigma}\equiv \frac{\braket{\framefieldop^\mu}_{\sigma}}{\braket{\hat{N}}_{\sigma}}\simeq \peakvalue^\mu\,, \end{equation} thus characterizing the change with respect to $\peakvalue^\mu$ as physical. These will be the fundamental states that we will consider from now on. Before concluding this discussion, let us also emphasize that the implementation of relational evolution through the CPSs (and thus also their physical interpretation) that we have reviewed here for an EPRL-like model can be identically realized for the extended BC model, with the simple substitution $\gvariables\to (G_I;X)$ in all the above equations \cite{Jercher:2021bie}.
\section{GFT effective relational cosmology: dynamics}\label{sec:dynamics} The main aim of this section is to obtain the dynamical equations which, once solved, determine the specific form of the reduced condensate wavefunction $\redwfunction$. The microscopic GFT action $S_{\text{GFT}}$ determining these equations is in turn obtained by comparison with an appropriate simplicial gravity model (see e.g.\ the discussion in Section \ref{sec:gftmodels}). Therefore, in Section \ref{sec:classicalsystem}, we will specify which kind of classical system we are interested in. Then, in Section \ref{sec:gftaverageddyna} we will obtain the dynamical equations determining the evolution of the reduced condensate wavefunction from the imposition of averaged GFT quantum equations of motion. Finally, in Section \ref{sec:perturbations}, we will define background and perturbed quantities, and we will consistently study the dynamical equations at zeroth and first order in the small perturbations. \subsection{Classical system}\label{sec:classicalsystem} The system we want to describe is classically composed of $\dimension+1$ massless scalar fields minimally coupled to gravity. We also assume that $\dimension$ of these fields, which we call $\framefield^\mu$, $\mu=0,\dots,\dimension-1$, give a negligible contribution to the total energy-momentum tensor of the system, while the contribution coming from the remaining scalar field, which we call $\matterfield$, is dominant (assumption \ref{ass:ds4})\label{asspage:ds4}. The $\dimension$ scalar fields $\framefield^\mu$, therefore, can be thought of as \virgolette{test fields} which we would naturally use to define a material reference system, for instance using harmonic coordinates $\coordinate^\mu$ (see Appendix \ref{app:harmonicgauge} for more details).
The field $\matterfield$ is assumed to be almost homogeneous with respect to the material coordinate system (or, equivalently, to the harmonic frame), meaning that $\matterfield=\bkgmatter+\pertmatter$, with $\bkgmatter=\bkgmatter(t)$, $t\equiv \coordinate^0$, being the homogeneous component of the field. \paragraph{Matter action and symmetries.} At the classical level, therefore, we assume a matter action of the form \begin{subequations} \begin{align} S_m[\framefield^\mu,\matterfield]&=-\frac{1}{2}\int\diff^4x\sqrt{-g}\metric^{ab}\partial_{a}\framefield^0\partial_b\framefield^0+\frac{\lambda}{2}\sum_{i=1}^d\int\diff^4x\sqrt{-g}\metric^{ab}\partial_{a}\framefield^i\partial_b\framefield^i\nonumber\\ &\quad-\frac{\alpha_\phi}{2}\int\diff^4x\sqrt{-g}\metric^{ab}\partial_{a}\matterfield\partial_b\matterfield\label{eqn:matteraction}\\ &=\frac{1}{2}\int\diff^4x\sqrt{-g}M^{(\lambda)}_{\mu\nu}\metric^{ab}\partial_{a}\framefield^\mu\partial_b\framefield^\nu-\frac{\alpha_\phi}{2}\int\diff^4x\sqrt{-g}\metric^{ab}\partial_{a}\matterfield\partial_b\matterfield\,, \end{align} \end{subequations} where $\alpha_\phi\gg 1$ and $\lambda=\pm 1$, so that $M^{(+1)}_{\mu\nu}=\lmetric_{\mu\nu}$, while $M^{(-1)}_{\mu\nu}=-\delta_{\mu\nu}$. The choice $\lambda = -1$ corresponds to the natural matter coupling of four free, massless and minimally coupled scalar fields in classical gravity, in which all of them are treated on an identical footing. Moreover, only for $\lambda=-1$ is one guaranteed to have the appropriate sign for the energy (density) of the four fields. On the other hand, when $\lambda=+1$, the second term in the action has the opposite sign with respect to the first and third ones.
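The signature statement above can be made concrete with a quick numerical check (our illustration, with arbitrarily chosen transformations): for $\lambda=+1$ the quadratic form $M^{(+1)}=\lmetric$ is preserved by Lorentz boosts, while for $\lambda=-1$ the form $M^{(-1)}=-\mathbb{1}$ is preserved by orthogonal rotations.

```python
import numpy as np

# Numerical check that the frame-field kinetic form M^(lambda) is
# invariant under the corresponding transformation group. The boost
# rapidity and random rotation are illustrative choices.
rng = np.random.default_rng(0)

# lambda = +1: a boost Lambda satisfies Lambda^T eta Lambda = eta
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
xi = 0.7  # rapidity of a boost in the chi^1 direction
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(xi)
L[0, 1] = L[1, 0] = np.sinh(xi)
assert np.allclose(L.T @ eta @ L, eta)

# lambda = -1: an orthogonal rotation R satisfies R^T (-1) R = -1
A = rng.standard_normal((4, 4))
R, _ = np.linalg.qr(A)  # orthonormal factor, R^T R = identity
assert np.allclose(R.T @ (-np.eye(4)) @ R, -np.eye(4))

# invariance of the lambda = +1 quadratic form on a random gradient vector
dchi = rng.standard_normal(4)
print(np.isclose(dchi @ eta @ dchi, (L @ dchi) @ eta @ (L @ dchi)))
```

This is just the statement $\Lambda^T\lmetric\Lambda=\lmetric$ for $\text{SO}(1,3)$ and $R^TR=\mathbb{1}$ for $\text{O}(4)$, written out explicitly.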
As we will see below, the GFT\xspace action is constructed so that it respects the symmetries of the classical matter action \cite{Oriti:2016qtz,Marchetti:2020umh, Gielen:2017eco}, and in particular it is symmetric under a rotation of the four scalar fields, with the signature of the rotation depending on the value of $\lambda$. Given this, one would expect the choice $\lambda=+1$ to be the appropriate one for the interpretation of the above four scalar fields as a relational frame. However, as we have argued above, such an interpretation is available only at an emergent level and in an effective sense; as a consequence, only the symmetry properties of macroscopic variables (entering the definition of the CPSs) actually determine the emergent signature of the effective frame, which may well be independent of the signature of the orthogonal transformation relating the fields in the action. In order to see this clearly below, and to keep things as general as possible, leaving it to the effective cosmological dynamics to fix some of the ambiguities, we will not restrict to a specific choice of $\lambda=\pm 1$, keeping any $\lambda$-dependence explicit. As we have just mentioned, the symmetries of the above action play an important role in determining the form of the GFT action. These are (cf.\ \cite{Gielen2020}): \begin{description}[font=\itshape] \item[{Translations}:] $\framefield^\mu\to \framefield^\mu+ k^\mu$ and $\matterfield\to\matterfield+k$, for each $\mu=0,\dots,\dimension-1$. \item[{Reflections}:] $\framefield^\mu\to -\framefield^\mu$ and $\matterfield\to -\matterfield$, for each $\mu=0,\dots, \dimension-1$. \item[{Lorentz transformations/Euclidean rotations}:] When $\lambda=+1$ (resp.\ $\lambda=-1$), transformations $R\in \text{SO}(1,3)$ (resp.\ $\text{SO}(4)$) acting as $\framefield^\mu\to \tensor{R}{^\mu_\nu}\framefield^\nu$ are a symmetry of the Lagrangian for each $\mu=0,\dots, \dimension-1$.
\end{description} \subsection{GFT averaged dynamics}\label{sec:gftaverageddyna} Analogously to what has been done in \cite{Oriti:2016qtz,Marchetti:2020qsq}, here we will only extract an effective mean-field dynamics from the full quantum equations of motion. In other words, we will only impose the quantum equations of motion averaged on the states that we consider to be relevant for an effective relational description of the cosmological system (assumption \ref{ass:ds2})\label{asspage:ds2}, which, in our case, are coherent states $\ket{\wfunction_{\cpeakwidth^\mu};\peakvalue^\mu,\peakphase_{\mu}}$ as in equation \eqref{eqn:coherentstates} whose condensate wavefunction is assumed to take the form \eqref{eqn:patchstates} (assumptions \ref{ass:ks1} and \ref{ass:ks3})\label{asspage:KS1b}: \begin{align}\label{eqn:simplestschwinger} &\left\langle\frac{\delta S_{\text{GFT}}[\gftfieldop,\gftfieldop^\dagger]}{\delta\gftfieldop^\dagger(\gvariables,\peakvalue^\mu)}\right\rangle_{\wfunction_{\cpeakwidth^\mu};\peakvalue^\mu,\peakphase_{\mu}}\equiv\left\langle\wfunction_{\cpeakwidth^\mu};\peakvalue^\mu,\peakphase_{\mu}\biggl\vert\frac{\delta S_{\text{GFT}}[\gftfieldop,\gftfieldop^\dagger]}{\delta\gftfieldop^\dagger(\gvariables,\peakvalue^\mu)}\biggr\vert\wfunction_{\cpeakwidth^\mu};\peakvalue^\mu,\peakphase_{\mu}\right\rangle=0\,. \end{align} Here, $S_{\text{GFT}}$ is the GFT action, whose specific form will be discussed below. While perfectly consistent with the effective and approximate nature of the relational framework discussed in the previous section, the imposition of only an averaged form of the equations of motion is clearly a strong truncation of the microscopic quantum dynamics, which is expected to be justified in general only in the emergent regime of a very large number of particles (see the discussion in Section \ref{sec:condensates} and in footnote \ref{footnote:fluctuations}).
Moreover, for the purposes of this work, we will be interested in observables capturing only isotropic perturbations (e.g.\ the volume operator \eqref{eqn:volumeoperator}). For this reason, not only will we assume that the reduced wavefunction is isotropic, in the sense explained in Section \ref{sec:condensates} (so that the expectation value of the volume operator reduces to \eqref{eqn:volumenumberspin}), but we will also consider a condensate state whose peaking properties are isotropic as well (assumption \ref{ass:kc2})\label{asspage:kc2a}: \begin{equation}\label{eqn:condensatewavefunction} \wfunction_{\cpeakwidth,\rpeakwidth,\cpeakphase,\rpeakphase;\peakvalue^\mu}(\gvariables,\framefield^\mu,\matterfield)=\peakfunc_{\cpeakwidth}(\framefield^0-\peakvalue^0;\cpeakphase)\peakfunc_{\rpeakwidth}(\vert \boldsymbol{\chi}-\mathbf{\peakvalue}\vert;\rpeakphase)\redwfunction(\gvariables,\framefield^\mu,\matterfield)\,, \end{equation} where $\vert \boldsymbol{\chi}-\mathbf{\peakvalue}\vert^2=\sum_{i=1}^d(\framefield^i-\peakvalue^i)^2$. For the moment we will also assume (assumption \ref{ass:kc1})\label{asspage:kc1a} that the parameter $\rpeakwidth$ is a complex quantity, $\mathbb{C}\ni \rpeakwidth=\rpeakwidth_r+i\rpeakwidth_i$, but with a positive real part, $\rpeakwidth_r>0$, as required by the peaking properties of the states. As we will see below, allowing a complex width for the rods peaking function lets the perturbation equations depend on a derivative kernel with emergent Lorentzian signature. \paragraph{GFT action.} Having made these premises, we now specify the form of $S_{\text{GFT}}$. As explained in Section \ref{sec:gftmodels}, $S_{\text{GFT}}$ depends on the precise spinfoam (or simplicial gravity) model coupled with $d+1$ massless scalar fields one wants to reproduce.
While the EPRL-like and extended BC models differ in their domain (respectively $\text{SU}(2)$ and $\text{SL}(2,\mathbb{C})\times \hyperbolic$) and in the precise way the simplicity constraint is imposed, thus resulting in (in principle) different kinetic and interaction kernels, they are both defined by an action including a quadratic kinetic term and a non-local interaction term $U+U^*$ (the star representing complex conjugation) of simplicial\footnote{These interactions are called simplicial because they represent the gluing of $5$ different tetrahedra in order to form a $4$-simplex, the basic building block of a $4$-dimensional discretized manifold.} type characterized by $5$ powers of the field operator, $S_{\text{GFT}}=\kinetic+U+U^*$. The resulting form of the action is however quite complicated to handle for most practical applications. For this reason, one often makes some additional simplifying assumptions on $S_{\text{GFT}}$ \cite{Oriti:2016qtz,Marchetti:2020umh}: \begin{itemize} \item First of all, one imposes that the field symmetries of the classical action are preserved at the quantum level, meaning that they are also symmetries of the GFT action $S_{\text{GFT}}$ (assumption \ref{ass:ds1})\label{asspage:ds1}. In the case considered here, the symmetries to be respected are those highlighted in the section above: invariance under Lorentz transformations/Euclidean rotations, shifts, and reflections. This greatly simplifies the form of the interaction and kinetic terms, which read, in the EPRL-like case\footnote{Similar expressions hold for the extended BC model, provided that one extends the domain of the GFT fields and of the kinetic and interaction kernels as $\gvariables\to (G_I;X)$. Moreover, since the normal $X$ is non-dynamical, the interaction kernel does not depend on it. As a consequence, only the integrated field \eqref{eqn:integratedbcexpansion} becomes important at the level of interactions.
The kinetic kernel instead depends on the normal in a localized way, imposing $X=X'$, with $X$ and $X'$ being the arguments of $\bar{\gftfield}$ and $\gftfield$ respectively. We refer to \cite{Jercher:2021bie} for more details on the action of the extended BC model.} \cite{Oriti:2016qtz,Marchetti:2020umh} \begin{align*} \kinetic&=\int\diff \gvariables\diff h_I\int \diff^d\framefield\diff^d\framefield'\diff\matterfield\diff\matterfield'\,\bar{\varphi}(\gvariables,\framefield^\mu,\matterfield)\mathcal{K}(\gvariables,h_I;(\framefield-\framefield')^2_\lambda,(\matterfield-\matterfield')^2)\varphi(h_I,(\framefield')^\mu,\matterfield')\,,\\ U&=\int\diff^d\framefield\diff\matterfield\int\left(\prod_{a=1}^5\diff \gvariables^a\right)\mathcal{U}(\gvariables^1,\dots,\gvariables^5)\prod_{\ell=1}^5\varphi(\gvariables^\ell,\framefield^\mu,\matterfield)\,, \end{align*} where $(\framefield-\framefield')_\lambda^2\equiv \sgn(\lambda) M^{(\lambda)}_{\mu\nu}(\framefield-\framefield')^\mu(\framefield-\framefield')^\nu$ and where $\mathcal{K}$ and $\mathcal{U}$ are, respectively, the aforementioned kinetic and interaction kernels encoding information about the EPRL-like model and, in particular, about the specific Lorentzian embedding of the theory. \item The second simplifying assumption that is often made in cosmological applications is that one is interested in a \virgolette{mesoscopic regime} where interactions are in fact essentially negligible (assumption \ref{ass:ds3})\label{asspage:ds3}. Clearly, this can only be a transient regime, and one expects that, eventually, interactions do become important (see e.g.\ \cite{Pithis:2016wzf,Pithis:2016cxg, Oriti:2021rvm} for works studying the phenomenological implications of the inclusion of interactions).
\end{itemize} \paragraph{Dynamical equations.} Under both these assumptions, and performing a Fourier transform with respect to the variables $\matterfield$ and $\matterfield'$, one can see that the averaged quantum equations of motion reduce to \begin{equation}\label{eqn:fundamentalequationscps} \int\diff h_I\int\diff^d\framefield\, \kinetic(\gvariables,h_I;\framefield^2_\lambda,\mommatterfield)\peakfunc_\cpeakwidth(\framefield^0;\cpeakphase)\peakfunc_{\rpeakwidth}(\vert\boldsymbol{\chi}\vert;\rpeakphase)\redwfunction(h_I,\framefield^0+\peakvalue^0,\boldsymbol{\chi}+\mathbf{x},\mommatterfield)=0\,, \end{equation} where $\mommatterfield$ is the variable canonically conjugate to $\matterfield$ with respect to the Fourier transform. Expanding $\kinetic$ and $\redwfunction$ in power series around $\framefield^0=0$, $\boldsymbol{\chi}=0$ \cite{Marchetti:2020umh}, assuming that (i) $\vert \rpeakwidth\vert$ and $\cpeakwidth$ are small, while the quantities \begin{equation} \combinedpeakparclock \equiv \cpeakwidth\cpeakphase^2/2\,,\qquad \combinedpeakparrods\equiv \rpeakwidth\rpeakphase^2/2 \end{equation} are large in absolute value (assumption \ref{ass:ks3})\label{asspage:ks3b}, and that (ii) one reduces to isotropic configurations (assumption \ref{ass:ks2})\label{asspage:ks2b}, one finds, at the lowest order in the small parameters $\vert \rpeakwidth\vert$ and $\cpeakwidth$ (see Appendix \ref{sec:redwfunctiondynamics} for a detailed derivation)\label{asspage:kc2b}: \begin{equation}\label{eqn:redwfunctionevoprio} \partial^2_0\redwfunction_{\spin}(x,\mommatterfield)-i\firstdercoeff\partial_{0}\redwfunction_{\spin}(x,\mommatterfield)-\nondercoeff_{\spin}^2(\mommatterfield)\redwfunction_{\spin}(x,\mommatterfield)+\laplaciancoeff^2\nabla^2\redwfunction_{\spin}(x,\mommatterfield)=0\,, \end{equation} where $\spin$ is the isotropic spin label introduced in equation \eqref{eqn:sigmachi}, and where we have dropped the superscript ${}^\mu$ for the argument of the reduced wavefunction
$\redwfunction$, $\peakvalue\equiv \peakvalue^\mu$, and where $\partial^2_0$ and $\nabla^2\equiv \sum_{i}\partial^2_i$ represent derivatives with respect to clock and rod values, respectively. Finally, we have defined \begin{equation*} \firstdercoeff\equiv \frac{\sqrt{2\cpeakwidth}\combinedpeakparclock}{\cpeakwidth \combinedpeakparclock^2}\,,\qquad \nondercoeff_{\spin}^2\equiv\frac{1}{\cpeakwidth \combinedpeakparclock^2}-\kinratio_{\spin;2}(\mommatterfield)\left(1+3\lambda \laplaciancoeff^2\right)\,,\qquad \laplaciancoeff^2\equiv \frac{1}{3}\frac{\rpeakwidth \combinedpeakparrods^2}{\cpeakwidth \combinedpeakparclock^2}\,,\qquad \kinratio_{\Kexpansindex}\equiv \frac{\tilde{\kinetic}_{\lambda}^{(\Kexpansindex)}}{\tilde{\kinetic}_{\lambda}^{(0)}}\,. \end{equation*} Notice that by definition $\laplaciancoeff^2$ is in general a complex parameter, whose real and imaginary parts are given by \begin{equation*} \re\laplaciancoeff^2=\frac{\rpeakphase^2}{6}\frac{\rpeakwidth_r^2-\rpeakwidth_i^2}{\cpeakwidth \combinedpeakparclock^2}\,,\qquad \im\laplaciancoeff^2=\frac{\rpeakphase^2}{3}\frac{\rpeakwidth_r\rpeakwidth_i}{\cpeakwidth \combinedpeakparclock^2}\,.
\end{equation*} Rewriting equation \eqref{eqn:redwfunctionevoprio} explicitly in terms of these quantities, we thus find \begin{align}\label{eqn:redwfunctionevo} 0&=\partial^2_0\redwfunction_{\spin}(x,\mommatterfield)-i\firstdercoeff\partial_{0}\redwfunction_{\spin}(x,\mommatterfield)-\realpartnonder_{\spin}^2\redwfunction_{\spin}(x,\mommatterfield)-i\impartnonder_{\spin}^2\redwfunction_{\spin}(x,\mommatterfield)\nonumber\\ &\quad+\re\laplaciancoeff^2\nabla^2\redwfunction_{\spin}(x,\mommatterfield)+i\im\laplaciancoeff^2\nabla^2\redwfunction_{\spin}(x,\mommatterfield)\,, \end{align} with \begin{equation} \realpartnonder_{\spin}^2\equiv \frac{1}{\cpeakwidth \combinedpeakparclock^2}-\kinratio_{\spin;2}(\mommatterfield)\left(1+3\lambda\re\laplaciancoeff^2\right)\,,\qquad \impartnonder_{\spin}^2\equiv3\lambda\im\laplaciancoeff^2 \kinratio_{\spin;2}(\mommatterfield)\,. \end{equation} This is our fundamental equation determining the form of the reduced condensate wavefunction $\redwfunction$. As in \cite{Oriti:2016qtz,Marchetti:2020umh}, however, it is useful to decompose equation \eqref{eqn:redwfunctionevo} into its real and imaginary parts, by defining $\redwfunction_{\spin}\equiv \rwfunctionmodulus_{\spin}\exp[i\rwfunctionphase_{\spin}]$, so that, using \begin{align*} \redwfunction_{\spin}''&=\left[\rwfunctionmodulus_{\spin}''-(\rwfunctionphase'_{\spin})^2\rwfunctionmodulus_{\spin}+i\rwfunctionphase_{\spin}''\rwfunctionmodulus_{\spin}+2i\rwfunctionmodulus_{\spin}'\rwfunctionphase_{\spin}'\right]e^{i\rwfunctionphase_{\spin}}\,,\\ \nabla^2\redwfunction_{\spin}&=\left[\nabla^2\rwfunctionmodulus_{\spin}-(\boldsymbol{\nabla}\rwfunctionphase_{\spin})^2\rwfunctionmodulus_{\spin}+i\nabla^2\rwfunctionphase_{\spin}\rwfunctionmodulus_{\spin}+2i\boldsymbol{\nabla}\rwfunctionmodulus_{\spin}\cdot\boldsymbol{\nabla}\rwfunctionphase_{\spin}\right]e^{i\rwfunctionphase_{\spin}}\,, \end{align*} we see that, for the real and imaginary parts we have, respectively, \begin{subequations}\label{eqn:generalrealimaginary} \begin{align}
0&=\rwfunctionmodulus''_{\spin}+\re\laplaciancoeff^2\nabla^2\rwfunctionmodulus_{\spin}-\left[\left(\rwfunctionphase'_{\spin}\right)^2+\realpartnonder_{\spin}^2-\firstdercoeff\rwfunctionphase_{\spin}'-\re\laplaciancoeff^2\left(\boldsymbol{\nabla}\rwfunctionphase_{\spin}\right)^2-\im\laplaciancoeff^2\nabla^2\rwfunctionphase_{\spin}\right]\rwfunctionmodulus_{\spin}\nonumber\\ &\quad-2\im\laplaciancoeff^2\boldsymbol{\nabla}\rwfunctionmodulus_{\spin}\cdot\boldsymbol{\nabla}\rwfunctionphase_{\spin}\,,\\ 0&=\rwfunctionphase_{\spin}''\rwfunctionmodulus_{\spin}+2\rwfunctionphase_{\spin}'\rwfunctionmodulus_{\spin}'-\firstdercoeff\rwfunctionmodulus_{\spin}'+\re\laplaciancoeff^2\left[2\boldsymbol{\nabla}\rwfunctionmodulus_{\spin}\cdot\boldsymbol{\nabla}\rwfunctionphase_{\spin}+\nabla^2\rwfunctionphase_{\spin}\rwfunctionmodulus_{\spin}\right]-\impartnonder_{\spin}^2\rwfunctionmodulus_{\spin}\nonumber\\ &\quad+\im\laplaciancoeff^2\left[\nabla^2\rwfunctionmodulus_{\spin}-\left(\boldsymbol{\nabla}\rwfunctionphase_{\spin}\right)^2\rwfunctionmodulus_{\spin}\right], \end{align} \end{subequations} where we have suppressed the explicit arguments of the functions for simplicity. At this point, it is important to recall that we are interested in slightly inhomogeneous relational quantities. Therefore, in the next section we will consider a perturbative framework (with respect to spatial gradients) in which we will study the equations above.
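The polar decomposition behind this split is purely algebraic but error-prone; it can be checked symbolically. The following sketch (Python/sympy, with hypothetical variable names standing for the modulus and phase, and one clock plus one rod dimension for brevity) verifies the two derivative identities used above:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
rho = sp.Function('rho')(t, x)      # stands for the (real) modulus
theta = sp.Function('theta')(t, x)  # stands for the (real) phase
psi = rho * sp.exp(sp.I * theta)    # polar decomposition of the wavefunction

# clock-derivative identity:
# psi'' = [rho'' - (theta')^2 rho + i theta'' rho + 2 i rho' theta'] e^{i theta}
lhs = sp.diff(psi, t, 2)
rhs = (sp.diff(rho, t, 2) - sp.diff(theta, t)**2 * rho
       + sp.I * sp.diff(theta, t, 2) * rho
       + 2 * sp.I * sp.diff(rho, t) * sp.diff(theta, t)) * sp.exp(sp.I * theta)
assert sp.simplify(sp.expand(lhs - rhs)) == 0

# same identity for one rod direction (the Laplacian is a sum of such terms)
lhs_x = sp.diff(psi, x, 2)
rhs_x = (sp.diff(rho, x, 2) - sp.diff(theta, x)**2 * rho
         + sp.I * sp.diff(theta, x, 2) * rho
         + 2 * sp.I * sp.diff(rho, x) * sp.diff(theta, x)) * sp.exp(sp.I * theta)
assert sp.simplify(sp.expand(lhs_x - rhs_x)) == 0
```

Substituting these identities into the complex evolution equation and collecting real and imaginary parts reproduces the pair of equations above.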
\subsection{Background and perturbed equations of motion}\label{subsec:bkgpert} The perturbative context will be defined by assuming that the functions $\rwfunctionmodulus_{\spin}$ and $\rwfunctionphase_{\spin}$ can be written as \begin{equation} \rwfunctionmodulus_{\spin}=\bkgmodulus_{\spin}+\pertmodulus_{\spin}\,,\qquad\rwfunctionphase_{\spin}\equiv \bkgphase_{\spin}+\delta\rwfunctionphase_{\spin}\,, \end{equation} with $\bkgmodulus=\bkgmodulus(\peakvalue^0,\mommatterfield)$ and $\bkgphase=\bkgphase(\peakvalue^0,\mommatterfield)$ being \virgolette{background} (zeroth-order) quantities and with $\pertmodulus_\spin$ and $\pertphase_\spin$ being small corrections to them. Let us study the zeroth- and first-order (in $\pertmodulus$, $\pertphase$) form of equations \eqref{eqn:generalrealimaginary}. \paragraph*{Background.} At zeroth order, equations \eqref{eqn:generalrealimaginary} become \begin{align}\label{eqn:backgroundmoduluseq} \bkgmodulus''_{\spin}(\peakvalue^0,\mommatterfield)-\left[\left(\bkgphase'_{\spin}(\peakvalue^0,\mommatterfield)\right)^2+\realpartnonder_{\spin}^2(\mommatterfield)-\firstdercoeff\bkgphase_{\spin}'(\peakvalue^0,\mommatterfield)\right]\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)&=0\,,\\ \bkgphase_{\spin}''(\peakvalue^0,\mommatterfield)\bkgmodulus_{\spin}+2\bkgphase_{\spin}'(\peakvalue^0,\mommatterfield)\bkgmodulus_{\spin}'(\peakvalue^0,\mommatterfield)-\firstdercoeff\bkgmodulus_{\spin}'(\peakvalue^0,\mommatterfield)-\impartnonder_{\spin}^2\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)&=0\,, \end{align} where we have specified the dependence of the condensate modulus and phase on $\peakvalue^0$ and $\mommatterfield$ explicitly.
Let us rewrite the second equation, multiplying it by $\bkgmodulus_{\spin}\neq 0$: we obtain \begin{equation*} \bkgphase_{\spin}''(\peakvalue^0,\mommatterfield)\bkgmodulus^2_{\spin}(\peakvalue^0,\mommatterfield)+(\bkgphase_{\spin}'(\peakvalue^0,\mommatterfield)-\firstdercoeff/2)(\bkgmodulus_{\spin}^2)'(\peakvalue^0,\mommatterfield)-\impartnonder_{\spin}^2\bkgmodulus^2_{\spin}(\peakvalue^0,\mommatterfield)=0\,, \end{equation*} or, equivalently, \begin{equation*} \bkgphase_{\spin}''(\peakvalue^0,\mommatterfield)+(\bkgphase_{\spin}'(\peakvalue^0,\mommatterfield)-\firstdercoeff/2)\frac{(\bkgmodulus_{\spin}^2)'(\peakvalue^0,\mommatterfield)}{\bkgmodulus_{\spin}^2(\peakvalue^0,\mommatterfield)}-\impartnonder_{\spin}^2=0\,. \end{equation*} Now, assume that, in the regime of interest, $\impartnonder_{\spin}^2$ in the above equation is negligible\footnote{\label{footnote:beta}Classically, the volume background dynamics with respect to the scalar field clock is exponential (see Appendix \ref{app:harmonicgauge}). As we will see in Section \ref{sec:evophysicalquantities}, the behavior of the background volume is essentially determined by $\bkgmodulus_\spin^2$. If we make an exponential ansatz (analogous to \eqref{eqn:bkgmodsol}) for $\bkgmodulus_\spin^2$ and plug it into the above equation for the phase, we obtain, for large densities (and thus for large values of the clock, given our exponential ansatz), $\bkgphase'=(\firstdercoeff\massparameter_\spin+\impartnonder_\spin)/(2\massparameter_\spin)$ (assumption \ref{ass:dc1}). Reinserting this into equation \eqref{eqn:backgroundmoduluseq}, we see that the ansatz is not consistent, precisely because of the presence of $\impartnonder_\spin$. This motivates the choice of restricting to small values of $\impartnonder_\spin$.
Moreover, as we will see below, since $\impartnonder_{\spin}^2\propto \im \laplaciancoeff^2$, the regime of small $\impartnonder_\spin$ will be compatible with the decoupling regime for first-order perturbations.}. The results in this case are the same as in \cite{Marchetti:2020umh}, so the equations for the background phase and modulus can equivalently be written in terms of the integration constants $Q_{\spin}$ and $\mathcal{E}_{\spin}$ as \begin{subequations} \begin{align}\label{eqn:bkgphasesol} \bkgphase'_{\spin}(\peakvalue^0,\mommatterfield)&=\frac{\firstdercoeff}{2}+\frac{Q_{\spin}(\mommatterfield)}{\bkgmodulus_{\spin}^2(\peakvalue^0,\mommatterfield)}\mathcomma\\ (\bkgmodulus_{\spin}')^2(\peakvalue^0,\mommatterfield)&=\mathcal{E}_{\spin}(\mommatterfield)-\frac{Q_{\spin}^2(\mommatterfield)}{\bkgmodulus_{\spin}^2(\peakvalue^0,\mommatterfield)}+\massparameter_{\spin}^2(\mommatterfield)\bkgmodulus_{\spin}^2(\peakvalue^0,\mommatterfield)\simeq \massparameter_{\spin}^2(\mommatterfield)\bkgmodulus_{\spin}^2(\peakvalue^0,\mommatterfield)\,,\label{eqn:bkgmodsol} \end{align} \end{subequations} where $\massparameter_{\spin}^2(\mommatterfield)\equiv \realpartnonder_{\spin}^2(\mommatterfield)-\firstdercoeff^2/4$ (we have dropped the superscript $(\lambda)$ for notational simplicity) and with the last approximate equality being valid for large densities $\bkgmodulus_\spin\gg 1$ (assumption \ref{ass:dc1})\label{asspage:dc1a}.
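These first integrals can be checked symbolically: with $\impartnonder_\spin^2$ set to zero, they solve the background equations exactly. A minimal sketch (Python/sympy; the symbols $\gamma$, $E_r^2$ and $Q$ are stand-ins for $\firstdercoeff$, $\realpartnonder_\spin^2$ and $Q_\spin$):

```python
import sympy as sp

t = sp.symbols('t', real=True)                        # relational clock
gamma, Er2, Q = sp.symbols('gamma E_r2 Q', real=True) # stand-ins (see lead-in)
rho = sp.Function('rho', positive=True)(t)            # background modulus

# first integrals: theta' = gamma/2 + Q/rho^2,
# (rho')^2 = E - Q^2/rho^2 + mu^2 rho^2, with mu^2 = E_r^2 - gamma^2/4
theta_p = gamma / 2 + Q / rho**2
mu2 = Er2 - gamma**2 / 4

# phase equation (beta = 0): theta'' rho + 2 theta' rho' - gamma rho' = 0
phase_eq = (sp.diff(theta_p, t) * rho + 2 * theta_p * sp.diff(rho, t)
            - gamma * sp.diff(rho, t))
assert sp.simplify(phase_eq) == 0

# modulus equation: rho'' = [(theta')^2 + E_r^2 - gamma theta'] rho;
# differentiating the (rho')^2 first integral gives rho'' = Q^2/rho^3 + mu^2 rho
rho_pp_from_integral = Q**2 / rho**3 + mu2 * rho
rho_pp_from_eq = (theta_p**2 + Er2 - gamma * theta_p) * rho
assert sp.simplify(rho_pp_from_integral - rho_pp_from_eq) == 0
```

Both assertions pass identically in $\bkgmodulus_\spin$, confirming that $\massparameter_\spin^2=\realpartnonder_\spin^2-\firstdercoeff^2/4$ is the combination controlling the large-density exponential growth.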
\paragraph*{First order.} \label{sec:perturbations} The first-order equations, instead, are \begin{subequations}\label{eqn:firstorder} \begin{align}\label{eqn:deltarhofirstorder} 0&=\pertmodulus''_{\spin}(x,\mommatterfield)+\re\laplaciancoeff^2\nabla^2\pertmodulus_{\spin}(x,\mommatterfield)-\realpartnonder_{\spin}^2(\mommatterfield)\pertmodulus_{\spin}(x,\mommatterfield)\nonumber\\ &\quad-\left[\pertphase_{\spin}'(x,\mommatterfield)\left(2\bkgphase'_{\spin}(\peakvalue^0,\mommatterfield)-\firstdercoeff\right)-\im\laplaciancoeff^2\nabla^2\pertphase_{\spin}(x,\mommatterfield)\right]\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)\,,\\ 0&=\pertphase_{\spin}''(x,\mommatterfield)\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)+\bkgphase_{\spin}''(\peakvalue^0,\mommatterfield)\pertmodulus_{\spin}(x,\mommatterfield)+2\pertphase_{\spin}'(x,\mommatterfield)\bkgmodulus_{\spin}'(\peakvalue^0,\mommatterfield)\nonumber\\ &\quad+2\bkgphase_{\spin}'(\peakvalue^0,\mommatterfield)\pertmodulus_{\spin}'(x,\mommatterfield)-\firstdercoeff\pertmodulus_{\spin}'(x,\mommatterfield)+\re\laplaciancoeff^2[\nabla^2\pertphase_{\spin}(x,\mommatterfield)]\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)\nonumber\\ &\quad-\impartnonder_{\spin}^2\pertmodulus_{\spin}(x,\mommatterfield)+\im\laplaciancoeff^2\nabla^2\pertmodulus_{\spin}(x,\mommatterfield)\,.\label{eqn:phasefirstorder} \end{align} \end{subequations} The two equations form a complicated set of coupled second-order differential equations for the variables $\pertmodulus_{\spin}$ and $\pertphase_{\spin}$.
The decoupling regime can be easily identified by first rewriting equation \eqref{eqn:phasefirstorder} as \begin{align}\label{eqn:phaserewritten} 0&=\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)\left[\pertphase_{\spin}''(x,\mommatterfield)+2\pertphase_{\spin}'(x,\mommatterfield)\frac{\bkgmodulus_{\spin}'(\peakvalue^0,\mommatterfield)}{\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)}+\re\laplaciancoeff^2\nabla^2\pertphase_{\spin}(x,\mommatterfield)\right]\nonumber\\ &\quad+\pertmodulus_{\spin}(x,\mommatterfield)\left[\bkgphase_{\spin}''(\peakvalue^0,\mommatterfield)+[2\bkgphase_{\spin}'(\peakvalue^0,\mommatterfield)-\firstdercoeff]\frac{\pertmodulus_{\spin}'(x,\mommatterfield)}{\pertmodulus_{\spin}(x,\mommatterfield)}\right]+\im\laplaciancoeff^2\nabla^2\pertmodulus_{\spin}(x,\mommatterfield)\,, \end{align} where, similarly to what we did in the background case, and in light of the above discussion, we have neglected the term proportional to $\impartnonder_{\spin}^2$. It is then easy to see that the decoupling regime corresponds to the limit in which\footnote{Notice that this condition is consistent with the requirement of having negligible $\impartnonder$, see also footnote \ref{footnote:beta}.} (assumption \ref{ass:dc3})\label{asspage:dc3} \begin{equation}\label{eqn:smallimarginarypartalpha} \vert \im\laplaciancoeff^2\vert=\frac{2}{3}\frac{\rpeakphase^2\rpeakwidth_r\vert \rpeakwidth_i\vert}{\cpeakwidth^2\cpeakphase^2}\ll 1\,, \end{equation} and when the background density $\bkgmodulus$ is very large (assumption \ref{ass:dc1})\label{asspage:dc1b}. Indeed, using the background equation \eqref{eqn:bkgphasesol}, equation \eqref{eqn:deltarhofirstorder} can be written as \begin{equation*} L[\pertmodulus_{\spin}]\simeq 2\pertphase_{\spin}'Q_{\spin}/\bkgmodulus_{\spin}\,, \end{equation*} with $L$ an appropriate linear differential operator. So $\pertmodulus\sim \pertphase/\bkgmodulus$, and for large enough $\bkgmodulus_{\spin}$ the right-hand side is negligible.
Similarly, using that $\bkgphase_{\spin}''=-2Q_{\spin}\bkgmodulus_{\spin}^{-2}(\bkgmodulus_{\spin}'/\bkgmodulus_{\spin})\sim -2Q_{\spin}\massparameter_{\spin}\bkgmodulus_{\spin}^{-2}$, we deduce that the first term in the second line of equation \eqref{eqn:phaserewritten} is of order $\pertmodulus_{\spin}/\bkgmodulus_{\spin}^2$, while the first term in the first line is of order $\bkgmodulus_{\spin}\pertphase_{\spin}$, so for large enough $\bkgmodulus_{\spin}$ only the latter is important. As a result, equations \eqref{eqn:firstorder} become \begin{subequations}\label{eqn:decoupledperturbations} \begin{align}\label{eqn:modulusdecoupled} 0&\simeq \pertmodulus''_{\spin}(x,\mommatterfield)+\re\laplaciancoeff^2\nabla^2\pertmodulus_{\spin}(x,\mommatterfield)-\realpartnonder_{\spin}^2(\mommatterfield)\pertmodulus_{\spin}(x,\mommatterfield)\,,\\\label{eqn:thetadecoupled} 0&\simeq \pertphase_{\spin}''(x,\mommatterfield)+2\pertphase_{\spin}'(x,\mommatterfield)\frac{\bkgmodulus_{\spin}'(\peakvalue^0,\mommatterfield)}{\bkgmodulus_{\spin}(\peakvalue^0,\mommatterfield)}+\re\laplaciancoeff^2\nabla^2\pertphase_{\spin}(x,\mommatterfield)\,, \end{align} \end{subequations} which are clearly decoupled. An interesting feature of the above equations is that any Lorentzian property of the second-order differential operator appearing in them is in fact only a result of the features of the peaking functions, i.e.\ of the (approximate) vacuum state we work with, and not of the fundamental symmetries imposed on the GFT action $S_{\text{GFT}}$. Indeed, the parameter $\lambda$, determining whether the matter variables enter the fundamental GFT action in a Lorentz ($\lambda=1$) or Euclidean ($\lambda=-1$) invariant fashion, only enters in $\realpartnonder_{\spin}$, and therefore does not affect the differential structure of the equations at all.
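To illustrate the mode-by-mode content of equation \eqref{eqn:modulusdecoupled}: each spatial Fourier mode obeys $\pertmodulus_k''=(\realpartnonder_\spin^2+\re\laplaciancoeff^2 k^2)\pertmodulus_k$ and thus grows or decays exponentially when the bracket is positive. The following numerical sketch (Python, purely illustrative values for $\realpartnonder_\spin^2$, $\re\laplaciancoeff^2$ and the wavenumber, none fixed by the model) checks this against a standard RK4 integration:

```python
import numpy as np

# illustrative parameter values only (see lead-in); Re(alpha^2) < 0 is the
# "Lorentzian" sign of the gradient term discussed in the text
Er2 = 1.0
re_alpha2 = -0.3
k = 1.2

omega2 = Er2 + re_alpha2 * k**2   # effective squared rate for this mode

def rk4_step(y, dt):
    """One RK4 step for y = (delta_rho_k, delta_rho_k')."""
    f = lambda y: np.array([y[1], omega2 * y[0]])
    k1 = f(y); k2 = f(y + dt * k1 / 2)
    k3 = f(y + dt * k2 / 2); k4 = f(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

dt, n_steps = 1e-3, 5000           # integrate up to relational time T = 5
y = np.array([1.0, 0.0])           # delta_rho_k(0) = 1, delta_rho_k'(0) = 0
for _ in range(n_steps):
    y = rk4_step(y, dt)

# with these initial data the analytic solution is cosh(sqrt(omega2) T)
analytic = np.cosh(np.sqrt(omega2) * dt * n_steps)
assert abs(y[0] - analytic) / analytic < 1e-6
```

The sign of $\re\laplaciancoeff^2$ relative to $\realpartnonder_\spin^2$, a property of the peaking functions alone, is what determines whether gradients soften or enhance the growth of a given mode.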
Since, as we will see below, the form of the perturbation equations will naturally reflect the structure of the equations determining the relational evolution of perturbed physical quantities, this result is particularly intriguing, because it would suggest that only a certain class of states is able to produce relational equations with local Lorentz signature. We will comment further on this in Section \ref{sec:conclusions}. \section{Effective relational dynamics of physical quantities}\label{sec:evophysicalquantities} In this section, we will use the evolution equations for the condensate wavefunction in order to obtain relational evolution equations for the expectation values of physical quantities, both at the background, i.e.\ homogeneous, and at the perturbed level, i.e.\ for inhomogeneous cosmological perturbations. In order to keep the notation lighter, for any quantum operator of interest $\op{O}$, we will denote $\bar{O}\equiv \braket{\op{O}}_{\bar{\wfunction}}$, where the expectation value is computed with respect to the state characterized by the background part of the condensate wavefunction \eqref{eqn:condensatewavefunction}; similarly, we will denote by $\delta O$ the first-order term in $\pertmodulus$, $\pertphase$ of the expectation value $\braket{\op{O}}_{\wfunction}$ computed on states characterized by the condensate wavefunction \eqref{eqn:condensatewavefunction}. The perturbed relational system includes in general geometric and matter operators.
Among the matter operators, those of obvious interest are the $\phi$-scalar field operator and its momentum, written in the $\mommatterfield$ representation (see equations \eqref{eqn:scalarfieldoperator} and \eqref{eqn:momentumoperator}) as \begin{subequations}\label{eqn:mattervariables} \begin{align} \matterfieldop&=\frac{1}{i}\int\diff \gvariables\int\diff^4\framefield\int\diff\mommatterfield\,\gftfieldop^\dagger(\gvariables,\framefield^\mu,\mommatterfield)\partial_{\mommatterfield}\gftfieldop(\gvariables,\framefield^\mu,\mommatterfield)\,,\\ \mattermomop&=\int\diff \gvariables\int\diff^4\framefield\int\diff\mommatterfield\,\mommatterfield\gftfieldop^\dagger(\gvariables,\framefield^\mu,\mommatterfield)\gftfieldop(\gvariables,\framefield^\mu,\mommatterfield)\,. \end{align} \end{subequations} On the geometric side, there are in principle many different operators characterizing the properties of slightly inhomogeneous geometries. Here, we are interested only in scalar perturbations, and in particular only isotropic operators will be considered. Even in this case, however, at the classical level, scalar perturbations are in general captured by several non-trivial functions of the metric components, see e.g.\ equation \eqref{eqn:perturbedlineelement}. Reproducing metric perturbations at the quantum level, however, means determining (i) the structure of microscopic observables and (ii) collective states such that the expectation values of the former on the latter can be associated to emergent metric functions. So far, most of the work in the literature has been devoted to the study of the volume operator \eqref{eqn:volumeoperator} and to models for which coherent states \eqref{eqn:coherentstates} with wavefunction \eqref{eqn:patchstates} provide an interpretation in terms of metric functions at specific values of the physical frame.
The definition of more general operators and states is certainly a pressing issue to be tackled in order to define a comprehensive and complete perturbation theory within the GFT framework. However, we will content ourselves with considering the evolution of the universe volume, defined (as a quantum operator) in equation \eqref{eqn:volumeoperator}, which is consistent and microscopically well defined, with respect to the states \eqref{eqn:coherentstates} with wavefunction \eqref{eqn:patchstates}. Moreover, in this section we will consider only the large-density (late-time) regime of evolution of the relevant quantities, in which case, as shown in the above section, the equations of motion for $\pertmodulus$ and $\pertphase$ greatly simplify. As explained in Section \ref{sec:condensates}, one would expect this regime (characterized by a very large number of GFT quanta) to be also the classical one (i.e.\ characterized by small quantum fluctuations of macroscopic operators) \cite{Gielen:2019kae,Marchetti:2020qsq}. Therefore, it is of fundamental importance to check whether in this regime the solutions of the equations of motion coming from the quantum theory actually match those of GR\xspace (or possibly of some alternative theory of gravity). This will be the main purpose of the following sections, where geometric (Section \ref{subsec:volumeevolution}) and matter observables\footnote{Here, by matter observables, we mean the observables associated with the scalar field $\matterfield$, the only relevant contribution to the energy budget of the universe under our assumptions.} (Section \ref{subsec:matterevolution}) will be discussed separately. More precisely, we will look for a matching with GR\xspace in harmonic gauge (see Appendix \ref{app:harmonicgauge}), which is expected to capture well the physical properties of a relational scalar field frame.
Before going to the actual computations, however, let us mention that the first-order perturbed harmonic gauge condition does not produce a complete gauge fixing, meaning that, as we discuss in detail in Appendix \ref{app:harmonicgauge}, there are always small coordinate transformations satisfying \eqref{eqn:residualgaugefreedom} that can be performed while still remaining in the harmonic gauge. An important consequence of this residual gauge freedom is that one can fix the gauge in such a way that volume perturbations are completely absent, by reabsorbing spatial geometric perturbations into the specific coordinate choice. Given the very particular nature of this specific coordinate system, however, it is difficult to understand whether it can actually be realized by a physical reference frame. In particular, since in the quantum theory scalar field operators have matrix elements which only depend on \virgolette{pre-matter} variables, one would be led to conclude that, using these quantum degrees of freedom as an effective physical reference frame, one could not reproduce any such \virgolette{partially geometric} coordinate system. Therefore, in the following, we will look for a matching with classical GR\xspace in a completely gauge-fixed harmonic gauge, with the understanding that the residual first-order gauge freedom has been fixed in such a way that spatial geometric perturbations have not been reabsorbed by the gauge choice. \subsection{Volume evolution}\label{subsec:volumeevolution} Let us start from geometric observables, i.e.\ the evolution of the volume operator. We will first discuss the situation at the homogeneous background level, and then we will move to inhomogeneous perturbations.
\paragraph{Background volume evolution and semi-classical limit.} The background volume dynamics is given, within our working assumptions and similarly to \cite{Marchetti:2020umh}, by\label{asspage:ks3c} \begin{equation}\label{eqn:backgrounddynamics} \left(\frac{\bar{V}'}{\bar{V}}\right)^2\simeq \left(\frac{2\SumInt_{\upsilon}\int\diff\mommatterfield V_{\upsilon}\bkgmodulus_{\upsilon}^2(\peakvalue^0,\mommatterfield)\text{sgn}(\bkgmodulus_{\upsilon}'(\peakvalue^0,\mommatterfield))\massparameter_{\upsilon}(\mommatterfield)}{\SumInt_{\upsilon}\int\diff\mommatterfield V_{\upsilon}\bkgmodulus_{\upsilon}^2(\peakvalue^0,\mommatterfield)}\right)^2. \end{equation} The classical equations, in the limit of the field $\phi$ dominating the energy-density budget, read instead\footnote{Notice that here we have set the momentum of the clock field to $1$, for simplicity, as has also been done in Appendix \ref{app:harmonicgauge}. We will discuss below how the results of this and the next section change when the momentum is properly reintroduced.} (see equations \eqref{eqn:friedmanneq} in Appendix \ref{app:harmonicgauge}) \begin{equation}\label{eqn:backgroundclassicaldynamics} \left(\frac{\bar{V}'}{\bar{V}}\right)^2=12\pi G(\bar{\pi}^{(c)}_\phi)^2\,, \qquad \left[\left(\frac{\bar{V}'}{\bar{V}}\right)^2\right]'=0\,, \end{equation} where $\bar{\pi}^{(c)}_{\matterfield}$ is the constant momentum of the scalar field $\matterfield$, $\bar{\pi}_{\matterfield}^{(c)}=\bar{\matterfield}'$. To see if equation \eqref{eqn:backgrounddynamics} and its time derivative reduce to equations \eqref{eqn:backgroundclassicaldynamics}, let us consider the case, which has in fact already been shown to have a good classical gravitational interpretation \cite{Marchetti:2020umh,Marchetti:2020qsq, Oriti:2016qtz,Jercher:2021bie}, in which one of the representation labels, say $\upsilon_{o}$, is dominant (assumption \ref{ass:dc2})\label{asspage:dc2a}.
In this case, if $\massparameter_{\upsilon_{o}}(\mommatterfield)\simeq \gravconst_{\upsilon_{o}}\mommatterfield$ (assumption \ref{ass:dc4})\label{asspage:dc4}, we have \begin{equation}\label{eqn:hubbleequation} \left(\frac{\bar{V}'}{\bar{V}}\right)^2\simeq 4\gravconst_{\upsilon_{o}}^2\frac{\left[\int\diff\mommatterfield\mommatterfield\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)\right]^2}{\left[\int\diff\mommatterfield\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)\right]^2}=4\gravconst_{\upsilon_{o}}^2\frac{\bar{\Pi}^2_{\matterfield}}{\bar{N}^2}\,. \end{equation} So, when $4\gravconst_{\upsilon_{o}}^2=12\pi G$, equation \eqref{eqn:backgroundclassicaldynamics} is reproduced by identifying $\bar{\pi}^{(c)}_{\matterfield}\equiv \bar{\Pi}_{\matterfield}/\bar{N}$. Notice that for the condition $\massparameter_{\upsilon_{o}}(\mommatterfield)\simeq \gravconst_{\upsilon_{o}}\mommatterfield$ to be true, the contribution to $\massparameter_{\upsilon_{o}}$ due to the geometric coefficients $\kinratio_{\upsilon_{o}}$ should be dominant, since only they can depend on $\mommatterfield$. In particular, this implies that $\massparameter_{\upsilon_{o}}\simeq \realpartnonder_{\upsilon_{o}}$, since they only differ by a $\mommatterfield$-independent term. However, while the above conditions are clearly sufficient to reproduce the first Friedmann equation, they are not in general enough to guarantee the validity of the second Friedmann equation, stating that $(\bar{V}'/\bar{V})'=0$.
The reason for this is that the ratio $\bar{\Pi}_{\matterfield}/\bar{N}$ is in general \emph{not constant}: \begin{equation*} \left[\frac{\bar{\Pi}_{\matterfield}}{\bar{N}}\right]'=2\left[\frac{\int\diff\mommatterfield\mommatterfield\massparameter_{\upsilon_{o}}(\mommatterfield)\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)}{\int\diff\mommatterfield\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)}-\frac{\left[\int\diff\mommatterfield\mommatterfield\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)\right]\left[\int\diff\mommatterfield\massparameter_{\upsilon_{o}}(\mommatterfield)\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)\right]}{\left[\int\diff\mommatterfield\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)\right]^2}\right]. \end{equation*} If we assume, as before, that $\massparameter_{\upsilon}\simeq \gravconst_{\upsilon}\mommatterfield$, we see that the right-hand side of the above equation is proportional to $\bar{\Pi}_{\phi,2}/\bar{N}-\bar{\Pi}_{\phi}^2/\bar{N}^2$, where $\bar{\Pi}_{\phi,2}$ is the background expectation value of the second quantized operator $\framemomop_{\phi,2}$ whose matrix elements in momentum space are given by $\mommatterfield^2$. In general, this quantity is not zero.
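Indeed, this combination has a transparent statistical meaning. Introducing the normalized distribution $p(\mommatterfield)\propto\bkgmodulus_{\upsilon_{o}}^2(\peakvalue^0,\mommatterfield)$, a simple rewriting of the quantities already defined above (involving no additional assumption) gives
\begin{equation*}
\frac{\bar{\Pi}_{\phi,2}}{\bar{N}}-\frac{\bar{\Pi}_{\phi}^2}{\bar{N}^2}=\langle\mommatterfield^2\rangle_p-\langle\mommatterfield\rangle_p^2=\mathrm{Var}_p(\mommatterfield)\geq 0\,,
\end{equation*}
which vanishes only when $p$ is sharply concentrated around a single value of $\mommatterfield$.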
However, if we further assume, as done in \cite{Gielen2020}, that the condensate wavefunction has a peaking part peaked on a single value $\peakvaluemom$ of the momentum of $\phi$, so that the condensate wavefunction can be written as\footnote{Notice that changing the form of the condensate wavefunction from equation \eqref{eqn:condensatewavefunction} to \eqref{eqn:finalcondensatewavefunction}, while assuming that $\mompeakingfunc$ is independent of the clock variables (as we are doing here), does not affect the equations of motion of $\rwfunctionmodulus_{\mompeakingwidth}$ and $\rwfunctionphase$ at all, because of their linearity.} (assumption \ref{ass:kc3})\label{asspage:kc3a} \begin{equation}\label{eqn:finalcondensatewavefunction} \wfunction_{\cpeakwidth,\delta,\cpeakphase,\rpeakphase;\peakvalue^\mu;\peakvaluemom}=\peakfunc_\cpeakwidth(\framefield^0-\peakvalue^0;\cpeakphase)\peakfunc_{\delta}(\vert\boldsymbol{\chi}-\mathbf{x}\vert;\rpeakphase)\mompeakingfunc_{\mompeakingwidth}(\mommatterfield-\peakvaluemom)\redwfunction(\gvariables,\framefield^0,\boldsymbol{\chi},\mommatterfield)\,, \end{equation} we find that $\bar{\Pi}_{\phi,2}/\bar{N}-\bar{\Pi}_{\phi}^2/\bar{N}^2\simeq \peakvaluemom^2-\peakvaluemom^2=0$, and both Friedmann equations are thus satisfied, giving \begin{equation} \mathcal{H}^2\equiv \left(\frac{\bar{V}'}{3\bar{V}}\right)^2=\frac{4}{9}\massparameter^2_{\upsilon_{o}}(\peakvaluemom)=\frac{4\pi G}{3} \peakvaluemom^2\,,\qquad \mathcal{H}'=0\,. \end{equation} This also leads to the interpretation of $\peakvaluemom$ as the background classical momentum of the scalar field $\phi$, $\bar{\pi}^{(c)}_\phi$. We will discuss this point in more detail in the next section. Finally, let us emphasize that the above peaking condition on $\peakvaluemom$ is not related to an implementation of effective relational localization.
However, it does localize the wavefunction in \virgolette{momentum space}, since, as we have mentioned, $\peakvaluemom$ can be identified with $\bar{\pi}^{(c)}_\phi$. In turn, since $\bar{\pi}^{(c)}_\phi$ is the quantity appearing on the right-hand side of the classical Friedmann equations, it may well be that this localization property is associated to some form of semi-classicality. We leave further investigations of the physical interpretation of this peaking property to future work. \paragraph{Perturbed volume evolution.} As before, let us assume that we are in the case of single representation label dominance (assumption \ref{ass:dc2}). Then, the average perturbed volume reads \begin{equation} \delta V(\peakvalue,\peakvaluemom) \simeq 2V_{\upsilon_o}\bkgmodulus_{\upsilon_o}(\peakvalue^0,\peakvaluemom)\delta\bkgmodulus_{\upsilon_{o}}(x,\peakvaluemom)\,, \end{equation} where we have used the peaking properties in $\mommatterfield$ of the condensate wavefunction \eqref{eqn:finalcondensatewavefunction}. Now, let us take a time derivative of the above quantity. We have \begin{align*} \delta V'(\peakvalue,\peakvaluemom)&=2V_{\upsilon_{o}}\bkgmodulus_{\upsilon_{o}}'(\peakvalue^0,\peakvaluemom)\pertmodulus_{\upsilon_{o}}(x,\peakvaluemom)+2V_{\upsilon_{o}}\bkgmodulus_{\upsilon_{o}}(\peakvalue^0,\peakvaluemom)\pertmodulus'_{\upsilon_{o}}(x,\peakvaluemom)\\ &\simeq \massparameter_{\upsilon_{o}}(\peakvaluemom)\delta V(\peakvalue,\peakvaluemom)+2V_{\upsilon_{o}}\bkgmodulus_{\upsilon_{o}}(\peakvalue^0,\peakvaluemom)\pertmodulus'_{\upsilon_{o}}(x,\peakvaluemom)\,, \end{align*} where in the second line we have used the large $\bkgmodulus_{\upsilon_{o}}$ behavior\footnote{\label{footnote:signmu}For concreteness, we are considering large positive times $\peakvalue^0$, so that only the positive root of equation \eqref{eqn:bkgmodsol} is important.} $\bkgmodulus_{\upsilon_{o}}'\simeq \massparameter_{\upsilon_{o}}\bkgmodulus_{\upsilon_{o}}$.
Taking one further derivative and using the above equation together with \eqref{eqn:modulusdecoupled}, we find \begin{equation} \delta V''-2\massparameter_{\upsilon_{o}}\delta V'+\re\laplaciancoeff\nabla^2\delta V+\delta V(\realpartnonder_{j}^2-\massparameter_j^2)=0\,. \end{equation} Recall also that, by consistency with the background equations, we have that $\massparameter_j^2\simeq \realpartnonder_{j}^2$, thus leading to the simplified form \begin{equation}\label{eqn:pertvolumegeneralform} \delta V''-2\massparameter_{\upsilon_{o}}\delta V'+\re\laplaciancoeff\nabla^2\delta V=\delta V''-3\mathcal{H}\delta V'+\re\laplaciancoeff\nabla^2\delta V=0\,. \end{equation} From the above equation we notice, in particular, that in order to have a Lorentz signature for the equation of physical perturbations we need to require $\re\laplaciancoeff<0$, which in turn implies $\delta_i^2>\delta_r^2$ (see assumption \ref{ass:kc1}\label{asspage:kc1b}). Moreover, in the extreme case in which $\delta_i^2\gg \delta_r^2$, and when $\rpeakphase^2\delta_i^2=3\cpeakwidth^2\cpeakphase^2$ (i.e.\ when the parameters of the peaking functions are chosen to be of the same order of magnitude), we have \begin{equation}\label{eqn:lorentz} \re\laplaciancoeff=\frac{\rpeakphase^2}{6\cpeakwidth \combinedpeakparclock^2}\left(\delta_r^2-\delta_i^2\right)\simeq -\frac{\rpeakphase^2\delta_i^2}{3\cpeakwidth^2\cpeakphase^2}=-1\,, \end{equation} which in turn implies that the second order differential operator appearing in \eqref{eqn:pertvolumegeneralform} can be recast in terms of a $\Box$ operator. By comparing equations \eqref{eqn:pertvolumegeneralform} and \eqref{eqn:classicalvolumeequation}, however, we conclude that the effective evolution of the perturbed volume obtained from our quantum gravity model \emph{does not match} the classical GR\xspace one, in general.
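To make the Lorentzian structure just mentioned explicit (this is simply a rewriting of equation \eqref{eqn:pertvolumegeneralform} under the condition $\re\laplaciancoeff\simeq -1$ discussed above, with no further input), one has
\begin{equation*}
\delta V''-3\mathcal{H}\,\delta V'-\nabla^2\delta V=0\,,
\end{equation*}
whose principal part is the flat d'Alembertian with respect to the relational clock and rods, the first-derivative term playing the role of a Hubble friction term. The mismatch with GR\xspace mentioned above thus concerns the coefficients of the equation, rather than its differential structure.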
An important difference lies in the pre-factor of the Laplacian term of the equation\footnote{Notice, however, that the general spatial differential structure of the equations is the same, thus implying that in the limit of $k\to\infty$ (with all the remaining quantities kept constant), the two equations are equivalent.}, which is (in general) $\re\laplaciancoeff$ in equation \eqref{eqn:pertvolumegeneralform} and $\propto \bar{V}^{4/3}$ in equation \eqref{eqn:classicalvolumeequation}. We will comment on the possible implications of this mismatch in Section \ref{sec:conclusions}. In the super-horizon limit $k\to 0$ (where $k$ represents the modulus of the wavevector of the modes associated to a spatial Fourier transform of the perturbed volume), thus for long-wavelength perturbations, equation \eqref{eqn:pertvolumegeneralform} admits two solutions: a constant one, and one of the form $\delta V\propto \bar{V}$. The latter becomes dominant as the universe expands, i.e.\ at large universe volumes. From the results in Appendix \ref{app:harmonicgauge} (see equation \eqref{eqn:limitspsi} and the discussion below equation \eqref{eqn:classicalvolumeequation}), we see that this dominant solution actually coincides with the GR\xspace one in the limit $k\to 0$. Thus, we conclude that the theory matches the predicted dynamics of GR\xspace in the super-horizon regime, at late cosmological times and large universe volume (which is also when the background dynamics reproduces the Friedmann one). \subsection{Matter evolution}\label{subsec:matterevolution}\label{asspage:ks3d}\label{asspage:kc3b}\label{asspage:dc2b} Let us now move to matter variables, i.e.\ to the background and perturbed expectation values of the operators $\matterfieldop$ and $\op{\Pi}_\phi$ defined in \eqref{eqn:mattervariables}.
Their expectation values read, respectively\footnote{Here, for notational simplicity, we have reabsorbed any phase of the peaking function $\mompeakingfunc_{\mompeakingwidth}\equiv \vert \mompeakingfunc_{\mompeakingwidth}\vert e^{i\theta_f}$ into the phase of the reduced condensate wavefunction, redefining the global phase factor $\theta_{\upsilon_{o}}$.} \begin{subequations}\label{eqn:generalmatter} \begin{align} \braket{\matterfieldop}_{\wfunction}&\simeq \rho^2_{\upsilon_{o}}(\peakvalue,\peakvaluemom)[\partial_{\mommatterfield}\theta_{\upsilon_{o}}](\peakvalue,\peakvaluemom)=[\partial_{\mommatterfield}\theta_{\upsilon_{o}}](\peakvalue,\peakvaluemom) N(\peakvalue,\peakvaluemom)\,,\\ \braket{\mattermomop}_{\wfunction}&\simeq \peakvaluemom\rho^2_{\upsilon_{o}}(\peakvalue,\peakvaluemom)= \peakvaluemom N(\peakvalue,\peakvaluemom)\,. \end{align} \end{subequations} As we did for the volume operator, let us write explicitly the contributions to these quantities at the background and perturbed level. \paragraph*{Background.} At the background level, from equations \eqref{eqn:generalmatter}, we have \begin{equation*} \bar{\Pi}_\phi\simeq \peakvaluemom\bar{N}(\peakvalue^0,\peakvaluemom)\,,\qquad \bar{\Phi}\simeq \bar{N}(\peakvalue^0,\peakvaluemom)[\partial_{\mommatterfield}\bkgphase_{\upsilon_{o}}](\peakvalue^0,\peakvaluemom)\,.
\end{equation*} The dynamics of the background phase $\bkgphase_{\upsilon_{o}}$ is determined by equation \eqref{eqn:bkgmodsol}: \begin{equation}\label{eqn:backgroundphase} \bkgphase_{\upsilon_{o}}'=\frac{\firstdercoeff}{2}+\frac{Q_{\upsilon_{o}}}{\bar\rho_{\upsilon_{o}}^2}\,,\qquad\text{where}\qquad (\bkgmodulus_{\upsilon_{o}}^2)'\simeq \massparameter_{\upsilon_{o}}^2\bkgmodulus^2_{\upsilon_{o}}\,, \end{equation} with a prime denoting as usual a derivative with respect to the scalar time\footnote{In the equation for $\bkgmodulus_{\upsilon_{o}}^2$ we have neglected lower order terms in powers of $\bar{\bkgmodulus}_{\upsilon_{o}}^2$, since in the above equation for $\bkgphase_{\upsilon_{o}}$ we are already considering contributions suppressed as $\bkgmodulus^{-2}_{\upsilon_{o}}$. Any correction to the second equation in \eqref{eqn:backgroundphase} would thus result in even more negligible contributions to the first equation of \eqref{eqn:backgroundphase}.}. Integrating the equation on the left using the equation on the right we obtain \begin{equation} \bkgphase_{\upsilon_{o}}=\frac{\firstdercoeff}{2}\peakvalue^0-\frac{Q_{\upsilon_{o}}}{\massparameter_{\upsilon_{o}}\bar\rho_{\upsilon_{o}}^2}+C_{\upsilon_{o}}\,, \end{equation} where $C_{\upsilon_{o}}$ is an integration constant and where we have chosen a specific root for the second equation in \eqref{eqn:backgroundphase} (see footnote \ref{footnote:signmu}). Now, it is important to notice that $\firstdercoeff$ does not depend on $\mommatterfield$, while $\mu$ and $Q$ (and $C$) in principle do, even if they do not depend on time. As a result, we have, \begin{equation} \bar{\Phi}\simeq \left[-\partial_{\mommatterfield}\left[\frac{Q_{\upsilon_{o}}}{\massparameter_{\upsilon_{o}}}\right]+Q_{\upsilon_{o}}\frac{\partial_{\mommatterfield}\massparameter_{\upsilon_{o}}}{\massparameter_{\upsilon_{o}}}\peakvalue^0+\bar{N}\partial_{\mommatterfield}C_{\upsilon_{o}}\right]_{\mommatterfield=\peakvaluemom}\,. 
\end{equation} These results should be compared with the classical dynamics, given by $\bar{\phi}''=0$, i.e., $\bar{\phi}=c_1+c_2\peakvalue^0$, and of course also implying that the momentum of $\bar{\phi}$, $\bar{\pi}^{(c)}_\phi$, is a constant. Given the presence of $\bar{N}$ in the above expectation values for $\framemomop$ and $\matterfieldop$ (which grows exponentially in relational time), it is clear that we can only define \begin{equation}\label{eqn:scalarfieldmomentumbackground} \bar{\pi}^{(c)}_\phi\equiv \bar{\Pi}/\bar{N}=\peakvaluemom\,, \end{equation} with $\peakvaluemom$ then associated to the classical momentum of the scalar field, $\peakvaluemom=\bar{\pi}^{(c)}_\phi$, which is the same identification we have found in the previous section by comparing the quantum volume evolution equations with the classical one. Notice that, as a consequence of equation \eqref{eqn:scalarfieldbackground}, we would also expect $\bar{\phi}'=\bar{\pi}^{(c)}_\phi=\peakvaluemom$. The same reasoning is not adequate, instead, for the massless scalar field operator. Indeed, for large $\bar{N}$, a constant term (independent of the scalar field clock) becomes dominant, meaning that by defining $\phi=\braket{\matterfieldop}_{\wfunction}/\bar{N}$, we cannot match the classical result. On the other hand, if we take $C_{\upsilon_{o}}$ to be independent of $\mommatterfield$, $\braket{\matterfieldop}_{\wfunction}$ becomes an intensive quantity that can be readily compared to $\bar{\phi}$. In this case, consistency with the momentum correspondence requires that $[Q_{\upsilon_{o}}\partial_{\mommatterfield}(\log \massparameter_{\upsilon_{o}})]_{\mommatterfield=\peakvaluemom}=\peakvaluemom$.
By assuming, as we did to arrive at equation \eqref{eqn:hubbleequation}, that $\massparameter_{\upsilon_{o}}\simeq \gravconst_{\upsilon_{o}}\mommatterfield$, the above condition fixes $Q_{\upsilon_{o}}\simeq \mommatterfield^2$, and, as a result, \begin{equation}\label{eqn:scalarfieldbackground} \bar{\phi}\equiv \braket{\matterfieldop}_{\wfunction}\simeq -\gravconst_{\upsilon_{o}}^{-1}+\peakvaluemom\peakvalue^0\,. \end{equation} Notice that, in \eqref{eqn:hubbleequation}, the quantity $\gravconst_{\upsilon_{o}}$ fixes the gravitational constant. This means that, for a finite value of the gravitational constant, one can never have $\gravconst_{\upsilon_{o}}^{-1}=0$. This implies that the background matter field \emph{can never be identified with the clock field}. One could argue that this condition is in fact necessary in order to trust the effective relational framework we have defined. It is intriguing, however, that this fact is related to the gravitational constant attaining a finite value. Seen the other way around, it is interesting that the gravitational constant is in fact determined by the specific matter content that is present in the underlying fundamental theory. Both these points certainly deserve further scrutiny. Let us conclude this discussion with two comments. \begin{itemize} \item First, we emphasize that the matching with the classical equations has been performed by choosing the specific classical gauge fixing defined by the lapse function $N=a^3$, which naturally fixes the classical momentum of the clock field as $\pi_{0}^{(c)}=1$. Of course, one could have fixed $\pi_{0}^{(c)}$ to any other arbitrary constant, in which case, at the classical level, one would have $\bkgmatter'=\bar{\pi}_{\phi}^{(c)}/\pi_{0}^{(c)}$. As a consequence, therefore, the matching would have required, e.g., $Q_{\upsilon_{o}}=\mommatterfield^2/\pi_{0}^{(c)}$, and $c_j^2=3\pi G/\pi_{0}^{(c)}$.
However, requiring the quantity $\pi_{0}^{(c)}$ to be in direct correspondence with the expectation value of the clock field momentum would be difficult to justify from a physical perspective. Indeed, the classical momentum is always defined as a result of an (arbitrary) choice of time coordinate, which in the fundamental quantum gravity theory is simply absent. We therefore refrain from requiring any such correspondence, and we consider, from now on, the case $\pi_{0}^{(c)}=1$ in order to make the matching with the classical theory more straightforward. \item It is also important to emphasize that the definition of $\bar{\Phi}$ and $\bar{\Pi}_\phi$ in terms of the fundamental operators is different from the one that is used for the clock variables in \cite{Marchetti:2020umh}. Indeed, in \cite{Marchetti:2020umh}, the value of the scalar field clock was associated to the scalar clock field operator divided by the particle number, while the momentum was just associated to the clock momentum operator. The tension between the definitions provided in \eqref{eqn:scalarfieldbackground} and \eqref{eqn:scalarfieldmomentumbackground} may be resolved by noticing that the definitions are in fact consistent when one considers how the associated variables are represented. Indeed, for the clock scalar field, we have chosen the coordinate representation, and thus divided by $\bar{N}$ the operator which is multiplicative in this representation. The same has been done for the source scalar field, but since in this case the representation used is the momentum one, it is the momentum operator (diagonal in this representation) that has been divided by $\bar{N}$. On the other hand, operators with derivative kernels in the two representations need not be divided by $\bar{N}$.
While in principle the two representations are completely equivalent, the kind of states we have chosen, through which the definitions of $\bar{\phi}$ and $\bar{\pi}_\phi$ are provided, clearly distinguishes between the different sets of variables. \end{itemize} \paragraph*{Perturbed scalar field evolution.} Similarly to what we did for the volume operator, we can study perturbations to the scalar field quantities. Notice, however, that results about perturbations in the matter sector depend on how extensive variables are matched with classical ones. For instance, for the second quantized field operator, we have seen that $\phi=\braket{\matterfieldop}_{\wfunction}$, so \begin{equation} \pertmatter=\delta\braket{\matterfieldop}_{\wfunction}=\left[\frac{\delta N}{\bar{N}}\bkgmatter+\bar{N}\partial_{\mommatterfield}\pertphase_{\upsilon_{o}}\right]_{\mommatterfield=\peakvaluemom}\,. \end{equation} The dynamical equation satisfied by $\pertmatter$ can be easily determined by noticing that $\delta N/\bar{N}=2\pertmodulus_{\upsilon_{o}}/\bkgmodulus_{\upsilon_{o}}\equiv 2\delta_{\rwfunctionmodulus_{\upsilon_{o}}}$, and that $\delta_{\rwfunctionmodulus_{\upsilon_{o}}}$ and $\pertphase_{\upsilon_{o}}$ satisfy the same differential equation: \begin{equation} \delta_{\rwfunctionmodulus_{\upsilon_{o}}}''+2\mu_{\upsilon_{o}}\delta_{\rwfunctionmodulus_{\upsilon_{o}}}'+\re\laplaciancoeff\nabla^2\delta_{\rwfunctionmodulus_{\upsilon_{o}}}=0=\pertphase_{\upsilon_{o}}''+2\mu_{\upsilon_{o}}\pertphase_{\upsilon_{o}}'+\re\laplaciancoeff\nabla^2\pertphase_{\upsilon_{o}}\,. \end{equation} Here, consistently with what was done in the previous sections, we have used $\bkgmodulus'_{\upsilon_{o}}\simeq \mu_{\upsilon_{o}}\bkgmodulus$ and the fact that $\mu_{\upsilon_{o}}\simeq \realpartnonder_{\upsilon_{o}}$.
Already from this equation, in particular from the behavior of the spatial derivative term (which in GR\xspace scales as $\bar{V}^{4/3}$, see equation \eqref{eqn:deltaphiharmonicgauge}), we can conclude that the evolution equation for the scalar field perturbations does not match, in general, the GR\xspace one. Still, similarly to what happens for the volume perturbations, we can verify that solutions to our equations and to the GR\xspace ones do match in the super-horizon regime (long wavelength limit). To see this explicitly, notice that in this case the equation satisfied by $\pertphase_{\upsilon_{o}}$ becomes \begin{equation}\label{eqn:deltathetalargewave} 0=\pertphase_{\upsilon_{o}}''+\frac{(\bkgmodulus_{\upsilon_{o}}^2)'}{\bkgmodulus_{\upsilon_{o}}^2}\pertphase'_{\upsilon_{o}}=\pertphase_{\upsilon_{o}}''+2\massparameter_{\upsilon_{o}}\pertphase'_{\upsilon_{o}}\,,\qquad (k\to 0)\,, \end{equation} whose general solution is \begin{equation} \pertphase_{\upsilon_{o}}=c_{1,\upsilon_{o}}(\mommatterfield)+c_{2,\upsilon_{o}}(\mommatterfield)\bar{N}^{-1}\,,\qquad (k\to 0)\,, \end{equation} with an appropriate redefinition of constants. Thus, in the large $\bar{N}$ limit, we can write $\pertphase_{\upsilon_{o}}\simeq c_{1,\upsilon_{o}}(\mommatterfield)$, and since $\delta N/\bar{N}$ is constant, $\delta\phi\simeq \bar{N}[\partial_{\mommatterfield}c_{1,\upsilon_{o}}](\mommatterfield)$, which forces us to consider $c_{1,\upsilon_{o}}$ to be independent of $\mommatterfield$ in order to match with GR\xspace.
Indeed, in this case, we have \begin{equation} \delta \phi=\delta\braket{\matterfieldop}=\left[\frac{\delta V}{\bar{V}}\bar{\phi}+\partial_{\mommatterfield}c_{2,\upsilon_{o}}-c_{2,\upsilon_{o}}\partial_{\mommatterfield}\massparameter_{\upsilon_{o}}\peakvalue^0\right]_{\mommatterfield=\peakvaluemom} \,,\qquad (k\to 0)\,, \end{equation} which is compatible with the classical solution, since, by virtue of $\delta V/\bar{V}$ being constant\footnote{Recall that the dominant solution of equation \eqref{eqn:pertvolumegeneralform} in the $k\to 0$ limit is $\delta V\propto \bar{V}$.}, it satisfies $\delta\phi''=0$. Let us now consider perturbations in the scalar field momentum. If the classical momentum $\pi_{\phi}^{(c)}$ is identified with $\braket{\framemomop_{\phi}}_{\wfunction}/N\simeq \peakvaluemom$, we have that $\delta \pi_\phi=0$. On the other hand, if we maintain the correspondence suggested above, i.e.\ $\pi_\phi=\braket{\framemomop_{\phi}}_{\wfunction}/\bar{N}$, then \begin{equation} \delta \pi_\phi=\peakvaluemom\frac{\delta N}{\bar{N}}=\peakvaluemom\frac{\delta V}{\bar{V}}\,, \end{equation} with the equations for $\delta V$ already described in the previous subsection. In order to have a consistent definition of the momentum, however, we should require $\delta\matterfield'=\delta\mommatterfield$, which, in the long wavelength limit, forces us to impose $c_{2,\upsilon_{o}}=0$, so that we find \begin{equation} \pertmatter=(\delta V/\bar{V})\bkgmatter\,,\qquad (k\to 0)\,. \end{equation} Therefore, we see that in the super-horizon limit perturbations are only present in the modulus of the condensate wavefunction. In conclusion, there is no matching with the classical theory for arbitrary wavelengths, and we obtain instead a modified dynamics for cosmological perturbations from our quantum gravity model.
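As a consistency check of the super-horizon identifications above (a direct computation using only the expressions already derived, with $\delta V/\bar{V}$ constant and $\bkgmatter'=\peakvaluemom$), one finds
\begin{equation*}
\pertmatter'=\left(\frac{\delta V}{\bar{V}}\,\bkgmatter\right)'=\frac{\delta V}{\bar{V}}\,\bkgmatter'=\frac{\delta V}{\bar{V}}\,\peakvaluemom=\delta\pi_\phi\,,\qquad (k\to 0)\,,
\end{equation*}
so the requirement on the perturbed momentum is indeed satisfied in the long wavelength limit; no analogous statement holds at shorter wavelengths.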
On the other hand, we see that the same assumptions needed for the background solutions to match GR\xspace also allow for a classical matching of perturbed quantities in the super-horizon limit (which is expected already from previous work on this issue in the separate universe framework). This is a good consistency check of our formalism and procedure. Still, we have seen that the discussion of inhomogeneous perturbations is complicated by the fact that one also needs to identify the right way to turn extensive quantities into intensive ones (i.e., whether to divide by $\bar{N}$ or $N$, as we have seen in the case of the perturbed momentum). Consistent and rather compelling choices can be identified, though. We remark that this additional difficulty is due to the fact that our fundamental degrees of freedom are not quantized fields and that spatiotemporal observables emerge, in this QG\xspace formalism, only as collective, averaged quantities. This is reflected in the presence of an additional observable, with no classical counterpart, given by the number operator. It is the correct way of using this additional, purely quantum gravitational observable that needs to be determined in order to match continuum gravitational physics. \section{Summary and discussion}\label{sec:conclusions} In this section we provide a summary and a discussion of the main results presented in the paper (Section \ref{sec:summary}). Moreover, we review the approximations and assumptions (and the arguments motivating them) made in order to obtain such results (Section \ref{sec:approx}), in particular discussing how relaxing some of them may impact the final results and help recover the appropriate GR\xspace limit for arbitrary wavelengths. \subsection{Summary and outlook}\label{sec:summary} In this paper we have studied the extraction of scalar (and isotropic) cosmological perturbations from full QG\xspace, within the GFT\xspace formalism.
The classical counterpart of the GFT\xspace system we have considered consists of five free, massless scalar fields minimally coupled to geometry, four of which constitute the material reference system and by assumption provide a negligible contribution to the total energy-momentum budget of the universe\xspace. The remaining field (called $\matterfield$) is assumed to be slightly inhomogeneous with respect to the matter reference frame. In contrast to past works on the subject \cite{Gielen:2017eco,Gielen:2018xph,Gerhardt:2018byq}, here we have for the first time defined perturbations (of matter and geometric quantities) in an effective relational sense, by means of coherent states sharply peaked on values associated to the four massless scalar fields we used as a matter frame (whose interpretation stems from the role their corresponding discrete degrees of freedom play in the fundamental perturbative quantum dynamics of the GFT\xspace models). These peaking values are then the quantities with respect to which other physical observables can be relationally localized. Since this relational localization is by construction only effective, controlled both by the above peaking properties of the condensate wavefunction and by the averaged total number of microscopic GFT\xspace quanta (which also controls the magnitude of quantum fluctuations), it bypasses most of the technical difficulties associated to a material frame composed of four minimally coupled scalar fields, highlighted e.g.\ in \cite{Giesel:2016gxq}. By imposing an averaged form of the quantum many-body microscopic GFT\xspace dynamics, we have obtained dynamical equations determining the phase and the modulus of the (reduced) condensate wavefunction.
The phase and modulus, representing the basic hydrodynamic variables of our \virgolette{quantum spacetime fluid}, are then assumed to split into background quantities taken to be homogeneous (i.e.\ independent of the relational rods) and small inhomogeneous perturbations, which in turn allowed for a perturbative analysis of the condensate equations. The resulting first order equations for phase and modulus are in general coupled, but one can show that they actually decouple in the limit of a large average number of GFT\xspace quanta, a limit which has been associated to the emergent classical behavior of the macroscopic spacetime quantities \cite{Marchetti:2020qsq, Gielen:2019kae}. We have also seen that the equation for the perturbed modulus, eventually determining the behavior of crucial quantities, like e.g.\ the perturbed volume operator, shows an effective Lorentz signature of the derivative kernel only if one assumes that the width of the peaking condensate function (assumed to be isotropic for simplicity) on the scalar field rods is in general complex, with a large imaginary part (and a positive real one, guaranteeing in fact the aforementioned peaking properties). Interestingly enough, this feature seems to be independent of the symmetry properties of the classical action of the scalar fields, in turn assumed to be respected by the GFT\xspace action $S_{\text{GFT}}$, and to depend only on the dispersion around the peaking value of the peaking states. How this emergent Lorentz signature is related to the local Lorentz structure encoded in the group data associated to the GFT\xspace field is still an open and fundamental question.
The decoupled dynamics (at large universe volumes) of the perturbed phase and modulus of the condensate wavefunction has then been used to study the dynamics of macroscopic observables associated to geometry (i.e.\ the volume operator) and to the non-reference matter scalar field $\matterfield$, in order to match it with GR\xspace (with a first order harmonic gauge fixing). The background, unperturbed dynamics turns out to be consistent with GR\xspace in the limit of one dominating representation label and with the additional assumption that the condensate wavefunction also peaks on an arbitrary value $\peakvaluemom$ of the variable conjugate to the field $\matterfield$. The background analysis of the equations shows two more interesting properties. First, one is naturally led to associate the classical scalar field $\matterfield$ with the \emph{extensive} (second quantized) scalar field operator associated to $\matterfield$, contrary to what is done for the reference fields. This would suggest that the determination of classical quantities in terms of intensive and extensive expectation values of second quantized ones actually depends on the physical role the quantities themselves play. Second, the shift between the clock scalar field and the background part of $\phi$ enters directly in the emergent definition of the gravitational constant. This is intriguing not only because it implies that the matter content of the universe\xspace actually impacts the very definition of the emergent gravitational constant, but also because it highlights a possible interesting connection with the very notion of relational evolution. Indeed, a non-zero shift, implying that one can never identify the clock field with the background matter field, in turn means that the gravitational constant cannot vanish.
It will be interesting to investigate further the interplay between relationality and emergent constants of nature by considering how to switch between two similar clocks, an issue which, by itself, is likely to play a crucial role in answering open questions about the emergence of relational dynamics from QG\xspace (including, e.g.\ the role that unitarity plays in the fundamental theory). Finally, we have studied the dynamics of linear perturbations in volume and matter variables. Obtaining such explicit dynamical equations directly from the full quantum gravity theory, within several approximations, which are however under control (at least in principle) within the fundamental formalism, is our main result. For both types of quantities, the solutions to the equations of motion actually match GR\xspace in the super-horizon limit of very large wavelengths. However, in other regimes, starting already at intermediate wavelengths, before entering the sub-horizon regime, \emph{no such match} was found (though the classical and quantum equations do share the same spatial differential structure, characterized by a Laplace operator on the rod fields). There are three ways in which we could interpret this result. First, we could insist that in this regime the dynamics of cosmological perturbations, as derived from quantum gravity, should in fact match the GR\xspace one. This implies that some of the assumptions made in our derivation were not justified, and need to be improved (see Section \ref{sec:approx}). Of course, we know well that such improvements are needed independently of this issue, but it is worth considering which ones could be responsible for this specific mismatch.
These could involve, for instance, the peaking parameters, which were taken to be independent of geometric data (assumption \ref{ass:ks3}), or the GFT\xspace interactions, which were entirely neglected in deriving the equation for cosmological perturbations\footnote{It is worth noticing, however, that it is the mesoscopic regime of free GFT\xspace dynamics at large volumes, before interactions become relevant, that gives an effective Friedmann dynamics for the background.} (assumption \ref{ass:ds3}). Second, the identification of perturbations we used may be incorrect or at least insufficient to reproduce their correct dynamics. We have indeed assumed \emph{condensate perturbations}, but it is easy to envisage situations (which are actually of great physical interest \cite{pitaevskii2016bose} in condensed matter physics) where fluid perturbations are determined also by small components of the fluid out of the condensate; moreover, a better approximation of the true condensate (vacuum) state, beyond the mean-field or perfect condensate (coherent) state, should of course be considered (see assumption \ref{ass:ds2}). Finally, it is possible that neither of the two interpretations above is correct, meaning that the procedure we employed is sufficient to extract the relevant effective continuum dynamics of cosmological perturbations from QG\xspace. In other words, our QG\xspace theory, that is, our chosen class of GFT\xspace models, simply predicts the above mismatch with GR\xspace. Of course, this would be the most intriguing possibility. Taking this possibility seriously, we have again two possible implications. First, the underlying GFT\xspace (and spin foam) models are \virgolette{lacking}, and we managed to identify a problematic aspect of them in terms of their physical predictions (e.g.\ within assumption \ref{ass:ds1}). This in itself would be quite an achievement, in our opinion, given how difficult it is to constrain quantum gravity models in general.
Second, the modified dynamics of cosmological perturbations we have obtained should be taken seriously as a prediction, and it may turn out to be in fact a viable one. For example, it could correspond to the predicted dynamics of some modified gravity theory, which would then be the effective continuum and classical description of our fundamental quantum gravity dynamics\footnote{Let us point out that the effective cosmological dynamics for the background, obtained from these GFT\xspace models, has already been matched with (limiting curvature) mimetic gravity \cite{deCesare:2018cts}.}. In order to explore this possibility, serious work is needed on the comparison of these predictions with observational cosmological data, an exciting prospect. In conclusion, in this work we have taken one important step towards the definition of a complete framework for the extraction of an effective relational cosmological perturbation theory from full QG\xspace. The results of the analysis showed perturbative consistency with GR\xspace (in first order harmonic gauge) in the super-horizon limit, but a modified dynamics of cosmological perturbations otherwise. Our analysis has allowed us to gain some important new insights on the emergence of a relational Lorentz-like structure in the macroscopic dynamics, and to further characterize the emergent physics of GFT\xspace models including not only (four) reference fields, but also additional, non-frame matter. Moreover, this work has highlighted what steps are likely to be crucial for the ambitious goal of a complete extraction of cosmological perturbation theory from QG\xspace. 
Among them, we emphasize (i) the need for the construction of microscopic observables whose expectation value on appropriate states can be associated to macroscopic geometric quantities (ideally one would aim for the construction of an emergent metric); (ii) the need for an in-depth investigation of the relation between the Lorentz-like properties of the emergent dynamics and those encoded in the group theoretic data assigned to the microscopic GFT\xspace field; and (iii) the study of the impact of \virgolette{out-of-condensate} perturbations on the macroscopic emergent dynamics and, more generally, improvements of the various approximations that were needed to obtain the results of this work. \subsection{Approximations and assumptions}\label{sec:approx} The assumptions made in this paper can be naturally split into kinematic and dynamic ones; moreover, we also categorize them as structural (i.e.\ assumptions motivated by conceptual reasons or used to simplify otherwise extremely challenging technical computations) or as motivated by the requirement of matching with classical gravity. \subsubsection{Kinematic assumptions} Kinematic approximations are related to the properties of the specific states we are considering. \paragraph*{Structural} \begin{description} \item[KS1\label{ass:ks1}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Condensate states}}, pages \pageref{asspage:KS1a}, \pageref{asspage:KS1b}): In this paper, following \cite{Gielen:2013naa,Gielen:2014ila,Gielen:2014uga,Oriti:2015qva,Gielen:2016dss,Oriti:2016qtz,Pithis:2016cxg,Pithis:2019tvp}, we only focus on condensate states, defined as in equation \eqref{eqn:coherentstates}. 
Condensate states are in fact the simplest representatives of the class of coarse-grained states which we expect can be associated to emergent continuum geometries \cite{Gielen:2013naa,Gielen:2014ila,Gielen:2014uga,Oriti:2015qva,Gielen:2016dss,Oriti:2016qtz,Pithis:2016cxg,Pithis:2019tvp}, and thus are of primary interest for the extraction of continuum physics from QG\xspace. \item[KS2\label{ass:ks2}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Isotropy}}, pages \pageref{asspage:ks2a}, \pageref{asspage:ks2b}): The condensate wavefunction is required to satisfy the isotropy condition \eqref{eqn:isotropycond}. This assumption greatly simplifies the derivation of the emergent collective dynamics from the microscopic one. It will likely need to be relaxed when one is interested in anisotropic perturbations (see e.g.\ \cite{deCesare:2017ynn}). \item[KS3\label{ass:ks3}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Peaking}}, pages \pageref{asspage:ks2a}, \pageref{asspage:ks2b}, \pageref{asspage:ks3c}, \pageref{asspage:ks3d}): The condensate wavefunction, following \cite{Marchetti:2020umh,Marchetti:2020qsq}, is assumed to split into a peaking and a reduced wavefunction (the latter assumed not to spoil the overall peaking properties of the condensate wavefunction\footnote{The validity of this condition can be checked after the solutions of the mean-field dynamics are determined.}), with the former depending only on frame variables, see equation \eqref{eqn:wavefunctioncps} and the discussion in Section \ref{sec:condensates}. The use of coherent peaked states allows one to implement concretely a notion of relational evolution with respect to the frame scalar field variables, so that their wavefunction represents a distribution of spatial geometries for each point of the physical manifold labelled by the reference frame fields. The peaking function is taken to be a Gaussian (with a non-trivial phase) with small width. 
In the case of a single (clock) variable, with the notation used in equation \eqref{eqn:peakingfunction}, this requirement translates into $\epsilon\ll 1$ \cite{Marchetti:2020umh,Marchetti:2020qsq}. In order to avoid large quantum fluctuations, however, $\epsilon$ cannot tend to zero; it needs to be finite and, in particular, as discussed in \cite{Marchetti:2020umh,Marchetti:2020qsq}, it should satisfy $\epsilon\pi_0^2\gg 1$, where $\pi_0$ determines the non-trivial phase of the Gaussian. This guarantees that all quantum fluctuations of observables associated to the clock variable are small in the classical regime \cite{Marchetti:2020qsq}. Analogous assumptions are made for rod variables. \end{description} \paragraph{Motivated by classical matching} \begin{description} \item[KC1\label{ass:kc1}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Complex width}}, pages \pageref{asspage:kc1a}, \pageref{asspage:kc1b}): The width of the Gaussian determining the peaking on rod fields is in general taken to be a complex parameter $\delta^{(j)}$ (for $j=1,2,3$), with a positive real part $\delta^{(j)}_r>0$ and satisfying ${\delta^{(j)}_i}^2>{\delta^{(j)}_r}^2$, where $\delta^{(j)}_i$ is the imaginary part of $\delta^{(j)}$. This last condition is necessary in order to recover an effective Lorentz signature of second-order derivatives with respect to the frame fields (see e.g.\ the discussion below equation \eqref{eqn:pertvolumegeneralform}). It is possible that a more detailed study (guided by the underlying discrete gravity interpretation of the QG\xspace dynamics) of the coupling between matter frames and geometry will relate the validity of this condition to the imposition of local Lorentz invariance. 
\item[KC2\label{ass:kc2}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Rotational invariance of the peaking function in rod variables}}, pages \pageref{asspage:kc2a}, \pageref{asspage:kc2b}): The peaking function is assumed to be rotationally symmetric with respect to rod variables, see equation \eqref{eqn:condensatewavefunction}. This implies in particular $\delta^{(j)}\equiv \delta$, $\pi^{(j)}\equiv \pi_x$. This is necessary in order to obtain a Laplace operator with respect to rod variables. \item[KC3\label{ass:kc3}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Peaking on matter \virgolette{momenta}}}, pages \pageref{asspage:kc3a}, \pageref{asspage:kc3b}): We assume the condensate wavefunction to also be peaked in the variable canonically conjugate to $\matterfield$, $\mommatterfield$, see equation \eqref{eqn:finalcondensatewavefunction}. This is necessary in order to recover both Friedmann equations already at the background level, see Section \ref{subsec:volumeevolution}. As we discuss in Section \ref{subsec:volumeevolution}, this \virgolette{momentum peaking} may be related to a semi-classicality condition, which could be better understood in models which actually include a scalar field with a non-trivial potential, so that both of the scalar field's conjugate variables appear on the right-hand side of the classical Friedmann equation. This is a further direction for future work. \end{description} \subsubsection{Dynamic assumptions} Dynamic approximations are instead related to the specific form of the microscopic GFT action, to how the effective equations for background and perturbed quantities are obtained and, finally, to the specific form of the classical system's dynamics. 
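Before turning to the dynamic assumptions, the inequalities entering the kinematic assumptions KS3 and KC1 ($\epsilon\ll 1$, $\epsilon\pi_0^2\gg 1$, $\delta_r>0$, ${\delta_i}^2>{\delta_r}^2$) can be collected into simple consistency checks. The sketch below is illustrative only: the function names and the numerical thresholds standing in for $\ll$ and $\gg$ are our own choices, not part of the formalism.

```python
# Illustrative consistency checks for the peaking parameters of assumptions
# KS3 and KC1. The thresholds standing in for "<<" and ">>" are arbitrary.

def clock_peaking_ok(epsilon, pi0, small=1e-2, large=1e2):
    """KS3: epsilon << 1 (sharp peaking on the clock value) together with
    epsilon * pi0**2 >> 1 (small quantum fluctuations of clock observables)."""
    return epsilon < small and epsilon * pi0**2 > large

def rod_width_ok(delta):
    """KC1: complex rod-peaking width delta with positive real part and
    delta_i**2 > delta_r**2, needed for a Lorentz-like signature of the
    second-order derivatives with respect to the frame fields."""
    return delta.real > 0 and delta.imag**2 > delta.real**2
```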
\paragraph{Structural} \begin{description} \item[DS1\label{ass:ds1}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{GFT action and symmetries}}, page \pageref{asspage:ds1}): The form of the GFT action is in general obtained by comparison with the discrete gravity path integral of the corresponding classical system. In particular, this means that symmetries of the classical action are reflected in the GFT action \cite{Oriti:2016qtz,Gielen:2017eco}. As mentioned in the previous section, the mismatch between GR\xspace and effective QG\xspace equations for perturbations of intermediate and small wavelengths may suggest that some further scrutiny into the details of these models (especially regarding the coupling of geometric and matter degrees of freedom) could be important. \item[DS2\label{ass:ds2}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Mean-field dynamics}}, page \pageref{asspage:ds2}): The effective dynamics is taken to be well approximated by a mean-field one, obtained by computing the expectation value of the quantum equations of motion on the above coherent states \cite{Oriti:2016qtz}, see equation \eqref{eqn:simplestschwinger}. This assumption implies that microscopic quantum fluctuations are completely neglected, which is certainly not the most general situation one can envisage. In particular, this assumption may be critical precisely because we are interested in small, perturbative effects, which may be heavily affected by quantum corrections to the mean-field dynamics. The impact of fluctuations on the mean-field theory has already been studied in the Landau-Ginzburg approach, suggesting the validity of mean-field methods for a class of toy models and simple background configurations \cite{Pithis:2018eaq,Marchetti:2020xvf}. Still, at the time of writing there is no conclusive result for realistic geometric models with non-trivial backgrounds such as those considered in this paper. 
\item[DS3\label{ass:ds3}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Negligible interactions}}, page \pageref{asspage:ds3}): Interaction terms in the effective dynamics are assumed to be negligible with respect to kinetic terms. At the mean-field level, this approximation can only be satisfied for condensate densities (or, equivalently, average particle numbers) which are not arbitrarily large \cite{Oriti:2016qtz}. \item[DS4\label{ass:ds4}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Classical system}}, page \pageref{asspage:ds4}): At the classical level, frame fields are assumed to have a negligible impact on the energy-momentum budget of the universe\xspace. Besides making these fields behave as \virgolette{frame-like} as possible, this condition allows one to define unambiguously perturbative inhomogeneities with respect to the rod fields. \end{description} \paragraph{Motivated by classical matching} \begin{description} \item[DC1\label{ass:dc1}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Mesoscopic regime}}, pages \pageref{asspage:dc1a}, \pageref{asspage:dc1b}): The average number of particles of the system is taken to be large enough to allow both for a continuum interpretation of the expectation values of relevant operators and for classical behavior, but not so large that interactions dominate, see above. In this regime one can see that second- or higher-order moments of the relevant operators are suppressed (essentially by powers of the average number of particles) with respect to expectation values, showing that the averaged dynamics captures sufficiently well the physics of the system \cite{Marchetti:2020qsq}. Moreover, from a more technical point of view, in this regime it is possible to decouple and significantly simplify the equations for linear perturbations \eqref{eqn:firstorder}, see the discussion in Section \ref{subsec:bkgpert}. 
The validity of this approximation depends of course on the form and on the specific values of the coupling constants of the microscopic interactions. \item[DC2\label{ass:dc2}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{\virgolette{Single-spin} dominance}}, pages \pageref{asspage:dc2a}, \pageref{asspage:dc2b}): We assume that only one quantum label dominates the evolution (represented by $\upsilon_{0}$ in our notation). This assumption is justified by the fact that the background evolution is exponential for each $\upsilon$, meaning that, if $\mu_\upsilon$ has a maximum at $\upsilon_0$ over the range of $\upsilon$, the evolution of macroscopic observables like the volume will be dominated by $\upsilon_0$ (see e.g.\ equation \eqref{eqn:backgrounddynamics}). The validity of this assumption has also been investigated in \cite{Gielen:2016uft}. \item[DC3\label{ass:dc3}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Decoupling}}, page \pageref{asspage:dc3}): We assume that the imaginary part of $\alpha^2$ is much smaller than one, see equation \eqref{eqn:smallimarginarypartalpha}. This requirement mildly constrains the parameters of the states we are considering, and guarantees that the averaged equations for the background match GR\xspace. Moreover, together with the assumption of working in a mesoscopic regime, it allows the first order equations to decouple, see again the discussion in Section \ref{subsec:bkgpert}. \item[DC4\label{ass:dc4}] $\hspace{-2pt}$($\hspace{-1pt}$\textbf{\itshape{Effective mass dependence on \virgolette{momentum}}}, page \pageref{asspage:dc4}): We assume that $\mu_\upsilon(\mommatterfield)\propto \mommatterfield$, a condition that turns out to be necessary in order to match GR\xspace already at the background level. 
If the function $\mu_\upsilon(\mommatterfield)$ admits a series expansion in $\mommatterfield$, this condition is naturally satisfied when $\mommatterfield$ is small (and the zeroth order term of the expansion identically vanishes). This is not expected to hold in general, but notice that this requirement is imposed only at the point $\mommatterfield=\peakvaluemom$ (see Section \ref{subsec:volumeevolution}). Thus, it can be interpreted as the condition that the momentum of the matter field is not too large (the connection between $\peakvaluemom$ and the classical momentum of the matter field being established in \eqref{eqn:scalarfieldmomentumbackground}). This is expected to be physically well motivated, since we are interested in a semi-classical regime. \end{description} \acknowledgments We are extremely grateful to Ed Wilson-Ewing for extended discussions and several suggestions, and to Steffen Gielen for important critical remarks and further valuable input. We also thank Jean-Luc Lehners for useful comments about cosmological perturbations and gauge choices. LM thanks the University of Pisa and the INFN (section of Pisa) for financial support, and the Ludwig Maximilians-Universit\"at (LMU) Munich for its hospitality.
\section{Introduction} In a Paul trap ions are confined using an oscillating electric quadrupole field. Ideally the equilibrium position of a single trapped ion will coincide with the null of the oscillating quadrupole field. Stray electric fields as well as trap fabrication imperfections introduce a quasi-static dipole electric field $\vec{E}$ at the null of the oscillating quadrupole field, which displaces the ion equilibrium position from the oscillating field null. This results in an oscillating dipole field at the ion equilibrium position, which drives oscillatory ion motion, called excess micromotion \cite{Berkeland1998}. The oscillating dipole field causes a Stark shift and the excess micromotion causes a Doppler shift, both effects impact precision spectroscopy \cite{Keller2015}, and the Stark shifts are particularly troublesome in experiments using highly-polarizable Rydberg ions \cite{Higgins2019, Feldker2015}. Furthermore, the energy stored in excess micromotion is an obstacle to studies of quantum interactions in hybrid systems of neutral atoms and trapped ions \cite{Grier2009,Schmid2010,Zipkes2010,Feldker2020}. The Stark shift and the excess micromotion can be diminished by applying a static electric dipole field to counter the unwanted quasi-static dipole field $\vec{E}$. This opposing electric field is usually produced by applying voltages to dedicated compensation electrodes. Although a host of techniques have been developed to determine appropriate compensation electrode voltages \cite{Berkeland1998, Keller2015, Feldker2020, Barrett2003, Allcock2010, Chuah2013, Schneider2005, Gloger2015, Brown2007, Ibaraki2011, Narayanan2011, Tanaka2012, Harter2013, Mohammadi2019, Yu1994, Higgins2019, Cerchiari2020, Zhukas2020}, there is a demand to improve upon the existing techniques. 
For instance, the world's most precise clock is currently a trapped ion optical clock \cite{Brewer2019}, and the largest contribution to its systematic uncertainty arises from excess micromotion. Some of the most popular methods for minimising excess micromotion rely on the impact of micromotion on an ion's absorption or emission spectra, through the Doppler effect \cite{Berkeland1998, Keller2015, Barrett2003, Allcock2010, Chuah2013}. For instance, micromotion introduces spectral sidebands which are separated from carrier transitions by the frequency of the trap's oscillating quadrupole field \cite{Berkeland1998, Keller2015}. It also modulates the ion's scattering rate at the frequency of the trap's oscillating quadrupole field \cite{Berkeland1998, Keller2015}. Other techniques rely on measuring the change of a trapped ion's equilibrium position when the trap stiffness is changed \cite{Berkeland1998, Gloger2015, Schneider2005, Brown2007, Feldker2020, Saito2021}; the methods we present here also work in this fashion. These techniques are explained as follows: The unwanted quasi-static dipole field $\vec{E}$ at the position of the trap's oscillating field null displaces the equilibrium position of a trapped ion from the null by $\vec{r}$, where \cite{Berkeland1998} \begin{equation} \label{eq_r} r_i = \frac{q E_i}{m {\omega_i}^2} \end{equation} and $q$ is the ion charge, $m$ is the ion mass, the three spatial directions indexed by $i$ are defined by the ion's secular motion, and $\omega_i$ is the trap stiffness (the frequency of the trapping pseudopotential) in the $i$ direction. When the trap stiffness is changed, $\omega_{Ai} \rightarrow \omega_{Bi}$, the ion equilibrium position is displaced by $\vec{r}_{AB}$, which has the components \begin{equation} \label{eq_del_u} r_{ABi} = \frac{q E_i}{m} \left( \frac{1}{{\omega_{Bi}}^2} - \frac{1}{{\omega_{Ai}}^2} \right) \end{equation} This is represented in Fig.~\ref{fig_pseudopotential}. 
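Eqs.~(\ref{eq_r}) and (\ref{eq_del_u}) can be evaluated directly. The following sketch (in Python, with illustrative values rather than the parameters of any particular trap) computes the per-axis equilibrium displacement and the shift caused by a stiffness change:

```python
# Sketch of Eqs. (eq_r) and (eq_del_u): equilibrium displacement of a trapped
# ion in a stray field E, and the displacement r_AB caused by changing the
# trap stiffness. Inputs in SI units; the numbers used below are illustrative.

def displacement(q, m, E, omega):
    """r_i = q E_i / (m omega_i**2) for each axis i."""
    return [q * Ei / (m * wi ** 2) for Ei, wi in zip(E, omega)]

def stiffness_change_displacement(q, m, E, omega_A, omega_B):
    """r_AB,i = (q E_i / m) * (1/omega_B,i**2 - 1/omega_A,i**2)."""
    return [q * Ei / m * (1.0 / wB ** 2 - 1.0 / wA ** 2)
            for Ei, wA, wB in zip(E, omega_A, omega_B)]
```

As a consistency check, `stiffness_change_displacement` equals the difference of two `displacement` evaluations at stiffness settings $B$ and $A$.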
\begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_pseudopotential.pdf} \caption{At the minimum (position 0) of a Paul trap's effective RF potential (blue lines) there is no oscillating dipolar electric field. A static dipolar offset field $\vec{E}$ (corresponding to the orange potential) displaces the trapping pseudopotential, causing an ion trapped at the displaced minimum [$x_A$ in (a), $x_B$ in (b)] to experience an oscillating dipolar electric field which drives excess micromotion. The offset field $\vec{E}$ is the same in both plots. The displacement from 0 is larger in (b) than in (a) because the trap stiffness is weaker in (b). By changing the trap stiffness and measuring the change of a trapped ion's position, information is gained about $\vec{E}$ [see Eq.~(\ref{eq_del_u})]. } \label{fig_pseudopotential} \end{figure} The trap stiffness is usually changed by altering the amplitude of the trap's oscillating electric quadrupole field, though it can also be changed by altering the amplitude of the trap's static quadrupole field \cite{Gloger2015, Schneider2005}. By measuring effects sensitive to $\vec{r}_{AB}$ ion trappers gain information about $\vec{E}$. The displacement $\vec{r}_{AB}$ is commonly monitored by imaging a trapped ion \cite{Berkeland1998, Gloger2015, Schneider2005, Feldker2020, Saito2021}. It can also be detected by measuring the strength with which transitions are driven when there is an optical field gradient \cite{Brown2007} or a magnetic field gradient \cite{Feldker2020}. These methods are limited by the imaging resolution, by optical diffraction limits and laser powers, and by achievable magnetic field gradients, respectively. In this work we use interferometry to measure $\vec{r}_{AB}$ with a resolution much less than an optical wavelength. This allows us to reduce $|\vec{E}|$ beyond state-of-the-art levels in a short time, and thereby diminish excess micromotion. 
We apply different Ramsey-interferometry pulse sequences to a single trapped ion to probe $\vec{r}_{AB}$. Using a sequence of two $\pi/2$ pulses resonant to an optical transition, we determine the projection of $\vec{r}_{AB}$ along one direction with resolution $\approx \tfrac{\lambda}{2\pi \sqrt{N}}$, where $\lambda$ is the wavelength of the laser field and $N$ is the number of experimental cycles. We improve on this resolution using sequences of $M+1$ coherent pulses, which offer an $M$-fold precision enhancement. The pulse sequences are described in Section~\ref{sec_seqs}. In Section~\ref{sec_fast_accurate} we demonstrate fast and accurate minimization of $|\vec{E}|$, and discuss the impact that changing the RF power supplied to the trap has on the trap temperature. In Section~\ref{sec_efficient_phase_estimation} we show that, by measuring with pulse sequences of different lengths, $\vec{r}_{AB}$ and $\vec{E}$ can be probed with an uncertainty below the standard quantum limit. The pulse sequences can be designed so that the results are robust against pulse area errors and laser detuning; we demonstrate this in Section~\ref{sec_dynamical_decoupling}. In Section~\ref{sec_2D_and_3D_main} we apply the methods to minimize micromotion in 2D and 3D. We also demonstrate 2D micromotion minimization using just a single laser beam, by using the interferometry method together with the commonly-used resolved sideband technique \cite{Berkeland1998}. As well as enabling micromotion minimization, one of the pulse sequences presented here demonstrates clock-synchronization protocols which involve exchange of a ticking qubit \cite{Chuang2000, deBurgh2005}. This is described in Section~\ref{sec_ticking_qubit}. 
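Combining the Ramsey resolution $\approx\lambda/(2\pi\sqrt{N})$ with the $M$-fold enhancement of the longer sequences suggests a displacement resolution of roughly $\lambda/(2\pi M\sqrt{N})$; note that this combined formula is our extrapolation from the two separate statements above, not a quoted result. A one-line sketch:

```python
import math

# Approximate 1-sigma displacement resolution of an (M+1)-pulse sequence:
# lambda / (2 pi M sqrt(N)). Combining the Ramsey resolution with the M-fold
# enhancement in this way is an extrapolation, not a quoted formula.

def displacement_resolution(wavelength, N, M=1):
    return wavelength / (2.0 * math.pi * M * math.sqrt(N))
```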
\section{Pulse sequences} \label{sec_seqs} In this section we present methods to minimize $|\vec{E}|$ using interferometry sequences, but first we introduce some key concepts: The action of a sequence of laser pulses on a transition $|g\rangle \leftrightarrow |e\rangle$ between two states of an ion can be described by a sequence of rotations on the Bloch sphere spanned by $|g\rangle$ and $|e\rangle$. When the laser field driving the pulses is resonant to the $|g\rangle \leftrightarrow |e\rangle$ transition, the rotation axes lie on the Bloch sphere's equator. The phase of the laser field during each pulse, within the ion's rotating frame, determines the azimuthal angle of each rotation axis. Within the ion's rotating frame, the phase of the laser field is fixed in time (unless a controlled phase shift is introduced), and it varies in space according to \begin{equation} \label{eq_Phi_alpha_A} \Phi_{\alpha A} = \vec{k}_\alpha \cdot \vec{r}_A + \Phi_{\alpha 0} \end{equation} where $\vec{k}_\alpha$ is the wavevector of the laser field, $\Phi_{\alpha 0}$ is a constant phase offset, and Greek letters are used to index different laser beams along different directions while Roman letters are used to index different trap stiffness settings and the corresponding ion positions. The laser phase experienced by the ion depends on the ion position. This means the rotation axis of a laser pulse and the impact the pulse has on the ion's state also depend on the ion's position. By applying a sequence of pulses and measuring the ion's state we can probe the change of ion position $\vec{r}_{AB}$ when the trap stiffness is changed from setting $A \rightarrow B$. We use Ramsey pulse sequences, comprising two $\pi/2$ pulses, as well as longer sequences with several $\pi$ pulses between two $\pi/2$ pulses. In general the sequences comprise $M+1$ pulses and have pulse areas $M \pi$, where $M$ is an integer and $M \geq 1$. 
During the pulse sequences the phase of the laser field at the ion position is changed between pulses. This is accomplished by changing the phase of the laser beam which drives the pulse, or by using a different laser beam from a different direction, or by moving the ion from one position to another. We write the laser phase experienced by the ion during the $j^\mathrm{th}$ pulse as $\phi_j+\theta_j$, where $\phi_j$ depends on both the ion position and the laser beam used to drive the pulses according to Eq.~(\ref{eq_Phi_alpha_A}), while the controlled shift $\theta_j$ results from adding a phase shift to the laser field, using, for example, an acousto-optic modulator. $\{\phi_j\}$ are general phases; later we will substitute in specific phases using Eq.~(\ref{eq_Phi_alpha_A}). If the ion is initially in state $|g\rangle$, after applying the pulse sequence the probability of measuring the ion in state $|e\rangle$ is \begin{equation} \label{eq_pe_cos_theta_T_phi_T} p = \tfrac{1}{2} \left[ 1 + \cos{\left( \phi_\mathrm{T} + \theta_\mathrm{T} \right)}\right] \end{equation} where \begin{align} \label{eq_phi_T} \phi_\mathrm{T}&=\phi_1 + 2\sum_{j=2}^{M} (-1)^{j-1} \phi_j + (-1)^M \phi_{M+1} \\ \theta_\mathrm{T}&=\theta_1 + 2\sum_{j=2}^{M} (-1)^{j-1} \theta_j + (-1)^M \theta_{M+1} + \xi_M \label{eq_theta_T} \end{align} and where $\xi_M=\pi$ ($0$) if $M$ is even (odd). The phase $\phi_\mathrm{T}$ reveals information about the ion position, or change of position. By repeatedly applying the sequence and measuring the state of the ion, the probability $p$ can be estimated, from which $\phi_\mathrm{T}$ can be estimated (the controlled phase shift $\theta_\mathrm{T}$ is known). An estimate of $\phi_\mathrm{T}$ using a single $p$ estimate and Eq.~(\ref{eq_pe_cos_theta_T_phi_T}) is sensitive to pulse area errors and decoherence. More robust estimates of $\phi_\mathrm{T}$ use two measurements of $p$ using two different $\theta_\mathrm{T}$ values. 
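The alternating sums of Eqs.~(\ref{eq_phi_T}) and (\ref{eq_theta_T}), and the resulting excitation probability of Eq.~(\ref{eq_pe_cos_theta_T_phi_T}), can be sketched as follows; the pulse phases are passed as lists of length $M+1$, and the helper names are ours:

```python
import math

# Sketch of Eqs. (eq_phi_T), (eq_theta_T) and (eq_pe_cos_theta_T_phi_T) for an
# (M+1)-pulse sequence; phases[j-1] holds the phase of the j-th pulse.

def alternating_sum(phases):
    """x_1 + 2*sum_{j=2}^{M} (-1)**(j-1) x_j + (-1)**M x_{M+1}."""
    M = len(phases) - 1
    inner = 2.0 * sum((-1) ** (j - 1) * phases[j - 1] for j in range(2, M + 1))
    return phases[0] + inner + (-1) ** M * phases[-1]

def theta_total(thetas):
    """theta_T: the same alternating sum plus xi_M (pi for even M, 0 for odd M)."""
    M = len(thetas) - 1
    xi = math.pi if M % 2 == 0 else 0.0
    return alternating_sum(thetas) + xi

def excited_probability(phis, thetas):
    """p = (1/2) [1 + cos(phi_T + theta_T)]."""
    return 0.5 * (1.0 + math.cos(alternating_sum(phis) + theta_total(thetas)))
```

For a plain Ramsey sequence ($M=1$) this reduces to $\phi_\mathrm{T}=\phi_1-\phi_2$.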
One can use \cite{Chwalla2009} \begin{equation} \phi_\mathrm{T} = \arcsin{\frac{p(\theta_\mathrm{T}=-\tfrac{\pi}{2})-p(\theta_\mathrm{T}=\tfrac{\pi}{2})}{\mathcal{C}\left[p(\theta_\mathrm{T}=-\tfrac{\pi}{2})+p(\theta_\mathrm{T}=\tfrac{\pi}{2})\right]}} \label{eq_phiT_2} \end{equation} where $\mathcal{C}$ accounts for reduction of the contrast of the oscillation in Eq.~(\ref{eq_pe_cos_theta_T_phi_T}), or one can use the two-argument arctangent function \cite{Kimmel2015} \begin{equation} \phi_\mathrm{T} = \mathrm{arctan2}\left[ p(\theta_\mathrm{T}=-\tfrac{\pi}{2})-\tfrac{1}{2}, p(\theta_\mathrm{T}=0)-\tfrac{1}{2} \right] \label{eq_phiT_3} \end{equation} Eq.~(\ref{eq_phiT_2}) performs well when $\phi_\mathrm{T}\approx 0$, and returns an estimate within a range of $\pi$, while Eq.~(\ref{eq_phiT_3}) returns an estimate of $\phi_\mathrm{T}$ within a range of $2\pi$. When $N$ experimental runs are conducted, $\tfrac{N}{2}$ using each value of $\theta_\mathrm{T}$, the statistical uncertainties of the $\phi_\mathrm{T}$ estimates are $\approx \tfrac{1}{\sqrt{N}}$; the statistical uncertainties depend on the magnitude of $\phi_\mathrm{T}$, as shown in Appendix~\ref{appendix_a}. Pulse area errors and detuning of the laser field from resonance introduce systematic errors to estimates of $\phi_\mathrm{T}$. Systematic errors can be reduced by appropriately choosing the control phases $\{\theta_j\}$, as shown in Section~\ref{sec_dynamical_decoupling}. The pulse sequences presented here build on the sequence presented in ref.~\cite{Kimmel2015}. In Method~A the coherent pulses are driven using a single laser beam and the trap stiffness is changed between pulses. In Method~B two laser beams are used and the trap stiffness is not changed between coherent pulses. In Appendix~\ref{appendix_method_c} we describe Method~C, which involves multiple laser beams with trap stiffness changes between the pulses. 
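The two estimators of Eqs.~(\ref{eq_phiT_2}) and (\ref{eq_phiT_3}) translate directly into code; a minimal sketch, in which the measured probabilities are assumed noiseless so that both estimators recover $\phi_\mathrm{T}$ exactly:

```python
import math

# Sketch of the estimators of Eqs. (eq_phiT_2) and (eq_phiT_3). p_minus,
# p_plus and p_zero are excitation probabilities measured with
# theta_T = -pi/2, +pi/2 and 0 respectively.

def phase_arcsin(p_minus, p_plus, contrast=1.0):
    """Eq. (eq_phiT_2): accurate near phi_T = 0; range of width pi."""
    return math.asin((p_minus - p_plus) / (contrast * (p_minus + p_plus)))

def phase_arctan2(p_minus, p_zero):
    """Eq. (eq_phiT_3): two-argument arctangent; range of width 2*pi."""
    return math.atan2(p_minus - 0.5, p_zero - 0.5)
```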
\subsection*{Method~A: Sequence using a single laser beam} In the first method the laser pulses are driven by a single laser beam and the trap stiffness is alternated between stiffness $A$ and stiffness $B$ between laser pulses. The sequence is presented in Fig.~\ref{fig_method_A}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_seq_b_new.pdf} \caption{Method~A involves a sequence in which the trap stiffness is changed between each coherent laser pulse. If there is an unwanted field $\vec{E}$, changing the trap stiffness causes the ion to change position and experience a different laser phase. The probability of measuring the ion in $|e\rangle$ depends on the laser phases during the pulses, and thus on $\vec{E}$. The areas of the coherent pulses (red) are indicated. The sequences in (a) and (b) are used when $M$ is odd and even respectively. The shortest sequence is a Ramsey sequence with $M=1$. } \label{fig_method_A} \end{figure} The trap stiffness changes cause the ion position to alternate between two positions, $\vec{r}_A$ and $\vec{r}_B$, and the position-dependent phase $\phi_j$ alternates between two values $\Phi_{\alpha A}$ and $\Phi_{\alpha B}$. 
Using Eq.~(\ref{eq_Phi_alpha_A}), the difference between the phase values is \begin{equation} \label{eq_Phi_alpha_A_Phi_alpha_B} \Phi_{\alpha A} - \Phi_{\alpha B} = \vec{k}_\alpha \cdot \left( \vec{r}_A - \vec{r}_B \right) \end{equation} and from Eqs.~(\ref{eq_phi_T}) and (\ref{eq_del_u}) \begin{align} \label{phi_T_phi_alphaA_phi_alphaB} \phi_\mathrm{T} &= M \left( \Phi_{\alpha A} - \Phi_{\alpha B} \right) \\ \label{eq_phi_T_k_r_AB} &= M \vec{k}_\alpha \cdot \left( \vec{r}_A - \vec{r}_B \right) \\ &= M \sum_i \frac{q k_{\alpha i} E_i}{m} \left( \frac{1}{{\omega_{Ai}}^2} - \frac{1}{{\omega_{Bi}}^2} \right) \label{eq_phiT_E} \end{align} From Eq.~(\ref{eq_phi_T_k_r_AB}) we see $\phi_\mathrm{T}$ reveals the change in equilibrium position along the direction of $\vec{k}_\alpha$, and from Eq.~(\ref{eq_phiT_E}) we see $\phi_\mathrm{T}$ is sensitive to $\vec{E}$ along the direction $\vec{d}$, which has the components \begin{equation} \label{eq_d} d_i = k_{\alpha i} \left( \frac{1}{{\omega_{Ai}}^2} - \frac{1}{{\omega_{Bi}}^2} \right) \end{equation} Thus, by probing and minimizing $\phi_\mathrm{T}$, $\vec{E}$ can be minimized. For convenience we define $\phi_\mathrm{PD} \equiv \Phi_{\alpha A} - \Phi_{\alpha B}$; the phase difference $\phi_\mathrm{PD}$ depends on the path length difference from the laser source to $\vec{r}_A$ and from the laser source to $\vec{r}_B$. From Eqs.~(\ref{eq_pe_cos_theta_T_phi_T}), (\ref{phi_T_phi_alphaA_phi_alphaB}) and (\ref{eq_phiT_E}) \begin{align} \label{eq_pe_M_phiPD} p & = \tfrac{1}{2}\left[1+\cos{\left(M\phi_\mathrm{PD}+\theta_\mathrm{T}\right)}\right] \\ \label{eq_pe_M_E} & = \tfrac{1}{2} \left\{1+ \cos{\left[ M \sum_i\frac{q k_{\alpha i} E_i}{m} \left( \frac{1}{{\omega_{Ai}}^2} - \frac{1}{{\omega_{Bi}}^2} \right) + \theta_\mathrm{T} \right]} \right\} \end{align} With increasing $M$ the precision of a $\phi_\mathrm{PD}$ estimate is improved, at the expense of reducing the range within which $\phi_\mathrm{PD}$ can be determined. 
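Eqs.~(\ref{eq_phiT_E}) and (\ref{eq_d}) can be sketched numerically as follows; the function names and the numbers used are illustrative, not trap parameters:

```python
# Sketch of Eqs. (eq_phiT_E) and (eq_d) for Method A. k is the laser
# wavevector, E the stray field, omega_A and omega_B the two trap stiffness
# settings (one entry per axis).

def method_A_phase(M, q, m, k, E, omega_A, omega_B):
    """phi_T = M * sum_i (q k_i E_i / m)(1/omega_A,i**2 - 1/omega_B,i**2)."""
    return M * sum(q * ki * Ei / m * (1.0 / wA ** 2 - 1.0 / wB ** 2)
                   for ki, Ei, wA, wB in zip(k, E, omega_A, omega_B))

def sensitive_direction(k, omega_A, omega_B):
    """d_i = k_i (1/omega_A,i**2 - 1/omega_B,i**2)."""
    return [ki * (1.0 / wA ** 2 - 1.0 / wB ** 2)
            for ki, wA, wB in zip(k, omega_A, omega_B)]
```

The linear growth of $\phi_\mathrm{T}$ with $M$ is what trades measurement range for precision.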
$\phi_{\mathrm{PD}}$ can be efficiently determined with a Heisenberg scaling by conducting measurements using different values of $M$; this is discussed further in Section~\ref{sec_efficient_phase_estimation}. We experimentally demonstrate the workings of this method using a single $\mathrm{^{88}Sr^+}$ ion confined in a linear Paul trap. A 674\,nm laser field couples a Zeeman sublevel of the $5^2S_{1/2}$ ground state $|g\rangle$ with a Zeeman sublevel of the metastable $4^2D_{5/2}$ state $|e\rangle$. To initialise the ion in $|g\rangle$ we employ Doppler cooling as well as optical pumping on a transition between $5^2S_{1/2}$ and $4^2D_{5/2}$ sublevels. In some experiments we also employ sideband cooling. State detection involves probing the ion with 422\,nm laser light near-resonant to the $5^2S_{1/2} \leftrightarrow 5^2P_{1/2}$ transition. The trap stiffness is changed between the laser pulses by changing the amplitude of the RF signal applied to the trap electrodes and thus changing the amplitude of the trap's oscillating quadrupole field. The electronics are described in detail in Appendix~\ref{sec_electronics}. A component of $\vec{E}$ is varied by changing the voltage applied to a compensation electrode, and the effect on $p$ is measured in a two-pulse Ramsey sequence ($M=1$). The results are shown in Fig.~\ref{fig_seq1}(a). \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_seq1.pdf} \caption{Micromotion minimization using Method~A. (a)~The population measured in $|e\rangle$ depends sinusoidally on the offset voltage applied to a micromotion compensation electrode and the offset field strength. (b)~The phase difference $\phi_\mathrm{PD}$ depends linearly on the offset voltage, and is zero when micromotion is minimized. The green data was calculated from the datasets in (a) using Eq.~(\ref{eq_phiT_3}). $\phi_\mathrm{PD}$ responds more strongly to the offset field when the trap stiffness is changed by a larger amount. 
The solid lines in (a) and (b) are respectively sinusoidal and linear fits to the data. Error bars represent quantum projection noise (1$\sigma$ confidence interval). The error bars are often smaller than the marker size. } \label{fig_seq1} \end{figure} As expected from Eq.~(\ref{eq_pe_M_E}) $p$ shows a sinusoidal dependence on the changes made to $\vec{E}$. Fig.~\ref{fig_seq1}(a) shows $p$ values when two different values of $\theta_\mathrm{T}$ were used. From this data and using Eq.~(\ref{eq_phiT_3}) $\phi_\mathrm{PD}$ was calculated; the results are shown in Fig.~\ref{fig_seq1}(b). The figure shows that $\phi_{\mathrm{PD}}$ has a linear dependence on a component of $\vec{E}$, and that $\phi_{\mathrm{PD}}=0$ when the compensation electrode offset voltage is zero. The point where the offset voltage is zero was independently determined using the resolved sideband technique \cite{Berkeland1998}. Throughout this work compensation electrode offset voltages are shown relative to the optimal values as determined using the resolved sideband method. The figure also shows that the linear dependence of $\phi_{\mathrm{PD}}$ on a component of $\vec{E}$ is stronger when the change of the trap stiffness is larger, as expected from the $({\omega_{Ai}}^{-2}-{\omega_{Bi}}^{-2})$ term in Eq.~(\ref{eq_phiT_E}). The measurements involved reducing the radial secular frequencies from ${\sim}2\pi\times 1.5\,\mathrm{MHz}$ to ${\sim}2\pi\times 600\,\mathrm{kHz}$ for the green dataset and to ${\sim}2\pi\times 400\,\mathrm{kHz}$ for the purple dataset. The axial secular frequency was fixed at ${\sim}2\pi\times 1.0\,\mathrm{MHz}$. Because Eq.~(\ref{eq_pe_M_E}) is cyclic it is possible to achieve $\phi_{\mathrm{PD}}=0$ when $|\vec{E}|$ is not minimized, as seen for the purple dataset near $\pm 2\,\mathrm{V}$. To verify that $|\vec{E}|$ is truly minimized, one can check that $\phi_{\mathrm{PD}}$ remains zero when different trap stiffness changes are used.
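Eq.~(\ref{eq_phiT_3}) itself is not reproduced in this section; the sketch below shows one way $\phi_\mathrm{PD}$ could be recovered from two $p$ measurements taken with control phases a quarter cycle apart, based solely on Eq.~(\ref{eq_pe_M_phiPD}). The estimator and the chosen $\theta_\mathrm{T}$ values are assumptions for illustration:

```python
import math

def phase_from_quadratures(p_cos, p_sin, M=1):
    """Recover phi_PD from two population measurements (illustrative).

    p_cos: p measured with theta_T = 0      -> [1 + cos(M*phi_PD)] / 2
    p_sin: p measured with theta_T = -pi/2  -> [1 + sin(M*phi_PD)] / 2
    Both follow from p = [1 + cos(M*phi_PD + theta_T)] / 2.
    """
    return math.atan2(2 * p_sin - 1, 2 * p_cos - 1) / M

# Round trip with ideal, noise-free populations:
phi_true = 0.4
p_cos = 0.5 * (1 + math.cos(phi_true))
p_sin = 0.5 * (1 + math.sin(phi_true))
print(phase_from_quadratures(p_cos, p_sin))  # ~ 0.4
```

Note that for $M>1$ the recovered phase is only defined modulo $2\pi/M$, which is the range-versus-precision trade-off discussed above.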
The probability $p$ of measuring the ion in $|e\rangle$ becomes more sensitive to $\phi_\mathrm{PD}$, and thus to a component of $\vec{E}$, as the sequence length $M$ is increased. To show this we measured the dependence of $p$ on the compensation electrode offset voltage with sequences of different lengths $M$; the results are shown in Fig.~\ref{fig_seq_multi_pulse_data}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_vary_M_paper.pdf} \caption{Method~A becomes more sensitive to the compensation electrode offset voltage and to the offset field $\vec{E}$ with increasing sequence length $M$. Solid lines represent sinusoidal fits to the data. The oscillation contrast decreased as $M$ was increased due to the short coherence time of our system. The $M=2$ dataset has a negative gradient at zero offset voltage because it was measured with $\theta_\mathrm{T}=\tfrac{\pi}{2}$ while the other measurements used $\theta_\mathrm{T}=-\tfrac{\pi}{2}$; for better comparison of the datasets we inverted the x-axis of the $M=2$ dataset. Error bars represent quantum projection noise (1$\sigma$ confidence interval). } \label{fig_seq_multi_pulse_data} \end{figure} The oscillation contrast decreased with increasing $M$, due to the limited coherence time in our experiment ($\sim 500\,\mathrm{\mu s}$ \cite{Lindberg2020}). \subsection*{Method~B: Sequence using a fixed trap stiffness} In the sequence described in this section the trap stiffness is fixed while the coherent pulses are applied, and alternate pulses are driven by two different laser beams. This is represented in Fig.~\ref{fig_method_B}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_seq_d_new.pdf} \caption{Method~B involves coherent pulse sequences in which alternate pulses are driven by two different laser beams (with wavevectors $\vec{k}_\alpha$ and $\vec{k}_\beta$), while the trap stiffness is fixed. 
The probability of measuring the ion in $|e\rangle$ reveals the phase difference between the laser fields at the ion position. We measure the phase differences $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$ at the ion equilibrium positions $\vec{r}_A$ and $\vec{r}_B$ when two different trap stiffness settings ($A$ and $B$) are used. The quantity $\phi_\mathrm{PD}^A - \phi_\mathrm{PD}^B$ depends on $\vec{r}_{AB}$ and $\vec{E}$. The sequences in (a) and (b) are used when $M$ is odd and even respectively. The shortest sequence is a Ramsey sequence with $M=1$. } \label{fig_method_B} \end{figure} If the ion is at position $\vec{r}_A$ and alternate pulses are driven by two different laser beams, with wavevectors $\vec{k}_\alpha$ and $\vec{k}_\beta$, the phase $\phi_j$ alternates between two values $\Phi_{\alpha A}$ and $\Phi_{\beta A}$. Using Eq.~(\ref{eq_Phi_alpha_A}), the difference between these phase values is \begin{equation} \Phi_{\alpha A}-\Phi_{\beta A} = \left( \vec{k}_\alpha - \vec{k}_\beta \right) \cdot \vec{r}_A + \Phi_{\alpha 0} - \Phi_{\beta 0} \end{equation} If the two laser beams are derived from the same source, the phase difference depends on the path length difference from the point where the beams are split to the ion position $\vec{r}_A$. For convenience, we define $\phi_\mathrm{PD}^A \equiv \Phi_{\alpha A}-\Phi_{\beta A}$. From Eq.~(\ref{eq_phi_T}) \begin{align} \begin{split} \phi_\mathrm{T} &= M \left( \Phi_{\alpha A}-\Phi_{\beta A} \right) \\ \phi_\mathrm{T} &= M \phi_\mathrm{PD}^A \end{split} \end{align} If the sequence is conducted using the fixed trap stiffness $B$ then \begin{align} \begin{split} \phi_\mathrm{T} &= M \left( \Phi_{\alpha B}-\Phi_{\beta B} \right) \\ &= M \left[ \left( \vec{k}_\alpha - \vec{k}_\beta \right) \cdot \vec{r}_B + \Phi_{\alpha 0} - \Phi_{\beta 0} \right] \\ &= M \phi_\mathrm{PD}^B \end{split} \end{align} where $\phi_\mathrm{PD}^B \equiv \Phi_{\alpha B}-\Phi_{\beta B}$. 
By conducting the sequence using each of the two trap stiffness settings, the phases $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$ can be estimated, and therefrom the quantity \begin{align} \begin{split} \label{eq_phi_PD_AB} \phi_\mathrm{PD}^A -& \phi_\mathrm{PD}^B = \left( \vec{k}_\alpha - \vec{k}_\beta \right) \cdot \left( \vec{r}_A - \vec{r}_B \right) \\ &= \sum_i \frac{q (k_{\alpha i}-k_{\beta i}) E_i}{m} \left( \frac{1}{{\omega_{Ai}}^2} - \frac{1}{{\omega_{Bi}}^2} \right) \end{split} \end{align} where, in the second line, Eq.~(\ref{eq_del_u}) is used. $\phi_{\mathrm{PD}}^A - \phi_{\mathrm{PD}}^B$ reveals the difference between the ion equilibrium positions $\vec{r}_{AB}$ along the direction $\vec{k}_\alpha-\vec{k}_\beta$. Thus, $\phi_\mathrm{PD}^A - \phi_\mathrm{PD}^B$ is sensitive to $\vec{E}$ along the direction $\vec{d}$ which has components \begin{equation} \label{eq_d2} d_i = (k_{\alpha i}-k_{\beta i}) \left( \frac{1}{{\omega_{Ai}}^2} - \frac{1}{{\omega_{Bi}}^2} \right) \end{equation} We demonstrated this method in our system; the results are shown in Fig.~\ref{fig_PD_A_B}. The two laser beams are derived from the same source; each is passed through a separate acousto-optic modulator (allowing each beam to be separately switched on and off, and allowing controlled phase shifts $\{\theta_j\}$ to be introduced) and then guided through an optical fiber before being focussed onto the ion. The path length difference from the point where the beams are separated to the experimental chamber varies in time due to temperature fluctuations and mechanical vibrations. Because of this $\left( \Phi_{\alpha 0} - \Phi_{\beta 0} \right)$ and thus $\phi_{\mathrm{PD}}^A$ and $\phi_{\mathrm{PD}}^B$ vary in time.
We measured the drift of $\phi_{\mathrm{PD}}^A$ and $\phi_{\mathrm{PD}}^B$ in time using the sequence of Fig.~\ref{fig_method_B} with $M=1$ by interleaving measurements using trap settings $A$ and $B$, and control phases $\theta_\mathrm{T}=0$ and $-\tfrac{\pi}{2}$, and using Eq.~(\ref{eq_phiT_3}). The results are shown in Fig.~\ref{fig_PD_A_B}(a). \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_seq3.pdf} \caption{Demonstration of Method~B. (a)~$\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$ were repeatedly measured over 100\,s. The phase difference $\phi_\mathrm{PD}^A-\phi_\mathrm{PD}^B$ is stable in time, despite the limited interferometric stability between the two beams, which causes $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$ to drift. (b)~$\phi_\mathrm{PD}^A-\phi_\mathrm{PD}^B$ has a linear dependence on the offset voltage applied to a micromotion compensation electrode. Error bars representing quantum projection noise (1$\sigma$ confidence interval) are generally smaller than the marker size. } \label{fig_PD_A_B} \end{figure} Because $\phi_{\mathrm{PD}}^A$ and $\phi_{\mathrm{PD}}^B$ do not drift too fast, and because the difference between the ion equilibrium positions $\vec{r}_{AB}$ is stable, the difference between the estimates $\phi_{\mathrm{PD}}^A-\phi_{\mathrm{PD}}^B$ is stable in time, as shown in Fig.~\ref{fig_PD_A_B}(a). We varied the voltage applied to a compensation electrode and measured the linear response of $\phi_{\mathrm{PD}}^A - \phi_{\mathrm{PD}}^B$; this is shown in Fig.~\ref{fig_PD_A_B}(b). This result is consistent with Eq.~(\ref{eq_phi_PD_AB}), which describes a linear relationship between $\phi_{\mathrm{PD}}^A - \phi_{\mathrm{PD}}^B$ and a component of $\vec{E}$. Thus, the quantity $\phi_{\mathrm{PD}}^A - \phi_{\mathrm{PD}}^B$ can be used for minimizing micromotion.
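The common-mode rejection seen in Fig.~\ref{fig_PD_A_B}(a) can be illustrated with a toy simulation: a random-walk interferometer phase enters $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$ identically and cancels in the difference. The drift magnitude and geometric phases below are hypothetical:

```python
import random

random.seed(1)

# Toy model of the drifting phases: the geometric parts (k_alpha - k_beta).r_A
# and (k_alpha - k_beta).r_B are fixed, while a common path-length phase
# drifts as a random walk. All numbers are hypothetical.
geo_A = 0.30        # rad, hypothetical value at trap setting A
geo_B = 0.10        # rad, hypothetical value at trap setting B

drift = 0.0
diffs = []
for _ in range(100):
    drift += random.gauss(0.0, 0.05)   # interferometer drift between runs
    phi_A = geo_A + drift              # what a setting-A measurement returns
    phi_B = geo_B + drift              # what a setting-B measurement returns
    diffs.append(phi_A - phi_B)        # common-mode drift cancels

print(min(diffs), max(diffs))          # both ~ geo_A - geo_B = 0.2
```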
Longer pulse sequences (with larger $M$) offer more precise measurement of $\phi_{\mathrm{PD}}^A - \phi_{\mathrm{PD}}^B$, though they also require better interferometric stability between the two beams. This can be achieved using active stabilisation \cite{Ma1994}. \section{Fast and accurate micromotion minimization} \label{sec_fast_accurate} Using Method~A with $M=8$ we minimized the strength of the offset field $\vec{E}$ quickly and accurately. This is shown by the data in Fig.~\ref{fig_allan}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_allan.pdf} \caption{Using Method~A we minimized the magnitude of the offset field $\vec{E}$ quickly and accurately. With increasing measurement time $t$ the residual field strength decreased as $t^{-1/2}$, until around 100\,s when drifts caused the accuracy to worsen. Dashed lines are $t^{-1/2}$ fits. Error bars represent the standard error of the mean (1$\sigma$ confidence interval). } \label{fig_allan} \end{figure} The experimental runs alternated between using two different laser beams from two different directions; in this way we probed $\vec{E}$ in two dimensions, i.e.\ the plane of the oscillating field of a linear Paul trap. We see that with increasing measurement time $t$ the residual electric field strength decreased as $t^{-1/2}$, until around 100\,s, when drifts began to dominate. The drifts were likely caused by changes in the offset field $\vec{E}$ and instability of the voltage sources used to apply voltages to the compensation electrodes. We obtained the data as follows: First we measured the rate of change of $\phi_\mathrm{PD}$ with respect to compensation electrode voltage, in much the same way as shown in Fig.~\ref{fig_seq1}(b). We did this for $\phi_\mathrm{PD}$ measurements using the two different laser beams and two different compensation electrodes.
Then we repetitively measured $\phi_\mathrm{PD}$ using the two different laser beams, and every 11\,s we updated the voltages of the two compensation electrodes so as to minimize $|\vec{E}|$ in two dimensions. The measurements continued over 18~minutes. By analysing data collected over this time, we see how well the magnitude of the electric field $\vec{E}$ can be minimized with different measurement times. The analysis is much the same as that used to calculate the overlapping Allan deviation of fractional frequency data from a clock. After 75\,s of measurement the 2D residual static field strength was $(3.5\pm0.3)\,\mathrm{mV\,m^{-1}}$, which is, as far as we are aware, lower than the residual static field strength achieved using any other micromotion minimization technique, and also lower than the residual field achieved in a system of optically-trapped ions \cite{Huber2014}. The field uncertainty decreased with increasing measurement time as $(31.1\pm1.0)\,\mathrm{mV\,m^{-1}\,Hz^{-1/2}}$. The horizontal component of $\vec{E}$ was minimized faster than the vertical component, since the beam probing the horizontal component has a larger projection onto the plane of the oscillating field than the beam probing the vertical component. On the second y-axis of Fig.~\ref{fig_allan} we show the corresponding strength of the residual oscillating dipole field experienced by the ion, which arises because the offset field $\vec{E}$ displaces the ion from the oscillating quadrupole field null. This assumes that there is no additional oscillating dipole field in our system which arises from a phase mismatch of the voltages applied to the trap electrodes (quadrature micromotion) \cite{Berkeland1998}. It is worth noting that a horizontal (vertical) offset field $\vec{E}$ causes the ion to experience a vertical (horizontal) oscillating dipole field (see Fig.~\ref{fig_sb_method}).
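Such a windowed analysis can be sketched as an overlapping Allan deviation of a series of field estimates. The synthetic white-noise data below simply reproduce the expected $t^{-1/2}$ averaging behaviour and are not measurement data:

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of equally spaced samples y at
    averaging factor m (m consecutive samples per average)."""
    means = np.convolve(y, np.ones(m) / m, mode="valid")  # overlapping averages
    d = means[m:] - means[:-m]          # differences of adjacent averages
    return np.sqrt(0.5 * np.mean(d**2))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 4000)          # synthetic white-noise field estimates
ratio = overlapping_adev(y, 16) / overlapping_adev(y, 1)
print(ratio)                            # ~ 1/4: white noise averages as t^(-1/2)
```

For purely white measurement noise the deviation falls as the square root of the averaging factor, matching the $t^{-1/2}$ trend in Fig.~\ref{fig_allan} before drifts dominate.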
The experiments were evenly split between using two different laser beams and four different sets of control phases $\{\theta_j\}$. The use of four sets of controllable phases diminished systematic errors, as is described in Section~\ref{sec_dynamical_decoupling}. During these measurements we reduced the ion's radial secular frequencies from $2\pi\times1.5\,\mathrm{MHz}$ to $2\pi\times840\,\mathrm{kHz}$, while keeping the axial secular frequency at $2\pi\times350\,\mathrm{kHz}$. The oscillating quadrupole field's frequency was $2\pi \times 18.1\,\mathrm{MHz}$. Faster minimization of $\vec{E}$ could be achieved using a larger change of the trap stiffness or by using a longer sequence (with higher $M$). A phase estimation sequence with over 1000 pulses has been conducted in an experimental setup with a longer coherence time than ours \cite{Rudinger2017}. With such long sequences care must be taken to mitigate heating of the ion's motion caused by the trap stiffness changes. Ion heating causes pulse area errors, which in turn can cause systematic errors in $\phi_\mathrm{PD}$ estimates. This can be mitigated by changing the trap stiffness sufficiently slowly, or by employing sympathetic cooling \cite{Barrett2003, Home2009}. Alternatively $\vec{E}$ can be probed using Method~B, which does not involve changes of the trap stiffness between the pulses. After just $11\,\mathrm{s}$ of measurement time we achieve a low residual oscillating dipole field, which would cause a second-order Doppler shift on the $\mathrm{^{88}Sr^+}$ clock transition below the $10^{-22}$ level \cite{Berkeland1998, Keller2015}. Thus, the micromotion minimization methods presented here stand to benefit precision spectroscopy experiments. \subsection*{Changing the RF power applied to the trap electrodes affects the trap temperature}
However, in precision spectroscopy experiments, care should be taken to mitigate unwanted changes of the trap temperature: Changing the trap stiffness by changing the RF power supplied to the trap electrodes affects the RF power dissipated in the system, which, in turn, affects the trap temperature. Changes of the trap temperature affect the blackbody radiation field experienced by the ion. Further, thermal expansion can shift the relative positions of trap electrodes and affect $\vec{E}$ \cite{Gloger2015}, and also cause beam-pointing errors. During the measurements used to produce the data shown in Fig.~\ref{fig_allan} we did not make efforts to mitigate trap temperature changes. During these measurements the RF signal applied to the trap electrodes was reduced for 4\% of the time. We estimate that the decrease of the average RF power caused the temperature of the ion's surroundings to decrease by $\sim 10\,\mathrm{mK}$ \cite{Guggemos2017}, causing a blackbody radiation shift on the $\mathrm{^{88}Sr^+}$ clock transition of $\sim 10^{-19}$ \cite{Dube2014}. To mitigate trap temperature changes the average RF power used during the micromotion minimization sequences should equal the RF power used during the trap's normal operation \cite{Gloger2015}, for instance as sketched in Fig.~\ref{fig_trap_temp_schematic}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_trap_temp_schematic.pdf} \caption{To mitigate unwanted changes of the trap temperature, the average RF power used during the interferometry sequences should equal the RF power used during normal trap operation. The RF power profiles sketched here are suitable for use with Method~A and $M=8$. } \label{fig_trap_temp_schematic} \end{figure} Alternatively, the trap stiffness can be changed during the micromotion minimization sequences by changing the amplitude of the trap's static quadrupole field \cite{Schneider2005, Gloger2015}.
\section{Micromotion minimization with sub-standard quantum limit scaling using a binary search algorithm} \label{sec_efficient_phase_estimation} In this section we use a binary search algorithm (based on the \textit{Robust Phase Estimation} technique \cite{Kimmel2015}) to efficiently measure $\phi_\mathrm{PD}$ of Method~A with an uncertainty below the standard quantum limit (SQL). The same methodology can be used together with Methods B or C (Appendix~\ref{appendix_method_c}). This phase estimation technique can be used to achieve Heisenberg scaling; it is easy to implement, the data analysis is straightforward, and the protocol is non-adaptive. While adaptive phase estimation techniques allow for more accurate phase measurements at the Heisenberg limit \cite{Boixo2008, Higgins2007}, they require measurement settings to be updated on the fly, which is not possible with our current control system \cite{Pham2005, Schindler2008, Heinrich2020}. The binary search algorithm works as follows: Starting with an unknown phase $\phi_\mathrm{PD}$ from within the range $[-\pi, \pi]$, a set of measurements is first conducted using a sequence with $M=1$ to limit $\phi_\mathrm{PD}$ to a range of width $\pi$, then a set of measurements with $M=2$ narrows the range to $\pi/2$, then a set of measurements with $M=4$ narrows the range to $\pi/4$, and so on. The $j^\mathrm{th}$ set of measurements uses a sequence with $M_j=2^{j-1}$ to narrow the range to a width of $\pi/2^{j-1}$. The technique is illustrated in Fig.~\ref{fig_RPE_explanation}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_RPE_explanation.pdf} \caption{Illustration of the binary search algorithm (based on the \textit{Robust Phase Estimation} technique \cite{Kimmel2015}). By conducting measurements using different sequence lengths $\phi_{\mathrm{PD}}$ can be determined efficiently.
The shaded regions of width $\pi/M$ indicate values of $\phi_{\mathrm{PD}}$ consistent with the measurement results. Shorter sequences allow $\phi_{\mathrm{PD}}$ to be reckoned from within a larger range, but they are less precise. Longer sequences are more precise, but they allow $\phi_{\mathrm{PD}}$ to be reckoned from within only a narrow range. By combining the results $\phi_{\mathrm{PD}}$ can be determined with high precision from a broad range. The orange arrows indicate the range of $\phi_{\mathrm{PD}}$ values consistent with all the measurement results. } \label{fig_RPE_explanation} \end{figure} We demonstrated the efficiency of this protocol as follows: We carried out 59,000 measurement runs, split evenly between five sequence lengths $M \in \{1,2,4,8,16\}$. The measurements were also split between using two different $\theta_\mathrm{T}$ values. We then analysed the estimates of $\phi_\mathrm{PD}$ given by sub-sampled datasets. If we consider first the results using only $M=1$ data, the error in the estimates of $\phi_\mathrm{PD}$ decreased with the number of measurements in the sample $N$ as $N^{-1/2}$. This is shown by the blue data in Fig.~\ref{fig_rpe}(a). \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_rpe_all_pi_half_in_cost.pdf} \caption{$\phi_\mathrm{PD}$ can be efficiently measured using a binary search algorithm. Combining the results of measurements using different sequence lengths (orange data) is more efficient than using a fixed sequence length (blue data). This is true both when the number of measurements conducted is considered, as in (a), and when the total pulse area is considered, as in (b). Using the binary search algorithm a $\phi_\mathrm{PD}$-uncertainty lower than the standard quantum limit (SQL) is achieved. Error bars represent the standard error of the mean (1$\sigma$ confidence interval). 
} \label{fig_rpe} \end{figure} The binary search algorithm allows improved estimates to be achieved using fewer measurements, as shown by the orange data. The first orange datapoint describes the error in estimates of $\phi_\mathrm{PD}$ using 40 measurements split evenly between different sequence lengths $M\in\{1,2\}$. The second orange datapoint describes the error in estimates using 60 measurements split evenly between sequence lengths $M\in\{1,2,4\}$. And so on for the third and fourth orange datapoints. The scaling of the orange data with the number of measurements is well described by a power law. The deviation from the power law for the final datapoint (using sequences with up to $M=16$) is due to the limited coherence time of our experiment and to the non-zero probability of error in the results of the measurements with $M<16$, which contribute to the overall estimate. The ``true'' value of $\phi_\mathrm{PD}$ was estimated using all 59,000 measurements. The duration of each experimental run was dominated by cooling and fluorescence detection, rather than the duration of the coherent pulses. Thus, the x-axis in Fig.~\ref{fig_rpe}(a) reflects the total measurement time. For long sequences with large $M$ the total measurement time would be better represented by the total area of coherent pulses $\mathcal{A}$ than by the number of measurements \cite{Higgins2009}. We therefore rescale the x-axis of Fig.~\ref{fig_rpe}(a) to view the scaling of the same data with the pulse area; this is shown in Fig.~\ref{fig_rpe}(b). Here we see that the binary search algorithm allows us to estimate $\phi_\mathrm{PD}$ with an error below the SQL $\sqrt{\frac{\pi}{\mathcal{A}}}$ \cite{Giovannetti2004}. A better scaling would be achieved in an experimental setup with a longer coherence time. Also, to achieve Heisenberg scaling the different measurement sets (parameterised by $j$) need to use different numbers of measurements $N_j$ \cite{Kimmel2015}.
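The stage-by-stage narrowing can be sketched in a few lines. The update rule below (pick the phase branch closest to the running estimate) is a simplified stand-in for the published combination procedure of Ref.~\cite{Kimmel2015}, not a reproduction of it:

```python
import math

def wrap(x):
    """Wrap an angle into (-pi, pi]."""
    return x - 2 * math.pi * math.ceil((x - math.pi) / (2 * math.pi))

def combine_stages(raw):
    """Combine raw readings into one phi_PD estimate (simplified sketch).

    raw[j] is the measured value of (M_j * phi_PD) mod 2*pi for sequence
    length M_j = 2**j. Each stage halves the remaining ambiguity range.
    """
    phi = raw[0]                        # M = 1 places phi within (-pi, pi]
    for j in range(1, len(raw)):
        M = 2 ** j
        # choose the branch of raw[j] / M closest to the running estimate
        k = round((M * phi - raw[j]) / (2 * math.pi))
        phi = (raw[j] + 2 * math.pi * k) / M
    return phi

phi_true = 0.3
raw = [wrap((2 ** j) * phi_true) for j in range(5)]   # M = 1, 2, 4, 8, 16
print(combine_stages(raw))              # ~ 0.3
```

In the noise-free case above the final estimate inherits the precision of the $M=16$ stage while keeping the unambiguous $(-\pi,\pi]$ range of the $M=1$ stage.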
For readers interested in using the binary search algorithm in their systems we reproduce an algorithm for combining the results of different measurement sets from \cite{Rudinger2017} in Appendix~\ref{appendix_rpe_algorithm}. \section{Robust estimates of \texorpdfstring{$\phi_\mathrm{T}$}{phases} using suitable control phases \texorpdfstring{$\{\theta_j\}$}{}} \label{sec_dynamical_decoupling} Changing the overall control phase $\theta_\mathrm{T}$ [Eq.~(\ref{eq_theta_T})] shifts the dependence of $p$ on $\phi_\mathrm{T}$, as can be appreciated from Eq.~(\ref{eq_pe_cos_theta_T_phi_T}) and Fig.~\ref{fig_seq1}(a). By appropriately choosing the individual phases $\theta_j$, estimates of $\phi_\mathrm{T}$ can be made robust against laser detuning and pulse area errors. Pulse area errors can arise if the sequence includes fast changes of the trap stiffness, which can cause motional heating, which in turn modifies the $|g\rangle \leftrightarrow |e\rangle$ coupling strength. They can also be caused by the change in laser light intensity when the ion changes position within a tightly-focussed laser beam. We used simulations to test different sets of control phases $\{\theta_j\}$ when the pulse sequences from Method~A and Method~B are used, and we found $\phi_\mathrm{T}$ (and thus $\phi_\mathrm{PD}$, $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$) can be robustly estimated in the presence of these errors using \begin{align} \text{Settings I: \quad} &\theta_j =\begin{cases} 0 &\text{even $j$} \\ -\tfrac{\pi}{2} &\text{odd $j$, } 1<j<M+1 \\ \pi &j=M+1 \end{cases} \\ \text{Settings II: \quad} &\theta_j=\begin{cases} 0 &\text{even $j$} \\ \tfrac{\pi}{2} &\text{odd $j$, } 1<j<M+1 \\ \pi &j=M+1 \end{cases} \end{align} and $\theta_1 \in \{\pi, \tfrac{\pi}{2}\}$, and where $M$ is an even integer and $\phi_\mathrm{T}$ is small.
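For concreteness, the piecewise definitions of settings I and II can be tabulated programmatically; the sketch below simply writes out the $\theta_j$ values given above, with $\theta_1$ supplied separately:

```python
import math

def control_phases(M, settings="I", theta_1=math.pi):
    """Return [theta_1, ..., theta_{M+1}] for settings I or II.

    theta_j = 0 for even j; -pi/2 (settings I) or +pi/2 (settings II) for
    odd j with 1 < j < M+1; and theta_{M+1} = pi. M is assumed even, and
    theta_1 is chosen from {pi, pi/2}.
    """
    sign = -1.0 if settings == "I" else 1.0
    thetas = [theta_1]
    for j in range(2, M + 1):
        thetas.append(0.0 if j % 2 == 0 else sign * math.pi / 2)
    thetas.append(math.pi)              # final pulse, j = M + 1
    return thetas

print(control_phases(4, "I"))           # [pi, 0, -pi/2, 0, pi] as floats
```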
Using these settings $\phi_\mathrm{T}$ can be estimated using \begin{align} \begin{split} \phi_{\mathrm{T}} = \mathrm{arctan2} \Big\{ & (-1)^{M/2} \left[ p{\left( \theta_1=\tfrac{\pi}{2} \right)} - \tfrac{1}{2} \right], \\ & (-1)^{M/2} \left[ p{\left( \theta_1=\pi \right)} - \tfrac{1}{2} \right] \Big\} \end{split} \end{align} We experimentally tested the robustness of $\phi_\mathrm{PD}$ estimates by introducing different errors to our system. The results are shown in Fig.~\ref{fig_robustness}. \begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{figs/fig_robustness.pdf} \caption{Estimates of $\phi_\mathrm{PD}$ (and $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$) are robust against errors when the control phase $\{\theta_j\}$ settings I and II are used. The robustness is improved by averaging the estimates obtained using settings I and II. (a)~The detuning of the laser from the $|g\rangle \leftrightarrow |e\rangle$ resonance was scanned. (b)-(c)~A pulse area error on even-indexed pulses was introduced and varied; in (b) no additional errors were added, while in (c) a 10\% error was introduced to the area of odd-indexed pulses. Dashed lines indicate simulation results, which show good agreement with the experimental results; the only free parameter was a phase offset used in (a) which accounts for a weak offset field $\vec{E}$. Error bars represent quantum projection noise (1$\sigma$ confidence interval). } \label{fig_robustness} \end{figure} First we measured $\phi_\mathrm{PD}$ of Method~A with $M=16$ using different laser detunings. One might expect a detuning $\Delta$ to shift $\phi_\mathrm{T}$ by $\Delta \cdot T$ and $\phi_\mathrm{PD}$ by $\Delta \cdot T/M$, where $T$, the duration of the coherent pulse sequence, was $1.6\,\mathrm{ms}$ in our experiment. The $\phi_\mathrm{PD}$ estimates using settings I and II were much more stable than this, as shown in Fig.~\ref{fig_robustness}(a).
Furthermore, the estimate of $\phi_\mathrm{PD}$ becomes still more stable by averaging the estimates obtained with settings I and II. Then we investigated the robustness in the presence of pulse area errors. We conducted experiments with a pulse area error on the even-indexed pulses. The phase estimates were stable when the magnitude of this error was varied, as shown in Fig.~\ref{fig_robustness}(b). We additionally introduced a 10\% pulse area error on the odd-indexed pulses, and found that the robustness of the phase estimate could again be improved by averaging the results of experiments conducted using settings I and settings II, as shown in Fig.~\ref{fig_robustness}(c). These experiments used $M=8$, a fixed trap stiffness and a single laser beam driving the pulses. The reason we alternated the pulse area error between pulses is that this will happen in practice, since in Method~A the trap stiffness setting is alternated, while in Method B the laser beam is alternated. The robustness properties depend on the size of the phase $\phi_\mathrm{PD}$, as shown by the simulation results in Fig.~\ref{fig_robustness_simulation}. We simulated experimental runs using pulse sequences of length $M=16$ with pulse area errors of 5\% on the even-indexed pulses. The results of simulations using control phase settings I are shown in Fig.~\ref{fig_robustness_simulation}(a); we see that the probability $p$ of measuring the ion in state $|e\rangle$ deviates from the unity-contrast oscillations described by Eq.~(\ref{eq_pe_M_phiPD}), at around $\phi_\mathrm{PD}=\pm\tfrac{\pi}{2}$. From this data estimates of $\phi_\mathrm{PD}$ were generated, and the systematic errors in the estimates (caused by the 5\% pulse area error) were largest when the true value of $\phi_\mathrm{PD}$ was around $\pm\tfrac{\pi}{2}$, as shown in Fig.~\ref{fig_robustness_simulation}(b). 
We also simulated measurements using control phase settings III (described in Appendix~\ref{appendix_settings_III}); in that case the systematic errors in $\phi_\mathrm{PD}$ estimates were largest when the true value of $\phi_\mathrm{PD}$ was near $0$ or $\pi$. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_robustness_simulation.pdf} \caption{Pulse area errors and laser detuning cause systematic errors in estimates of $\phi_\mathrm{PD}$; the size of the errors depends on the control phases $\{\theta_j\}$ used and on the true value of $\phi_\mathrm{PD}$. We show this using simulated experimental runs using sequences with $M=16$ in which the even-indexed pulses have a 5\% pulse area error. (a) Control phase settings I were used and the pulse area error affected the probability $p$ of measuring the ion in state $|e\rangle$ most strongly around $\phi_\mathrm{PD}=\pm\tfrac{\pi}{2}$, where the oscillation contrast was reduced. (b) Estimates of $\phi_\mathrm{PD}$ were generated from the simulated runs. With control phase settings I the accuracy is highest near $\phi_\mathrm{PD}=0$ or $\pi$, while with settings III the accuracy is highest near $\phi_\mathrm{PD}=\pm \pi/2$. } \label{fig_robustness_simulation} \end{figure} Although the robustness of phase estimates depends on the true value of the phase, this is unlikely to be a problem when Method~A is used and when micromotion is nearly minimized -- then $\phi_\mathrm{PD}$ is small and control phase settings I and II perform well. However, if Method~B is used in an experimental setup in which the path length difference between the two laser beams is not stable, then $\phi_\mathrm{PD}^A$ and $\phi_\mathrm{PD}^B$ will drift over time [as shown in Fig.~\ref{fig_PD_A_B}(a)] and the robustness of the phase estimates will be unstable.
This instability could be mitigated by adapting the control phase values $\{\theta_j\}$ during a measurement, or by actively stabilising the path length difference between the two beams \cite{Ma1994}. \section{Micromotion minimization in 2D and 3D} \label{sec_2D_and_3D_main} \subsection{Applying the interferometry methods in 2D and 3D} To counter an unwanted electric field $\vec{E}$ in 2D (3D) we produce a 2D (3D) compensating field by supplying voltages to two (three) compensation electrodes. To determine the appropriate voltages we need to measure $\phi_\mathrm{PD}$ or $\phi_\mathrm{PD}^A-\phi_\mathrm{PD}^B$ using two (three) laser beam configurations. First we measure the dependence of the $i^\mathrm{th}$ phase measurement on the $j^\mathrm{th}$ compensation electrode voltage, in the same way as in Fig.~\ref{fig_seq1}(b) or Fig.~\ref{fig_PD_A_B}(b). We label the gradient of this dependence $\mathcal{M}_{ij}$. We use the four (nine) $\mathcal{M}_{ij}$ values to construct a 2$\times$2 (3$\times$3) matrix $\mathbfcal{M}$. Then we can minimize $|\vec{E}|$ by measuring the two (three) phase values, storing them in a two-element (three-element) vector $\boldsymbol{\phi}$, then calculating $\vec{V} = \mathbfcal{M}^{-1} \cdot \boldsymbol{\phi}$ \cite{Roos2000}. The two-element (three-element) vector $\vec{V}$ describes the offsets of the compensation electrode voltages from the optimal values. Note that the matrix $\mathbfcal{M}$ depends on the trap settings used. If one wishes to relate $\boldsymbol{\phi}$ to the offset field $\vec{E}$, one can use Eq.~(\ref{eq_phiT_E}) or Eq.~(\ref{eq_phi_PD_AB}). This requires knowledge of the direction of the laser field wavevectors and the change of the secular frequencies. Alternatively one can relate $\vec{V}$ to $\vec{E}$ using another micromotion minimization technique; in this work we related $\vec{V}$ to $\vec{E}$ using the resolved sideband method \cite{Berkeland1998, Keller2015}. 
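The 2D compensation step described above amounts to a small linear solve. The sketch below illustrates it in the 2D case; the gradient values and phase values are made-up numbers for illustration, not measurements from this work.

```python
# Minimal sketch of the 2D compensation step: measure the gradient
# matrix M_ij = d(phase_i)/d(voltage_j), then obtain the compensation
# voltage offsets from V = M^{-1} . phi. All numbers are hypothetical.

def solve_2x2(M, phi):
    """Invert a 2x2 gradient matrix and apply it to the phase vector."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("gradient matrix is singular")
    return [( d * phi[0] - b * phi[1]) / det,
            (-c * phi[0] + a * phi[1]) / det]

# Hypothetical gradients (rad/V) and measured phases (rad):
M_grad = [[0.8, 0.1],
          [0.2, 0.9]]
phases = [0.05, -0.02]
V = solve_2x2(M_grad, phases)
print(V)  # offsets of the compensation voltages from their optima
```

In 3D the same step is a 3$\times$3 solve; in practice a general linear solver would replace the hand-written inverse.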
\subsection{2D micromotion minimization using a single probe laser beam} \label{sec_2d_compensation_one_beam} Micromotion can be minimized in two dimensions using a single probe laser beam by combining the resolved sideband method \cite{Berkeland1998} with interferometry method~A. This is shown in Fig.~\ref{fig_1_beam_2D_compensation}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figs/fig_2D_comparison_i.pdf} \caption{$|\vec{E}|$ can be minimized in 2D using a single probe laser, by using interferometry method~A together with the resolved sideband technique \cite{Berkeland1998}. (a) and (b) [(c) and (d)] use the interferometry (resolved sideband) method; (a) and (c) [(b) and (d)] use a horizontal (vertical) probe beam. } \label{fig_1_beam_2D_compensation} \end{figure} In Fig.~\ref{fig_1_beam_2D_compensation}(a) the interferometry method is conducted using a horizontal laser beam, and $\phi_\mathrm{PD}$ is sensitive to the horizontal component of $\vec{E}$, which is varied by changing the voltage applied to the ``horizontal'' compensation electrode. In Fig.~\ref{fig_1_beam_2D_compensation}(c) the resolved sideband method is conducted using the same horizontal laser beam, and the sideband amplitude is sensitive to the vertical component of $\vec{E}$, which is varied by changing the voltage applied to the ``vertical'' compensation electrode. Similar results are observed when using a vertical laser beam in Figs.~\ref{fig_1_beam_2D_compensation}(b) and (d). These results can be understood with the aid of Fig.~\ref{fig_sb_method}. \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{figs/fig_sb_method_3.pdf} \caption{Schematic of the experimental setup, showing a slice through the linear Paul trap in the plane of the oscillating electric field. The trap's oscillating electric quadrupole field (orange lines) is produced by applying voltages to four electrodes (large circles).
An unwanted dipolar field along the vertical direction displaces the ion equilibrium position vertically, with displacement $\vec{r}$ from the trap centre (black cross). This can be detected using interferometry method~A with the vertical laser beam, or using the resolved sideband method with the horizontal laser beam, since at the new position the ion (blue dot) experiences a horizontal oscillating dipole field which drives horizontal micromotion (blue arrow). Each compensation electrode consists of a pair of rods (grey circles). The ion's secular motion eigenmodes are orientated along $x$ and $y$. } \label{fig_sb_method} \end{figure} Using a horizontal laser beam and interferometry method~A, the results are sensitive to the horizontal component of $\vec{E}$, which displaces the ion equilibrium position horizontally. Using a horizontal laser beam and the resolved sideband method, the results are sensitive to the vertical component of $\vec{E}$, which displaces the ion equilibrium position vertically; at the new equilibrium position the ion experiences a horizontal oscillating dipole field, which drives horizontal micromotion. \subsection{Applying the interferometry methods in linear Paul traps with non-degenerate radial frequencies} Micromotion minimization techniques that involve monitoring an ion's position when the trap stiffness is changed become more sensitive when larger changes of the trap stiffness are used. However, in a linear Paul trap, if the trap stiffness is reduced to the point where the ion is barely trapped, and if non-degenerate radial secular frequencies are used, these techniques risk becoming overwhelmingly sensitive to the offset field $\vec{E}$ along just one direction. We illustrate this behaviour in Fig.~\ref{fig_non_degeneracy}. \begin{figure}[ht!]
\centering \includegraphics[width=\columnwidth]{figs/fig_sensitivity_w_non_degeneracy.pdf} \caption{When the radial trapping frequencies in a linear Paul trap are non-degenerate with $\omega_x < \omega_y$, an offset field along the $x$-direction causes a larger change of ion equilibrium position than an offset field of the same magnitude along the $y$-direction. This difference diverges as the amplitude of the oscillating quadrupole field is reduced. In the figure the trap stiffness is changed between initial settings $A$ with $\{\omega_x, \omega_y, \omega_z\}/2\pi = \{1.5, 1.6, 1.0\}\,\mathrm{MHz}$ and settings $B$ by changing the amplitude of the oscillating quadrupole field. (a)~As the oscillating quadrupole field amplitude during setting $B$ is decreased, $\omega_{Bx}\rightarrow0$ before $\omega_{By}\rightarrow0$. (b)~The ion displacement ${r}_{ABi}$ due to an offset field $E_i$ depends on $\omega_{Bi}^{-2}-\omega_{Ai}^{-2}$ [see Eq.~(\ref{eq_del_u})]. This quantity diverges as $\omega_{Bi}\rightarrow0$. (c)~The ratio of $\omega_{Bx}^{-2}-\omega_{Ax}^{-2}$ to $\omega_{By}^{-2}-\omega_{Ay}^{-2}$, indicating the relative displacements caused by an offset field, diverges as the trap stiffness during setting $B$ is reduced and $\omega_{Bx}\rightarrow0$. } \label{fig_non_degeneracy} \end{figure} The calculations show how the trap stiffnesses respond when the amplitude of the linear Paul trap's oscillating quadrupole field is changed, from an initial setting $A$, with non-degenerate trap stiffnesses $\{\omega_{Ax},\omega_{Ay}\}/2\pi=\{1.5,1.6\}\,\mathrm{MHz}$ along the radial directions and $\omega_{Az}/2\pi=1.0\,\mathrm{MHz}$ along the axial direction, to a trap setting $B$. As we decrease the amplitude of the trap's oscillating quadrupole field, $\omega_{Bx}\rightarrow0$ before $\omega_{By}\rightarrow0$, and thus the quantity $\omega_{Bx}^{-2}-\omega_{Ax}^{-2}$ diverges before $\omega_{By}^{-2}-\omega_{Ay}^{-2}$ diverges.
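This divergence can be reproduced with a simple numerical sketch. The trap model here is an assumption made for illustration, not the paper's calculation: in the simplest pseudopotential picture the radial secular frequencies obey $\omega_{ri}(s)^2 = (s\,\omega_{pi})^2 - \omega_z^2/2$, where $s$ scales the RF amplitude and the static axial confinement contributes a fixed defocusing term.

```python
import math

# Illustrative sketch (the trap model is an assumption, not taken from
# the paper): radial secular frequencies under the pseudopotential
# approximation, omega_ri(s)^2 = (s * omega_pi)^2 - omega_z^2 / 2,
# with s the relative RF amplitude.

w_Ax, w_Ay, w_z = 1.5, 1.6, 1.0           # setting A freqs (2*pi*MHz)
w_px = math.sqrt(w_Ax**2 + w_z**2 / 2)    # pseudopotential-only freqs
w_py = math.sqrt(w_Ay**2 + w_z**2 / 2)

def radial_freqs_squared(s):
    """Squared radial secular frequencies at RF amplitude scale s."""
    return (s * w_px)**2 - w_z**2 / 2, (s * w_py)**2 - w_z**2 / 2

ratios = []
for s in (0.9, 0.6, 0.45):
    wBx2, wBy2 = radial_freqs_squared(s)
    # Ratio of displacement sensitivities along x and y, proportional
    # to (omega_B^-2 - omega_A^-2) in each direction:
    ratio = (wBx2**-1 - w_Ax**-2) / (wBy2**-1 - w_Ay**-2)
    ratios.append(ratio)
    print(f"s = {s}: x/y sensitivity ratio = {ratio:.2f}")
# As s approaches the value where omega_Bx -> 0 (here s ~ 0.43), the
# ratio diverges: the method becomes sensitive to E_x only.
```

The printed ratio grows monotonically as the RF amplitude is lowered, mirroring panel (c) of the figure above.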
These quantities describe the response of $\vec{r}_{AB}$ to $\vec{E}$ [Eq.~(\ref{eq_del_u})] and impact the direction $\vec{d}$ along which the interferometry method is sensitive to $\vec{E}$ [Eqs.~(\ref{eq_d}) and (\ref{eq_d2}), and in Appendix~\ref{appendix_method_c} Eqs.~(\ref{eq_d3}) and (\ref{eq_d4})]. As a result, when Method~A is used with a beam that has a projection onto both the $x$ and $y$ axes, and when the oscillating quadrupole field amplitude is reduced to the point where the ion is barely trapped ($\omega_{Bx}\approx 0$), the technique effectively becomes sensitive to only $E_x$. A much higher sensitivity to $E_x$ than to $E_y$ also appears if the radial stiffnesses are reduced by increasing the amplitude of the static quadrupole field which provides axial confinement. We illustrate this sensitivity difference by conducting experiments using Method~A as $E_x$ and $E_y$ are changed, using two different laser beams which each have projections onto the $x$ and $y$ axes. The beam directions are shown in the schematic in Fig.~\ref{fig_sb_method}. The results are shown in Fig.~\ref{fig_seq1_2D_comparison}. \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{figs/fig_2D_comparison_ii.pdf} \caption{2D micromotion compensation with degenerate secular frequencies compared with the case of non-degenerate secular frequencies. In (a) [(b)] the secular frequencies are degenerate, and $\phi_\mathrm{PD}$ is sensitive to the vertical [horizontal] component of $\vec{E}$ along $\tfrac{1}{\sqrt{2}}(\hat{x}+\hat{y})$ [$\tfrac{1}{\sqrt{2}}(\hat{x}-\hat{y})$] when measured using a vertical [horizontal] beam. In (c) and (d) the secular frequencies are non-degenerate, with $\omega_x < \omega_y$, and as a result the measurements of $\phi_\mathrm{PD}$ become more sensitive to $\vec{E}$ along the $x$-direction, i.e.\ the ``horizontal~$+$~vertical'' direction, as described by Eq.~(\ref{eq_d}) and Fig.~\ref{fig_non_degeneracy}.
} \label{fig_seq1_2D_comparison} \end{figure} In Figs.~\ref{fig_seq1_2D_comparison}(a) and (b) the secular frequencies are degenerate ($\omega_x=\omega_y$), and $\phi_\mathrm{PD}$ depends on the vertical (horizontal) component of $\vec{E}$ when a vertical (horizontal) probe beam is used; the orthogonal beams are sensitive to orthogonal components of $\vec{E}$. In Figs.~\ref{fig_seq1_2D_comparison}(c) and (d) the secular frequencies are non-degenerate with $\omega_x < \omega_y$ and the method becomes more sensitive to $E_x$ than to $E_y$. As a result, the orthogonal beams are sensitive to non-orthogonal components of $\vec{E}$. If a higher sensitivity to $E_x$ than to $E_y$ is problematic, Method~C (Appendix~\ref{appendix_method_c}) may be useful; it allows the direction of sensitivity $\vec{d}$ to be tuned. Another solution is to implement Method~A using a probe beam propagating along the $y$-axis, with no projection onto the $x$-axis. However, in most setups the electrode geometry obstructs optical access along the directions of secular motion. A third solution is to calculate superpositions of the phases measured via Method~A using probe beams from different directions; for instance, a weighted sum (difference) of the phases measured with the horizontal and vertical beams is sensitive to $E_x$ ($E_y$). Alternatively one can use Method~B, with two beams whose wavevector difference $\vec{k}_\alpha-\vec{k}_\beta$ has no $x$-component [see Eq.~(\ref{eq_d2})]. \subsection{Minimization of axial micromotion in a linear Paul trap} In an ideal linear Paul trap there is no RF electric field along the trap symmetry axis ($z$ direction), i.e.\ $\tilde{E}_z=0$. In physical linear Paul traps, $\tilde{E}_z$ is non-zero because of the finite size of the trap electrodes, among other reasons \cite{Herschbach2012, Pyka2014, Keller2015, Keller2019}. A non-zero $\tilde{E}_z$ drives ion micromotion along $z$.
Usually $\tilde{E}_z$ vanishes only at a single point, and with increasing distance from this point along $z$, $|\tilde{E}_z|$ increases \cite{Pyka2014} and the extent of axial micromotion increases. Thus, the null point can be found using, for example, the resolved sideband method \cite{Berkeland1998} with a laser beam propagating along the $z$ direction. Because the extent of axial micromotion and thus the kinetic energy associated with it increase with distance along $z$ from the null, $\tilde{E}_z$ introduces a trapping pseudopotential along $z$. This pseudopotential contributes to the axial confinement, and this means that axial micromotion can be minimised using methods which are sensitive to the change of ion equilibrium position along $z$ when $\omega_z$ is changed \cite{Gloger2015}. Accordingly, we demonstrated that interferometry method~A can be used to minimize axial micromotion: we varied the $z$-component of $\vec{E}$ (and thus we varied $\tilde{E}_z$) by changing the voltage applied to an endcap electrode, and we measured the linear response of $\phi_\mathrm{PD}$ using a laser beam with wavevector $\vec{k}$ largely along the $z$-direction (it propagates through holes in the endcap electrodes). We changed $\omega_z$ during the pulse sequence by changing the amplitude of the oscillating electric quadrupole field; this can also be achieved by changing the amplitude of the static electric quadrupole field. The results are shown in Fig.~\ref{fig_axial_MM}. \begin{figure}[ht!] \centering \includegraphics[width=0.9\columnwidth]{figs/fig_axial_MM.pdf} \caption{The interferometry sequences enable axial micromotion compensation in a linear Paul trap. The axial component of $\vec{E}$ is varied by offsetting the voltage applied to an endcap electrode, and $\phi_\mathrm{PD}$ responds linearly. $\phi_\mathrm{PD}$ is measured using Method~A with a beam propagating along the axial direction. Error bars represent quantum projection noise (1$\sigma$ confidence interval).
The shaded area indicates the $1\sigma$ uncertainty in the estimate obtained using the resolved sideband method. } \label{fig_axial_MM} \end{figure} The zero-offset voltage was determined using the resolved sideband method \cite{Berkeland1998}; the optimal voltages determined using the interferometry method and the resolved sideband method do not perfectly agree. This mismatch may have resulted from a small projection of the probing laser beam onto the $x$ and $y$ directions (the plane of the oscillating quadrupole field) together with non-zero $x$- and $y$-components of $\vec{E}$. \section{Demonstration of quantum clock synchronization protocols} \label{sec_ticking_qubit} Method~A has much in common with two quantum clock synchronization protocols \cite{Chuang2000, deBurgh2005}. Synchronizing distant clocks is important for engineering and metrology. It is also of fundamental interest in physics, falling within the field of reference frame alignment \cite{Bartlett2007}. Suppose Alice and Bob want to synchronize their clocks, which are known to tick at the same rate: Eddington's protocol \cite{Eddington1924} involves Alice synchronising a watch to her clock, and then mailing the watch to Bob, who synchronises his own clock to the watch. Chuang \cite{Chuang2000} proposed a quantum version of Eddington's protocol, in which Alice sends a quantum watch to Bob, namely a ticking qubit. In this protocol Alice and Bob each apply a $\pi/2$ pulse on the qubit before the state of the qubit is measured. Importantly, the phase of each $\pi/2$ pulse is defined relative to Alice's and Bob's clocks, respectively. The sequence of Method~A with $M=1$ is equivalent to Chuang's protocol. In this sequence the trapped ion equilibrium position changes from $\vec{r}_A$ (Alice's location) to $\vec{r}_B$ (Bob's location) when the trap stiffness is changed.
We identify the optical field at $\vec{r}_A$ as Alice's clock, and the optical field at $\vec{r}_B$ as Bob's clock (these clocks tick incredibly fast, at over 400\,THz). The asynchronicity of the clocks is due to the phase difference $\phi_\mathrm{PD}$ between the optical field at $\vec{r}_A$ and the optical field at $\vec{r}_B$ [see Eq.~(\ref{eq_phi_T_k_r_AB})]. During the sequence we first apply a $\pi/2$ pulse on a ticking ion qubit at position $\vec{r}_A$ (the pulse phase is determined by Alice's clock), then we move the qubit to $\vec{r}_B$ and apply another $\pi/2$ pulse (the pulse phase is determined by Bob's clock), before measuring the state of the qubit. Measurements allow us to calculate $\phi_\mathrm{PD}$ and thus ``synchronise the clocks'', as shown in Fig.~\ref{fig_seq1}. In Fig.~\ref{fig_clock_sync} we illustrate the relationship between Method~A and Chuang's protocol. \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{figs/fig_clock_sync.pdf} \caption{Method~A is related to quantum versions of Eddington's clock synchronization protocol \cite{Chuang2000,deBurgh2005}: Alice and Bob (phantoms at $\vec{r}_A$ and $\vec{r}_B$) have an unknown phase difference $\phi_{\mathrm{PD}}$ between their clocks (the oscillating laser field at their positions). They exchange a ticking qubit (a trapped ion), and they each perform rotations on it. By measuring the difference between their rotation axes they learn $\phi_{\mathrm{PD}}$. } \label{fig_clock_sync} \end{figure} De~Burgh and Bartlett \cite{deBurgh2005} improved on Chuang's protocol. They proposed that Alice and Bob perform multiple exchanges of the qubit, and apply multiple pulses on the qubit, to more accurately determine $\phi_\mathrm{PD}$.
Method~A with $M>1$ is equivalent to this protocol, and the data in Figs.~\ref{fig_seq_multi_pulse_data} and \ref{fig_rpe} demonstrate the enhancement gained from this protocol over the two-pulse protocol.\footnote{Chuang's paper \cite{Chuang2000} includes a protocol with a sub-SQL scaling; however, this protocol requires a set of ticking qubits, whose frequencies span an exponentially-large range.} Within the framework developed in this section, we can describe Method~B as a protocol to synchronise two oscillators (i.e.\ two laser fields) which are at the same position, using a ticking qubit. \section{Conclusion} We introduce and demonstrate interferometry pulse sequences for minimizing the magnitude of a stray electric field $\vec{E}$ in a trapped ion experiment. These sequences allow $|\vec{E}|$ to be minimized to state-of-the-art levels quickly, with modest experimental requirements. These methods will be particularly useful in trapped ion precision spectroscopy experiments \cite{Keller2015, Brewer2019}, hybrid systems of neutral atoms and trapped ions \cite{Grier2009,Schmid2010,Zipkes2010,Feldker2020}, and experiments using highly-polarizable Rydberg ions \cite{Higgins2019, Feldker2015}, which are very sensitive to effects caused by stray fields. We demonstrate that quantum phase estimation techniques can be used to minimize $|\vec{E}|$ with a scaling below the standard quantum limit. This constitutes a real-world case in which quantum metrology provides a significant enhancement. We also show that the results can be robust against laser detuning and pulse area errors. By using one of the sequences presented here together with the resolved sideband method we minimize $|\vec{E}|$ in 2D using a single probe beam.
This approach will be useful in experiments with restricted optical access, such as cavity QED experiments \cite{Sterk2012, Steiner2013, Stute2013} and surface trap experiments \cite{Brown2011, Harlander2011, Wilson2014, Kumph2016, Mehta2020, Niffenegger2020}. We reduced $|\vec{E}|$ beyond state-of-the-art levels quickly. $|\vec{E}|$ could be reduced much further and much more quickly in a setup with a longer coherence time (allowing longer sequences) and with finer control of the trap stiffness (allowing larger stiffness changes). In trapped ion precision spectroscopy experiments, usually just a single ion is probed. Scaling up precision spectroscopy experiments to many ions enables faster interrogation \cite{Champenois2010, Pyka2014, Arnold2015, Keller2019}. In a many-ion system the offset field $\vec{E}$ would ideally be measured and countered for each of the ions. The methods presented here will work in a system of many ions, provided that the ions do not unexpectedly switch positions during the sequences. Further, by probing a system of entangled ions, it might be possible to precisely measure offset fields even faster \cite{Gilmore2021}. The methods we introduce can also be used when the states involved are coupled by a Raman transition or a multi-photon transition. To achieve the highest sensitivity the laser beams should be orientated to give the largest effective wavevector. The dominant cause of excess micromotion is usually a slowly-varying dipole field $\vec{E}$ at the null of the oscillating quadrupole field. However, excess micromotion can also arise when the oscillating voltages applied to the trap electrodes are out of phase; this is called quadrature micromotion. Measurements sensitive to $\vec{r}_{AB}$, such as the techniques presented here, do not give information about quadrature micromotion.
Quadrature micromotion can instead be characterised using other methods \cite{Berkeland1998, Keller2015} and it can be avoided by careful trap design and fabrication \cite{Herschbach2012,Pyka2014,Chen2017}. Finally, our work demonstrates quantum versions of Eddington's clock synchronisation protocol \cite{Chuang2000, deBurgh2005}, linking trapped ion experiments to the problem of reference frame alignment \cite{Bartlett2007}. \section*{Acknowledgements} We thank Holger Motzkau for designing and making the bias tee in Fig.~\ref{fig_electronics}. We thank Ferdinand Schmidt-Kaler for making us aware of ref.~\cite{Kotler2011}. This work was supported by the Knut \& Alice Wallenberg Foundation (Photonic Quantum Information and through the Wallenberg Centre for Quantum Technology [WACQT]), the QuantERA ERA-NET Cofund in Quantum Technologies (ERyQSenS), and the Swedish Research Council (Trapped Rydberg Ion Quantum Simulator and grant number 2020-00381). \section*{Author contributions} GH developed the methods, planned and conducted the experiments, analysed the data and wrote the manuscript. All authors contributed to the experimental setup, discussed the results and gave feedback on the manuscript.
\section{Introduction} \label{secIntro} The physics of massive galaxy clusters is relatively simple, at least compared to that of smaller objects. In the standard cold dark matter (CDM) scenario, their mass is dominated by the dark component, while most baryons are in the form of a hot diffuse plasma in hydrostatic equilibrium with the gravitational potential created by the CDM halo. The intracluster medium (ICM) gas is shock-heated to approximately the virial temperature of the object, and its thermal bremsstrahlung emission has been detected by X-ray satellites for the last three decades. In the absence of any other process, it is often stated that galaxy clusters are expected to be self-similar, and their global properties should obey power-law scaling relations. As long as the shape of a cluster's potential does not depend systematically on its mass, the radial structure of the ICM ought to be scale-free, and the global properties of galaxy clusters, such as halo mass, emission-weighted temperature, or X-ray luminosity, would scale self-similarly \citep{Kaiser86}. Indeed, numerical simulations that include adiabatic gasdynamics are reported to produce clusters of galaxies that obey such scaling laws \citep[e.g.][]{NFW95,EMN96,BN98,Eke98}. In real clusters, deviations from self-similarity are expected to arise from merging \citep[e.g.][]{JingSuto00} and additional physics acting on the intracluster gas \citep[see e.g.][and references therein]{TozziNorman01,Babul02,Voit02}. Radiative cooling and energy injection by supernova and/or active galactic nuclei (AGN) may be particularly relevant for low-mass systems, where they can make a significant contribution to the total energy budget. Observations seem to corroborate that the self-similar picture is indeed too simplistic, and it fails to predict the observed scalings of cluster mass and luminosity with respect to the ICM gas temperature. 
Although some observational studies \citep[e.g.][]{Horner99,NeumannArnaud99,ASF01} are consistent with the self-similar expectation, $M\propto T^{1.5}$, the observed mass-temperature relation has often been found to be steeper \citep[e.g.][]{Sanderson03}, particularly in the group regime \citep[e.g.][]{Nevalainen00,Finoguenov01,Xu01,Arnaud05}. It has also been noted \citep{EttoriGrandiMolendi02} that the slope of the $M-T$ relation might depend on the limiting overdensity. Regarding the normalization, the reported value is in most cases $\sim40$ per cent lower than in numerical simulations, although there is a certain degeneracy with the precise value of the slope. On the other hand, it has been known since the first generation of X-ray satellites \citep[e.g.][]{EdgeStewart91,David93} that the slope of the luminosity-temperature relation, $\alpha\sim3$, is significantly steeper than the self-similar value, $\alpha=2$ \Referee{(although the exact value depends on the energy band considered)}. It has been shown \citep[e.g.][]{AllenFabian98,Markevitch98,ArnaudEvrard99} that the scatter in the $L-T$ relation is significantly reduced, and the discrepancy is somewhat less severe, when cooling flows are excised or samples with only weak cooling cores are considered. Actually, it has been recently claimed \citep{OHara_05} that cool core related phenomena, and not merging processes, are the primary contributor to the scatter in all the scaling relations. Observational data have thus motivated significant efforts attempting to build a physical model of the ICM that breaks self-similarity, either by removing low-entropy gas from the centres of clusters via radiative cooling \citep{Bryan00} or by introducing non-gravitational heating \citep{EvrardHenry91,Kaiser91}.
In both cases, the `excess' entropy produces a flattening of the density profile that brings the X-ray properties of the modelled clusters in agreement with the observed scaling relations \citep[see e.g.][and references therein]{Voit03,Borgani04}. Nevertheless, the source and precise amount of heating and cooling required are still a matter of debate. Recent observations \citep[e.g.][]{Ponman03} suggest that the shape of the entropy profile is similar in groups and clusters of galaxies, which rules out the simplest scenarios. In this paper, we claim that dark matter haloes are not exactly self-similar, and therefore both the cluster's potential and the properties of the ICM gas do indeed depend on the total mass (or temperature) of the object. In particular, there is no compelling reason to expect that the scaling relations between \Referee{any two physical properties, integrated up to a given overdensity}, should obey a power law, even in the purely gravitational case. We present a theoretical prediction of these relations based on a polytropic model of the ICM, and compare it with a set of high-resolution adiabatic gasdynamical simulations. It will be shown that self-similar models implicitly assume that all clusters have the same concentration and polytropic index. Relaxing these hypotheses yields the scaling relations derived in Section~\ref{secTh}, which we compare with the results of numerical experiments in Section~\ref{secSims}. Observational implications are discussed in Section~\ref{secObs}, and Section~\ref{secConclus} summarizes our main conclusions. \section{Theoretical model} \label{secTh} As shown by \citet{Ascasibar03}, relaxed clusters and minor mergers found in adiabatic gasdynamical simulations can be considered to be in approximate thermally-supported hydrostatic equilibrium up to $\sim0.8r_{200}$. Furthermore, the ICM gas is fairly well described by a polytropic equation of state with an effective polytropic index $\gamma\simeq1.18$. 
Using the phenomenological formula proposed by \citet[hereafter NFW]{NFW97} to model the density profile of the dark matter halo, \begin{equation} \rho(r)=\frac{\rho_{\rm s}}{(r/r_{\rm s})(1+r/r_{\rm s})^2}, \label{eqNFW} \end{equation} the gas temperature is given by \begin{equation} T(r) = T_0\ \frac{\ln(1+r/r_{\rm s})}{r/r_{\rm s}}, \label{eqTemp} \end{equation} where the central temperature, \begin{equation} kT_0= 4\pi G\mu m_{\rm p} \frac{\gamma-1}{\gamma}\rho_{\rm s} r_{\rm s}^2, \label{eqT0} \end{equation} is set by the boundary condition that the ICM density and temperature vanish at infinity. The gas density profile can be computed from the polytropic relation \begin{equation} \rho_{\rm g}(r) = \rho_0(\gamma) \left[ \frac{\ln(1+r/r_{\rm s})}{r/r_{\rm s}} \right]^{\frac{1}{\gamma-1}}, \label{eqGas} \end{equation} where the central gas density, $\rho_0(\gamma)$, can be constrained by normalizing the baryon fraction to match the cosmic value at $x_{\rm b}=r_{\rm b}/r_{\rm s}\sim3$, \begin{equation} \rho_0(\gamma)=\frac{\Omega_{\rm b}}{\Omega_{\rm dm}}\,\rho_{\rm s} \left[ \frac{\ln(1+x_{\rm b})}{x_{\rm b}} \right]^{-\frac{1}{\gamma-1}} \left[ x_{\rm b}(1+x_{\rm b})^2 \right]^{-1}. \end{equation} This analytic prescription is simpler than the one proposed in \citet{Ascasibar03}, and it provides better results for extreme values of the polytropic index $\gamma$. The choice $x_{\rm b}\sim3$ is somewhat arbitrary, and in principle one could leave the normalization of the gas density as a free parameter of the model. We consider, though, that it is desirable to reduce the number of free parameters as much as possible. In fact, the very existence of relatively tight scaling relations suggests that real galaxy clusters can indeed be described by only one free parameter, which could be taken to be the mass of the halo. 
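As an aside (not part of the original derivation), the dimensionless shapes of the temperature and gas density profiles, Eqs.~(\ref{eqTemp}) and (\ref{eqGas}), can be evaluated directly; a minimal sketch:

```python
import math

# Minimal sketch evaluating the model profiles: T(r)/T0 and
# rho_g(r)/rho_0 depend on radius only through x = r/r_s and on the
# effective polytropic index gamma.

def temperature_ratio(x):
    """T(r)/T0 = ln(1+x)/x, with the x -> 0 limit equal to 1."""
    return 1.0 if x == 0 else math.log1p(x) / x

def gas_density_ratio(x, gamma=1.18):
    """rho_g(r)/rho_0 = [ln(1+x)/x]^(1/(gamma-1))."""
    return temperature_ratio(x) ** (1.0 / (gamma - 1.0))

for x in (0.1, 1.0, 3.0):
    print(x, temperature_ratio(x), gas_density_ratio(x))
# The density falls much faster than the temperature because of the
# large exponent 1/(gamma-1) ~ 5.6 for gamma = 1.18.
```
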
Once $x_{\rm b}$ is set, our model still has three parameters, two of them related to the dark matter halo (the characteristic density and radius, $\rho_{\rm s}$ and $r_{\rm s}$) and one related to the intracluster gas (the effective polytropic index $\gamma$). The first two are known to be correlated \citep{NFW97,Bullock01,Eke01}, and we propose a phenomenological relation between polytropic index and concentration in Section~\ref{secSims} below. It is important to note, though, that if clusters can be described by a one-parameter family of functions, this would give rise to universal scaling relations both for their radial profiles \Referee{and for their global (integrated or averaged) physical properties}. However, it does \emph{not} imply that such relations ought to be self-similar in any sense. The precise functional form of the different scalings would be specified by the two independent relations between $\rho_{\rm s}$, $r_{\rm s}$ and $\gamma$. In the present work, we are interested in the scaling relations between several quantities, integrated up to the radius $r_\Delta$ encompassing an overdensity $\Delta$ with respect to the critical density, i.e. \begin{equation} M_\Delta \equiv M(r_\Delta) = \Delta\frac{4\pi}{3}\rho_{\rm c}\,r_\Delta^3, \end{equation} where $\rho_{\rm c}\equiv\frac{3H_0^2}{8\pi G}=2.8\times10^{11}\ h^2$ \Msun\ Mpc$^{-3}$ is the critical density and $H_0\equiv 100\ h$ km s$^{-1}$ Mpc$^{-1}$ is the Hubble constant. Defining $c_\Delta \equiv r_\Delta / r_{\rm s}$ and $g(x)\equiv\left[\ln(1+x)-\frac{x}{1+x} \right]^{-1}$, we obtain \begin{equation} M_\Delta \simeq M_{\rm dm}^\Delta = 4\pi\rho_{\rm s}r_{\rm s}^3\ g^{-1}(c_\Delta) \label{eqMs} \end{equation} and \begin{equation} \rho_{\rm s} = \frac{\Delta H_0^2}{8\pi G}\,c_\Delta^3\ g(c_\Delta).
\label{eqRhos} \end{equation} Let us define the function \begin{equation} y(\eta,c)\equiv \int_0^{c}\left[ \frac{\ln(1+x)}{x} \right] ^{\eta} x^2\ {\rm d} x, \end{equation} which must be integrated numerically. In terms of this function, we can express in a very compact form both the cumulative gas mass, \begin{equation} M_{\rm g}^\Delta = 4\pi\rho_0r_{\rm s}^3\ y(\frac{1}{\gamma-1},c_\Delta), \label{eqMgas} \end{equation} and mass-weighted temperature, \begin{equation} T_\Delta=T_0\, \frac{ y(\frac{\gamma}{\gamma-1},c_\Delta)} { y(\frac{ 1 }{\gamma-1},c_\Delta)}. \label{eqT} \end{equation} Assuming that thermal bremsstrahlung is the dominant cooling mechanism, the X-ray power radiated by the ICM gas per unit volume may be estimated as \begin{equation} P=\frac{64\pi}{3}\left(\frac{\pi}{6}\right)^{\!\frac{1}{2}} \left(\frac{e^2}{4\pi\epsilon_0}\right)^{\!\!3} \left[\frac{kT}{(m_{\rm e}c^2)^3}\right]^{\!\frac{1}{2}} \bar{g} n_{\rm e}\sum_i{Z_i^2n_i}, \end{equation} where $e$, $m_{\rm e}$ and $n_{\rm e}$ are the electron charge, mass and number density, respectively. $\epsilon_0$ is the permittivity of free space, $k$ is Boltzmann's constant, $c$ is the speed of light and $\bar{g}$ is the average Gaunt factor, which we take to be unity. The sum takes into account the atomic number $Z_i$ and number density $n_i$ of each ion species $i$. 
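The integral $y(\eta,c)$ defined above has no closed form and must be evaluated numerically; a minimal sketch using composite Simpson quadrature (the step count is an arbitrary choice, not from the original work):

```python
import math

# Numerical evaluation of y(eta, c) = int_0^c [ln(1+x)/x]^eta x^2 dx.
# The integrand is regular at x = 0 since ln(1+x)/x -> 1.

def integrand(x, eta):
    base = 1.0 if x == 0.0 else math.log1p(x) / x
    return (base ** eta) * x * x

def y(eta, c, n=2000):
    """Composite Simpson's rule on [0, c] with n (even) subintervals."""
    h = c / n
    total = integrand(0.0, eta) + integrand(c, eta)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * integrand(k * h, eta)
    return total * h / 3.0

# Sanity check: for eta = 0 the integrand is x^2, so y(0, c) = c^3 / 3.
print(y(0.0, 5.0))  # 41.666...
```

The same routine serves for all the combinations of $\eta$ appearing below, e.g.\ $\eta=1/(\gamma-1)$ for the gas mass and $\eta=2/(\gamma-1)+3/2$ for the emission-weighted temperature.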
For a fully-ionized plasma of primordial composition ($\sim75$ per cent of the mass in hydrogen and 25 per cent in helium), \begin{equation} P \simeq 2\times10^{17} \left(\!\!\frac{\rho}{\rm M_\odot\ Mpc^{-3}}\!\right)^{\!2} \left(\frac{kT}{\rm keV}\right)^{\!\frac{1}{2}} {\rm \ erg\ s^{-1}\,Mpc^{-3}.} \end{equation} Integrating up to $r_\Delta$, the bolometric X-ray luminosity would be \begin{equation} L_{\rm X}^\Delta = \Lambda_{\rm X}\ \rho_0^2\ (kT_0)^{\!\frac{1}{2}} \,4\pi r_{\rm s}^3 \ y(\frac{2}{\gamma-1}\!+\!\frac{1}{2},c_\Delta) \label{eqLx} \end{equation} with $\Lambda_{\rm X}\simeq2\times10^{17}$ erg s$^{-1}$ Mpc$^{3}$ M$_\odot^{-2}$ keV$^{-\frac{1}{2}}$, while the emission-weighted temperature is given by \begin{equation} T_{\rm X}^\Delta=T_0\, \frac{ y(\frac{2}{\gamma-1}+\frac{3}{2},c_\Delta)} { y(\frac{2}{\gamma-1}+\frac{1}{2},c_\Delta)}. \label{eqTx} \end{equation} Combining equations (\ref{eqT0}), (\ref{eqMs}), (\ref{eqRhos}) and (\ref{eqTx}), simple algebra yields the mass-temperature relation \begin{equation} M_\Delta = \frac{\sqrt{2}}{GH_0} \Delta^{-1/2} \left[\frac{kT_{\rm X}^\Delta}{\mu m_{\rm p}}\right]^{3/2} Y_{\rm MT}(\gamma,c_\Delta), \label{eqMT} \end{equation} where \begin{equation} Y_{\rm MT}(\gamma,c_\Delta) \equiv \left[ \frac{\gamma-1}{\gamma}\,c_\Delta\ g(c_\Delta)\, \frac{ y(\frac{2}{\gamma-1}+\frac{3}{2},c_\Delta)} { y(\frac{2}{\gamma-1}+\frac{1}{2},c_\Delta)}\, \right]^{-\frac{3}{2}}. \label{eqYmt} \end{equation} Analogously, expressions (\ref{eqMs}) and (\ref{eqMgas}) tell us that the cumulative baryon fraction does not depend explicitly on the object mass or temperature, \begin{equation} F_\Delta \equiv \frac{M_{\rm g}^\Delta}{M_\Delta} \frac{\Omega_{\rm dm}}{\Omega_{\rm b}}= \frac{\rho_0(\gamma)}{\rho_{\rm s}}\frac{\Omega_{\rm dm}}{\Omega_{\rm b}}\, g(c_\Delta)\ y(\frac{1}{\gamma-1},c_\Delta).
\label{eqFb} \end{equation} Finally, the luminosity-temperature relation can be expressed in the form \begin{equation} L_{\rm X}^\Delta= \frac{\Lambda_{\rm X}H_0}{(\mu m_{\rm p})^{\frac{3}{2}}\,2^{\frac{5}{2}}\,\pi\,G^2} \left(kT_{\rm X}^\Delta\right)^2 \Delta^{\frac{1}{2}}\ Y_{\rm LX}(\gamma,c_\Delta) \label{eqLT} \end{equation} with \begin{equation} Y_{\rm LX}(\gamma,c_\Delta)\!\equiv\!\!\! \left[\!\frac{\rho_0\!(\gamma)}{\rho_{\rm s}}\!\right]^2\! \left[\!\frac{\gamma\,c_\Delta}{\gamma-1}\!\right]^{\!-\frac{3}{2}} \!\!g^{\frac{1}{2}}\!(c_\Delta)\, \frac{ y^3(\frac{2}{\gamma-1}+\frac{1}{2},c_\Delta)} { y^2(\frac{2}{\gamma-1}+\frac{3}{2},c_\Delta)}, \label{eqYlt} \end{equation} according to (\ref{eqT0}), (\ref{eqRhos}), (\ref{eqLx}) and (\ref{eqTx}). \section{Simulations} \label{secSims} For constant values of the polytropic index and concentration, equations (\ref{eqMT}), (\ref{eqFb}) and (\ref{eqLT}) become the well-known self-similar scalings, with logarithmic slopes $3/2$, $0$ and $2$, respectively. The precise `universal' values of $\gamma$ and $c_\Delta$ would simply set the normalization. However, both quantities might well depend systematically on the mass of the object. We address such dependence in the present section, where we also compute the expected scaling relations and compare them to our numerical data. \begin{table*} \caption{ Description of our cluster sample. Number of gas particles within $r_{200}$, \Referee{physical properties} at overdensities $\Delta=2500$, 500 and 200, best-fitting characteristic density (in units of the critical density), radius (in $h^{-1}$ kpc) and effective polytropic index. Masses are expressed in $10^{13}$ M$_\odot$, temperatures in keV and X-ray luminosities in $10^{44}\ h$ erg s$^{-1}$. The baryon fraction is given in units of the cosmic value, $\Omega_{\rm b}/\Omega_{\rm m}$. 
} \label{tabSims} \centering \begin{tabular}{rr rrrr rrrr rrrr rrr} \hline $N_{\rm gas}$ & $M_{2500}$ & $F_{2500}$ & $T_{\rm X}^{2500}$ & $L_{\rm X}^{2500}$ & $M_{ 500}$ & $F_{ 500}$ & $T_{\rm X}^{ 500}$ & $L_{\rm X}^{ 500}$ & $M_{ 200}$ & $F_{ 200}$ & $T_{\rm X}^{ 200}$ & $L_{\rm X}^{ 200}$ & $\rho_{\rm s}/\rho_{\rm c}$ & $r_{\rm s}$~ & $\gamma$~~ \\ \hline 1511675 & 50.33 & 0.60 & 10.60 & 21.44 & 118.35 & 0.85 & 10.11 & 29.67 & 177.53 & 0.90 & 9.88 & 32.28 & 5970 & 434 & 1.170 \\ 94804 & 3.84 & 0.78 & 2.17 & 1.65 & 8.42 & 0.83 & 2.14 & 1.81 & 12.34 & 0.81 & 2.17 & 1.85 & 12698 & 136 & 1.176 \\ 71366 & 2.58 & 0.66 & 1.75 & 0.43 & 6.41 & 0.83 & 1.62 & 0.60 & 8.85 & 0.85 & 1.61 & 0.62 & 5792 & 177 & 1.170 \\ 1191370 & 34.27 & 0.89 & 10.62 & 28.44 & 92.44 & 0.93 & 10.51 & 36.29 & 136.74 & 0.92 & 10.38 & 37.47 & 4255 & 509 & 1.154 \\ 1089303 & 16.94 & 0.53 & 11.66 & 8.24 & 82.84 & 0.85 & 10.64 & 34.74 & 120.72 & 0.95 & 10.42 & 37.03 & 1139 & 928 & 1.158 \\ 970021 & 30.13 & 0.75 & 8.58 & 15.25 & 75.78 & 0.94 & 8.14 & 20.64 & 113.58 & 0.90 & 8.04 & 21.65 & 7505 & 346 & 1.167 \\ 836640 & 2.87 & 0.52 & 3.31 & 0.35 & 30.11 & 0.74 & 6.14 & 3.56 & 99.79 & 0.88 & 8.05 & 21.23 & 636 & 855 & 1.158 \\ 740358 & 16.97 & 0.59 & 7.18 & 4.04 & 52.51 & 0.87 & 5.74 & 7.98 & 90.08 & 0.87 & 5.60 & 8.80 & 11094 & 243 & 1.194 \\ 1325010 & 41.67 & 0.70 & 11.82 & 19.68 & 105.05 & 0.88 & 10.99 & 27.40 & 153.94 & 0.91 & 10.82 & 28.53 & 5343 & 465 & 1.167 \\ 124201 & 5.50 & 0.42 & 3.84 & 0.49 & 11.40 & 0.70 & 3.29 & 0.76 & 15.95 & 0.82 & 3.17 & 0.82 & 11098 & 161 & 1.221 \\ 276238 & 11.21 & 0.93 & 4.46 & 11.59 & 22.60 & 0.95 & 4.40 & 12.30 & 30.70 & 0.95 & 4.39 & 12.38 & 18060 & 163 & 1.164 \\ 670893 & 22.94 & 0.81 & 7.60 & 12.96 & 51.66 & 0.94 & 7.38 & 15.77 & 73.35 & 0.96 & 7.31 & 16.18 & 5002 & 394 & 1.159 \\ 27706 & 1.39 & 0.43 & 1.58 & 0.08 & 2.94 & 0.63 & 1.43 & 0.11 & 4.19 & 0.70 & 1.43 & 0.12 & 6786 & 136 & 1.217 \\ 304438 & 3.66 & 0.88 & 1.90 & 1.57 & 8.07 & 0.87 & 1.92 & 1.80 & 9.59 & 
0.88 & 1.91 & 1.81 & 8634 & 161 & 1.164 \\ 134337 & 1.44 & 0.70 & 1.23 & 0.23 & 3.08 & 0.83 & 1.17 & 0.27 & 4.16 & 0.89 & 1.15 & 0.28 & 9156 & 114 & 1.179 \\ 123954 & 1.62 & 0.74 & 1.36 & 0.43 & 2.95 & 0.87 & 1.32 & 0.47 & 3.66 & 0.93 & 1.31 & 0.47 & 30553 & 65 & 1.202 \\ 86035 & 1.02 & 0.67 & 1.03 & 0.17 & 2.16 & 0.78 & 0.99 & 0.19 & 2.80 & 0.85 & 0.99 & 0.20 & 14820 & 81 & 1.198 \\ 120196 & 0.48 & 0.69 & 0.65 & 0.05 & 2.27 & 0.80 & 0.68 & 0.11 & 4.30 & 0.83 & 0.67 & 0.14 & 1085 & 285 & 1.152 \\ 119735 & 0.46 & 0.47 & 0.70 & 0.02 & 2.20 & 0.79 & 0.69 & 0.10 & 4.29 & 0.83 & 0.67 & 0.14 & 921 & 311 & 1.152 \\ 81150 & 1.02 & 0.85 & 1.00 & 0.46 & 1.97 & 0.90 & 0.98 & 0.48 & 2.57 & 0.94 & 0.98 & 0.48 & 26165 & 62 & 1.190 \\ 134696 & 1.88 & 0.76 & 1.56 & 0.53 & 3.48 & 0.86 & 1.51 & 0.58 & 4.33 & 0.92 & 1.50 & 0.58 & 30406 & 69 & 1.194 \\ 73495 & 0.51 & 0.41 & 0.64 & 0.02 & 1.17 & 0.67 & 0.64 & 0.03 & 2.80 & 0.79 & 0.61 & 0.05 & 1471 & 225 & 1.166 \\ 17632 & 0.25 & 0.43 & 0.44 & 0.01 & 0.49 & 0.61 & 0.43 & 0.01 & 0.66 & 0.81 & 0.42 & 0.01 & 2653 & 124 & 1.218 \\ 241690 & 3.27 & 0.75 & 1.95 & 0.91 & 6.14 & 0.86 & 1.88 & 1.02 & 8.20 & 0.88 & 1.87 & 1.03 & 17239 & 107 & 1.175 \\ 98380 & 0.93 & 0.52 & 0.89 & 0.06 & 2.14 & 0.76 & 0.81 & 0.09 & 3.62 & 0.82 & 0.78 & 0.10 & 4260 & 146 & 1.172 \\ 1400058 & 5.31 & 0.65 & 4.01 & 1.17 & 28.13 & 0.80 & 3.46 & 2.83 & 45.73 & 0.85 & 3.31 & 3.18 & 2208 & 422 & 1.161 \\ 111187 & 1.33 & 0.73 & 1.12 & 0.24 & 2.62 & 0.90 & 1.06 & 0.28 & 3.74 & 0.88 & 1.04 & 0.29 & 14668 & 88 & 1.176 \\ 76770 & 0.92 & 0.63 & 0.89 & 0.15 & 1.71 & 0.85 & 0.83 & 0.17 & 2.68 & 0.85 & 0.81 & 0.18 & 9841 & 92 & 1.167 \\ 185292 & 1.99 & 0.74 & 1.21 & 0.44 & 5.16 & 0.87 & 1.20 & 0.72 & 6.36 & 0.86 & 1.20 & 0.73 & 4798 & 174 & 1.158 \\ 179971 & 1.31 & 0.70 & 1.19 & 0.26 & 4.82 & 0.83 & 1.21 & 0.71 & 6.13 & 0.87 & 1.20 & 0.73 & 2866 & 221 & 1.162 \\ 163595 & 0.86 & 0.61 & 0.93 & 0.08 & 3.39 & 0.79 & 0.86 & 0.18 & 5.66 & 0.86 & 0.82 & 0.26 & 1689 & 260 & 1.155 \\ 
162520 & 0.62 & 0.69 & 0.74 & 0.07 & 2.89 & 0.80 & 0.79 & 0.16 & 5.62 & 0.86 & 0.82 & 0.26 & 1052 & 321 & 1.150 \\ 934064 & 10.82 & 0.76 & 4.41 & 4.61 & 22.15 & 0.87 & 4.23 & 5.43 & 29.11 & 0.89 & 4.20 & 5.51 & 10437 & 205 & 1.169 \\ 897423 & 5.53 & 0.76 & 4.58 & 2.41 & 20.97 & 0.86 & 4.24 & 5.38 & 28.28 & 0.88 & 4.21 & 5.49 & 2260 & 429 & 1.155 \\ 132470 & 1.64 & 0.71 & 1.32 & 0.40 & 2.97 & 0.86 & 1.28 & 0.44 & 3.85 & 0.95 & 1.27 & 0.44 & 21423 & 77 & 1.202 \\ 416239 & 5.28 & 0.58 & 2.85 & 0.91 & 11.20 & 0.77 & 2.66 & 1.15 & 14.46 & 0.85 & 2.62 & 1.18 & 9667 & 168 & 1.183 \\ 501065 & 5.28 & 0.83 & 2.82 & 3.03 & 11.67 & 0.88 & 2.75 & 3.32 & 15.75 & 0.88 & 2.73 & 3.36 & 18629 & 127 & 1.171 \\ 4490660 & 5.95 & 0.88 & 3.13 & 4.95 & 12.02 & 0.92 & 3.07 & 5.18 & 17.18 & 0.90 & 3.05 & 5.25 & 27337 & 109 & 1.168 \\ 436426 & 5.78 & 0.80 & 2.90 & 2.54 & 11.08 & 0.88 & 2.83 & 2.75 & 14.42 & 0.90 & 2.81 & 2.77 & 17751 & 129 & 1.173 \\ 59817 & 0.84 & 0.67 & 0.82 & 0.13 & 1.94 & 0.67 & 0.80 & 0.14 & 2.70 & 0.66 & 0.80 & 0.14 & 12304 & 82 & 1.181 \\ 274209 & 2.52 & 0.72 & 1.80 & 0.50 & 6.39 & 0.82 & 1.72 & 0.64 & 9.18 & 0.83 & 1.69 & 0.66 & 6565 & 167 & 1.171 \\ 157916 & 2.17 & 0.82 & 1.57 & 0.86 & 3.92 & 0.88 & 1.55 & 0.91 & 4.76 & 0.92 & 1.54 & 0.91 & 26205 & 77 & 1.185 \\ \hline \end{tabular} \end{table*} \subsection{Numerical experiments} Our cluster sample consists of 42 objects formed in a flat \LCDM\ universe ($\Omega_{\rm m}=0.3$; $\Omega_{\rm b}=0.04$; $\Omega_\Lambda=0.7$; $ h=0.7$; $\sigma_8=0.9$). 28 of them have been extracted from a $80~h^{-1}$~Mpc cubic box simulated with a version of the parallel Tree-SPH code {\sc Gadget} \citep{gadget01} that implements the entropy-conserving scheme proposed by \citet{Gadget02}. For a thorough description of these experiments, the reader is referred to \citet{tesis}. 
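For orientation, the numbers above follow directly from the definitions of Section~2. The sketch below (illustrative only; the test mass of $10^{14}\ h^{-1}$~M$_\odot$ is an arbitrary example) recovers the quoted critical density, $\rho_{\rm c}\simeq2.8\times10^{11}\ h^2$ \Msun\ Mpc$^{-3}$, and the corresponding $r_{200}$ for the simulated cosmology:

```python
import math

# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
MPC = 3.0857e22        # m

h = 0.7                # Hubble parameter of the simulated cosmology

# Critical density rho_c = 3 H0^2 / (8 pi G), with H0 = 100 h km/s/Mpc
H0 = 100.0 * h * 1000.0 / MPC                     # s^-1
rho_c_si = 3.0 * H0**2 / (8.0 * math.pi * G)      # kg m^-3
rho_c = rho_c_si * MPC**3 / M_SUN                 # Msun Mpc^-3
print(f"rho_c = {rho_c / h**2:.3g} h^2 Msun/Mpc^3")   # ~2.8e11

# Radius enclosing Delta = 200 for an example mass of 1e14 h^-1 Msun,
# from M_Delta = Delta (4 pi / 3) rho_c r_Delta^3
M200 = 1e14 / h                                   # Msun
r200 = (M200 / (200.0 * 4.0 * math.pi / 3.0 * rho_c)) ** (1.0 / 3.0)
print(f"r200 = {r200:.2f} Mpc")                   # ~1 Mpc
```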
In order to extend our numerical sample of clusters to a wider temperature (mass) range, we have also simulated a $500~h^{-1}$~Mpc box with the code {\sc Gadget2} \citep{gadget2}, from which we have considered 14 objects. Details about these simulations can be found in \citet{Yepes_04}. In each case, high resolution has been achieved by means of the multiple-mass technique \citep[see][]{Klypin01}. An unconstrained random realization of the \LCDM\ power spectrum was generated with $1024^3$ and $2048^3$ particles for the 80 and $500~h^{-1}$~Mpc boxes, respectively. Haloes were selected at $z=0$ from a low-resolution experiment evolved with $128^3$ dark matter particles, and then re-simulated with three and five levels of mass refinement (so that the final mass resolution of both subsamples is similar). The gravitational softening length was set to $\epsilon=2-5\ h^{-1}$~kpc, depending on the number of dark matter particles within the virial radius of the object \citep[following][]{Power03}. Gas particles have only been added in the highest refinement level. Basic information about the objects in our numerical cluster sample is summarized in Table~\ref{tabSims}. They span two orders of magnitude in mass ($\sim10^{13}-10^{15}$~M$_\odot$) and cover a temperature range between 0.5 and 11 keV. The total number of gas particles within the virial radius is always $N_{\rm gas}\ge2\times10^4$, the number of dark matter particles being slightly higher (inversely proportional to the cumulative baryon fraction, $F$). For each object, the centre of mass was found by an iterative procedure. Starting with an initial guess, we compute the centre of mass within a sphere of $500\ h^{-1}$ kpc. The sphere is moved to the new centre until convergence is reached. The radius of the sphere is then decreased by 10 per cent, and the process continues until the sphere contains 200 dark matter particles.
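The centring procedure just described can be sketched as follows (a minimal illustration on synthetic data; apart from the 10 per cent shrinking factor and the 200-particle stopping criterion taken from the text, all numerical values are hypothetical):

```python
import numpy as np

def shrinking_sphere_centre(pos, guess, r0, n_min=200, shrink=0.9, tol=1e-4):
    """Iterative centre of mass, as described in the text:

    recompute the centre of mass within a sphere until it converges,
    then shrink the sphere by 10 per cent and repeat, stopping once
    fewer than n_min particles remain inside.
    """
    centre, r = np.asarray(guess, float), float(r0)
    while True:
        for _ in range(100):                      # converge at fixed radius
            inside = np.linalg.norm(pos - centre, axis=1) < r
            if inside.sum() < n_min:
                return centre
            new = pos[inside].mean(axis=0)
            if np.linalg.norm(new - centre) < tol:
                break
            centre = new
        r *= shrink                               # shrink the sphere by 10%

# Synthetic test: a Gaussian clump of "particles" around a known centre
rng = np.random.default_rng(42)
true_centre = np.array([1.0, 2.0, 3.0])
pos = true_centre + 0.1 * rng.standard_normal((5000, 3))
centre = shrinking_sphere_centre(pos, guess=[1.3, 2.3, 3.3], r0=1.0)
print(centre)   # close to (1, 2, 3)
```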
The radii $r_{2500}$, $r_{500}$ and $r_{200}$ are obtained from the overdensity profile around the final centre of mass. The total mass, baryon fraction, X-ray luminosity and emission-weighted temperature quoted in Table~\ref{tabSims} have been computed within those radii. \subsection{Concentration and polytropic index} The values of the parameters $c_\Delta$ and $\gamma$ have been computed by means of a \Referee{global fit} to the gas density, dark matter density, total mass and effective polytropic index (i.e. gas temperature versus gas density) profiles, averaged over logarithmically-spaced spherical shells between $0.1r_{200}$ and $r_{200}$. \Referee{ More precisely, we minimize the quantity \begin{equation} \chi^2=\chi^2_{\rho}+\chi^2_{M}+\chi^2_{T}+\chi^2_{\gamma}, \end{equation} where \begin{equation} \chi^2_{x}=\sum_{b=1}^{n_{\rm bins}}\frac{\log^2[\,x(b)/x_{\rm model}(b)\,]}{1/{n(b)}} \end{equation} and $x$ denotes the number of particles within each bin, $n(b)$, the total enclosed mass, $M(b)$, the average temperature within the bin, $T(b)=\sum_{i=1}^{n(b)}T_i/n(b)$, and the quantity $T(b)n^{1-\gamma}(b)$. } A grid of analytical profiles is generated for the intervals $1.1<\gamma<1.25$ and $10<r_{\rm s}/(h^{-1}{\rm kpc})<1000$, in uniform steps $\Delta\gamma=0.001$ and $\Delta r_{\rm s}=1\ h^{-1}$ kpc. Since all the measured quantities are proportional to the characteristic density $\rho_{\rm s}$, its best-fitting value has been trivially found from the average of the logarithmic residuals. \begin{figure} \centering \includegraphics[width=8cm]{cg.eps} \caption{ Concentration at different overdensities and effective polytropic index of each object. Solid lines represent our theoretical model, based on the mass-concentration relation of \citet{Bullock01} and our proposed fit to the dependence of the effective polytropic index on concentration, equation~(\ref{eqCG}).
Dotted lines show the one-sigma scatter expected from $\Delta c_{\rm vir}/c_{\rm vir}\simeq0.3$ \citep{Colin04}. \Referee{The linear fit to the $\gamma-c$ relation proposed by \citet{KS01} is shown by the dashed lines on the right panels}. } \label{figCG} \end{figure} Results of the minimization procedure are quoted in Table~\ref{tabSims}, and best-fitting concentrations and effective polytropic indices are plotted in Figure~\ref{figCG}. Solid lines depict the toy model for the mass-concentration relation proposed by \citet{Bullock01}\footnote{Nearly identical results are obtained when the prescription given in \citet{Eke01} is used.}, with $F=0.001$ and $K=3$. We transform the values of $c_{\rm vir}\simeq c_{100}$ to the other overdensities according to the NFW profile. Dotted lines show the one-sigma scatter $\Delta c_{\rm vir}/c_{\rm vir}\simeq0.3$ reported by \citet{Colin04} for relaxed systems. Finally, we find that the phenomenological relation \begin{equation} \gamma=a+b\,c_{200} \label{eqCG} \end{equation} \Referee{ with $a=1.145\pm0.007$ and $b=0.005\pm0.002$ fits our results for the polytropic index reasonably well (although there exists a certain degeneracy between the best-fitting values of both parameters). Dotted lines show the one-sigma scatter in $\gamma(c)$ expected from $\Delta c_{\rm vir}$. A correlation between $\gamma$ and $c$ is expected if both quantities (and therefore the radial structure of the ICM) vary smoothly with the mass of the object. In fact, an approximately linear dependence has already been advocated by \citet{KS01} in order to enforce constant baryon fraction at large radii. Their fit (dashed lines in Figure~\ref{figCG}) is however significantly steeper than equation (\ref{eqCG}). Although it works somewhat better for large $c$, it does not seem to adequately describe the least concentrated systems, suggesting that, most probably, the precise functional form of $\gamma(c)$ is not as simple as a straight line.
We therefore advise against extrapolating our fit towards values of the concentration parameter outside the range covered by the present work. Moreover, additional physics is expected to play an important role in less massive (more concentrated) systems, and therefore the polytropic approximation itself will no longer be valid, since it fails to describe the presence of a central cool core. } Some of our objects deviate appreciably from both the mass-concentration relation and the $\gamma-c$ relation given by expression (\ref{eqCG}). These tend to be merging systems, which have formed (or \emph{are} forming) more recently than relaxed objects. In the spherical collapse picture, that means they have collapsed around density peaks on larger scales, and therefore the resulting density profiles are less concentrated than relaxed haloes of the same mass \citep{Ascasibar04}. As noted by \citet{Gottloeber01}, merging is more common on the scale of galaxy groups. The effective polytropic index seems to be systematically higher in these objects, although it is important to bear in mind that they are not particularly well described by a polytropic equation of state. \Referee{ Actually, the gas distribution shows obvious asymmetries, as well as an offset between the gas and dark matter peaks which results in an artificially flat gas density profile in the central regions. Such flattening is responsible for both the abnormally low baryon fraction measured at $\Delta=2500$ and the unusually high value of $\gamma$ obtained by our fitting routine. } \subsection{Scaling relations} \begin{figure*} \centering \includegraphics[width=15cm]{correct.eps} \caption{ $M-T_{\rm X}$, $F-T_{\rm X}$ and $L_{\rm X}-T_{\rm X}$ scaling relations of our cluster sample at different overdensities, corrected by the factors $Y_{\rm MT}$, $F_\Delta$ and $Y_{\rm LX}$, respectively. Dashed lines represent the theoretical predictions given by expressions (\ref{eqMT}), (\ref{eqFb}) and (\ref{eqLT}).
} \label{figCorr} \end{figure*} The scaling relations of total mass, baryon fraction and X-ray luminosity with respect to the emission-weighted temperature are represented in Figure~\ref{figCorr}, divided by the appropriate values of the structure factors $Y_{\rm MT}$, $F_\Delta$ and $Y_{\rm LX}$ corresponding to each object. These are computed by substituting the best-fitting values of $\gamma$ and $c_\Delta$ into expressions (\ref{eqYmt}), (\ref{eqFb}) and (\ref{eqYlt}). As shown by \citet{Ascasibar03}, polytropic models provide a fairly accurate description of the radial structure of galaxy groups and clusters. It is therefore not surprising that they are able to match the scaling relations as well. When the factors $Y_{\rm MT}$, $F_\Delta$ and $Y_{\rm LX}$ are taken into account, both the normalization and the logarithmic slope (which coincides with the self-similar value) of the scaling relations are correctly predicted by equations (\ref{eqMT}), (\ref{eqFb}) and (\ref{eqLT}). The scatter around the theoretical expectation is quite low, and only merging systems deviate appreciably from the predicted relation. In these objects, the dark matter potential may differ considerably from the NFW form, and the assumptions of hydrostatic equilibrium and a polytropic equation of state provide rather poor approximations \citep{Ascasibar03}. \begin{figure*} \centering \includegraphics[width=15cm]{raw.eps} \caption{ Numerical scaling relations (points), compared to our theoretical prediction (solid lines) based on a variable $c(M)$ and $\gamma(c)$, see text. Dotted lines show the scatter arising from $\Delta c_{\rm vir}/c_{\rm vir}\simeq0.3$, and dashed lines indicate the scaling relations expected for a fixed concentration $c_{\rm vir}=8$ and effective polytropic index $\gamma=1.176$. } \label{figUncorr} \end{figure*} Uncorrected scaling relations are plotted in Figure~\ref{figUncorr}.
Solid lines show our theoretical prediction, using the mass-concentration relation from \citet{Bullock01} and our fit (\ref{eqCG}) to estimate $c_\Delta$ and $\gamma$ as a function of the cluster mass. It turns out that the dependencies on concentration and polytropic index seem to cancel each other so that the resulting scaling relations look as if the clusters were actually self-similar. Indeed, all the relations considered in the present study can be accurately fit by an `average' concentration $c_{\rm vir}=8$ (which implies $c_{200}\simeq6$, $c_{500}\simeq4$, $c_{2500}\simeq1.8$ and $\gamma\simeq1.176$). The corresponding scaling relations, \begin{eqnarray} M_\Delta &=& M_0^\Delta\Delta^{-1/2}\,\left(\frac{T_{\rm X}}{1~{\rm keV}}\right)^{3/2}\\ F_\Delta &=& F_0^\Delta \\ L_{\rm X}^\Delta &=& L_0^\Delta\left(\frac{T_{\rm X}}{1~{\rm keV}}\right)^2, \end{eqnarray} have been plotted as dashed lines, and their normalizations are given in Table~\ref{tabFiducial}. \begin{table} \caption{ Normalization of the approximate scaling relations obtained for $c_{\rm vir}=8$ and $\gamma=1.176$. $M_0^\Delta$ is expressed in $10^{14}~h^{-1}$~M$_\odot$, $L_0^\Delta$ in $10^{43}~h$~erg~s$^{-1}$ and $F_0^\Delta$ in units of the cosmic baryon fraction. } \centering \label{tabFiducial} \begin{tabular}{cccc} \hline $\Delta$ & $M_0^\Delta$ & $F_0^\Delta$ & $L_0^\Delta$\\ \hline 2500 & 5.65 & 0.771 & 2.18 \\ 500 & 5.47 & 0.885 & 2.69 \\ 200 & 4.71 & 0.885 & 2.74 \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=15cm]{cancel.eps} \caption{ \Referee{ Effects of concentration (dotted) and polytropic index (dashed) on initially self-similar scaling relations where $c_{\rm vir}=8$ and $\gamma=1.176$ for all objects (grey solid). Black solid lines show our prediction, taking into account both effects simultaneously. 
} } \label{figCancel} \end{figure*} \Referee{ It is to some extent remarkable that concentration and polytropic index conspire to produce scaling relations that match the self-similar slope so closely. This effect is illustrated more clearly in Figure~\ref{figCancel}, where the contributions of $\gamma$ and $c$ to the scaling relations are plotted separately. The value of the concentration sets the ratio between the radii $r_\Delta$ and the characteristic radius $r_{\rm s}$. Since massive objects are substantially less concentrated than smaller systems, their $r_\Delta$ are much closer to the centre in terms of $r_{\rm s}$, and thus the mass within a given overdensity (which is an increasing function of $r/r_{\rm s}$) will be smaller than indicated by the `average' scaling relation based on a higher value of $c_{\rm vir}$. Conversely, $M_\Delta$ in the least massive objects would be biased high with respect to the self-similar relation. The effect is particularly noticeable for $\Delta=2500$, where $r_\Delta\ll r_{\rm s}$ and the enclosed mass is a rapidly increasing function of $r/r_{\rm s}$; for $r>r_{\rm s}$, it increases only logarithmically, and differences in $r_\Delta/r_{\rm s}$ are much less important. The emission-weighted temperature decreases with $r/r_{\rm s}$, and therefore it is expected to be biased high (low) for large (small) systems. However, the effect is arguably small, since the average is biased towards the central part, where the gas temperature is roughly constant. As shown by the dotted lines on the top panels of Figure~\ref{figCancel}, the net result is a shallower mass-temperature relation at $\Delta=2500$, while no significant change can be appreciated at lower overdensities. The baryon fraction and the X-ray luminosity are steep functions of $r/r_{\rm s}$, and hence more sensitive to the precise value of $c_{\rm vir}$.
The effect on these quantities is also stronger at large overdensities, but unlike the $M-T$ relation, the systematic variation of concentration would yield a noticeable imprint on the $F-T$ and $L-T$ relations at $\Delta=200$. On the other hand, the effective polytropic index controls the slope of the gas density profile. According to~(\ref{eqCG}), larger masses, which imply lower concentrations, also mean a lower effective polytropic index. The gas density profile becomes increasingly steep, which boosts the central baryon fraction and the X-ray luminosity, increasing the emission-weighted temperature only slightly and leaving the total mass mostly unaffected. Given the small variation of the polytropic index throughout the interesting mass range, the effect is never larger than a factor of two, comparable to or smaller than that of the concentration, but it always acts in the opposite sense. } Finally, we would also like to note that, apart from the normalization, our model can estimate the scatter around the average scaling relations by assuming that it arises completely from the scatter in the mass-concentration relation. This estimate, indicated by the dotted lines in Figure~\ref{figUncorr}, has been obtained by combining the value $c'=1.3c_{\rm vir}$ with the effective polytropic index $\gamma'=\gamma(c_{\rm vir}/1.3)$, and $c''=c_{\rm vir}/1.3$ with $\gamma''=\gamma(1.3c_{\rm vir})$, where in both cases $\gamma(c)$ has been computed according to equation~(\ref{eqCG}). Comparing to Figure~\ref{figCorr}, it seems that the internal structure of clusters (accounted for by the factors $Y_{\rm MT}$, $F_\Delta$ and $Y_{\rm LX}$) is indeed responsible for a significant fraction of the scatter in the observed scaling relations, as recently suggested by \citet{OHara_05}.
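Both the scatter estimate above and the transformation of $c_{\rm vir}$ to other overdensities rely on equation (\ref{eqRhos}): for a given halo, $\rho_{\rm s}$ is fixed, so the product $\Delta\,c_\Delta^3\,g(c_\Delta)$ must take the same value at every overdensity. The sketch below (an illustration, taking $c_{\rm vir}\simeq c_{100}$ as in the text) recovers the quoted conversions $c_{\rm vir}=8\Rightarrow c_{200}\simeq6$, $c_{500}\simeq4$, $c_{2500}\simeq1.8$:

```python
import math

def g(x):
    # g(x) = [ln(1+x) - x/(1+x)]^-1, as defined in the text
    return 1.0 / (math.log(1.0 + x) - x / (1.0 + x))

def c_delta(c_vir, delta, delta_vir=100.0):
    """Solve Delta * c^3 * g(c) = Delta_vir * c_vir^3 * g(c_vir) for c
    by bisection (the left-hand side increases monotonically with c)."""
    target = delta_vir * c_vir**3 * g(c_vir)
    lo, hi = 1e-3, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta * mid**3 * g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for delta in (200, 500, 2500):
    print(delta, round(c_delta(8.0, delta), 2))
# c_200 ~ 6, c_500 ~ 4, c_2500 ~ 1.8, as quoted in the text
```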
Note, however, that in our case the differences in internal structure are obviously not related to the presence of a cool core or the action of any external source of energy, but rather to the different formation histories of each object. \subsection{Comparison with previous work} \begin{table} \caption{ Normalization $M_0^\Delta$ (in $10^{14}~h^{-1}$~M$_\odot$) and logarithmic slope $\alpha$ of the $M-T$ relation reported in previous numerical studies. } \begin{center} \begin{tabular}{lrcl} \hline {\sc Reference} & $\Delta$~ & $M_0^\Delta$ & ~$\alpha$ \\ \hline \citet{NFW95} & 200 & 6.14 & 1.5 \\ \citet{EMN96} & 2500 & 7.85 & 1.5 \\ & 500 & 7.84 & 1.5 \\ \citet{BN98} & 200 & 8.68 & 1.5 \\ \citet{Pen98} & 200 & 7.28 & 1.5 \\ \citet{Eke98} & 100 & 6.12 & 1.5 \\ \citet{Yoshikawa00} & 100 & 5.56 & 1.5 \\ \citet{ME01} & 500 & 8.18 & 1.52 \\ & 200 & 7.65 & 1.54 \\ \citet{Muanwong02} & 200 & 13.9 & 1.5 \\ \hline \end{tabular} \end{center} \label{tabMTsims} \end{table} The mass-temperature relation in the absence of radiative processes has been extensively studied by means of cosmological numerical simulations. A summary of previous results is given in Table~\ref{tabMTsims}. Slopes consistent with $3/2$ are found in most studies, with normalizations showing a relatively low scatter around $M_0^\Delta\sim(7-8)\times10^{14}~h^{-1}$~M$_\odot$. Similar results are obtained when cooling and stellar feedback are considered \citep[e.g.][]{Borgani04}, although there is a trend towards steeper slopes and lower normalizations, in better agreement with observational data. The $M-T$ relation predicted by our model (see Table~\ref{tabFiducial}) is also considerably lower than the values found in previous experiments based on purely adiabatic gasdynamics. We think that this is to a great extent a resolution effect, coupled to the use of an entropy-conserving scheme to solve the SPH equations. 
Most if not all of the earlier work on the adiabatic scaling relations of galaxy clusters relied on the traditional formulation of SPH \citep{Lucy77,GingoldMonaghan77}. It has been recently shown \citep[e.g.][]{Gadget02,Ascasibar03,OShea05} that poor entropy conservation leads to spurious entropy losses in the cluster cores, and thus previous codes tend to systematically overestimate the central density and underestimate the central gas temperature. On the other hand, lack of resolution results in artificially flattened density and temperature profiles, i.e. nearly isothermal and isentropic cores. Actually, some studies \citep[e.g.][]{ME01,Muanwong02} even report decreasing temperature profiles towards the centre. This is in strong disagreement with our results, as well as with those of independent numerical work based on high-resolution Eulerian simulations \citep[e.g.][]{Loken02}, in which the temperature profile in the absence of radiative processes is also found to decrease monotonically with radius. Under such conditions, the emission-weighted temperature (which is biased towards the central, dense and X-ray bright regions of the cluster) is larger than the mass-weighted average, and thus the resulting normalization of $M-T$ relation becomes considerably lower when $T_{\rm X}$ is used \emph{and} the central parts of the objects under study are well resolved. Concerning the $L_{\rm X}-T_{\rm X}$ relation, there is relatively little numerical work based on adiabatic gasdynamical simulations. This is also a reflection of the stringent resolution requirements, due to the fact that a significant fraction of the X-ray photons are expected to be produced in the innermost regions. 
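The temperature bias just discussed can be checked directly within the polytropic model: comparing equations (\ref{eqT}) and (\ref{eqTx}), the emission-weighted average is dominated by the hot, dense centre and must exceed the mass-weighted value whenever the temperature declines outwards. A sketch (with assumed, representative values $\gamma=1.176$ and $c_\Delta=6$):

```python
import numpy as np

def y(eta, c, n=4001):
    # y(eta, c) = int_0^c [ln(1+x)/x]^eta x^2 dx, via Simpson's rule
    x = np.linspace(0.0, c, n)
    f = np.ones_like(x)                 # ln(1+x)/x, with its x -> 0 limit
    f[1:] = np.log1p(x[1:]) / x[1:]
    u = f**eta * x**2
    h = x[1] - x[0]
    return h / 3 * (u[0] + u[-1] + 4 * u[1:-1:2].sum() + 2 * u[2:-2:2].sum())

gamma, c = 1.176, 6.0
T_mw = y(gamma / (gamma - 1), c) / y(1 / (gamma - 1), c)          # eq. (eqT)
T_x = y(2 / (gamma - 1) + 1.5, c) / y(2 / (gamma - 1) + 0.5, c)   # eq. (eqTx)
print(T_mw, T_x)   # both below 1 (T declines outwards), with T_x > T_mw
```

The emission weight $\propto\rho^2 T^{1/2}$ corresponds to a larger exponent $\eta$ than the mass weight, so the average is pulled towards small radii, where $\ln(1+x)/x$ (and hence $T/T_0$) is closest to unity.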
A further problem affecting cosmological numerical experiments is that the smallest objects are typically resolved with fewer particles, so their bolometric X-ray luminosity is underestimated and the resulting $L-T$ relation is artificially steepened \citep[see e.g.][]{BN98,Yoshikawa00,Yepes_04}. Most of our objects (see Table~\ref{tabSims}) have more than $10^5$ gas particles, and in principle they should not be severely affected by this problem \citep[see e.g.][]{Borgani02,Borgani_05}. Nevertheless, it is always wise to bear this consideration in mind when drawing conclusions from numerical data. \begin{table} \caption{$L-T$ relation found in previous simulations, with $L_0^\Delta$ expressed in $10^{43}~h$~erg~s$^{-1}$.} \begin{center} \begin{tabular}{lrcl} \hline {\sc Reference} & $\Delta$~ & $L_0^\Delta$ & ~$\alpha$ \\ \hline \citet{NFW95} & 200 & 4.61 & 2 \\ \citet{BN98} & 200 & 2.67 & 2 \\ \citet{Eke98} & 100 & 0.85 & 2 \\ \citet{Bialek01} & 500 & 2.44 & 2.02 \\ \hline \end{tabular} \end{center} \label{tabLTsims} \end{table} Table~\ref{tabLTsims} shows several fits to the adiabatic $L-T$ relation reported in the literature. The normalizations are in this case broadly consistent with our results (Table~\ref{tabFiducial}), although the scatter between different estimates is extremely large. This is not entirely unexpected, given the sensitivity of the X-ray luminosity to the details of the gas density profile in the central regions. Finally, the baryonic content of galaxy clusters has been recently investigated by \citet{Kravtsov05}. For their adiabatic simulations, they find $F_{2500}=0.85\pm0.08$ and $F_{500}=0.94\pm0.03$ in units of the cosmic value.
As noted by these authors, the baryon distribution obtained with the Eulerian code {\sc ART} \citep{ARThydro02} is systematically less concentrated than that obtained with {\sc Gadget}, even when entropy conservation is enforced, but nevertheless the cumulative baryon fractions beyond $r_{2500}$ are about $3-5$ per cent higher. Since the results of \citet{Kravtsov05} are based on a subset of the cluster sample studied here, the interested reader is referred to that paper for an extensive comparison between both codes. Our results are also compatible with the baryon fraction measured by \citet{Ettori06} in their non-radiative runs, when the standard implementation of the artificial SPH viscosity is used. As pointed out by these authors, it is interesting that an improved scheme may lower the baryon fraction in the innermost regions by about 15 per cent. Although not dramatic, it seems clear that the details of the numerical technique do have a measurable impact on the predicted radial profiles near the centre, as well as on the scaling relations at high overdensities. These relatively small discrepancies between different algorithms should nevertheless be considered as part of the theoretical uncertainty. \section{Observations} \label{secObs} \begin{figure*} \centering \includegraphics[width=15cm]{obs.eps} \caption{ Observed $M-T_{\rm X}$, $F-T_{\rm X}$ and $L_{\rm X}-T_{\rm X}$ scaling relations, compared to our theoretical prediction (solid lines) and the one-sigma scatter expected from $\Delta c_{\rm vir}/c_{\rm vir}\simeq0.3$ (dotted lines). } \label{figObs} \end{figure*} It is the aim of the present work to provide a sound theoretical prediction of the cluster scaling relations when only gravity, adiabatic gasdynamics and shock-wave heating act on the intracluster medium.
In real life, additional physical processes, such as radiative cooling, energy injection by stars and AGN or thermal conduction, may play an important role or even determine the exact form of the scaling relations. However, the influence of all these phenomena outside the central regions is expected to be relatively small when compared to shock heating, especially for the most massive systems. Therefore, one may expect a priori that the adiabatic scaling relations roughly match the observed ones. Departures would measure the effect of additional physics, and in principle should be more noticeable at high overdensities and for low-temperature systems. Our predictions for the adiabatic case are compared with observational data in Figure~\ref{figObs}. We find a fairly good agreement, both in shape and, to some extent, scatter, with the observed $M-T$ relation at different overdensities. Only the dataset from \citet{Piffaretti05} is not well described by our model. From visual inspection of Figure~\ref{figObs}, though, it seems that the dark matter masses inferred for some of these systems are lower not only than our results but also than the other observations. \Referee{ On the other hand, the luminosity-temperature relation observed for galaxy clusters is roughly consistent with our theoretical prediction, but the slope is appreciably steeper. As one approaches the group regime, real systems can be one order of magnitude less bright than the model. } The X-ray luminosity is much more sensitive to the details of the central parts than the total mass or the emission-weighted temperature, and thus it is not surprising that the $L-T$ relation deviates from the adiabatic prediction more significantly than the mass-temperature relation. The lower X-ray emissivity seems to be intimately connected to the shape of the gas density profile. 
Our models correctly predict that the baryon fraction should be an increasing function of radius, and such a trend is clearly consistent with the observational data. However, they also predict that, at a given overdensity, the baryon fraction should be roughly independent of cluster mass. This is blatantly at odds with observations. As noted by \citet{Vikhlinin_05scal}, the conversion of gas into stars is a crucial factor that should be taken into account. Actually, it would be very interesting to determine whether it could completely explain the observed central baryon depletion on its own, or whether, on the contrary, some other physical mechanism (e.g. heating) must be invoked in order to explain the observed density and temperature profiles \citep[see e.g.][for a recent discussion on this issue]{Borgani_05}. In any case, radiative processes must obviously affect the physical properties of real clusters to some extent. As recently shown by \citet{OHara_05}, the scaling relations of clusters with and without a cool core are clearly offset from each other, and the observed scatter can be significantly reduced by introducing the peak strength (characterized by the central X-ray surface brightness) as an additional parameter. Although such a parameter would measure a mixture between the intensity of cooling and the internal structure of the halo, a visual comparison between the simulation results plotted in Figure~\ref{figUncorr} and the observational data shown in Figure~\ref{figObs} suggests that observed systems display a somewhat larger scatter than our simulated clusters, which suggests that both processes may have a comparable contribution to the total scatter. \Referee{ Measurement errors, most notably for cluster mass estimates, also contribute to the scatter in the observed scaling relations, although they have been reported to be relatively small compared to the intrinsic scatter \citep{OHara_05}.
We have also neglected redshift evolution, which can modify the observed masses and luminosities by a factor of $H(z)/H_0$; for a $\Lambda$CDM universe, this amounts to about 40 per cent at $z=0.2$. } A rigorous statistical analysis (and a larger dataset) would be required in order to make a quantitative assessment. \section{Conclusions} \label{secConclus} In this paper, the scaling relations between gas and dark matter mass, X-ray luminosity and emission-weighted temperature of galaxy groups and clusters have been investigated from a theoretical point of view. As a starting point, we have considered a relatively simple case in which the influence of radiative processes on the observable properties of the ICM gas has been completely neglected. Our estimates of the adiabatic scaling relations have been computed from the polytropic models described in \citet{Ascasibar03}, based on the results of high-resolution gasdynamical simulations. Our main conclusions can be summarized as follows: \begin{enumerate} \item Dark matter haloes are well known not to scale self-similarly, but according to a certain mass-concentration relation. We find that the effective polytropic index of the gas also varies systematically with mass, and propose the phenomenological fit $\gamma=1.145+0.005\,c_{200}$, equation (\ref{eqCG}), to model the dependence of $\gamma$ on the concentration $c$ of the dark matter halo. \item Given $c(M)$ and $\gamma(c)$, the whole structure of the ICM is fully specified by our model. It turns out that the effects of the varying polytropic index and concentration tend to cancel out at all overdensities, yielding scaling relations that are well described by simple power laws whose exponents coincide with the self-similar prediction, and whose normalizations are well fitted by a `typical' $c_{\rm vir}\simeq8$. \item Our model provides an excellent match to numerical data.
The normalization of the $M-T$ relation is significantly lower than previous values reported in the literature, which we attribute to a resolution effect. The scalings of the baryon fraction and the $L-T$ relation are broadly consistent with independent numerical work. \item Additional physics (most notably, radiative cooling and star formation) has an important effect on the density and temperature profiles of real clusters. The $M-T$ relation is not severely affected, but the baryon fraction observed in low-mass systems is considerably below our theoretical prediction. This results in a lower X-ray luminosity, and it is ultimately responsible for the steepness of the observed $L-T$ relation. On the other hand, the precise strength of cool cores (which cannot form in our simulations) seems to increase the scatter around the average scaling relations. \end{enumerate} \section*{Acknowledgments} This work has been partially supported by the \emph{Plan Nacional de Astronom\'\i a y Astrof\'\i sica} (AYA2003-0973), the \emph{Acciones Integradas Hispano-Alemanas} (HA2000-0026), the \emph{Deutscher Akademischer Austausch Dienst} and NASA grants G02-3164X and G04-5152X. We thank the CIEMAT, the \emph{Forschungszentrum J\"ulich} and the \emph{Astrophysikalisches Institut Potsdam} for kindly allowing us to use their supercomputer facilities to carry out the numerical simulations used in this article. \bibliographystyle{mn2e}
\section{Introduction} Hyperspectral imaging is widely used in various applications, such as biomedical imaging, terrain classification, and military surveillance \cite{tiwari2011An,zhang2015Compression,zhao2014Hyperspectral}. However, clean hyperspectral images (HSIs) are rarely obtained due to unavoidable corruption by mixed types of noise, such as Gaussian noise, impulse noise, deadlines, and stripes, in the acquisition process \cite{zhang2014Hyperspectral}, which makes it more challenging to process HSIs in various applications, such as classification \cite{li2017Hyperspectral} and unmixing \cite{iordeche2011Sparse}. Thus, there is an essential demand for developing efficient algorithms to remove noise from HSIs. In the last decades, a number of HSI denoising techniques have been developed. For example, \cite{othman2006noise} exploits the dissimilarity of signal regularity in both the spatial and spectral dimensions of HSIs with a hybrid spatial-spectral derivative-domain wavelet shrinkage model; \cite{zhong2013multiple} proposes to simultaneously adopt the spatial and spectral dependences in a unified probabilistic framework; \cite{qian2013hyperspectral} proposes a sparse representation-based framework that considers both the nonlocal similarity and the spectral-spatial structure of HSIs. Moreover, methods such as principal component analysis, wavelet shrinkage, anisotropic diffusion, and multitask sparse matrix factorization have been considered for HSI denoising \cite{ye2015multitask,chen2011denoising,duarte2007comparative,wang2010anisotropic}. Most of the above-mentioned methods require specific prior knowledge of the noise and, as a consequence, can only remove one or two types of noise. Thus, there is still a demand for developing more effective methods to remove mixed types of noise from HSIs.
More recently, low-rank matrix recovery-based approaches have been developed for HSI denoising \cite{Xu2017Denoising}, obtaining promising performance. The basic assumption is that the data can be decomposed into a low-rank and a sparse component, which generally holds for HSIs, with the two components corresponding to the clean images and the sparse noise, respectively. Robust principal component analysis (RPCA) has been widely used for low-rank and sparse matrix separation \cite{Cand2011Robust}. The separation usually relies on nuclear and $\ell_1$ norm minimization. Recent studies point out that the nuclear norm may not approximate the true rank function well, which may lead to degraded performance in low-rank matrix recovery \cite{peng2015subspace}. Consequently, some nonconvex approaches to RPCA have been developed to better approximate the rank function, which has proven successful \cite{peng2020robust}. The nonconvex RPCA methods mainly focus on developing more accurate low-rank approximations for low-rank matrix recovery. However, existing methods rarely consider developing more accurate sparse approximations for the sparse matrix recovery. In fact, there is a close connection between the low-rank and sparse recovery problems: minimizing the rank of a matrix is equivalent to minimizing the sparsity of its vector of singular values. For nonconvex rank approximations, the improved approximation actually benefits from the improved approximation to the sparsity of the singular values. Thus, the great success of nonconvex rank approximation inspires us to separate low-rank and sparse matrices with nonconvex approximations to both the rank and the sparsity, respectively. In this paper, we propose a novel RPCA method with log-based approximations to both the rank function and column-wise sparsity.
We summarize the key contributions of our paper as follows: 1) We propose a novel measure of column-wise sparsity, named the $\ell_{2,\log}$-norm, which is more accurate than the widely used $\ell_{2,1}$ norm. Moreover, the $\ell_{2,\log}$-norm is unitarily invariant; 2) We formally provide a closed-form solution to the $\ell_{2,\log}$ norm-based thresholding problem, which can be generally used in various problems that impose column-wise sparsity; 3) An efficient optimization algorithm is developed for the proposed model, which is theoretically guaranteed to converge; 4) We observe superior performance of the proposed method compared with state-of-the-art baseline methods, which confirms the effectiveness of our method. \section{Related Work} \label{sec_related} Given a data matrix $X$, RPCA assumes that the data can be decomposed into a low-rank and a sparse part, which can be mathematically formulated as $X=L+S$. To do this, the classic RPCA solves the following constrained optimization problem \cite{Cand2011Robust}: \begin{equation} \label{eq_rpca_l1} \min_{L,S} \|L\|_* + \lambda \|S\|_{1}, \quad s.t.\quad X = L+S, \end{equation} where $\|\cdot\|_*$ is the nuclear norm, which sums all singular values of the input matrix, $\|\cdot\|_1$ is the $\ell_1$ norm, which sums the absolute values of all elements of the input matrix, and $\lambda\ge 0$ is a balancing parameter. \section{RPCA with Log-based Approximations} Given the observed HSI $\mathcal{X}\in\mathcal{R}^{r\times c\times n}$, with $r$ and $c$ being the spatial dimensions and $n$ being the spectral dimension, we first divide $\mathcal{X}$ into overlapping patches, with each patch having size $q\times q\times n$. Then we obtain a matrix $X$ of size $q^2\times n$ by lexicographically ordering each patch. In the rest of this section, we develop our model based on the observation $X$.
It is noted that the new model can be directly applied to the overall data set with reshaped size $(rc)\times n$, whereas patch-based processing follows a common strategy for HSI applications. It is natural that the observed HSI can be decomposed into a low-rank and a sparse component, corresponding to the low-rank clean data and the sparse noise, respectively. To separate them from the data $X$, it is straightforward to adopt the RPCA model in \cref{eq_rpca_l1}. To enhance the spatial connection information in the sparse component, the $\ell_{2,1}$ norm is adopted to replace the $\ell_1$ norm in \cref{eq_rpca_l1}, leading to \cite{xu2012robust} \begin{equation} \label{eq_rpca_l21} \min_{L,S} \|L\|_* + \lambda \|S\|_{2,1}, \quad s.t. \quad X = L+S, \end{equation} where $\|S\|_{2,1} = \sum_{j}\|s_j\|_2$ is the $\ell_{2,1}$ norm, and $\|s_j\|_2$ is the $\ell_2$ norm of the $j$-th column of $S$. In fact, there is a close connection between the nuclear norm and the $\ell_{2,1}$ norm. For a matrix $M$, define $a$ and $b$ to be the vectors that contain the singular values of $M$ and the $\ell_2$ norms of all its columns, respectively. Then, based on the definitions of the nuclear and $\ell_{2,1}$ norms, it is easily seen that $\|M\|_* = \|a\|_1$ and $\|M\|_{2,1}=\|b\|_1$, respectively. Thus, minimizing the nuclear norm and the $\ell_{2,1}$ norm restricts the sparsity of the singular values and of the columns, respectively. Recently, it has been pointed out that nuclear norm minimization may not lead to the desired low-rank matrix recovery due to the inaccurate approximation of the rank function by the nuclear norm. To overcome this issue, nonconvex approaches have been developed that approximate the rank function more accurately than the nuclear norm, which has proven successful in various applications such as subspace clustering \cite{peng2015subspace,peng2016feature} and robust PCA \cite{peng2020robust}.
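As a purely illustrative aside (not part of the original derivation), the identities $\|M\|_* = \|a\|_1$ and $\|M\|_{2,1}=\|b\|_1$ stated above can be checked numerically; the following minimal numpy sketch, with all variable names our own, assumes nothing beyond the definitions in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))

a = np.linalg.svd(M, compute_uv=False)  # vector of singular values of M
b = np.linalg.norm(M, axis=0)           # l2 norms of the columns of M

# ||M||_* (nuclear norm) equals the l1 norm of a
assert np.isclose(np.linalg.norm(M, 'nuc'), np.linalg.norm(a, 1))
# ||M||_{2,1} equals the l1 norm of b
assert np.isclose(np.sum(b), np.linalg.norm(b, 1))
```

Both minimizations thus act as $\ell_1$ penalties on a derived nonnegative vector, which is why they promote sparsity of the singular values and of the columns, respectively.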
This inspires us to adopt a nonconvex rank approximation to recover the low-rank component from the data. Specifically, we adopt the log-determinant rank approximation, defined as \begin{equation} \label{eq_logdet} h(L) = \operatorname{logdet}(I + (L^TL)^{\frac{1}{2}}) = \sum_{i}\operatorname{log}(1+\sigma_i(L)), \end{equation} where $I$ is an identity matrix of proper size and $\sigma_{i}(\cdot)$ is the $i$-th largest singular value of the input matrix. \cref{eq_logdet} leads to the following model: \begin{equation} \min_{L,S} \operatorname{logdet}(I+(L^TL)^{\frac{1}{2}}) + \lambda \|S\|_{2,1}, \quad s.t. \quad X = L+S. \end{equation} Due to the close connection between the nuclear and the $\ell_{2,1}$ norms, the latter suffers from a similar issue to the former when there are large values in a matrix. However, recent works have mainly focused on the nonconvex approximation to the low-rank part, while the sparse part is rarely considered. This inspires us to develop a more accurate approximation to restrict the column-wise sparsity of a matrix. In this paper, to better approximate the column-wise sparsity, we propose the following novel measure, named the $\ell_{2,\log}$ norm: \begin{equation} \label{eq_col_log_norm} \mathcal{C}(S) = \|S\|_{2,\log} = \sum_{i=1}^{n} \operatorname{log}(1+\|s_i\|_2). \end{equation} It is seen that $\mathcal{C}(S)$ measures the column-wise sparsity more accurately than the $\ell_{2,1}$ norm when $S$ contains large values. Moreover, it is unitarily invariant. Incorporating \cref{eq_col_log_norm} into the objective, we obtain the following Log-based Low-rank and Sparse approximations for RPCA model (LLS-RPCA): \begin{equation} \label{eq_obj} \min_{L,S} h(L) + \lambda \mathcal{C}(S), \quad s.t. \quad X=L+S. \end{equation} For its optimization, we will develop an efficient algorithm based on the augmented Lagrange multiplier (ALM) technique.
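To make the contrast between the two column-wise penalties concrete, the short numpy sketch below (an illustration written for this text; the function names are our own) evaluates both norms on a matrix with a single large-magnitude column: the $\ell_{2,1}$ penalty grows linearly with the outlier, while the log-based penalty grows only logarithmically.

```python
import numpy as np

def l21_norm(S):
    # ||S||_{2,1} = sum_i ||s_i||_2
    return np.sum(np.linalg.norm(S, axis=0))

def l2log_norm(S):
    # ||S||_{2,log} = sum_i log(1 + ||s_i||_2)
    return np.sum(np.log1p(np.linalg.norm(S, axis=0)))

# One large column dominates l_{2,1} linearly, but barely moves l_{2,log}
S = np.zeros((5, 3))
S[0, 0] = 100.0
print(l21_norm(S))    # 100.0
print(l2log_norm(S))  # log(101) ~ 4.615
```

This is the sense in which the log-based measure is less biased by large entries: a few strong noise columns do not dominate the penalty, so they need not be over-shrunk to reduce the objective.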
\section{Optimization} The augmented Lagrangian function of \cref{eq_obj} is as follows: \begin{equation} \begin{aligned} \mathcal{L} = & h(L) + \lambda \mathcal{C}(S) + \frac{\rho}{2}\|X-L-S+\Theta / \rho\|_F^2. \end{aligned} \end{equation} \subsection{Optimization w.r.t. $L$} \label{sec_opt_L} The sub-problem associated with $L$ is \begin{equation} \label{eq_sub_L} \min_{L} \operatorname{logdet}(I+(L^TL)^{\frac{1}{2}}) + \frac{\rho}{2}\|X-L-S+\Theta / \rho \|_F^2. \end{equation} For a matrix $D$, we define $\mathcal{P}(D)$ and $\mathcal{Q}(D)$ to be its matrices of left and right singular vectors, and $\sigma_i(D)$ to be its $i$-th largest singular value. Then, similar to \cite{peng2015subspace,peng2020robust}, \cref{eq_sub_L} admits a closed-form solution given by the following operator: \begin{equation} \label{eq_sol_c_uffp} L = \mathcal{D}_{\frac{1}{\rho}}(X-S+\Theta/\rho), \end{equation} where $\mathcal{D}_{ \tau }(D) = \mathcal{P}(D) \text{diag}\{\sigma_i^*\} (\mathcal{Q}(D))^T$, with \begin{eqnarray} \sigma_{i}^* = \begin{cases} \xi,&\mbox{ if $f_i(\xi) \le f_i(0)$ and $ (1 + \sigma_{i}(D))^2 > 4\tau$, } \\ 0, &\mbox{ otherwise, } \end{cases} \end{eqnarray} where $f_i(x) = \frac{1}{2}(x-\sigma_i(D))^2 + \tau \log (1+x)$, and $\xi = \frac{\sigma_i(D)-1}{2} + \sqrt{\frac{(1+\sigma_i(D))^2}{4} - \tau }$. \subsection{Optimization w.r.t. $S$} The sub-problem associated with $S$ is \begin{equation} \label{eq_sub_S} \begin{aligned} \min_{S} \lambda \sum_{i=1}^{n} \! \operatorname{log}(1+\|s_i\|_2) \!+\! \frac{\rho}{2}\|X\!-\!L\!-\!S+\Theta / \rho\|_F^2. \end{aligned} \end{equation} For the optimization, we have the following theorem.
\begin{theorem}[$\ell_{2,\log}$-shrinkage operator] Given matrix $Y\in\mathcal{R}^{d\times n}$ and a nonnegative parameter $\tau$, the following problem \begin{equation} \label{eq_soft_thres} \min_{W} \frac{1}{2} \|Y-W\|_F^2 + \tau \|W\|_{2,\log} \end{equation} admits a closed-form solution in a column-wise manner: \begin{equation} \label{eq_sol_soft_thres} \!w_i \!=\! \begin{cases} \frac{\xi}{\|y_i\|_2}\!y_i, \!&\!\!\! \mbox{ \!if $f_i(\xi) \!\le\! \frac{\|y_i\|_2^2}{2}, \frac{(1 \!+\! \|y_i\|_2)^2}{4}\!>\!\tau$, and $\xi >0$} \\ 0, & \!\mbox{ otherwise, } \end{cases} \end{equation} where $f_i(x) = \frac{1}{2}(x-\|y_i\|_2)^2 + \tau \log (1+x),$ and $\xi = \frac{\|y_i\|_2-1}{2} + \sqrt{\frac{(1+\|y_i\|_2)^2}{4} - \tau }.$ \end{theorem} Due to space limits, we do not provide the detailed proof. For ease of presentation, we denote the $\ell_{2,\log}$-shrinkage operator of \cref{eq_sol_soft_thres} as $\mathcal{F}_{\tau}(Y)$. Thus, the solution to \cref{eq_sub_S} is \begin{equation} S = \mathcal{F}_{\frac{\lambda}{\rho}}(X-L+\Theta/\rho). \end{equation} \subsection{Updating $\Theta$ and $\rho$} We update $\Theta$ and $\rho$ in the standard way: \begin{equation} \label{eq_update_theta} \begin{aligned} \Theta &= \Theta + \rho(X-L-S), \\ \rho & = \rho\kappa, \end{aligned} \end{equation} where $\kappa>1$ is a parameter that ensures $\rho$ increases at each iteration. The optimization is theoretically guaranteed to converge; due to space limits, we omit the detailed proof. \vspace{-2mm}\section{ Experiments } In this section, we conduct experiments to evaluate the performance of LLS-RPCA in HSI denoising. In particular, we compare it with RPCA \cite{Cand2011Robust}, low-rank matrix recovery (LRMR) \cite{zhang2014Hyperspectral}, spectral-spatial total variation (SSTV) \cite{dddd2011sstv}, and noise-adjusted LRMA (NAILRMA) \cite{he2015Hyperspectral}. We test all methods on both synthetic and real-world data sets.
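For reference, the $\ell_{2,\log}$-shrinkage operator $\mathcal{F}_\tau$ of the theorem above can be sketched in a few lines of numpy. This is a minimal illustration written for this text (variable names are our own), not the code used in our experiments.

```python
import numpy as np

def l2log_shrinkage(Y, tau):
    """F_tau(Y): column-wise minimizer of 0.5*||Y - W||_F^2 + tau*||W||_{2,log}."""
    W = np.zeros_like(Y)
    for i in range(Y.shape[1]):
        ny = np.linalg.norm(Y[:, i])
        disc = (1.0 + ny) ** 2 / 4.0 - tau       # must be positive for a real xi
        if disc <= 0.0:
            continue                             # column shrunk to zero
        xi = (ny - 1.0) / 2.0 + np.sqrt(disc)    # stationary point of f_i
        if xi <= 0.0:
            continue
        f_xi = 0.5 * (xi - ny) ** 2 + tau * np.log1p(xi)
        if f_xi <= 0.5 * ny ** 2:                # compare with f_i(0) = ||y_i||^2/2
            W[:, i] = (xi / ny) * Y[:, i]
    return W
```

Small-norm columns are set exactly to zero, while large-norm columns are shrunk only mildly, mirroring the behaviour of the log penalty discussed in the previous section.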
We present the detailed experimental results in the rest of this section. All experiments are conducted on a Lenovo laptop with an Intel Core-i7-9750H CPU @2.6GHz, 16GB RAM, and a 64-bit Windows 10 system. \subsection{Experiments on Simulated Data Sets } \label{sec_exp_simu} To quantitatively evaluate the proposed method, we first conduct experiments on a synthetic data set. We follow \cite{He2015Total} and use the Indian Pines data set \cite{PURR1947} to generate the synthetic data. In total, we generate 224 bands, with each band containing 145$\times$145 pixels. Detailed descriptions of the synthetic data set can be found in \cite{He2015Total}. We treat the generated data set as the ground-truth clean HSIs. To evaluate the denoising performance of LLS-RPCA, we artificially add noise to the generated HSIs. \begin{table} \caption{ Quantitative Comparison on Synthetic Data Sets } \resizebox{0.48\textwidth}{!} { \begin{tabular}{ |c|c|c|c|c|c|c|} \hline \multirow{2}{1.5cm}{\centering  }& \multicolumn{6}{c|}{The First Mixed Noise}\\ \cline{2-7} \multirow{2}{1.5cm}{} & Noisy & SSTV & LRMR & NAILRMA & RPCA & Ours \\ \hline MPSNR(dB) & 16.972&30.431 &33.795 &33.719 &29.711 & $\mathbf{34.823}$ \\ \hline MSSIM & 0.277 &0.774 &0.903 &0.902 & 0.810 &$\mathbf{0.912}$ \\ \hline ERGAS & 332.547 &74.370 &50.087 &50.329 &78.723 &$\mathbf{44.303}$ \\ \hline TIME(s) &\ &83.686 &205.204 &106.471 &612.642 &$\mathbf{7.671}$\\ \hline \multirow{2}{1.5cm}{\centering }& \multicolumn{6}{c|}{The Second Mixed Noise}\\ \cline{2-7} \multirow{2}{1.5cm}{} & Noisy & SSTV & LRMR & NAILRMA & RPCA & Ours \\ \hline MPSNR(dB) & 14.026 &27.998 &30.901 &31.358 &27.377 & $\mathbf{32.126}$ \\ \hline MSSIM & 0.199 &0.678 &0.838 &0.840 & 0.731 &$\mathbf{0.853}$ \\ \hline ERGAS & 466.370 &96.043 &68.658 &64.577 &102.107 & $\mathbf{59.176}$ \\ \hline TIME(s) & \ &82.542 &207.204 &105.261 &493.382 &$\mathbf{11.092}$ \\ \hline \end{tabular} }% \label{tab_simulated} \end{table} In particular, we add noise to the
clean images in the following two ways: \begin{itemize} \item[1)] We fix the noise intensity for all bands. In particular, we add zero-mean Gaussian noise to each band individually, where the variance equals 0.14. Then, we add stripes to bands 161-190 as follows. For each band, we randomly select 20-40 columns. Then, for each selected column, we add a fixed value to all pixels, where the value is randomly picked from $(-0.25,0.25)$. \item[2)] We randomly add noise to each band, where the SNR value of each band varies from 45 to 55 dB. The mean SNR value of all bands is 49.75. Then we add salt-and-pepper impulse noise to the data set, where 20\% of the pixels are randomly corrupted in each band. We randomly set the impulse noise intensity of each band from $[0.0196,0.0784]$, where the mean intensity is 0.0492. \end{itemize} Then, we apply all methods to the above two noisy data sets. In our experiments, we adopt three widely used metrics, including peak SNR (PSNR) \cite{huynh2008Scope}, the structural similarity index (SSIM) \cite{wang2004image}, and ERGAS \cite{wald:hal-00464703}. For all methods, we tune their parameters such that the best performance is observed. Specifically, for RPCA, we use its theoretically optimal value of the balancing parameter. For each method, we report its averaged performance over all bands in \cref{tab_simulated}. It is seen that the proposed method has the best performance in all metrics under both noise settings. Among the baseline methods, LRMR and NAILRMA are the most competitive ones. Compared with LRMR (resp. NAILRMA), the proposed method improves the performance in PSNR, SSIM, and ERGAS by (1,0.01,6) and (1,0.01,6) (resp. (1.5,0.02,9) and (1,0.01,5)) under the two mixed noise conditions, respectively. Compared with the other methods, the improvements are more significant. Moreover, the proposed method is significantly faster than SSTV, LRMR and NAILRMA.
These observations confirm the effectiveness of the proposed method from a quantitative perspective. \subsection{Experiments on Real Data Sets } {In this test, we evaluate the proposed method on two real-world datasets, HYDICE Urban and AVIRIS Indian Pines. For all methods, we follow the strategy of the above test to tune the parameters. Detailed descriptions of the data sets and experimental results are presented in the following. \subsubsection{HYDICE Urban Dataset} This data set contains 210 bands of HSIs, where each image has 307$\times$307 pixels. These images are corrupted by stripes, deadlines, atmosphere, water absorption, and other unknown types of noise. Some bands in this data set are severely polluted by atmosphere and water absorption, and rarely provide useful information. Since this real-world data set lacks the underlying clean images, we follow a common strategy and visually compare the performance of all methods. Without loss of generality, we show the results at bands 152, 139, and 207, respectively, which include bands with both light and heavy noise. It is observed that LLS-RPCA can simultaneously remove the mixed types of noise from the noisy images. Bands 139 and 207 are heavily corrupted, yet LLS-RPCA recovers the structural information well and preserves rich details. Among all baseline methods, NAILRMA has the best performance and removes almost all noise. However, some detail information is missing in its recovered images, and we can still observe some stripe effects in band 139. The performance of the other baseline methods is less competitive. For example, none of the other methods can remove the stripes in band 139. These observations suggest the effectiveness of LLS-RPCA and its superior performance over the baseline methods. \subsubsection{AVIRIS Indian Pines Dataset} The AVIRIS Indian Pines dataset was acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) in Northwestern Indiana in 1992.
This data set contains images of 145$\times$145 pixels in 220 bands, where some bands are severely damaged by mixed Gaussian and impulse noise. We show the performance of these methods in \cref{fig_indian}. Among all bands, we report the results on the 150th and 220th bands, where it is difficult to observe useful information due to heavy noise. It is seen that heavy noise remains in the images restored by SSTV and RPCA, where local veins are still unrecognizable. Among the baseline methods, LRMR and NAILRMA are the most competitive ones, removing the majority of the noise. However, we can still observe some slight local noise and veins remaining in the smooth regions. LLS-RPCA not only restores the main image content, but also eliminates the local noise, which shows its superior performance over the baseline methods.} \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{fig_real/fig_urban.png} \\ \caption{ Restoration results on the HYDICE Urban data set. From top to bottom are the images located at the 152nd, 139th, and 207th bands, respectively. From left to right are the original and restored images by SSTV, LRMR, NAILRMA, RPCA, and LLS-RPCA, respectively.} \label{fig_urban} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{fig_real/fig_indian.png} \caption{ Restoration results on the AVIRIS Indian Pines data set. From top to bottom are the images located at the 150th and 220th bands, respectively. From left to right are the original and restored images by SSTV, LRMR, NAILRMA, RPCA, and LLS-RPCA, respectively.} \label{fig_indian} \end{figure} \section{Conclusion} In this paper, we propose a novel RPCA method, named LLS-RPCA, for HSI denoising. The new method adopts log-based functions for both the low-rank and sparse approximations. The $\ell_{2,\log}$ norm is more accurate than the widely used $\ell_{2,1}$ norm in approximating column-wise sparsity.
We formally provide a closed-form solution to the $\ell_{2,\log}$-shrinkage problem, which can be generally used for other problems that impose column-wise sparsity. Extensive experiments confirm the effectiveness and efficiency of the proposed method in HSI denoising. {\small\bibliographystyle{IEEEbib}
\section{Introduction}\label{intro} Driven vortex lines in type-II superconductors, in the presence of randomly distributed point-like quenched disorder, constitute a rich, highly complex system far from equilibrium, with a number of competing energy, time and length scales. This system displays a variety of thermodynamic phases and intriguing transport properties \cite{Blatter1994}, making it interesting from a statistical physics viewpoint. When driven by Lorentz forces generated by an external current, these magnetic flux vortices move through the superconducting sample generating an electric field that opposes the external current, resulting in Ohmic dissipation. Quenched disorder in the form of material defects that effectively act as attractive pinning centers can be optimally distributed in the system to curb flux flow, consequently restoring dissipation-free transport \cite{Blatter1994}, the property that renders these materials desirable for technology (such as generation of strong magnetic fields for MRI scanners and particle accelerators). Uncorrelated point-like disorder can be naturally occurring (e.g. oxygen vacancies) or artificially introduced (e.g. by electron irradiation \cite{Kwok1994}). Weak uncorrelated disorder is known to destroy the long-range translational order of the Abrikosov flux line lattice that forms at low temperatures in a three-dimensional system free of disorder, replacing it with either a vortex glass phase completely devoid of translational order \cite{Fisher1989,Feigelman1989,Nattermann1990,Fisher1991,Kwok1992} or a Bragg glass phase with quasi long-range positional order \cite{Giamarchi1994,Giamarchi1995,Kierfeld1997,Fisher1997,Giamarchi1997,Nattermann2000}. An applied external electric current exerts a transverse Lorentz force on the flux lines. 
At ambient temperatures much lower than the temperature at which the vortex glass melts into a flux liquid in the absence of an external current, the introduction of an external current transforms the disorder-dominated equilibrium phase into a creeping Bragg glass state. As the current is increased beyond the critical depinning transition value, the stresses in the flux line lattice increase sufficiently to break the lattice into a moving liquid. At even larger drive, transverse order develops in the liquid, resulting in the formation of a moving smectic, which when subjected to even higher drive dynamically freezes into a moving Abrikosov flux lattice \cite{Nattermann2000}. In our numerical simulations, finite-size effects resulting from the small system size and low number of flux lines render it difficult for us to distinguish between the moving liquid, smectic, and moving lattice states. We therefore limit ourselves in the present work to a broad classification of states of vortex matter into three regimes of current, viz. the {\it moving regime} corresponding to {\it high} current values, the {\it pinned regime} corresponding to {\it low} current values, and the {\it critical regime} associated with {\it intermediate} values; the terms {\it high}, {\it low} and {\it intermediate} are quantified in Section \ref{secRegimes} below. A material shows physical \lq aging' when it undergoes slow relaxation from a state away from equilibrium to reach thermal equilibrium and displays breaking of time-translation invariance in this non-stationary regime \cite{Struik1978,Henkel2010}. Aging is often seen in frustrated environments characterized by a large number of energetically close metastable states, that is, in glassy systems \cite{Henkel2007}.
The discovery of the dependence of the voltage response to an applied current in superconducting 2H-NbSe$_2$ on the duration of the application clearly indicated the presence of physical aging in disordered vortex matter \cite{Du2007}. Bustingorry, Cugliandolo, and Dom\'inguez studied a three-dimensional elastic line model of vortex matter by means of Langevin molecular dynamics (LMD) to identify physical aging features in two-time quantities like density-density autocorrelation and mean-square displacement \cite{Bustingorry2006,Bustingorry2007}. The aforementioned studies utilized a random landscape representation for the distribution of disorder while we implement isolated localized pinning centers of uniform potential, two approaches that yield significantly different relaxation properties \cite{Dobramysl2014} and aging scaling exponents \cite{Pleimling2011}. Pleimling and T\"auber investigated the non-equilibrium relaxation properties of vortex matter as elastic lines by means of Monte Carlo simulations \cite{Pleimling2011}; Dobramysl {\em et al.} later verified these results with a different microscopic representation of the system's dynamics through LMD \cite{Dobramysl2013}. Assi {\em et al.} proceeded to use the latter LMD implementation of the system to study relaxation dynamics of vortex lines following magnetic field and temperature quenches to analyze the system's sensitivity to sudden external perturbations \cite{Assi2015}. In the present work, we employ Langevin molecular dynamics to study the aging relaxation dynamics of flux lines in type-II superconductors when subjected to sudden changes in the externally applied current. We accomplish this by subjecting our coarse-grained elastic lines representing vortex matter to an instantaneous change in driving force after letting the lines relax under the influence of the initial drive sufficiently long for them to have reached a moving, non-equilibrium steady state. 
We either quench the current within the moving regime or from the moving into the pinned regime. We perform these simulations first with repulsive vortex interactions turned off, and subsequently switched on, in order to identify the physical mechanisms behind the complex features seen in the vortex relaxation dynamics. We investigate this rich non-equilibrium relaxation behavior by the measurement of various one- and two-time observables. The organization of this paper is as follows. The next section explains the elastic line model, defines the LMD algorithm we employ to implement its stochastic dynamics, and specifies the material parameters we use for the implementation. We then describe the simulation protocol for the drive quenches and define the one- and two-time observables we measure in order to quantify the relaxation behavior of the system post-quench. We end the section by quantifying the pinned and moving regimes, based on steady-state results for the one-time quantities measured in the system. Section 3 is devoted to results obtained from studying the relaxation properties of systems of flux lines undergoing sudden current changes. We compare the effects of instantaneously decreasing or down-quenching the drive between values in the moving regime to results for quenches from the moving to the pinned regime, which yield markedly different behavior. We also isolate the effect of inter-vortex interactions on the relaxation phenomena by alternately running the simulations with interactions switched off and on. We conclude the paper by summarizing our results in Section 4. \section{Elastic Line Model and Simulation Protocol}\label{sec2} \subsection{Model Hamiltonian}\label{secHam} We model magnetic flux lines dynamically in the extreme London limit (where the London penetration depth is much larger than the coherence length) as mutually repulsive elastic lines \cite{Nelson1993,Das2003}. 
We write down the Hamiltonian of the system as a sum of three competing terms viz. the elastic line tension energy, the attractive potential due to point-like pinning sites, and the mutual repulsive interactions between flux lines: \begin{equation}\label{Ham} \fl H[\mathbf{r}_{i,z}(t)]=\displaystyle\sum_{i=1}^{N}\int_{0}^{L}dz \Bigg[ \frac{\tilde{\epsilon}_1}{2}\left\vert\frac{d\mathbf{r}_{i,z}(t)}{dz}\right\vert^2+U_D(\mathbf{r}_{i,z}(t),z) + \frac{1}{2}\displaystyle\sum_{j\neq i}^{N}V(|\mathbf{r}_{i,z}(t) - \mathbf{r}_{j,z}(t)|)\Bigg]. \end{equation} $\mathbf{r}_{i,z}(t)$ is the position vector in the $xy$-plane at time $t$, of the line element of the $i$th flux line (one of $N$), at height $z$ in the vertical direction, which is also the direction of the applied external magnetic field. The elastic line stiffness or local tilt modulus is given by $\tilde{\epsilon}_1 \approx \Gamma^{-2}\epsilon_0\ln(\lambda_{ab}/\xi_{ab})$ where $\Gamma^{-1}=M_{ab}/M_c$ is the effective mass ratio or anisotropy parameter. $\lambda_{ab}$ is the London penetration depth and $\xi_{ab}$ is the coherence length, in the $ab$ crystallographic plane. The in-plane repulsive interaction between any two flux lines is given by $V(r)=2\epsilon_0K_0(r/\lambda_{ab})$, where $K_0$ denotes the zeroth-order modified Bessel function. It effectively serves as a logarithmic repulsion that is exponentially screened at the scale $\lambda_{ab}$. The pinning sites are modeled as randomly distributed smooth potential wells, given by \begin{equation}\label{pin_pot} U_D(\mathbf{r}, z)=-\displaystyle\sum_{\alpha=1}^{N_D}\frac{b_0}{2}p\left[1-\tanh\left(5\frac{|\mathbf{r}-\mathbf{r}_\alpha|-b_0}{b_0}\right)\right]\delta(z-z_\alpha), \end{equation} where $N_D$ is the number of pinning sites, $p\ge0$ is the pinning potential strength, and $\mathbf{r}_\alpha$ and $z_\alpha$ respectively represent the in-plane and vertical position of pinning site $\alpha$. 
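To make the shape of the pinning wells concrete, the radial profile of a single well in \eref{pin_pot} can be evaluated in a few lines of code. The following is a minimal, illustrative sketch (not part of our production code) in simulation units where $b_0 = 1$; the well depth value used here is merely representative.

```python
import numpy as np

def pinning_well(r, b0=1.0, p=0.05):
    """Radial profile of a single smooth pinning well:
    U(r) = -(b0 * p / 2) * [1 - tanh(5 * (r - b0) / b0)],
    with r the in-plane distance from the pinning site (units of b0)."""
    return -0.5 * b0 * p * (1.0 - np.tanh(5.0 * (r - b0) / b0))

# The well saturates near its full depth -b0*p at the center, reaches half
# depth at r = b0, and decays smoothly to zero a few b0 away from the site.
profile = pinning_well(np.linspace(0.0, 3.0, 7))
```

The hyperbolic-tangent form makes the well smooth on the scale of its own radius $b_0$, so the pinning force remains bounded everywhere.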
In the following, all lengths are measured in units of the pinning potential width $b_0$. Energies are measured in units of $\epsilon_0 b_0$, where $\epsilon_0=(\phi_0/4\pi\lambda_{ab})^2$ is the elastic line energy per unit length, and $\phi_0=hc/2e$ is the magnetic flux quantum. \subsection{Langevin Molecular Dynamics}\label{secLan} In order to simulate the dynamics of the model, we discretize the system along the $z$ axis, i.e., the direction of the external magnetic field, into layers, with the layer spacing corresponding to the crystal unit cell size $c_0$ along the crystallographic $c$ direction \cite{Das2003,Bullard2008}. Consequently, each elastic line is broken up into points, with each point of a given line residing in a unique layer. Any two points of the same line in neighboring layers attract each other via an elastic force, the potential between them constituting the first term in the Hamiltonian \eref{Ham}. Points in the same layer repel each other via long-range logarithmic interactions that are defined by the third term of the Hamiltonian. The pinning sites are also confined to these layers perpendicular to the $z$ axis, and are modeled as smooth potential wells \eref{pin_pot}. The interactions between the discrete elements of the system described here are encapsulated in the properly discretized version of the Hamiltonian. We use this discretized Hamiltonian to obtain coupled, overdamped Langevin equations which we solve numerically: \begin{equation}\label{lan} \eta\frac{\partial\mathbf{r}_{i,z}(t)}{\partial t}=-\frac{\delta H[\mathbf{r}_{i,z}(t)]}{\delta\mathbf{r}_{i,z}(t)}+\mathbf{f}_{i,z}(t). \end{equation} Here, $\eta=\phi_0^2/2\pi\rho_n c^2 \xi_{ab}^2$ is the Bardeen-Stephen viscous drag parameter, where $\rho_n$ represents the normal-state resistivity of YBCO near $T_C$ \cite{Blatter1994,Bardeen1965}.
We model the fast, microscopic degrees of freedom of the surrounding medium by means of thermal stochastic forcing as uncorrelated Gaussian white noise $\mathbf{f}_{i,z}(t)$ with vanishing mean $\langle\mathbf{f}_{i,z}(t)\rangle=0$. Furthermore, these stochastic forces obey the Einstein relation \begin{equation}\nonumber \langle\mathbf{f}_{i,z}(t) \cdot \mathbf{f}_{j,z'}(s)\rangle = 4\eta k_BT\delta_{ij}\delta_{zz'}\delta(t-s), \end{equation} which ensures that the system relaxes to thermal equilibrium with a canonical probability distribution $P[\mathbf{r}_{i,z}]\propto e^{-H[\mathbf{r}_{i,z}]/k_BT}$ in the absence of external current. \subsection{Model Parameters}\label{secParam} We have selected our model parameters to closely match the material properties of the ceramic high-$T_C$ type-II superconductor YBa$_2$Cu$_3$O$_7$ (YBCO). The pinning center radius is set to $b_0=35$\,\AA; all simulation distances are measured in units of this quantity. The inter-layer spacing in the crystallographic $c$ direction is set to this microscopic scale, $c_0=b_0$. The in-plane London penetration depth and superconducting coherence length are chosen to be $\lambda_{ab}=34b_0\approx 1200$\,\AA\ and $\xi_{ab}=0.3b_0\approx 10.5$\,\AA, respectively, in order to model the high anisotropy of YBCO, which has an effective mass anisotropy ratio $\Gamma^{-1}=1/5$. The line energy per unit length is $\epsilon_0\approx1.92\cdot 10^{-6}\mathrm{erg}/\mathrm{cm}$; all simulation energies are measured in units of $\epsilon_0 b_0$. This sets the dimensionless vortex line tension energy scale to $\tilde{\epsilon}_1/\epsilon_0\approx 0.189$. The pinning potential well depth is set to $p/\epsilon_0=0.05$. The temperature in our simulations is set to $10\,$K ($k_{\rm B} T / \epsilon_0 b_0=0.002$ in our simulation units).
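For concreteness, a single update of the discretized, overdamped Langevin dynamics \eref{lan} can be sketched as an Euler--Maruyama step, with the noise amplitude fixed by the Einstein relation above. This is an illustrative simplification rather than our production code; the array layout and the time step value are assumptions of the sketch, with units chosen such that $\eta = 1$.

```python
import numpy as np

def langevin_step(r, force, dt=0.01, eta=1.0, kBT=0.002, rng=None):
    """One Euler-Maruyama step of the overdamped Langevin equation.
    r:     (N, L, 2) in-plane positions of N lines discretized into L layers
    force: deterministic force -dH/dr evaluated at r, same shape as r
    The Einstein relation fixes the thermal kick: each Cartesian component
    receives an independent Gaussian of variance 2 * kBT * dt / eta."""
    if rng is None:
        rng = np.random.default_rng()
    kick = np.sqrt(2.0 * kBT * dt / eta) * rng.standard_normal(r.shape)
    return r + (dt / eta) * force + kick
```

In the absence of deterministic forces, iterating this step produces free diffusion with the temperature-dependent variance dictated by the Einstein relation.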
The Bardeen-Stephen viscous drag coefficient $\eta=\phi_0^2/2\pi\rho_nc^2\xi_{ab}^2 \approx10^{-10} \, \mathrm{erg} \cdot\mathrm{s} / \mathrm{cm}^2$ is set to unity in our simulation units, where $\rho_n \approx 500 \, \mathrm{\mu \Omega m}$ is the normal-state resistivity of YBCO near $T_C$ \cite{Abdeladi1994}. This defines the fundamental temporal unit $t_0=\eta b_0/\epsilon_0\approx 18\,$ps; all times are measured in units of $t_0$. \subsection{Drive Quench Simulation Protocol}\label{secProt} Our system consists of $N=16$ flux lines, moving in a three-dimensional space with periodic boundary conditions in the $xy$ directions and free boundary conditions along the $z$ direction. The system is discretized into $L=100$ layers along the $z$ direction. We simulate point-like disorder by randomly distributing $1116$ pinning sites per layer, using a different random distribution for each layer. We set the horizontal system size to $(16/\sqrt{3}\lambda_{ab}\times 8\lambda_{ab})$. This ratio of horizontal boundary lengths is necessary to ensure that the system, in the absence of disorder or drive, equilibrates to a state where the flux lines arrange themselves into a periodic hexagonal Abrikosov lattice. Each simulation run starts with perfectly straight flux lines, distributed randomly in the computational space. The Lorentz force exerted on the flux lines by an external current is modeled in the system as a tunable, spatially uniform drive $F_d$ in the $x$ direction, the introduction of which requires the addition of a work term $-\mathbf{F_d}\cdot\mathbf{r}_{i,z}(t)$ to the Hamiltonian \eref{Ham} and hence of $\mathbf{F_d}$ to the right-hand side of the Langevin equations \eref{lan}. Having set $F_d$ to some initial value, we let the lines relax beyond microscopic time scales in a temperature bath at $T=0.002$ in our dimensionless units, or $10$ K.
The strength of the initial drive is chosen according to whether we want to start with a system in a moving or pinned state; we discuss these states in detail in Section \ref{secRegimes}. During this time, thermal fluctuations contribute towards the roughening of the lines. After this initial relaxation period of $60,000t_0$, we instantaneously change the drive to a different value that once again corresponds to a state in either the pinned or moving regime. At this point, we reset the system clock $t$ to $0$. All physical quantities are measured with respect to $t$. Following the drive quench, we start the measurement of one-time quantities and allow the system to relax for a waiting time $s$. After the waiting time has elapsed, we take a snapshot of the system. We then begin the measurement of two-time quantities, which continues until the end of the simulation, with a run time of $500,000 t_0$ after the drive quench (set at $t=0$). \subsection{Measured Quantities}\label{mesQua} In the course of a simulation, we measure one- and two-time physical quantities, all of which are averaged over many disorder realizations and noise histories. A one-time quantity of interest is the mean \textit{velocity} of the lines, measured by averaging the velocity of all line elements at time $t$: \begin{equation} \mathbf{v}(t) = \left\langle \frac{d}{dt}\mathbf{r}_{i,z}(t) \right\rangle_{z,i}. \end{equation} $\langle \ldots \rangle_z$ represents an average over the $z$ axis, i.e., over all vertical layers, while $\langle \ldots \rangle_i$ represents an average over all the lines. Another one-time quantity we measure is the mean \textit{radius of gyration} \begin{equation}\label{GR} r_g(t)=\sqrt{\langle(\mathbf{r}_{i,z}(t)-\langle\mathbf{r}_{i,z}(t)\rangle_z)^2\rangle_{z,i}}\ . \end{equation} The radius of gyration is defined as the standard deviation of the lateral positions $\mathbf{r}_{i,z}(t)$ of the points constituting the $i$th flux line, averaged over all the lines.
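The gyration radius \eref{GR} translates directly into array arithmetic. The sketch below is illustrative only and assumes line configurations stored as a NumPy array of shape (lines, layers, 2):

```python
import numpy as np

def radius_of_gyration(r):
    """Mean radius of gyration: the standard deviation of a line's lateral
    element positions about that line's mean lateral position, averaged
    over all layers (z) and lines (i).
    r: array of shape (N, L, 2), lateral coordinates in units of b0."""
    dev = r - r.mean(axis=1, keepdims=True)          # deviations from line mean
    return np.sqrt((dev ** 2).sum(axis=-1).mean())   # average over z and i
```

Perfectly straight lines give $r_g = 0$; any pinning-induced distortion or thermal roughening increases it.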
$r_g(t)$ is a measure of the roughness of the lines in the system, which is produced by thermal spatial fluctuations and line distortion due to pinning of line elements by pinning centers distributed in the sample. The third one-time quantity we measure is the \textit{fraction of pinned line elements}, i.e., \begin{equation}\label{GR1} f_p = N(r<b_0)/N. \end{equation} It is defined as the fraction of line elements in the system that are located within a distance $b_0$ (the pinning center radius) of a pinning center. The two-time quantity we measure in this study is the normalized \textit{height autocorrelation} function \begin{equation} C(t,s)=\frac{\left<(\mathbf{r}_{i,z}(t)-\langle\mathbf{r}_{i,z}(t)\rangle_z)(\mathbf{r}_{i,z}(s)-\langle\mathbf{r}_{i,z}(s)\rangle_z)\right>_{z,i}}{\left<(\mathbf{r}_{i,z}(s)-\langle\mathbf{r}_{i,z}(s)\rangle_z)^2\right>_{z,i}}\ . \end{equation} It quantifies how the lateral positions $\mathbf{r}_{i,z}$ of the elements of a line relative to the mean lateral line position $\langle\mathbf{r}_{i,z}\rangle_z$ at the present time $t$ are correlated to those relative positions at a past time $s$, and contains information about local transverse thermal fluctuations of vortex line elements. This quantity is averaged over all lines as well as over several thousand noise histories and disorder realizations. It is worth noting that the term `height' autocorrelation originates from viewing the flux lines as fluctuating one-dimensional interfaces, the local height of which corresponds to the deviation of $\mathbf{r}_{i,z}$ from the respective line's mean position. We use the height autocorrelations as a tool to investigate the existence and nature of physical aging in our system. A system shows aging when a dynamical two-time quantity displays slow relaxation and the breaking of time translation invariance \cite{Henkel2010}.
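In the same array notation, the normalized height autocorrelation amounts to correlating the elements' deviations from their line means at the two times. A minimal sketch, again assuming configurations of shape (lines, layers, 2) and not representing our production code:

```python
import numpy as np

def height_autocorrelation(r_t, r_s):
    """Normalized height autocorrelation C(t, s) of line configurations
    r_t and r_s (arrays of shape (N, L, 2)): the covariance of element
    deviations from the line-mean lateral position at times t and s,
    normalized by the variance of those deviations at time s."""
    dev_t = r_t - r_t.mean(axis=1, keepdims=True)
    dev_s = r_s - r_s.mean(axis=1, keepdims=True)
    num = (dev_t * dev_s).sum(axis=-1).mean()   # dot product, averaged over z, i
    den = (dev_s ** 2).sum(axis=-1).mean()
    return num / den
```

By construction $C(s,s) = 1$, and the decay of $C(t,s)$ with $t-s$ measures how quickly line elements lose memory of their earlier transverse configuration.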
Additionally, in a {\it simple aging} scenario, the two-time quantity shows dynamical scaling and follows the general scaling form \begin{equation} C(t,s)=s^{-b}f_C(t/s), \end{equation} where $f_C$ is a scaling function that follows the asymptotic power law \begin{equation}\label{scaform} f_C(t/s)\sim(t/s)^{-\lambda_C/z}, \end{equation} as $t\rightarrow\infty$; $b$ is the aging scaling exponent, $\lambda_C$ is the autocorrelation exponent, and $z$ is the dynamical scaling exponent. \subsection{Moving and Pinned Regimes}\label{secRegimes} \begin{figure} \centering \subfloat{\includegraphics[width=.97\linewidth]{{images/steady_state.py}.eps} \llap{\parbox[b]{5.4in}{{\Large $(a)$}\\\rule{0ex}{4.05in}}} \llap{\parbox[b]{5.45in}{{\Large $(b)$}\\\rule{0ex}{2.65in}}} \llap{\parbox[b]{5.5in}{{\Large $(c)$}\\\rule{0ex}{1.35in}}}} \caption{Steady-state (a) mean vortex velocity $v$ (units of $b_0/t_0$), (b) radius of gyration $r_g$ (units of $b_0$), and (c) fraction of pinned line elements $f_p$ as a function of drive $F_d$ (units of $\epsilon_0$) for a system of interacting flux lines. $r_g$ peaks at $F_d \approx 0.006\epsilon_0$, where also $v$ starts assuming non-zero values, and $f_p$ begins to decay from its pinned steady state value $\sim 0.2$. Data are averaged over 100 realizations. \label{steady_st}} \end{figure} In order to identify the drive ranges in which the system of flux lines is in the pinned or moving regime, respectively, we have investigated steady-state features of the system as a function of drive, in the following manner: For a system of interacting flux lines, we set the drive to a certain value and allowed the system to evolve for $60,000t_0$ to let it arrive at a non-equilibrium steady state. At this point, we started measuring the one-time physical quantities of the system, viz. the radius of gyration $r_g$ and velocity $v$, at intervals of $t_0$ for $250,000t_0$.
We performed this operation for $50$ evenly spaced drive values between $F_d = 0$ and $0.025\epsilon_0$. We averaged our results over the final $250,000$ time-steps and over 100 independent realizations, effectively averaging over $25$ million values per $F_d$ value. Error bars, representing the statistical error or standard deviation of the mean obtained via the aforementioned averaging process, are smaller than the symbol sizes in Fig. \ref{steady_st}. Similarly, the data points in every figure appearing in this paper represent mean values obtained by averaging over several independent realizations (the exact numbers of realizations are specified in the figure captions) and are accompanied by error bars representing the standard deviation of the mean whenever these error bars are larger than the symbol sizes. For zero drive, about $20\%$ (see $f_p$ in Fig. \ref{steady_st}c) of flux line elements are pinned by the pinning centers, as they have had $60,000$ time steps to move around the system exclusively via thermal wandering and find point-like defects that will trap them. The absence of drive further increases the likelihood of the flux lines remaining relatively motionless and trapped in their pinned configurations, as seen by the zero mean velocity of the lines at $F_d = 0$ (Fig. \ref{steady_st}a). Upon introducing drive, at small values, we see an increased radius of gyration compared to the case with $F_d = 0$. This can be attributed to the relatively weak drive, assisted by thermal fluctuations, causing weakly pinned portions of the lines to break free from their original pins and become trapped in other nearby pins, resulting in distorted line configurations, which translates into increased line roughness and hence a larger gyration radius $r_g$.
The persistence of the pinned state under these drive conditions is supported by the continued absence of mean line velocity $v$ and the lack of significant change in the fraction of pinned line elements $f_p$ compared to its value ($\approx0.20$) at $t=0$. The radius of gyration continues to increase with drive, until the drive is large enough to overcome the attractive forces exerted by the pins, enabling a complete depinning of the lines from the pins. This depinning point is marked by the rise of $v$, coinciding with a drop in $r_g$ and $f_p$. These trends continue for the remainder of the drive values, resulting in the flux lines getting further depinned (lower $f_p$), moving faster (higher $v$) and becoming straighter (lower $r_g$) with increasing drive. The depinning crossover appears to occur somewhere in the drive interval $0.004 \epsilon_0 \leq F_d \leq 0.008 \epsilon_0$, the \textit{critical regime} of drive. Drive values below this interval ($F_d<0.004\epsilon_0$) constitute the \textit{pinned regime} while those above it ($F_d>0.008\epsilon_0$) constitute the \textit{moving regime}. We have repeated these numerical operations for non-interacting flux lines and found the results to be very similar: $r_g$ once again peaked around $F_d \approx 0.006\epsilon_0$ which is also the value at which $v$ started assuming non-zero values, and $f_p$ began decaying from its steady initial value, indicating that for our purposes, the ranges for the pinned and moving regimes remain essentially unchanged for the non-interacting case. 
\section{Results: Relaxation post drive quench}\label{secRel} \subsection{Quenches within the moving regime}\label{secMovMov} \begin{figure} \centering \subfloat{\includegraphics[width=.97\linewidth]{{images/plot_v_r_f_mov.py}.eps} \llap{\parbox[b]{5.35in}{{\Large $(a)$}\\\rule{0ex}{3.3in}}} \llap{\parbox[b]{5.4in}{{\Large $(b)$}\\\rule{0ex}{2.in}}} \llap{\parbox[b]{5.45in}{{\Large $(c)$}\\\rule{0ex}{.6in}}}} \caption{Relaxation of the (a) velocity $v$ (units of $b_0/t_0$), (b) radius of gyration $r_g$ (units of $b_0$), and (c) fraction of pinned line elements $f_p$ with time (units of $t_0$) for a system of interacting flux lines in the presence of point-like disorder, following a drive down-quench from $F_d=0.035\epsilon_0$ to $0.025\epsilon_0$ (moving to moving regime), with relaxation times $\uptau_v=0$, $\uptau_{r_g}=1250 t_0$, and $\uptau_{f_p}=155 t_0$, respectively. Data are averaged over 1000 independent realizations. \label{plot_v_r_f_mov}} \end{figure} In a first set of numerical experiments, we quench the drive in a moving ($F_d = 0.035\epsilon_0$) steady-state system of vortex lines, in the presence of point-like disorder, to $F_d = 0.025\epsilon_0$, a drive value also in the moving regime. For interacting lines, upon quenching, the mean velocity $v$ of the lines drops suddenly (Fig. \ref{plot_v_r_f_mov}a) due to the system being in an overdamped Langevin regime which effectively renders the elastic lines massless; the lines have no inertia and an instantaneous change in drive causes an equally abrupt change in velocity. At the moment of quench, the mean radius of gyration of these interacting lines starts growing (Fig. \ref{plot_v_r_f_mov}b). This is in agreement with our expectations: the reduced mean vortex velocity allows easier trapping of the lines by the pins present in the sample. 
This increased susceptibility to pinning coupled with thermal wandering results in the lines assuming increasingly distorted configurations, whence their roughness is enhanced as a function of time. The growth of $r_g$ is fast (exponential) and stabilizes to a new steady-state value within a relaxation time $\uptau=1250t_0$. This exponential relaxation implies that when quenching within the moving regime, the system transitions from one non-equilibrium steady state to another quickly. The fraction of pinned line elements $f_p$ also grows rapidly (Fig. \ref{plot_v_r_f_mov}c) and reaches a new steady-state value after $\uptau=155t_0$ upon quenching the drive. This is to be expected since the lowered velocity of the lines means that a larger fraction of line elements in the system are susceptible to trapping by the pins. The free parameters for the mathematical functions that have been fitted to the data in Fig. \ref{plot_v_r_f_mov} and subsequent figures were determined using the method of least squares. The time evolution of one-time physical properties $v$, $r_g$, and $f_p$ for non-interacting lines was found to be very similar to that for the interacting case discussed above, with comparable exponential relaxation times ($\uptau_{v}=0$, $\uptau_{r_g}=1120t_0$, and $\uptau_{f_p}=149t_0$). The effect of interactions on flux line dynamics only becomes evident when we study two-time height autocorrelations $C(t,s)$. 
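A minimal, illustrative version of such an exponential relaxation fit is sketched below; unlike the general least-squares fits used for the figures, it assumes the steady-state value is known in advance and linearizes the problem in log space.

```python
import numpy as np

def fit_relaxation_time(t, y, y_ss):
    """Estimate the relaxation time tau of an exponential approach
    y(t) = y_ss - (y_ss - y(0)) * exp(-t / tau) to a known steady-state
    value y_ss, via a least-squares straight line fitted through
    log(y_ss - y) versus t."""
    slope, _ = np.polyfit(t, np.log(y_ss - y), 1)
    return -1.0 / slope
```

Applied to noiseless synthetic data of the above form, the estimator recovers the input relaxation time to machine precision.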
\begin{figure} \centering \subfloat{\includegraphics[width=.97\linewidth]{{images/plot_HA_mov.py}.eps} \llap{\parbox[b]{3.3in}{{\Large $(a)$}\\\rule{0ex}{2.1in}}} \llap{\parbox[b]{0.5in}{{\Large $(b)$}\\\rule{0ex}{2.1in}}}} \caption{Evolution of the height autocorrelation function $C(t,s)$ as a function of the post-snapshot time $t-s$ (units of $t_0$), for waiting times $s =$ $2^7t_0$, $2^9t_0$, $2^{13}t_0$, $2^{14}t_0$, and $2^{15}t_0$, for systems of (a) non-interacting and (b) interacting flux lines in the presence of point disorder, following a drive down-quench from $F_d=0.035\epsilon_0$ to $0.025\epsilon_0$ (moving to moving regime). Time translation invariance is obeyed in both cases for larger waiting times ($s\ge2^{13}t_0$), as seen by the collapse of the corresponding $C(t,s)$ curves onto stationary curves that show exponential relaxation, with the interacting lines relaxing faster ($\uptau=693t_0$) than the non-interacting lines ($\uptau=1490t_0$). Data are averaged over 1000 independent realizations. \label{HA_mov}} \end{figure} We have measured $C(t,s)$ for waiting times $s =$ $2^{7}t_0$, $2^{9}t_0$, $2^{13}t_0$, $2^{14}t_0$, and $2^{15}t_0$ as a function of time elapsed post-quench $t-s$ (Fig. \ref{HA_mov}). In both the non-interacting (Fig. \ref{HA_mov}a) and interacting (Fig. \ref{HA_mov}b) cases, the autocorrelations for the higher waiting times ($s\geq2^{13}t_0$) are observed to be time-translation invariant, i.e., they coincide and display exponential relaxation, with the interacting lines relaxing faster ($\uptau=693t_0$) than the non-interacting ones ($\uptau=1490t_0$). The faster relaxation in the presence of vortex interactions can be attributed to caging effects. The repulsions force the lines apart, resulting in faster depinning of the line elements, straightening of the lines, and confinement of these straightened lines into a moving lattice.
This quick straightening results in the lines becoming spatially uncorrelated with their initial horizontal configurations faster than in the non-interacting case, thus explaining the faster height autocorrelation decay. Time-translation invariance is broken, however, when we go to shorter waiting times ($2^7t_0$, $2^9t_0$) for both the non-interacting and interacting cases. This is to be expected since the waiting times in question are shorter than the relaxation time ($\uptau=693t_0 \approx 2^{9.4}t_0$), a regime where the system has not yet forgotten its initial state; therefore its relaxation behavior is dependent on when we start measuring the autocorrelation function, i.e., it depends on the waiting time $s$. The observation of time translation invariance in the evolution of the height autocorrelation functions corresponding to higher waiting times rules out the possibility of physical aging in the system, as was already hinted at by the exponentially fast relaxation of the radius of gyration $r_g(t)$. \subsection{Quenches from the moving into the pinned regime}\label{secMovPin} For our next set of numerical experiments, we quench the drive of a system of flux lines in the moving regime ($F_d = 0.025\epsilon_0$) to $F_d = 0$ in the pinned regime. \begin{figure} \centering \subfloat{\includegraphics[width=.97\linewidth]{{images/plot_v_r_f_pin.py}.eps} \llap{\parbox[b]{5.4in}{{\Large $(a)$}\\\rule{0ex}{3.2in}}} \llap{\parbox[b]{5.45in}{{\Large $(b)$}\\\rule{0ex}{1.9in}}} \llap{\parbox[b]{5.5in}{{\Large $(c)$}\\\rule{0ex}{.5in}}}} \caption{Relaxation of the (a) mean vortex velocity $v$ (units of $b_0/t_0$), (b) radius of gyration $r_g$ (units of $b_0$), and (c) fraction of pinned line elements $f_p$ with time $t$ (units of $t_0$) for a system of interacting flux lines in the presence of point-like disorder, following a drive down-quench from $F_d=0.025\epsilon_0$ to $0$ (moving to pinned regime). 
$v$ drops instantaneously, while both $r_g$ and $f_p$ relax logarithmically slowly ($a_r=0.05b_0$, $a_f=0.01$) with $t$. Data are averaged over 1000 independent realizations. \label{plot_v_r_f_pin}} \end{figure} \begin{figure} \centering \subfloat{\includegraphics[width=.47\linewidth]{{images/plot_HA_pin.py}.eps} \llap{\parbox[b]{2.2in}{{\Large $(a)$}\\\rule{0ex}{2.85in}}} \llap{\parbox[b]{2.25in}{{\Large $(c)$}\\\rule{0ex}{.5in}}}} \subfloat{\includegraphics[width=.485\linewidth]{{images/plot_HA_scaled_pin.py}.eps} \llap{\parbox[b]{2.2in}{{\Large $(b)$}\\\rule{0ex}{2.85in}}} \llap{\parbox[b]{2.25in}{{\Large $(d)$}\\\rule{0ex}{.5in}}}} \caption{Height autocorrelation function $C(t,s)$ as a function of $t-s$ (a, c), and scaled height autocorrelation $s^{-b}C(t,s)$ as a function of $t/s$ (b, d), for systems of (a, b) non-interacting and (c, d) interacting flux lines in the presence of point disorder, following a drive down-quench from $F_d=0.025\epsilon_0$ to $0\epsilon_0$ (moving to pinned regime). Time translation invariance is broken (a, c) and dynamical scaling is observed (b, d) in both cases, with scaling exponents $(b$, $\lambda_C/z)$ found to be $(0.004$, $0.006)$ and $(0.005$, $0.011)$, respectively, for the (b) non-interacting and (d) interacting cases. Data are averaged over $10,000$ independent realizations. \label{HA_pin}} \end{figure} For the interacting lines, at the moment of quench, the velocity $v$, once again as in the case of quenches within the moving regime, drops instantaneously to zero (Fig. \ref{plot_v_r_f_pin}a) as the system enters a pinned state. The drop in velocity is accompanied by growth of the radius of gyration $r_g$ (Fig. \ref{plot_v_r_f_pin}b). This growth is very slow, however, when compared to the exponentially fast relaxation of the radius of gyration that we observed in the case of quenches within the moving regime. 
Here, the relaxation is slow enough that the radius of gyration cannot stabilize to a steady value on the time scales we are exploring, and instead shows a logarithmic growth with time. Initial attempts to fit the $r_g$ data to a power law by the method of least squares yielded exponents quite close to zero. A logarithmic function was therefore tested and found to provide a superior fit (smaller residuals) to the data than any temporal power law. This slower logarithmic growth can be attributed to the system entering a Bragg glass phase in which the system of flux lines has access to many metastable states, each corresponding to a unique configuration. These states have negligible mean velocity, with similar probabilities associated with several different pinning configurations. For interacting lines, the growth in $r_g$ does not persist indefinitely, but terminates at sufficiently long times $t$. This is a consequence of the caging effect of the repulsive vortex interactions on the growth of the time-dependent correlation length $L(t)$ associated with the flux lines \cite{Pleimling2015}. However, this caging effect is not yet perceptible in the data shown in Fig. \ref{plot_v_r_f_pin}. The interaction-induced caging effect will also affect the behavior of the two-time height autocorrelation functions at very long times. Another one-time quantity that displays slow logarithmic growth post-quench as the system enters the glassy pinned state is the fraction of pinned line elements $f_p$ (Fig. \ref{plot_v_r_f_pin}c), in contrast to the fast exponential growth and stabilization of the quantity seen for quenches within the moving regime (moving-to-moving quenches). For the relaxation of one-time quantities $v$ ($\uptau = 0$), $r_g$ ($a_r = 0.06b_0$), and $f_p$ ($a_f = 0.01$) in the interaction-free situation, as in the case of moving-to-moving quenches, we did not find remarkable qualitative differences compared to the system with interacting lines.
The two-time height autocorrelations $C(t,s)$ for quenches into the pinned regime display slow temporal relaxation accompanied by the breaking of time translation invariance for both non-interacting (Fig. \ref{HA_pin}a) and interacting lines (Fig. \ref{HA_pin}c). This is in contrast to the situation for quenches within the moving regime, where time translation invariance was clearly observed for the entire period of measurement for waiting times greater than the relaxation time of the system. We checked the autocorrelations for dynamical scaling by testing a range of scaling exponents $b$ in the following way. For each $b$ under consideration, we plotted the three $s^bC(t,s)$ curves ($s = 2^{14}t_0$, $2^{15}t_0$ and $2^{16}t_0$) against $t/s$. We then employed a least-squares algorithm to compare these functions and identified the value of $b$ that rendered the best dynamical scaling collapse. For the non-interacting (Fig. \ref{HA_pin}b) and interacting (Fig. \ref{HA_pin}d) cases, the algorithm yielded pairs of dynamical aging scaling exponents $(b$, $\lambda_C/z) = (0.004$, $0.007)$ and $(0.005$, $0.011)$, respectively, for which the individual height autocorrelation curves collapsed onto a master curve, a clear indication of physical aging in the system. The scaling only emerges for larger $t/s$, when the system has had sufficient time to overcome the initial large fluctuations that immediately follow the quench, and to enter the aging scaling regime. For interacting lines, the aging scaling regime will be cut short at very long times by the caging effect of the repulsive vortex interactions (also responsible for limiting the growth of $r_g$) \cite{Pleimling2015}. The scaling form for simple aging given in \eref{scaform} is a special case of the more general scaling form $f_C(t,s)\sim[L(t)/L(s)]^{-\lambda_C}$. The simple aging form arises from the general case when $L(t)$ grows as a simple power law of $t$. 
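The exponent search described above can be sketched as follows. This is an illustrative stand-in for our actual least-squares comparison: candidate exponents $b$ are scanned on a grid, each rescaled curve $s^b C(t,s)$ is interpolated onto a common $t/s$ grid, and the spread about the mean curve serves as the collapse metric; the function and parameter names are choices of this sketch.

```python
import numpy as np

def collapse_quality(b, curves, x_common):
    """Sum of squared deviations between rescaled autocorrelation curves.
    curves: list of (s, t, C) tuples.  Each curve is rescaled to s**b * C,
    interpolated onto the common grid of t/s values, and compared with the
    mean rescaled curve (smaller = better collapse)."""
    rescaled = [np.interp(x_common, t / s, s ** b * C) for s, t, C in curves]
    mean = np.mean(rescaled, axis=0)
    return float(sum(((r - mean) ** 2).sum() for r in rescaled))

def best_exponent(curves, x_common, b_grid):
    """Scan candidate aging exponents b and return the best collapse."""
    quality = [collapse_quality(b, curves, x_common) for b in b_grid]
    return b_grid[int(np.argmin(quality))]
```

For synthetic curves generated with a known exponent, the scan recovers that exponent; applied to the measured autocorrelations, it selects the $b$ that collapses the different waiting-time curves onto a single master curve.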
The algebraic growth $L(t)\sim t^{1/z}$ with the dynamic scaling exponent $z$ is limited by the interaction-induced caging effect. The aging scaling exponents $b$ seen here are over an order of magnitude smaller than those obtained in previous studies: one on the aging of randomly placed, interacting flux lines in the absence of drive \cite{Dobramysl2013} and another on relaxation following temperature and magnetic field quenches, also for randomly placed flux lines without drive \cite{Assi2015}. In the case of drive quenches as presented here, we have verified that during the initial pre-quench, high-drive ($F_d = 0.035\epsilon_0$) period of the simulation, the flux lines constitute a highly correlated moving lattice. This is in contrast to the previous studies where, on account of the absence of drive, the initial disorder-dominated state was always random and uncorrelated. We can thus infer that the initial conditions have a significant influence on the aging scaling exponents, with a correlated initial state yielding far smaller aging scaling exponents compared to an uncorrelated one. \section{Conclusion} \label{conclusion} In this paper, we have investigated the long-time relaxation features of driven magnetic flux vortices in type-II superconductors following sudden quenches of external current. In order to study the post-quench dynamics of these vortices in the presence of uncorrelated point-like disorder, we modeled them as directed elastic lines in the presence of localized pinning centers, and solved the associated Langevin molecular dynamics equations numerically. In the simulations we maintained a constant ambient temperature. The external current quenches were realized in the form of instantaneous changes in the drive, a quantity in the elastic line model that mimics the Lorentz force exerted by external current on the flux vortices.
In this study, we focused on two types of drive quenches, those within the moving regime and those from the moving regime into the pinned regime. For quenches within the moving regime, we have studied the effects of the vortex-vortex repulsive interactions on the relaxation kinetics of the vortices by performing drive quenches in the system with the interactions initially absent or in effect. In both cases, drive quenches within the moving phase result in fast exponential relaxation of the system from one non-equilibrium steady state to another, as evidenced by the rapid temporal evolution of one-time observables such as the mean radius of gyration of the lines and the fraction of pinned line elements. The two-time height autocorrelation functions for different waiting times display similar fast exponential relaxation as the one-time quantities, along with time translation invariance, firmly eliminating the possibility of physical aging in the case of quenches within the moving regime. When turned on, the screened logarithmic repulsive interactions between the flux lines significantly speed up the exponential relaxation of the height autocorrelations with the associated relaxation time being around half that for quenches with no interactions present. For our study on drive quenches from the moving to the pinned regime, in stark contrast to quenches within the moving regime, the relaxation of the system after the quench is much slower, which is seen in the non-exponential, logarithmic time evolution of the radius of gyration and fraction of pinned line elements. This indicates that the system fails to reach a steady state when quenched into the pinned regime on time scales that are on the order of the simulation duration. The two-time height autocorrelations show breaking of time translation invariance, accompanied by dynamical scaling with $t/s$, evidence for aging in the system, as we quench it from a moving non-equilibrium steady state into a pinned, glassy one. 
The $t/s$ range over which simple aging applies for interacting lines is bounded by the cutoff of the algebraic growth of the characteristic time-dependent correlation length $L(t)$, a consequence of the caging of the flux lines by the repulsive vortex interactions. Correlated initial conditions, such as the moving lattice that forms the initial state in our study, yield markedly smaller aging scaling exponents than uncorrelated initial conditions such as those in previous investigations, where the flux lines were initially randomly distributed. \section*{Acknowledgments} \label{acknowledgements} This research is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-FG02-09ER46613. \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} \label{introduction} The bulk properties of the quantum Hall systems at filling fraction $1/m$, $m =$ odd, in the presence of low magnetic fields have been the subject of many theoretical and experimental investigations in recent years. The spin degree of freedom plays an important role in these systems. Here we focus on properties of the boundary of these systems, which, in a special way, reflect bulk properties. In the spinless case this reflection was already described by Wen \cite{wen}. The low-energy (bulk) physics of these systems is identical to that of 2D quantum ferromagnets with spin waves as excitations. Due to exchange, the spins of the electrons in the ground state are all aligned in the same direction, and the lowest lying excitations are one-spin-flip (spin wave) excitations which leave the charge of the system unchanged. The lowest lying charged excitations are topologically non-trivial skyrmion excitations \cite{sond}, for which the local change in the charge density that characterizes them is accompanied by a local change in the spin density. This scenario, in which a finite number of overturned spins follows the creation of the charged excitations, is supported by experimental findings \cite{barr}. On the other hand, the physics of the boundary of quantum Hall systems without the spin degree of freedom is well understood \cite{wen}. In fact, for any quantum Hall system, including the one whose edge physics we would like to understand, it is expected that the charge dynamics is reduced to the edge of the system. This may be an oversimplification with respect to a general experimental situation, where sharp edges (i.e. edges with steep confining potential) are not always present. Still, the effective one-space-dimensional edge theory of quantum Hall systems - the chiral boson theory - has received considerable experimental support in recent years.
Here we present a low-energy effective theory of quantum Hall ferromagnetic systems which describes the charge degrees of freedom, restricted to live only on the edge, and the lowest lying excitations of the bulk - neutral spin waves. We show that, under special conditions (and as solutions of the theory), edge spin waves exist whose characteristic width is smaller than that of bulk spin waves, which spread throughout the whole system. These excitations, edge spin waves, are characterized by gaps smaller than the Zeeman gap and by linear dispersion relations. We find two classes of these waves, which we call charged and neutral edge spin waves. One way to induce the charged edge waves is to subtract or add some charge to the edge. By a redistribution of the charge and, simultaneously, the spin of the system on the edge (in the manner of the spin textures first described in \cite{sond}), neutral edge spin waves are possible. \section{Effective low-energy field theory with charge degrees of freedom on the edge and spin waves} In this section, we will first rederive the edge theory for spinless electrons \cite{wen} using the dual form of the Chern-Simons field-theory description of quantum Hall systems (at filling fractions $1/m$ where $m$ is an odd integer). Then we will use the dual form of the Chern-Simons formulation of the ferromagnetic quantum Hall systems (at the same filling fractions, with the spin degree of freedom taken into account) to derive a low-energy effective theory which describes not only the edge of these systems, but also the lowest lying excitations in their bulk - spin waves. At the end we will demonstrate the gauge invariance of our total, bulk plus edge, action when perturbing external electromagnetic fields are present in it. \subsection{The edge theory of spinless quantum Hall systems} The systems that we consider are at filling fractions $1/m$ where $m$ is an odd integer.
We start with the dual formulation of the Chern-Simons effective description of these systems. In the Chern-Simons formulation the problem of the 2D electron system in a magnetic field is mapped to a problem of a bose liquid with a long-range interaction described by a statistical gauge field. The vortex excitations of the bose fluid correspond to the quasiparticle excitations of quantum Hall systems. In the dual form the vortex excitations (i.e. fluxes of the gauge field in the preceding formulation) are viewed as particles, and the charge current-density becomes the flux of a new gauge field. The first terms of the Lagrangian density in the dual form are: \begin{equation} {\cal L} = -\frac{m}{4 \pi} \epsilon^{\mu \nu \lambda} A_{\mu} \partial_{\nu} A_{\lambda} + \frac{1}{2 \pi} \epsilon^{\mu \nu \lambda} A^{ext}_{\mu} \partial_{\nu} A_{\lambda} - {\cal J}^{v}_{\mu} A^{\mu} - \frac{\lambda}{4} F_{\mu \nu} F^{\mu \nu} \label{lagr} \end{equation} The vector potential $A_{\mu},\; \mu = 0,1,2$ represents the newly introduced gauge field, which enters the definition of the charge current-density $J_{\mu},\; \mu = 0,1,2$: \begin{equation} J_{\mu} = \epsilon_{\mu \nu \lambda} \frac{1}{2 \pi} \partial^{\nu} A^{\lambda}. \label{basic} \end{equation} $F_{\mu \nu} = \partial^{\mu} A^{\nu} - \partial^{\nu} A^{\mu}.$ The vector potential $A_{\mu}^{ext}, \mu = 0,1,2$ describes the external electromagnetic field, and the first two terms, when $A_{\mu}$ is ``integrated out'', give the basic Hall response of the system, i.e. the Hall conductance equal to $ \frac{1}{m} \frac{e^{2}}{h}$. The third term couples the vortex current-density $ {\cal J}_{\mu}^{v},\; \mu = 0,1,2$, \begin{equation} {\cal J}_{\mu}^{v} = \frac{1}{2 \pi i} \epsilon_{\mu \nu \lambda} \partial^{\nu} \partial^{\lambda} \alpha \label{vorcur} \end{equation} to the gauge field $A_{\mu}$ (according to the interchanged roles of fluxes and particles).
In \ref{vorcur}, $\alpha$ is the phase of the bose field in the former (non-dual) formulation, and the vortex current-density is non-zero only if $\alpha$ is a non-analytic function of the coordinates, i.e. $ \partial^{\nu} \partial^{\lambda} \alpha \neq \partial^{\lambda} \partial^{\nu} \alpha $ for some $\lambda$ and $\nu$. If our system has a boundary, the action under a general gauge transformation $A_{\mu} \rightarrow A_{\mu} + \partial_{\mu} \Lambda$ for which $\Lambda$ is not zero at the boundary is not gauge invariant (i.e. not charge conserving). Any electric field along the boundary will produce a current normal to the boundary because of the nonzero value of the Hall conductance. But with restricted gauge transformations, for which $\Lambda=0$ on the boundary, we may use this action to derive, as was already done in \cite{wen}, the kinetic term of the edge theory Lagrangian. These gauge transformations describe a well-defined boundary problem, in which some of the physics of the quantum Hall systems due to perturbing external electromagnetic fields is not present. Namely, by making all previously gauge-dependent quantities on the edge gauge independent, i.e. physical, we are deriving an effective edge theory that gives the right physics when charge exchange between the bulk and the edge is absent \cite{ston,hald}. (We defer the description of the total action with perturbing electromagnetic fields, which is explicitly gauge invariant, to subsection II C.) First we neglect the last term in \ref{lagr} (as a higher order term in derivatives), regard the equation of motion for $A_{0}$ as a constraint, and take $A_{0} = 0$. The constraint is \begin{equation} \frac{1}{2 \pi}(m \vec{\nabla} \times \vec{A} - \vec{\nabla} \times \vec{A}^{ext}) = - {\cal J}_{0}^{v} \end{equation} simply saying that any deviation in the charge density of the system is due to the creation of vortices.
The solution of this equation, up to a gauge transformation, is \begin{equation} \delta A_{a} = A_{a} - \frac{1}{m} A_{a}^{ext} = - \frac{1}{m} \partial_{a} \alpha. \label{solu} \end{equation} We assume that there are no vortices in the bulk of the system, so that $\alpha$ is analytic. When we insert the solution \ref{solu} into the remaining term of the Lagrangian (we do not consider the term with $A_{\mu}^{ext}$): \begin{equation} \Delta {\cal L}_{\rm eff} = - \frac{m}{4 \pi} \epsilon_{a 0 b} A_{a} \partial_{0} A_{b} \end{equation} we get, up to a total time derivative and with the assumption that $A_{\mu}^{ext}$ is time independent, \begin{equation} {\cal L}_{\rm eff}(x,y,t) = \frac{1}{4 \pi m} \epsilon_{a b} \partial_{b} (\partial_{a} \alpha \partial_{0} \alpha). \end{equation} This total divergence can be translated into a surface term: \begin{equation} {\cal L}_{\rm eff}(x,t) = \frac{1}{4 \pi m} \partial_{x} \alpha \partial_{0} \alpha, \label{surf} \end{equation} exactly the kinetic term of the chiral boson theory, if we consider the system to be defined in the lower half-plane with $y=0$ as a boundary. We get only the kinetic term of the edge theory because we started from a theory in which we neglected the terms that bring dynamics. We may expect that the next term in the small-momentum expansion on the edge is \begin{equation} - \frac{v}{4 \pi m} \partial_{x} \alpha \partial_{x} \alpha \end{equation} with a nonuniversal coupling $v$. This term gives dynamics to the edge theory and, together with \ref{surf}, makes up the chiral boson theory Lagrangian density. Thus, we can conclude that the field $\alpha$, the phase of the bosonic field in the standard Chern-Simons formulation, plays on the boundary the role of the chiral boson field.
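In our notation, with $\epsilon_{xy}=1$ and the system occupying the lower half-plane $y \le 0$, the reduction of the total divergence to the surface term \ref{surf} can be spelled out explicitly; the $x$-derivative piece integrates to zero for fields vanishing at $x \rightarrow \pm\infty$:

```latex
\begin{eqnarray*}
\int dx \int_{-\infty}^{0} dy \; {\cal L}_{\rm eff}(x,y,t)
&=& \frac{1}{4 \pi m} \int dx \int_{-\infty}^{0} dy \;
    \left[ \partial_{y} \left( \partial_{x} \alpha \, \partial_{0} \alpha \right)
         - \partial_{x} \left( \partial_{y} \alpha \, \partial_{0} \alpha \right) \right] \\
&=& \frac{1}{4 \pi m} \int dx \;
    \partial_{x} \alpha \, \partial_{0} \alpha \Big|_{y=0}.
\end{eqnarray*}
```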
\subsection{Effective low-energy field theory of ferromagnetic quantum Hall states} The Lagrangian density of the Chern-Simons theory for ferromagnetic quantum Hall states in the dual form is \cite{leeka,stone}: \begin{eqnarray} {\cal L} = & & - \frac{m}{4 \pi} \epsilon^{\mu \nu \lambda} A_{\mu} \partial_{\nu} A_{\lambda} + \frac{1}{2 \pi} \epsilon^{\mu \nu \lambda} A_{\mu}^{ext} \partial_{\nu} A_{\lambda} - {\cal J}_{\mu} A^{\mu} \nonumber \\ \label{lagrs} & & - \frac{\rho_{s}}{2} ( \vec{\nabla} \vec{n})^{2} + \frac{\Delta}{2} n_{3} - \frac{\lambda}{4} F^{\mu \nu} F_{\mu \nu}. \end{eqnarray} Again, the charge current-density as a function of the statistical gauge field $A^{\mu}$ is given by the relation \ref{basic}. Now the quasiparticle current-density consists of two contributions, vortex and skyrmion: ${\cal J}_{\mu} = {\cal J}_{\mu}^{v} + {\cal J}_{\mu}^{s}$. As before, vortex excitations do not change the spin configuration of the ferromagnetic Hall states. Skyrmions, on the other hand, which lie lower in the energy spectrum, represent excitations accompanied by a reversal of electron spins in the system. The vortex current-density is given by \ref{vorcur}, where again $\alpha$ is the phase of the bosonic field of the standard Chern-Simons formulation, and the skyrmion current-density \cite{leeka,stone} is: \begin{equation} {\cal J}_{\mu}^{s} = \frac{1}{2 \pi i} \epsilon_{\mu \nu \lambda} \partial^{\nu} \overline{z} \partial^{\lambda} z. \end{equation} The field $z$ is the two-component spinor of the standard Chern-Simons formulation \cite{leeka}, in which the bosonic field is decomposed into an amplitude, a phase, and a spinor part: \begin{equation} \Psi_{\rm bosonic} = \rho \; \exp\{i \alpha\} \; z \label{field} \end{equation} The spinor part describes the spin degree of freedom of the bosonic field associated with the original electron field.
The fourth term, new with respect to the spinless case, is the nonlinear sigma model term with $ \vec{n} = \overline{z} \vec{\tau} z $, where $ \tau_{x}, \tau_{y},$ and $\tau_{z}$ are Pauli matrices. $ \rho_{s}$ is the stiffness constant. This term represents the cost in exchange energy when the ground-state ferromagnetic configuration is modified. The fifth term is the Zeeman term with constant $ \Delta = \frac{g \mu_{B}}{2 \pi} B $, where $B$ is the external magnetic field. To get the low-energy effective theory which includes the edge physics, we repeat the steps described in the previous subsection for the spinless case. The constraint equation in this case, as we vary $ A_{0}$, is: \begin{equation} \frac{1}{2 \pi} (m \vec{\nabla} \times \vec{A} - \vec{\nabla} \times \vec{A}^{ext}) = - {\cal J}_{0}^{v} - {\cal J}_{0}^{s}. \end{equation} The solution to this constraint is: \begin{equation} \delta A_{a} = A_{a} - \frac{1}{m} A_{a}^{ext} = - \frac{1}{m} \partial_{a} \alpha + \frac{i}{m} \overline{z} \partial_{a} z \label{ssolu} \end{equation} Now we assume that $\alpha$ is analytic, so that no vortex excitations in the bulk are allowed. We do not make any restrictions on $z$. By inserting the solution \ref{ssolu} into the initial Lagrangian \ref{lagrs} with $ A_{0}=0 $ and without the second and the last term, we get the surface terms: \begin{equation} {\cal L}^{\rm kin}_{\rm eff}(x,t) = \frac{1}{4 \pi m}(\partial_{x} \alpha \partial_{0} \alpha -2 i \partial_{x} \alpha (\overline{z} \partial_{0} z) + {\rm terms \; without} \; \alpha) \label{surter} \end{equation} The field $\alpha$ no longer appears in the bulk Lagrangian. Therefore pure charge degrees of freedom, i.e. those that are not accompanied by a change in the spin configuration, are now restricted to live only on the boundary of the system. (This coincides with the microscopic physical picture that we have of these systems.
The pure charge (quasihole) excitations, which in the bulk lie higher in the energy spectrum than skyrmion excitations, can be found in the low-energy approximation on the edge of the system. There, on the edge, their excitation energy is smallest (in fact it goes to zero) \cite{mini}.) In \ref{surter} we see the most important result of this derivation, namely a nontrivial coupling between the spin and charge degrees of freedom on the edge of the system. In the low-energy approximation that we make (i.e. neglecting the last term in \ref{lagrs}) we find that the charge current is equal to the sum of the skyrmion and vortex currents. As we do not have vortex currents in the bulk of the system, the condition that the skyrmion current normal to the boundary is zero is equivalent to the demand that the charge current normal to the boundary is zero. In the absence of any external electromagnetic fields besides the uniform magnetic field that defines the problem, we will require that no charge can leave the system, i.e. that the skyrmion current normal to the boundary is zero. In that case \begin{equation} \partial_{x}(\overline{z} \partial_{0} z) = \partial_{0} (\overline{z} \partial_{x} z) \end{equation} and we may rewrite (after a partial integration) the second surface term in ${\cal L}_{\rm eff}^{\rm kin}(x,t)$ as \begin{equation} \frac{1}{4 \pi m} [-i \partial_{x} \alpha (\overline{z} \partial_{0} z) - i \partial_{0} \alpha (\overline{z} \partial_{x} z)] \end{equation} Besides, as we did not break the gauge invariance under the gauge transformation defined as: \begin{equation} \alpha \rightarrow \alpha + \beta \;\;{\rm and} \; \;z \rightarrow \exp\{-i \beta\} \; z \end{equation} and present in the bosonic Chern-Simons formulation with field \ref{field}, we may also demand the same invariance on the edge of the system. This invariance is an expression of the confinement of spin and charge on electrons, and should also exist on the edge.
As a result, the surface gauge-invariant kinetic term that contains the field $\alpha$ is \begin{equation} {\cal L}_{\rm eff}^{\rm kin} = \frac{1}{4 \pi m} (\partial_{x} \alpha - i \overline{z} \partial_{x} z) (\partial_{0} \alpha - i \overline{z} \partial_{0} z). \end{equation} We can conclude that, with respect to the spinless case, instead of \begin{equation} \partial_{\mu} \alpha \; \; \mu = 0,x, \label{exp} \end{equation} in the case with spin we have \begin{equation} \partial_{\mu} \alpha - i \overline{z} \partial_{\mu} z \; \; \mu = 0,x. \label{expres} \end{equation} By considering couplings with external fields we may also conclude that these gauge-invariant expressions, up to appropriate constants, represent the charge density and current on the edge, similarly to the spinless case. Then \ref{expres} formally expresses the physical fact that the charge current and density on the edge have contributions accompanied by changes in the spin configuration on the edge. Therefore we expect that, upon inclusion of a charge-density interaction term (which describes the dynamics on the edge) so that the Lagrangian density is \begin{equation} {\cal L}_{\rm eff} = \frac{1}{4 \pi m} (\partial_{0} \alpha - i \overline{z} \partial_{0} z) (\partial_{x} \alpha - i \overline{z} \partial_{x} z) - \frac{v}{4 \pi m} (\partial_{x} \alpha - i \overline{z} \partial_{x} z)^{2}, \label{rezultat} \end{equation} we have a complete low-energy description of the charge degree of freedom on the edge. The equation of motion for $\alpha$ on the edge simply tells us that the charge on the edge drifts with velocity $v$ along the edge in only one direction, as it should in a quantum Hall system.
At this stage of deriving the low-energy effective theory, the bulk Lagrangian density is \begin{equation} \Delta {\cal L}_{\rm eff}(x,y,t) = - {\cal J}_{a}^{s} (\frac{1}{m} A_{ext}^{a}+\frac{i}{m} \overline{z} \partial^{a} z) - \frac{\rho_{s}}{2} (\vec{\nabla} \vec{n})^{2} + \frac{\Delta}{2} n_{3} \label{restlag} \end{equation} where $ a = x,y $. Now we will apply the spin-wave approximation, in which the field $z$ is decomposed in the following way \begin{equation} z= \left(\begin{array}{c} 1- \frac{1}{2} |\Psi|^{2} \\ \Psi \end{array} \right). \end{equation} The complex bosonic field $\Psi$ expresses fluctuations with respect to the ground-state configuration and is to be considered small and slowly varying; therefore, we will keep only terms quadratic in $\Psi$. Then the $2+1$ dimensional Lagrangian density \ref{restlag} of the bulk becomes \begin{equation} {\cal L}_{\rm eff}(x,y,t) = i \rho_{0} \overline{\Psi} \partial_{0} \Psi - 2 \rho_{s} \vec{\nabla} \overline{\Psi} \vec{\nabla} \Psi - \Delta |\Psi|^{2} \label{dvapet} \end{equation} Now we will again assume that our system is in the lower half-plane. After a partial integration inside the expression for the system Lagrangian we get an extra surface term (in addition to the chiral boson Lagrangian): \begin{equation} \Delta {\cal L}_{\rm surf}(x,t) = - 2 \rho_{s} \overline{\Psi} \partial_{y} \Psi, \label{extrasur} \end{equation} and the $2+1$ dimensional Lagrangian density of the bulk \begin{equation} {\cal L}_{\rm eff}^{\rm bulk}(x,y,t) = i \rho_{0} \overline{\Psi} \partial_{0} \Psi + 2 \rho_{s} \overline{\Psi} \vec{\nabla}^{2} \Psi - \Delta |\Psi|^{2}. \label{lagbu} \end{equation} For a moment we will discuss only the problem of a 2D ferromagnet described by the Lagrangian densities \ref{extrasur} and \ref{lagbu} (in the spin-wave approximation), in order to explain the meaning of the surface term \ref{extrasur}.
In that case, the corresponding Euler-Lagrange equations for the field $\Psi$ are \begin{equation} i \rho_{0} \partial_{0} \Psi + 2 \rho_{s} \vec{\nabla}^{2} \Psi - \Delta \Psi = 0 \label{bueq} \end{equation} in the bulk, and \begin{equation} \partial_{y} \Psi|_{y=0} = 0 \label{baeq} \end{equation} on the edge. The normal mode solutions of \ref{bueq} in the bulk are spin waves, \begin{equation} \Psi = A \exp\{ i \vec{k} \vec{r}\} \exp\{ - i w t\} \end{equation} with the dispersion relation: \begin{equation} w = \frac{\Delta}{\rho_{0}} + \frac{2 \rho_{s}}{\rho_{0}} k^{2}. \end{equation} To satisfy the boundary condition \ref{baeq}, the class of solutions is further reduced to the form: \begin{equation} \Psi = A \cos\{ k_{y} y\} \exp\{i k_{x} x\} \exp\{- i w t\}. \label{spwe} \end{equation} The condition \ref{baeq} ensures that spin currents normal to the boundary are zero. Namely, if we use the Noether expression for them, \begin{equation} {\cal J}_{\mu}^{a} = - \frac{i}{2} \frac{\delta {\cal L}_{QFM}} {\delta(\partial^{\mu} z_{i})} \tau_{ij}^{a} z_{j} + {\rm h.c.} \end{equation} where $ \tau^{a}, a = 1,2,3 $ are Pauli matrices and ${\cal L}_{QFM}$ (a Lagrangian density of a quantum ferromagnet) is \begin{equation} {\cal L}_{QFM} = i \rho_{0} \overline{z} \partial_{0} z - \frac{\rho_{s}}{2} (\vec{\nabla} \vec{n})^{2} + \frac{\Delta}{2} \overline{z} \tau_{3} z, \end{equation} we may find in the spin-wave approximation the following expressions for them: \begin{eqnarray} & & {\cal J}_{y}^{3} = i \rho_{s} (\overline{\Psi} \partial_{y} \Psi - \partial_{y} \overline{\Psi} \Psi) \nonumber \\ & & {\cal J}_{y}^{1} = i \rho_{s} (\partial_{y} \overline{\Psi} - \partial_{y} \Psi) \nonumber \\ & & {\cal J}_{y}^{2} = i \rho_{s} (\partial_{y} \overline{\Psi} + \partial_{y} \Psi). \label{spcurrents} \end{eqnarray} We neglected terms of order higher than two in $\Psi$; the single condition \ref{baeq} ensures that all of them are zero at the boundary.
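As an independent numerical consistency check (not part of the derivation above), the standing-wave form \ref{spwe} can be tested against the bulk equation \ref{bueq} and the boundary condition \ref{baeq} with central finite differences; the parameter values in the Python sketch below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative parameter values (rho_0, rho_s, Delta); not fitted to any system.
rho0, rhos, Delta = 1.0, 0.7, 0.3
kx, ky = 0.4, 0.9
w = Delta / rho0 + 2 * rhos / rho0 * (kx**2 + ky**2)  # bulk dispersion relation

def psi(x, y, t):
    # Standing-wave solution with d_y psi = 0 at the boundary y = 0
    return np.cos(ky * y) * np.exp(1j * (kx * x - w * t))

# Finite-difference check of i rho0 d_t psi + 2 rhos laplacian(psi) - Delta psi = 0
h = 1e-4
x0, y0, t0 = 0.3, -0.5, 0.2
d_t = (psi(x0, y0, t0 + h) - psi(x0, y0, t0 - h)) / (2 * h)
lap = (psi(x0 + h, y0, t0) + psi(x0 - h, y0, t0)
       + psi(x0, y0 + h, t0) + psi(x0, y0 - h, t0)
       - 4 * psi(x0, y0, t0)) / h**2
residual = 1j * rho0 * d_t + 2 * rhos * lap - Delta * psi(x0, y0, t0)
print(abs(residual))  # small; vanishes in the continuum limit

# Boundary condition d_y psi|_{y=0} = 0
d_y0 = (psi(x0, h, t0) - psi(x0, -h, t0)) / (2 * h)
print(abs(d_y0))
```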
The edge spin-wave solutions of the form \begin{equation} \Psi = B \exp\{\beta y\} \exp\{i k_{x} x\} \exp\{-i w t\}, \label{edwave} \end{equation} $\beta$ positive, satisfy the equation in the bulk, but the condition \ref{baeq} forces $\beta$ to be zero. Therefore, in the case of a pure quantum ferromagnet, the constraint of spin conservation on the boundary does not allow the existence of edge spin excitations. Now we will summarize our effective theory for a quantum Hall system with a boundary, having in mind the specific geometry in which the system is in the lower half of the plane. The bulk Lagrangian density (in the spin-wave approximation) is given by \begin{equation} {\cal L}_{\rm bulk} = i \rho_{0} \overline{\Psi} \partial_{0} \Psi + 2 \rho_{s} \overline{\Psi} \vec{\nabla}^{2} \Psi - \Delta |\Psi|^{2}+ {\rm higher \; order \; terms} \label{effbu} \end{equation} and the edge Lagrangian density is \begin{equation} {\cal L}_{\rm edge}(x,t) = - 2 \rho_{s} \overline{\Psi} \partial_{y} \Psi + \frac{1}{4 \pi m}( \partial_{x} \alpha^{f} \partial_{0} \alpha^{f} - v \partial_{x} \alpha^{f} \partial_{x} \alpha^{f}) + {\rm higher \; order \; terms} \label{effed} \end{equation} where \begin{equation} \partial_{\mu} \alpha^{f} = \partial_{\mu} \alpha - \frac{i}{2} (\overline{\Psi} \partial_{\mu} \Psi - \partial_{\mu} \overline{\Psi} \Psi). \end{equation} \subsection{The proof of gauge invariance of the effective field theory} In the spinless case the effective action of the bulk, \begin{equation} {\cal L}_{\rm bulk} = - \frac{m}{4 \pi} \epsilon^{\mu \nu \lambda} A_{\mu} \partial_{\nu} A_{\lambda} + \frac{1}{2 \pi} \epsilon^{\mu \nu \lambda} A^{ext}_{\mu} \partial_{\nu} A_{\lambda} \label{pocetak} \end{equation} when the field $ A_{\mu}$ is integrated out, is \begin{equation} {\cal L}_{\rm bulk}^{\rm eff} = \frac{1}{4 \pi m} \epsilon^{\mu \nu \lambda} A_{\mu}^{ext} \partial_{\nu} A^{ext}_{\lambda}.
\end{equation} On the other hand, the edge action with the external electromagnetic field included is \cite{wen,ston} \begin{eqnarray} {\cal L}_{\rm edge}= & & \frac{1}{4 \pi m} [\partial_{x} \alpha \partial_{0} \alpha - v (\partial_{x} \alpha)^{2}] \nonumber \\ & & + \frac{1}{2 \pi m} (v A_{x}^{ext} - A_{0}^{ext}) \partial_{x} \alpha - \frac{1}{4 \pi m} (v A_{x}^{ext} - A_{0}^{ext}) A_{x}^{ext} \label{edaction} \end{eqnarray} Under the gauge transformation $ \alpha \rightarrow \alpha + \Lambda $ and $ A_{\mu}^{ext} \rightarrow A_{\mu}^{ext} + \partial_{\mu} \Lambda$ the total action, $ {\cal L}_{\rm bulk} + {\cal L}_{\rm edge}$ or $ {\cal L}_{\rm bulk}^{\rm eff} + {\cal L}_{\rm edge}$, is invariant. Namely, the chiral anomaly \cite{ston} term \begin{equation} \frac{1}{4 \pi m} \Lambda (\partial_{0} A_{x}^{ext} - \partial_{x} A_{0}^{ext}) \end{equation} which we get by the gauge transformation of the bulk action has the same absolute value but the opposite sign as the term that we get by gauge transforming the edge action \ref{edaction}. In our previous derivation we assumed that only the vector potential of the constant magnetic field is present in \ref{pocetak} and, therefore, the field $ A^{\mu}$ did not have any dynamics; by the equation of motion of the action it was constrained to satisfy $ m \epsilon^{a b} \partial_{a} A_{b} = \epsilon^{a b} \partial_{a} A_{b}^{ext} $. Because of the absence of perturbing external electromagnetic fields, the whole dynamics of the system was on the edge, described by the chiral boson theory (the first two terms in \ref{edaction}).
Similarly, in the case with spin, the more general effective bulk action with external perturbing electromagnetic fields is \begin{eqnarray} {\cal L}_{\rm bulk} = & & - \frac{m}{4 \pi} \epsilon^{\mu \nu \lambda} A_{\mu} \partial_{\nu} A_{\lambda} + \frac{1}{2 \pi} \epsilon^{\mu \nu \lambda} A_{\mu}^{ext} \partial_{\nu} A_{\lambda} - {\cal J}^{s}_{\mu} A^{\mu} \nonumber \\ & & - \frac{\rho_{s}}{2} (\vec{\nabla} \vec{n})^{2} + \frac{\Delta}{2} n_{3}. \label{begin} \end{eqnarray} Compare with \ref{lagrs} and note the absence of the vortex current ${\cal J}^{v}_{\mu}$ in the action. As we pointed out before, this signifies the fact that the charge excitations without spin changes are to be found on the edge of the system in the low-energy approximation. If we integrate out the $A_{\mu}$ field we get \begin{equation} {\cal L}_{\rm bulk}^{\rm eff} = \frac{1}{4 \pi m} \epsilon^{\mu \nu \lambda} A_{\mu}^{ext} \partial_{\nu} A_{\lambda}^{ext} + \frac{1}{4 \pi m} \epsilon^{\mu \nu \lambda} A_{\mu}^{ext} \partial_{\nu}(i \overline{z} \partial_{\lambda} z) + {\rm terms \; \; without \; \;} A_{\mu}^{ext} \label{spinpocetak} \end{equation} For the edge, the complete action (see \ref{rezultat}) with external electromagnetic fields is \begin{eqnarray} {\cal L}_{\rm edge} = & & \frac{1}{4 \pi m} [(\partial_{x} \alpha - i \overline{z} \partial_{x} z) (\partial_{0} \alpha - i \overline{z} \partial_{0} z) - v (\partial_{x} \alpha - i \overline{z} \partial_{x} z)^{2}] \nonumber \\ & & + \frac{1}{2 \pi m} (v A_{x}^{ext} - A_{0}^{ext}) (\partial_{x} \alpha - i \overline{z} \partial_{x} z) - \frac{1}{4 \pi m} (v A_{x}^{ext} - A_{0}^{ext} )A_{x}^{ext} \label{xxaction} \end{eqnarray} Again, in this case, the extra term that we get by setting $A_{\mu}^{ext} \rightarrow A_{\mu}^{ext} + \partial_{\mu} \Lambda $ in \ref{spinpocetak}, \begin{equation} \frac{1}{4 \pi m} \Lambda [\partial_{0}(A_{x}^{ext} + i \overline{z} \partial_{x} z) - \partial_{x}(A_{0}^{ext} + i \overline{z} \partial_{0} z)]
\end{equation} is canceled by the term that comes from taking $ A_{\mu}^{ext} \rightarrow A_{\mu}^{ext} + \partial_{\mu} \Lambda$ and $\alpha \rightarrow \alpha + \Lambda$ in \ref{xxaction}. Therefore the gauge invariance of the total action, $ {\cal L}_{\rm bulk} + {\cal L}_{\rm edge}$ (or ${\cal L}^{\rm eff}_{\rm bulk} + {\cal L}_{\rm edge}$, \ref{begin} and \ref{xxaction}), is proved. As we take $ A_{\mu}^{ext}$ to be the vector potential of the constant magnetic field and apply the spin-wave approximation, the action \ref{begin} is transformed into \ref{dvapet} and the edge action becomes \ref{rezultat}. \section{Solutions of the field theory} As we vary the surface terms of the low-energy effective Lagrangian (\ref{effbu} and \ref{effed}) with respect to $\alpha$ and $\overline{\Psi}$ (or $\Psi$), we get two equations. As we vary $\alpha$, we get \begin{equation} \partial_{x} \partial_{0} \alpha^{f} = v_{c} \partial_{x}^{2} \alpha^{f} \label{chargeeq} \end{equation} and, by varying $\overline{\Psi}$ and using the previous equation, \begin{equation} 2 \rho_{s} \partial_{y} \Psi + \frac{1}{4 \pi m} (-i) [ \partial_{x} \Psi \partial_{0} \alpha^{f} - \partial_{0} \Psi \partial_{x} \alpha^{f} - 2 v_{c} \partial_{x} \Psi \partial_{x} \alpha^{f}]=0 \label{boundarye} \end{equation} When $\partial_{\mu} \alpha^{f} = 0$, $\mu =0,x$, i.e. there are no charge excitations on the boundary, the spin waves \ref{spwe} are solutions of the bulk and surface equations. When $\Psi = 0$, i.e. there are no spin excitations in the system, the only solutions of the equations are the charge density waves of the chiral boson theory.
From the bulk equation \ref{bueq} we get the following dispersion relation for the edge spin waves \ref{edwave}: \begin{equation} w = \frac{\Delta}{\rho_{0}} + \frac{2 \rho_{s}}{\rho_{0}} (k_{x}^{2} - \beta^{2}) \label{disprel} \end{equation} The coefficient $\beta$, which characterizes the extension of the edge spin waves into the bulk of the system, comes from the boundary equation \ref{boundarye}. The one-dimensional charge density and current are given by \begin{equation} j_{0} = \frac{1}{2 \pi m} \partial_{x} \alpha^{f} \;\; {\rm and} \;\; j_{x} = \frac{1}{2 \pi m} \partial_{0} \alpha^{f} \end{equation} respectively. For $\alpha^{f} = \alpha^{f}(x + v_{c}t)$, the general solution of \ref{chargeeq}, we have \begin{equation} v_{c} \partial_{x} \alpha^{f} = \partial_{0} \alpha^{f} \; \; {\rm i.e.} \; \; v_{c} j_{0} = j_{x} \end{equation} and we may rewrite the equation \ref{boundarye} as \begin{equation} - 4 \rho_{s} \partial_{y} \Psi + i j_{0} \partial_{0} \Psi + i j_{0} v_{c} \partial_{x} \Psi = 0 \label{spbaeq} \end{equation} \subsection{Charged edge spin-wave solutions} A solution of the form \ref{edwave} of the equation \ref{spbaeq} with $\beta =$ const exists only if $j_{0}$ is itself a constant, i.e. if there is a constant charge density along the boundary of the system. For the ground state $j_{0} = 0$, and the previous condition (with $j_{0} \neq 0$) means that some (extra) charge is added to $(j_{0}>0)$ or subtracted from $(j_{0}<0)$ the system. Inserting the form \ref{edwave} into \ref{spbaeq}, and using \ref{disprel}, we obtain candidate solutions with $\beta$ equal to \begin{equation} \beta_{1,2} = - \frac{\rho_{0}}{j_{0}} \pm \frac{\rho_{0}}{|j_{0}|} \sqrt{1 + \frac{j_{0}^{2}}{2 \rho_{s} \rho_{0}^{2}} (\Delta + 2 \rho_{s} k_{x}^{2} - v_{c} \rho_{0} k_{x})} \label{betasol} \end{equation} of which only those with $\beta$ positive can describe edge spin waves.
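The quadratic structure behind \ref{betasol} is easy to verify numerically. The Python sketch below (illustrative parameter values, not fitted to any physical system) reconstructs the two roots of \ref{betasol}, substitutes them back into the linear relation $-4\rho_{s}\beta + j_{0}(w - v_{c}k_{x}) = 0$ that underlies \ref{betasol} once the dispersion \ref{disprel} is used for $w$, and compares the positive roots with their small-$j_{0}$ limits.

```python
import numpy as np

# Illustrative parameter values (rho_0, rho_s, Delta, v_c, k_x); not physical fits.
rho0, rhos, Delta, vc, kx = 1.0, 0.8, 0.25, 0.3, 0.2

def beta_roots(j0):
    """The two roots beta_{1,2} of the boundary quadratic, as in the text."""
    disc = 1.0 + j0**2 / (2 * rhos * rho0**2) * (
        Delta + 2 * rhos * kx**2 - vc * rho0 * kx)
    return (-rho0 / j0 + rho0 / abs(j0) * np.sqrt(disc),
            -rho0 / j0 - rho0 / abs(j0) * np.sqrt(disc))

def boundary_residual(beta, j0):
    """Residual of -4 rho_s beta + j0 (w - v_c k_x) = 0, with w taken from
    the bulk dispersion w = Delta/rho0 + (2 rho_s/rho0)(k_x^2 - beta^2)."""
    w = Delta / rho0 + 2 * rhos / rho0 * (kx**2 - beta**2)
    return -4 * rhos * beta + j0 * (w - vc * kx)

j0 = 1e-3                      # small added edge charge density
b_plus = beta_roots(j0)[0]     # the positive (physical) root for j0 > 0
approx_plus = j0 / (4 * rhos * rho0) * (Delta + 2 * rhos * kx**2 - vc * rho0 * kx)

b_minus = beta_roots(-j0)[0]   # the positive root for j0 < 0
approx_minus = 2 * rho0 / j0   # i.e. 2 rho0 / |j0|

print(b_plus, approx_plus, b_minus, approx_minus)
```

Both positive roots satisfy the boundary relation to numerical precision and approach their small-$j_{0}$ limits, exhibiting the asymmetry between added and subtracted charge discussed in the text.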
In the approximation in which the second term under the square root in \ref{betasol} is small, \begin{eqnarray} \beta \approx & \frac{j_{0}}{4 \rho_{s} \rho_{0}} (\Delta + 2 \rho_{s} k_{x}^{2} - v_{c} \rho_{0} k_{x}) & \; \;{\rm for}\; \; j_{0}> 0 \nonumber \\ {\rm and}\; \beta \approx & 2 \frac{\rho_{0}}{|j_{0}|} & \; \;{\rm for} \; \;j_{0} < 0 \label{twoeq} \end{eqnarray} For $j_{0} \rightarrow 0$ the second solution is unacceptable because our effective theory assumes (spatially) slowly varying quantities, which is not the case for a solution with $\beta$ large. Also, looking at \ref{twoeq}, we notice an asymmetry between the $j_{0}>0$ and $j_{0}<0$ cases. One way to understand the asymmetry is to first consider the single-particle picture of adding and subtracting charge at the edge of these systems. In this picture of systems without spin, adding or subtracting charge is always equivalent to simple shifts (translations) of the boundary. In the case of systems with spin there is an additional possibility to add charge, as shown in Fig.1. We believe that this single-particle picture is behind the many-body state, the charged edge spin wave, for $j_{0}>0$. This can be supported by the following consideration. First, we may try to interpret the solution as a solution of a generalized pure-spin (quantum-ferromagnet) problem, in which the second and third terms in \ref{spbaeq} correspond to some surface terms in that problem. For a finite Zeeman coupling and in the small-momentum approximation, the solution with $j_{0}>0$ corresponds to the following term: \begin{equation} \frac{j_{0}}{\rho_{0}} \Delta \overline{\Psi} \Psi \end{equation} with an effective magnetic field $ B_{eff} = -(\frac{j_{0}}{\rho_{0}}) B $ on the boundary. As we add electrons, we effectively create a magnetic field in the direction opposite to the external magnetic field. This magnetic field makes possible the creation of the edge spin solution.
As we add more electrons ($j_{0}$ larger) the effective magnetic field increases in magnitude and favors edge spin waves localized near the boundary ($\beta$ larger). Therefore the edge spin wave (a single spin flip) will be more localized on the boundary if there are more pairs (see Fig.1), each of them energetically unfavorable because each consists of two electrons in the same orbital. The corresponding term (in the pure-spin problem) for the second solution (with $j_{0} < 0$) is \begin{equation} - \frac{2 \rho_{s} j_{0}}{\rho_{0}} \partial_{y} \overline{\Psi} \partial_{y} \Psi. \end{equation} It is of the opposite sign to the gradient term in the bulk and therefore favors a solution which disorders the spin configuration of the ground state; as is usual in these systems, we expect that this is also followed by a redistribution of the charge on the boundary. We also expect that these charged edge spin waves can be created without any change in the total charge of the system, i.e. by simple redistributions of the ground-state charge on the boundary, with $j_{0}$ a parameter that characterizes them. Therefore, when considering the total energy of excitations we will neglect the energy term of the chiral boson Lagrangian.
Then the surface contribution is \begin{equation} E_{\rm surface} = 2 \rho_{s} \beta^{2} \label{ensur} \end{equation} and the contribution from the bulk is given by \ref{disprel}. To get \ref{ensur} we normalized the wave to describe one electron spin flip: \begin{equation} \Psi = \sqrt{\frac{\beta}{\pi}} \exp\{ \beta y + i k x - i w t\} \end{equation} As a consequence, we may write the total energy of excitations as \begin{equation} E_{tot} = \frac{\Delta}{\rho_{0}} + \frac{2 \rho_{s}}{\rho_{0}} k^{2} + \beta^{2} (4 \rho_{s} - \frac{2 \rho_{s}}{\rho_{0}}) \label{etot} \end{equation} Because $ \rho_{0} = \frac{1}{2 \pi m}$ the last term in \ref{etot} is always negative, and at $k=0$ $E_{tot}$ is always lower than the constant Zeeman term. When \ref{twoeq} is substituted for $\beta$ in the expression \ref{etot}, the energy \ref{etot} is of the following form \begin{equation} E_{tot} = c_{0} + c_{1} k + c_{2} k^{2}+ \cdots \end{equation} (where the $c$'s do not depend on $k$). Because of the linear term in $k$, this dispersion relation is asymmetric around $k=0$. \subsection{Neutral edge spin-wave solutions} We define the neutral edge spin waves by requiring that the field $\alpha$, which lives on the boundary and is associated with pure charge degrees of freedom (those not also carrying spin), is zero ($\alpha = 0$). As a consequence there is one less surface equation to be satisfied, because the constraint equations \ref{chargeeq} and \ref{boundarye} cannot be satisfied simultaneously. And, as we will see, the constraint also implies losing one of the two (charge and spin) local conservation laws. Namely, local spin currents normal to the boundary will be nonzero in general. It will also be shown that these excitations are neutral, i.e. the total change in the charge of the system, when these excitations occur, is zero.
With the condition $\alpha = 0$ the charge current and density on the boundary are proportional to \begin{equation} \partial_{\mu} \alpha^{f} = - \frac{i}{2} (\overline{\Psi} \partial_{\mu} \Psi - \partial_{\mu} \overline{\Psi} \Psi) \;\; \mu = 0,x \end{equation} These charge degrees of freedom must satisfy \ref{chargeeq}, and from that it follows that the dispersion relation for the waves is $ w = v_{c} k$. This dispersion relation follows when a general solution that is a superposition of the waves of the form \ref{edwave} is considered. The frequency of the wave is also equal to \ref{disprel}. This enables us to find $\beta$ in this case. It is given by the following expression \begin{equation} \beta^{2} = \frac{\Delta}{2 \rho_{s}} + k^{2} - \frac{v_{c} \rho_{0}}{2 \rho_{s}} k, \label{betav} \end{equation} i.e. in the small-momentum approximation $ \beta \approx \sqrt{\frac{\Delta}{2 \rho_{s}}}$. But the waves do not satisfy \ref{boundarye}, which, in the spin-wave approximation, is equivalent to the condition \ref{baeq}. This means that the spin currents normal to the boundary of the system are, in general, nonzero and exchange of the spin of the system with outside is allowed. Nevertheless, ${\cal J}_{y}^{3} = 0$ (see \ref{spcurrents}) everywhere on the boundary and, also, the total spin of the system is conserved, i.e. \begin{equation} \int_{-\infty}^{+\infty} {\cal J}_{y}^{2} dx = 0 \;\; {\rm and} \;\; \int_{-\infty}^{+\infty} {\cal J}_{y}^{1} dx = 0 \end{equation} Although the bulk excitation energy $(w = v_{c} k)$ is gapless, these excitations have a gap. To see this we have to calculate their total energy and include the surface contribution \ref{ensur}, with $\beta$ given by \ref{betav}. As a final result we have: \begin{equation} E_{tot} = 2 \Delta + v_{c} k (1 - 2 \rho_{s}) + 4 \rho_{s} k^{2} \label{nuttot} \end{equation} The requirement $E_{tot}>0$ for each k fixes the allowed range of the parameters in \ref{nuttot} and in the theory. 
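The location and size of the gap in \ref{nuttot} can be made explicit by completing the square in $k$ (our step, not in the original text):

```latex
% Completing the square in \ref{nuttot} (our computation):
\begin{equation*}
E_{tot} = 4 \rho_{s} \left( k + \frac{v_{c} (1 - 2 \rho_{s})}{8 \rho_{s}}
\right)^{2} + 2 \Delta - \frac{v_{c}^{2} (1 - 2 \rho_{s})^{2}}{16 \rho_{s}},
\end{equation*}
% so the minimum over $k$ is attained at
% $k_{\min} = - v_{c}(1-2\rho_{s})/(8\rho_{s})$ and equals
% $2\Delta - v_{c}^{2}(1-2\rho_{s})^{2}/(16 \rho_{s})$.
```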
The excitations are allowed to propagate in both directions, although the spectrum is asymmetric around $k = 0$ because of the presence of the linear term in $E_{tot}$. The gap (the smallest excitation energy over all $k$) of these excitations is always smaller than the Zeeman gap. To prove the neutrality of these excitations we start with a general edge spin-wave solution: \begin{equation} \Psi(x,y,t) = \sum_{k} \sqrt{\frac{\beta}{\pi}} \exp\{ \beta y + i k x - i w t\} a(k), \end{equation} where $a(k)$ are arbitrary (complex) coefficients in this expansion. The density of charge on the boundary of the system can be expressed as \begin{eqnarray} \rho_{\rm surface}(x,t) & = & \frac{\partial_{x} \alpha^{f}}{2 \pi m} = \frac{1}{2 \pi m} (-\frac{i}{2}) (\overline{\Psi} \partial_{x} \Psi - \partial_{x} \overline{\Psi} \Psi) = \nonumber \\ & = & \frac{\beta}{2 \pi^{2} m} (-\frac{i}{2}) \sum_{k,k^{\prime}} \exp\{-i (k - k^{\prime})x\} \exp\{i(w_{k} - w_{k^{\prime}})t\} i (k^{\prime} + k) a(k)^{\ast} a(k^{\prime}) \end{eqnarray} The same quantity in the bulk is proportional to the topological density \cite{leeka} and equal to \begin{equation} \rho_{\rm bulk}(x,y,t) = \frac{(-i)}{2 \pi m} (\partial_{x} \overline{z} \partial_{y} z - \partial_{y} \overline{z} \partial_{x} z). \end{equation} In the spin-wave approximation we have \begin{eqnarray} \rho_{\rm bulk}(x,y,t) & = & \frac{(-i)}{2 \pi m} (\partial_{x} \overline{\Psi} \partial_{y} \Psi - \partial_{y} \overline{\Psi} \partial_{x} \Psi) \nonumber \\ & = & \frac{-i \beta^{2}}{2 \pi^{2} m} \sum_{k,k^{\prime}} \exp\{2 \beta y\} \exp\{-i (k - k^{\prime})x\} \exp\{i (w_{k} - w_{k^{\prime}})t\} (-i) (k + k^{\prime}) a(k)^{\ast} a(k^{\prime}).
\end{eqnarray} Clearly, the total change in the charge of the boundary, \begin{equation} Q_{\rm surface} = \int_{-\infty}^{+\infty} \rho_{\rm surface}(x,t) dx = \frac{\beta}{\pi m} \sum_{k} k a(k)^{\ast} a(k), \end{equation} is of the same amount, but of opposite sign to, the one in the bulk $(Q_{\rm bulk} = \int_{-\infty}^{+\infty} dx \int_{-\infty}^{0} dy \rho_{\rm bulk}(x,y,t))$. At the end of this section we would like to comment on the nature of these neutral edge spin waves. As they are completely skyrmionic, their charge is fully specified by their spin configuration. Their spread increases as the Zeeman energy decreases, which is a well-known skyrmion property. As they carry a fixed (unit) amount of spin, they vanish when the Zeeman coupling is zero. Due to the close relationship between their charge and spin and the linearity of the chiral boson dispersion relation, the dispersion relation of these spin waves is also linear. \section{Conclusions and Discussion} In conclusion, we have proposed an effective edge theory for ferromagnetic quantum Hall states. It describes their (1 + 1)-dimensional charge degrees of freedom by a chiral boson theory and their (2 + 1)-dimensional spin degrees of freedom by the effective theory of a quantum ferromagnet in the spin-wave approximation. We found two classes of edge spin-wave solutions. The class of charged edge spin waves is obtained by removing electrons from, or adding them to, the edge of the system. The second class, the neutral edge spin waves, does not require any change in the total charge of the system. All these edge spin excitations are characterized by linear dispersion relations and gaps in the excitation energy. We did not consider the most general surface terms for the spin degrees of freedom, which would be present in a most general theory, because we wanted to emphasize and examine the influence of the terms that describe the charge degrees of freedom.
The latter terms, collected in the chiral boson theory, give a complete effective description of the charge degrees of freedom on the edge. Ref. \cite{kkls} pointed out that for small Zeeman energies and soft confining potentials a reconstruction (from a narrow and spin-polarized edge) to an edge with spin textures should occur. Because we assume an edge with the same polarization as the bulk, our theory is valid only for steep enough confining potentials (see \cite{kkls} for estimates). At the end we would like to comment on the relationship between our work and the recent work concerning edge excitations and edge reconstruction at $\nu=$ 1 \cite{kkll}. There, the Hartree-Fock procedure was used to determine the energy spectrum of some proposed edge spin-flip excitations. (In fact a special form of these excitations was first suggested in Ref. \cite{ommt}.) These excitations reconstruct the edge when the Zeeman energy is small and the confining potential is softened. They are followed by an outward movement of charge and exist even at zero Zeeman energy. The latter property is not shared by our neutral edge excitations and charged edge excitations for $j_{0}>0$, and therefore they should not be confused with the ones of Ref. \cite{kkll}. (The neutral edge waves even require, as we described, special boundary conditions to exist.) On the other hand, a detailed description of the charged edge excitations for $ j_{0}<0 $ (which can be associated with an outward movement of charge) is impossible within the scope of our theory. The theory gives only an inkling of the possible instability of the ground state with respect to their creation. The author thanks F.D.M. Haldane and N. Read for beneficial discussions. She is especially thankful to E. Shimshoni, who also gave very useful comments on the manuscript. This work was supported by the Israel Council for Higher Education.
\section{Introduction} \label{intro} In a previous paper \cite{Ma} we have generalized the classical theory of minimal surfaces in $\mathbb{R}^3$ to zero mean curvature spacelike surfaces in 4-dimensional Lorentz space. Such an immersed surface $M^2\to\mathbb{R}^4_1$, called a \emph{stationary surface} (see \cite{Alias} for related earlier work), admits a Weierstrass-type representation formula, which involves a pair of meromorphic functions $\phi,\psi$ (the Gauss maps) and a holomorphic $1$-form (the height differential) on $M$: \[ {\bf x}= 2~\mathrm{Re}\int \Big(\phi+\psi, -\mathrm{i}(\phi-\psi),1-\phi\psi,1+\phi\psi\Big)\mathrm{d}h. \] Among complete examples, those with finite total curvature are most important, i.e., those for which the integral of the Gaussian curvature $-\int K\mathrm{d}M$ converges absolutely. For such surfaces, under mild assumptions we have established Gauss-Bonnet type formulas relating the total curvature to the Euler characteristic of $M$, the generalized multiplicities $\widetilde{d}_j$ of each end, the mapping degrees of $\phi,\psi$, and the indices of the so-called \emph{good singular ends}: \begin{equation*} \int_M K\mathrm{d}M=2\pi\Big(2-2g-r-\sum_{j=1}^r \widetilde{d}_j\Big) =-2\pi \left(\deg\phi+ \deg\psi-\sum{_j} |\mathrm{ind}_{p_j}|\right). \end{equation*} On this foundation, here we go on to consider complete examples with total curvature $-\int K\mathrm{d}M=4\pi$, which is the smallest possible value among algebraic stationary surfaces. (Here we ignore the trivial case when $M$ is contained in a 3-dimensional degenerate subspace $\mathbb{R}^3_0$. The induced metric is flat in that case, with total curvature $0$. See Section~2.) Recall that in $\mathbb{R}^3$, Osserman has shown that complete minimal surfaces with finite total curvature must be algebraic ones, i.e., they are given by meromorphic Weierstrass data over compact Riemann surfaces.
In particular, immersed examples with $-\int K\mathrm{d}M=4\pi$ are either the catenoid or the Enneper surface. (For other complete minimal surfaces in $\mathbb{R}^3$ with small total curvature $-\int K\mathrm{d}M\le 12\pi$ and the classification results, see \cite{Costa,Lopez}.) These two classical examples have been generalized by us in \cite{Ma} to stationary surfaces in $\mathbb{R}^4_1$ (see Examples~\ref{exa-enneper} and \ref{exa-catenoid}). In this paper our main result is \bigskip\noindent {\bf Theorem A}~~ Let $x:M^2\to\mathbb{R}^4_1$ be a complete, immersed, algebraic stationary surface with total curvature $4\pi$. Then it is either a generalized catenoid, or a generalized Enneper surface. In particular, there do not exist non-orientable examples with $-\int K\mathrm{d}M\le 4\pi$. \bigskip Compared to minimal surfaces in $\mathbb{R}^3$, here finite total Gaussian curvature (i.e., $\int_M K\mathrm{d}M$ converges absolutely) still implies that $M$ is conformally equivalent to a compact Riemann surface $\overline{M}$ with finitely many punctures $\{p_j|1\le j\le r\}$. A main difference is that in our case, finite total curvature no longer implies that the surface is of \emph{algebraic type}. For counter-examples see Example~\ref{exa-essen} and Example~\ref{exa-essen2}. An interesting open problem is whether there exist non-algebraic examples with $-\int K\mathrm{d}M=4\pi$. See the discussion in Section~5. Another new technical difficulty is that, to solve existence and uniqueness problems for complete stationary surfaces, we must now consider the following equation in the complex variable $z$: \begin{equation}\label{eq-singular} \phi(z)=\bar\psi(z). \end{equation} We have to show that there are no solutions to it for meromorphic functions $\phi,\psi$ with given algebraic forms and certain parameters on a compact Riemann surface $\overline{M}$ (except at several points assigned to be \emph{good singular ends}).
This is because on an immersed surface we must have $\phi\ne\bar\psi$ (the \emph{regularity condition}). On the other hand, at an end where $\phi,\bar\psi$ take the same value with equal multiplicities (a \emph{bad singular end}), the total curvature will diverge. Such a complex equation \eqref{eq-singular}, involving both holomorphic and anti-holomorphic functions, is quite unusual to the knowledge of the authors. Most of the time we have to deal with this problem by hand, combined with experience. See \cite{Ma} or Appendix A for related discussions. Note that $M\to\mathbb{R}^3$ is a rare case where we overcome this difficulty easily, because in this case $\phi\equiv -1/\psi$, which can never be equal to $\bar\psi$. \bigskip In \cite{Meeks}, Meeks initiated the study of complete non-orientable minimal surfaces in $\mathbb{R}^3$. Such a surface is represented on its oriented double covering space, and the example with the least possible total curvature $6\pi$ was constructed (Meeks' M\"obius strip). Here we generalize this theory to non-orientable stationary surfaces in $\mathbb{R}^4_1$ (Section~4). A key result is the following lower bound on the total curvature, which helps to establish Theorem~A above. \bigskip\noindent {\bf Theorem B}~~ Given a non-orientable surface $M$ whose double covering space \ $\widetilde{M}$ has genus $g$ and finitely many ends, for any complete algebraic stationary immersion $x:M\to\mathbb{R}^4_1$ with finite total curvature there must be $-\int_M K\mathrm{d}M\ge 2\pi(g+3).$ \bigskip We conjecture that $2\pi(g+3)$ is the best lower bound, which can always be attained. Note that this agrees with the estimate for non-orientable minimal surfaces in $\mathbb{R}^3$, and the conjecture is still open even in that special case \cite{Martin}. We organize this paper as follows. In Section~2 we review the basic theory of stationary surfaces in $\mathbb{R}^4_1$.
The orientable case and the non-orientable case are discussed separately in Sections~3 and 4. In Section~5 we give non-algebraic examples with small total curvature. The proofs of several technical lemmas are left to Appendices~A and B. \bigskip\noindent \textbf{Acknowledgement}~~ We thank two colleagues of the first author at Peking University: Professor Fan Ding, for providing the proof of the topological Theorem~\ref{thm-odd} in Appendix~B, and Professor Bican Xia, for verifying Lemma~\ref{lem-main} in Appendix~A using a computational method he developed earlier. We also thank Professor Changping Wang for his encouragement. This work is supported by Project 10901006 of the National Natural Science Foundation of China. \section{Preliminary} Let ${\bf x}:M^2\to \mathbb{R}^4_1$ be an oriented complete spacelike surface in 4-dimensional Lorentz space. The Lorentz inner product $\langle\cdot,\cdot\rangle$ is given by \[\langle {\bf x},{\bf x}\rangle=x_1^2+x_2^2+x_3^2-x_4^2.\] We will briefly review the basic facts and global results established in \cite{Ma} about such surfaces with zero mean curvature (called \emph{stationary surfaces}). Let $\mathrm{d}s^2=\mathrm{e}^{2\omega}|\mathrm{d}z|^2$ be the induced Riemannian metric on $M$ with respect to a local complex coordinate $z=u+\mathrm{i}v$. Hence \[ \langle {\bf x}_{z},{\bf x}_{z}\rangle=0,~~ \langle {\bf x}_{z},{\bf x}_{\bar{z}}\rangle =\frac{1}{2}\mathrm{e}^{2\omega}. \] Choose null vectors ${\bf y},{\bf y}^*$ in the normal plane at each point such that \[ \langle {\bf y},{\bf y}\rangle=\langle {\bf y}^*,{\bf y}^*\rangle=0, ~~ \langle {\bf y},{\bf y}^*\rangle =1,~~ \mathrm{det}\{{\bf x}_u,{\bf x}_v,{\bf y},{\bf y}^*\}>0~. \] Such frames $\{{\bf y},\ {\bf y}^{*}\}$ are determined up to a scaling \begin{equation}\label{scaling} \{{\bf y},\ {\bf y}^{*}\}\rightarrow \{\lambda {\bf y},\ \lambda^{-1}{\bf y}^{*}\} \end{equation} for some non-zero real-valued function $\lambda$.
After projection, we obtain two well-defined maps (independent of the scaling \eqref{scaling}) \[ [{\bf y}],\ [{\bf y}^* ]: M \rightarrow S^2\cong\{[{\bf v}]\in\mathbb{R}P^3|\langle {\bf v},{\bf v}\rangle=0\}. \] The target space is usually called the projective light-cone, which is well known to be homeomorphic to the 2-sphere. By analogy with $\mathbb{R}^3$, we call them \emph{Gauss maps} of the spacelike surface ${\bf x}$ in $\mathbb{R}^{4}_{1}$. The surface has zero mean curvature $\vec{H}=0$ if, and only if, $[{\bf y}],\ [{\bf y}^* ]: M \rightarrow S^2$ are conformal mappings (yet they induce opposite orientations on $S^2$). Since $S^2\cong \mathbb{C}\cup\{\infty\}$, we may represent them locally by a pair of holomorphic and anti-holomorphic functions $\{\phi,\bar{\psi}\}$. The Weierstrass-type representation of a stationary surface ${\bf x} : M\rightarrow \mathbb{R}^4_1$ is given by \cite{Ma}: \begin{equation}\label{x} {\bf x}= 2~\mathrm{Re}\int \Big(\phi+\psi, -\mathrm{i}(\phi-\psi),1-\phi\psi,1+\phi\psi\Big)\mathrm{d}h \end{equation} in terms of two meromorphic functions $\phi,\psi$ and a holomorphic $1$-form $\mathrm{d}h$ locally. We call $\phi,\psi$ the Gauss maps of ${\bf x}$ and $\mathrm{d}h$ the height differential. \begin{remark}\label{rem-Wrepre} When $\phi\equiv \mp 1/\psi$, by \eqref{x} we obtain a minimal surface in $\mathbb{R}^3$, or a maximal surface in $\mathbb{R}^3_1$. This recovers the Weierstrass representation in these classical cases. When $\phi$ or $\psi$ is constant, we get a zero mean curvature spacelike surface in the 3-space $\mathbb{R}^3_0\triangleq \{(x_1,x_2,x_3,x_3)\in\mathbb{R}^4_1\}$ with an induced degenerate inner product, which is essentially the graph of a harmonic function $x_3=f(x_1,x_2)$ on the complex plane $\mathbb{C}=\{x_1+\mathrm{i}x_2\}$. \end{remark} \noindent \textbf{Convention:} In this paper, we always assume that neither of $\phi,\psi$ is a constant unless it is stated otherwise.
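The first claim of Remark~\ref{rem-Wrepre} can be checked directly from \eqref{x}; the following one-line verification is ours:

```latex
% Direct check of Remark \ref{rem-Wrepre} (our computation): if
% $\phi \equiv -1/\psi$, then
\begin{equation*}
1+\phi\psi\equiv 0
\quad\Longrightarrow\quad
x_{4}=2\,\mathrm{Re}\int (1+\phi\psi)\,\mathrm{d}h=\mathrm{const},
\end{equation*}
% so the surface lies in a spacelike hyperplane $x_{4}=\mathrm{const}$,
% i.e. it is a minimal surface in $\mathbb{R}^{3}$; similarly,
% $\phi \equiv 1/\psi$ forces $1-\phi\psi\equiv 0$, hence
% $x_{3}=\mathrm{const}$, giving a maximal surface in $\mathbb{R}^{3}_{1}$.
```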
According to the remark above, we have ruled out the trivial case of stationary surfaces in $\mathbb{R}^3_0$. (According to \eqref{eq-totalcurvature} below, such surfaces have flat metrics and zero total Gaussian curvature.) \begin{remark}\label{rem-trans} The induced action of a Lorentz orthogonal transformation of $\mathbb{R}^4_1$ on the projective light-cone is nothing but a M\"obius transformation on $S^2$, or equivalently, a fractional linear transformation on $\mathbb{C}P^1=\mathbb{C}\cup\{\infty\}$ given by $A=\left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right)$ with $a,b,c,d \in \mathbb{C},~ad-bc=1$. The Gauss maps $\phi,\psi$ and the height differential $\mathrm{d}h$ transform as below: \begin{equation}\label{trans} \phi\Rightarrow \frac{a\phi+b}{c\phi+d}~,~~ \psi\Rightarrow \frac{\bar{a}\psi+\bar{b}}{\bar{c}\psi+\bar{d}}~,~~ \mathrm{d}h\Rightarrow (c\phi+d)(\bar{c}\psi+\bar{d})\mathrm{d}h~. \end{equation} This is repeatedly used in Sections~2 and 3 to simplify or to normalize the representation of examples. \end{remark} \begin{theorem}\label{thm-period}\cite{Ma} Let $\mathrm{d}h$ be a holomorphic $1$-form and $\phi,\psi:M\rightarrow \mathbb{C}\cup\{\infty\}$ meromorphic functions, all globally defined on a Riemann surface $M$. Suppose they satisfy the regularity conditions 1), 2) and the period conditions 3) below: 1) $\phi\neq\bar{\psi}$ on $M$ and their poles do not coincide; 2) The zeros of $\mathrm{d}h$ coincide with the poles of $\phi$ or $\psi$ with the same order; 3) Along any closed path the periods satisfy \begin{equation}\label{eq-period1} \oint_\gamma \phi \mathrm{d}h =-\overline{\oint_\gamma \psi \mathrm{d}h }, ~~~(\text{horizontal period condition}) \end{equation} \begin{equation}\label{eq-period2} \mathrm{Re}\oint_\gamma \mathrm{d}h=\mathrm{Re}\oint_\gamma \phi\psi \mathrm{d}h=0.~~~(\text{vertical period condition}) \end{equation} Then \eqref{x} defines a stationary surface ${\bf x}:M\rightarrow \mathbb{R}^4_1$.
Conversely, any stationary surface ${\bf x}:M\rightarrow \mathbb{R}^4_1$ can be represented as \eqref{x} in terms of such $\phi,\ \psi$ and $\mathrm{d}h$ over a (necessarily non-compact) Riemann surface $M$. \end{theorem} The structure equations and the integrability conditions are given in \cite{Ma}. An extremely important corollary is the formula below for the total Gaussian and normal curvature over a compact stationary surface $M$ with boundary $\partial M$: \begin{equation} \begin{split} \int_M(-K+\mathrm{i}K^{\perp})\mathrm{d}M&= 2\mathrm{i}\int_M \frac{\phi_z\bar{\psi}_{\bar{z}}}{(\phi-\bar{\psi})^2} \mathrm{d}z\wedge \mathrm{d}\bar{z} \\ &=-2\mathrm{i}\int_{\partial M} \frac{\phi_z}{\phi-\bar{\psi}} \mathrm{d}z =-2\mathrm{i}\int_{\partial M} \frac{\bar{\psi}_{\bar{z}}}{\phi-\bar{\psi}}\mathrm{d}\bar{z}. \label{eq-totalcurvature} \end{split} \end{equation} At one end $p$ with $\phi=\bar\psi$, the integral of total curvature above will become an improper integral. An important observation in \cite{Ma} is that this improper integral converges absolutely only for a special class of such ends. \begin{definition} Suppose ${\bf x}:D-\{0\}\rightarrow \mathbb{R}^4_1$ is an annular end of a regular stationary surface (with boundary) whose Gauss maps $\phi$ and $\psi$ extend to meromorphic functions on the unit disk $D\subset\mathbb{C}$. It is called a \emph{\underline{regular end}} when \[ \phi(0)\ne\bar\psi(0).~~~(\text{Thus}~ \phi(z)\ne\bar\psi(z),~\forall~z\in D.) \] It is a \emph{\underline{singular end}} if $\phi(0)=\bar\psi(0)$ where the value could be finite or $\infty$. When the multiplicities of $\phi$ and $\bar\psi$ at $z=0$ are equal, we call $z=0$ a \emph{\underline{bad singular end}}. Otherwise it is a \emph{\underline{good singular end}}. 
\end{definition} \begin{proposition}\cite{Ma}\label{prop-goodsingular} A singular end of a stationary surface ${\bf x}:D-\{0\}\rightarrow \mathbb{R}^4_1$ is \emph{good} if and only if the curvature integral \eqref{eq-totalcurvature} converges absolutely around this end. \end{proposition} For a good singular end we introduced the following definition of its index. \begin{definition}\cite{Ma}\label{lemma-index2} Suppose $p$ is an isolated zero of $\phi-\bar\psi$ in $p$'s neighborhood $D_p$, where holomorphic functions $\phi$ and $\psi$ take the value $\phi(p)=\overline{\psi(p)}$ with multiplicity $m$ and $n$, respectively. \emph{The index of $\phi-\bar{\psi}$ at $p$} (when $\phi,\psi$ are both holomorphic at $p$) is \begin{equation}\label{eqind} \mathrm{ind}_p(\phi-\bar{\psi})\triangleq \frac{1}{2\pi\mathrm{i}}\oint_{\partial D_{p}}d\ln(\phi-\bar{\psi})=\left\{ \begin{array}{ll} m, & \hbox{$m<n$;} \\ -n, & \hbox{ $m>n$.} \end{array} \right. \end{equation} \emph{The absolute index of $\phi-\bar{\psi}$ at $p$} is \begin{equation}\label{eqind+} \mathrm{ind}^{+}_p(\phi-\bar{\psi})\triangleq \left|\mathrm{ind}_p(\phi-\bar{\psi})\right|. \end{equation} For a regular end our index is still meaningful with $\mathrm{ind}=\mathrm{ind}^+=0.$ For convenience we also introduce \begin{equation}\label{eqind10} \mathrm{ind}^{1,0}\!\triangleq \frac{1}{2}(\mathrm{ind}^{+}\!+\mathrm{ind}),~~~ \mathrm{ind}^{0,1}\!\triangleq \frac{1}{2}(\mathrm{ind}^{+}\!-\mathrm{ind}), \end{equation} which are always non-negative. \end{definition} Note that our definition of index of $\phi-\bar\psi$ is invariant under the action of fractional linear transformation \eqref{trans}. So it is well-defined for a good singular end of a stationary surface. In particular, we can always assume that our singular ends do not coincide with poles of $\phi,\psi$; hence the definition above is valid. 
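To fix ideas, here is a minimal illustration of the index \eqref{eqind}; the example is ours, not taken from \cite{Ma}:

```latex
% A minimal illustration of \eqref{eqind} (our example): near $p=0$ take
% $\phi(z)=z$ (multiplicity $m=1$) and $\psi(z)=z^{2}$ (multiplicity $n=2$).
% On a small circle $z=\varepsilon e^{\mathrm{i}\theta}$ the term $z$
% dominates $\bar{z}^{2}$, so
\begin{equation*}
\mathrm{ind}_{0}(\phi-\bar{\psi})
=\frac{1}{2\pi\mathrm{i}}\oint_{|z|=\varepsilon}
\mathrm{d}\ln\left(z-\bar{z}^{2}\right)=1=m,
\end{equation*}
% the case $m<n$ of \eqref{eqind}. Exchanging the roles,
% $\phi(z)=z^{2}$, $\psi(z)=z$ gives $\mathrm{ind}_{0}=-1=-n$,
% since now $-\bar{z}$ dominates and winds once clockwise.
```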
\medskip A stationary surface in $\mathbb{R}^4_1$ is called an \emph{algebraic stationary surface} if there exists a compact Riemann surface $\overline{M}$ with $M=\overline{M}\backslash\{p_1,p_2,\cdots,p_r\}$ such that ${\bf x}_z \mathrm{d}z$ is a vector-valued meromorphic form defined on $\overline{M}$. In other words, the Gauss maps $\phi,\psi$ and the height differential $\mathrm{d}h$ extend to meromorphic functions/forms on $\overline{M}$. For this surface class we have established Gauss-Bonnet type formulas involving the indices of the good singular ends. \begin{theorem}[\cite{Ma}]\label{GB2} For a complete algebraic stationary surface ${\bf x}:M\rightarrow \mathbb{R}^4_1$ given by \eqref{x} in terms of $\phi,\psi,\mathrm{d}h$ without bad singular ends, the total Gaussian curvature and total normal curvature are related to the indices at the ends $p_j$ (singular or regular) by the following formulas: \begin{align} \int_M K^{\perp}\mathrm{d}M &=0~,\label{eq-deg0}\\ \int_M K\mathrm{d}M &=-4\pi \left(\deg\phi-\sum{_j} \mathrm{ind}^{1,0}(\phi-\bar{\psi})\right) \label{eq-deg1}\\ &=-4\pi \left(\deg\psi-\sum{_j} \mathrm{ind}^{0,1}(\phi-\bar{\psi})\right),\label{eq-deg2} \end{align} From \eqref{eq-deg1}\eqref{eq-deg2} we have equivalent identities: \begin{gather} \sum{_j}\mathrm{ind}_{p_j}(\phi-\bar{\psi})=\deg\phi-\deg\psi~.\label{eq-deg3}\\ \int_M K\mathrm{d}M =-2\pi \left(\deg\phi+ \deg\psi-\sum{_j} \mathrm{ind}^{+}_{p_j}(\phi-\bar{\psi})\right)~.\label{eq-deg4} \end{gather} \end{theorem} \begin{definition}\label{def-multiplicity} The multiplicity of a regular or singular end $p_j$ for a stationary surface in $\mathbb{R}^4_1$ is defined to be \[ \widetilde{d}_j=d_j-\mathrm{ind}^+_{p_j}, \] where $d_j+1$ is equal to the order of the pole of~ ${\bf x}_z \mathrm{d}z$ at $p_j$.
\end{definition} \begin{theorem} [Generalized Jorge-Meeks formula \cite{Ma}] \label{GB3} Given an algebraic stationary surface ${\bf x}:M\rightarrow \mathbb{R}^4_1$ with only regular or good singular ends $\{p_1,\cdots,p_r\}=\overline{M}-M$. Let $g$ be the genus of the compact Riemann surface $\overline{M}$, $r$ the number of ends, and $\widetilde{d}_j$ the multiplicity of $p_j$. We have \begin{equation}\label{eq-jorgemeeks2} \int_M K\mathrm{d}M=2\pi\left(2-2g-r-\sum_{j=1}^r \widetilde{d}_j\right)~, ~~\int_M K^{\perp}\mathrm{d}M =0~. \end{equation} \end{theorem} \begin{proposition}[\cite{Ma}]\label{ineq-multiplicity} Let ${\bf x}:D^2-\{0\}\rightarrow \mathbb{R}^4_1$ be a regular or a good singular end which is further assumed to be complete at $z=0$. Then its multiplicity satisfies $\widetilde{d}\ge 1.$ \end{proposition} \begin{corollary} [The Chern-Osserman type inequality \cite{Ma}] \label{GB2C} Let ${\bf x}:M\rightarrow \mathbb{R}^4_1$ be an algebraic stationary surface without bad singular ends, $\overline{M}=M\cup\{q_1,\cdots,q_r\}$. Then \begin{equation} \int K\mathrm{d}M \le 2\pi(\chi(M)-r)=4\pi(1-g-r). \end{equation} \end{corollary} \begin{corollary} [Quantization of total Gaussian curvature \cite{Ma}] \label{cor-quantization} Under the same assumptions as in the theorem above, when $\phi,\psi$ are not constants (equivalently, when ${\bf x}$ is not a flat surface in $\mathbb{R}^3_0$), there is always \[ -\int_M K\mathrm{d}M=4\pi k\ge 4\pi, \] where $k\ge 1$ is a positive integer. \end{corollary} \section{Orientable case and examples with $-\int K \mathrm{d}M=4\pi$} This section is dedicated to the classification of complete stationary surfaces immersed in $\mathbb{R}^4_1$ with total Gaussian curvature $-\int K \mathrm{d}M=4\pi$ which are orientable and of algebraic type.
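Before carrying out the classification, it may help to test \eqref{eq-jorgemeeks2} on the classical catenoid; this sanity check is ours:

```latex
% Sanity check of \eqref{eq-jorgemeeks2} (our computation): the classical
% catenoid has $g=0$, $r=2$ regular ends, each of multiplicity
% $\widetilde{d}_j=1$, hence
\begin{equation*}
\int_M K\,\mathrm{d}M=2\pi\bigl(2-2\cdot 0-2-(1+1)\bigr)=-4\pi,
\end{equation*}
% recovering the classical value of its total curvature.
```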
Under our hypothesis, the generalized Jorge-Meeks formula \eqref{eq-jorgemeeks2} yields \begin{equation}\label{4pi-end} r+\sum\widetilde{d}_j+2g=4, \end{equation} and the index formulas \eqref{eq-deg1}\eqref{eq-deg2} read \begin{equation}\label{4pi-ind} \deg\phi-\sum\mathrm{ind}^{1,0}=1,~~\deg\psi-\sum\mathrm{ind}^{0,1}=1. \end{equation} Since $r\ge 1$, and $\widetilde{d}_j\ge 1$ for any end, there must be $g\le 1$, and we need only to consider five cases separately as below. \vspace{3mm}\noindent $\bullet$~~\textbf{Case 1: $g=1, r=1, \widetilde{d}=1$ (torus with one end)}. Since there is only one end, at least one of the indices $\mathrm{ind}^{1,0},\mathrm{ind}^{0,1}$ is zero. By \eqref{4pi-ind} we know either $\phi$ or $\psi$ is a meromorphic function of degree $1$. Yet this contradicts the well-known fact that over a torus there do not exist such functions. So we rule out this possibility. \vspace{3mm}\noindent $\bullet$~~\textbf{Case 2: $g=0, r=1, \widetilde{d}=3$ and the unique end is regular}. Such examples exist and they are generalizations of the classical Enneper surface. \begin{example}[The generalized Enneper surfaces] \label{exa-enneper} This is given by \begin{equation}\label{eq-enneper1} \phi=z,\ \psi=\frac{c}{z}~,\ \mathrm{d}h=s\cdot z\mathrm{d}z,\ \end{equation} or \begin{equation}\label{eq-enneper2} \phi=z+1,\ \psi=\frac{c}{z}~,\ \mathrm{d}h=s\cdot z\mathrm{d}z, \end{equation} over $\mathbb{C}$ with complex parameters $c,s\in\mathbb{C}\backslash\{0\}$. ${\bf x}$ has no singular points if and only if the parameter $c=c_1+\mathrm{i}c_2$ is neither zero nor a positive real number in \eqref{eq-enneper1}, or \begin{equation}\label{eq-enneper3} c_1-c_2^2+\frac{1}{4}<0 \end{equation} in \eqref{eq-enneper2}. When $c=-1$ in \eqref{eq-enneper1} we obtain the Enneper surface in $\mathbb{R}^3$. \end{example} Indeed these are all the examples in Case 2, according to the following result in \cite{Ma}.
\begin{theorem}\label{thm-enneper}\cite{Ma} A complete immersed algebraic stationary surface in $\mathbb{R}^4_1$ with $\int K=-4\pi$ and one regular end is a generalized Enneper surface. \end{theorem} \vspace{3mm}\noindent $\bullet$~~\textbf{Case 3: $g=0, r=1, \widetilde{d}=3$ with a good singular end}. Suppose there exists such an example. Without loss of generality we assume that the singular end $p$ has positive index. Since $\mathrm{ind}\ge 1$, by definition we know that at $p$ the function $\psi$ takes the value $\psi(p)$ with multiplicity at least $2$. On the other hand, $\mathrm{ind}^{0,1}=0$ and $\deg\psi=1$, which contradicts the observation above. Hence such examples do not exist. \vspace{3mm}\noindent $\bullet$~~\textbf{Case 4: $g=0, r=2, \widetilde{d}_j=1$ and both ends are regular}. The classical catenoid is one such example. The generalization in $\mathbb{R}^4_1$ is \begin{example} [The generalized catenoids] \label{exa-catenoid} This is defined over $M=\mathbb{C}\backslash\{0\}$ with \begin{equation}\label{eq-catenoid} \phi=z+t,\ \psi=\frac{-1}{z-t},\ \mathrm{d}h=s\frac{z-t}{z^2}\mathrm{d}z.~~~~~ (-1<t<1, s\in\mathbb{R}\backslash\{0\}) \end{equation} \end{example} When $t=0$, it is the classical catenoid in $\mathbb{R}^3$. \begin{theorem}\label{thm-catenoid} \cite{Ma} A complete immersed algebraic stationary surface in $\mathbb{R}^4_1$ with total curvature $\int K=-4\pi$ and two regular ends is a generalized catenoid. \end{theorem} \vspace{3mm}\noindent $\bullet$~~\textbf{Case 5: $g=0, r=2, \widetilde{d}_j=1$ with at least one good singular end}. This is the most difficult case in our discussion. We will show step by step that there are no such examples. First, assume there is such a surface. We assert that it must have two singular ends whose indices have opposite signs.
Otherwise, if there is only one good singular end, which may be assumed to have positive index, then similarly to the discussion in Case 3 we can show $\deg\psi=1$ while $\psi$ has multiplicity greater than $1$ at the end, which is a contradiction. In the same way we can rule out the possibility that both ends are singular with indices of the same sign. Second, without loss of generality we may suppose $M=\mathbb{C}\backslash\{0\}$ and the good singular ends are $0$ and $\infty$, with $\mathrm{ind}_0=m\ge 1, \mathrm{ind}_\infty<0$. By \eqref{4pi-ind}, $\mathrm{ind}^{1,0}=m, \deg\phi=m+1$. If $\mathrm{ind}_\infty\le -m-1$, by definition we know $\psi$ has multiplicity at least $m+1$ at $z=\infty$, where $\phi$ must have higher multiplicity, which is impossible since $\deg\phi=m+1$. If $\mathrm{ind}_\infty\ge -m+1$, by definition and \eqref{4pi-ind} we know $\mathrm{ind}^{0,1}\le m-1, \deg\psi\le m$, which contradicts the requirement that $\psi$ must have multiplicity greater than $m$ at the first end $z=0$. In summary we must have \begin{equation}\label{eq-case51} \mathrm{ind}_0=m\ge 1,~~ \mathrm{ind}_\infty=-m,~~\deg\phi=\deg\psi=m+1\ge 2. \end{equation} We observe that $\phi(0)\ne\phi(\infty)$. Otherwise, since $z=0,\infty$ are both singular ends, we would have $\psi(0)=\phi(0)=\phi(\infty)=\psi(\infty)$. Because $z=0$ is a good singular end and $\mathrm{ind}_0=m$, $\psi$ has multiplicity at least $m+1$ at $z=0$ and multiplicity $m$ at $\infty$. This is impossible when $\deg\psi=m+1, m\ge 1$. This observation enables us to make the following normalization. Without loss of generality, suppose $\phi(0)=\psi(0)=0,\phi(\infty)=\psi(\infty)=\infty$. Since the meromorphic functions $\phi,\psi$ must be rational functions satisfying the restrictions \eqref{eq-case51}, we know \begin{equation}\label{eq-case52} \phi(z)=z^m(z-a),~~\psi(z)=\frac{z^{m+1}}{z-b},~~\mathrm{d}h=\rho\frac{z-b}{z^k}\mathrm{d}z, \end{equation} where $a,b,\rho$ are arbitrary nonzero complex parameters.
Note that $\mathrm{d}h$ takes the form above because the surface is regular at $z=b$, the pole of $\psi$. On the other hand, at the ends $z=0$ and $z=\infty$ it should satisfy $\widetilde{d}_0\ge 1,\widetilde{d}_{\infty}\ge 1$ according to Proposition~\ref{ineq-multiplicity}, which implies $k=m+2$ by the definition of $\widetilde{d}$. After fixing the form of $\phi,\psi,\mathrm{d}h$, we verify the period conditions. It is easy to see that the vertical period conditions are satisfied. The horizontal period conditions are satisfied if and only if $a+b=-\bar\rho/\rho.$ In summary, such examples have Weierstrass data \begin{equation}\label{eq-case53} \phi(z)=z^m(z-a),~~\psi(z)=\frac{z^{m+1}}{z-b}, ~~\mathrm{d}h=\rho\frac{z-b}{z^{m+2}}\mathrm{d}z, \end{equation} with parameters \begin{equation}\label{eq-case54} m\ge 1,~a,b,\rho\in \mathbb{C}\backslash\{0\},~a+b=-\bar\rho/\rho. \end{equation} If we can find nonzero parameters $a,b,\rho$ as above so that the regularity condition $\phi\ne\bar\psi$ holds true for any $ z\in\mathbb{C}\cup\{\infty\}$, then new examples with $-\int K\mathrm{d}M=4\pi$ are found. But according to Lemma~\ref{lem-main} in Appendix A, for any given nonzero parameters $a,b,\rho$ there always exist nonzero solutions $z$ to the equation $\phi(z)=\overline{\psi(z)}$ for $\phi,\psi$ given above. We conclude that there exist no examples in Case 5. This finishes the proof of the following theorem. \begin{theorem}\label{thm-4pi1} Complete regular algebraic stationary surfaces ${\bf x}:M\to\mathbb{R}^4_1$ with $-\int K\mathrm{d}M=4\pi$ are either the generalized catenoids or the generalized Enneper surfaces under the assumption that $M$ is orientable. \end{theorem} Another interesting observation is that if we make the change of variables $z=w^2$ in \eqref{eq-case52} and choose the power $k$ to be a suitable even number, then the period conditions always hold true and we do not need the restriction $a+b=-\bar\rho/\rho$ in \eqref{eq-case54}.
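For the reader's convenience we record the residue computations behind the two assertions above; this is a routine check, using the fact that the Weierstrass formula \eqref{x} turns the horizontal period conditions into $\oint\phi\,\mathrm{d}h=-\overline{\oint\psi\,\mathrm{d}h}$. For the data \eqref{eq-case53} we have
\[
\phi\,\mathrm{d}h=\rho\frac{(z-a)(z-b)}{z^{2}}\,\mathrm{d}z,\qquad
\psi\,\mathrm{d}h=\rho\frac{\mathrm{d}z}{z},
\]
with residues $-\rho(a+b)$ and $\rho$ at $z=0$; the condition above becomes $-2\pi\mathrm{i}\,\rho(a+b)=-\overline{2\pi\mathrm{i}\,\rho}=2\pi\mathrm{i}\,\bar\rho$, i.e. $a+b=-\bar\rho/\rho$. After the substitution $z=w^2$ (taking for illustration $m=1$, $b=a$, $\rho=1$ and the even power $k=4$),
\[
\phi\,\mathrm{d}h=\frac{(w^2-a)^2}{w^2}\,\mathrm{d}w,\qquad
\psi\,\mathrm{d}h=\mathrm{d}w,\qquad
\phi\psi\,\mathrm{d}h=(w^4-aw^2)\,\mathrm{d}w,
\]
and these $1$-forms, together with $\mathrm{d}h=(w^{-2}-aw^{-4})\mathrm{d}w$ itself, contain only even powers of $w$, hence have no residue at $w=0$; so all periods over the generating cycle of $\mathbb{C}\backslash\{0\}$ vanish without any restriction on $a$.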
In this situation, if the parameter $a=b$ is chosen suitably, the regularity condition $\phi\ne\bar\psi$ is satisfied. See Lemma~\ref{lem-a=b}. In this way we find a complete, immersed stationary surface in $\mathbb{R}^4_1$, yet with total curvature $-\int K\mathrm{d}M=8\pi$. See the example below (which has appeared in \cite{Ma}). \begin{example} [Genus zero, two good singular ends and $\int_M K\mathrm{d}M=-8\pi$] \label{exa-singular1} \[ M=\mathbb{C}\backslash\{0\},~\phi=w^2(w^2-a),~\psi=\frac{w^4}{w^2-a}, ~\mathrm{d}h=\frac{w^2-a}{w^4}\mathrm{d}w.~~(a\in\mathbb{C}\backslash\{0\}) \] The regularity, completeness and period conditions are satisfied when $-a$ is a sufficiently large positive real number (e.g. $-a>1$). For the proof of regularity, see Lemma~\ref{lem-a=b}. \end{example} \section{Non-orientable stationary surfaces and examples} In this section we consider non-orientable algebraic stationary surfaces and show that their total curvature is always greater than $4\pi$. For this purpose we need to consider their oriented double covering surface $\widetilde{M}$, and characterize the Weierstrass data over $\widetilde{M}$. This is a natural extension of Meeks' characterization of non-orientable minimal surfaces in $\mathbb{R}^3$ \cite{Meeks}. \subsection{Representation of non-orientable stationary surfaces} \begin{theorem}\label{thm-nonorientable} Let $\widetilde{M}$ be a Riemann surface with an anti-holomorphic involution $I:\widetilde{M}\to \widetilde{M}$ (i.e., a conformal automorphism of $\widetilde{M}$ reversing the orientation) without fixed points. Let $\{\phi,\psi,\mathrm{d}h\}$ be a set of Weierstrass data on $\widetilde{M}$ such that \begin{equation}\label{eq-nonorientable} \phi\circ I=\bar\psi,~~ \psi\circ I=\bar\phi,~~ I^*\mathrm{d}h=\overline{\mathrm{d}h}, \end{equation} which satisfy the regularity and period conditions as well.
Then they determine a non-orientable stationary surface \[ M=\widetilde{M}/\{\mathrm{id},I\}\to \mathbb{R}^4_1 \] by the Weierstrass representation formula \eqref{x}. Conversely, any non-orientable stationary surface ${\bf x}:M\to \mathbb{R}^4_1$ can be constructed in this way. \end{theorem} \begin{remark} Geometrically, \eqref{eq-nonorientable} is a consequence of reversing the orientation of the tangent plane by $z\to \bar{z}$, and reversing the induced orientation of the normal plane by interchanging the lightlike normal directions $[{\bf y}],\ [{\bf y}^*]$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm-nonorientable}] We prove the converse first. It is well-known that any non-orientable surface $M$ has an orientable two-sheeted covering surface $\widetilde{M}$ with an orientation-reversing homeomorphism $I$, and $M$ is realized as the quotient surface \[ M=\widetilde{M}/\mathbb{Z}_2=\widetilde{M}/\{\mathrm{id},I\}. \] Denote by $\pi$ the quotient map. Notice that $\widetilde{M}$ is endowed with the complex structure induced from the metric. When $z$ is a local complex coordinate over a domain $U\subset\widetilde{M}$ which projects to $M$ one-to-one, $\bar{z}$ is also a coordinate over $I(U)$ compatible with the orientation on $\widetilde{M}$. Consider the stationary surface $\tilde{\bf x}\triangleq {\bf x}\circ \pi:\widetilde{M}\to \mathbb{R}^4_1$. In the chart $(U,z)$ we have \[ \tilde{{\bf x}}_z \mathrm{d}z=\Big(\phi+\psi, -\mathrm{i}(\phi-\psi),1-\phi\psi,1+\phi\psi\Big)\mathrm{d}h~. \] Then in the corresponding chart $(I(U),w=\bar{z})$, consider $\tilde{{\bf x}}^*=\tilde{{\bf x}}\circ I:I(U)\to \mathbb{R}^4_1$ and we have \[ \tilde{{\bf x}}^*_w \mathrm{d}w=\Big(\bar\phi+\bar\psi, \mathrm{i}(\bar\phi-\bar\psi),1-\bar\phi \bar\psi,1+\bar\phi \bar\psi\Big)\mathrm{d}\bar{h}~. \] This implies \eqref{eq-nonorientable}. Now we prove the first part.
If $M=\widetilde{M}/\{\mathrm{id},I\}$ as described in the theorem and $\phi,\psi,\mathrm{d}h$ satisfy condition \eqref{eq-nonorientable} as well as the regularity and period conditions, then the integral along any path $\gamma\subset\widetilde{M}$ yields two stationary surfaces \begin{eqnarray*} \tilde{\bf x} \!&=& 2~\mathrm{Re}\int_\gamma \Big(\phi+\psi, -\mathrm{i}(\phi-\psi),1-\phi\psi,1+\phi\psi\Big)\mathrm{d}h,\\ \tilde{{\bf x}}\circ I~=~\tilde{{\bf x}}^*\!&=& 2~\mathrm{Re}\int_\gamma\Big(\bar\psi+\bar\phi, -\mathrm{i}(\bar\psi-\bar\phi),1-\bar\psi \bar\phi,1+\bar\psi \bar\phi\Big)\mathrm{d}\bar{h}~. \end{eqnarray*} If we assign the same initial value, then either integration above gives the same result, because the integrands are the real parts of a holomorphic vector-valued function and of its complex conjugate. So $p\in \widetilde{M}$ and $I(p)\in\widetilde{M}$ are mapped to the same point in $\mathbb{R}^4_1$, yet with opposite induced orientations on the same surface. After taking the quotient we get a stationary immersion of the non-orientable $M$ into $\mathbb{R}^4_1$. This finishes the proof. \qed\end{proof} As an application of this theorem, we give a natural generalization of Meeks and Oliveira's construction of minimal M\"obius strips. \begin{example} [Generalization in $\mathbb{R}^4_1$ of Meeks' minimal M\"obius strip] \label{exa-meeks} This is defined on $\widetilde{M}=\mathbb{C}\backslash\{0\}$ with the involution $I:z\to -1/\bar{z}$ and the Weierstrass data \begin{equation} \phi=\frac{z-\lambda}{z-\bar\lambda}\cdot z^{2m},~~ \psi=\frac{1+\bar\lambda z}{1+\lambda z}\cdot\frac{1}{z^{2m}},~~ \mathrm{d}h=\mathrm{i}\frac{(z-\bar\lambda)(1+\lambda z)}{z^2}\mathrm{d}z, \end{equation} where $\lambda$ is a complex parameter satisfying $\lambda\ne \pm 1, |\lambda|=1$, and the integer $m\ge 1$. \end{example} \begin{remark} When $\lambda=\pm\mathrm{i}$ we have $\phi=-1/\psi$, and the example above is equivalent to Oliveira's examples in $\mathbb{R}^3$ \cite{Oliveira}.
(Meeks' example \cite{Meeks} corresponds to the case $m=1$.) Otherwise this is a full map in $\mathbb{R}^4_1$. Furthermore, for fixed $m$ these examples are not congruent to each other unless the values of the parameter $\lambda$ are the same or differ by complex conjugation, because the cross ratio \[\mathrm{cr}(0,\infty;\lambda,\bar\lambda) =\frac{\lambda}{\bar\lambda}\] between the zeros and poles in the normal form of $\phi$ is an invariant. \end{remark} \begin{proposition}\label{prop-olivaira} Example~\ref{exa-meeks} is a complete immersed stationary M\"obius strip with a regular end and total Gaussian curvature $2(2m+1)\pi$. \end{proposition} \begin{proof} We start from a general case, a M\"obius strip $M=\widetilde{M}/\{\mathrm{id},I\}\to\mathbb{R}^4_1$ with \[ \widetilde{M}=\mathbb{C}\backslash\{0\},~~I:z\to -1/\bar{z},~~ \phi(z)=\frac{az+b}{cz+d}\cdot z^{2m}.~~~(a,b,c,d\in \mathbb{C}, ad-bc\ne 0) \] To satisfy condition \eqref{eq-nonorientable}, there must be \[ \psi=\overline{\phi(-1/\bar{z})}= \frac{\bar{b}z-\bar{a}}{\bar{d}z-\bar{c}}\cdot\frac{1}{z^{2m}}~. \] The surface is regular outside the ends $\{0,\infty\}$. Together with $\mathrm{d}h^*=\overline{\mathrm{d}h}$, this implies \[ \mathrm{d}h=\mathrm{i}\frac{(cz+d)(\bar{d}z-\bar{c})}{z^2}\mathrm{d}z \] up to multiplication by a real constant. Under these conditions it is easy to verify that the metric is complete. Next, let us check the period conditions. The horizontal periods vanish automatically since $\phi\mathrm{d}h,\psi\mathrm{d}h$ have no residues at $0$ and $\infty$. The vertical periods must vanish, hence $|d|^2=|c|^2,~|b|^2=|a|^2.$ Without loss of generality we may write \[ \phi=\frac{z-\lambda}{z-\bar\lambda}\cdot z^{2m},~~~|\lambda|=1. \] To simplify $\phi$ to this form we have utilized the freedom to change the complex coordinate by $z\to \mu z$ and the (fractional) linear transformation $\phi\to \mu'\phi$ induced from the Lorentz transformations of $\mathbb{R}^4_1$ (see \eqref{trans}).
We are left to verify $\phi\ne \bar\psi$ over $\mathbb{C}\backslash\{0\}$. (At the ends $z=0,\infty$ it is obviously true. So they are regular ends.) Suppose $\phi(z)=\bar\psi(z)$ for some $z\in\mathbb{C}$. Substituting the expressions of $\phi,\psi$ into this equation, we obtain \[ |z|^{4m}=\frac{(z-\bar\lambda)(-1/\bar{z}-\lambda)} {(z-\lambda)(-1/\bar{z}-\bar\lambda)} =\mathrm{cr}\left(z,-1/\bar{z};\bar\lambda,\lambda\right). \] Since the cross ratio on the right hand side takes a real value, the four points $z,\frac{-1}{\bar{z}};\bar\lambda,\lambda$ are located on a circle $C$ in the complex plane $\mathbb{C}$. We assert that this circle $C$ cannot be identical to the unit circle. (Otherwise $|z|=1$ and the cross ratio above is $1$. This holds true only if $z=\frac{-1}{\bar{z}}$, which is impossible, or $\lambda=\bar\lambda=\pm 1$, which has been ruled out in Example~\ref{exa-meeks}.) The circle $C$ intersects the unit circle at $\lambda$ and $\bar\lambda$. Observe that any circle passing through $z,\frac{-1}{\bar{z}}$ will intersect the unit circle at an antipodal point pair. (Because under the inverse of the standard stereographic projection, $z,\frac{-1}{\bar{z}}$ correspond to two antipodal points on $S^2$, and the unit circle corresponds to the equator. Any circle passing through the inverse images of $z,\frac{-1}{\bar{z}}$ on $S^2$ will intersect the equator again at two antipodal points. After taking the stereographic projection back to $\mathbb{C}$ we get the conclusion.) As a consequence, $\lambda=\pm\mathrm{i}$. But this time the aforementioned cross ratio can only take a negative real value (because on the circle $C$, $z,\frac{-1}{\bar{z}}$ must be separated by $\pm\mathrm{i}$). This contradiction finishes our proof. \qed\end{proof} When $m=1$ this example has the smallest possible total curvature $6\pi$ among non-orientable algebraic stationary surfaces.
(Note that the classical Henneberg surface in $\mathbb{R}^3$ has total curvature $2\pi$, yet with four branch points.) This conclusion is a corollary of a series of propositions below. \subsection{Non-orientable stationary surfaces of least total curvature} In general we are interested in finding the least possible total curvature for non-orientable stationary surfaces of a given topological type. This is motivated by discussions of F. Martin in \cite{Martin}. Compared with minimal surfaces in $\mathbb{R}^3$, this general case looks even more interesting (at least to the authors). As a consequence of Theorem~\ref{thm-nonorientable}, for a complete non-orientable stationary surface with double covering $\widetilde{M}$ of genus $g$ with $2r$ ends, there must be $\deg\phi=\deg\psi$; the index formula \eqref{eq-deg4} together with the Jorge-Meeks formula \eqref{eq-jorgemeeks2} implies \begin{equation}\label{eq-jorgemeeks3} -\int_M K=2\pi \Big(\deg\phi-\sum_{j=1}^{r} |\mathrm{ind}_{p_j}|\Big)=2\pi\Big(g+r-1+\sum_{j=1}^{r} \tilde{d}_j\Big)~. \end{equation} Because $r\ge 1$ and $\tilde{d}_j\ge 1$, we know \[-\int_M K\ge 2\pi(g+1).\] A better estimate is given in the following proposition. \begin{proposition}\label{prop-least} Given a non-orientable surface $M$ whose double covering space $\widetilde{M}$ has genus $g$ and finitely many punctures, there does not exist a complete algebraic stationary immersion ${\bf x}:M\to\mathbb{R}^4_1$ with total Gaussian curvature $-\int_M K\mathrm{d}M=2\pi(g+1)$. In other words, under our assumptions there must be \begin{equation}\label{eq-lowerbound1} -\int_M K\mathrm{d}M\ge 2\pi(g+2). \end{equation} \end{proposition} \begin{proof} Consider the lift of ${\bf x}$, i.e., $\tilde{\bf x}:\widetilde{M}\to\mathbb{R}^4_1$. Since the immersion is algebraic and $-\int_{\widetilde{M}} K<+\infty$, it has finitely many regular or good singular ends, and the total number is an even number $2r$ ($r$ is the number of ends of $M$).
By the modified Jorge-Meeks formula \eqref{eq-jorgemeeks3} and $r\ge 1$, a Chern-Osserman type inequality is obtained: \[ -\int_M K\mathrm{d}M\ge 2\pi(g+1). \] Suppose the equality is achieved. Then there must be two ends for $\widetilde{M}$ and $\tilde{d}_1=\tilde{d}_2=1$. Both of them are regular ends or good singular ends at the same time. We will show that in either case there will be a contradiction. \textbf{Case 1: regular end(s)}. The multiplicity $\tilde{d}_1=d_1=1$, and $\tilde{\bf x}_z\mathrm{d}z$ at the end $p_1$ has a pole of order $2$. In a local coordinate chart with $z(p_1)=0$ we write out the Laurent expansion of $\tilde{\bf x}_z$: \[ \tilde{\bf x}_z=\frac{1}{z^2}~{\bf v}_2+\frac{1}{z}~{\bf v}_1+(\text{holomorphic part}). \] Since this is a regular end, ${\bf v}_2$ is an isotropic vector whose real and imaginary parts span a 2-dimensional spacelike subspace. ${\bf v}_1$ is a real vector orthogonal to ${\bf v}_2$ by the period condition and $<\tilde{\bf x}_z,\tilde{\bf x}_z>=0$. Thus in $\mathbb{R}^4_1$ there exists a constant non-zero real vector ${\bf v}_0\perp {\bf v}_2,{\bf v}_1$. At the other end $p_2=I(p_1)$ with local coordinate $w=\bar{z}$, because $\tilde{\bf x}_w=\tilde{\bf x}_{\bar{z}}$, we know the same ${\bf v}_0$ is orthogonal to the principal part of the Laurent series. Thus $<\tilde{\bf x}_z\mathrm{d}z,{\bf v}_0>$ is a holomorphic $1$-form, and $<\tilde{\bf x},{\bf v}_0>$ is a harmonic function defined on the whole compact Riemann surface. It must be a constant; hence $\tilde{\bf x}$ as well as ${\bf x}$ is contained in a 3-dimensional subspace of $\mathbb{R}^4_1$. Yet this is impossible, since in $\mathbb{R}^3_1$ or $\mathbb{R}^3_0$ there exist no immersed spacelike non-orientable surfaces. The possibility of $M\subset\mathbb{R}^3$ can be ruled out by Schoen's famous result \cite{Schoen} that any complete, connected, oriented minimal surface in $\mathbb{R}^3$ with two embedded ends is congruent to the catenoid.
(Alternatively, we may argue by the maximum principle once again. Since the unique end of $M$ is an embedded end in $\mathbb{R}^3$, which is either a catenoid end or a planar end, one can choose the coordinates of $\mathbb{R}^3$ suitably so that the height function ${\bf x}_3$ is bounded from below over the whole $M$. Such a harmonic function must be a constant, and $M\subset \mathbb{R}^2$. Contradiction.) \textbf{Case 2: good singular end(s)}. At the good singular end $p_1$, without loss of generality, suppose it has $\mathrm{ind}=m\ge 1$ and $\phi(p_1)=\psi(p_1)=0$. Then $\tilde{d}_1=d_1-m=1$, and $\tilde{\bf x}_z\mathrm{d}z$ has a pole of order $m+2$ at $p_1$. There always exists a suitable local coordinate $z$ such that $z(p_1)=0$ and \[ \mathrm{d}h=\frac{\mathrm{d}z}{z^{m+2}},~~ \phi(z)=a_0 z^m + a_1 z^{m+1} + O(z^{m+2}),~~ \psi(z)=b_1 z^{m+1} + O(z^{m+2}). \] By \eqref{x} we know \begin{eqnarray*} \tilde{\bf x}_z \mathrm{d}z &=& \Big(\phi+\psi, -\mathrm{i}(\phi-\psi),1-\phi\psi,1+\phi\psi\Big)\mathrm{d}h\\ &=&\frac{\mathrm{d}z}{z^{m+2}}~\begin{pmatrix}0 \\0 \\1 \\1\end{pmatrix} +\frac{\mathrm{d}z}{z^2}~\begin{pmatrix}a_0 \\-\mathrm{i} a_0 \\0 \\0\end{pmatrix} +\frac{\mathrm{d}z}{z}~\begin{pmatrix}a_1+b_1 \\-\mathrm{i} (a_1-b_1) \\0 \\0\end{pmatrix} +(\text{holomorphic part}). \end{eqnarray*} Take ${\bf v}_0=(0,0,1,1)$. We can argue as in Case 1 to show that $<\tilde{\bf x},{\bf v}_0>$ is a harmonic function defined on the whole compact Riemann surface, hence a constant. (The key point is that $M$ has only one end.) Thus $\tilde{\bf x}(\widetilde{M})$ as well as ${\bf x}(M)$ is contained in an affine space $\mathbb{R}^3_0$ (orthogonal to ${\bf v}_0$). Yet this is also impossible for a non-orientable spacelike surface. \qed\end{proof} We will show that the lower bound can be improved to $2\pi(g+3)$, the same as in the case of non-orientable minimal surfaces in $\mathbb{R}^3$.
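Before the statement, we record the arithmetic behind the borderline case, a direct count from \eqref{eq-jorgemeeks3}: if $-\int_M K\mathrm{d}M=2\pi(g+2)$, then
\[
g+r-1+\sum_{j=1}^{r}\tilde d_j=g+2,\qquad\text{i.e.}\qquad r+\sum_{j=1}^{r}\tilde d_j=3,
\]
and since $r\ge 1$ and $\tilde d_j\ge 1$ (so that $\sum_j\tilde d_j\ge r$), the only possibility is $r=1$ and $\tilde d_1=2$; accordingly the double covering $\widetilde M$ has exactly two ends, each of multiplicity $2$.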
\begin{theorem}\label{thm-least} Given a non-orientable surface $M$ whose double covering space $\widetilde{M}$ has genus $g$ and finitely many punctures, there does not exist a complete algebraic stationary immersion ${\bf x}:M\to\mathbb{R}^4_1$ with total Gaussian curvature $-\int_M K\mathrm{d}M=2\pi(g+2)$. In other words, under our assumptions there must be \begin{equation}\label{eq-lowerbound2} -\int_M K\mathrm{d}M\ge 2\pi(g+3). \end{equation} \end{theorem} \begin{proof} As in the proof of Proposition~\ref{prop-least}, consider the lift of ${\bf x}$, i.e., $\tilde{\bf x}:\widetilde{M}\to\mathbb{R}^4_1$. Suppose the lower bound $2\pi(g+2)$ is attained. Then $\widetilde{M}$ has two ends $p_1,p_2$ with $\tilde{d}_1=\tilde{d}_2=2$ by \eqref{eq-jorgemeeks3}. By symmetry, both of them are regular or good singular ends at the same time. Each possibility is ruled out using different arguments. When both ends are good singular ends, we use the same argument as in Case~2 of Proposition~\ref{prop-least}. At the good singular end $p_1$, without loss of generality, suppose it has $\mathrm{ind}=m\ge 1$ and $\phi(p_1)=\psi(p_1)=0$. Then $\tilde{d}_1=d_1-m=2$, and $\tilde{\bf x}_z\mathrm{d}z$ has a pole of order $m+3$ at $p_1$. There always exists a suitable local coordinate $z$ such that $z(p_1)=0$ and \[ \mathrm{d}h=\frac{\mathrm{d}z}{z^{m+3}},~~ \phi(z)=a_0 z^m + a_1 z^{m+1} + a_2 z^{m+2} + O(z^{m+3}),~~ \psi(z)=b_1 z^{m+1} + b_2 z^{m+2} + O(z^{m+3}). \] By the Weierstrass representation formula we know \begin{eqnarray*} \tilde{\bf x}_z &=&\frac{1}{z^{m+3}}~\begin{pmatrix}0 \\0 \\1 \\1\end{pmatrix} +\frac{1}{z^3}~\begin{pmatrix}a_0 \\-\mathrm{i} a_0 \\0 \\0\end{pmatrix} +\frac{1}{z^2}~\begin{pmatrix}a_1+b_1 \\-\mathrm{i} (a_1-b_1) \\0 \\0\end{pmatrix} +\frac{1}{z}~\begin{pmatrix}a_2+b_2 \\-\mathrm{i} (a_2-b_2) \\a_0 b_1 z^{m-1} \\-a_0 b_1z^{m-1}\end{pmatrix}\\ && +(\text{holomorphic part}). \end{eqnarray*} Take ${\bf v}_0=(0,0,1,1)$.
Because $m\ge 1$, $<\tilde{\bf x},{\bf v}_0>$ is a harmonic function (with the leading term $\ln|z|$) bounded from below or above in a neighborhood of $p_1$. Since $M$ has only one end, around which the assertion above is still valid, we conclude that $<\tilde{\bf x},{\bf v}_0>$ is a harmonic function bounded from below or above over the whole compactified surface. It must be a constant, and the surface is contained in a 3-space. As in Proposition~\ref{prop-least} this leads to a contradiction. In the case that both ends are regular, $\sum_{j}|\mathrm{ind}_{p_j}|=0$, so \eqref{eq-jorgemeeks3} gives $\deg\phi=g+2$; consider the anti-holomorphic automorphism $I:\widetilde{M}\to \widetilde{M}$ without fixed points with $M\cong \widetilde{M}/\{\mathrm{id},I\}$. Under the assumptions above, the Gauss map $\phi$ can be viewed as a continuous map from the oriented double covering space to the round 2-sphere such that \[ \phi:\{\widetilde{M};I\}\to S^2\subset \mathbb{R}^3,~~~\text{s.t.},~\phi(p)\ne \phi(I(p)). \] It is a standard fact that such a map is homotopic to an odd map \[ \tilde\phi\triangleq \frac{\phi(p)-\phi(I(p))}{|\phi(p)-\phi(I(p))|}, ~~~\tilde\phi(I(p))=-\tilde\phi(p), \] where an explicit homotopy is given by $H(p,t)= \frac{\phi(p)-t\phi(I(p))}{|\phi(p)-t\phi(I(p))|}$. According to Theorem~\ref{thm-odd} in Appendix~B, the mapping degree of such an odd map and $g-1$ must be both even or both odd. Thus the mapping degree cannot be $g+2$, since $g+2$ and $g-1$ differ by the odd number $3$. This finishes the proof. \qed\end{proof} \begin{conjecture}\label{conj} The lower bound \eqref{eq-lowerbound2} $-\int_M K\mathrm{d}M\ge 2\pi(g+3)$ is sharp for any given $g\ge 0$. In other words, there always exists a complete, immersed, algebraic non-orientable stationary surface whose double covering surface has genus $g$ and whose total Gaussian curvature equals $2\pi(g+3)$. \end{conjecture} This is a generalization of Conjecture~1 in \cite{Martin} for non-orientable minimal surfaces in $\mathbb{R}^3$. It is verified in $\mathbb{R}^3$ when $g=0$ and $g=1$. The corresponding examples are Meeks' M\"obius strip \cite{Meeks} and Lopez's Klein bottle \cite{Lopez}.
For higher genus $g$ this conjecture is still open. As a direct consequence of Theorem~\ref{thm-least} we obtain the following result: \begin{theorem}\label{thm-nonorientable4pi} There does not exist a complete, algebraic, immersed non-orientable stationary surface in $\mathbb{R}^4_1$ with total Gaussian curvature $-\int_M K\mathrm{d}M=4\pi$. \end{theorem} Combined with Theorem~\ref{thm-4pi1}, this finishes the proof of our classification theorem (Theorem A in the Introduction). \begin{remark}\label{rem-oliveira} We note a significant difference between non-orientable stationary surfaces in $\mathbb{R}^4_1$ and $\mathbb{R}^4$. Oliveira \cite{Oliveira} constructed complete M\"obius bands in $\mathbb{R}^4$ with total curvature $2\pi m$ for any $m\ge 2$. So the total curvature $4\pi$ can be realized in that case. \end{remark} In the proof of Theorem~\ref{thm-least}, when treating the special case with only regular ends, we have in fact obtained the following proposition, which is a partial generalization of Meeks' result (Corollary~1 in \cite{Meeks}): \begin{proposition} A complete non-orientable stationary surface in $\mathbb{R}^4_1$ of algebraic type without singular ends must have total curvature $-\int_M K\mathrm{d}M=2\pi m$, where $m\equiv g-1 \ (\mathrm{mod}~2)$, and $g$ is the genus of the oriented double covering surface. \end{proposition} So far we do not know whether this remains true in the general case when good singular ends exist. \section{Non-algebraic examples with small total Gaussian curvature} Recall the following classical result. \begin{theorem} Let $(M,\mathrm{d}s^2)$ be a non-compact surface with a complete metric. Suppose $\int_M|K|\mathrm{d}M<+\infty$. Then: (1) (Huber \cite{Huber}) There is a compact Riemann surface $\overline{M}$ such that $M$ as a Riemann surface is biholomorphic to $\overline{M}\backslash\{p_1,p_2,\cdots,p_r\}$.
(2) (Osserman \cite{Osser}) When this is a minimal surface in $\mathbb{R}^3$ with the induced metric $\mathrm{d}s^2$, the Gauss map $G=\phi=-1/\psi$ and the height differential $\mathrm{d}h$ extend to each end $p_j$ analytically. (3) (Jorge and Meeks \cite{Jor-Meeks}) As in (1) and (2), suppose the minimal surface $M\to \mathbb{R}^3$ has $r$ ends and $\overline{M}$ is the compactification with genus $g$. The total curvature is related to these topological invariants via the Jorge-Meeks formula: \begin{equation}\label{eq-jorgemeeks} \int_M K\mathrm{d}M=2\pi\left(2-2g-r-\sum_{j=1}^r d_j\right)~. \end{equation} Here $d_j+1$ equals the highest order of the pole of ${\bf x}_z \mathrm{d}z$ at $p_j$, and $d_j$ is called \emph{the multiplicity at the end $p_j$}. \end{theorem} Huber's conclusion (1) means \emph{finite total curvature $\Rightarrow$ finite topology}, which is a purely intrinsic result. In particular, this is valid also for stationary surfaces in $\mathbb{R}^4_1$. Surprisingly, as to the extrinsic geometry, Osserman's result (2) is no longer true in $\mathbb{R}^4_1$. In particular we have the non-algebraic counterexamples given below: \begin{example} [$M_{k,a}$ with essential singularities and finite total curvature \cite{Ma}] \label{exa-essen} \begin{equation}\label{ex-essential} M_{k,a}\cong \mathbb{C}-\{0\},~\phi=z^k \mathrm{e}^{az},~\psi=-\frac{\mathrm{e}^{az}}{z^k},~\mathrm{d}h=\mathrm{e}^{-az}\mathrm{d}z~, \end{equation} where the integer $k$ and the real number $a$ satisfy $k\ge 2, 0< a<\frac{\pi}{2}$. \end{example} \begin{proposition}\cite{Ma}\label{prop-essential} The stationary surfaces $M_{k,a}$ in Example~\ref{exa-essen} are regular, complete stationary surfaces with two ends at $z=0,\infty$ satisfying the period conditions. Moreover their total curvature converges absolutely with \begin{equation}\label{eq-essential} \int_M K\mathrm{d}M=-4\pi k~, ~~\int_M K^{\perp}\mathrm{d}M =0~.
\end{equation} \end{proposition} \begin{remark} Taking a different height differential $\mathrm{d}h$ in Example~\ref{exa-essen}, we can obtain other examples with the same total Gaussian curvature. Yet the total Gaussian curvature $-\int_M K\mathrm{d}M=4\pi$ cannot be realized, since when $k=1$ the integral is not absolutely convergent. \end{remark} Similarly to the construction of Example~\ref{exa-essen} and the proof of Proposition~\ref{prop-essential} in \cite{Ma}, we have the non-orientable, non-algebraic examples below. \begin{example} [stationary M\"obius strips with essential singularities and finite total curvature] \label{exa-essen2} \begin{equation}\label{ex-essential2} \phi=z^{2p-1}\mathrm{e}^{\frac{1}{2}(z-\frac{1}{z})},~~ \psi=\frac{-1}{z^{2p-1}}\mathrm{e}^{\frac{1}{2}(z-\frac{1}{z})},~~ \mathrm{d}h=\mathrm{d}~\mathrm{e}^{-\frac{1}{2}(z-\frac{1}{z})}.~~(p\in\mathbb{Z}_{\ge 2}) \end{equation} \end{example} \begin{proposition} Example~\ref{exa-essen2} is a complete immersed stationary M\"obius strip with finite total curvature $\int|-K+\mathrm{i}K^{\perp}|\mathrm{d}M<+\infty$. We have \[ -\int_M K\mathrm{d}M=2(2p-1)\pi,~~\int_M K^{\perp}\mathrm{d}M=0. \] In particular, the smallest possible value of their total Gaussian curvature is $6\pi$. \end{proposition} \begin{proof} It is easy to verify $\phi^*=\bar\psi,\psi^*=\bar\phi, \mathrm{d}h^*=\overline{\mathrm{d}h}$. So we obtain a M\"obius strip according to Theorem~\ref{thm-nonorientable}. The regularity is also easy to verify. For example, if there exists $z$ such that $\phi(z)=\bar\psi(z)$, by \eqref{ex-essential2} we get $z\bar{z}~\mathrm{e}^{\mathrm{i}\cdot\mathrm{Im}(z-\frac{1}{z})}=-1$. So $|z|=1$ and $z=\mathrm{e}^{\mathrm{i}\theta}$. Inserting this back into the previous equation, we obtain $\mathrm{e}^{2\mathrm{i}\sin\theta}=-1$, which is impossible. Next we check the period conditions.
Since \[ \mathrm{d}h=\mathrm{d}~\mathrm{e}^{-\frac{1}{2}(z-\frac{1}{z})},~~ \phi\psi\mathrm{d}h=\mathrm{d}~\mathrm{e}^{\frac{1}{2}(z-\frac{1}{z})}, \] both being exact $1$-forms, there are no vertical periods. At the same time, \[ \phi\mathrm{d}h=-\frac{1}{2}\left(1+\frac{1}{z^2}\right)z^{2p-1}\mathrm{d}z,~~ \psi\mathrm{d}h=-\frac{1}{2}\left(1+\frac{1}{z^2}\right)\frac{-1}{z^{2p-1}}\mathrm{d}z. \] When $p\ge 2$ neither of these $1$-forms has a residue. So there are no horizontal periods. By direct computation one can show that the integral of the absolute total curvature $\int|-K+\mathrm{i}K^{\perp}|\mathrm{d}M$ is asymptotic to $\int |z|^{2-4p}\mathrm{d}z\mathrm{d}\bar{z}$ when $z\to\infty$, or to $\int |z|^{4p-6}\mathrm{d}z\mathrm{d}\bar{z}$ when $z\to 0$. Thus when $p\ge 2$ the total curvature integral converges absolutely. Approximate $\widetilde{M}$ by the domains $A_{r,R}\triangleq\{0<r\le |z|\le R\}$. By Stokes' theorem we get \begin{align*} \int_{A_{r,R}}(-K+\mathrm{i}K^{\perp})\mathrm{d}M &=2\mathrm{i}\int_{A_{r,R}} \frac{\phi_z\bar{\psi}_{\bar{z}}}{(\phi-\bar{\psi})^2} \mathrm{d}z\wedge \mathrm{d}\bar{z}\\ &=-2\mathrm{i}\oint_{|z|=R} \frac{\phi_z}{\phi-\bar{\psi}}\mathrm{d}z+2\mathrm{i}\oint_{|z|=r} \frac{\phi_z}{\phi-\bar{\psi}}\mathrm{d}z~, \end{align*} where \[\frac{\phi_z}{\phi-\bar{\psi}}= \frac{|z|^{4p-2}\left(\frac{1}{2}+\frac{2p-1}{z}+\frac{1}{2z^2}\right)} {|z|^{4p-2}+\mathrm{e}^{-\mathrm{i}\cdot\mathrm{Im}(z-\frac{1}{z})}}~. \] When $R\to\infty$ the first contour integral converges to $-2\mathrm{i}(2p-1)\cdot 2\pi\mathrm{i}$. When $r\to 0$ the second contour integral converges to $0$. This completes the proof. \qed\end{proof} In the discussion above, when $p=1$ the horizontal period condition is violated, and the total curvature integral does not converge absolutely. Thus among these simplest examples (including Example~\ref{exa-essen}) we cannot find one with total Gaussian curvature $4\pi$.
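To make the failure of the horizontal period condition at $p=1$ explicit, here is a routine residue check, in the convention where the Weierstrass formula \eqref{x} turns this condition into $\oint\phi\,\mathrm{d}h=-\overline{\oint\psi\,\mathrm{d}h}$: when $p=1$,
\[
\phi\,\mathrm{d}h=-\frac{1}{2}\Big(z+\frac{1}{z}\Big)\mathrm{d}z,\qquad
\psi\,\mathrm{d}h=\frac{1}{2}\Big(\frac{1}{z}+\frac{1}{z^{3}}\Big)\mathrm{d}z,
\]
so $\oint_{|z|=1}\phi\,\mathrm{d}h=-\pi\mathrm{i}$ and $\oint_{|z|=1}\psi\,\mathrm{d}h=\pi\mathrm{i}$, whence $\oint\phi\,\mathrm{d}h+\overline{\oint\psi\,\mathrm{d}h}=-2\pi\mathrm{i}\ne 0$.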
This motivates the following \begin{conjecture} There does NOT exist any complete, non-algebraic stationary surface immersed in $\mathbb{R}^4_1$ with finite total Gaussian curvature $-\int K\mathrm{d}M=4\pi$. \end{conjecture} \section{Appendix A} In the proof of our main theorem, a key fact is that a complete, algebraic stationary surface $M\cong \mathbb{C}\backslash\{0\}$ with two good singular ends and $-\int K\mathrm{d}M=4\pi$ must have other singular (branch) points. This follows from \begin{lemma}\label{lem-main} For any positive integer $m\in \mathbb{Z}^+$ and any non-zero complex parameters $a,b\in\mathbb{C}\backslash\{0\}$ whose sum $a+b=-e^{it}$ is a given unit complex number ($t\in\mathbb{R}$), there always exists a solution $z\ne 0$ to the equation \begin{equation}\label{eq-main} (\bar{z}-\bar{a})(z-b)=\frac{z^{m+1}}{\bar{z}^m}. \end{equation} \end{lemma} \begin{proof} By a change of coordinates $z\to ze^{it}$, we may consider the equivalent equation \begin{equation}\label{eq-main2} (\bar{z}-\bar{a})(z-b)=\lambda\frac{z^{m+1}}{\bar{z}^m}, \end{equation} where $\lambda=e^{it(2m+1)}$ is a unit complex number, and $a,b$ are nonzero complex parameters satisfying \[ a+b=-1. \] In other words, the midpoint of the segment $\overline{ab}$ is $-1/2$. We will show that there always exists a solution $z\ne 0$ to the equation \eqref{eq-main2} for any given unit complex number $\lambda$ when $a+b=-1$. First let us explain the basic idea of our proof. Consider the equal-modulus locus \[ \Gamma=\{z\in\mathbb{C}:|z-a||z-b|=|z|\} \] where the two sides of \eqref{eq-main2} have equal moduli. Since $|z-a||z-b|<|z|$ when $z=a,b$, and $|z-a|\cdot|z-b|>|z|$ when $z$ is big enough, by continuity we know this locus is non-empty. It is easy to see that $\Gamma=\cup\Gamma_j$ is a union of several continuous, connected, (simple) closed curves. Next we compare the arguments of the complex functions on both sides of \eqref{eq-main2}.
Define \[ \arg_L\triangleq \arg[(\bar{z}-\bar{a})(z-b)],~~ \arg_R\triangleq \arg(z^{m+1}/\bar{z}^m)=(2m+1)\arg(z). \] Note that the two arguments $\arg_L,\arg_R$ can be defined and extended continuously along any continuous path (without self-intersection) on $\mathbb{C}$. We want to find one component $\Gamma_j\subset\Gamma$ along which \[ \delta= \arg_R-\arg_L, \] the difference of the two arguments, varies by at least $2\pi$. Again by continuity we know that along $\Gamma_j$ there is some point $z$ at which both sides of \eqref{eq-main2} have equal moduli and equal arguments. This will finish our proof.\\ To estimate the variation of the argument difference $\delta$, the next key point is to construct two points on the locus $\Gamma$ using some elementary geometry. Now we have to consider two cases separately.\\ \textbf{Case 1: $|a-b|\le 1$.} In this case, the complex numbers $a,b$ correspond to two points located inside the circle $|z+\frac 12|=\frac 12$ and symmetric about $z=-\frac 12$. \begin{figure} \includegraphics[width=12cm]{4pi00.jpg} \caption{Case 1, $|a-b|\le 1$} \label{fig:case1} \end{figure} Suppose $\mathrm{Im}(a)\ge 0, \mathrm{Im}(b)\le 0$. We want to find two points $C,D$ on $\Gamma$ which are equidistant from $a$ and the origin $O$, and whose distance to $b$ is $1$ (see Figure~\ref{fig:case1}). Such points are exactly the intersection between the bisector of the segment $\overline{aO}$ and the unit circle centered at $b$. Because the lengths $|\overline{ba}|$ and $|\overline{bO}|$ are no more than $1$ ($|b|\le|b+1/2|+|-1/2|\le 1$), we know the intersection points $C,D$ exist, and they are distinct. Let $C$ be the one in the upper half-plane. We claim that $C,D$ must be located on one and the same component (a simple closed curve) $\Gamma_0\subset\Gamma$.
Notice that the open segment $\overline{CD}$ (on the bisector) is contained in the interior of the unit circle centered at $b$, hence also in the interior of \[\Omega=\{z\in\mathbb{C}:|z-a||z-b|<|z|\}.\] Let $\Gamma_0\subset\Gamma=\partial\Omega$ be the component passing through $C$. Then the straight line $CD$ must have at least one more intersection with $\Gamma_0$, whose coordinate $z$ satisfies $|z-a||z-b|=|z|$ and $|z-a|=|z|$, hence $|z-b|=1$. It has to be $D$ as defined above. This verifies our claim. The main consequence of the condition $|a-b|\le 1$ is that $|b|\le 1$. Using the fact that the greater angle is opposite the greater side in the triangle $\triangle bCO$, we know \[2\angle OCD+\angle aCb=\angle OCb \le \angle COb.\] Similarly, in the triangle $\triangle bDO$ we have \[2\angle ODC+\angle aDb=\angle ODb \le \angle DOb.\] Taking the sum of these two inequalities and using $\angle COb+\angle DOb=\pi-\angle OCD-\angle ODC$ in the triangle $\triangle OCD$, we get \[\angle aCb+\angle aDb\le 3(\angle COb+\angle DOb)-2\pi \le (2m+1)(\angle COb+\angle DOb)-2\pi. \] Notice that \begin{equation*} \begin{split} \angle aCb=-\arg_L(C)=\left.[\arg(z-a)-\arg(z-b)]\right|_{z=C},\\ \angle aDb=\arg_L(D)=\left.[\arg(z-b)-\arg(z-a)]\right|_{z=D},\\ (2m+1)(\angle COb+\angle DOb)=\arg_R(D)-\arg_R(C). \end{split} \end{equation*} Then the previous inequality amounts to saying \[\delta(D)-\delta(C)\ge 2\pi.\] Thus along the continuous path connecting $C,D$ which is part of the equal-modulus locus $\Gamma_0\subset\Gamma$, the quotient between $(\bar{z}-\bar{a})(z-b)$ and $z^{m+1}/\bar{z}^m$ attains any given unit complex value $\lambda$. This finishes the proof in the first case. We observe that if $\mathrm{Im}(a)\le 0, \mathrm{Im}(b)\ge 0$ (or just interchange $a,b$ in Figure~\ref{fig:case1} above), the proof is similar. It seems that our proof relies on the special case of Figure~\ref{fig:case1} where $a$ is inside the triangle $bCO$.
Indeed, because the positivity of these two angles was never used in that proof, when $a$ is outside the triangle $bCO$ the proof is still valid.\\ \textbf{Case 2: $|a-b|> 1$.} \begin{figure} \includegraphics[width=12cm]{4pi01.jpg} \caption{Subcase 2.1} \label{fig:case21} \end{figure} Unlike Case 1, we now have to find a different way to construct two such points $C,D$ on the equal-modulus locus. Consider the triangle $\triangle Oab$. The length of the median on the side $\overline{ab}$ is less than half of $|\overline{ab}|$. So $\angle aOb > \pi/2$. Moreover, any point $C$ on the line segment $Oa$ or $Ob$ will subtend an obtuse angle $\angle aCb$. This is the main consequence of $|a-b|> 1$. Let $C$ be a moving point on the line segment $Oa$ with coordinate $z$. Since $|z-a||z-b|<|z|$ when $z$ is very close to $a$, and $|z-a||z-b|>|z|$ when $z$ is very close to $0$, there exists at least one intersection between $\overline{Oa}$ and the equal-modulus locus $\Gamma$. We take $C_1$ to be the one closest to $a$ among all such intersection points. Similarly, we take $C_2$ to be the one closest to $b$ among all intersection points between $\overline{Ob}$ and $\Gamma$. On the straight line $ab$ we can also find two intersection points with $\Gamma$, denoted as $D_1, D_2$, such that $D_1, b, a, D_2$ are located on the line $ab$ in the usual linear order, and $D_1$ ($D_2$) is the closest one among all intersection points between $\Gamma$ and the ray $aD_1$ ($aD_2$). Assume that $a$ is in the upper half-plane and $b$ is in the lower half-plane. (The other possibilities will be treated later.) We consider two subcases. The first subcase is that $C_1, D_1$ are on the same connected component $\Gamma_0$ of $\Gamma$. Let us start from $C_1$ and end up with $D_1$ while turning counter-clockwise around $a$ along $\Gamma_0$.
Then it is easy to see that the variation of $\delta$ will be more than $2\pi$, because the increase of $\arg_L$ is $\angle aC_1b>\frac{\pi}{2}$ and the decrease of $\arg_R$ is $(2m+1)\angle D_1OC_1>\frac{(2m+1)\pi}{2}$. As explained before, this finishes the proof. The second subcase is that $D_1$ is not located on the same connected component $\Gamma_0$ passing through $C_1$. Then, by our construction and the assumption on $D_1$, $b$ is not contained in the region bounded by $\Gamma_0$. Going along $\Gamma_0$ counter-clockwise, $\arg_L$ will decrease by $2\pi$ while $\arg_R$ returns to its initial value. This shows that the variation of the argument difference $\delta$ is exactly $2\pi$, which also finishes our proof for the same reason. If $a$ is in the lower half-plane and $b$ is in the upper half-plane, we may choose the points $C_2, D_2$ instead, and the proof is the same. When $a,b\in \mathbb{R}$ and one of them is positive, the triangle $\triangle Oab$ degenerates. Yet our proof is still valid without any essential modification. \qed\end{proof} \begin{remark} Before finding the traditional proof of Lemma~\ref{lem-main} given above, we sought help from symbolic and numerical computations. Our colleague Professor Bican Xia, using an algorithm developed by him \cite{Xia}, succeeded in verifying the conclusion of Lemma~\ref{lem-main}. This was very important in making us believe the conclusions of Lemma~\ref{lem-main} and Theorem~\ref{thm-4pi1}, and in motivating us to find the proof given above. Professor Xia's method is available in Maple (version 13 and later). \end{remark} In contrast to the previous situation, where one obtains an existence result, one can prove a non-existence result when the parameters are subject to different restrictions, as below. This is used in Section~3 to verify the regularity of Example~\ref{exa-singular1}.
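In the same spirit as the computer verification mentioned in the remark above, the conclusion of Lemma~\ref{lem-main} can be spot-checked numerically for sample parameters. The Python sketch below is only an illustration for one choice of $a,b,m$ (the grid sizes and tolerance are ad hoc), not a proof: it searches for a root of \eqref{eq-main} with $m=1$, $a=-\frac12+0.3\mathrm{i}$, $b=-\frac12-0.3\mathrm{i}$, so that $a+b=-1$ and Case 1 applies.

```python
A, B, M = complex(-0.5, 0.3), complex(-0.5, -0.3), 1   # a + b = -1, |a - b| <= 1

def F(z):
    """Difference of the two sides of equation (eq-main) at z."""
    return (z.conjugate() - A.conjugate()) * (z - B) - z ** (M + 1) / z.conjugate() ** M

def grid(center, half, n):
    """|F| sampled on an n x n grid over a square window, sorted best-first."""
    pts = []
    for i in range(n):
        for j in range(n):
            z = complex(center.real - half + 2 * half * i / (n - 1),
                        center.imag - half + 2 * half * j / (n - 1))
            if abs(z) > 1e-9:          # the equation excludes z = 0
                pts.append((abs(F(z)), z))
    pts.sort(key=lambda t: t[0])
    return pts

# coarse scan of [-3, 3]^2, then refine around each of the best coarse points
best = float("inf")
for _, z0 in grid(0j, 3.0, 121)[:5]:
    half = 0.15                        # +/- 3 coarse grid spacings
    for _ in range(10):
        v, z0 = grid(z0, half, 41)[0]
        half *= 0.2                    # keep +/- 4 spacings of the finer grid
    best = min(best, v)

# Lemma lem-main guarantees a nonzero root; the residual collapses numerically
assert best < 1e-3
```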
\begin{lemma}\label{lem-a=b} In Lemma~\ref{lem-main}, if we assume $a=b\in\mathbb{R}$ and $m=1$, but drop the requirement of $a+b=-1$, then equation~\eqref{eq-main} has no solutions $z$ when $-a$ is a sufficiently large positive real number (e.g., $-a>1$). \end{lemma} \begin{proof} Under our assumptions, \eqref{eq-main} simplifies to \begin{equation}\label{eq-a=b} |z-a|^2=z^3/|z|^2. \end{equation} So $z=r\omega^j$ for some $j\in\{0,1,2\}$ and $r>0, \omega=\mathrm{e}^{2\pi\mathrm{i}/3}$. That means that on the complex plane $\mathbb{C}$, the solution $z$, if it exists, must be located on the union of three radial lines. So we need only compare $|z-a|^2$ and $|z|$, the moduli of the two sides of \eqref{eq-a=b}, for $z$ in this subset. When $r>0$ is small enough or big enough, the modulus $|z-a|^2$ is obviously larger than $|z|$. Thus intuitively we know that for suitable $a$ there will always be $|z-a|^2>|z|$. It is easy to rigorously verify this assertion; see the elementary and standard proof in \cite{Ma} (the end of Section~7). This shows the non-existence of a solution $z$. \qed\end{proof} \section{Appendix B} The theorem below is the key lemma in our proof of Theorem~\ref{thm-least}, which shows the non-existence of a complete, non-oriented, algebraic stationary surface in $\mathbb{R}^4_1$ with total Gaussian curvature $2\pi(g+2)$ and without any singular points or singular ends. \begin{theorem}\label{thm-odd} Let $\widetilde{M}$ be a closed oriented surface of genus $g$, let $I:\widetilde{M}\to\widetilde{M}$ be an orientation-reversing involution of $\widetilde{M}$ without fixed points, and let $\tilde{\phi}:\widetilde{M}\to S^2\subset \mathbb{R}^3$ be an odd map, i.e., $\tilde\phi(I(p))=-\tilde\phi(p)$. Then $\deg\tilde\phi\equiv g-1 \pmod{2}$. \end{theorem} The statement reminds us of the famous theorem that any odd map $f:S^n\to S^n$ has odd degree, which implies the Borsuk-Ulam Theorem. We believe that this generalization is not a new result.
Yet to the best of our knowledge we could not find a reference. The proof below is provided by Professor Fan Ding from Peking University. \begin{proof} Let $M=\widetilde{M}/\{p\sim I(p)\}$ be the quotient surface, which is non-orientable. $\tilde{\phi}:\widetilde{M}\to S^2$ induces a quotient map $\phi$ from $M$ to the projective plane $\mathbb{R}P^2=S^2/\{x\sim -x\}.$ Decompose $M$ as the connected sum of $g+1$ projective planes $M=M_1\sharp \cdots\sharp M_{g+1}$. For any $M_j$ ($j=1,\cdots,g+1$), we choose a closed path $\gamma_j$ in $M_j$ representing the generator of the first homology group $H_1(M_j;\mathbb{Z}_2)$, which lifts to a path $\tilde\gamma_j\subset\widetilde{M}$ whose endpoints are a pair of antipodal points. As an odd map, $\tilde{\phi}$ maps $\tilde\gamma_j$ to another path connecting antipodal points, which projects to a closed path representing the generator of $H_1(\mathbb{R}P^2;\mathbb{Z}_2)$. Thus the induced map $$\phi^*:H^1(\mathbb{R}P^2;\mathbb{Z}_2)\to H^1(M;\mathbb{Z}_2)$$ on the cohomology groups is given by $$\phi^*(\alpha)=\alpha_1+\cdots +\alpha_{g+1},$$ where $0\neq \alpha\in H^1(\mathbb{R}P^2;\mathbb{Z}_2)$, and $\alpha_j\in H^1(M;\mathbb{Z}_2)$ ($j=1,\ldots,g+1$) satisfies $\alpha_j([\gamma_i])=0$ for $i\neq j$ and $\alpha_j([\gamma_j])=1$. Since the intersection between the homology classes $[\gamma_i]\in H_1(M;\mathbb{Z}_2)$ and $[\gamma_j]\in H_1(M;\mathbb{Z}_2)$ is $1$ when $i=j$ and $0$ when $i\neq j$, the Poincar\'{e} dual of $\alpha_j$ is $[\gamma_j]$. Thus the Poincar\'{e} dual of $\phi^*(\alpha)$ is $[\gamma_1]+\cdots +[\gamma_{g+1}]$. Since the self-intersection of the homology class $[\gamma_1]+\cdots +[\gamma_{g+1}]\in H_1(M;\mathbb{Z}_2)$ is $g+1\pmod{2}$, $$\phi^*(\alpha\cup\alpha)=\phi^*(\alpha)\cup \phi^*(\alpha)= (g+1)\beta,$$ where $0\neq\beta\in H^2(M;\mathbb{Z}_2)$. Hence the mod $2$ degree of $\phi$ is $g+1\pmod{2}$. Thus the mod $2$ degree of $\tilde{\phi}$ is $g+1\pmod{2}$, which equals $g-1\pmod{2}$. This finishes the proof.
\qed\end{proof} \begin{remark} If we only consider a continuous map $\phi$ from the non-orientable quotient surface $M$ to $\mathbb{R}P^2$, then the conclusion is not necessarily true. The simplest counter-example is a constant map. On the other hand, if we assume that $\phi$ is a branched covering map, then the conclusion is one part of Meeks' Theorem $1$ in \cite{Meeks}. We do not know whether our conclusion could be generalized to the case of an odd map $\tilde{\phi}:\widetilde{M}_1\to \widetilde{M}_2$, where each closed oriented surface is endowed with an orientation-reversing involution without fixed points. \end{remark}
\section{Introduction} Coronal mass ejections (CMEs) are the major driver of intense geo-magnetic activity \citep[]{Richardson:2001} and have been studied extensively for the past 40 years. CMEs are observed remotely by coronagraphs and heliospheric imagers and measured {\it in situ} by spacecraft such as ACE and {\it Wind}. With the launch of SOHO and STEREO, the availability of white-light imagers with wide fields-of-view has made it possible to associate eruptions observed in the corona to CMEs measured {\it in situ} at 1~AU \citep[see for example the list by][]{Richardson:2010}. CMEs measured {\it in situ} may be divided into three broad categories: magnetic clouds (MCs), non-MC isolated ejecta, and complex ejecta \citep[similar to the categories of][]{Zurbuchen:2006}. MCs have well-defined properties \citep[see][]{Burlaga:1981}. Non-MC isolated ejecta typically have some but not all the properties of MCs, and may sometimes be referred to as MC-like ejecta \citep[]{Lepping:2005}. They may correspond to a distorted CME or to the crossing through the ``leg'' of a CME. Lists of MCs and MC-like ejecta measured at 1~AU by the {\it Wind} and ACE spacecraft are maintained \citep[]{Lepping:2005,Jian:2006,Richardson:2010}. Complex ejecta result from the interaction of successive CMEs \citep[]{Burlaga:1987}. Some consist of many individual eruptions, and it is impossible to relate {\it in situ} measurements to coronagraphic observations of CMEs \citep[]{Burlaga:2002}. Others are made up of two clearly distinct MCs separated by an interaction region \citep[multiple-MC events, see][]{Wang:2003, Lugaz:2005b}. Complex ejecta tend to have long durations and may drive the magnetosphere for an extended period. \citet{Xie:2006}, for example, studied long (3 days or more) and intense (peak Dst $\le -100$ nT) geomagnetic storms and found that 24 out of 37 such storms were associated with multiple CMEs.
While a typical CME passes over Earth in $\sim 20$ hours, some events have durations well in excess of 30 hours \citep[]{Marubashi:2007}. It is possible that some of these long-duration events, believed to be associated with a single, isolated CME, are in fact the result of the interaction of two CMEs, a possibility raised for the 2005 May 15 CME by \citet{Dasso:2009}. Here, we identify a new type of complex ejecta due to the interaction of two CMEs, which results in a long-duration event with a smooth rotation of the magnetic field vector. In section \ref{simu}, we present the results at 1~AU of two simulations, one of an isolated CME and one of two interacting CMEs, and we discuss the expected geo-effectiveness of such events. In section \ref{geo}, we present measurements of the 2001 March 19--22 period, which may be associated with the interaction of two CMEs in a way similar to that of the simulation. We discuss our findings and conclude in section \ref{conclusion}. \section{Simulated Magnetic Cloud and Complex Ejecta}\label{simu} \subsection{Simulation Set-up} The simulation set-up is nearly identical to that of \citet{Lugaz:2013b} for their Case C (two CMEs with orientations $90^\circ$ apart). We summarize the important details here as well as one difference with this previous simulation, and refer the interested reader to this paper for further information. We use the Space Weather Modeling Framework \citep[SWMF,][]{Toth:2012} to perform the simulations. The simulation domain is a Cartesian box centered at the Sun and extending to $\pm 220~R_\odot$ in all three directions. The domain is resolved with a maximum of 34 million cells ranging in size from 0.01 to 4~$R_\odot$ after adaptive mesh refinement (AMR). Along the Sun-Earth line, a cell size of 0.05~$R_\odot$ is maintained up to 0.4~AU (0.1~$R_\odot$ thereafter).
\begin{figure*}[t] \centering \includegraphics[width=7.3cm]{3d_60h_CME1.ps} \includegraphics[width=7.3cm]{3d_48h_CME2c.ps} \caption{{\it Left}: Simulation result after 60 hours corresponding to one isolated CME. The pink isosurface corresponds to values of $B_y = 20$~nT; the red and blue isosurfaces correspond to values of $B_z = \pm 20$~nT, respectively. The sphere of radius 20~$R_\odot$ centered at the Sun's position and the cut in the ecliptic plane are color-coded with the velocity. {\it Right}: Simulation result after 48 hours corresponding to the two interacting CMEs with the same conventions as the image of the left panel.} \end{figure*} We use the solar wind model of \citet{Holst:2010}, where Alfv{\'e}n waves drive the solar wind. To set up the solar magnetic field, we use a non-tilted dipole with an octopole component, which yields a maximum magnetic field strength of 5.5~G at the solar poles with a polarity corresponding to that of solar cycle 24. To initiate the CMEs, we insert right-handed flux ropes using the model of \citet{Gibson:1998} (GL) in a state of force imbalance onto the steady-state solar corona. The parameters of the GL flux rope for the two CMEs are the same as for Case C from \citet{Lugaz:2013b}, with the only difference being the time delay between the two CMEs, which is 15 hours instead of 7 hours. The first CME (CME1) has a low inclination and an eastward axial field: a NES cloud according to the categorization of \citet{Bothmer:1998}; the second CME (CME2) is highly inclined with a southward axial field: an ESW cloud. For comparison purposes, we also perform the simulation of an isolated event by simply propagating the first CME all the way to 1~AU without the second eruption. \subsection{Simulated Magnetic Cloud at 1~AU} Three-dimensional results of the simulation of the isolated CME are shown in the left panel of Figure~1, with isosurfaces of magnetic field equal to 20~nT highlighting the CME.
The magnetic ejecta has a reverse-S shape, characteristic of the GL model \citep[see,][]{Gibson:1998,Manchester:2004a}. Synthetic spacecraft measurements at 1~AU are shown in the left panel of Figure~2. It corresponds to a moderately fast CME with a transit time of about 63 hours for the shock and 70 hours for the magnetic ejecta. The CME has a speed at 1~AU of about 540~km~s$^{-1}$ and is characterized by a NES rotation. The sheath duration is about 7 hours and the magnetic ejecta lasts for about 23 hours, corresponding to a width of $\sim$0.28~AU. \subsection{Simulated Complex Ejecta at 1~AU} Next, we discuss the results of the simulation for the two interacting CMEs. The timing of the interaction goes as follows: the shock driven by CME2 reaches the back of CME1 at 18.5 hours, its center at 22.5 hours and the back of the sheath at 24 hours. By 29 hours, at 0.47~AU, the two shocks have merged. The results after 48 hours, as the complex ejecta is close to 1~AU, are shown in the right panel of Figure~1. At the back of CME1, there is an extended period of southward magnetic field, as is clear from the large blue isosurface in Figure~1. Synthetic spacecraft measurements at 1~AU are shown in the right panel of Figure~2. A single fast-mode shock at 54 hours precedes the complex ejecta starting at 63 hours. The complex ejecta is characterized by a relatively short period of northward $B_z$ lasting about 5 hours, and an extended ``tail'' of southward $B_z$ for about 28 hours. The east-west component of the magnetic field, $B_y$, is close to zero for the last 24 hours of the event, after an initial period of eastward magnetic field. Throughout the complex ejecta, the velocity profile decreases from about 540 to 450~km~s$^{-1}$.
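As a quick consistency check on the numbers quoted for the isolated event, the radial width of a magnetic ejecta follows directly from its duration and its speed. A back-of-the-envelope Python sketch (it neglects the modest expansion and deceleration across the ejecta):

```python
AU_KM = 1.496e8                 # one astronomical unit in km

duration_h = 23.0               # quoted duration of the isolated magnetic ejecta (hours)
speed_kms = 540.0               # quoted CME speed at 1 AU (km/s)

width_au = duration_h * 3600.0 * speed_kms / AU_KM
# ~0.30 AU, consistent with the ~0.28 AU quoted in the text
assert 0.25 < width_au < 0.32
```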
The main differences between an isolated MC and the complex ejecta resulting from the interaction of two CMEs are: (i) the shorter transit time of the complex ejecta as compared to the isolated CME, (ii) the short duration of the first CME (here about 8 hours vs. 28 hours for the isolated one), and (iii) the hotter and somewhat denser sheath region preceding the complex ejecta. \begin{figure*}[t] \centering \includegraphics[width=6.5cm]{satdst_CME1_half.ps} \hspace{0.7cm} \includegraphics[width=6.5cm]{satCME3c_halfnose.ps} \caption{{\it Left}: Simulation result at 1~AU of an isolated CME and modeled Dst index. {\it Right}: Simulation result at 1~AU of a complex ejecta corresponding to the two interacting CMEs described in the text. The panels from top to bottom show the density, radial velocity, magnetic field, temperature and derived Dst with the same scales (except that the complex ejecta is shifted forward by 5 hours).} \end{figure*} \section{Geo-effective Potential of Complex Ejecta}\label{geo} \subsection{Simulated Events} We estimate the Dst corrected from the contribution of the ram pressure, $Dst^*$, from the simulated {\it in situ} measurements using a modified version of the \citet{Burton:1975} relation: $ \frac{d}{dt} Dst^* = Q(t) - Dst^*/\tau$, with $Q = -1.22 \times 10^{-3} \left(VB_z - 0.49\right)$ if $VB_z > 0.49$ mV~m$^{-1}$, and $Q = 0$ otherwise, and with $\tau = 8.64 \times 10^3 \exp{\left(9.74/(4.69+VB_z)\right)}$ if $B_z < 0$~nT and $\tau = 3.57 \times 10^4$ otherwise. The results are shown in the bottom panels of Figure~2. Both the isolated CME and the complex ejecta would have resulted in an intense geo-magnetic storm with a peak Dst below $-100$~nT. The isolated CME results in a relatively typical intense geo-magnetic storm with a main phase lasting about 9 hours and a peak Dst of $-191$~nT, followed by a recovery phase lasting more than 1 day. The Dst is below $-100$~nT for about 16.5~hours.
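The modified \citet{Burton:1975} relation above is a single ordinary differential equation and is straightforward to integrate. The Python sketch below drives it with a synthetic rectangular profile (invented for illustration; not the simulation output), reading $VB_z$ as the rectified dawn-to-dusk electric field in mV~m$^{-1}$, so that the conditions in $Q$ and $\tau$ coincide; this is one reading of the formula, not the authors' exact implementation.

```python
import math

def step_dst(dst, vbz, dt):
    """One explicit-Euler step of d(Dst*)/dt = Q - Dst*/tau  (Dst* in nT, t in s).
    vbz: rectified dawn-to-dusk electric field in mV/m (> 0 means southward Bz)."""
    q = -1.22e-3 * (vbz - 0.49) if vbz > 0.49 else 0.0
    tau = 8.64e3 * math.exp(9.74 / (4.69 + vbz)) if vbz > 0.0 else 3.57e4
    return dst + (q - dst / tau) * dt

dt = 60.0                                   # time step in seconds
dst, history = 0.0, []
for k in range(int(40 * 3600 / dt)):        # 40 hours of synthetic driving
    t_hours = k * dt / 3600.0
    vbz = 5.0 if t_hours < 10.0 else 0.0    # 10 h of strong southward driving
    dst = step_dst(dst, vbz, dt)
    history.append(dst)

# the main phase develops during the driving, then the ring current decays
assert min(history) < -80.0                 # intense-storm-level minimum
assert history[-1] > -10.0                  # mostly recovered 30 h after the driving
```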
The complex ejecta, on the other hand, results in a weaker storm with a peak Dst of $-144$~nT. The main phase lasts about 12 hours and the recovery phase lasts about 1.5 days. The Dst is below $-100$~nT for about 26~hours, or about 55\% longer than for the isolated CME. Note also that the southward magnetic field in the sheath region results in a very small negative Dst for the isolated and interacting CMEs around times t = 65 and 55 hours, respectively. The deeper Dst minimum for the isolated CME is due to a combination of factors: the minimum southward $B_z$ is slightly stronger ($-26.5$ vs.\ $-21.6$~nT), it occurs at the end of the southward magnetic field period instead of at its beginning, and the velocity in the complex ejecta decreases faster than in the isolated CME (resulting in a smaller dawn-to-dusk electric field). However, the longer period below $-100$~nT predicted for this complex ejecta may result in an intensification of the geo-magnetic response, which cannot be captured by our simple model to evaluate the Dst index. \subsection{Real Event} To confirm that this type of complex event indeed exists, we identify potential examples in {\it in situ} data at 1~AU. Note that \citet{Dasso:2009} discuss a complex event on 2005 May 15 which may be the result of the interaction of two successive CMEs. Here, we start from the list of 17 events from \citet{Marubashi:2007} and that from \citet{Xie:2006}. The complex event from 2001 March 19--22 is one of the best examples of a potential long-duration complex ejecta resulting from multiple CMEs (another candidate, not discussed here, occurred on 2004 April 3--5). Figure 3 shows the {\it in situ} measurements, including the clock angle of the IMF, $\theta$, and the merging electric field, $E_{KL} = V \sqrt{B_y^2+B_z^2} \sin^2(\theta/2)$, following \citet{Kan:1979}. The magnetic ejecta lasts for about 57~hours (between the two vertical red lines).
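For reference, the coupling quantities shown in Figure~3 are simple functions of the solar wind speed and the magnetic field components. The Python sketch below evaluates the clock angle and the \citet{Kan:1979} merging electric field for sample values (invented for illustration, not the event data); the unit bookkeeping assumes $V$ in km~s$^{-1}$ and $B$ in nT, which gives $E_{KL}$ in mV~m$^{-1}$.

```python
import math

def ekl(v_kms, by_nt, bz_nt):
    """Kan-Lee merging electric field E_KL = V * Bt * sin^2(theta/2),
    with V in km/s and B in nT; 1 km/s * 1 nT = 1e-3 mV/m."""
    bt = math.hypot(by_nt, bz_nt)
    if bt == 0.0:
        return 0.0
    # clock angle: 0 = due north, pi = due south (sign of By is irrelevant here,
    # since sin^2(theta/2) depends only on cos(theta))
    theta = math.acos(bz_nt / bt)
    return v_kms * bt * math.sin(theta / 2.0) ** 2 * 1e-3

# purely southward field: maximal coupling, E_KL = V * |Bz| (in mV/m)
assert abs(ekl(400.0, 0.0, -10.0) - 4.0) < 1e-9
# purely northward field: no coupling
assert ekl(400.0, 0.0, 10.0) < 1e-12
```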
It is preceded by a single fast-mode shock at 11:30UT on March 19 (marked by the first vertical line). The velocity profile through the CME is similar to that of an isolated expanding event: monotonically decreasing with a center value of 400~km~s$^{-1}$ and an expansion speed of 100~km~s$^{-1}$. The magnetic field strength is very smooth and reaches a maximum of $\sim 22$~nT, and the plasma $\beta$ is below 0.1 throughout the structure. There are, however, some indications of an origin from two structures with an interface around 18-20UT on March 20: (i) the fluctuations in the magnetic field vectors occur in all three components from the start of the event to 20 UT on March 20; thereafter only the $B_x$ and $B_z$ components fluctuate while the $B_y$ component is smooth; (ii) the magnetic clock angle varies (first a decrease to 180$^\circ$, then an increase back to $90^\circ$) during the first part and is steady (eastward directed) afterwards; and (iii) the merging electric field implies strong forcing of the magnetosphere during the first part (large values $\sim 5$ mV/m) and decreases monotonically thereafter. Also, small-scale structures (identified as slow shocks) are present near the peak of the magnetic-field strength (March 20 around 18-20 UT). \begin{figure}[t] \centering \includegraphics[width=8.3cm]{March2001.ps} \caption{2001 March 19--22 long-duration event. From top to bottom, the panels show the proton density (alpha-to-proton ratio in blue), the proton temperature (expected temperature in red), the velocity, dynamic pressure, magnetic field strength and magnetic field vector components in GSM coordinates, the plasma $\beta$, magnetic field clock angle $\theta$ and the merging electric field.} \end{figure} All but one of the lists of MCs consider this a single event with a duration of more than 50 hours \citep[see for example][]{Jian:2006, Marubashi:2007, Richardson:2008}, about twice as long as the typical duration of a magnetic cloud at 1~AU.
The exception to this interpretation is the list of \citet{Lepping:2005}, which identifies two overlapping MCs (the second MC starts 0.5 hour before the end of the first one at 17:45UT on March 20). We performed a minimum-variance analysis on the magnetic field for the two separate intervals and found two inclined clouds with an angle of about $40^\circ$ between them, with a large ratio of the intermediate to minimum eigenvalues. This event resulted in a double-peaked intense geo-magnetic storm with Dst below $-50$~nT for 55~hours starting on March 19 at 18UT. The first peak of $-105$~nT occurred at 22UT on March 19 and the second peak of $-149$~nT at 14UT on March 20. The geomagnetic indices (AL and sym-H) are shown in Figure 4, as is the energetic particle flux at geostationary heights from GOES 8. The storm main phase starts when the sheath is passing over the magnetosphere. The sym-H index then decreases, though non-monotonically, to its peak value. Recovery starts at the time when the second interval starts (after the maximum in $B$). In addition, this time period was associated with a sawtooth event on March 20 \citep[]{Troshichev:2011}. Sawtooth events are typically associated with a strong driver of the magnetosphere and a southward IMF for extended periods of time \citep[]{Henderson:2004}. There were 10 dipolarizations associated with injections of energetic particles observed at GOES 8 during the passage of the sheath and during the first interval. The average duration between individual sawteeth is $2.1 \pm 0.55$~hours. During the second time interval, there are only four weak dipolarization events, none of them in the last 36 hours of the event. \begin{figure}[t] \centering \includegraphics[width=7.2cm, height = 6cm]{geomag.ps}\\ \hspace{0.22cm}\includegraphics[width=7.3cm]{GOES8.ps} \caption{Geomagnetic response to the 2001 March 19--22 event.
From top to bottom, the panels show the merging electric field, the AL and sym-H indices, the magnetic field components and energetic particle fluxes measured by GOES-8.} \end{figure} The ejecta measured {\it in situ} in 2001 March 19--22 is usually associated with a slow partial-halo CME observed by LASCO/C2 on March 16 at 03:50UT with a speed of about 360~km~s$^{-1}$. While the transit time to 1~AU for this CME is approximately correct, it is unlikely that such a slow CME would (i) reach speeds in excess of 500~km~s$^{-1}$ at 1~AU, as is measured here, and (ii) result in the longest MC measured during solar cycle 23 \citep[]{Marubashi:2007}. At the Sun, the period 2001 March 14--18 was CME-rich; there were a number of slow disk-centered eruptions on March 14--15 (for example from W10 on March 15 at 22:26UT) as well as a full and fast halo CME on March 18 at 02:00 UT. This halo CME lacks on-disk observations and is considered back-sided, but it may have been Earth-directed and could explain the origin of the second ejecta. Another, more plausible possibility is that the second ejecta corresponds to a CME from W37 on March 17 at 18:00UT, which had a speed of about 600~km~s$^{-1}$. Overall, there are many indirect indications that this event is associated with two CMEs, contrary to what was reported by most studies. This is the case even though the velocity, magnetic field strength, proton temperature and plasma $\beta$ show no indication of two events. Based on the magnetic field components and the increase in proton density between 15 and 19UT on March 20, we can identify a first magnetic ejecta between 20UT on March 19 and 14UT on March 20 (18 hours) and a second ejecta between 19UT on March 20 and 00UT on March 22 (29 hours). As in the simulation, the second ejecta is characterized by a smooth, enhanced and uni-directional magnetic field (for the March event in the eastward direction).
In the simulation, it corresponds to the direction of the axial field of a second CME for which the poloidal field has reconnected away, leaving a nearly uni-directional field. If this is the case for the March event, the second CME would be of low inclination. The lack of geo-effectiveness of this second ejecta with a strong $B_y$ component remains to be investigated further. \section{Discussion and Conclusions}\label{conclusion} By combining numerical simulations and the analysis of {\it in situ} measurements, we have identified a new class of complex ejecta resulting from the interaction of two CMEs. Due to the different orientations of the ejections, measurements at 1~AU appear to indicate the passage of a long magnetic cloud, but are, in fact, due to two successive and interacting CMEs. With an appropriate orientation of the two CMEs, such an event may result in a long-duration geomagnetic storm and be associated with a sawtooth event. In detail, we have presented the results at 1~AU of a simulation of the interaction of two CMEs with different orientations. We have shown that the resulting complex ejecta is very similar to a MC from an isolated CME, except for the presence of a long ``tail'' in the magnetic field and the hotter temperature throughout the ejecta. We have estimated the expected Dst index for this complex ejecta, and we have found that, while the peak Dst is not as low as that from a well-oriented isolated CME, the tail in the magnetic field results in the Dst staying below $-100$~nT for more than a day, or about 50\% longer than for the isolated CME. We have also presented the analysis of one long-duration magnetic ejecta observed at 1~AU in 2001 March 19--22. This event resulted in a long, intense geomagnetic storm with a peak Dst of only $-149$~nT, but the Dst stayed below $-50$~nT for more than 2 days. There were also a number of sawteeth on March 20 in the first half of the ejecta.
Most studies have identified this as an isolated magnetic cloud with a duration in excess of 2 days. We have presented some potential evidence that this ejecta is in fact a complex ejecta associated with two CMEs. A more complete investigation of the combined {\it in situ} and remote-sensing databases will be required to assess how common this type of complex ejecta is. This could be helped in the future by the availability of remote-sensing observations of CMEs as they propagate and interact on their way to Earth \citep[]{CShen:2012, Lugaz:2012b}. The other event that we have tentatively identified, in 2004 April 3--6, was also associated with an extended sawtooth event, although the Dst index peaked only at $-117$~nT and was below $-50$~nT for only 15 hours. Further studies are also required to determine how this type of complex ejecta affects Earth's magnetosphere and how the interaction differs from that with an isolated MC or a multiple-MC event. \begin{acknowledgments} The research for this manuscript was supported by the following grants: NSF AGS-1239699 and NASA NNX13AH94G. The simulations were performed on the NASA HEC {\it Pleiades} system under awards SMD-12-3360 and 13-3919. \end{acknowledgments} \bibliographystyle{agufull08}
\section{INTRODUCTION}\label{sec:intro} There is much debate regarding the origin and evolutionary history of hot Jupiters. Traditional core accretion theory suggests that such planets form beyond the ice-line (the boundary outside which water exists in a frozen state) prior to moving inwards \citep{pollack_96}. The earliest proposed planet migration mechanisms involve a gradual inward-spiral facilitated by planet-disk interactions \citep{goldreich_80, lin_96, murray_98}. Naive interpretation of these migration models presumes that planetary orbits should be well aligned with the spin-axis of their host star. However, precision radial velocity (RV) measurements exploiting the Rossiter-McLaughlin (RM) effect show that many transiting hot Jupiter orbits are significantly misaligned \citep{winn_09, winn_10b, triaud_10, hebrard_2011, albrecht_12}. \begin{figure*} \begin{center} \includegraphics[height=2.9in]{f1.eps} \caption{Keck AO discovery images of WASP-12 B,C taken on UT 2012-Feb-02 (left) and HAT-P-8 B,C taken on UT 2012-June-24 (right). North is up and east is left in both images. Follow-up observations separated by more than one year recover each companion. } \end{center}\label{fig1} \end{figure*} Numerous dynamical models have been proposed to explain the wide range of observed spin-orbit angles, including planet-planet scattering \citep{ford_rasio_08, chatterjee_08} and Kozai-Lidov perturbations with subsequent tidal friction \citep{wu_murray_03, naoz_11}. Several teams have performed comparative analyses suggesting that these two modes could be responsible for placing Jupiters into very short (several day) orbital periods, either individually or in combination \citep{fabrycky_09, morton_johnson_10, nagasawa_11, beauge_12, dawson_13}. The underlying assumption motivating these more complex dynamical models is that protoplanetary disks maintain alignment with their host star throughout the planet formation process.
However, this assertion need not apply to all stars. Recent theoretical work indicates that the cause of misalignment may instead be induced by forces acting on the disk itself. For example, \citet{lai_11} have proposed that young protostars with strong magnetic fields ($>10^3$ Gauss) can act to warp and misalign the circumstellar disk. Alternatively, gravitational torques from a companion star can change the inclination of the disk relative to the spin-axis of the star prior to the formation of planets \citep{batygin_12}. In any case, such mechanisms must be able to account for the observed abrupt change in the distribution of spin-axis angles as a function of stellar effective temperature \citep{winn_10, albrecht_12}. A number of plausible hot Jupiter migration mechanisms involve the presence of a massive third body. High spatial resolution imaging can detect such companions at physical scales corresponding to the expected location of their orbits \citep{eggenberger_07, daemgen_09, mugrauer_09, mason_11, roberts_11, ginski_12, faedi_13, narita_12}.\footnote{Stellar companions at short orbital periods can be constrained and sometimes ruled out by existing RV measurements.} These studies consistently find that a significant fraction of the surveyed systems (tens of percent) host a distant stellar candidate companion that could potentially affect the dynamical histories of the observed hot Jupiters. Several of the most comprehensive and recent programs have used ``lucky'' imaging to efficiently explore a large number of targets. However, near-infrared observations combined with adaptive optics (AO) provide comparatively deeper effective contrast levels, especially for objects with red colors such as M dwarfs and brown dwarfs \citep{fleming_12}. We have recently commenced a multi-faceted observing program, named ``Friends of Hot Jupiters'' (hereafter FHJ), that systematically searches for additional companions around a large sample of transiting planet systems \citep{knutson_2013}.
The primary objective of the FHJ survey is to quantify the relative fraction of systems, including both well-aligned and misaligned hot Jupiters, that contain distant tertiary bodies, and to study any candidate perturbers in detail using imaging and spectroscopy. In this paper, we present initial results from the FHJ survey demonstrating that WASP-12 and HAT-P-8 are actually triple star systems. The candidate companion pairs found orbiting these two planet hosts were identified previously by \citet{bergfors_11} as single objects. Our diffraction-limited observations, using Keck, spatially resolve each secondary source into two distinct components. Combining our measurements with previous observations increases the astrometric time baseline by a factor of 2-3 and allows us to confirm the physical association of these objects with their parent star. \section{SUMMARY OF PREVIOUS OBSERVATIONS} \subsection{WASP-12} WASP-12b is a highly irradiated transiting hot Jupiter that orbits a G0V star with a 1.09 day period \citep{hebb_09}. RM measurements yield a sky-projected spin-orbit angle of $\lambda$ = 59$^{+15}_{-20}$ deg \citep{albrecht_12}. WASP-12b may have a prolate shape and be undergoing Roche-lobe overflow that results in substantive mass loss \citep{li_10,fossati_10, fossati_13}. It has been suggested that this planet's dayside emission spectrum is consistent with a super-solar carbon-to-oxygen ratio (\citealt{madhusudhan_11, moses_13}; see however \citealt{crossfield_12}). Recent observations of WASP-12b's transmission spectrum indicate that it may also have a high-altitude haze or cloud layer \citep{swain_13, stevenson_13}. \citet{bergfors_11} detected a faint source separated by $1.047\pm0.021$'' from the WASP-12 primary. Using Keck/NIRSPEC archival data, \citet{crossfield_12} analyzed the near-infrared spectrum of the candidate companion and found that it is consistent with an M dwarf.
\citet{crossfield_12} also note that the candidate is abnormally bright for an M dwarf if situated at the same distance as the primary. \citet{bergfors_13} find that the companion's point-spread function (PSF) appears to be elongated in two separate epochs, possibly indicating that it is a marginally resolved triple system. \subsection{HAT-P-8} HAT-P-8b is a transiting hot Jupiter that orbits an F5V star with a period of 3.07 days \citep{latham_09}. Initially suspected to have an inflated radius, recent observations by \citet{mancini_13} indicate a higher density than previously reported. \citet{simpson_13} measure a sky-projected spin-orbit angle of $\lambda$ = 15$^{+33}_{-43}$ deg and \citet{moutou_11} find $\lambda$ = -17$^{+9.2}_{-11.5}$ deg, both consistent with a reasonably well-aligned prograde orbit. High spatial resolution imaging by \citet{bergfors_11,bergfors_13} indicates that HAT-P-8 may be part of a binary star system, although \citet{faedi_13} were unable to confirm the candidate companion, which had a purported angular separation of $1.027 \pm 0.011$''. \begin{table*}[!ht] \caption{Summary of astrometric measurements listing integration time ($\Delta t_{int}$), angular separation ($\rho$), and position angle (PA). Observations are separated by more than one year for each stellar system.}
\begin{center} \begin{tabular}{cccccccc} \hline\hline \multirow{2}{*}{Companion} & \multirow{2}{*}{JD-2,450,000} & \multirow{2}{*}{Date (UT)}& \multicolumn{3}{c}{$\Delta t_{int}$ (s)} & \multirow{2}{*}{$\rho$ (mas)} & \multirow{2}{*}{PA ($^{\circ}$)} \\ &&&J&K${'}$&K$_{s}$&&\\ \hline\hline WASP-12~B & 5,959.9 & 2012-Feb-02 & 135 & 135&&$1064 \pm 19$ & $251.3\pm1.0$ \\ WASP-12~C & --- & --- & --- & ---& &$1073 \pm19$ & $246.8\pm1.0$ \\ WASP-12~B & 6,353.8 & 2013-Mar-02 & & &150 &$1062 \pm18$ & $251.4\pm1.0$ \\ WASP-12~C & --- & --- & & &--- &$1072 \pm18$ & $247.1\pm1.0$ \\ \hline HAT-P-8~B & 6,103.0 & 2012-June-24 & 162 & 95& &$1040\pm14$ & $137.9\pm0.8$ \\ HAT-P-8~C & --- & --- & --- & ---& &$1049\pm14$ & $141.4\pm0.8$ \\ HAT-P-8~B & 6,476.9 & 2013-July-03 & 180 & &180 &$ 1053\pm14$ & $137.6\pm0.8$ \\ HAT-P-8~C & --- & --- & --- & &--- &$ 1041\pm14$ & $140.7\pm0.8$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[!ht] \caption{Secondary and tertiary companion photometric properties. We estimate spectral types using near-infrared color information (when available) and absolute magnitudes by comparing to \citet{kraus_07}. Absolute magnitudes are found using (photometric or spectroscopic) distance modulus estimates: $d=250\pm30$ pc for WASP-12 \citep{bergfors_13} and $d=230\pm15$ pc for HAT-P-8 \citep{latham_09}.}\vspace{0.1in} \centerline{ \begin{tabular}{ccccccc} \hline \hline Companion & $\Delta J$ & $\Delta K_{s}$ & $J-K_s$ & $M_{K_s}$ & Mass ($M_{\odot}$) & Spec.
Type \\ \hline \hline WASP-12~B & $3.81\pm0.05$ & $3.25\pm0.04$ & $0.85\pm0.08$ & $6.47\pm0.27$ & $0.38\pm0.05$ & M3V \\ WASP-12~C & $3.92\pm0.05$ & $3.28\pm0.04$ & $0.93\pm0.08$ & $6.50\pm0.27$ & $0.37\pm0.05$ & M3V \\ \hline HAT-P-8~B & --- & $5.58\pm0.07$ & --- & $7.73\pm0.16$ & $0.22\pm0.03$ & $\approx$M5V \\ HAT-P-8~C & --- & $6.08\pm0.10$ & --- & $8.32\pm0.17$ & $0.18\pm0.02$ & $\approx$M6V \\ \hline \end{tabular}} \label{tab2} \end{table*} \section{ADAPTIVE OPTICS IMAGING} We initially observed WASP-12 (V$=11.6$) and HAT-P-8 (V$=10.4$) as part of the FHJ program in Spring 2012 using NIRC2 (instrument PI: Keith Matthews) with the Keck II AO system (Wizinowich 2000). Our standard procedure for searching the immediate vicinity of transiting planet hosts involves executing a three-point dither pattern that facilitates removal of instrument and sky background radiation while avoiding the (noisy) bottom-left quadrant of the NIRC2 array. Observations are nominally obtained in position angle mode without allowing for field rotation since we do not perform PSF subtraction. We used the NIRC2 narrow camera setting to provide fine (10 mas) spatial sampling of the instrument PSF. Integration times for all observations are listed in Table 1. The data were processed using standard techniques to flat-field the array, replace hot pixels, subtract background noise, and align and co-add the frames (e.g., \citealt{crepp_12b}). Figure \ref{fig1} shows the final reduced K-band images for WASP-12 and HAT-P-8. Our observations provide a spatial resolution comparable to the diffraction limit (approximately 45 mas). In each case, two candidate companions (B, C) are detected. We obtained complementary photometry in the J-band to determine the companion colors and help constrain their physical properties. 
WASP-12~BC are spatially resolved in the J-band; however, HAT-P-8~BC are not seen in the UT 2012 June-24 J-band images due to high airmass (2.19), indicating that the HAT-P-8 companions have red colors. Deeper follow-up J-band observations taken UT 2013 July-03 (see Section 4.3) detect the combined light of HAT-P-8~BC but do not spatially separate the objects as is seen at longer wavelengths. \begin{figure*} \begin{center} \includegraphics[height=2.9in]{fig2.eps} \caption{Astrometric measurements for WASP-12 and HAT-P-8. Axes correspond to the angular separation (offset) in the north and east cardinal directions as measured relative to the primary star. The combined proper motion plus parallactic motion of an infinitely distant (unassociated) object is given by the dashed and solid curves. Dashed curves correspond to the astrometric time baseline of \citet{bergfors_13}. The solid curves correspond to the astrometric time baseline of this study. \citet{bergfors_13} did not spatially resolve the WASP-12~BC and HAT-P-8~BC components, but did provide the initial detection of their combined light signal (in 2009 October). We plot the photo-center of our resolved BC companions to compare to Bergfors' 2009 and 2011 data. Our Keck AO epochs are separated by more than one year and demonstrate physical association (by themselves) for HAT-P-8; association of WASP-12 BC is established by combining our results with those of \citet{bergfors_13}. Our astrometric uncertainties are over-plotted and comparable to \citet{bergfors_13}. Our measurement precision is dominated by systematics from distortion in the individual frames. Orbital motion of these two bodies may be detectable with additional observations.} \end{center}\label{fig2} \end{figure*} \section{PHOTOMETRY AND ASTROMETRY} \subsection{PSF Model Fits} We perform a Bayesian analysis to model the AO observations of WASP-12 and HAT-P-8 at each epoch.
Specifically, Markov-Chain Monte Carlo (MCMC) numerical methods are employed to compute companion relative brightnesses and astrometric positions, and to determine uncertainties. The Metropolis-Hastings algorithm efficiently explores regions of parameter space to find the best-fitting global minimum and calculate posterior distributions for each fit parameter. We simultaneously model three point-spread functions to self-consistently account for contamination from the nearby companion stars. Free parameters include: rectilinear coordinates for each source; peak brightness of each source; sky background levels (which we model as spatially uniform); and PSF fitting parameters, $\alpha$, $\beta$, $\gamma$, $r_s$, and $w$. The observations are well-modeled using a modified Moffat function, given by: \begin{equation} I(x,y)= \sum_{i=1}^{i=3}\: \left\{ \alpha_i \left[1+\left(\frac{r_i}{r_s}\right)^{2}\right]^{-\beta}+\gamma_i\:e^{-r_i^2/w^2} \right\}, \end{equation} where $r_i= \sqrt{(x_i-{x_0}_i)^2+(y_i-{y_0}_i)^2}$ is a polar coordinate corresponding to the angular separation from each source, $i$, in the image. The term on the left describes the AO halo and the term on the right characterizes the PSF core. By separating the terms, we effectively account for tip/tilt and focal anisoplanatism in the images, although we do not allow $r_s$ and $w$ to vary individually ($w_i = w$, $r_{s_i} = r_s$) due to the already large number of degrees of freedom (twelve when including the sky background). The posterior distributions found by our MCMC algorithm marginalize over all fitting parameters. Equation 1 captures on-axis AO features but does not replicate low-order aberrations or diffraction from the first Airy ring. We have experimented with other PSF forms such as sinc(...) and sinc$^2$(...) functions.
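As an illustration, the three-source model of Equation 1 amounts to only a few lines of code. The sketch below evaluates it on a pixel grid; the grid size, source positions, and parameter values are hypothetical and chosen only for demonstration, not taken from the actual fits.

```python
import numpy as np

def psf_model(x, y, sources, beta, r_s, w):
    """Three-source PSF model: per source, a Moffat-like halo term plus a
    Gaussian core, with shared shape parameters beta, r_s, and w (cf. Eq. 1)."""
    image = np.zeros_like(x, dtype=float)
    for x0, y0, alpha, gamma in sources:
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        halo = alpha * (1.0 + r2 / r_s**2) ** (-beta)  # AO halo term
        core = gamma * np.exp(-r2 / w**2)              # PSF core term
        image += halo + core
    return image

# Hypothetical 64x64 grid: a bright primary and two faint companions.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
sources = [
    (32.0, 32.0, 1.00, 10.0),  # primary: (x0, y0, alpha, gamma)
    (20.0, 40.0, 0.04, 0.40),  # faint companion B
    (18.0, 44.0, 0.04, 0.40),  # faint companion C
]
model = psf_model(xx, yy, sources, beta=2.0, r_s=5.0, w=2.0)
```

In a fit, the per-source positions and amplitudes plus the shared shape parameters and a uniform sky level would be the free parameters explored by the MCMC sampler.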
Assuming that uncertainties in each reduced image are described by Poisson statistics at the pixel level, resulting from sky-background subtraction shot-noise, we find the results from each AO model are consistent with one another, but the uncertainties are unrealistically small. For example, angular separation measurement uncertainties are less than 1 mas (1$\sigma$). The images used for our analysis have been fully processed prior to MCMC calculations. As such, we have stacked frames acquired from different dither positions. This step is required because the companions are so much fainter than their primary star, particularly in the J-band. However, by combining images obtained from different locations on the array, we have introduced PSF spatial smearing from uncorrected optical distortions. We estimate the size of this effect using polynomial fits available for the NIRC2 array provided by Keck Observatory\footnote{Distortion correction polynomials can be found \href{http://www2.keck.hawaii.edu/inst/nirc2/dewarp.html}{\textcolor{blue}{here}.}}. Systematic errors are of order 1-2 pixels and change depending on the size of the dither pattern. Distortion corrections may be applied before image stacking but this introduces significant numerical noise. Furthermore, the correction coefficients also change slowly with time \citep{yelda_2010}. Our final adopted astrometric uncertainties were found by adding the effects of optical distortion in quadrature with those from photon noise and pixel crosstalk resulting from PSF fitting errors. We self-consistently account for uncertainty in the plate scale and orientation of the NIRC2 array \citep{ghez_08} by randomly drawing values for the plate-scale and orientation from a normal distribution and folding the results into calculations of the angular separation and position angle when converting from pixel separations to arcseconds.
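The plate-scale and orientation propagation just described can be sketched as a simple Monte Carlo. In the sketch below, the pixel offsets, per-axis error, and orientation uncertainty are illustrative assumptions (chosen to land near the WASP-12 B measurement), not the values used in the actual analysis; the plate scale is the one quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical measured pixel offsets (east, north) of a companion and a
# per-axis error dominated by optical distortion.
dx_pix, dy_pix, sigma_pix = -101.0, -34.0, 1.5

# Plate scale of 9.963 +/- 0.006 mas/pixel as quoted in the text; the
# 0.02 deg orientation uncertainty is an assumed, illustrative value.
scale = rng.normal(9.963, 0.006, n)            # mas per pixel
theta = np.radians(rng.normal(0.0, 0.02, n))   # array orientation (rad)

dx = rng.normal(dx_pix, sigma_pix, n)
dy = rng.normal(dy_pix, sigma_pix, n)

# Rotate pixel offsets onto the sky and convert to angular offsets.
east = scale * (dx * np.cos(theta) - dy * np.sin(theta))
north = scale * (dx * np.sin(theta) + dy * np.cos(theta))

rho = np.hypot(east, north)                        # separation (mas)
pa = np.degrees(np.arctan2(east, north)) % 360.0   # position angle, E of N

rho_mean, rho_err = rho.mean(), rho.std()
pa_mean, pa_err = pa.mean(), pa.std()
```

The spread of the `rho` and `pa` samples then gives the propagated uncertainties, with the per-axis distortion term dominating over the plate-scale and orientation terms, as stated in the text.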
Nevertheless, the effect from optical distortion dominates the uncertainty for each astrometric epoch as it is much greater than both pixel cross-talk and photon noise. Results for relative astrometry measurements are shown in Table 1. Although our observations from 2012 and 2013 were acquired in different filters ($K'$ and $K_s$), due to a change in the FHJ default observing strategy, this effect appears to be small since the results are nearly identical. \begin{figure*} \begin{center} \includegraphics[height=2.8in]{fig3.eps} \caption{Joint RV and imaging constraints on the presence of additional companions orbiting WASP-12 (left) and HAT-P-8 (right) using the RV accelerations measured in Section 4.4. Any unseen stars, brown-dwarfs, or gas giant planets must lie below both the limits set by Doppler RV measurements (solid line) and those set by AO imaging (dashed line). } \end{center}\label{fig3} \end{figure*} \subsection{Physical Association} We perform an astrometric analysis to assess the physical association of the WASP-12~BC and HAT-P-8~BC candidates with their primary star. To do so, we compare our astrometric measurements against the null hypothesis that the off-axis sources are infinitely distant unrelated background objects with zero parallax. WASP-12 has a small proper motion [-0.7, -7.8 mas/yr] comparable to the size (9.963 $\pm$ 0.006 mas) of a NIRC2 pixel \citep{hog_00, ghez_08}. HAT-P-8 has a proper motion that is an order of magnitude larger [75.5, 17.5 mas/yr] \citep{hog_00}. Neither star has a \textit{Hipparcos} parallax measurement, which complicates the analysis. Instead, the distance to WASP-12 is estimated using a photometric distance modulus \citep{bergfors_13} and the distance to HAT-P-8 is determined using a spectroscopic distance modulus \citep{latham_09}. We incorporate parallactic motion by converting estimated distances to a trigonometric parallax (ellipse).
The resulting differential motion across the sky between the primary star and candidate secondary/tertiary is given by the vector sum of the proper and parallactic motion \citep{zimmerman_10}. Our astrometric measurements are shown in Figure 2. Over-plotted are previous measurements taken by \citet{bergfors_13} in October 2009 that identify the combined light signal of WASP-12~BC and HAT-P-8~BC but do not spatially resolve the sources into individual components. Our Keck AO observations from 2012 and 2013 clearly separate the light from each companion star. The angular separation of WASP-12~BC is $84.3\pm0.6$ mas ($21\pm3$ AU) and the angular separation of HAT-P-8~BC is only $65.3\pm0.5$ mas ($15\pm1$AU), comparable to the diffraction limit of a 10 meter telescope at near-infrared wavelengths. Optical distortion for such small separations is negligible. To compare data on an equal footing with \citet{bergfors_13}, we plot combined light photo-centers for WASP-12 BC and HAT-P-8 BC in Figure 2. The \textit{a priori} probability of finding three point sources in a hierarchical configuration separated by only 1'' on the sky is very low. Our two astrometric epochs for WASP-12 and HAT-P-8 are separated by 393.9 days and 373.9 days, respectively. The expected motion of a background source relative to the primary star is $8.5 \pm 1.0$ mas ($0.9 \pm0.1$ pixels) for WASP-12 and $79.3\pm2.9$ mas ($8.0\pm0.3$ pixels) for HAT-P-8 over the same time-frame. With only two observations, the confirmation that WASP-12~BC are bona-fide companions is marginal. However, combining our measurements with the 2009 October initial detection from \citet{bergfors_11} we can demonstrate that the three point sources are physically associated (Figure 2). To further reinforce our results, we have determined the photometric distance modulus for WASP-12 BC. The combined light apparent magnitude of the WASP-12 System is 10.19$\pm0.02$ \citep{skrutskie_06}. 
Backing out the individual apparent magnitudes of WASP-12 BC from our relative photometry measurements, we find the distance to WASP-12 B is 263$\pm13$ pc and the distance to WASP-12 C is 267$\pm13$ pc. These values overlap with the photometric distance estimated by \citet{bergfors_13} of 250$\pm30$ pc, ruling out the possibility that they are foreground or background objects. HAT-P-8~BC are confirmed using our observations alone due to the large space motion of the host star. We cannot claim detection of orbital motion for either system because of the aforementioned systematic errors and the fact that the stars were observed with different instruments and filters. Dedicated astrometric measurements are required to determine the total dynamical mass of the secondary and tertiary in each case \citep{dupuy_10}. We note that in both cases, WASP-12 and HAT-P-8, our measurements are $\approx$ 20 mas south and $\approx$ 20 mas east of the \citet{bergfors_13} positions, suggesting possible systematics between the AstraLux and Keck data sets. \subsection{Companion Characterization} \citet{bergfors_13} assign a preliminary spectral type of M0V for WASP-12~``B'' (combined light), assuming the identified off-axis source is associated. \citet{crossfield_12} find that WASP-12~``B'' is a hot M dwarf with $\Delta K=2.45\pm0.06$ mag. We find that WASP-12~B and WASP-12~C are $\Delta K^{A,B}_s = 3.25\pm0.04$ and $\Delta K^{A,C}_s = 3.28 \pm 0.04$ mags fainter than the primary, respectively (Table 2). Combining the signal from both components, our measurements show that the expected unresolved brightness difference between the secondary/tertiary and primary star should be $\Delta K^{A,BC}_s = 2.51\pm0.03$ mag, consistent with the interpretation of \citet{crossfield_12}. To further characterize the companions around each star, we calculate absolute magnitudes based on previous distance estimates from \citet{bergfors_13} and \citet{latham_09}.
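The magnitude decomposition and distance-modulus check described in this section amount to a short calculation. The sketch below assumes the combined-light magnitude of 10.19 is in the $K_s$ band and uses the $\Delta K_s$ values and $d \approx 250$ pc from the text; the resulting $M_{K_s}$ for WASP-12 B lands within the uncertainties quoted in Table 2.

```python
import math

def decompose(m_combined, delta_mags):
    """Split a combined-light apparent magnitude into individual apparent
    magnitudes of the primary and companions, given magnitude differences
    (companion minus primary)."""
    ratios = [10 ** (-0.4 * d) for d in delta_mags]  # companion/primary flux
    m_primary = m_combined + 2.5 * math.log10(1.0 + sum(ratios))
    return [m_primary] + [m_primary + d for d in delta_mags]

def absolute_mag(m, d_pc):
    """Distance modulus: M = m - 5 log10(d / 10 pc)."""
    return m - 5.0 * math.log10(d_pc / 10.0)

# WASP-12: combined-light magnitude 10.19 (assumed Ks band), Delta Ks
# values from Table 2, and the photometric distance d ~ 250 pc.
m_A, m_B, m_C = decompose(10.19, [3.25, 3.28])
M_B = absolute_mag(m_B, 250.0)
M_C = absolute_mag(m_C, 250.0)
```

The same two steps, run in reverse with the model-predicted absolute magnitudes, give the individual photometric distances quoted for WASP-12 B and C.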
Our uncertainty in absolute magnitude is dominated by the lack of a trigonometric parallax measurement. We estimate the mass of each companion using \citet{girardi_02} evolutionary models assuming a system age of 5 Gyr. Comparing our absolute magnitudes to those of \citet{kraus_07}, we find that WASP-12~BC are consistent with M3V (Table 2). Additionally, the $J-K_s$ colors of WASP-12~BC (see Table 2) are also consistent with M stars \citep{kraus_07}. Although HAT-P-8~BC are detected during the second-epoch (UT 2013 July-03) observations, they are spatially unresolved in the J-band because the images were obtained at an airmass of 2.19. Performing aperture photometry for the pair, we find a combined difference in magnitude of $\Delta J^{A,BC}=5.9\pm0.2$. We estimate the spectral types of HAT-P-8 B and C to be $\approx$ M5V and M6V, respectively, using K-band photometry alone. \subsection{Companion Constraints} As part of the FHJ program, we obtained additional RV measurements for both systems, which we use to constrain the presence of additional companions at shorter orbital periods. Our best-fit RV slopes are: \begin{eqnarray} {dv/dt}_{WASP-12}= -4.12 \pm 4.37~\mbox{m/s/year} \nonumber \\ {dv/dt}_{HAT-P-8}= -2.72 \pm 2.39~\mbox{m/s/year} , \end{eqnarray} consistent with the absence of massive, $m\geq5~M_J$, objects out to $a \leq 8.3$ AU for WASP-12 and $a \leq10.9$ AU for HAT-P-8. Figure 3 displays joint constraints imposed by the combination of Doppler RV measurements (solid lines) and direct imaging observations (dashed lines). Should any additional companions be present in these systems, their masses must reside below both curves. Continued RV monitoring of the host stars will further eliminate regions of mass-semi-major axis parameter space. \section{SUMMARY \& DISCUSSION} We have commenced a multi-disciplinary follow-up observing program, named ``Friends of Hot Jupiters'' (FHJ), that targets a large sample of short-period gas giant transiting planet systems.
In this paper, we present AO images from Keck that spatially resolve previously identified candidate companions around WASP-12 and HAT-P-8 into two distinct sources. When combined with previous observations from \citet{bergfors_13}, our astrometric measurements show that WASP-12~BC and HAT-P-8~BC are gravitationally bound to one another as well as to the primary, making WASP-12b and HAT-P-8b members of hierarchical triple star systems. Our diffraction-limited measurements show that the two companions around WASP-12 are separated by $84.3\pm0.6$ mas ($21\pm3$ AU) and have roughly equal brightness. We estimate spectral types of M3V, consistent with the \citet{crossfield_12} (spatially unresolved) combined-light spectroscopic analysis. Our photometric measurements combined with evolutionary models indicate masses of $0.38\pm0.05~\mbox{M}_{\odot}$ and $0.37\pm0.05~\mbox{M}_{\odot}$ for WASP-12 B and C, respectively. The companions orbiting HAT-P-8 are separated by only $65.3 \pm0.5$ mas ($15\pm1$ AU) and have somewhat more disparate properties. We estimate that HAT-P-8~B has a mass of $0.22\pm0.03~\mbox{M}_{\odot}$ and HAT-P-8~C has a mass of $0.18\pm0.02~\mbox{M}_{\odot}$. In each case, our ability to characterize the system is limited by the lack of an accurate trigonometric parallax. The ongoing debate concerning the origin of misaligned hot Jupiters has brought about several potential orbital evolutionary theories. AO imaging shows significant promise to improve our understanding of the dynamical history of these systems. Although numerous candidate companions around hot Jupiter hosts have been identified (e.g., \citealt{bergfors_13}), multi-epoch astrometry that assesses the physical association of these objects requires dedicated follow-up measurements from comprehensive programs that study close-separation stellar companions in detail.
WASP-12 and HAT-P-8 may offer unique insights into the dynamics of hot Jupiter systems because their hierarchy will ultimately enable companion mass estimates using dynamics. \section{ACKNOWLEDGEMENTS} This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. J.A.J. is supported by generous grants from the David and Lucile Packard Foundation and the Alfred P. Sloan Foundation. B. T. M. is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1144469.
\section{Introduction} \label{sec:intro} Association Rule Mining (ARM) is one of the most important and popular data mining techniques for discovering unknown knowledge from transaction databases. ARM is also a part of Machine Learning (ML), with the task of discovering interesting relationships between items in large transaction datasets. The relationships are expressed by association rules determining how and why certain items are connected. The story of ARM started with the seminal paper of~\cite{agrawal1994fast}. Agrawal set the theoretical foundations for the process of ARM and proposed the first algorithm, called {\em Apriori}. Apriori is a deterministic algorithm for mining association rules, and is still featured today as one of the top algorithms in the DM domain~\citep{wu2008top}, as well as a mainstay of student textbooks. In the following years, ARM gained huge interest in the ML community. Its popularity was proven by many practical applications, especially in domains such as market-basket analysis~\citep{nisbet2018advanced}, medical diagnosis~\citep{xu2022privacy}, census data~\citep{malerba2002mining} or protein sequences~\citep{gupta2006mining}, among others. Data analysis pipelines typically consist of data cleanup and missing-data imputation (i.e., data pre-processing), the mining itself, and comprehending the mined knowledge. Thus, the whole ARM pipeline is complex (see Fig.~\ref{basic-pipeline}), because it consists of three steps, as follows: the pre-processing, the ARM, and the post-processing. The input to the pipeline is the transaction database, which consists of rows and columns, where each row represents a transaction, and the columns represent the attributes. In the pre-processing step, some optional substeps can be applied to make the data more robust, i.e., data cleaning and missing-data imputation, where outliers or rows with a lot of missing data can even be removed.
On the other hand, some other operations, for example, data squashing~\citep{fister2022datasqa}, can help reduce the transaction dataset. Then, the ARM process itself is performed. Several algorithms exist for this step, e.g., Apriori or Eclat, which are, as mentioned, among the most widely used. The output of this step is usually a huge collection of mined association rules. Usually, researchers present these rules as a table, or summarize them using some metrics. However, visualization of the association rules is needed for the best insights. \begin{figure} \centering \includegraphics[width=9cm]{Figures/ARM_pipeline.pdf} \caption{The basic ARM pipeline.} \label{basic-pipeline} \end{figure} Nowadays in ML, there is a trend toward easier representation of the results obtained by ML/AutoML (automated ML) pipelines. This intention also coincides with the emerging research area of eXplainable Artificial Intelligence (XAI)~\citep{barredo2019explainable,arrieta2020explainable}. XAI has become an important part of the future of AI, because XAI models explain the reasoning behind their decisions. This provides an increased level of understanding between humans and machines, which can help build trust in AI systems~\citep{kumar2022what}. In summary, XAI is a set of processes and methods to comprehend and trust the results created by ML algorithms. In line with this, it tries to describe an AI model's impact on the one hand, and to expose its potential biases on the other. Thus, the ML model is evaluated according to its accuracy, fairness, transparency, and the outcomes of AI-powered decision-making~\citep{borego2022explainable}. XAI can be manifested in several forms: text explanation, visualization, local explanation, explanation by example, explanation by simplification, and feature relevance~\citep{barredo2019explainable,bennetot4229624practical}.
Thus, there is increased interest among researchers in developing new methods for easier representation of the results. One of the most important parts of these efforts is visualization methods \citep{arrieta2020explainable}. Typically, ARM algorithms generate a huge number of association rules. Frequently, the results are opaque for ordinary users, and need some explanation to understand their meaning. On the other hand, visualization of the results has a huge explanation power. Although a lot of visual methods have been proposed for ARM, to the best of our knowledge, no review dealing with this problem from the XAI point of view exists to date. Therefore, the aim of this paper is to collect and discuss visualization techniques for ARM that have appeared from its advent to the present day. Each method is studied in detail and its features are compared with those of the other methods in the sense of XAI. The contributions of this review paper are summarized as follows: \begin{itemize} \item The evolution of ARM visualization methods is presented. \item The features of each of the methods are defined. \item The advantages/disadvantages of each method are outlined. \item An example is presented for each of the surveyed methods. \item Explaining models using the ARM visualization are summarized. \end{itemize} The review of the ARM visualization methods is based on papers published in three main sources: the ACM Digital Library, IEEEXplore, and Google Scholar. The analysis of the methods is highlighted from the following points of view: (1) characteristics, (2) visualization focus, and (3) attribute type. The taxonomies of the ARM visualization methods are introduced based on these highlights. The structure of the paper is organized as follows: Section~\ref{sec:2} deals with the ARM problem in a nutshell. The mathematical definition of the ARM visualization is the subject of Section~\ref{sec:3}.
A detailed overview of traditional ARM visualization methods is reviewed in Section~\ref{sec:4}. New ideas in ARM visualization are the subject of Section~\ref{sec:5}. The subject of Section~\ref{sec:6} is a review of graphical systems, while Section~\ref{sec:7} introduces challenges and open problems. The review concludes with Section~\ref{sec:8}, which summarizes the performed work and outlines potential ideas for future work. \section{Association rule mining in a nutshell}\label{sec:2} The ARM problem is defined formally as follows: Let us suppose that a set of items $I=\{i_1, \ldots,i_M\}$ and a transaction database $D=\{Tr_1,\ldots,Tr_N\}$ are given, where each transaction $Tr_i$ is a subset of items $Tr_i \subseteq I$. Thus, the variable $M$ designates the number of items, and $N$ the number of transactions in the database. Then, an association rule can be defined as an implication: \begin{equation} X \Rightarrow Y, \label{arule} \end{equation} \noindent where $X \subset I$ (left-hand-side or LHS), $Y \subset I$ (right-hand-side or RHS), and $X \cap Y = \emptyset$. The following four measures are defined for evaluating the quality of the association rule~\citep{agrawal1994fast}: \begin{equation} \mathit{supp}(X \Rightarrow Y) = \frac{n(X \cap Y)}{N}, \label{Eq:2} \end{equation} \begin{equation} \mathit{conf}(X \Rightarrow Y) = \frac{n(X \cap Y)}{n(X)}, \label{Eq:1} \end{equation} \begin{equation} \mathit{lift}(X \Rightarrow Y) = \frac{\mathit{supp}(X \cap Y)}{\mathit{supp}(X)\times \mathit{supp}(Y)}, \label{Eq:lift} \end{equation} \begin{equation} \mathit{conv}(X \Rightarrow Y) = \frac{1-\mathit{supp}(Y)}{1-\mathit{conf}(X\Rightarrow Y)}, \label{Eq:conv} \end{equation} \noindent where $\mathit{supp}(X \Rightarrow Y) \geq S_{\mathit{min}}$ denotes the support, $\mathit{conf}(X \Rightarrow Y)\geq C_{\mathit{min}}$ the confidence, $\mathit{lift}(X \Rightarrow Y)$ the lift, and $\mathit{conv}(X \Rightarrow Y)$ the conviction of the association rule $X \Rightarrow Y$.
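These four measures can be verified on a toy transaction database; the following minimal Python sketch (the transactions and helper names are illustrative, not taken from any ARM library) mirrors Eqs.~(\ref{Eq:2})--(\ref{Eq:conv}), where $n(X \cap Y)$ counts the transactions containing all items of both $X$ and $Y$:

```python
# Toy transaction database D: each transaction is a set of items.
D = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
N = len(D)

def n(itemset):
    """Number of transactions of D containing every item in `itemset`."""
    return sum(1 for tr in D if itemset <= tr)

def supp(X, Y):  # support
    return n(X | Y) / N

def conf(X, Y):  # confidence
    return n(X | Y) / n(X)

def lift(X, Y):  # lift
    return supp(X, Y) / ((n(X) / N) * (n(Y) / N))

def conv(X, Y):  # conviction (undefined/infinite when confidence is 1)
    c = conf(X, Y)
    return float("inf") if c == 1 else (1 - n(Y) / N) / (1 - c)

X, Y = {"bread"}, {"milk"}
print(supp(X, Y), conf(X, Y), lift(X, Y), conv(X, Y))
```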
There, $N$ in Eq.~(\ref{Eq:2}) represents the number of transactions in the transaction database $D$, and $n(.)$ is the number of transactions containing the particular itemset within $D$. Additionally, $S_{\mathit{min}}$ denotes the minimum support and $C_{\mathit{min}}$ the minimum confidence, determining that only those association rules with confidence and support higher than $C_{\mathit{min}}$ and $S_{\mathit{min}}$ are taken into consideration, respectively. The interpretations of the measures are as follows: The support measures the proportion of transactions in the database which contain the itemset. The confidence estimates the conditional probability $P(Y|X)$, denoting the probability of finding the $Y$ of the rule in a transaction under the condition that this transaction also contains the $X$. The lift is the ratio of the observed support of $X$ and $Y$ arising together in a transaction to the support expected if both sets of items were independent. The conviction evaluates the frequency with which the rule makes an incorrect prediction. \subsection{Numerical association rule mining} Numerical Association Rule Mining (NARM) extends the idea of ARM, and is intended for mining association rules where attributes in a transaction database are represented by numerical values. Usually, traditional algorithms, e.g., Apriori, require a discretization of numerical attributes before they are ready to use. The discretization is not always trivial, and can affect the results of mining negatively. On the other hand, many methods for ARM exist that do not require the discretization step before applying the process of mining. Most of these methods are based on population-based nature-inspired metaheuristics, such as, for example, Differential Evolution (DE)~\citep{storn1997differential} or Particle Swarm Optimization (PSO)~\citep{kennedy1995particle}.
Consequently, NARM has recently shown its importance in the data revolution era, which has been confirmed by several review papers~\citep{altay2019performance,telikani2020survey} tackling this class of problems. Each numerical attribute is determined in NARM by an interval of feasible values limited by its lower and upper bounds. The broader the interval, the more association rules are mined; the narrower the interval, the more specific the relations discovered between attributes. Introducing intervals of feasible values has two major effects on the optimization: to change the existing discrete search space to a continuous one, and to adapt these continuous intervals to suit the problem of interest better. Mined association rules can be evaluated according to several criteria, like support and confidence. For NARM, however, additional measures must be considered, in order to evaluate the mined set of association rules properly. \subsection{Time Series Association Rule Mining} TS-ARM is a new paradigm, which treats a transaction database as time series data. The formal definition of the ARM problem needs to be redefined in line with this. In TS-ARM, the association rule is defined as an implication: \begin{equation} X(\Delta t)\implies Y(\Delta t), \end{equation} where $X(\Delta t)\subset I$, $Y(\Delta t)\subset I$, and $X(\Delta t)\cap Y(\Delta t)=\emptyset$. The variable $\Delta t=[t_1,t_2]$ determines the sequence of the transactions which have arisen within the interval between $t_1$ and $t_2$, where $t_1$ denotes the start and $t_2$ the end time of the observation.
The measures of support and confidence are redefined as follows: \begin{equation} \mathit{conf_t}(X(\Delta t) \implies Y(\Delta t)) = \frac{n(X(\Delta t) \cap Y(\Delta t))}{n(X(\Delta t))}, \label{Eq:1t} \end{equation} \begin{equation} \mathit{supp_t}(X(\Delta t) \implies Y(\Delta t)) = \frac{n(X(\Delta t) \cap Y(\Delta t))}{N(\Delta t)}, \label{Eq:2t} \end{equation} where $\mathit{conf_t}(X(\Delta t) \implies Y(\Delta t))\geq C_{\mathit{min}}$ and $\mathit{supp_t}(X(\Delta t) \implies Y(\Delta t))\geq S_{\mathit{min}}$ denote the confidence and support of the association rule $X(\Delta t)\implies Y(\Delta t)$ within the same time interval $\Delta t$. \section{Visualization of association rule mining}\label{sec:3} Visualization of ARM can be described mathematically as a set of triplets: \begin{equation} \mathcal{R}=\{\langle X_1,Y_1,Z_1\rangle,\ldots,\langle X_i,Y_i,Z_i\rangle,\ldots,\langle X_n,Y_n,Z_n\rangle \}, \end{equation} where $X_i$ denotes an antecedent, $Y_i$ a consequent, and $Z_i$ a vector of available interestingness measures (e.g., support, confidence, etc.) for $i=1,\ldots,n$. In a nutshell, different visualization methods depend on: \begin{itemize} \item the number of interestingness measures to display, \item the visualization focus, \item the rule set size. \end{itemize} The number of interestingness measures to display is limited by the number of dimensions that can be visualized (i.e., 2D or 3D). The visualization focus determines how the neighborhood of rules to be visualized is defined. In line with this, the neighborhood is defined by: interestingness measure, items, similarity of the RHS and LHS, or time series visualization. The rule set size limits the number of association rules that are included in a specific visualization method.
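The triplet representation above, together with the rule-set-size limitation, can be sketched in a few lines of Python (the rules and measure values below are invented for illustration):

```python
# Each mined rule is a triplet <X, Y, Z>: antecedent, consequent, and a
# vector of interestingness measures, mirroring the set R defined above.
rules = [
    ({"odor=none"}, {"class=edible"}, {"supp": 0.33, "conf": 0.96, "lift": 1.8}),
    ({"gill-size=broad"}, {"class=edible"}, {"supp": 0.45, "conf": 0.78, "lift": 1.5}),
    ({"bruises=no"}, {"class=poisonous"}, {"supp": 0.29, "conf": 0.60, "lift": 1.2}),
]

def top_k(rules, measure, k):
    """Limit the rule set size before visualization by keeping only the
    k rules with the highest value of the chosen measure."""
    return sorted(rules, key=lambda r: r[2][measure], reverse=True)[:k]

best = top_k(rules, "lift", 2)
print([r[2]["lift"] for r in best])
```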
\subsection{Study design} \label{res-methodology} For conducting the systematic literature review, we followed the guidelines presented in the Systematic Literature Review Guidelines in Software Engineering \citep{kitchenham2007guidelines}. Our primary goal was to identify the frequency of the ARM visualization methods, the main features of these methods, and the applications in which these methods were applied. According to our goals, we developed the following Research Questions (RQs): \begin{itemize} \item RQ1: Which methods are developed for ARM visualization? \item RQ2: Which challenges and open problems lie behind ARM visualization? \item RQ3: Which software packages are available to users tackling these problems? \item RQ4: What awaits the methods for visualization of association rules in the future? \end{itemize} We conducted a literature search using major databases from 18 to 22 November 2022. The main search strings that were used for searching the databases were as follows: ``association rule mining'' AND ``visualization'' OR ``visualisation''. The search string was also modified according to the search formats of the different databases. Table~\ref{tab:papers} presents the results of our search\footnote{Note that we also checked the citing articles of results from Google Scholar manually.}. Each of the papers was prescreened according to its abstract and keywords.
\begin{table}[htb] \caption{Search results of papers regarding the keywords in various databases.} \label{tab:papers} \centering \begin{tabular}{ l l r r } \hline \textbf{Database name} & \textbf{URL} & \textbf{Total} & \textbf{Included} \\ \hline ACM Digital Library & \href{https://dl.acm.org}{dl.acm.org} & 6 & 4 \\ IEEEXplore & \href{https://ieeexplore.ieee.org}{ieeexplore.ieee.org} & 214 & 21 \\ Google Scholar & \href{https://scholar.google.com}{scholar.google.com} & 16,100 & 25+ \\ \hline Total & & 16,320 & 25+ \\ \hline \end{tabular} \end{table} When the results were collected, we also filtered out the duplicates. Additionally, when searching through Google Scholar, we checked for citing articles of each paper, so that additional results were identified and included in this review paper. We also specified the selection and exclusion criteria, as well as limitations. The selection criteria were as follows: (1) the research paper addresses any kind of ARM and its connection with visualization, and (2) the research must be peer reviewed, i.e., published in a refereed conference, journal paper, book chapter, or monograph. The search was conducted with exclusion criteria as follows: ``The research paper is not written in the English language'', and limitations such as: ``The literature review search was limited to only three databases''. The summary of abstracts from IEEEXplore and ACM Digital Library publications is shown in the wordcloud in Fig.~\ref{wordcloud}. \begin{figure} \centering \includegraphics[width=12cm]{Figures/arm-wordcloud.png} \caption{Wordcloud of the extracted abstracts.} \label{wordcloud} \end{figure} \section{A detailed overview of traditional ARM visualization methods}\label{sec:4} In the following subsections, each of the methods is outlined, followed by a summary of related work, while several methods are also illustrated by an example.
The examples of the particular visualization are implemented in arulesViz \citep{hahsler2011arules} on a set of 11,267 association rules produced by the Apriori algorithm \citep{agrawal1994fast} mining the Mushroom UCI ML dataset \citep{uci2020repository} using the following limitations: $S_{\mathit{min}}=0.3$ and $C_{\mathit{min}}=0.5$. Table~\ref{tab:1} presents a summary of the traditional ARM visualization methods that were found in our systematic literature review. It is divided into four columns that present: a sequence number (column 'Nr.'), a class (column 'Class'), a variant (column 'Variant'), and the method's developer (column 'Reference'). \begin{table}[htb] \caption{Summary of ARM visualization methods.} \label{tab:1} \begin{tabular}{ c|l|l|l } \hline Nr. & Class & Variant & Reference \\\hline \multirow{2}{*}{1} & \multirow{2}{*}{Scatter} & Scatter plot & \citet{bayardo1999mining} \\ & & Two-key plot & \citet{unwin2001twokey} \\ \hline \multirow{1}{*}{2} & \multirow{1}{*}{Graph} & Graph-based & \citet{klemettinen1994finding} \\\hline \multirow{2}{*}{3} & \multirow{2}{*}{Matrix} & Matrix-based & \citet{Ong2002crystalClear} \\ & & Grouped matrix-based & \citet{hasler2017visualizing} \\\hline \multirow{2}{*}{4} & \multirow{2}{*}{Mosaic} & Mosaic plot & \citet{Hofmann2008mosaic} \\ & & Double decker plot & \citet{hofman2001visual} \\\hline \hline \end{tabular} \end{table} As can be seen from the table, we are focused on four classes of visualization methods and their variants (together seven visualization methods). In the remainder of the paper, the aforementioned visualization methods are illustrated in a nutshell. \subsection{Scatter plot} A Scatter plot (Fig.~\ref{fig:Scatter}) was first used for visualizing mined association rules by \citet{bayardo1999mining}.
\begin{figure}[htb] \centering \begin{subfigure}[t]{.48\linewidth} \includegraphics[width=1.\textwidth]{Figures/Scatter.png} \caption{Scatter plot.} \label{fig:Scatter} \end{subfigure} \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\linewidth]{Figures/Two-key.png} \caption{Two-key plot.} \label{fig:Twokey} \end{subfigure} \caption{Scatter and Two-key plots powered by arulesViz.} \label{fig:Scatter_Two-key} \end{figure} In general, this plot is used to display an association or relationship between interestingness measures $Z_i$ (usually support and confidence) that are presented as dots in the Scatter plot. Additionally, a third measure (usually lift) is included as a color key. Thus, rules with similar values of interestingness measures are placed closer to each other, while a correlation can be established between dependent and independent variables. Typically, the so-called regression line is drawn in the Scatter plot, representing the trend of the relationship between the two observed variables. This line can also be used as a predictive tool in some circumstances. \subsubsection{Two-key plot} A two-key plot (Fig.~\ref{fig:Twokey}) is a special kind of Scatter plot that was developed by \citet{unwin2001twokey} especially for analyzing association rules. It consists of a two-dimensional Scatter plot displaying an association between two measures of interestingness (usually support and confidence), while the third measure is represented by the color of the points (i.e., support/confidence pairs), where the color corresponds to the length of the rule (also called order). Interestingly, 2-order association rules describe trails moving from the upper right side (perfect result) to the lower left side of the same plot (lesser support and lesser confidence).
\subsection{Graph-based} Graph-based techniques (Fig.~\ref{fig:graph}) identify how rules share individual items \citep{klemettinen1994finding,rainsford2000temporal,buono2005visualizing,ertek2006framework}. They visualize \begin{figure}[htb] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=1.\textwidth]{Figures/Graph.png} \caption{Graph plot powered by arulesViz.} \label{fig:graph} \end{minipage}%
\end{figure} association rules using vertices and edges, where vertices annotated with item labels represent items, and itemsets or rules are represented as a second set of vertices. The items are connected with itemsets/rules using arrows. For rules, arrows pointing from items to rule vertices indicate LHS items, and an arrow from a rule to an item indicates the RHS. Interestingness measures are typically added to the plot by using the color or the size of the vertices representing the itemsets/rules. Graph-based visualizations offer a very clear representation of rules, but they tend to become cluttered easily, and, thus, are only viable for very small sets of rules. \subsection{Matrix-based} Matrix-based visualization \citep{Ong2002crystalClear} (Fig.~\ref{fig:Matrix}) identifies associations between antecedent (LHS) and consequent (RHS) items.
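The vertex-and-edge structure behind the graph-based plot above can be sketched without any plotting library (the rules are invented; each rule becomes an auxiliary vertex between its LHS and RHS items):

```python
# Bipartite structure of a graph-based rule plot: item vertices and rule
# vertices; arrows run LHS item -> rule -> RHS item.
rules = [
    ({"odor=none"}, {"class=edible"}),
    ({"odor=none", "gill-size=broad"}, {"class=edible"}),
]

edges = []
for idx, (lhs, rhs) in enumerate(rules):
    rule_vertex = f"r{idx}"
    edges += [(item, rule_vertex) for item in sorted(lhs)]  # LHS items -> rule
    edges += [(rule_vertex, item) for item in sorted(rhs)]  # rule -> RHS items

for src, dst in edges:
    print(f"{src} -> {dst}")
```

Shared items (here, `odor=none`) appear as a single vertex with arrows into several rule vertices, which is exactly what makes larger rule sets cluttered.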
\begin{figure}[htb] \centering \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\linewidth]{Figures/Matrix.png} \caption{Matrix plot.} \label{fig:Matrix} \end{subfigure} \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\textwidth]{Figures/Matrix3D.png} \caption{Grouped matrix-based plot.} \label{fig:Matrix3D} \end{subfigure} \caption{Matrix and Grouped matrix-based plots powered by arulesViz.} \label{fig:Matrix2} \end{figure} Thus, association rules are organized as a square matrix $M=\{m_{j,k}\}$ of dimension $M\times M$, in which distinct antecedent items $X_i\in\{x_{i,j}\}$ for $j=1,\ldots,|X_i|$ and distinct consequent items $Y_i\in\{y_{i,k}\}$ for $k=1,\ldots,|Y_i|$ are included. The values of some interestingness measure (e.g., lift) are then assigned to the corresponding position $m_{j,k}=Z_i$ of the matrix. Typically, the antecedent itemset of the rules is ordered by increasing support, while the consequent itemset by increasing confidence, before visualization. However, the matrix visualization is limited by the rule set size (i.e., $<1,000$), especially in the case of a huge matrix, which makes the exploration of the matrix much harder. \subsubsection{Grouped matrix-based visualization} The grouped matrix-based visualization \citep{hasler2017visualizing} (Fig.~\ref{fig:Matrix3D}) is a variant of the original matrix-based visualization, where the large set of different antecedents (the columns in matrix $M$) is grouped into a smaller set of groups using clustering.
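The clustering of antecedent columns can be sketched with a small, self-contained $k$-means in Python (the column vectors are toy data; arulesViz uses R's k-means implementation, so this only illustrates the grouping step formalized next):

```python
import random

random.seed(1)

# Each antecedent corresponds to a column of the matrix M: here, a toy
# vector of interestingness values (e.g., lift) per consequent row.
columns = [[1.2, 0.9], [1.1, 1.0], [3.0, 2.8], [2.9, 3.1]]
k = 2

def kmeans(points, k, iters=20):
    """Plain k-means: assign points to the nearest center, recompute centers."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            groups[nearest].append(p)
        centers = [
            [sum(v) / len(g) for v in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups

groups = kmeans(columns, k)
print([sorted(g) for g in groups])
```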
Mathematically, the set of antecedents is grouped into a set of $k$ groups $S=\{S_1,\ldots,S_k\}$ by minimizing the sum of squares within the particular cluster, in other words: \begin{equation} \arg\min_S\sum_{i=1}^k\sum_{m_{i,j}\in S_i} ||m_{i,j}-\overline{m}_i||^2, \end{equation} where $\mathbf{m}_i=\{m_{i,j}\}$ for $j=1,\ldots,|A_i|$ is a column $i$ of matrix $\mathbf{M}$ which represents all values with the same antecedent, and $\overline{m}_i$ is the center of the cluster $S_i$. Thus, the $k$-means algorithm~\citep{hartigan1979k-means} is applied 10 times with random initialization of the centroids. The best solution is then used for the ARM visualization. The motivation behind this ARM visualization method is to reduce the antecedent dimension, which enables a more informative visualization of the association rules. \subsection{Mosaic plot} A mosaic plot \citep{hartigan1984mosaic} is applied for visualizing an interesting rule, consisting primarily of categorical attributes (Fig.~\ref{fig:Mosaic}). It is based on the so-called contingency table, in which the frequencies of the attribute appearances in the interesting rule $r^*$ are assigned to each position $m_{j,k}$, where $j$ denotes the corresponding antecedent attribute $A_j$ and $k$ the consequent attribute $A_k$.
\begin{figure}[htb] \centering \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\linewidth]{Figures/Mosaic.pdf} \caption{Mosaic plot.} \label{fig:Mosaic} \end{subfigure} \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\textwidth]{Figures/DoubleDecker.pdf} \caption{Double Decker plot.} \label{fig:DoubleDecker} \end{subfigure} \caption{Mosaic and Double Decker plots powered by arulesViz.} \label{fig:DoubleDecker2} \end{figure} The interesting rule is determined as follows: Let us assume that each rule $r_i\in \text{R}$ is a tuple $r_i=\langle X_i,Y_i,Z_i\rangle$, where $X$ denotes the attributes $A=\{A_1,\ldots,A_p\}$ belonging to the antecedent, $Y$ those belonging to the consequent, $Z$ is a set of interestingness measures, and $X\cap Y=\emptyset$. Then, the interesting rule $r^*$ for visualizing with the mosaic plot is defined as \begin{equation} r^*: X\Rightarrow Y|Z, \end{equation} where $X=\{A_{x_1}=a_{x_1}\wedge\ldots\wedge A_{x_k}=a_{x_k}\}$, $Y=\{A_{y}=a_{y}\}$, and $Z=\{\mathit{supp},\mathit{conf}\}$, for which the difference of confidence (doc) between the rules $X\Rightarrow Y$ and $\neg X\Rightarrow Y$ is the maximum, in other words: \begin{equation} \max_{r\in\text{R}}\mathit{conf}(X\Rightarrow Y)-\mathit{conf}(\neg X\Rightarrow Y). \end{equation} Mosaic plots were introduced as a graphical analogy of multivariate contingency tables \citep{hofman2000visualizing}. This means that the position $m_{i,j}$ (also a cell in a contingency table) is presented in a mosaic plot as an area divided into a highlighted (colored) part that is proportional to the support of the rule $X\Rightarrow Y$, and an unhighlighted part proportional to that of the rule $\neg X\Rightarrow Y$. Thus, the confidence is proportional to the height of the highlighted part of the area. \subsubsection{Double Decker plot} The Double Decker plot \citep{hofmann2000mosaic} allows comparing the proportions of the highlighted heights, referring to the confidence measure, more easily (Fig.~\ref{fig:DoubleDecker}).
While the original mosaic plot splits tiles in vertical and horizontal directions, the Double Decker splits them only horizontally. As a result, the antecedent of the interesting rule is now expressed mathematically as: \begin{equation*} X=\{A_{x_1}=\cdot\wedge A_{x_p}=\cdot\}, \end{equation*} i.e., the proportions of the highlighted heights are presented in each tile of the mosaic plot, while the widths of the tiles are represented as labels denoting the antecedent's attributes. Thus, the highlighted shades illustrate relations whose outcome is 'True', while the white shades refer to relations whose outcome is 'False'. \section{New ideas in the visualization of association rules}\label{sec:5} This section reviews papers dealing with ARM visualization methods that introduce new ideas into this domain. The ideas are collected in Table~\ref{tab:1a}, \begin{table}[htb] \caption{Summary of the new ARM visualization methods.} \label{tab:1a} \begin{tabular}{ c|l|l|l } \hline Nr. & Class & Variant & Reference \\\hline 1 & \multirow{1}{*}{Fishbone} & Ishikawa diagram & \citet{tsurinov2021farm} \\\hline 2 & \multirow{1}{*}{Molecular} & Molecular representation & \citet{said2013visualisation} \\\hline 3 & \multirow{1}{*}{Lattice} & Concept lattice & \citet{shen2020research} \\\hline 4 & \multirow{1}{*}{Metro} & Metro map & \citet{fister2022information} \\\hline 5 & \multirow{1}{*}{Sankey} & Sankey diagram & \citet{fister2022association} \\\hline 6 & \multirow{1}{*}{Ribbon} & Ribbon plot & \citet{fister2020visualization} \\\hline 7 & \multirow{1}{*}{Glyph} & Glyph-based & \citet{hrovat2015interestingness} \\\hline \hline \end{tabular} \end{table} from which it can be seen that, here, we focused on the seven ARM visualization methods which, in our opinion, best reflect the development in this domain. In the remainder of the section, the selected ARM visualization methods are illustrated in detail.
\subsection{Ishikawa diagram} Typically, the Ishikawa diagram \citep{tague2005quality} is applied as a cause analysis tool appropriate for describing the structure of a brainstorming session, in which a development team tries to identify possible reasons causing a specific effect. Consequently, the Ishikawa chart is also called a cause/effect diagram. As a result of the brainstorming process, a fishbone diagram is constructed as an arrow with an arc directing to the effect (i.e., a problem statement). Then, the possible causes of the problem need to be identified; these are presented as branches originating from the main arrow. The diagram has also been applied in ARM visualization. For instance, the authors of~\citep{tsurinov2021farm} established that ARM algorithms produce a large number of mined association rules in unstructured form. This means that there is no information about which features are more relevant for a user. In this sense, they proposed the Fishbone ARM (FARM), which introduces a hierarchical structure for rules. The structure makes the priority of features clearly visible. The fishbone structure presents a basis for visualization with FARM. In this structure, features, inserted as ribs in a symbolic fishbone, are ordered such that the conviction metric values grow from the rear toward the head. Thus, the complexity of the structure increases by adding additional attributes. On the other hand, the statistical significance of the results also needs to be increased. In line with this, cross-validation is employed for evaluating the significance; it splits the resulting dataset into two portions (i.e., test and validation), which are then re-sampled over several iterations. \subsection{Molecular representation} A molecule is a group of two or more atoms connected together with chemical bonds (e.g., covalent, ionic) \citep{ebbing2016general}.
Therefore, a molecular representation refers to a connected graph with nodes denoting atoms and edges denoting the chemical bonds between them. This representation inspired \citet{said2013visualisation} to develop a new ARM visualization method devoted to visualizing items arising in the antecedent and consequent of the selected association rule. Thus, two characteristics need to be determined: (1) the contribution of each item to the rule, and (2) the correlation between each pair of antecedents and each pair of consequents from an archive of association rules. The association rules are explored before visualization according to one of the interestingness measures selected by the user, e.g., support, confidence, and lift. The contribution of an item in the selected association rule $R=X\implies Y$ is calculated by measuring the Information Gain (IG) defined by~\citet{freitas1998on}: \begin{equation} IG(A_i)=Info(R)-Info(R|A_i), \end{equation} where \begin{equation} \begin{aligned} Info(R) &= -\sum_{j=1}^n{P(R_j)\log P(R_j)},~\text{and}\\ Info(R|A_i) &= \sum_{k=1}^m{P(A_{i,k})}\left(-\sum_{j=1}^n{P(R_j|A_{i,k})\log P(R_j|A_{i,k})}\right). \end{aligned} \end{equation} Thus, it holds that attributes with higher values of $IG$ are good predictors of the selected rule. In contrast, if items with low or negative $IG$ values are encountered, the selected rules are estimated as irrelevant. On the other hand, the lift interestingness measure (Eq.~\ref{Eq:lift}) is applied for determining the correlations between pairs of items in the antecedent and consequent, respectively. The visualization of the molecular representation is typically realized using 3D sphere graphs (also powered by R), where spheres represent items and edges of different lengths the connections between them.
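The information-gain computation above can be sketched on toy probabilities (all numbers below are invented; the $Info$ terms are plain entropies, here taken with base-2 logarithms):

```python
import math

def entropy(probs):
    """Info(.) from the equations above, using log base 2."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented probabilities: P(R_j) over rule outcomes, P(A_{i,k}) over the
# attribute's values, and the conditional P(R_j | A_{i,k}).
p_rule = [0.6, 0.4]
p_attr = [0.5, 0.5]
p_rule_given = [[0.9, 0.1], [0.3, 0.7]]

info_r = entropy(p_rule)
info_r_given_a = sum(pa * entropy(cond) for pa, cond in zip(p_attr, p_rule_given))
ig = info_r - info_r_given_a  # IG(A_i) = Info(R) - Info(R|A_i)
print(round(ig, 3))
```

A positive $IG$, as here, marks the attribute as a good predictor of the rule; a value near zero or negative would mark it as irrelevant.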
The calculated characteristics of items in the selected rule are captured in a sphere graph as follows: \begin{itemize} \item the size of the sphere is proportional to the value of $IG$, \item a positive value of $IG$ is plotted as a sphere of one color (e.g., blue), while a negative one as a sphere of another color (e.g., white), \item the distance between two spheres is proportional to the lift measure. \end{itemize} However, the authors of~\citep{said2013visualisation} simplified the visualization of association rules based on a molecular representation by developing a tool for VISual mining and Interactive User-Centred Exploration of Association Rules (IUCEARVis). In summary, the main weakness of the molecular structure is that it shows the importance of items to rules, and cannot show the distribution of association rules. \subsection{Concept lattice} A concept lattice is a tool for extracting specific information from massive data. It is obtained after a concept analysis that belongs to the domain of applied mathematics \citep{truong2010structure}. The results of the concept analysis are aggregated in a data structure that is, typically, presented as a Hasse graph. The Hasse graph consists of concepts represented as nodes in a 2-dimensional lattice, and edges expressing the generalization and instantiation relationships between the concepts \citep{shen2020research}. Formally, the concept lattice is defined as a triple $L=\langle O,A,B\rangle$, where $O$ denotes a set of objects, $A$ a set of attributes, and $B\subseteq O\times A$ is a binary relationship matrix denoting that an object $o\in O$ and attribute $a\in A$ are in a relationship if $(o,a)\in B$. Thus, a node in the concept lattice is defined as a pair $\langle A,B\rangle$, where the former member is also called an extension (i.e., a collection of objects), and the latter a connotation (i.e., a collection of attributes).
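The extension/connotation pair of a concept can be derived from a toy binary context as follows (the context and the `extent`/`intent` helper names are illustrative):

```python
# Toy formal context B subset of O x A, stored as: object -> set of attributes.
B = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}

def intent(objects):
    """Connotation: attributes shared by all given objects."""
    return set.intersection(*(B[o] for o in objects))

def extent(attrs):
    """Extension: objects possessing all given attributes."""
    return {o for o, owned in B.items() if attrs <= owned}

# A concept pairs an extent with its intent, closed under both maps.
objs = extent({"b"})
print(sorted(objs), sorted(intent(objs)))
```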
Indeed, a combination of objects and attributes is needed for a more comprehensive analysis of the association rules. The task of the ARM visual algorithms based on this context is to display association rules extracted from the concept lattice. Thus, the central area of the visualization interface consists of a 2-dimensional lattice, within which the concepts are positioned as points according to their values of support and confidence. Two lines are attached below and above the lattice: The former represents the objects which have arisen in the antecedent, while the latter the same in the consequent of the potential association rule. Indeed, if there is a relationship between a particular object and attribute in the relationship matrix, i.e., $(o,a)\in B$, the object is connected with the node (concept) using an edge. The advantages of the ARM visualization based on a concept lattice can be summarized as follows: (1) a deeper understanding of association rules at the conceptual level, and (2) analyzing the relationships between concepts more comprehensively. However, the main weakness of the visualization is that it is only appropriate for visualizing the binary values of objects. In order to overcome this problem, \citet{yang2005pruning} proposed generalized association rules capable of visualizing the frequent rules in an itemset lattice that presents one item per parallel coordinate. In this way, many-to-many rules can be visualized on the one hand, and a large number of rules, as selected by the user, can be displayed on the other. Obviously, the advantage of these ARM visualization methods is that the user can limit the number of association rules for visualization interactively by specifying the parameters $S_{\min}$ and $C_{\min}$. \subsection{Metro maps} The concept of information maps enables analysis of data having a ``geographical look'' \citep{shahaf2012trains,shahaf2015metro}. The look can also be prescribed to mined association rules.
Therefore, the idea to visualize these in the form of metro maps has become appreciated \citep{fister2022information}. This means that, similar to how a metro map can help users orient themselves in the environment, the information map can help them to understand the information hidden in the mined association rules. Thereby, the metro map is divided into several metro lines, consisting of various metro stops. In the information sense, each metro stop represents an attribute, while the metro lines represent linear sequences of the attributes (also different association rules). Mutual connections between the metro lines reveal how an attribute in one association rule affects an attribute in the other, and vice versa. Finally, understanding the linear sequences of attributes and the connections between them can even tell stories about the specific information domain. The metro map is defined mathematically as $\mathcal{M}=(G,\Pi)$, where $G=(A,E)$ denotes an attribute graph of vertices $A=\{A_1,\ldots,A_M\}$, representing attributes, and edges $E=\{r_1,\ldots,r_n\}$, representing simple rules (i.e., rules with one antecedent attribute and one consequent attribute), together with an incident function $\psi_G$ that associates an ordered pair $\psi_G=(X,Y)$ denoting the implication $X\implies Y$, and $\Pi$ is a set of metro lines $\pi\in\Pi$ \citep{fister2022information}. An evolutionary algorithm was applied in~\citep{fister2022information} for constructing a metro map that must obey the following four objectives: (1) maximum path length $\tau$, (2) maximum map size $K$, (3) high coverage, and (4) high structure quality. Indeed, the maximum path length refers to the maximum number of metro stops (i.e., attributes) in a linear sequence. The maximum map size limits the number of metro lines.
The coverage is proportional to the lift interestingness measure, where we are interested in rules with a lift value $>1$, determining the degree to which the occurrence probabilities of the antecedent and of the consequent depend on one another. The structure quality ensures that the linear sequences of the metro stops are coherent in all metro lines. An example of a metro map obtained by mining the Mushroom dataset, constructed using the parameters $\tau=6$ and $|\mathcal{K}|=4$, is illustrated in Fig.~\ref{fig:MM}. \begin{figure}[htb] \centering \begin{minipage}{.6\textwidth} \includegraphics[width=.96\linewidth]{Figures/Mush_MM2.png} \caption{Metro map plot powered by R.} \label{fig:MM} \end{minipage} \begin{minipage}{.38\textwidth} \scriptsize \begin{tabular}{ |c|l| } \hline Tag & Attribute \\ \hline at1\_e & class\_edible \\ at1\_p & class\_poisonous \\ at5\_f & bruises?\_no \\ at5\_t & bruises?\_bruises \\ at6\_n & odor\_none \\ at7\_f & gill-attachment\_free \\ at8\_c & gill-spacing\_close \\ at9\_b & gill-size\_broad \\ at11\_t & stalk-shape\_tapering \\ at13\_s & stalk-surface-above-ring\_smooth \\ at14\_s & stalk-surface-below-ring\_smooth \\ at17\_p & veil-type\_partial \\ at18\_w & veil-color\_white \\ at19\_o & ring-number\_one \\ at20\_p & ring-type\_pendant \\ at22\_v & population\_several \\ \hline \end{tabular} \end{minipage}%
\end{figure} Let us note that the figure is divided into two parts, i.e., a diagram and a table. The diagram presents the visualized metro map, while the table gives the meaning of the metro stops (attributes). \subsection{Sankey diagram} Similar to the metro map, the Sankey diagram is also focused on ``geographical data''. Additionally, this kind of visualization enables visualization of hierarchical multivariate data. It is represented as a graph consisting of nodes representing attributes and edges representing connectivity by flows across time.
In this diagram, the quality of each connection is distinguished by its weight, which is proportional to one of the interestingness measures. Mathematically, the Sankey diagram is defined as a directed graph $G=\langle K,R\rangle$, where $K$ denotes the maximum path length and $R$ is a set of similar rules~\cite{fister2022association}. The rules in this diagram are presented by the antecedent $X=\{A_{x_{1}}=a_{x_{1}}\wedge\ldots\wedge A_{x_{k}}=a_{x_{k}}\}$, representing a set of source nodes, the consequent $Y=\{A_y=a_y\}$, representing a set of sink nodes, and an interestingness measure $Z=\{\mathit{supp,conf,lift}\}$, reflecting the quality of a particular connection. The quality can also be expressed as a linear combination of these measures. The similarity between two rules $r_i$ and $r_j$ is defined as: \begin{equation} \mathit{sim}(r_i,r_j)=\frac{|\mathit{Ante}(r_i)\cap \mathit{Ante}(r_j)|+|\mathit{Cons}(r_i)\cap \mathit{Cons}(r_j)|}{|\mathit{Ante}(r_i)\cup \mathit{Ante}(r_j)|+|\mathit{Cons}(r_i)\cup \mathit{Cons}(r_j)|}, \end{equation} where $\mathit{Ante}(\cdot)$ denotes the set of antecedent attributes, and $\mathit{Cons}(\cdot)$ the set of consequent ones. Note that $\mathit{sim}(r_i,r_j)\in [0,1]$, where the value 0 means that the rules are not similar at all, and 1 that the rules are identical. The similarities are then combined into an adjacency matrix $\mathit{Adj}$, defined as follows: \begin{equation} \mathit{Adj}=\left[ \begin{matrix} a_{1,1} & \ldots & a_{1,M} \\ & \ldots & \\ a_{M,1} & \ldots & a_{M,M} \\ \end{matrix} \right]. \end{equation} The problem of searching for the most similar set of association rules $R$ is defined as a 0/1 Knapsack problem \citep{kellerer2010knapsack}. The construction of the Sankey diagram visualization is divided into two steps: (1) searching for a set of the most similar association rules, and (2) visualization using Sankey diagrams. 
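For illustration, the similarity measure and the adjacency matrix $\mathit{Adj}$ can be sketched in a few lines of Python; the rules and attribute names below are hypothetical and not taken from the cited framework:

```python
def rule_similarity(ante_i, cons_i, ante_j, cons_j):
    """Jaccard-style similarity of two association rules, computed over
    their antecedent and consequent attribute sets; the result lies in [0, 1]."""
    inter = len(ante_i & ante_j) + len(cons_i & cons_j)
    union = len(ante_i | ante_j) + len(cons_i | cons_j)
    return inter / union

# Two toy rules: r1 = {A1, A2} => {A4},  r2 = {A1, A3} => {A4}
rules = [({"A1", "A2"}, {"A4"}), ({"A1", "A3"}, {"A4"})]

# Adjacency matrix Adj with entries a_{i,j} = sim(r_i, r_j)
adj = [[rule_similarity(ai, ci, aj, cj) for (aj, cj) in rules]
       for (ai, ci) in rules]
# adj[0][1] == (1 + 1) / (3 + 1) == 0.5; diagonal entries are 1.0
```

The matrix is symmetric with unit diagonal, so a selection procedure (such as the 0/1 Knapsack formulation above) only needs its upper triangle.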
In~\citet{fister2022association}, the authors proposed a Differential Evolution (DE) meta-heuristic combined with a deterministic 0/1 Knapsack algorithm for determining the set of the most similar rules, while the R programming language for statistical computing was applied to solve the second step. Examples of Sankey diagrams are illustrated in Figs.~\ref{fig:river_plot1}-\ref{fig:river_plot2} \begin{figure*}[!ht] \begin{subfigure}[t]{.48\linewidth} \includegraphics[width=\textwidth]{Figures/arm_flow_1.png} \caption{Sankey diagram for time period 1.} \label{fig:river_plot1} \end{subfigure} \hfill \begin{subfigure}[t]{.48\linewidth} \includegraphics[width=\textwidth]{Figures/arm_flow_3.png} \caption{Sankey diagram for time period 2.} \label{fig:river_plot2} \end{subfigure} \caption{Sankey diagrams for time periods 1 and 2 powered by R.} \end{figure*} that refer to mining a sport training database obtained over several seasons (i.e., years). This database consists of training load indicators measured during the implementation of sport training sessions. The visualization is divided into two parts: The first part (Fig.~\ref{fig:river_plot1}) presents the results of the ARM visualization on sport training data captured during one season, while the second (Fig.~\ref{fig:river_plot2}) highlights the data obtained during the next season. In this way, two historical insights are offered to a sport trainer: (1) In what proportion do the training load indicators contribute to the whole? and (2) What changes can be observed in the training load indicators of athletes who have already completed the main portion of their training sessions during the previous seasons? Interestingly, \cite{hlosta2013approach} proposed a visualization of evolving association rules using graphs, where the nodes of the graphs represent items and the edges specific association rules. 
Thus, the graph-based diagram shows how evolving models mined using the ARM algorithms and stored in a transaction database can be filtered and visualized. \subsection{Ribbon plot} Ribbon plots are appropriate for visualizing data without self-intersections, where a linearized simplification of events exposes the significant ones. Although the plot is ideal for analyzing linearized sequences, it can also be applied successfully for visualizing the best association rule in NARM, where the proper boundaries between the numerical attributes need to be discovered. Thus, the attribute with the best support is compared with the other attributes in the association rule according to support and confidence. The attributes are ordered into a linear sequence according to the closeness of the first attribute to the others. The inspiration behind the visualization comes from the Tour de France (TDF), i.e., the most famous cycling race in the world. Similarly as in the TDF, where the best hill climbers have more chance to win the race, the attribute with the higher support also has the most decisive role in a decision-making process. Indeed, virtual hill slopes are visualized as triangles situated on a plane, where the left leg denotes an ascent and the right leg a descent of the virtual hill in a linear sequence, running from the left to the right side. In the paper of~\citet{fister2020visualization}, the ascent of the virtual hill is proportional to the attribute's support, while the descent is proportional to the confidence of the simple association rule. 
Mathematically, the best rule $X\Rightarrow Y$ consists of an antecedent $X=\{A_x=a_x\}$ and a consequent $Y=\{A_{y_1}=a_{y_1},\ldots,A_{y_k}=a_{y_k}\}$, where $A_x$ denotes the best attribute according to the support, and the simple association rules $A_x\Rightarrow A_{y_j}$ for $j=1,\ldots,k$ are ordered as: \begin{equation} \mathit{conf}(A_x\Rightarrow A_{y_{\pi_1}})\geq\ldots\geq \mathit{conf}(A_x\Rightarrow A_{y_{\pi_k}}), \end{equation} where $\pi_j$ is a permutation of the attributes belonging to the consequent. Moreover, the distances $\mathit{dist}_j$ between the virtual hills are also proportional to the confidence, i.e., $\mathit{dist}_j\propto \mathit{conf}(A_x\Rightarrow A_{y_{\pi_j}})$. An example of a ribbon plot is illustrated in Fig.~\ref{fig:Ribbon}, representing a visualization of the best association rule mined by uARMSolver \citep{fister2020uarmsolver} (i.e., a framework for NARM using nature-inspired algorithms). \begin{figure}[htb] \centering \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\linewidth]{Figures/ribbon.jpg} \caption{Ribbon plot powered by Matlab.} \label{fig:Ribbon} \end{subfigure} \begin{subfigure}[t]{.48\linewidth} \centering \includegraphics[width=1.\textwidth]{Figures/seq_chart.pdf} \caption{Glyph-based chart.} \label{fig:glyph} \end{subfigure} \caption{Ribbon plot and Glyph-based chart.} \end{figure} The framework was applied for mining a database consisting of transactions obtained from cycling training sessions. Thus, the best transaction is composed of seven attributes $A_1,\ldots,A_{k+1}$ ordered into the association rule: \begin{equation*} A_x\Rightarrow A_{y_{\pi_1}}\wedge \ldots\wedge A_{y_{\pi_k}}. \end{equation*} Seven virtual hills can be observed in the figure. While the first three virtual hills are of comparable height, the remaining hills are lower and thus reflect a lower inter-dependence. 
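The ordering step described above amounts to sorting the simple rules $A_x\Rightarrow A_{y_j}$ by decreasing confidence. A minimal Python sketch, with illustrative attribute names and confidence values (not taken from the cited study):

```python
# Confidence of each simple rule A_x => A_{y_j}; the values are illustrative.
conf = {"A2": 0.90, "A3": 0.75, "A4": 0.80, "A5": 0.40}

# Permutation pi of the consequent attributes, ordered so that
# conf(A_x => A_{y_pi_1}) >= ... >= conf(A_x => A_{y_pi_k}).
pi = sorted(conf, key=conf.get, reverse=True)

# Distances between consecutive virtual hills, proportional to confidence.
dist = [conf[a] for a in pi]
# pi == ['A2', 'A4', 'A3', 'A5'], dist == [0.9, 0.8, 0.75, 0.4]
```

The ascent of each hill would then be drawn proportional to the attribute's support, and the descent proportional to the corresponding confidence, as described above.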
\subsection{Glyph-based plots} Glyph-based plots are suitable for visualizing multivariate data with more than two attribute dimensions, where different data variables are presented by a set of visual channels (i.e., shape, size, color, orientation, curvature, etc.) \citep{borgo2013glyph}. Indeed, glyphs are devoted to depicting attributes of data that typically appear in collections of visualized objects. They are founded on semiotic theory, i.e., the science of signs \citep{lagopoulos2020theory}. According to this theory, signs emerge in three forms: icons, indices, and symbols. Icons reflect a physical correlation to the sign. The index expresses a space and time correlation to the object; in other words, indices have an indirect effect on the object. A meta-physical correlation (i.e., no real correlation) exists between the symbol and the sign. An example of glyph-based visualization for ARM was presented by~\citet{hrovat2015interestingness}, who analyzed the time series data gathered from a single athlete (i.e., a cyclist) during a large time period of training (i.e., the whole season). In this study, the sequential pattern mining algorithm \citep{agrawal1994fast} was exploited, where the sequential patterns were discovered by employing a novel trend interestingness measure for mining sequential patterns. Thus, time-series sequences $ts=\langle ts_1,\ldots, ts_m\rangle$ were discovered from a transaction database consisting of sport training sessions performed by a single athlete. Two trend interestingness measures are defined in the study as follows: (1) the duration trend $\overrightarrow{\mathit{dut}}(ts)$, and (2) the daily trend $\overrightarrow{\mathit{dat}}(ts)$. The former discovers trends within a trend database on a monthly basis, while the latter on a daily basis. The trend database is constructed from the original transaction database by dividing each training session into $m$ time series. 
Then, a permutation test is performed, after which the sequential patterns with a minimum $p$-value are selected. Obviously, the $p$-value is obtained as a result of the permutation test, and serves as a trend interestingness measure. Both trend interestingness measures are visualized using glyphs in order to depict how trends increase or decrease during a specific training period (Fig.~\ref{fig:glyph}). Thus, two glyph symbols are used in the visualization: (1) level, and (2) variable. The level symbol depicts the trend interestingness measure using an optical channel (i.e., color), where the intensity training load indicators are presented in different colors, depending on the low, moderate, and high intensity levels. The variable symbol addresses the geometric channels, such as: the cyclist's speed (as maximum, average or standard deviation), average heart rate (as minimum, maximum, average and standard deviation), and altitude (as standard deviation). These symbols are depicted using different shapes. \subsection{Other ideas in ARM visualization} The characteristics of the remainder of the analyzed papers can be summarized as follows: The majority of these papers were published at various data mining conferences. As a result, they include ideas more on the conceptual level, and, therefore, the solutions that they reveal are not robust enough for use in everyday real-world environments. On the other hand, these ideas are not included in any recognizable ARM visualization system. Nevertheless, they could certainly be interesting for potential readers. The principles of ARM visualization methods, as found in the observed collections of analyzed papers, can be classified into the following two classes (Table~\ref{tab:2a}): \begin{itemize} \item reducing a rule set, \item visual data mining. 
\end{itemize} Indeed, these two principles are commonly used in the ARM community: In the first, the association rules are mined using some of the known mining algorithms. These algorithms produce a lot of association rules that need to be reduced (i.e., rummaged) into an association rule set suitable for visualization. The second principle is more goal-oriented, and mines the association rules either in a visualization context, or tries to reduce their number by avoiding occlusions using optimization. In this way, the association rule set does not need to be reduced further before visualization. Interestingly, the first principle is characteristic of papers which emerged at the beginning of the ARM visualization domain, while the second one is typical of papers of the new age, where the ARM exploration is part of the visualization process. \begin{table}[htb] \caption{Other ARM visualization methods.} \label{tab:2a} \small \begin{tabular}{ c|l|l|r } \hline \multicolumn{4}{c}{Principles of ARM visualization} \\\cline{1-4} \multicolumn{2}{l|}{1 Reducing rule set} & Attribute & Reference\\\hline 1 & Relational SQL & Categorical & \cite{chakravarthy2003visualization} \\ 2 & Conditional AR analysis & Categorical & \cite{yamada2015visualization} \\ 3 & Rummaging model & Categorical & \cite{blanchard2003user} \\ 4 & Rummaging model & Categorical & \cite{menin2021arviz} \\ 5 & Correlation rule visualization & Categorical & \cite{zheng2017visualization} \\ 6 & Weighted association rules & Categorical & \cite{saeed2011activity} \\ 7 & Multiple antecedent & Text & \cite{wong1999visualizing} \\ 8 & Weighted association rules & Text & \cite{kawahara1999performance} \\ 9 & Hierarchical structure & Boolean & \cite{jiang2008finite} \\ \hline \multicolumn{4}{l}{2 Visual data mining} \\\hline 1 & Correlation visualization alg. 
& Categorical & \cite{xu2009visualization} \\ 2 & Integrated framework & Categorical & \cite{couturier2007scalable} \\ 3 & Occlusion reducing & Categorical & \cite{couturier2008optimizing} \\ 4 & Contextual exploration & Categorical & \cite{yahia2004contextual} \\ 5 & Generic association rule set & Categorical & \cite{yahia2004emulating} \\ 6 & 3D visualization engine & Categorical & \cite{ounifi2016new} \\ 7 & Rule-to-items mapping & Categorical & \cite{wang20173d} \\ \hline \end{tabular} \end{table} Obviously, the reduction can be performed in many ways. For instance, \cite{chakravarthy2003visualization} proposed a relational SQL query language, with which a user can interactively select a suitable association rule set for visualization from the collection of association rules stored in tables. \cite{yamada2015visualization} applied the conditional association rule analysis and the association rule analysis with user attributes for comprehending questionnaire data. \cite{blanchard2003user} introduced the rummaging model for filtering association rules interactively, and included it in an experimental prototype called ARVis. A similar model was recommended by \cite{menin2021arviz}, devoted to exploring RDF data, which employed traditional visualization methods and was incorporated into the prototype ARViz. The gray correlation rule visualization algorithm was proposed by \cite{zheng2017visualization}, which is suitable for considering the influence of the association rules on the visualization. \cite{saeed2011activity} mined a collection of documents consisting of metadata with the Apriori algorithm, and selected an association rule set for visualization according to the calculated weights. On the other hand, \citet{wong1999visualizing} visualized an association rule set with multiple antecedents using a 3-dimensional graph, and applied their solution to a text mining study on large corpora. 
\cite{kawahara1999performance} developed a web search engine for manipulating weighted association rules. Thus, a text mining algorithm derived appropriate keywords, while a ROC graph served for the visualization of association rules. Boolean association rules were visualized by~\citet{jiang2008finite} using a hierarchical structure for all of them, depicted in a Hasse diagram. Visual data mining can be performed in various ways, as found in our study: The correlation visualization algorithm was proposed for mining the alarm association rules by \cite{xu2009visualization}. \cite{couturier2007scalable} recommended an integrated framework for association rule extraction and visualization in one step, which integrated previous methods of association rule visualization. Occlusion optimization was proposed by \cite{couturier2008optimizing}. Contextual exploration of an association rule set was developed by \cite{yahia2004contextual} and \cite{yahia2004emulating}, where the additional knowledge needed for visualization was constructed using fuzzy meta-rules. \cite{ounifi2016new} solved the problem of extraction and visualization with a 3-dimensional visualization engine, while \cite{wang20173d} introduced a 3-dimensional matrix-based visualization system, where the basic matrix-based approach was extended by a rule-to-items mapping. \subsection{Taxonomies of the ARM visualization} The ARM visualization methods can be classified according to many aspects, which depend on the various standpoints from which they are observed. Indeed, the following questions reflect those standpoints more precisely: \begin{itemize} \item How to visualize? \item Which visualization methods to use? \item Which characteristics of association rules are essential to visualize? \item What to visualize? \item Which type of attributes to display? \end{itemize} In the remainder of the section, these questions are described in detail. 
\subsubsection{How to visualize?} The aspect ``How to visualize?'' refers to the mode in which the exploration and visualization are performed. In line with this, four different methods are distinguished, as follows (Fig.~\ref{fig:tax_1}): \begin{itemize} \item reducing the item set, \item visual data mining, \item a concept lattice, \item evolving association rules. \end{itemize} Reducing the item set means that the exploration of association rules is performed with traditional ARM methods (e.g., Apriori, Eclat, evolutionary algorithms), after which the visualization is performed using some traditional or new age visualization methods. \begin{figure} \includegraphics[width=.9\textwidth]{Figures/Vis_1.png} \caption{How to visualize?} \label{fig:tax_1} \end{figure} Visual data mining comprises those ARM visualization methods that perform the exploration and visualization phases in one step. These methods mine association rules in a more directed manner, where mining can start from some concept, can use meta-rules, or can limit the number of occlusions. The concept lattice enables displaying the structure of the association rules (i.e., attributes) besides the single rules. However, this visualization method is reserved for displaying binary association rules only. The evolving association rules are appropriate for visualizing either warehouse data cubes stored in a multidimensional data model, or data suitable for displaying with Sankey diagrams. \subsubsection{Which visualization methods to use?} This aspect is focused on the question of which visualization method to use. In line with this, the methods are divided into traditional and new age visualization methods. The former consist of charts, like scatter, graph-based, matrix-based and mosaic plots, and their variants, like two-key, grouped-matrix and double Decker plots (see Table~\ref{fig:taxonomy} under the column ``Method''). 
The new age visualization methods comprise an Ishikawa diagram, molecular representation, a concept lattice, metro maps, Sankey diagrams, ribbon plots, and glyph-based charts. \begin{scriptsize} \input{featuretable} \end{scriptsize} \subsubsection{Which characteristics of association rules are essential to visualize?} The characteristics of the ARM visualization methods refer to: (1) the number of displayed interestingness measures, (2) the rule set size, and (3) the interactivity tools. The number of displayed interestingness measures determines how many of the interestingness measures are included in the representation for the user. For instance, the scatter plot is able to display three interestingness measures, while the two-key plot displays only two directly, with the third measure presented indirectly by a color. In general, the number of measures displayed by the various visualization methods is typically in the range $[1,3]$. The rule set size determines the number of association rules that can be displayed by a particular visualization method. This number is denoted in Table~\ref{fig:taxonomy} in the column ``Rule set size'' as circled numbers, which represent powers of 10. This means that the grouped matrix can display $10^5$ association rules. The column ``Interactive'' shows whether a specific visualization method supports interactive tools (e.g., hover, zoom, pan, drill down, etc.) or not. Interestingly, although the new age visualization methods do not support interactive tools in general, they allow tuning of parameter settings, which offers users some kind of interactivity. \subsubsection{What to visualize?} The aspect answering the question ``What to visualize?'' deals with the focus that an ARM visualization is presenting. Actually, the ARM visualization can be focused on illustrating: (1) number of interestingness measures, (2) rule length, (3) items, (4) RHS and LHS, and (5) time series data. 
The first focus is devoted to displaying the number of interestingness measures. The rule length refers to the number of attributes in the visualized association rules. The item focus concentrates on depicting the attributes of the association rules, while the RHS+LHS focus is concentrated on the structure of the more important rules. Finally, the last focus considers time series data. Interestingly, the concept lattice and metro maps even cover two focuses of displaying association rules, i.e., items (i.e., attributes) and their structure. On the other hand, the glyph-based visualization is dedicated to presenting time series data. \subsubsection{Which type of attributes to display?} The aspect ``Which type of attributes to display?'' is focused on the attribute types a visualization method is able to display. In ARM exploration/visualization, three attribute types can be identified as follows: (1) categorical, (2) numerical, and (3) binary. Interestingly, the majority of the traditional visualization methods are suitable for displaying the categorical type of attributes. Usually, these visualization methods display attributes of the numerical type by discretizing the numerical attributes into discrete classes. Obviously, the new age visualization methods are capable of working with the numerical and binary attributes directly as well. \section{ARM visualization systems}\label{sec:6} This section aims to compile a list of specialized ARM visualization systems and software packages covering any of the ARM visualization methods. Obviously, it does not present general-purpose visualization libraries with which some of the methods can be developed (e.g., \textbf{matplotlib} in Python, or \textbf{ggplot2} in R). The study focused on presenting only the collection of graphics systems that are widely used nowadays in the ARM community. The collection of systems is illustrated in Table~\ref{tab:3}. 
\begin{table}[htb] \begin{center} \caption{List of the ARM graphics systems.} \label{tab:3} \begin{tabularx}{\textwidth}[t]{XX} \arrayrulecolor{black}\hline \textbf{\textcolor{novabarva}{R packages}} & \\ \arrayrulecolor{novabarva}\hline arulesViz \url{https://cran.r-project.org/web/packages/arulesViz/index.html} & \begin{minipage}[t]{\linewidth}% \begin{itemize} \item[1.1] probably the only state-of-the-art tool that supports many visualization methods up to this date \item[1.2] includes also interactive tools \end{itemize} \end{minipage}\\ \arrayrulecolor{black}\hline \hline \arrayrulecolor{green}\hline \textbf{\textcolor{novabarva}{Python packages}} \\ \arrayrulecolor{black}\hline pycaret \url{https://github.com/pycaret/pycaret} & \begin{minipage}[t]{\linewidth}% \begin{itemize} \item[2.1] basically low-code machine learning library in Python \item[2.2] association rule mining is a part of this library \item[2.3] library supports 2D and 3D plots of association rules \end{itemize} \end{minipage}\\ \arrayrulecolor{novabarva}\hline NiaARM (\url{https://github.com/firefly-cpp/NiaARM})& \begin{minipage}[t]{\linewidth}% \begin{itemize} \item[3.1] minor module devoted for visualization \item[3.2] for now supports only ribbon plots \end{itemize} \end{minipage}\\ \arrayrulecolor{novabarva}\hline PyARMViz \url{https://github.com/Mazeofthemind/PyARMViz} & \begin{minipage}[t]{\linewidth}% \begin{itemize} \item[4.1] Python Association Rule Visualization Library that is loosely based on ArulesViz \item[4.2] Development probably stalled (no commits in the last 2.5 years) \end{itemize} \end{minipage}\\ \arrayrulecolor{black}\hline \hline \arrayrulecolor{black}\hline \multicolumn{2}{l}{% \textbf{\textcolor{novabarva}{C++ packages}}} \\ \arrayrulecolor{black}\hline uARMSolver~\url{https://github.com/firefly-cpp/uARMSolver} & \begin{minipage}[t]{\linewidth}% \begin{itemize} \item[5.1] small part of this package is devoted to the visualization \item[5.2] provides the coordinates 
for metro plots, which can later be visualized using metro map algorithms \end{itemize} \end{minipage} \end{tabularx} \end{center} \end{table} As can be seen from the table, the \textbf{arulesViz} graphics system is the most complete, since it covers the majority of the visualization methods dealt with in this review paper. This is an extensive toolbox in the R-extension package \citep{hahsler2011arules}, and it works in two phases: (1) exploration using known ARM methods, to which tools for reducing the huge number of association rules are applied (e.g., filtering, zooming and rearranging), and (2) visualization of the results. The current version of the software supports the following visualization methods (i.e., graphics): scatter plots, network plots, matrix-based, graph-based, mosaic plots and parallel coordinate plots. The other libraries are just a small drop in the ocean and, typically, they cover only limited ARM exploration/visualization approaches. For example, while \textbf{NiaARM} is focused at this moment on only one visualization method (i.e., the ribbon plot), the \textbf{PyARMViz} graphics system aims to be for Python what arulesViz is for R. Unfortunately, the development of this graphics software has probably stalled, since the last commit was made almost three years ago. On the other hand, the development of NiaARM is not finished yet; the planned inclusion of new ideas in ARM visualization (e.g., metro maps, Sankey diagrams, etc.) should shortly widen the usability of this graphics system. \section{Challenges and open problems}\label{sec:7} After a deep analysis of the ARM visualization methods, we can conclude that a universal method covering all the ARM visualization problems does not exist. Instead, the arulesViz software package offers a spectrum of solutions useful for visualization with traditional ARM visualization methods. 
In this package, the scatter plot is applied as an entry point for analyzing how to distinguish the similarity of association rules according to interestingness measures, like support and confidence. Then, the matrix-based visualization can be applied, capable of organizing association rules into a matrix, where the antecedent and consequent items can be distinguished. Finally, the graph-based methods are recommended by the authors, in order to give the user the broadest view of the relationships between individual items, reflecting their memberships in different association rules. In summary, the problems caused by using the traditional ARM visualization methods can be aggregated as follows \citep{shen2020research}: \begin{itemize} \item the domain knowledge is not displayed sufficiently, i.e., the rules are displayed from a single point of view, \item the visualization of background knowledge is not enough for sharing, i.e., the role and relationship of global information is lost in the context of the background knowledge, \item the use and exploration of potential knowledge hidden in non-connected attributes are reduced. \end{itemize} However, the new age ARM visualization methods try to address the aforementioned problems. Moreover, some of these methods are even able to tell stories about the mined data (e.g., metro maps), while others are able to analyze the information from a historical point of view (e.g., Sankey diagrams). Although the search for new age ARM visualization methods almost stopped after the rapid development of the traditional ARM visualization methods in the past, in our opinion, the future of ARM visualization lies in the development of the new age ARM visualization methods. These methods might consolidate displaying items as well as the structure of the association rules. Additionally, they need to be independent of the attribute types. 
The interactivity of the ARM visualization methods undoubtedly presents their main advantage. Interactive visualization improves the user's experience and interpretation of the results. Although several popular implementations of the traditional ARM visualization methods (e.g., the arulesViz R-package by \cite{hasler2017visualizing}, and InterVisAR by \cite{intervisar}) already offer some interactive tools (e.g., hover, zoom, pan, drill down, inspect, brush), these tools are usually missing in the observed new age ARM visualization methods. \section{Conclusions}\label{sec:8} Data mining methods today suffer from a lack of comprehensibility of the mass of results they produce. In line with this, a new domain of AI, the so-called XAI, has emerged, which searches for methods suitable for presenting these results clearly to the user. Visualization methods are one of the useful tools for helping users understand the results of different data mining methods better. The present study has reviewed the most important visualization methods associated with ARM. Consequently, the most important ARM visualization methods, published in research papers, have been identified, analyzed, and classified. The ARM visualization methods are divided into traditional and new age methods. Moreover, they have been classified according to the characteristics of the displayed association rules, the focus of visualization, and the types of attributes. The potential reader of this work will be able to get a deeper overview of the ARM exploration/visualization process. Furthermore, it encourages readers to open new avenues of potential research. According to the research paper review, there is a huge opportunity to use this knowledge, especially in the biological/medical sciences. \section*{Acknowledgements} This research has been supported partially by the project PID2020-115454GB-C21 of the Spanish Ministry of Science and Innovation (MICINN). 
The authors acknowledge the financial support from the Slovenian Research Agency (Research Core Funding No. P2-0057 \& P2-0042).
\section{Introduction} The computation of correlation functions in quantum integrable systems is in general a quite hard task. One paradigmatic example is the spin-$\frac{1}{2}$ XXZ spin chain \cite{Baxter}. Even in the study of the free fermion case, quite interesting mathematical structures have appeared \cite{mccoy}. Starting from the seminal papers of Razumov and Stroganov \cite{raz-strog1,raz-strog2}, we know that the anti-ferromagnetic ground state of the XXZ spin chain at the value $\Delta=-\frac{1}{2}$ of the anisotropy parameter (or equivalently $q=e^{2\pi i/3}$) presents a remarkable combinatorial structure. The spin chain Hamiltonians with an odd number of spins $N=2n+1$ and periodic boundary conditions, or an even number of spins $N=2n$ and twisted-periodic boundary conditions, are related through a change of basis to the Markov matrix of a stochastic loop model \cite{BdGN}. In the loop basis, the ``ground state'' is actually the steady state probability of the stochastic loop model. The most astonishing discovery was made by Razumov and Stroganov \cite{raz-strogO(1)_1}, who observed that, once properly normalized, the components of the steady state are integer numbers enumerating Fully Packed Loop configurations on a square grid. This conjecture was eventually proven in \cite{cantini-sportiello}. Although the XXZ spin chain at $\Delta=-\frac{1}{2}$ is a fully interacting system, several of its correlation functions have simple exact formulas even at finite size. This is the case of the Emptiness Formation Probability (EFP for short), which is the probability that $k$ consecutive spins are in the up direction in a chain of length $N$. In their original papers \cite{raz-strog1,raz-strog2}, Razumov and Stroganov conjectured exact factorized formulas for the EFP in terms of products of factorials. The aim of the present paper is to prove these conjectures. 
\vskip .5cm The XXZ spin chain with odd size has two ground states $\Psi^+_{2n+1}$ and $\Psi^-_{2n+1}$, related by a spin flip on each site. Razumov and Stroganov have conjectured \cite{raz-strog1} that $E^{-}_{2n+1}(k)$, the EFP of a $k$-string of spins up in the state $\Psi^-_{2n+1}$, satisfies \begin{equation}\label{recurE--} \frac{E_{2n+1}^{-}(k-1)}{E^{-}_{2n+1}(k)}= \frac{(2k-2)!(2k-1)!(2n+k)!(n-k)!}{(k-1)!(3k-2)!(2n-k+1)!(n+k-1)!}. \end{equation} Strangely enough, Razumov and Stroganov did not provide the analogous formula for the state $\Psi^+_{2n+1}$, which reads \begin{equation}\label{EFP++} \frac{E^{+}_{2n+1}(k-1)}{E^{+}_{2n+1}(k)}= \frac{(2k-2)!(2k-1)!(2n+k)!(n-k+1)! }{(k-1)!(3k-2)!(2n-k+1)!(n+k)!}. \end{equation} In particular, the probability of having a string of spins up of length $n$ (or $n+1$) in a chain of length $2n+1$ is equal to the inverse of $A_{HT}(2n+1)$, the number of Half-Turn Symmetric Alternating Sign Matrices of size $2n+1$, \begin{equation}\label{sum-conj1} E_{2n+1}^{-}(n)= E_{2n+1}^{+}(n+1)= A_{HT}(2n+1)^{-1} = \prod_{j=1}^n\frac{(2j-1)!^2(2j)!^2}{(j-1)!j!(3j-1)!(3j)!}. \end{equation} In the case of a spin chain with even length and twisted boundary conditions, the ground state is unique. Razumov and Stroganov have conjectured \cite{raz-strog2} that $E^{e}_{2n}(k)$, the EFP of a $k$-string of spins up, satisfies \begin{equation}\label{EFPee*} \frac{E^{e}_{2n}(k-1)}{E^{e}_{2n}(k)}= \frac{(2k-2)!(2k-1)!(2n+k-1)!(n-k)!} {(k-1)! (3k-2)!(2n-k)!(n+k-1)!} \end{equation} The previous equation implies that in the case $k=n$ \begin{equation}\label{sum-conj2} E^{e}_{2n}(n)=A_{HT}(2n)^{-1}. \end{equation} Unlike the ground states of the odd size chains, whose components can be chosen to be real, the even size ground state has complex valued components; therefore we can also consider $E^{\tilde e}_{2n}(k)$, a sort of ``pseudo'' EFP obtained by sandwiching the ground state with itself (and not with its complex conjugate). 
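Since eqs.(\ref{recurE--},\ref{EFP++},\ref{sum-conj1}) are explicit ratios of factorials, their mutual consistency is easy to check numerically by telescoping the ratios from the normalization $E(0)=1$ (the EFP of an empty string). The following sketch does this in exact rational arithmetic; it is only an illustration, not part of the proof, and the helper names are our own:

```python
from fractions import Fraction
from math import factorial as f

def efp_minus(n, k):
    # E^-_{2n+1}(k), telescoped from E^-_{2n+1}(0) = 1 via eq. (recurE--)
    E = Fraction(1)
    for j in range(1, k + 1):
        ratio = Fraction(f(2*j-2) * f(2*j-1) * f(2*n+j) * f(n-j),
                         f(j-1) * f(3*j-2) * f(2*n-j+1) * f(n+j-1))
        E /= ratio  # ratio = E(j-1)/E(j)
    return E

def efp_plus(n, k):
    # E^+_{2n+1}(k), telescoped via eq. (EFP++); here k may reach n+1
    E = Fraction(1)
    for j in range(1, k + 1):
        ratio = Fraction(f(2*j-2) * f(2*j-1) * f(2*n+j) * f(n-j+1),
                         f(j-1) * f(3*j-2) * f(2*n-j+1) * f(n+j))
        E /= ratio
    return E

def a_ht_odd(n):
    # A_HT(2n+1), read off as the inverse of the product in eq. (sum-conj1)
    p = Fraction(1)
    for j in range(1, n + 1):
        p *= Fraction(f(2*j-1)**2 * f(2*j)**2,
                      f(j-1) * f(j) * f(3*j-1) * f(3*j))
    return 1 / p
```

For instance `a_ht_odd(1), a_ht_odd(2), a_ht_odd(3)` give $3$, $25$, $588$, the known values of $A_{HT}(3)$, $A_{HT}(5)$, $A_{HT}(7)$, and one checks $E^-_{2n+1}(n)=E^+_{2n+1}(n+1)=A_{HT}(2n+1)^{-1}$ for small $n$.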
The ratio of ``pseudo'' EFPs corresponding to the same size of the spin chain has a factorized form given by \begin{equation}\label{EFPee} \frac{E^{\tilde e}_{2n}(k-1)}{E^{\tilde e}_{2n}(k)}= -q\frac{(2k-2)!(2k-1)!(2n+k-1)!(n-k)! }{(k-1)!(3k-3)!(3k-1)(2n-k)!(n+k-1)!}. \end{equation} It turns out that, apart from the factor $-q$, the ratio in eq.(\ref{EFPee}) can be written as a ratio of enumerations of $k$-Punctured Cyclically Symmetric Self-Complementary Plane Partitions (PCSSCPP) of size $2n$, i.e. rhombus tilings of a regular hexagon of side length $2n$, which are symmetric under a $\pi/3$ rotation around the center of the hexagon and have a star shaped frozen region of size $k$, as exemplified in Figure \ref{figura1}. \begin{figure} \begin{center} \includegraphics[scale=.6]{csscTnk2.eps}~~~~~~\includegraphics[scale=.6]{csscT62.eps} \end{center} \caption{Domain tiled by $k$-Punctured Cyclically Symmetric Self-Complementary Plane Partitions, with an example of tiling for $n=3$ and $k=2$. The shadowed region indicates a fundamental domain.}\label{figura1} \end{figure} Calling these enumerations $CSSCPP(2n,k)$ we have \begin{equation} \frac{E^{\tilde e}_{2n}(k-1)}{E^{\tilde e}_{2n}(k)} =-q \frac{CSSCPP(2n,k-1)}{CSSCPP(2n,k)}. \end{equation} For $k=n$ one obtains \begin{equation}\label{sum-conj3} E^{\tilde e}_{2n}(n) = (-q)^{n}A_n^{-2}= (-q)^{n}CSSCPP(2n)^{-1} \end{equation} where $A_n$ is the number of Alternating Sign Matrices of size $n$, \begin{equation} A_n=\prod_{j=0}^{n-1} \frac{(3j+1)!}{(n+j)!}, \end{equation} and $CSSCPP(2n)$ is the number of Cyclically Symmetric Self-Complementary Plane Partitions in a hexagon of size $2n$. The enumerations $CSSCPP(2n,k)$ are easily computed by applying a result of Ciucu \cite{ciucu} concerning enumerations of dimer coverings of planar graphs with reflection symmetry. This will be explained briefly in Appendix \ref{plane-part}. 
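Eqs.(\ref{EFPee}) and (\ref{sum-conj3}) can likewise be checked numerically. Since $|-q|=1$ at $q=e^{2\pi i/3}$, the modulus of the telescoped product is given by the factorial ratios alone and can be handled in exact arithmetic; the overall phase of $E^{\tilde e}_{2n}(n)$ depends on the convention $q\leftrightarrow q^{-1}$, so the sketch below (our own illustration, with the normalization $E^{\tilde e}_{2n}(0)=1$) compares only moduli:

```python
from fractions import Fraction
from math import factorial as f

def asm(n):
    # A_n, the number of n x n Alternating Sign Matrices
    p = Fraction(1)
    for j in range(n):
        p *= Fraction(f(3*j + 1), f(n + j))
    return p

def pseudo_efp_modulus(n, k):
    # |E^{~e}_{2n}(k)|: telescoping eq. (EFPee) from E(0) = 1; since
    # |-q| = 1 at q = exp(2 pi i/3), only the factorial ratios contribute
    m = Fraction(1)
    for j in range(1, k + 1):
        m /= Fraction(f(2*j-2) * f(2*j-1) * f(2*n+j-1) * f(n-j),
                      f(j-1) * f(3*j-3) * (3*j-1) * f(2*n-j) * f(n+j-1))
    return m
```

Here `asm(n)` reproduces $1,2,7,42,429,\dots$, and one finds $|E^{\tilde e}_{2n}(n)| = A_n^{-2} = CSSCPP(2n)^{-1}$ for small $n$, in agreement with eq.(\ref{sum-conj3}).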
\vskip .5cm Some partial results concerning the (pseudo)-norm and the EFP have been obtained in the literature. In \cite{pdf-pzj-jbz}, by cleverly exploiting the relation between a natural degenerate scalar product in the loop basis and the usual scalar product in the spin basis, Di Francesco and collaborators have proven eq.(\ref{sum-conj1}) and eq.(\ref{sum-conj3}). In the large $N$ limit $N\rightarrow \infty$, the EFP has been studied by Maillet and collaborators \cite{maillet-emptiness}. Of course, the conjectures (\ref{recurE--}, \ref{EFP++}, \ref{EFPee*}) must be mutually consistent and must give for the thermodynamic limit of the EFP \begin{equation} \lim_{N\rightarrow \infty} E_N(k) = \left(\frac{\sqrt{3}}{2} \right)^{3k^2}\prod_{j=1}^k \frac{\Gamma(j-1/3)\Gamma(j+1/3)}{\Gamma(j-1/2)\Gamma(j+1/2)}. \end{equation} This formula has been proven in \cite{maillet-emptiness} by specializing to $\Delta=-\frac{1}{2}$ a multiple integral representation for the correlation functions, which is valid for generic values of the anisotropy parameter $\Delta$. \vskip .5cm The most effective technique which has allowed the computation of (partial) sums of components in the loop or in the spin basis was put forward by Di Francesco and Zinn-Justin \cite{pdf-pzj-1} in the context of the periodic loop model with an even number of sites. They introduced spectral parameters in the model in such a way as to preserve its integrability, the original model being recovered once the spectral parameters are set to $1$. In this way the components of the ground state become homogeneous polynomials in the spectral parameters, which satisfy certain relations under exchange or specialization of the spectral parameters. 
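The convergence of the finite-size ratios to the $\Gamma$-function product above can also be observed numerically. The sketch below (again our own illustration, normalized by $E^{e}_{2n}(0)=1$) telescopes eq.(\ref{EFPee*}) and compares the result with the thermodynamic limit; already for $k=1$ one finds $E^{e}_{2n}(1)=\frac{1}{2}$ exactly for every $n$:

```python
from math import factorial, gamma, sqrt

def efp_even(n, k):
    # E^e_{2n}(k), telescoped from E^e_{2n}(0) = 1 via eq. (EFPee*)
    f = factorial
    E = 1.0
    for j in range(1, k + 1):
        E /= (f(2*j-2) * f(2*j-1) * f(2*n+j-1) * f(n-j)) / \
             (f(j-1) * f(3*j-2) * f(2*n-j) * f(n+j-1))
    return E

def efp_limit(k):
    # thermodynamic limit of E_N(k): the Gamma-function product above
    p = (sqrt(3) / 2) ** (3 * k * k)
    for j in range(1, k + 1):
        p *= gamma(j - 1/3) * gamma(j + 1/3) / (gamma(j - 1/2) * gamma(j + 1/2))
    return p
```

For example, `efp_limit(2)` evaluates to $1/8$ and `efp_even(300, 2)` agrees with it to a few parts in $10^{6}$, with the finite-size deviation shrinking as $n$ grows.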
As noticed first by Pasquier \cite{pasquier} and largely developed by Di Francesco and Zinn-Justin \cite{pdf-pzj-qKZ1}, the exchange relations satisfied by the ground state are a special case ($q=e^{2\pi i/3}$) of the much studied quantum Knizhnik--Zamolodchikov equations (qKZ) \cite{Frenkel-Reshetikhin}. In \cite{r-s-pzj} the authors have applied this idea to the XXZ spin chain with spectral parameters and have shown that the properly normalized ground state of the spin chain with periodic or twisted periodic boundary conditions satisfies a special case of the $U_q(sl_2)$ qKZ equations at level $1$. Here we employ this property in order to compute the Emptiness Formation Probability. Our main idea is to consider a generalization of the EFP with spectral parameters, which is constructed from the solution of the $U_q(sl_2)$ qKZ equations for generic $q$ (see eqs.(\ref{def-inhom-EFP},\ref{def-inhom-pseudo})). This ``inhomogeneous'' EFP has certain symmetry and recursion properties that completely fix it, in the same spirit in which the recursion relations of the 6-vertex model with Domain Wall Boundary Conditions completely determine its partition function. This will allow us to present an explicit determinantal formula for the inhomogeneous EFP valid at $q=e^{2\pi i/3}$, which upon specialization of the spectral parameters will allow us to obtain the formulas (\ref{recurE--}, \ref{EFP++}, \ref{EFPee*}, \ref{EFPee}). \vskip .5cm The idea of using the solution of the qKZ equation to compute the inhomogeneous version of a correlation function can in principle be adapted to several other models, such as the XXZ spin chain with different boundary conditions, the fused XXZ spin chain, the $U_q(sl_n)$ spin chain or even the XYZ spin chain \cite{boundary,fused,SU(N),XYZ}. 
Indeed, in all these cases, by properly tuning the parameters (generalizing the relation $q=e^{2\pi i/3}$), one has a so called ``combinatorial point'' at which the ground state energy per site does not receive finite size corrections. By reasoning similar to that in \cite{r-s-pzj}, one can argue that the ground state \emph{with spectral parameters} satisfies a qKZ equation (or, in the case of the XYZ spin chain, an elliptic version of it). Whether this idea could lead to other exact finite size formulae for some correlation function is an open question that in our opinion deserves further investigation. \vskip .5cm The paper is organized as follows. In Section \ref{conv}, after having recalled some basic facts about the XXZ spin chain, following \cite{r-s-pzj} we present the exchange equations satisfied by the ground state at $\Delta=-\frac{1}{2}$; then in Section \ref{rec-subsect} we derive the recursion relations satisfied by the solutions of the $U_q(sl_2)$ qKZ equations at level $1$. In Section \ref{inhom-section} we define the inhomogeneous version of the (pseudo) EFP, constructed using the solutions of the qKZ equations. We first derive its symmetries, and then in Section \ref{rec-section-q-gen} we derive the recursion relations which completely determine it. In Section \ref{combinatorial-pol} we restrict to $q=e^{2\pi i/3}$ and, by showing that certain determinantal expressions satisfy the same recursion relations as the inhomogeneous EFP, we produce a representation of this inhomogeneous EFP whose homogeneous specialization is considered in Section \ref{hom-sect}, where we prove the main conjectures. In Appendix \ref{fact-det} we compute the determinants of a family of matrices which are relevant for the computation of the homogeneous specialization considered in Section \ref{hom-sect}. In Appendix \ref{plane-part} we compute the lozenge tiling enumerations $CSSCPP(2n,k)$. 
\section{XXZ spin chain at $\Delta=-\frac{1}{2}$ and qKZ equations}\label{conv} The hamiltonian of the XXZ spin chain acts on a vector space $\mathcal H_N=(\mathbb C^2)^{\otimes N}$ that consists of $N$ copies of $\mathbb C^2$, each one labeled by an index $i$. The hamiltonian is written in terms of operators $\sigma^\alpha_i$ which are Pauli matrices acting locally on the $i$-th component $\mathbb C_i^2$ \begin{equation}\label{xxz-ham} H_N(\Delta) = -\frac{1}{2}\sum_{i=1}^N \left(\sigma^x_i\sigma^x_{i+1}+\sigma^y_i\sigma^y_{i+1}+\Delta \sigma^z_i\sigma^z_{i+1}\right). \end{equation} It is convenient to parametrize the anisotropy parameter as $\Delta=\frac{q+q^{-1}}{2}$. The model is completely specified once the boundary conditions are provided. Here we will consider: \begin{itemize} \item periodic boundary conditions for odd values of the length of the spin chain, $N=2n+1$, i.e. $\sigma_{N+1}^\alpha = \sigma_{1}^\alpha$; \item twisted periodic boundary conditions for even values of the length of the spin chain, $N=2n$, i.e. $\sigma_{N+1}^z = \sigma_{1}^z$, while $\sigma^\pm_{N+1}=e^{\pm i \frac{2\pi}{3}}\sigma^\pm_{1}$, where $\sigma^\pm= \sigma^x\pm i \sigma^y$. \end{itemize} It is well known \cite{Baxter} that the Hamiltonian (\ref{xxz-ham}), for generic values of the parameter $\Delta$ and of the twisting, is the logarithmic derivative of an integrable transfer matrix. In order to define the transfer matrix we need the $R$-matrix and the twist matrix. In the present context the $R$-matrix is an operator depending on a spectral parameter $z$, which acts on a tensor product $\mathbb C^2_i\otimes \mathbb C^2_j$. Introducing the basis of $\mathbb C_i^2$ $$ e^{\uparrow}_i=\left(\begin{array}{c}1\\0 \end{array} \right),~~~e^{\downarrow}_i= \left(\begin{array}{c}0\\1 \end{array} \right) , 
$$ we write $R_{i,j}(z)$ in the basis $\{e^{\uparrow}_i\otimes e^{\uparrow}_j,e^{\uparrow}_i\otimes e^{\downarrow}_j,e^{\downarrow}_i\otimes e^{\uparrow}_j,e^{\downarrow}_i\otimes e^{\downarrow}_j \} $ of $\mathbb C_i^2 \otimes \mathbb C^2_j$ as \begin{equation}\label{R-matr} R_{i,j}(z)= \left( \begin{array}{cccc} a(z) & 0 &0 &0 \\ 0 & b(z) &c_1(z) &0 \\ 0 & c_2(z) & b(z) &0 \\ 0 & 0 &0 &a(z) \end{array} \right) \end{equation} with \begin{equation}\label{coef-Rmatr} a(z)=\frac{qz -q^{-1}}{q-q^{-1}z},~~~~b(z)=\frac{z -1}{q-q^{-1}z}, ~~~~c_1(z)=\frac{(q -q^{-1})z}{q-q^{-1}z}, ~~~~c_2(z)=\frac{(q -q^{-1})}{q-q^{-1}z} . \end{equation} The twist matrix $\Omega(\phi)$ acts on a single $\mathbb C_i^2$ and in the basis $(e^{\uparrow}_i,e^{\downarrow}_i)$ it reads \begin{equation} \Omega(\phi) = \left( \begin{array}{cc} e^{i\phi} & 0\\ 0 & e^{-i\phi} \end{array} \right). \end{equation} Using both the twist and the $R$-matrix we construct the family of transfer matrices \begin{equation}\label{mono} T_N(y|{\bf z}_{\{1,\dots, N\}},\phi) = {\rm tr}_0\left[R_{0,1}(y/z_1)R_{0,2}(y/z_2)\dots R_{0,N}(y/z_N) \Omega_0(\phi) \right] \end{equation} depending on $N$ ``vertical'' spectral parameters ${\bf z}_{\{1,\dots, N\}}$\footnote{Our convention for an ordered string of variables labeled by an index is to use a bold character and a label for the ordered set of indices of the variables: ${\bf x}_{\{a_1,\dots, a_N\}}= \{x_{a_1},\dots,x_{a_N} \}$. Often, when clear from the context, we will omit the label ${\{a_1,\dots,a_N\}}$ and write ${\bf x}$ for ${\bf x}_{\{a_1,\dots, a_N\}}$.} and a single ``horizontal'' spectral parameter $y$. 
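As a concrete check of these definitions, the matrix (\ref{R-matr}) can be implemented numerically; the sketch below (variable names are ours) verifies that $R(1)$ reduces to the permutation operator and that the Yang--Baxter equation, eq.(\ref{YBE}) below, holds at generic values of $q$ and of the spectral parameters:

```python
import numpy as np

def R(z, q):
    # 4x4 R-matrix of eq. (R-matr) in the basis (uu, ud, du, dd)
    d = q - z / q
    a = (q * z - 1 / q) / d
    b = (z - 1) / d
    c1 = (q - 1 / q) * z / d
    c2 = (q - 1 / q) / d
    return np.array([[a, 0, 0, 0],
                     [0, b, c1, 0],
                     [0, c2, b, 0],
                     [0, 0, 0, a]], dtype=complex)

I2 = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def R12(z, q): return np.kron(R(z, q), I2)   # acts on sites 1, 2 of three
def R23(z, q): return np.kron(I2, R(z, q))   # acts on sites 2, 3

def R13(z, q):
    # conjugating R12 by the swap of sites 2 and 3 makes it act on sites 1, 3
    P23 = np.kron(I2, SWAP)
    return P23 @ R12(z, q) @ P23

def ybe_holds(z1, z2, z3, q, tol=1e-10):
    lhs = R12(z1/z2, q) @ R13(z1/z3, q) @ R23(z2/z3, q)
    rhs = R23(z2/z3, q) @ R13(z1/z3, q) @ R12(z1/z2, q)
    return np.allclose(lhs, rhs, atol=tol)
```

Note that at $z=1$ one has $a=c_1=c_2=1$ and $b=0$, so $R(1)$ is the permutation matrix.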
Thanks to the commutation relation \begin{equation} [R_{i,j}(x),\Omega_i(\phi)\otimes \Omega_j(\phi)]=0 \end{equation} and the Yang-Baxter equation \begin{equation}\label{YBE} R_{i,j}(z_i/z_j)R_{i,k}(z_i/z_k)R_{j,k}(z_j/z_k)= R_{j,k}(z_j/z_k)R_{i,k}(z_i/z_k)R_{i,j}(z_i/z_j) \end{equation} the transfer matrices for different values of $y$ commute \begin{equation} [T_N(y'|{\bf z}_{\{1,\dots,N\}},\phi),T_N(y''|{\bf z}_{\{1,\dots,N\}},\phi)]=0. \end{equation} The hamiltonian of the XXZ spin chain is given by \begin{equation} \frac{1}{T_N(1|{\bf 1},\phi)}\frac{dT_N(y|{\bf 1},\phi)}{dy}{\Bigg | }_{y=1} = -\frac{1}{q-q^{-1}} \left( H_N(\Delta) -\frac{3N}{2}\Delta \right). \end{equation} At $\Delta=-\frac{1}{2}$ and for generic values of the vertical spectral parameters, both in the odd size case with periodic boundary conditions and in the even size case with twisted boundary conditions, the transfer matrix has an eigenvalue equal to $1$. \begin{itemize} \item When $N=2n+1$, the eigenspace with eigenvalue $1$ is two-fold degenerate, spanned by $\Psi^\pm_{2n+1}({\bf z})$, corresponding to the two values $\pm \frac{1}{2}$ of the total spin $S^z=\frac{1}{2}\sum_{i=1}^{N}\sigma^z_i$ \begin{equation} S^z \Psi^\pm_{2n+1}({\bf z})=\pm \frac{1}{2}\Psi^\pm_{2n+1}({\bf z}). \end{equation} The two eigenstates are related by a flipping of all the spins \begin{equation} \Psi^+_{2n+1}({\bf z}) = \prod_{i=1}^N \sigma^x_i \cdot\Psi^-_{2n+1}({\bf z}). \end{equation} \item When $N=2n$ there is a single vector $\Psi^e_{2n}({\bf z})$ with eigenvalue $1$. It is in the zero sector of the total spin, \begin{equation} S^z \Psi^e_{2n}({\bf z})=0. \end{equation} These eigenstates reduce to the anti-ferromagnetic ground state(s) of the XXZ spin chain when the spectral parameters are specialized at $z_i=1$. 
\end{itemize} \subsection{Exchange relations at $\Delta = -\frac{1}{2}$} A crucial observation made in \cite{r-s-pzj} was that, for an appropriate choice of the normalization of $\Psi^\mu_N({\bf z})$, the eigenvector equation \begin{equation} T_N(y|{\bf z},\phi)\Psi^\mu_N({\bf z}) = \Psi^\mu_N({\bf z}) \end{equation} is equivalent to a set of exchange relations. Define the exchange operator $P_{i,j} (e^\mu_i\otimes e^\nu_j) =e^\nu_i\otimes e^\mu_j $, the left rotation operator $\sigma(v_1\otimes v_2\otimes\cdots \otimes v_{N-1}\otimes v_N)=v_2\otimes\cdots \otimes v_{N-1}\otimes v_N\otimes v_1$, and let $\check R_{i,i+1}(z) = P_{i,i+1}R_{i,i+1}(z)$; then $\Psi^\mu_N({\bf z})$, as a polynomial of minimal degree in the spectral parameters $z_i$, is determined, up to a constant factor, by the following set of equations \cite{r-s-pzj} \begin{align}\label{qKZ1} \check R_{i,i+1}(z_{i+1}/z_i)\Psi^\mu_N(z_1,\dots,z_i,z_{i+1},\dots,z_N) &= \Psi^\mu_N(z_1,\dots,z_{i+1},z_{i},\dots,z_N) \\[5pt]\label{qKZ2} \sigma \Psi^\mu_N(z_1,z_2,\dots,z_{N-1},z_N)&=D\Psi^\mu_N(z_2,\dots,z_{N-1},z_N,s z_1). \end{align} with $D=s=1$. These equations can be seen as the special case $q=e^{2\pi i/3}$ of the level $1$ qKZ equations \cite{Frenkel-Reshetikhin}, which correspond to generic $q$, $s=q^6$ and $D=q^{3N}q^{3(s^z_N+1)/2}$. The solutions of the level $1$ qKZ equations can be normalized in such a way that they become polynomials in the variables $z_i$ \cite{r-s-pzj}, of degree $n-1$ in the case of even size $N=2n$, and of degree $n$ in the odd case $N=2n+1$. Using the projectors $p_i^\pm= \frac{1\pm\sigma_i^z}{2}$ of the $i$-th spin in the up/down direction, let us write the exchange eqs.(\ref{qKZ1}) in components. 
If we have a spin up at site $i$ and a spin down at site $i+1$, or vice versa, we have \begin{equation}\label{triang-qKZ} \begin{split} p_i^- p_{i+1}^+ \Psi^\mu (z_{i},z_{i+1})&=\sigma_i^- \sigma_{i+1}^+ \frac{(q z_{i}-q^{-1}z_{i+1})\Psi^\mu (z_{i+1},z_{i}) -(q-q^{-1})z_{i}\Psi^\mu (z_{i},z_{i+1})}{z_{i+1}-z_{i}} \\ p_i^+ p_{i+1}^-\Psi^\mu (z_{i},z_{i+1})&=\sigma_i^+ \sigma_{i+1}^-\frac{(q z_{i}-q^{-1}z_{i+1})\Psi^\mu (z_{i+1},z_{i}) -(q-q^{-1})z_{i+1}\Psi^\mu (z_{i},z_{i+1})}{z_{i+1}-z_{i}} \end{split} \end{equation} These equations form a triangular system. Starting from a given component, we can reconstruct all the others by repeatedly using eqs.(\ref{triang-qKZ}). Therefore, if we want to show that two a priori distinct solutions of the qKZ equations actually coincide, it is enough to check the equality of one of their components. When there are two consecutive spins pointing in the same direction at positions $i$ and $i+1$, the first of the qKZ equations reads \begin{equation} \begin{split} p_i^\pm p_{i+1}^\pm \Psi^\mu (\dots,z_{i+1},z_{i},\dots) = \frac{qz_{i+1}-q^{-1}z_i}{qz_{i}-q^{-1}z_{i+1}}p_i^\pm p_{i+1}^\pm \Psi^\mu (\dots,z_{i},z_{i+1},\dots) \end{split} \end{equation} which means that the components having two consecutive spins up or down at positions $i$ and $i+1$, $p_i^\pm p_{i+1}^\pm \Psi^\mu (\dots,z_{i},z_{i+1},\dots)$, have a factor $qz_{i}-q^{-1}z_{i+1}$ \begin{equation} p_i^\pm p_{i+1}^\pm\Psi^\mu (\dots,z_{i},z_{i+1},\dots) = (qz_{i}-q^{-1}z_{i+1}) \tilde\Psi^{\mu,\pm}_{i, i+1}(\dots,z_{i},z_{i+1},\dots) \end{equation} and the vectors $\tilde\Psi^{\mu,\pm}_{i, i+1}(\dots,z_{i},z_{i+1},\dots)$ are symmetric under the exchange $z_i\leftrightarrow z_{i+1}$. 
Another useful relation is obtained by considering the matrix $e_i$, which is proportional to a projector and is a generator of the Temperley-Lieb algebra \begin{equation}\label{TL-gen} e_{i}=\left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & -q & 1 & 0 \\ 0 & 1 & -q^{-1} & 0 \\ 0 & 0 & 0 & 0 \end{array} \right),~~~~~~~~~~~~\begin{array}{l} e_i^2= \tau e_i,~~~~\tau=-q-q^{-1}\\[5pt] e_ie_{i\pm 1}e_i = e_i\\[5pt] e_ie_j=e_je_i ~~~\textrm{for}~~~~|i-j|>1 \end{array} \end{equation} The matrix $e_i$ is preserved under multiplication by the $\check R$-matrix for any value of the spectral parameter \begin{equation} e_i\check R_{i,i+1}(z)=\check R_{i,i+1}(z)e_i =e_i. \end{equation} By applying $e_i$ to the left of the first of the qKZ eqs.(\ref{qKZ1}) we find \begin{equation}\label{e_i-symm} e_i \Psi^\mu(\dots,z_{i},z_{i+1},\dots) = e_i \Psi^\mu(\dots,z_{i+1},z_{i},\dots). \end{equation} The components with the most consecutive aligned spins have a completely factorized form \begin{equation}\label{factor-comp} \begin{split} \Psi^e_{\underbrace{\uparrow,\dots,\uparrow}_n,\underbrace{\downarrow,\dots,\downarrow}_n}({\bf z})&= \prod_{1\leq i< j\leq n}\frac{qz_i-q^{-1}z_{j}}{q-q^{-1}} \prod_{n+1\leq i< j\leq 2n}\frac{qz_i-q^{-1}z_{j}}{q-q^{-1}} \\ \Psi^+_{\underbrace{\uparrow,\dots,\uparrow}_{n+1},\underbrace{\downarrow,\dots,\downarrow}_n}({\bf z})&= \prod_{1\leq i< j\leq n+1}\frac{qz_i-q^{-1}z_{j}}{q-q^{-1}} \prod_{n+2\leq i< j\leq 2n+1}\frac{qz_i-q^{-1}z_{j}}{q-q^{-1}}\prod_{i=n+2}^{2n+1}z_i \\ \Psi^-_{\underbrace{\uparrow,\dots,\uparrow}_n,\underbrace{\downarrow,\dots,\downarrow}_{n+1}}({\bf z})&= \prod_{1\leq i< j\leq n}\frac{qz_i-q^{-1}z_{j}}{q-q^{-1}} \prod_{n+1\leq i< j\leq 2n+1}\frac{qz_i-q^{-1}z_{j}}{q-q^{-1}} \end{split} \end{equation} where the residual normalization ambiguity has been fixed by requiring these components to be equal to $1$ for $z_i=1$. 
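The Temperley-Lieb relations (\ref{TL-gen}), the absorption property $e_i\check R_{i,i+1}(z)=\check R_{i,i+1}(z)e_i=e_i$, and the specialization $\check R_{i,i+1}(q^{-2})=\tau^{-1}e_i$ used later in the recursion are all finite matrix identities, so they can be verified directly; the sketch below (notation ours) does so at a generic complex $q$:

```python
import numpy as np

q = 1.23 + 0.57j              # a generic value of q
tau = -(q + 1 / q)

# Temperley-Lieb generator of eq. (TL-gen) on C^2 x C^2
e = np.array([[0,  0,     0, 0],
              [0, -q,     1, 0],
              [0,  1, -1 / q, 0],
              [0,  0,     0, 0]], dtype=complex)

def Rc(z):
    # \check R_{i,i+1}(z) = P R(z), with R(z) from eq. (R-matr)
    d = q - z / q
    a, b = (q * z - 1 / q) / d, (z - 1) / d
    c1, c2 = (q - 1 / q) * z / d, (q - 1 / q) / d
    Rm = np.array([[a, 0, 0, 0], [0, b, c1, 0], [0, c2, b, 0], [0, 0, 0, a]])
    P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
    return P @ Rm

I2 = np.eye(2)
e1, e2 = np.kron(e, I2), np.kron(I2, e)   # e_i and e_{i+1} on three sites
```

One then checks $e_i^2=\tau e_i$, $e_ie_{i\pm1}e_i=e_i$, $e\,\check R(z)=\check R(z)\,e=e$ and $\check R(q^{-2})=\tau^{-1}e$ to machine precision.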
From eqs.(\ref{factor-comp}) we see that the maximally factorized components satisfy (among others) the following relations \begin{equation}\label{diff-size-eq} \begin{split} \Psi^+_{\underbrace{\uparrow,\dots,\uparrow}_{n+1},\underbrace{\downarrow,\dots,\downarrow}_n}({\bf z})&= (1-q^{-2})^{n}\Psi^e_{\underbrace{\uparrow,\dots,\uparrow}_{n+1},\underbrace{\downarrow,\dots,\downarrow}_{n+1}}({\bf z})|_{z_{2n+2}=0}\\[5pt] \Psi^e_{\underbrace{\uparrow,\dots,\uparrow}_n,\underbrace{\downarrow,\dots,\downarrow}_n}({\bf z})&= (1-q^{2})^n \lim_{z_{2n+1}\rightarrow \infty}z_{2n+1}^{-n}\Psi^-_{\underbrace{\uparrow,\dots,\uparrow}_{n},\underbrace{\downarrow,\dots,\downarrow}_{n+1}}({\bf z}). \end{split} \end{equation} Using the triangularity of the eqs.(\ref{triang-qKZ}) we can conclude that the eqs.(\ref{diff-size-eq}) induce equalities between components of $\Psi^+_{2n+1}({\bf z})$ or $\Psi^e_{2n}({\bf z})$ and components of $\Psi^e_{2n+2}({\bf z})$ or $\Psi^-_{2n+1}({\bf z}) $ with the last spin down, i.e. \begin{equation}\label{zero-sp} \begin{split} \Psi^+_{2n+1}({\bf z})\otimes e_{2n+2}^\downarrow &= (1-q^{-2})^np^-_{2n+2}\Psi^e_{2n+2}({\bf z})|_{z_{2n+2}=0}\\ \Psi^e_{2n}({\bf z}) \otimes e_{2n+1}^\downarrow &= (1-q^{2})^n \lim_{z_{2n+1}\rightarrow \infty}z_{2n+1}^{-n}p^-_{2n+1}\Psi^-_{2n+1}({\bf z}) \end{split} \end{equation} We will use these equations in order to find a relation among the inhomogeneous versions of the EFP that we shall introduce in Section \ref{inhom-section}. \subsection{Recursion relation}\label{rec-subsect} We claim that, upon the specialization $z_{i+1}= q^2z_i$, the solution of the qKZ equation for $N$ spins reduces to the solution of the same system of equations for $N-2$ spins. In order to make the previous statement more precise we need to introduce some notation. 
Let $v_i$ be the vector spanning the image of the projector proportional to the Temperley-Lieb generator $e_i$ \begin{equation} v_i = e_i^{\uparrow}\otimes e_{i+1}^{\downarrow}-q^{-1} e_{i}^{\downarrow}\otimes e_{i+1}^{\uparrow},~~~~~~e_iv_i= -(q+q^{-1})v_i. \end{equation} Introduce the injective map \begin{equation} \Phi_N^{(i)} :(\mathbb C^{2})^{\otimes N} \rightarrow (\mathbb C^{2})^{\otimes N+2}, \end{equation} which inserts the vector $v_i$ at position $(i,i+1)$ and shifts by two the indices of the sites $j\geq i$, i.e. on a basis \begin{equation} \Phi_N^{(i)} (e_1^{a_1}\otimes \cdots \otimes e_{i-1}^{a_{i-1}} \otimes e_{i}^{a_{i}}\otimes \cdots \otimes e_{N}^{a_{N}} ) = e_1^{a_1}\otimes \cdots \otimes e_{i-1}^{a_{i-1}} \otimes v_i \otimes e_{i+2}^{a_{i}}\otimes \cdots \otimes e_{N+2}^{a_{N}} . \end{equation} Then we claim that \begin{proposition} The solutions of the exchange equations (\ref{qKZ1}), with the ``boundary conditions'' given by eqs.(\ref{factor-comp}), satisfy the following recursion relations \begin{equation}\label{recursion-psi} \Psi^{\mu}_N({\bf z})|_{z_{i+1}=q^2 z_i} = (-q)^{f_N(i)}(q^2z_i)^{\delta(\mu)}\prod_{j=1}^{i-1}\frac{qz_{j}-q^{-1}z_i}{q-q^{-1}} \prod_{j=i+1}^{N}\frac{q^{3}z_i-q^{-1}z_{j}}{q-q^{-1}} \Phi_{N-2}^{(i)} (\Psi^{\mu}_{N-2})({\bf z}_{\{\widehat{i,i+1}\}}) \end{equation} with $\delta(e)=\delta(-)=0$, $\delta(+)=1$, $f_N(i)=i-\lfloor \frac{N}{2}\rfloor$ and ${\bf z}_{\{\widehat{i,i+1}\}}= \{z_1, \dots,\widehat{z_i},\widehat{z_{i+1}},\dots,z_N\}$, i.e. the ordered set ${\bf z}$ from which the variables $z_i$ and $z_{i+1}$ are removed. 
\end{proposition} \begin{proof} If $z_{i+1}=q^{2}z_i$ then $\check R_{i,i+1}(z_{i}/z_{i+1})$ becomes a projector proportional to a generator of the Temperley-Lieb algebra \begin{equation} \check R_{i,i+1}(q^{-2}) = \tau^{-1}e_{i}. \end{equation} Therefore, by specializing the qKZ equations to $z_{i+1}=q^2z_i$, we deduce $$ \Psi^\mu_N(z_{i},z_{i+1}=q^2 z_i)= \check R_{i,i+1}(q^{-2})\Psi^\mu_N(q^2 z_{i},z_{i}) = \tau^{-1}e_i \Psi^\mu_N(q^2 z_{i},z_{i}). $$ In particular $ \Psi^\mu_N(z_{i},z_{i+1}=q^2 z_i)$ lies in the image of $\Phi_{N-2}^{(i)}$ and, by the injectivity of this map, there is a unique $\tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}})$ such that \begin{equation} \Psi^\mu_N(z_{i},z_{i+1}=q^2 z_i) = \Phi_{N-2}^{(i)} \tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}}). \end{equation} In order to determine the equations satisfied by $\tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}})$ we make use of the following relation among $R$-matrices \begin{equation} \begin{split} e_i \hat R_{i-1}(z_{i+2}/z_i) \hat R_{i}(z_{i+2}/(q^2 z_i)) \hat R_{i+1}(z_{i+2}/z_{i-1}) &\hat R_{i}(q^2z_{i}/z_{i-1}) \hat R_{i-1}(z_{i}/z_{i-1}) e_i = \\[5pt] \frac{(qz_{i+2}-q^{-1}z_i)(qz_{i}-q^{-1}z_{i-1})}{ (qz_{i-1}-q^{-1}z_i)(qz_{i}-q^{-1}z_{i+2})}e_i &\hat R_{i-1,i+2}(z_{i+2},z_i) e_i. \end{split} \end{equation} Applying both sides to $\Psi^\mu_N(z_{i},z_{i+1}=q^2 z_i)$ and using that \begin{equation} \hat R_{i-1,i+2}(z_{i+2},z_i) \Phi_i = \Phi_i \hat R_{i-1,i}(z_{i+2},z_i) \end{equation} we get that $\tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}})$ satisfies \begin{equation} \begin{split} &\tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}}) =\\[5pt] &\frac{(qz_{i+2}-q^{-1}z_i)(qz_{i}-q^{-1}z_{i-1})}{ (qz_{i-1}-q^{-1}z_i)(qz_{i}-q^{-1}z_{i+2})} \hat R_{i-1,i}(z_{i+2},z_i) \tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}}), \end{split} \end{equation} and the vector \begin{equation} (-q)^{-f_N(i)}(q^2z_i)^{-\delta(\mu)}\prod_{j=1}^{i-1}\frac{q-q^{-1}}{qz_{j}-q^{-1}z_i} \prod_{j=i+1}^{N}\frac{q-q^{-1}}{q^{3}z_i-q^{-1}z_{j}} \tilde \Psi^\mu_N(z_{i},{\bf z}_{\{\widehat{i,i+1}\}}) \end{equation} satisfies all the qKZ equations at size $N-2$. In order to check that it coincides with $\Psi^\mu_{N-2}({\bf z}_{\{\widehat{i,i+1}\}})$ it is enough to check that the components with the most aligned consecutive spins starting from position $i+1$ coincide, which is indeed the case. \end{proof} \section{(Pseudo)-EFP with spectral parameters}\label{inhom-section} The formal definition of the Emptiness Formation Probability makes use of the natural scalar product $(\cdot,\cdot)_N$ on $\mathcal H_N$, induced by the scalar product on $\mathbb C_i^2$ where $\{e_i^\uparrow, e_i^\downarrow \}$ form an orthonormal basis\footnote{In the following, most of the time it will be clear from the context which Hilbert space we are considering and therefore we will omit the label $N$ in the scalar product $(\cdot,\cdot)_N$.}. The (pseudo)-EFPs read \begin{equation}\label{hom-EFP-def} \begin{split} E^{\pm}_{2n+1}(k)&= \frac{\left(\Psi^\pm_{2n+1}({\bf 1}),\prod_{i=1}^k p^+_i \cdot\Psi^\pm_{2n+1}({\bf 1})\right)}{\left(\Psi^\pm_{2n+1}({\bf 1}), \Psi^\pm_{2n+1}({\bf 1})\right)}\\ E^{e}_{2n}(k)&= \frac{\left((\Psi^e_{2n}({\bf 1}))^*,\prod_{i=1}^k p^+_i \cdot\Psi^e_{2n}({\bf 1})\right)}{\left((\Psi^e_{2n}({\bf 1}))^*, \Psi^e_{2n}({\bf 1} )\right)} \\ E^{\tilde e}_{2n}(k)&= \frac{\left(\Psi^e_{2n}({\bf 1}),\prod_{i=1}^k p^+_i \cdot\Psi^e_{2n}({\bf 1})\right)}{\left(\Psi^e_{2n}({\bf 1}), \Psi^e_{2n}({\bf 1})\right)}. \end{split} \end{equation} Our strategy to compute the EFPs is to consider an inhomogeneous version of these quantities, which is obtained, roughly speaking, by replacing in eqs.(\ref{hom-EFP-def}) $\Psi^\mu_N({\bf 1})$ with $\Psi^\mu_N({\bf z})$, the solution of the qKZ equations. 
We shall see that, if the substitution is done in the proper way, the inhomogeneous EFPs turn out to be symmetric polynomials in the spectral parameters and satisfy certain recursion relations which completely characterize these functions among the polynomials of the same degree in the variables $z_i$. When defining the inhomogeneous EFP for $k$ aligned spins up, it is convenient to extract the factor $\prod_{1\leq i<j\leq k}(qy_i-q^{-1}y_j)$ from $\prod_{i=1}^k p^+_i \Psi_N({\bf y}_{\{1,\dots ,k\}}; {\bf z}_{\{1,\dots ,N-k\}})$, and to introduce the vectors $\Psi_N(k;{\bf y}_{\{1,\dots ,k\}}; {\bf z}_{\{1,\dots ,N-k\}}) \in \mathcal H_{N-k} $ \begin{equation} \left(\bigotimes_{i=1}^k e_i^\uparrow \right)\otimes \Psi_N(k;{\bf y}_{\{1,\dots ,k\}}; {\bf z}_{\{1,\dots ,N-k\}}) = \frac{\prod_{i=1}^k p^+_i \Psi_N(y_1,\dots, y_k, z_{1},\dots, z_{N-k})}{\prod_{1\leq i<j\leq k}(qy_i-q^{-1}y_j)}. \end{equation} Let us moreover introduce the operator \begin{equation} \mathcal P_N({\bf z})=\prod_{i=1}^N (z_i p_i^+ +p_i^-) \end{equation} which multiplies by $z_i$ each component of the vector with a spin up at position $i$. The last ingredient we need is the $*$ operation, which consists of substituting $q$ with $q^{-1}$. Our definition of the inhomogeneous (and unnormalized) EFP is the following \begin{equation}\label{def-inhom-EFP} \begin{array}{c} \mathcal E^{\mu}_N(k;{\bf y}_{\{1,\dots ,2k \}};{\bf z}_{\{1,\dots ,N-k \}}) = \\[10pt] \prod_{i=1}^{N-k}z_i^{-\delta(\mu)}(\mathcal P_{N-k}({\bf z})(\Psi^{\mu}_{N}(k;q^{-6} {\bf y}_{\{k+1,\dots ,2k \}}; {\bf z}))^* ,\Psi^\mu_N(k;{\bf y}_{\{1,\dots ,k\}}; {\bf z}) )_{N-k} \end{array} \end{equation} where $\delta(e)=\delta(-)=0$, while $\delta(+)=1$, and $N$ has the parity corresponding to $\mu$. For the moment it is evident that, for $\mu=e$ or $\mu=-$, $\mathcal E^{\mu}_N(k;{\bf y}; {\bf z})$ are polynomials in their variables. Actually, the polynomiality holds also in the case $\mu=+$, as will be shown below. 
We also define the inhomogeneous version of the pseudo-EFP \begin{equation}\label{def-inhom-pseudo} \begin{array}{c} \mathcal E^{\tilde \mu}_{N}(k;{\bf y};{\bf z}) = \\[10pt] \prod_{i=k+1}^{2k} y_i^{\lfloor \frac{N+1}{2}\rfloor -k}\prod_{i=1}^{N-k} z_i^{\lfloor \frac{N+1}{2}\rfloor-1} (\Psi^{\mu}_{N}(k;q^{-6}{\bf y}^{-1}_{\{k+1,\dots,2k \}} ; {\bf z}^{-1}) ,\Psi^\mu_{N}(k;{\bf y}_{\{1,\dots, k\}}; {\bf z}) )_{N-k}. \end{array} \end{equation} The choice to multiply the variables $y_{k+1},\dots, y_{2k}$ by $q^{-6} $ is motivated by the fact that in this way $\mathcal E^{\mu}_N(k;{\bf y}; {\bf z})$ turns out to be symmetric under the exchange $y_i \leftrightarrow y_j$ for all $1\leq i,j\leq 2k$, as will be shown at the end of Section \ref{rec-section-q-gen}. The polynomials (\ref{def-inhom-EFP}, \ref{def-inhom-pseudo}) have other remarkable properties, but for the moment let us simply notice that for $q=e^{2\pi i/3}$ and $z_i=y_i=1$ these functions reduce to the unnormalized versions of the EFPs defined in eqs.(\ref{hom-EFP-def}). \vskip .5cm \noindent At first sight, looking at eqs.(\ref{def-inhom-EFP},\ref{def-inhom-pseudo}), it seems that we have six different families of polynomials under consideration. However, in the odd size case we have \begin{equation}\label{equality-tilde} \mathcal E^{\mu}_{2n+1}(k;{\bf y}; {\bf z})=\mathcal E^{\tilde \mu}_{2n+1}(k;{\bf y};{\bf z}). \end{equation} This follows from the fact that \begin{equation}\label{rel-odd-Psi} \begin{split} \prod_{i=1}^{2n+1} z_i^{n+1} \Psi^{+}_{2n+1}({\bf z}^{-1})= \mathcal P_{2n+1}({\bf z})(\Psi^{+}_{2n+1}({\bf z}))^*\\ \prod_{i=1}^{2n+1} z_i^{n} \Psi^{-}_{2n+1}({\bf z}^{-1})= \mathcal P_{2n+1}({\bf z})(\Psi^{-}_{2n+1}({\bf z}))^*. 
\end{split} \end{equation} To prove eqs.(\ref{rel-odd-Psi}) we observe that the vector $\Psi^{\mu}_{N}({\bf z}^{-1})$ satisfies the exchange equation \begin{equation}\label{qKZinv} \Psi^{\mu}_{N}(z_{i+1}^{-1},z_{i}^{-1}) = \check{R}_{i,i+1}(z_{i}/z_{i+1}) \Psi^{\mu}_{N}(z_{i}^{-1},z_{i+1}^{-1}). \end{equation} The same equation holds also for the vector $\mathcal P_N({\bf z})(\Psi^{\mu}_{N}({\bf z}))^*$, i.e. \begin{equation}\label{qKZstar} \mathcal P_N(z_{i+1},z_i)(\Psi^{\mu}_{N}(z_{i+1},z_i))^* = \check{R}_{i,i+1}(z_{i}/z_{i+1}) \mathcal P_N(z_i,z_{i+1})(\Psi^{\mu}_{N}(z_{i},z_{i+1}))^*. \end{equation} This is a consequence of the following commutation relation between the $\check R$-matrix and the operator $(z_ip_i^+ +p_i^-)(z_{i+1}p_{i+1}^+ +p_{i+1}^-)$ $$ \check{R}_{i,i+1}(z_{i}/z_{i+1})(z_{i}p_{i}^+ +p_{i}^-)(z_{i+1}p_{i+1}^+ +p_{i+1}^-) = (z_{i+1}p_i^+ +p_i^-)(z_{i}p_{i+1}^+ +p_{i+1}^-)\check{R}^*_{i,i+1}(z_{i+1}/z_{i}), $$ which implies \begin{equation}\label{symm-R} \check{R}_{i,i+1}(z_{i}/z_{i+1})\mathcal P(z_{i},z_{i+1}) = \mathcal P(z_{i+1},z_{i})\check{R}^*_{i,i+1}(z_{i+1}/z_{i}) \end{equation} and hence eq.(\ref{qKZstar}). Therefore, to conclude eqs.(\ref{rel-odd-Psi}) it is sufficient to check that they hold for the components with the most aligned spins. We are left with only \emph{four} different inhomogeneous EFPs, and as a bonus we have also shown that $\mathcal E^{+}_{2n+1}(k;{\bf y}; {\bf z})$ is a polynomial in its variables. \vskip .5cm \noindent {\bf Symmetry under $z_i\leftrightarrow z_j$} \noindent The inhomogeneous (pseudo)-EFP $\mathcal E^{\mu}_N(k;{\bf y};{\bf z})$ is obviously symmetric under the exchange $y_i \leftrightarrow y_j$ for $1\leq i,j\leq k$ and $k+1\leq i,j\leq 2k$. Using eqs.(\ref{qKZinv},\ref{qKZstar}) it is easy to show that it is symmetric also under the exchange $z_i\leftrightarrow z_j$. 
Indeed \begin{equation} \begin{array}{c} \mathcal E^{\mu}_N(k;{\bf y}; \dots, z_i, z_{i+1},\dots) =\\[5pt] ((\mathcal P(z_i,z_{i+1})\Psi^{\mu}_{N}(k;z_i, z_{i+1} ))^* , \check{R}_{i,i+1}(z_i/z_{i+1}) \Psi^\mu_N(k; z_{i+1}, z_{i}))_N= \\[5pt] (\check{R}_{i,i+1}(z_i/z_{i+1})(\mathcal P(z_i,z_{i+1})\Psi^{\mu}_{N}(k; z_i, z_{i+1} ))^* , \Psi^\mu_N(k; z_{i+1}, z_{i}))_N= \\[5pt] ((\mathcal P(z_{i+1},z_{i})\Psi^{\mu}_{N}(k; z_{i+1}, z_{i}))^*, \Psi^\mu_N(k; z_{i+1}, z_{i}))_N=\\[5pt] \mathcal E^{\mu}_N(k;{\bf y}; \dots, z_{i+1}, z_{i},\dots) \end{array} \end{equation} where the first equality uses the exchange equation (\ref{qKZinv}), the second the fact that the $\check R$-matrix is symmetric, the third follows from eq.(\ref{qKZstar}), and the last one is just the definition. The proof of the symmetry of the pseudo-EFP under $z_i\leftrightarrow z_j$ is completely analogous. \vskip .5cm \noindent {\bf Factorized cases} \noindent Using eqs.(\ref{factor-comp}) we can provide the value of $\mathcal E^{\mu/\tilde \mu }_{N}(k;{\bf y}; {\bf z})$ corresponding to the maximal number of consecutive aligned spins. They coincide for the true and for the pseudo EFP and read \begin{equation}\label{initial-value} \begin{split} \mathcal E^{e/\tilde e}_{2k}(k;{\bf y};{\bf z}) &= \prod_{1\leq i<j\leq k}\frac{(qz_i-q^{-1}z_j)(qz_j-q^{-1}z_i)}{(q-q^{-1})^2} \\ \mathcal E^{+/\tilde +}_{2k+1}(k+1;{\bf y}; {\bf z}) &=\prod_{1\leq i<j\leq k}\frac{(qz_i-q^{-1}z_j)(qz_j-q^{-1}z_i)}{(q-q^{-1})^2}\prod_{i=1}^kz_i^2\\ \mathcal E^{-/\tilde -}_{2k+1}(k;{\bf y};{\bf z}) &=\prod_{1\leq i<j\leq k+1}\frac{(qz_i-q^{-1}z_j)(qz_j-q^{-1}z_i)}{(q-q^{-1})^2}. \end{split} \end{equation} We will see in the following that the first and the third of these equations will provide the starting point of a recursion which will be worked out in the next section and which completely characterizes the inhomogeneous EFP.
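At the combinatorial point $q=e^{2\pi i/3}$ each factor appearing in eqs.(\ref{initial-value}) reduces to $(z_i^2+z_iz_j+z_j^2)/3$, the form that will reappear in Section \ref{combinatorial-pol}. A quick numerical check of this reduction (the test values below are arbitrary):

```python
import cmath

q = cmath.exp(2j * cmath.pi / 3)          # primitive cube root of unity
zi, zj = 0.9 + 0.1j, 1.4 - 0.3j           # arbitrary generic values
factor = (q * zi - zj / q) * (q * zj - zi / q) / (q - 1 / q) ** 2
assert abs(factor - (zi ** 2 + zi * zj + zj ** 2) / 3) < 1e-12
```

The reduction follows from $q+q^2=-1$, which gives $(q-q^{-1})^2=-3$ and $(qz_i-q^{-1}z_j)(qz_j-q^{-1}z_i)=-(z_i^2+z_iz_j+z_j^2)$.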
\subsection{Recursion relation for the inhomogeneous EFP}\label{rec-section-q-gen} We begin this section by presenting some relations among the EFPs at different parities, which are obtained by setting one of the spectral parameters to zero or sending it to infinity \begin{align}\label{special-0empt} \mathcal E^{+}_{2n+1}(k;{\bf y}; {\bf z} \setminus z_{2n+2})&= (-1)^n(q-q^{-1})^{-2n} \left(\prod_{i=1}^{2n+1-k}z_i^{-1}\right)\mathcal E^{e}_{2n+2}(k;{\bf y}; {\bf z})|_{z_{2n+2}=0} \\[7pt]\label{special-0empt2} \mathcal E^{e}_{2n}(k;{\bf y}; {\bf z} \setminus z_{2n+1})&= (-1)^n(q-q^{-1})^{2n} \lim_{z_{2n+1}\rightarrow \infty} z_{2n+1}^{-2n} \mathcal E^{-}_{2n+1}(k;{\bf y}; {\bf z}). \end{align} The first of these equations follows from eqs.(\ref{zero-sp}) and by noticing that \begin{equation} \mathcal P({\bf z})|_{z_{2n+2}=0}=\mathcal P({\bf z}\setminus z_{2n+2})p_{2n+2}^- . \end{equation} For the second one we notice that, writing \begin{equation} \begin{array}{c} \mathcal E^{-}_{2n+1}(k;{\bf y}; {\bf z}) = \\[5pt] \left((\Psi^{-}_{2n+1}(k; {\bf z}))^* , \mathcal P_{2n}({\bf z} \setminus z_{2n+1})z_{2n+1}p^+_{2n+1} \Psi^-_{2n+1}(k; {\bf z}) \right)_{2n+1}~+\\[5pt] \left((\Psi^{-}_{2n+1}(k;{\bf z}))^* , \mathcal P_{2n}({\bf z} \setminus z_{2n+1})p^-_{2n+1} \Psi^-_{2n+1}(k; {\bf z})\right)_{2n+1} \end{array} \end{equation} the first term in the r.h.s. is a polynomial in $z_{2n+1}$ of degree $2n-1$ while the second is of degree $2n$; therefore in the limit only the second one survives, and we can again use eqs.(\ref{zero-sp}).
\vskip .5cm \noindent {\bf Specialization $z_i=q^{\pm 2} z_j$} \noindent The inhomogeneous EFP satisfies a recursion relation inherited from the recursion relation among solutions of the qKZ equations, eq.(\ref{recursion-psi}) \begin{equation}\label{comp1-rec} \begin{array}{c} $$\displaystyle{ \mathcal E^{\mu}_{N}(k;{\bf y}; {\bf z} )|_{z_{i+1}=q^{2}z_i}=}$$\\[5pt] $$\displaystyle{\tau^{-1} ( (\Psi^{\mu}_{N}(k; z_i, z_{i+1}=q^{-2}z_{i} ))^* ,\mathcal P(z_{i+1}=q^2z_{i}) e_i \Psi^\mu_N(k; z_{i}, z_{i+1}=q^2z_{i}))_N =}$$\\[5pt] $$\displaystyle{\tau^{-1} ( e_i \mathcal P(z_{i+1}=q^2z_{i})(\Psi^{\mu}_{N}(k; z_i, z_{i+1}=q^{-2}z_{i} ))^* ,\Psi^\mu_N(k; z_{i}, z_{i+1}=q^2z_{i}))_N.}$$ \end{array} \end{equation} A simple computation shows that $e_i (z_ip_i^++ p_i^-)(q^2 z_ip_{i+1}^++ p_{i+1}^-) =\frac{1}{\tau} e_i (z_ip_i^++ p_i^-)(q^2 z_ip_{i+1}^++ p_{i+1}^-)e_i^* $, which means \begin{equation} e_i \mathcal P(z_{i+1}=q^2z_{i})=\frac{1}{\tau} e_i \mathcal P(z_{i+1}=q^2z_{i})e_i^* \end{equation} and we can substitute it into the last line of eq.(\ref{comp1-rec}) obtaining \begin{equation} \tau^{-2} ( e_i \mathcal P(z_{i+1}=q^2z_{i})(e_i\Psi^{\mu}_{N}(k; z_i, z_{i+1}=q^{-2}z_{i} ))^* ,\Psi^\mu_N(k; z_{i}, z_{i+1}=q^2z_{i}))_N. \end{equation} Now we use eq.(\ref{e_i-symm}) in order to exchange the variables $z_i$ and $z_{i+1}$ in the l.h.s. of the scalar product \begin{equation} \tau^{-2} ( e_i \mathcal P(z_{i+1}=q^2z_{i})(e_i\Psi^{\mu}_{N}(k;z_{i+1}=q^{-2}z_{i}, z_i))^* ,\Psi^\mu_N(k; z_{i}, z_{i+1}=q^2z_{i}))_N. \end{equation} Therefore we can apply to both sides of the scalar product the recursion relation (\ref{recursion-psi}) and find \begin{equation}\label{rec1} \begin{array}{c} $$\displaystyle{ \frac{\mathcal E^{\mu}_{N}(k;{\bf y}; {\bf z})|_{z_{i+1}=q^{2}z_i}}{\mathcal E^{\mu}_{N-2}(k;{\bf y};{\bf z}\setminus\{z_{i}, z_{i+1}\})}= }$$ \\[15pt] $$\displaystyle{(-1)^k (1+q^2)z_i\prod_{j=1}^{2k}\frac{qy_{j}-q^{-1}z_i}{q-q^{-1}}\prod_{\substack{1\leq j\leq N-k\\j\neq i,i+1}}\frac{(qz_{j}-q^{-1}z_i)(q^{3}z_i-q^{-1}z_{j})}{-(q-q^{-1})^2}.}$$ \end{array} \end{equation} The case of the pseudo-EFP at even size is analogous \begin{equation} \begin{array}{c} $$\displaystyle{ \mathcal E^{\tilde e}_{2n}(k;{\bf y}; {\bf z})|_{z_{i+1}=q^{2}z_i}=}$$\\[5pt] $$\displaystyle{ (\Psi^{e}_{2n}(k; z_i^{-1}, z_{i+1}^{-1}=q^{-2}z_{i}^{-1} ) ,e_i \Psi^e_{2n}(k; z_{i}, z_{i+1}=q^2z_{i}))_{2n} =}$$\\[5pt] $$\displaystyle{ ( e_i \Psi^{e}_{2n}(k; z_{i+1}^{-1}=q^{-2}z_{i}^{-1}, z_i^{-1}) ,\Psi^e_{2n}(k; z_{i}, z_{i+1}=q^2z_{i}))_{2n} }$$ \end{array} \end{equation} and again we can apply the recursion at the level of vectors to both sides of the scalar product, finding \begin{equation}\label{rec2} \begin{array}{c} $$\displaystyle{ \frac{\mathcal E^{\tilde e}_{2n}(k;{\bf y}; {\bf z})|_{z_{i+1}=q^{2}z_i}}{\mathcal E^{\tilde e}_{2n-2}(k;{\bf y}; {\bf z}\setminus\{z_{i}, z_{i+1}\})}= }$$\\[15pt] $$\displaystyle{(-1)^k (1+q^2)\prod_{j=1}^{2k}\frac{qy_{j}-q^{-1}z_i}{q-q^{-1}}\prod_{\substack{1\leq j\leq N-k\\j\neq i,i+1}}\frac{(qz_{j}-q^{-1}z_i)(q^{3}z_i-q^{-1}z_{j})}{-(q-q^{-1})^2}. }$$ \end{array} \end{equation} Let us look at $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y}; {\bf z})$ as polynomials in $z_1$. Their degrees are in both cases less than $2n-1$. The recursion relations eqs.(\ref{rec1},\ref{rec2}) provide the value of $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y};{\bf z})$ for $2(2n-k-1)$ distinct values of $z_1$ (i.e. for $z_1=q^{\pm 2} z_i$).
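The counting behind this interpolation step can be made explicit: a polynomial of degree less than $2n-1$ in $z_1$ is fixed by its values at $2n-1$ distinct points, and for $n>k$ the recursion supplies at least that many. An illustrative script (not part of the paper):

```python
def points_supplied(n, k):
    # values of z_1 fixed by the recursion: z_1 = q^{+2} z_i and z_1 = q^{-2} z_i
    return 2 * (2 * n - k - 1)

def points_needed(n):
    # a polynomial of degree < 2n-1 has at most 2n-1 unknown coefficients
    return 2 * n - 1

# 2(2n-k-1) >= 2n-1 is equivalent to n >= k+1, i.e. to n > k for integers
assert all(points_supplied(n, k) >= points_needed(n)
           for n in range(1, 50) for k in range(n))
```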
Therefore, for $n>k$, by Lagrange interpolation these specializations determine uniquely $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y}; {\bf z})$ once $\mathcal E^{e/\tilde e}_{2n-2}(k;{\bf y}; {\bf z})$ is known. As a first consequence we can argue that $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y}; {\bf z})$ is symmetric under the exchange $y_i \leftrightarrow y_j$ for all $1\leq i,j\leq 2k$. Indeed, for the case $n=k$ we have explicit expressions for $\mathcal E^{e/\tilde e}_{2n=2k}(k;{\bf y}; {\bf z})$, given by eqs.(\ref{initial-value}), from which we can read that they are even independent of ${\bf y}$. The recursion relations (\ref{rec1},\ref{rec2}) are symmetric under the exchange $y_i \leftrightarrow y_j$ and therefore, by induction, if $\mathcal E^{e/\tilde e}_{2n-2}(k;{\bf y} ; {\bf z})$ is symmetric, then also $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y}; {\bf z})$ is symmetric. A second important consequence is that any family of polynomials labeled by $n$ and $k$ which satisfies the following conditions: \begin{itemize} \item[-] they are symmetric in the spectral parameters, \item[-] the degree in each spectral parameter is less than $2n-1$, \item[-] they coincide with $\mathcal E^{e/\tilde e}_{2k}(k;{\bf y}; {\bf z})$ for $n=k$, \item[-] they satisfy the recursion relations eqs.(\ref{rec1},\ref{rec2}), \end{itemize} must coincide with $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y}; {\bf z})$. This line of reasoning will be adopted in Section \ref{combinatorial-pol}, where we will provide a determinantal representation of $\mathcal E^{e/\tilde e}_{2n}(k;{\bf y}; {\bf z})$ at $q=e^{2\pi i/3}$. The same argument holds also for $\mathcal E^{-}_{2n+1}(k;{\bf y}; {\bf z})$, because the degree is less than $2n+1$ and we always have enough specializations to apply the Lagrange interpolation and reconstruct all the $\mathcal E^{-}_{2n+1}(k;{\bf y}; {\bf z})$ starting from the initial conditions $\mathcal E^{-}_{2k+1}(k;{\bf y}; {\bf z})$.
The case of $\mathcal E^{+}_{2n+1}(k;{\bf y}; {\bf z})$ is slightly different. Again the degree is bounded by $2n+1$, and this allows us to fix $\mathcal E^{+}_{2n+1}(k;{\bf y}; {\bf z})$ starting from $\mathcal E^{+}_{2k+1}(k;{\bf y}; {\bf z})$; the problem is that we do not have an explicit formula for $\mathcal E^{+}_{2k+1}(k;{\bf y}; {\bf z})$, such a formula being available only for $n=k-1$. This apparent problem is bypassed using relation (\ref{special-0empt}). \section{Inhomogeneous EFP at $q=e^{2\pi i/3}$}\label{combinatorial-pol} In order to introduce the expression of $\mathcal E^{\mu}_N(k;{\bf y};{\bf z})$ and of $\mathcal E^{\tilde \mu}_N(k;{\bf y}; {\bf z})$ which is best suited for taking the specialization $z_i=y_\alpha=1$, we first analyze the case $k=0$, in which there are no variables $y$. Let us introduce the Young tableaux \begin{equation} \lambda(m,r)=\{\lfloor \frac{r}{2} \rfloor,\lfloor \frac{r+1}{2} \rfloor,\dots,\lfloor \frac{r+i-1}{2} \rfloor,\dots,\lfloor \frac{r+m-1}{2} \rfloor \}. \end{equation} Then we find that the inhomogeneous version of the squared norm, or of the sum of the squares of the components, is given in terms of the product of two Schur polynomials \begin{equation}\label{case:k=0} \begin{array}{c} \mathcal E^{\mu}_{N}(0;{\bf z}) = 3^{-\lfloor \frac{N}{2} \rfloor\left(\lceil \frac{N}{2} \rceil-1 \right)} S_{\lambda(N,0)}(z_1,\dots,z_{N})S_{\lambda(N,1)}(z_1,\dots,z_{N})\\[5pt] \mathcal E^{\tilde e}_{2n}(0;{\bf z}) = 3^{-n(n-1)} S_{\lambda(2n,0)}(z_1,\dots,z_{2n})^2 \end{array} \end{equation} The proof of eqs.(\ref{case:k=0}) is quite simple and follows the pattern discussed at the end of the previous section. Eqs.(\ref{case:k=0}) are trivially true for $N=1,2$ (or $n=1$); moreover, their r.h.s. are polynomials in $z_1$ of degree at most $2\lceil \frac{N}{2} \rceil -1$.
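The staircase partitions $\lambda(m,r)$ and the Schur polynomials entering eqs.(\ref{case:k=0}) are easy to tabulate. A small \texttt{sympy} sketch using the bialternant formula $S_\lambda=\det(z_i^{\lambda_j+m-j})/\det(z_i^{m-j})$ (the helper names are ours), which also confirms the trivial case $N=2$, where eqs.(\ref{case:k=0}) give $\mathcal E^{\mu}_2(0;{\bf z})=z_1+z_2$:

```python
from sympy import symbols, Matrix, cancel, simplify

def staircase(m, r):
    # lambda(m,r) = {floor(r/2), floor((r+1)/2), ..., floor((r+m-1)/2)}
    return [(r + i) // 2 for i in range(m)]

def schur(lam, zs):
    # bialternant formula; lam is sorted into a weakly decreasing partition
    m = len(zs)
    lam = sorted(lam, reverse=True)
    num = Matrix(m, m, lambda i, j: zs[i] ** (lam[j] + m - 1 - j))
    den = Matrix(m, m, lambda i, j: zs[i] ** (m - 1 - j))
    return cancel(num.det() / den.det())

z1, z2 = symbols('z1 z2')
assert staircase(4, 1) == [0, 1, 1, 2]
assert schur(staircase(2, 0), [z1, z2]) == 1                       # S_{lambda(2,0)} = 1
assert simplify(schur(staircase(2, 1), [z1, z2]) - (z1 + z2)) == 0  # S_{lambda(2,1)}
```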
The Schur polynomials $S_{\lambda(m,r)}(z_1,\dots,z_m)$ satisfy a recursion relation when one specializes $z_i=q^{\pm} z_j$ (see for example Appendix B of \cite{biane}) \begin{equation}\label{recursion-schur} S_{\lambda(m,r)}({\bf z})|_{z_i=q^{\pm } z_j}= (-q^{\mp}z_j)^r\prod_{\substack{\ell=1\\ \ell \neq i,j}}^{m}(z_\ell-q^{\mp}z_j) ~ S_{\lambda(m-2,r)}({\bf z} \setminus \{ z_i, z_j\}). \end{equation} This implies that the r.h.s. of eqs.(\ref{case:k=0}) satisfy the recursion relations eqs.(\ref{rec1},\ref{rec2}), and therefore eqs.(\ref{case:k=0}) hold. \vskip .5cm \noindent {\bf Generic value of $k$} \noindent The recursion relation (\ref{recursion-schur}) for the Schur functions $S_{\lambda(m,r)}$ suggests a possible representation also in the case $k\neq 0$. For the sake of clarity let us focus for a moment on $\mathcal E^{e}_{2n}(k;{\bf y};{\bf z})$. It is easy to see that any product of the kind $S_{\lambda(2n,0)}({\bf y}_I,{\bf z})S_{\lambda(2n,1)}({\bf y}_{I^c},{\bf z})$, with $I\subset \{1,\dots, 2k \}$ and $I^c=\{1,\dots, 2k \}\setminus I$, satisfies the recursion relations (\ref{rec1}), but with a ``wrong'' initial condition. It is reasonable to hope that an appropriate linear combination of terms with different choices of $I$ could provide the right initial condition and hence $\mathcal E^{e}_{2n}(k;{\bf y};{\bf z})$. In order to present how this idea actually works it is convenient to introduce a bit of notation.
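Before setting up that notation, eq.(\ref{recursion-schur}) can be spot-checked numerically: take $m=3$, $r=1$ and specialize $z_1=q\,z_2$ with $q=e^{2\pi i/3}$ (the identity relies on $q$ being a primitive cube root of unity). A sketch, not from the paper:

```python
import cmath
import numpy as np

def schur_num(lam, zs):
    # bialternant formula, evaluated numerically
    zs = list(zs); m = len(zs)
    lam = sorted(lam, reverse=True)
    num = np.array([[z ** (lam[j] + m - 1 - j) for j in range(m)] for z in zs])
    den = np.array([[z ** (m - 1 - j) for j in range(m)] for z in zs])
    return np.linalg.det(num) / np.linalg.det(den)

q = cmath.exp(2j * cmath.pi / 3)
z2, z3 = 0.7 + 0.2j, 1.3 - 0.4j                 # generic test values
lhs = schur_num([0, 1, 1], [q * z2, z2, z3])     # lambda(3,1) = {0,1,1}, z_1 = q z_2
rhs = (-z2 / q) * (z3 - z2 / q)                  # prefactor; S_{lambda(1,1)} = 1
assert abs(lhs - rhs) < 1e-10
```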
Let $\tilde\rho,\tilde\sigma$ be strictly increasing infinite sequences of non-negative integers, then consider the following family of matrices \begin{equation} \mathcal M^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z}) = \left( \begin{array}{cccccccc} z_1^{\tilde\rho_1} & z_1^{\tilde\rho_2} & \dots & z_1^{\tilde\rho_{r+s}}& 0 & 0 & \dots &0 \\ z_2^{\tilde\rho_1} & z_2^{\tilde\rho_2} & \dots & z_2^{\tilde\rho_{r+s}}& 0 & 0 & \dots &0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots &\vdots\\ z_r^{\tilde\rho_1} & z_r^{\tilde\rho_2} & \dots & z_r^{\tilde\rho_{r+s}}& 0 & 0 & \dots &0 \\ 0 & 0 & \dots & 0 & z_1^{\tilde\sigma_1} & z_1^{\tilde\sigma_2} & \dots & z_1^{\tilde\sigma_{r+s}} \\ 0 & 0 & \dots & 0 & z_2^{\tilde\sigma_1} & z_2^{\tilde\sigma_2} & \dots & z_2^{\tilde\sigma_{r+s}} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots &\vdots\\ 0 & 0 & \dots & 0 & z_r^{\tilde\sigma_1} & z_r^{\tilde\sigma_2} & \dots & z_r^{\tilde\sigma_{r+s}} \\ y_1^{\tilde\rho_1} & y_1^{\tilde\rho_2} & \dots & y_1^{\tilde\rho_{r+s}}& y_1^{\tilde\sigma_1} & y_1^{\tilde\sigma_2} & \dots & y_1^{\tilde\sigma_{r+s}} \\ y_2^{\tilde\rho_1} & y_2^{\tilde\rho_2} & \dots & y_2^{\tilde\rho_{r+s}}& y_2^{\tilde\sigma_1} & y_2^{\tilde\sigma_2} & \dots & y_2^{\tilde\sigma_{r+s}} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots &\vdots\\ y_{2s}^{\tilde\rho_1} & y_{2s}^{\tilde\rho_2} & \dots & y_{2s}^{\tilde\rho_{r+s}}& y_{2s}^{\tilde\sigma_1} & y_{2s}^{\tilde\sigma_2} & \dots & y_{2s}^{\tilde\sigma_{r+s}} \end{array} \right) \end{equation} and let us define the following polynomials \begin{equation} \mathcal S^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z})= \frac{\det \mathcal M^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z})}{\prod_{1\leq i<j\leq r}(z_i-z_j)^2\prod_{1\leq i<j\leq 2s}(y_i-y_j) \prod_{\substack{1\leq i \leq r\\ 1\leq j \leq 2s}}(z_i-y_j)}.
\end{equation} The divisibility of $\det \mathcal M^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z})$ by $\prod_{1\leq i<j\leq r}(z_i-z_j)^2 $ and by $\prod_{1\leq i<j\leq 2s}(y_i-y_j)$ is immediate. If we set $z_i=y_j$ for some $i,j$, then we subtract from the row corresponding to $y_j$ the two rows corresponding to $z_i$, getting a null row. This means that $\det \mathcal M^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z})$ is also divisible by \hbox{$(z_i-y_j)$}. Using the Laplace expansion along the first $r+s$ columns we can write $\mathcal S^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z})$ as a bilinear combination of Schur polynomials \begin{equation}\label{laplace-exp} \mathcal S^{(\tilde\rho,\tilde\sigma)}(r,s;{\bf y};{\bf z}) = \sum_{\substack{I\subset \{1,\dots 2s \}\\|I|=s}} (-1)^{\epsilon(I)}\frac{\displaystyle{ \prod_{i,j \in I \& i<j}(y_i-y_j)\prod_{i,j \in I^c \& i<j}(y_i-y_j)} }{\displaystyle{ \prod_{1\leq i<j \leq 2s}(y_i-y_j)}} S_{\rho(r+s)}({\bf z},{\bf y}_I)S_{\sigma(r+s)}({\bf z},{\bf y}_{I^c}), \end{equation} where $\rho(m)$ and $\sigma(m)$ are Young tableaux of length $m$, whose entries are $\rho(m)_i= \tilde \rho_i -i+1$, $\sigma(m)_i= \tilde \sigma_i -i+1$. In particular, notice that when $s=0$ then $\mathcal S^{(\tilde\rho,\tilde\sigma)}(r,0;{\bf z})$ factorizes as the product of two Schur polynomials.
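The divisibility argument can be checked explicitly in the smallest nontrivial case $r=s=1$, taking for instance $\tilde\rho=(0,1)$ and $\tilde\sigma=(0,2)$; the quotient turns out to be the constant polynomial $1$ (an illustrative \texttt{sympy} check):

```python
from sympy import symbols, Matrix, cancel

z, y1, y2 = symbols('z y1 y2')
# M^(rho~,sigma~)(1,1) for rho~ = (0,1), sigma~ = (0,2): one z-row per power set,
# two y-rows carrying both power sets
M = Matrix([
    [1, z,  0, 0    ],
    [0, 0,  1, z**2 ],
    [1, y1, 1, y1**2],
    [1, y2, 1, y2**2],
])
S = cancel(M.det() / ((y1 - y2) * (z - y1) * (z - y2)))
assert S == 1     # det M = (y1-y2)(z-y1)(z-y2) exactly
```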
Now let us introduce the following family of integer sequences \begin{equation} \tilde \lambda_i(r)=\lfloor \frac{3i-3+r}{2}\rfloor ,~~~~~~ \begin{array}{c} \tilde\lambda(0)=\{0,1,3,4,6,7,\dots \}\\ \tilde\lambda(1)=\{0,2,3,5,6,8,\dots \}\\ \tilde\lambda(2)=\{1,2,4,5,7,8,\dots \}\\ \cdots \end{array} \end{equation} Then we claim that \begin{align}\label{spectral1-3} \mathcal E^{-}_{2n+1}(k;{\bf y};{\bf z}) &=3^{-n^2+k(k-1)/2} \mathcal S^{(\tilde\lambda(0),\tilde\lambda(1))}(2n+1-k,k;{\bf y};{\bf z})\\[5pt]\label{spectral1-2} \mathcal E^{+}_{2n+1}(k;{\bf y};{\bf z}) &=3^{-n^2+k(k-1)/2} \left( \prod_{j=1}^{2n-k+1}z_j^{-1} \right) \mathcal S^{(\tilde\lambda(1),\tilde\lambda(2))}(2n+1-k,k;{\bf y};{\bf z})\\[5pt]\label{spectral1-1} \mathcal E^{e}_{2n}(k;{\bf y};{\bf z}) &=3^{-n(n-1)+k(k-1)/2} \mathcal S^{(\tilde\lambda(0),\tilde\lambda(1))}(2n-k,k;{\bf y};{\bf z})\\[5pt]\label{spectral1-4} \mathcal E^{\tilde e}_{2n}(k;{\bf y};{\bf z}) &=3^{-n(n-1)+k(k-1)/2} \left(\prod_{j=1}^{2n-k}z_j^{-1}\right) \mathcal S^{(\tilde\lambda(0),\tilde\lambda(2))}(2n-k,k;{\bf y};{\bf z}) \end{align} These formulas reduce to eqs.(\ref{case:k=0}) for $k=0$. Using the relations among inhomogeneous EFP with different parities eqs.(\ref{special-0empt},\ref{special-0empt2}) and the explicit form of the matrices $\mathcal M^{(\tilde\lambda(i),\tilde\lambda(j))}$ we see easily that eq.(\ref{spectral1-2}) follows from eq.(\ref{spectral1-1}), which in turn follows from eq.(\ref{spectral1-3}). Therefore it remains to prove only eq.(\ref{spectral1-3}) and eq.(\ref{spectral1-4}). The r.h.s. of eq.(\ref{spectral1-3}) and eq.(\ref{spectral1-4}) are polynomials in $z_i$ respectively of degree $2n+1$ and $2n-2$. 
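The sequences $\tilde\lambda(r)$, together with the partitions $\rho(m)_i=\tilde\rho_i-i+1$ they induce, are straightforward to tabulate (illustrative):

```python
def lam_tilde(r, n):
    # lambda~_i(r) = floor((3i-3+r)/2), i = 1, ..., n
    return [(3 * i - 3 + r) // 2 for i in range(1, n + 1)]

assert lam_tilde(0, 6) == [0, 1, 3, 4, 6, 7]
assert lam_tilde(1, 6) == [0, 2, 3, 5, 6, 8]
assert lam_tilde(2, 6) == [1, 2, 4, 5, 7, 8]

# for k=0 the induced partitions are exactly the staircases lambda(m,0), lambda(m,1)
partition = lambda seq: [e - i for i, e in enumerate(seq)]
assert partition(lam_tilde(0, 5)) == [(0 + i) // 2 for i in range(5)]
assert partition(lam_tilde(1, 5)) == [(1 + i) // 2 for i in range(5)]
```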
Moreover using eq.(\ref{recursion-schur}) and the form of $\mathcal S^{(\tilde\lambda(r_1),\tilde\lambda(r_2))}(m,k;{\bf y};{\bf z})$ expressed by eq.(\ref{laplace-exp}) we can easily obtain the recursion relation \begin{equation}\label{rec-cS} \begin{array}{c} $$\displaystyle{ \mathcal S^{(\tilde\lambda(r_1),\tilde\lambda(r_2))}(m,k;{\bf y};{\bf z})|_{z_i=q^{\pm } z_j} = } $$\\[5pt] $$\displaystyle{ (-q^{\mp}z_j)^{r_1+r_2}\prod_{\substack{\ell=1\\ \ell \neq i,j}}^{m}(z_\ell-q^{\mp}z_j)^2 \prod_{\alpha=1}^{2k}(y_\alpha-q^{\mp }z_j)\mathcal S^{(\tilde\lambda(r_1),\tilde\lambda(r_2))}(m-2,k;{\bf y};{\bf z} \setminus \{z_i,z_j \}).}$$ \end{array} \end{equation} In order to conclude, as explained at the end of Section \ref{rec-section-q-gen}, it remains to show that eqs.(\ref{spectral1-3},\ref{spectral1-4}) hold for $n=k$, i.e. we need to prove that \begin{equation}\label{initial-to-prove} \begin{array}{c} \mathcal S^{(\tilde\lambda(0),\tilde\lambda(1))}(k+1,k;{\bf y};{\bf z})= \prod_{1\leq i<j \leq k+1}(z_i^2+z_iz_j+z_j^2)\\ \mathcal S^{(\tilde\lambda(0),\tilde\lambda(2))}(k,k;{\bf y};{\bf z})= \prod_{i=1}^{k}z_i \prod_{1\leq i<j \leq k}(z_i^2+z_iz_j+z_j^2) . \end{array} \end{equation} We proceed by factor exhaustion. A preliminary remark is that both $\mathcal S^{(\tilde\lambda(0),\tilde\lambda(1))}(2n+1-k,k;{\bf y};{\bf z})$ and $\mathcal S^{(\tilde\lambda(0),\tilde\lambda(2))}(2n-k,k;{\bf y};{\bf z})$, as polynomials in $y_i$, are of degree $n-k$, and in particular they vanish as soon as $k>n$. Therefore using the recursion relation (\ref{rec-cS}) we conclude that \begin{equation} \mathcal S^{(\tilde\lambda(0),\tilde\lambda(1))}(k+1,k;{\bf y};{\bf z})|_{z_i=q^\pm z_j}= \mathcal S^{(\tilde\lambda(0),\tilde\lambda(2))}(k,k;{\bf y};{\bf z})|_{z_i=q^\pm z_j}=0 . \end{equation} Since their degrees as polynomials in each $z_i$ are respectively $2k$ and $2k-1$, and since $\mathcal S^{(\tilde\lambda(0),\tilde\lambda(2))}(k,k;{\bf y};{\bf z})$ vanishes also at $z_i=0$, this means that we have proven eqs.(\ref{initial-to-prove}) up to a numerical constant.
Such a constant will be fixed to be equal to $1$ in the following section, where we shall compute explicitly the specialization of the inhomogeneous EFP for $z_i=t^{i-1}$ and $y_j = t^{N-k+j-1}$. \subsection{Homogeneous limit}\label{hom-sect} In this section we arrive at last at the computation of the homogeneous (pseudo) EFP using eqs.(\ref{spectral1-3}-\ref{spectral1-4}). We only need one last intermediate step: setting ${\bf z}= {\bf z}(t)$ and ${\bf y} = t^{N-k} {\bf y}(t)$ with $$z(t)_i=t^{i-1} ~~~~~ \textrm{and} ~~~~~y(t)_j = t^{j-1}.$$ The matrices $\mathcal M^{(\tilde\lambda(r),\tilde\lambda(s))}(m,k;t^m{\bf y}(t);{\bf z}(t))$ with $r,s$ and $m$ as in eqs.(\ref{spectral1-3}-\ref{spectral1-4}) have a noticeable column structure. Let us look at a concrete example \begin{equation} \mathcal M^{(\tilde\lambda(0),\tilde\lambda(1))}(3,2;t^3{\bf y}(t);{\bf z}(t)) = \left( \begin{array}{cccccccccc} t^{0\cdot 0} & t^{1\cdot 0} & t^{3\cdot 0} & t^{4\cdot 0} & t^{6\cdot 0} & 0 & 0 & 0 & 0 & 0 \\ t^{0\cdot 1} & t^{1\cdot 1} & t^{3\cdot 1} & t^{4\cdot 1} & t^{6\cdot 1} & 0 & 0 & 0 & 0 & 0 \\ t^{0\cdot 2} & t^{1\cdot 2} & t^{3\cdot 2} & t^{4\cdot 2} & t^{6\cdot 2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & t^{0\cdot 0} & t^{2\cdot 0} & t^{3\cdot 0} & t^{5\cdot 0} & t^{6\cdot 0}\\ 0 & 0 & 0 & 0 & 0 & t^{0\cdot 1} & t^{2\cdot 1} & t^{3\cdot 1} & t^{5\cdot 1} & t^{6\cdot 1}\\ 0 & 0 & 0 & 0 & 0 & t^{0\cdot 2} & t^{2\cdot 2} & t^{3\cdot 2} & t^{5\cdot 2} & t^{6\cdot 2}\\ t^{0\cdot 3} & t^{1\cdot 3} & t^{3\cdot 3} & t^{4\cdot 3} & t^{6\cdot 3} & t^{0\cdot 3} & t^{2\cdot 3} & t^{3\cdot 3} & t^{5\cdot 3} & t^{6\cdot 3}\\ t^{0\cdot 4} & t^{1\cdot 4} & t^{3\cdot 4} & t^{4\cdot 4} & t^{6\cdot 4} & t^{0\cdot 4} & t^{2\cdot 4} & t^{3\cdot 4} & t^{5\cdot 4} & t^{6\cdot 4}\\ t^{0\cdot 5} & t^{1\cdot 5} & t^{3\cdot 5} & t^{4\cdot 5} & t^{6\cdot 5} & t^{0\cdot 5} & t^{2\cdot 5} & t^{3\cdot 5} & t^{5\cdot 5} & t^{6\cdot 5}\\ t^{0\cdot 6} & t^{1\cdot 6} & t^{3\cdot 6}
& t^{4\cdot 6} & t^{6\cdot 6} & t^{0\cdot 6} & t^{2\cdot 6} & t^{3\cdot 6} & t^{5\cdot 6} & t^{6\cdot 6} \end{array} \right) \end{equation} The entries of the $j$-th column (apart from the zeros) are consecutive powers of some $v_j$, where $v_j$ is itself a power of $t$ which depends on the column index $j$. In the preceding example ${\bf v}= \{1,t,t^3,t^4,t^6,1,t^2,t^3,t^5,t^6\}$. Moreover, some $v_j$ appear twice, once in the first half of the columns and once in the second half (in the example $1,t^3,t^6$), while the remaining $v_j$ are of the form $a_1 \lambda^i$ in the first half of the columns and $a_2 \lambda^i$ in the second half (in the example $\lambda =t^3, a_1=t, a_2=t^2$). By these considerations we are led to introduce the following families of $2(\ell+r+s)\times 2(\ell+r+s)$ matrices $\mathcal G^{(\ell,r,s)}({\bf v};\lambda,a_1,a_2)$, made of six blocks of rectangular matrices as follows \begin{equation} \mathcal G^{(\ell,r,s)}({\bf v};\lambda,a_1,a_2)=\left( \begin{array}{cccc} D_{\ell+r;\ell}^{(0)}({\bf v}) &D_{\ell+r;r+s}^{(0)}(a_1\vec \lambda) & {\bf 0}& {\bf 0}\\[7pt] {\bf 0} & {\bf 0} & D_{\ell+r;\ell}^{(0)}({\bf v}) &D_{\ell+r;r+s}^{(0)}(a_2 \vec \lambda)\\[7pt] D_{2s;\ell}^{(\ell+r)}({\bf v}) &D_{2s;r+s}^{(\ell+r)}( a_1 \vec \lambda) & D_{2s;\ell}^{(\ell+r)}({\bf v}) &D_{2s;r+s}^{(\ell+r)}(a_2\vec \lambda) \end{array} \right), \end{equation} where ${\bf v}=\{v_1,\dots,v_\ell\}$, $a_i \vec \lambda=\{a_i,a_i\lambda, \dots, a_i\lambda^{r+s-1}\}$ and the blocks consist of the following rectangular matrices \begin{equation} D_{m;\ell}^{(j)}({\bf v})=\left( \begin{array}{cccc} v_1^j & v_2^j & \dots & v^j_{\ell} \\ v_1^{j+1} & v_2^{j+1} & \dots & v^{j+1}_{\ell} \\ \vdots & \vdots & \ddots & \vdots \\ v_1^{j+m-1} & v_2^{j+m-1} & \dots & v^{j+m-1}_{\ell} \end{array} \right).
\end{equation} Apart from a trivial reordering of the columns we have \begin{equation}\begin{array}{c} \mathcal M^{(\tilde\lambda(0),\tilde\lambda(1))}(2n-k,k;t^{2n-k}{\bf y}(t);{\bf z}(t))= \mathcal G^{(n,n-k,k)}(\{t^{3i-3}\};t^3,t,t^2)\\[5pt] \mathcal M^{(\tilde\lambda(0),\tilde\lambda(2))}(2n-k,k;t^{2n-k}{\bf y}(t);{\bf z}(t))= \mathcal G^{(n,n-k,k)}(\{t^{3i-2}\};t^3,1,t^2)\\[5pt] \mathcal M^{(\tilde\lambda(0),\tilde\lambda(1))}(2n+1-k,k;t^{2n+1-k}{\bf y}(t);{\bf z}(t))= \mathcal G^{(n+1,n-k,k)}(\{t^{3i-3}\};t^3,t,t^2)\\[5pt] \mathcal M^{(\tilde\lambda(1),\tilde\lambda(2))}(2n+1-k,k;t^{2n+1-k}{\bf y}(t);{\bf z}(t))= \mathcal G^{(n,n-k+1,k)}(\{t^{3i-1}\};t^3,1,t) \end{array}. \end{equation} Therefore, by calling \begin{equation} \mathcal E^{\mu}_{N}(k;t):=\mathcal E^{\mu}_{N}(k;t^{N-k}{\bf y}(t);{\bf z}(t)) \end{equation} we have \begin{equation}\label{t-spectral} \begin{array}{c} \mathcal E^{e}_{2n}(k;t)= 3^{-n(n-1)+k(k-1)/2}\frac{\det \left( \mathcal G^{(n,n-k,k)}(\{t^{3i-3}\};t^3,t,t^2) \right)}{\prod_{1\leq i< j \leq 2n-k}(t^{j-1}-t^{i-1}) \prod_{1\leq i< j \leq 2n+k}(t^{j-1}-t^{i-1})}\\[8pt] \mathcal E^{\tilde e}_{2n}(k;t)= 3^{-n(n-1)+k(k-1)/2} \frac{\det \left( \mathcal G^{(n,n-k,k)}(\{t^{3i-2}\};t^3,1,t^2) \right)}{\prod_{1\leq i< j \leq 2n-k}(t^{j-1}-t^{i-1}) \prod_{1\leq i< j \leq 2n+k}(t^{j-1}-t^{i-1})}\\[8pt] \mathcal E^{-}_{2n+1}(k;t)= 3^{-n^2+k(k-1)/2} \frac{\det \left( \mathcal G^{(n+1,n-k,k)}(\{t^{3i-3}\};t^3,t,t^2) \right)}{\prod_{1\leq i< j \leq 2n-k+1}(t^{j-1}-t^{i-1}) \prod_{1\leq i< j \leq 2n+k+1}(t^{j-1}-t^{i-1})}\\[8pt] \mathcal E^{+}_{2n+1}(k;t)= 3^{-n^2+k(k-1)/2} \frac{\det \left( \mathcal G^{(n,n-k+1,k)}(\{t^{3i-1}\};t^3,1,t) \right)}{\prod_{1\leq i< j \leq 2n-k+1}(t^{j-1}-t^{i-1}) \prod_{1\leq i< j \leq 2n+k+1}(t^{j-1}-t^{i-1})} \end{array} \end{equation} The remarkable fact about the matrices $\mathcal G^{(\ell,r,s)}({\bf v};\lambda,a_1,a_2)$ is that their determinants factorize nicely.
For $r\geq 0$ we have \begin{equation}\label{factor-glob0} \det \mathcal G^{(\ell,r,s)}({\bf v};\lambda,a_1,a_2)= \prod_{1\leq i<j \leq \ell}(v_i-v_j)^2 \prod_{\alpha=1,2}\prod_{\substack{1\leq i\leq \ell\\ 1\leq j \leq r+s}}(v_i-\lambda^{j-1}a_\alpha) \det \mathcal G^{(0,r,s)}(\lambda,a_1,a_2), \end{equation} with \begin{equation}\label{fact-det0-0} \det \mathcal G^{(0,r,s)}(\lambda;a_1;a_2) = (a_1a_2)^{\binom{r+s}{2}}\prod_{1\leq i,j\leq s}(\lambda^{j-1}a_1-\lambda^{i-1}a_2) ~\mathcal D^{(r,s)}(\lambda), \end{equation} and \begin{equation}\label{Dlambda-0} \mathcal D^{(r,s)}(\lambda)= \frac{(-1)^{s(r+s)}\lambda^{s\left(\binom{r}{2}-\binom{r+s}{2}\right) }\prod_{1\leq i<j\leq r} (\lambda^{j-1}-\lambda^{i-1})\prod_{1\leq i<j\leq r+2s}(\lambda^{j-1}-\lambda^{i-1})}{\prod_{1\leq i,j\leq s}(\lambda^{j+s-1}-\lambda^{i-1})}. \end{equation} These facts are proved in all detail in Appendix \ref{fact-det}. Before proceeding to the computation of the r.h.s. of eqs.(\ref{t-spectral}), we come back for a moment to the argument we interrupted at the end of Section \ref{combinatorial-pol}. Eqs.(\ref{t-spectral}) have been proven up to a constant independent of the difference $n-k$. To show that the constant is $1$, it is enough to check that the equations for $\mathcal E^{\tilde e}_{2n}(k;t)$ and for $\mathcal E^{-}_{2n+1}(k;t)$ hold true in the case $n=k$. For the l.h.s. we use eqs.(\ref{initial-value}) with $q=e^{2\pi i/3}$ and $z_i=t^{i-1}$, while for the r.h.s. we use eqs.(\ref{factor-glob0}-\ref{Dlambda-0}). Rather than directly comparing the two sides of the equations, it is more convenient to compute the double ratio.
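Before using them, eqs.(\ref{factor-glob0}-\ref{Dlambda-0}) can be spot-checked numerically. The sketch below (names are ours) builds $\mathcal G^{(1,1,1)}$ from its blocks and compares the absolute values of the two sides, so as to stay agnostic about overall sign conventions tied to row/column ordering:

```python
import numpy as np
from math import comb

def D_block(m, j, vals):
    # D_{m;l}^{(j)}: rows are consecutive powers v^j, ..., v^(j+m-1)
    return np.array([[v ** (j + i) for v in vals] for i in range(m)])

def G(l, r, s, v, lam, a1, a2):
    la1 = [a1 * lam ** i for i in range(r + s)]
    la2 = [a2 * lam ** i for i in range(r + s)]
    top = np.hstack([D_block(l + r, 0, v), D_block(l + r, 0, la1),
                     np.zeros((l + r, l + r + s))])
    mid = np.hstack([np.zeros((l + r, l + r + s)),
                     D_block(l + r, 0, v), D_block(l + r, 0, la2)])
    bot = np.hstack([D_block(2 * s, l + r, v), D_block(2 * s, l + r, la1),
                     D_block(2 * s, l + r, v), D_block(2 * s, l + r, la2)])
    return np.vstack([top, mid, bot])

def prod(factors):
    return float(np.prod(factors)) if factors else 1.0

def predicted(l, r, s, v, lam, a1, a2):
    pref = prod([(v[i] - v[j]) ** 2 for i in range(l) for j in range(i + 1, l)])
    pref *= prod([vi - lam ** j * a for a in (a1, a2) for vi in v for j in range(r + s)])
    D = ((-1) ** (s * (r + s)) * lam ** (s * (comb(r, 2) - comb(r + s, 2)))
         * prod([lam ** (j - 1) - lam ** (i - 1)
                 for i in range(1, r + 1) for j in range(i + 1, r + 1)])
         * prod([lam ** (j - 1) - lam ** (i - 1)
                 for i in range(1, r + 2 * s + 1) for j in range(i + 1, r + 2 * s + 1)])
         / prod([lam ** (j + s - 1) - lam ** (i - 1)
                 for i in range(1, s + 1) for j in range(1, s + 1)]))
    G0 = ((a1 * a2) ** comb(r + s, 2)
          * prod([lam ** (j - 1) * a1 - lam ** (i - 1) * a2
                  for i in range(1, s + 1) for j in range(1, s + 1)])
          * D)
    return pref * G0

lhs = np.linalg.det(G(1, 1, 1, [2.0], 7.0, 3.0, 5.0))
rhs = predicted(1, 1, 1, [2.0], 7.0, 3.0, 5.0)
assert np.isclose(abs(lhs), abs(rhs), rtol=1e-8)
```

For $\ell\geq 2$ the ${\bf v}$-dependence enters only through the squared Vandermonde $\prod_{i<j}(v_i-v_j)^2$ and the mixed factors $(v_i-\lambda^{j-1}a_\alpha)$, which is what makes the ratios below independent of ${\bf v}$.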
A tedious but straightforward computation using Proposition \ref{fact-det0-1} shows that for both sides we have \begin{equation} \begin{array}{c} \left(\frac{\mathcal E^{-}_{2k+3}(k+1;t)}{\mathcal E^{-}_{2k+1}(k;t)}\right)/\left( \frac{\mathcal E^{-}_{2k+1}(k;t)}{\mathcal E^{-}_{2k-1}(k-1;t)}\right) = 3^{-1}t^{2k}\frac{t^{3(k+1)}-1}{t^{k+1}-1} \\ \left(\frac{\mathcal E^{\tilde e}_{2k+2}(k+1;t)}{\mathcal E^{\tilde e}_{2k}(k;t)}\right)/\left( \frac{\mathcal E^{\tilde e}_{2k}(k;t)}{\mathcal E^{\tilde e}_{2k-2}(k-1;t)}\right) = 3^{-1}t^{2(k-1)}\frac{t^{3k}-1}{t^{k}-1} \end{array} \end{equation} which, combined with the direct verification for $n=k=1,2$, gives the desired result. \vskip .5cm \noindent {\bf Proofs of the conjectures} \noindent Taking the limit $t\rightarrow 1$ directly in eqs.(\ref{t-spectral}) is not easy. Instead we consider the ratios $\frac{\mathcal E^{\mu}_{N}(k-1;t)}{\mathcal E^{\mu}_{N}(k;t)}$, which are easier to compute, and from them recover eqs.(\ref{recurE--},\ref{EFP++},\ref{EFPee*},\ref{EFPee}). Let us explain the computation for the case $\frac{\mathcal E^{e}_{2n}(k-1;t)}{\mathcal E^{e}_{2n}(k;t)}$, the other cases being dealt with in the same manner.
Using the first of eqs.(\ref{t-spectral}) we get \begin{equation}\label{ratio-t} \frac{\mathcal E^{e}_{2n}(k-1;t)}{\mathcal E^{e}_{2n}(k;t)} = 3^{1-k} \frac{\prod_{i=1}^{2n+k-1}(t^{2n+k-1}-t^{i-1})}{\prod_{i=1}^{2n-k}(t^{2n-k}-t^{i-1})} \times\frac{\det \left( \mathcal G^{(n,n-k+1,k-1)}(\{t^{3i-3}\};t^3,t,t^2) \right)}{\det \left( \mathcal G^{(n,n-k,k)}(\{t^{3i-3}\};t^3,t,t^2) \right)} \end{equation} Then from eqs.(\ref{factor-glob0}-\ref{Dlambda-0}) we find that for generic values of ${\bf v}$, $\lambda$ and $a_i$ the ratio $\frac{\det \mathcal G^{(\ell,r,s)}({\bf v};\lambda,a_1,a_2)}{\det \mathcal G^{(\ell,r+1,s-1)}({\bf v};\lambda,a_1,a_2)}$ does not depend on ${\bf v}$ and is given by a very simple formula \begin{equation} \frac{\det \mathcal G^{(\ell,r,s)}({\bf v};\lambda,a_1,a_2)}{\det \mathcal G^{(\ell,r+1,s-1)}({\bf v};\lambda,a_1,a_2)} =\lambda^{(s-1)(3s-2)/2}\prod_{j=-(s-1)}^{s-1}(\lambda^ja_1-a_2) \frac{\mathcal D^{(r,s)}(\lambda)}{\mathcal D^{(r-1,s+1)}(\lambda)} \end{equation} \begin{equation} \frac{\mathcal D^{(r,s)}(\lambda)}{\mathcal D^{(r-1,s+1)}(\lambda)} =(-1)^{r+s} \frac{\prod_{i=1}^{r+2s-1}(\lambda^{i} -1)\prod_{i=1}^{s-1}(\lambda^i-1)^2}{\prod_{i=1}^r(\lambda^{i}-1) \prod_{i=1}^{2s-1}(\lambda^{i}-1) \prod_{i=1}^{2s-2}(\lambda^{i}-1)} \end{equation} At this point we make use of these equations in eq.(\ref{ratio-t}) and substitute $\lambda=t^3$, $a_1=t$ and $a_2=t^2$. Repeating the same steps with the proper modifications for the other EFPs we finally obtain \begin{equation}\label{t-conj} \begin{split} \frac{\mathcal E^{e}_{2n}(k-1;t)}{\mathcal E^{e}_{2n}(k;t)}&= t^{\alpha_e(n,k)} \left(\frac{[3]_t}{3}\right)^{k-1} \frac{[2n+k-1]_{t}![n-k]_{t^{3}}![2k-1]_{t^{3}}![2k-2]_{t^{3}}!} {[2n-k]_t![n+k-1]_{t^{3}}![k-1]_{t^{3}}! [3k-2]_{t}!} \\[5pt] \frac{\mathcal E^{\tilde e}_{2n}(k-1;t)}{\mathcal E^{\tilde e}_{2n}(k;t)}&= t^{\alpha_{\tilde e}(n,k)} (-q)\left(\frac{[3]_t}{3}\right)^{k-1} \frac{[2n+k-1]_{t}![n-k]_{t^{3}}![2k-1]_{t^{3}}![2k-2]_{t^{3}}!
}{[2n-k]_t![n+k-1]_{t^{3}}![k-1]_{t^{3}}![3k-3]_{t}![3k-1]_{t}} \\[5pt] \frac{\mathcal E^{-}_{2n+1}(k-1;t)}{\mathcal E^{-}_{2n+1}(k;t)}&= t^{\alpha_-(n,k)}\left(\frac{[3]_t}{3}\right)^{k-1} \frac{[2n+k]_{t}![n-k]_{t^{3}}![2k-1]_{t^{3}}![2k-2]_{t^{3}}! }{[2n-k+1]_t![n+k-1]_{t^{3}}![k-1]_{t^{3}}![3k-2]_{t}!} \\[5pt] \frac{\mathcal E^{+}_{2n+1}(k-1;t)}{\mathcal E^{+}_{2n+1}(k;t)}&= t^{\alpha_+(n,k)}\left(\frac{[3]_t}{3}\right)^{k-1} \frac{[2n+k]_{t}![n-k+1]_{t^{3}}![2k-1]_{t^{3}}![2k-2]_{t^{3}}! }{[2n-k+1]_t![n+k]_{t^{3}}![k-1]_{t^{3}}![3k-2]_{t}!} \end{split} \end{equation} where we have introduced the usual $t$-numbers and $t$-factorials $$ [n]_t!=\prod_{i=1}^n[i]_t ~~~~~\textrm{and}~~~~~ [i]_t=\frac{t^i-1}{t-1}. $$ The powers of $t$ in the r.h.s. of eqs.(\ref{t-conj}) do not concern us because we are actually interested in the specialization $t=1$, which at this point is immediate and reproduces the conjectured formulas (\ref{recurE--},\ref{EFP++},\ref{EFPee*},\ref{EFPee}). \section*{Acknowledgments} This work has been supported by the CNRS through a ``Chaire d'excellence''.
\section{Introduction} In the standard model of particle physics, neutrinos are treated as elementary massless particles. However, it has been conclusively shown that neutrino flavour (i.e. electronic, muonic, tauonic) can change with time~\citep{Fukuda:1998,Ahmed:2004}, a phenomenon known as \emph{flavour oscillations}. For this to be possible, at least two neutrinos must possess a non-zero mass, therefore pointing to physics beyond the standard model. Since oscillation experiments measure the mass-squared splittings between the three mass eigenstates, they can only provide a lower bound on the absolute mass scale, and hence alone cannot determine the neutrino mass hierarchy~\citep{Qian:2015}. On the other hand, the presence of massive neutrinos has profound implications for the formation and evolution of structures in the Universe~\citep{Lesgourgues:2006}. At early times, in particular at recombination, neutrinos are ultra-relativistic and so their masses do not affect the primary CMB. At redshifts of $\sim200(m_\nu/0.1 \, \mathrm{eV})$ neutrinos become non-relativistic; however, their still large thermal velocities prevent them from clustering strongly, producing a characteristic modification to the matter power spectrum. Large-scale structure observables are therefore sensitive to the sum of neutrino masses~\citep{Marulli:2011}, with measurable effects, for instance, on the abundance of massive galaxy clusters~\citep[e.g.][]{Costanzi:2013} and two-point shear statistics~\citep[e.g.,][]{Liu:2018}. Upcoming wide-field galaxy surveys will map the large-scale structure of the Universe to an unprecedented volume and accuracy~\citep{Laureijs:2011,LSST:2012,Green:2012,Levi:2013}, thus challenging our ability to predict cosmological summary statistics with the required small uncertainties over the entire range of relevant scales.
In particular, percent-level knowledge of the matter power spectrum in the non-linear regime is necessary to take full advantage of future cosmic shear measurements~\citep{Taylor:2018}. At present, however, all known \mbox{(semi-)analytical} methods incorporating the non-linear effects of massive neutrinos on the matter power spectrum lack sufficient accuracy to be employed in future cosmological analyses aimed at stringent and unbiased constraints of the absolute mass scale~\citep{Bird:2012,Blas:2014,Mead:2016,Lawrence:2017}. In this paper we demonstrate how the halo model reaction framework of~\citet{Cataneo:2019} can predict the non-linear total matter power spectrum in the presence of massive neutrinos to the accuracy requirements imposed by the next generation of cosmological surveys. Sec.~\ref{sec:methods} describes our approach and the cosmological simulations used for its validation. Sec.~\ref{sec:results} presents our results, and in Sec.~\ref{sec:discussion} we discuss their implications and future applications. Our baseline flat $\Lambda$CDM cosmology has total matter density $\Omega_{\rm m} = 0.2905$, baryon density $\Omega_{\rm b} = 0.0473$, reduced Hubble constant $h=0.6898$, scalar spectral index $n_{\rm s} = 0.969$ and amplitude of scalar fluctuations $A_{\rm s} = 2.422 \times 10^{-9}$ at the pivot scale $k_0 = 0.002 \, {\rm Mpc}^{-1}$. In massive neutrino cosmologies we fix all parameters to their baseline values, and vary the cold dark matter (CDM) density as $\Omega_{\rm c}=\Omega_{\rm m}-\Omega_{\rm b}-\Omega_\nu$, with $\Omega_\nu$ denoting the neutrino density. For our linear calculations we use the Boltzmann code {\sc camb}\footnote{\url{https://camb.info}}~\citep{Lewis:2000}. 
\section{Methods}\label{sec:methods} \subsection{Halo model reactions with massive neutrinos}\label{sec:reactions} The implementation of massive neutrinos in the halo model (HM) has been previously studied in~\citet{Abazajian:2005} and~\citet{Massara:2014}, with the latter finding inaccuracies as large as 20-30\% in the predicted total matter non-linear power spectrum when compared to $N$-body simulations. To reduce these discrepancies down to a few percent, \citet{Massara:2014} proposed the use of massive-to-massless neutrino HM power spectrum ratios. Here, we follow a similar strategy by extending the recently developed \emph{halo model reaction} framework~\citep[][also see~\citet{Mead:2017} for its first applications]{Cataneo:2019} to include the effect of massive neutrinos. As we shall see in Sec.~\ref{sec:results}, this approach improves the halo model performance by more than one order of magnitude, therefore reaching the target accuracy set by the next generation of galaxy surveys, albeit neglecting baryonic feedback~\citep{Chisari:2019}. The total matter power spectrum in the presence of massive neutrinos is given by the weighted sum \begin{align}\label{eq:totPk} P^{\mathrm{(m)}}(k) = (1-f_\nu)^{2} P^{\mathrm{(cb)}}(k)+2 f_\nu(1-f_\nu) P^{(\mathrm{cb}\nu)}(k)+f_\nu^{2} P^{(\nu)}(k) \, , \end{align} where $f_\nu = \Omega_\nu/\Omega_{\mathrm{m}}$, $P^{\mathrm{(cb)}}$ is the auto power spectrum of CDM+baryons\footnote{In this work we treat baryons as cold dark matter, and only account for their early-time non-gravitational interaction through the baryon acoustic oscillations imprinted on the linear power spectrum~\citep[cf.][]{McCarthy:2018}. }, $P^{(\nu)}$ is the neutrino auto power spectrum, and $P^{(\mathrm{cb}\nu)}$ is the cross power spectrum of the neutrinos and the two other matter components\footnote{In general, we drop the dependence on redshift of the power spectrum and related quantities, unless required to avoid confusion.}. 
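In sketch form, Eq.~\eqref{eq:totPk} is a pointwise weighted combination of the component spectra on a common $k$-grid. The snippet below is a minimal illustration with placeholder values, not actual Boltzmann-code output:

```python
# Minimal sketch of the weighted sum in Eq. (eq:totPk): total matter power
# spectrum from the CDM+baryon auto, cross, and neutrino auto spectra.
# The input lists are placeholders for spectra tabulated on a common k-grid.

def total_matter_pk(P_cb, P_cbnu, P_nu, f_nu):
    """Combine component spectra with weights set by f_nu = Omega_nu/Omega_m."""
    w = 1.0 - f_nu
    return [w ** 2 * pcb + 2.0 * f_nu * w * px + f_nu ** 2 * pn
            for pcb, px, pn in zip(P_cb, P_cbnu, P_nu)]

# sanity check: the three weights sum to one, so equal components pass through
P_total = total_matter_pk([100.0], [100.0], [100.0], f_nu=0.1)
```

Since $(1-f_\nu)^2 + 2f_\nu(1-f_\nu) + f_\nu^2 = 1$, the combination leaves identical component spectra unchanged, and reduces to the CDM+baryon spectrum in the massless limit $f_\nu = 0$.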
In our halo model predictions we approximate neutrino clustering as purely linear, allowing us to replace the neutrino non-linear auto power spectrum with its linear counterpart, $P_{\mathrm{L}}^{(\nu)}$, and thus rewrite the cross power spectrum as\footnote{This approximation is motivated by the two following arguments: (i) the cross-correlation coefficient between the neutrino and CDM fields is large on all relevant scales~\citep{Inman:2015}; and (ii) although using the linear neutrino power spectrum introduces substantial errors in the cross power spectrum on small scales~\citep{Massara:2014}, due to $P^{(\nu)} \ll P^{(\mathrm{cb})}$ and the suppression factor $2 f_\nu(1-f_\nu)$ preceding $P^{(\mathrm{cb}\nu)}$ in Eq.~\eqref{eq:totPk}, the overall impact on the total matter power spectrum becomes negligible.}~\citep{Agarwal:2011,Ali-Haimoud:2013} \begin{eqnarray} P_{\mathrm{HM}}^{(\mathrm{cb}\nu)}(k) \approx \sqrt{P_{\mathrm{HM}}^{\mathrm{(cb)}}(k) P_{\mathrm{L}}^{(\nu)}(k)} \, . \end{eqnarray} The CDM+baryons auto power spectrum is then divided into two-halo and one-halo contributions~\citep[see, e.g.,][]{Cooray:2002}, \begin{eqnarray}\label{eq:P_HM_cb} P_{\mathrm{HM}}^{\mathrm{(cb)}}(k) = P_{\mathrm{L}}^{\mathrm{(cb)}}(k) + P_{1 \mathrm{h}}^{\mathrm{(cb)}}(k) \, , \end{eqnarray} where we neglect the two-halo integral pre-factor involving the linear halo bias~\cite[see][for details]{Cataneo:2019}\footnote{This integral introduces corrections $\gtrsim 1\%$ to the two-halo term only for $k \gtrsim 0.5 \, h \, {\rm Mpc}^{-1}$\citep[see, e.g.,][]{Massara:2014}. On these scales, however, the leading contribution to the power spectrum comes from the one-halo term instead. Moreover, in this work we take the ratio of halo-model predictions, and our findings presented in Sec.~\ref{sec:fits} suggest that ignoring the two-halo correction can introduce errors no larger than 0.3\%.}. 
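The cross-spectrum approximation above (a geometric mean, i.e. a cross-correlation coefficient of exactly one on all scales) and the two-term split of Eq.~\eqref{eq:P_HM_cb} can be sketched as follows, again with placeholder inputs:

```python
import math

# Sketch of the cross-spectrum approximation and of Eq. (eq:P_HM_cb).
# Assumes spectra tabulated on a common k-grid; the values are placeholders.

def cross_pk(P_cb_hm, P_nu_lin):
    """P_HM^(cb nu) ~ sqrt(P_HM^(cb) * P_L^(nu)): unit cross-correlation
    coefficient between the cb and neutrino fields."""
    return [math.sqrt(a * b) for a, b in zip(P_cb_hm, P_nu_lin)]

def halo_model_cb(P_cb_lin, P_1h):
    """Two-halo term approximated by the linear cb spectrum, plus one-halo."""
    return [pl + p1 for pl, p1 in zip(P_cb_lin, P_1h)]
```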
In the \emph{reaction} approach described in \citet{Cataneo:2019}, we must now define a \emph{pseudo} massive neutrino cosmology, which is a flat and massless neutrino $\Lambda$CDM cosmology whose linear power spectrum is identical to the total linear matter power spectrum of the \emph{real} massive neutrino cosmology at a chosen final redshift, $z_{\mathrm{f}}$, that is \begin{eqnarray}\label{eq:P_L_pseudo} P_{\mathrm{L}}^{\mathrm{pseudo}}\left(k, z_{\mathrm{f}}\right)=P_{\mathrm{L}}^{\mathrm{(m)}}\left(k, z_{\mathrm{f}}\right) \, . \end{eqnarray} Owing to the different linear growth in the two cosmologies, $P_{\mathrm{L}}^{\mathrm{pseudo}}$ and $P_{\mathrm{L}}^{\mathrm{(m)}}$ can differ substantially for $z > z_{\mathrm{f}}$. In the halo model language, the ratio of the \emph{real} to \emph{pseudo} non-linear total matter power spectra, i.e. the \emph{reaction}, takes the form \begin{align}\label{eq:react} \mathcal{R}(k)=\frac{(1-f_\nu)^{2} P_{\mathrm{HM}}^{\mathrm{(cb)}}(k)+2 f_\nu(1-f_\nu) P_{\mathrm{HM}}^{(\mathrm{cb}\nu)}(k)+f_\nu^{2} P_{\mathrm{L}}^{(\nu)}(k)}{P_{\mathrm{HM}}^{\mathrm{pseudo}}(k)} \, , \end{align} with \begin{align} P_{\mathrm{HM}}^{\mathrm{pseudo}}(k) = P_{\mathrm{L}}^{\mathrm{(m)}}(k)+P_{\mathrm{1h} }^{\mathrm{pseudo}}(k) \, . 
\end{align} For a mass-dependent and spherically symmetric halo profile with Fourier transform $u(k, M)$, the one-halo term is given by the integral \begin{eqnarray}\label{eq:P1h} P_{1 \mathrm{h}}(k)=\int \mathrm{d} \ln M \, n(M) \left(\frac{M}{\bar{\rho}}\right)^{2}\left|u\left(k, M \right)\right|^{2} \, , \end{eqnarray} where \begin{eqnarray}\label{eq:hmf} n(M) \equiv \frac{\mathrm{d} n}{\mathrm{d} \ln M}=\frac{\bar{\rho}}{M} \left[ \nu f(\nu) \right] \frac{\mathrm{d} \ln \nu}{\mathrm{d} \ln M} \end{eqnarray} is the virial halo mass function, and we use the Sheth-Tormen multiplicity function~\citep{Sheth:2002} \begin{eqnarray}\label{eq:ST} \nu f(\nu)=A \sqrt{\frac{2}{\pi} q \nu^{2}}\left[1+\left(q \nu^{2}\right)^{-p}\right] \exp \left[-q \nu^{2} / 2\right] \, , \end{eqnarray} with $\{A,q,p\} = \{0.3292,0.7665,0.2488\}$~\citep{Despali:2016}. In Eqs.~\eqref{eq:hmf} and~\eqref{eq:ST} the peak height $\nu(M,z) \equiv \delta_{\mathrm{coll}}(z)/\sigma(M,z)$, where $\delta_{\mathrm{coll}}$ is the redshift-dependent spherical collapse threshold, and \begin{eqnarray} \sigma^{2}(R, z)=\int \frac{\mathrm{d}^{3} k}{(2 \pi)^{3}}|\tilde{W}(k R)|^{2} P_{\mathrm{L}}(k, z) \, . \end{eqnarray} Here, $R = (3M/4\pi \bar\rho)^{1/3}$, and $\tilde{W}$ denotes the Fourier transform of the top-hat filter. For the halo profile in Eq.~\eqref{eq:P1h} we assume the Navarro-Frenk-White (NFW) profile~\citep{Navarro:1996} truncated at its virial radius $R_{\mathrm{vir}} = (3M/4\pi \bar\rho\Delta_{\mathrm{vir}})^{1/3}$, where $\Delta_{\mathrm{vir}}$ is the redshift- and cosmology-dependent virial spherical overdensity~\citep[see, e.g.,][]{Cataneo:2019}. 
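The Sheth-Tormen multiplicity of Eq.~\eqref{eq:ST} is straightforward to evaluate. A sketch with the parameter values quoted above; the peak height $\nu$, which in full calculations requires $\sigma(M,z)$ from the linear power spectrum, is left as an input here:

```python
import math

# Sketch of the Sheth-Tormen multiplicity function, Eq. (eq:ST), with the
# Despali et al. (2016) parameters quoted in the text. The peak height nu is
# an input; computing nu(M, z) requires sigma(M, z) from the linear spectrum.

A_ST, Q_ST, P_ST = 0.3292, 0.7665, 0.2488

def st_multiplicity(nu, A=A_ST, q=Q_ST, p=P_ST):
    """nu f(nu), the multiplicity entering the mass function Eq. (eq:hmf)."""
    qnu2 = q * nu ** 2
    return (A * math.sqrt(2.0 * qnu2 / math.pi)
            * (1.0 + qnu2 ** (-p)) * math.exp(-qnu2 / 2.0))

# the multiplicity peaks near nu ~ 1 and is exponentially cut off at high nu,
# i.e. rare, very massive halos are strongly suppressed
values = [st_multiplicity(i / 10.0) for i in range(1, 51)]
```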
In our NFW profile calculations, we approximate the relation between the halo virial concentration and mass with the power law \begin{eqnarray}\label{eq:Bullock} c(M, z)=\frac{c_{0}}{1+z}\left[\frac{M}{M_{*}(z)}\right]^{-\alpha} \, , \end{eqnarray} where the characteristic mass, $M_{*}$, satisfies $\nu(M_{*},z) = 1$, and we set the $c$-$M$ relation parameters to their standard values $c_0 = 9$ and $\alpha = 0.13$~\citep{Bullock:2001}. For the evaluation of the one-halo term (Eq.~\ref{eq:P1h}) we use different comoving background matter densities, linear matter power spectra, and spherical collapse evolution in the \emph{real} and \emph{pseudo} cosmologies. More specifically, for the CDM+baryons component in the \emph{real} cosmology we have \begin{eqnarray} \bar\rho &\rightarrow& \bar\rho_{\mathrm{cb}} \, , \\ P_{\mathrm{L}} &\rightarrow& P_{\mathrm{L}}^{\mathrm{(cb)}} \, . \end{eqnarray} Then the equation of motion for the spherical collapse overdensity~\citep[see, e.g.,][]{Cataneo:2019} is independent of mass and sourced only by the CDM+baryons Newtonian potential~\citep[cf.][]{LoVerde:2014}; the flat $\Lambda$CDM background expansion is controlled by $\Omega_{\mathrm{m}}$. On the other hand, for the \emph{pseudo} cosmology \begin{eqnarray} \bar\rho &\rightarrow& \bar\rho_{\mathrm{m}} \, , \\ P_{\mathrm{L}} &\rightarrow& P_{\mathrm{L}}^{\mathrm{(m)}} \, , \end{eqnarray} while the spherical collapse dynamics is still governed by the standard $\Lambda$CDM equation with $\Omega_{\mathrm{cb}}^{\rm pseudo} = \Omega_{\mathrm{m}}^{\rm real}$. 
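The power-law $c$-$M$ relation of Eq.~\eqref{eq:Bullock} in sketch form; note that the characteristic mass below is a hypothetical placeholder, since in practice $M_*$ must be solved from $\nu(M_*,z)=1$ separately in each cosmology:

```python
# Sketch of the Bullock et al. (2001) concentration-mass relation,
# Eq. (eq:Bullock), with the standard parameters c0 = 9 and alpha = 0.13.
# M_star is a hypothetical placeholder; in practice it solves nu(M_star, z) = 1.

def concentration(M, z, M_star, c0=9.0, alpha=0.13):
    return c0 / (1.0 + z) * (M / M_star) ** (-alpha)

M_star = 1.0e13  # placeholder characteristic mass in h^-1 Msun

# concentration decreases with both halo mass and redshift
c_low_mass = concentration(1.0e12, 0.0, M_star)
c_high_mass = concentration(1.0e15, 0.0, M_star)
```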
Finally, assuming we can accurately compute the non-linear matter power spectrum of the \emph{pseudo} cosmology with methods other than the halo model~\citep[see, e.g.,][]{Giblin:2019}, the total matter power spectrum of the \emph{real} cosmology, Eq.~\eqref{eq:totPk}, can be rewritten in the \emph{halo model reaction} framework as \begin{eqnarray}\label{eq:tot_matter_pred} P^{\mathrm{(m)}}(k,z)=\mathcal{R}(k,z) \times P^{\rm{pseudo}}(k,z) \, . \end{eqnarray} In this work we generally use the \emph{pseudo} matter power spectrum measured from the simulations described in the next Section. However, to test the robustness of the \emph{reaction} approach to alternative $N$-body codes implementing massive neutrinos, in Sec.~\ref{sec:halofit} we employ the~\cite{Bird:2012} and~\cite{Takahashi:2012} fitting formulas as proxies for the \emph{real} and \emph{pseudo} massive neutrino simulations, respectively. \subsection{$N$-body simulations}\label{sec:sims} We compute our fiducial non-linear power spectra and halo properties with the publicly available $N$-body code {\sc cubep$^3$m} \citep{HarnoisDeraps:2013}, which has been modified to include neutrinos as a separate set of particles \citep{Inman:2015,Emberson:2017}. We run a suite of simulations both with and without neutrino particles. In the standard massless neutrino case, particles are initialized from the Zel'dovich displacement field \citep{Zeldovich:1970}, obtained from the combined baryons + CDM transfer functions, linearly evolved from $z=0$ to $z=100$. However, for the \emph{pseudo} cosmologies we generate the initial conditions from the total linear matter power spectrum of the corresponding \emph{real} massive neutrino cosmologies at $z_{\mathrm{f}} = 0$ or 1 (see Eq.~\ref{eq:P_L_pseudo}), rescaled to the initial redshift $z = 100$ with the $\Lambda$CDM linear growth function using $\Omega_{\rm m}^{\rm real}$. 
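Putting the pieces together, the reaction of Eq.~\eqref{eq:react} and the final prediction of Eq.~\eqref{eq:tot_matter_pred} amount to the following sketch, written for scalar inputs at a single $k$ for clarity; the halo-model and \emph{pseudo} spectra are assumed to be computed elsewhere:

```python
import math

# Sketch of the halo model reaction, Eq. (eq:react), and of the final
# prediction, Eq. (eq:tot_matter_pred). Scalar inputs at a single k; the
# input spectra are assumed computed elsewhere (halo model, simulations,
# or an emulator).

def reaction(P_cb_hm, P_nu_lin, P_pseudo_hm, f_nu):
    w = 1.0 - f_nu
    cross = math.sqrt(P_cb_hm * P_nu_lin)  # cross-spectrum approximation
    return (w ** 2 * P_cb_hm + 2.0 * f_nu * w * cross
            + f_nu ** 2 * P_nu_lin) / P_pseudo_hm

def predict_total_pk(R, P_pseudo_nl):
    """P^(m)(k, z) = R(k, z) * P^pseudo(k, z)."""
    return R * P_pseudo_nl
```

In the massless limit ($f_\nu = 0$) with identical \emph{real} and \emph{pseudo} halo-model spectra, the reaction reduces to unity, as it must.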
In the massive neutrino case the simulations run in two phases, as unphysical dynamics sourced by the large thermal velocities (such as unaccounted for relativistic effects or large Poisson fluctuations) can occur if neutrinos are included at too high redshift\footnote{The particle initialization and the execution pipelines were improved since \citet{Inman:2015}, which is why we provide more details here \citep[see][for additional descriptions]{Inman:2017b}.}~\citep{Inman:2015}. In the first, from $z=100$ to $z=10$, only CDM particles are evolved; the neutrinos are treated as a perfectly homogeneous background component. We account for their impact on the growth factor by multiplying a $z=10$ CDM transfer function with the neutrino correction, $D(z=100)/D(z=10)$, where $D(a)\propto a^{1-3f_\nu/5}$ \citep{Bond:1980}. The Zel'dovich displacement is also modified to account for neutrino masses, with every velocity component being multiplied by $1-3f_\nu/5$. Finally, the mass of every particle is multiplied by $1-f_{\nu}$. With this strategy, CDM perturbations are correct at $z=10$ even though we do not evolve neutrino perturbations before then. In the second phase, neutrinos are added into the code as a separate $N$-body species. For their initialization, neutrino density and velocity fields are computed at $z=10$ from {\sc camb} transfer functions, and the Zel'dovich approximation is again used to compute particle displacements and velocities. A random thermal contribution, drawn from the Fermi-Dirac distribution, is also added to their velocities. {\sc cubep$^3$m} then co-evolves neutrinos and dark matter with masses weighted by $f_\nu$ and $1-f_\nu$ respectively. In all neutrino runs, we assume a single massive neutrino contributing $\Omega_\nu h^2=m_\nu/93.14 \, {\rm eV}$~\citep{Mangano:2005}, and consider cosmologies with $m_\nu = 0.05, 0.1, 0.2, 0.4$ eV. 
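The first-phase scalings can be sketched as follows; the suppressed growth $D(a)\propto a^{1-3f_\nu/5}$ is the Bond et al. scaling quoted above, and the redshifts and $f_\nu$ values below are only illustrative:

```python
# Sketch of the first-phase neutrino treatment: between z = 100 and z = 10
# only CDM is evolved, with the neutrino-induced suppression emulated via
# D(a) ~ a^(1 - 3 f_nu / 5). Zel'dovich velocities are scaled by
# 1 - 3 f_nu / 5 and particle masses by 1 - f_nu, as described in the text.

def growth_suppression(z_ini, z_end, f_nu):
    """Ratio of the suppressed to the matter-dominated growth accumulated
    between z_ini and z_end."""
    a_growth = (1.0 + z_ini) / (1.0 + z_end)  # a_end / a_ini
    return a_growth ** (1.0 - 3.0 * f_nu / 5.0) / a_growth

def velocity_factor(f_nu):
    return 1.0 - 3.0 * f_nu / 5.0

def mass_factor(f_nu):
    return 1.0 - f_nu
```

For $f_\nu = 0$ all three factors reduce to unity, recovering the massless-neutrino pipeline.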
We perform runs with $N_\nu = 3072^3$ neutrino particles and box sizes $L_{\rm box} = 500 \, h^{-1} {\rm Mpc}$ for all values of $m_\nu$ considered, as well as one set of large-volume runs with $L_{\rm box} = 1000 \, h^{-1} {\rm Mpc}$ and $m_\nu = 0.4$ eV. We use $N_{\rm cb} = 1536^3$ CDM particles in the smaller boxes and $N_{\rm cb} = 3072^3$ particles in the larger boxes, corresponding to a common mass resolution of $m_{\mathrm{cb}} = 2.78 \times 10^9 \, h^{-1}M_\odot$ for the baseline $\Lambda$CDM cosmology. A common gravitational softening length of $24 \, h^{-1}{\rm kpc}$ is also used. \begin{figure*} \begin{center} \includegraphics[width=\columnwidth]{./Figures/pk_Mnu_z0} \quad \includegraphics[width=\columnwidth]{./Figures/pk_Mnu_z1} \end{center} \caption{ Total matter power spectrum ratios of the massive to the massless neutrino cosmologies at $z=0$ (left) and $z=1$ (right). The data points show the results of the $L_{\rm box} = 500 \, h^{-1} {\rm Mpc}$ simulations described in Sec.~\ref{sec:sims}, and the black lines correspond to the halo model reaction predictions, $P^{\rm (m)} = \mathcal{R} \times P^{\rm pseudo}$, where $P^{\rm pseudo}$ is taken from flat $\Lambda$CDM dark matter-only simulations with \emph{pseudo} initial conditions, and the halo model reactions are computed assuming the ~\citet{Despali:2016} and~\citet{Bullock:2001} fits for the halo mass functions and $c$-$M$ relations, respectively. The lower panels illustrate the excellent performance of our method, which matches the simulations at percent level for all $k \lesssim 10 \, h \, {\rm Mpc}^{-1}$ (solid lines). } \label{fig:pk_ratios} \end{figure*} Halo catalogues for each simulation are generated using a spherical overdensity algorithm based on the method described in \citet{HarnoisDeraps:2013}. Briefly, the first stage of this process is to identify halo candidates as peaks in the dark matter density field. 
This is achieved by interpolating dark matter particles onto a uniform mesh with cell width $81 \, h^{-1}{\rm kpc}$ and denoting candidates as local maxima in the density field. We then refine the density interpolation in the local region of each candidate using a mesh of width $16 \, h^{-1}{\rm kpc}$ and identify a centre as the location of maximum density. The halo radius is defined by building spherical shells around the centre until the enclosed density reaches the cosmology- and redshift-dependent virial density, $\Delta_{\mathrm{vir}}$, derived from the spherical collapse and virial theorem. The density profile for each halo is stored using 20 logarithmically-spaced bins that reach out to $2 \, h^{-1} {\rm Mpc}$. We compute a concentration for each halo by performing a least-squares fit to an NFW density profile. When doing so, we discard all radial bins smaller than twice the gravitational softening length and larger than the virial radius. \section{Results}\label{sec:results} \subsection{$P^{\mathrm{(m)}}$ from the standard halo abundance and concentration fits} We begin by presenting the performance of the \emph{halo model reactions} against our suite of small-volume simulations. For this comparison our reaction predictions (Eq.~\ref{eq:react}) are based on the standard values of the parameters entering the halo mass function~\citep{Despali:2016} and $c$-$M$ relation~\citep{Bullock:2001}, which we apply to both the \emph{real} and \emph{pseudo} massive neutrino cosmologies. The upper panels of Fig.~\ref{fig:pk_ratios} show the impact of massive neutrinos on the non-linear total matter power spectrum for the range of neutrino masses relevant for the next generation of cosmological surveys~\citep{Coulton:2019}. The lower panels display the relative deviation of our predictions (see Eq.~\ref{eq:tot_matter_pred}) from the full massive neutrino simulations, which is $\lesssim 1\%$ over the entire range of scales analysed and at both redshifts considered. 
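The concentration measurement described above can be sketched as a log-space least-squares fit of the NFW shape to a binned density profile. Below, a simple grid search over $c$ with the profile amplitude marginalised analytically, applied to a synthetic noise-free profile rather than simulation data:

```python
import math

# Sketch of an NFW concentration fit: rho(r) ~ 1 / [(r/r_s)(1 + r/r_s)^2]
# is fitted to a binned log-density profile by grid search over
# c = R_vir / r_s, with a free overall amplitude. The binned profile here
# is synthetic, not actual halo-catalogue data.

def nfw_shape(x):
    """Dimensionless NFW profile as a function of x = r / r_s."""
    return 1.0 / (x * (1.0 + x) ** 2)

def fit_concentration(r_over_rvir, log_rho, c_grid):
    """Return the grid concentration minimising the scatter of
    log(rho) - log(shape), i.e. a log-space least-squares fit in which the
    best-fitting amplitude is absorbed by subtracting the mean residual."""
    best_c, best_cost = None, float("inf")
    for c in c_grid:
        resid = [lr - math.log10(nfw_shape(c * r))
                 for r, lr in zip(r_over_rvir, log_rho)]
        mean = sum(resid) / len(resid)
        cost = sum((d - mean) ** 2 for d in resid)
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c

# synthetic halo profile with true concentration c = 8, 20 radial bins
r_bins = [0.05 * i for i in range(1, 21)]
true_c = 8.0
log_rho = [math.log10(nfw_shape(true_c * r)) for r in r_bins]
c_grid = [0.5 * i for i in range(2, 41)]  # c = 1.0, 1.5, ..., 20.0
```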
This highly accurate result follows from the good agreement between the predicted \emph{real}-to-\emph{pseudo} halo mass function ratio and the simulations, which we show in the lower-left panel of Fig.~\ref{fig:sim_virial} for the largest neutrino mass in our study. \citet{Cataneo:2019} noticed that this quantity is directly related to the accuracy of the \emph{reaction} across the transition to the non-linear regime. In fact, although the \emph{real} and \emph{pseudo} standard halo mass functions are a poor fit for halo masses $M \gtrsim 10^{14.5} \, h^{-1}M_\odot$ when taken individually (Fig.~\ref{fig:sim_virial}, upper- and middle-left panel), the predicted ratio remains within $\sim 2\%$ of the simulation measurements, thus corroborating the original findings of~\citet{Cataneo:2019}. On the other hand, halo concentrations become relevant deep in the non-linear regime, and the right panel of Fig.~\ref{fig:sim_virial} illustrates that despite the large absolute inaccuracies of the standard fits, once again the \emph{real}-to-\emph{pseudo} ratio is not too dissimilar from that of the simulations. This fact enables the excellent performance of the \emph{halo model reactions} on scales $k \gtrsim 1 \, h \, {\rm Mpc}^{-1}$. \begin{figure*} \begin{center} \includegraphics[width=\columnwidth]{./Figures/P1h_all} \quad \includegraphics[width=0.965\columnwidth]{./Figures/virial_cM_mnu_0p4_z0_v2.pdf} \end{center} \caption{Halo properties extracted from the $z=0$ snapshots of the large-volume simulations ($L_{\rm box} = 1000 \, h^{-1} {\rm Mpc}$). {\it Left:} the abundance of dark matter halos for the real (top panel) and pseudo (middle panel) cosmologies with $m_\nu = 0.4$ eV, both adjusted with prefactors such as to match the large-scale limit of the corresponding one-halo integrands (Eq.~\ref{eq:P1h}). The lower panel shows the \emph{real}-to-\emph{pseudo} halo mass function ratio, a quantity controlling the two-to-one-halo transition of the \emph{halo model reaction}. 
The data points and error bars represent the means and Jackknife uncertainties obtained by splitting the simulation boxes in octants. Halo masses are binned in logarithmic bins of size $\Delta\log_{10}M = 0.1$. We only use halos with more than 1000 particles and discard mass bins with fewer than 5 halos per sub-volume. The blue lines represent the Sheth-Tormen semi-analytical predictions with halo mass function parameters either taken from~\citet{Despali:2016} (dashed) or re-calibrated to fit individually our \emph{real} and \emph{pseudo} simulations (solid). {\it Right:} virial concentration-mass relation for the \emph{real} (blue) and \emph{pseudo} (orange) cosmologies with $m_\nu = 0.4$ eV. The coloured lines are power law approximations with parameter values taken from~\citet{Bullock:2001} (dashed) or fitted to our simulations (solid). Symbols denote measurements from the simulations with central values corresponding to the mass-weighted mean concentration of the halos within each mass bin, and error bars only account for the Poisson noise. In addition, we only keep halos with more than 3000 particles to minimise profile fitting errors. } \label{fig:sim_virial} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./Figures/pk_Mnu_z0_MV_refit} \end{center} \caption{Present-day total matter power spectrum ratio of the massive neutrino cosmology with $m_\nu = 0.4$ eV relative to the massless neutrino case. Symbols correspond to the measurements from the large-volume simulations. Solid lines are the \emph{halo model reaction} predictions adopting the refitted halo mass functions and $c$-$M$ relations shown in Fig.~\ref{fig:sim_virial} for the \emph{pseudo} cosmology, while the \emph{real} quantities use either the standard or refitted parameters. For comparison, we also show the predictions computed using the standard fits for both the \emph{pseudo} and the \emph{real} halo properties (dashed line). 
For all cases, our predictions use the non-linear matter power spectrum of the large-volume \emph{pseudo} simulation. The lower panel shows that once the \emph{pseudo} halo properties are calibrated to the simulations, the \emph{reaction} enables an accurate one-to-one mapping between the \emph{real} halo properties and the power spectrum, thus out-performing the traditional halo model calculations. Differences on small scales for the predictions based on the full standard fits (dashed line) compared to those in Fig.~\ref{fig:pk_ratios} are due to different halo concentrations in the small- and large-volume \emph{real} massive neutrino simulations. } \label{fig:MV} \end{figure} \subsection{The effect of halo properties measured in simulations}\label{sec:fits} It is currently unclear how accurately the non-linear matter power spectrum can be predicted given just mean halo properties such as their abundance and density profiles. For the standard halo model, it is well known that this approach fails due to large inaccuracies on quasi-linear scales of the absolute power spectrum~\citep[see, e.g.,][]{Giocoli:2010,Massara:2014}. The \emph{halo model reactions}, however, are fractional quantities, and as such better suited to absorb the errors incurred separately by the \emph{real} and \emph{pseudo} halo model predictions. To quantify the accuracy of this approach, we fit the Sheth-Tormen mass function and $c$-$M$ relations to our large volume $m_\nu=0.4$ eV simulations, obtaining $\{A^\mathrm{real},q^\mathrm{real},p^\mathrm{real}\} = \{0.3152,0.8423,0\}$, $\{A^\mathrm{pseudo},q^\mathrm{pseudo},p^\mathrm{pseudo}\} = \{0.3097,0.8313,0\}$, $\{c_0^\mathrm{real},\alpha^\mathrm{real}\} = \{6.3,0.062\}$, $\{c_0^\mathrm{pseudo},\alpha^\mathrm{pseudo}\} = \{6,0.058\}$. We show these fits as solid lines in Fig.~\ref{fig:sim_virial}. 
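With the refitted parameters quoted above, the \emph{real}-to-\emph{pseudo} multiplicity ratio can be evaluated directly. The sketch below holds the peak height fixed purely for illustration, whereas in full calculations $\nu(M)$ itself differs between the two cosmologies through $\sigma(M)$:

```python
import math

# Sketch: real-to-pseudo ratio of the Sheth-Tormen multiplicity with the
# refitted (A, q, p) parameters quoted in the text (m_nu = 0.4 eV, z = 0).
# Illustration only: the ratio is shown at fixed peak height nu, while in
# practice nu(M) also differs between the real and pseudo cosmologies.

REAL = (0.3152, 0.8423, 0.0)
PSEUDO = (0.3097, 0.8313, 0.0)

def st_multiplicity(nu, params):
    A, q, p = params
    qnu2 = q * nu ** 2
    return (A * math.sqrt(2.0 * qnu2 / math.pi)
            * (1.0 + qnu2 ** (-p)) * math.exp(-qnu2 / 2.0))

def real_to_pseudo(nu):
    return st_multiplicity(nu, REAL) / st_multiplicity(nu, PSEUDO)
```

The ratio stays within a few percent of unity around $\nu \sim 1$ and falls below unity at high peak heights, qualitatively consistent with the mild \emph{real}-to-\emph{pseudo} abundance differences seen in Fig.~\ref{fig:sim_virial}.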
To estimate the relative importance of the mean halo properties for the accuracy of the predicted non-linear power spectrum, in Fig.~\ref{fig:MV} we fix the \emph{pseudo} halo mass function and $c$-$M$ parameters to their refitted values while varying their \emph{real} counterparts. When the parameters entering the halo mass function and $c$-$M$ relation are all set to their standard values (blue line), our predictions experience deviations as large as $\sim 10\%$ for $k \gtrsim 0.1 \, h \, {\rm Mpc}^{-1}$. The match to the simulations improves substantially on scales $0.1 \lesssim k \lesssim 1 \, h \, {\rm Mpc}^{-1}$ by including information on the halo mass function (orange line). If we further add our knowledge of the \emph{real} halo concentrations, the agreement with the simulations reaches sub-percent level down to the smallest scales modelled in this study. These results confirm that the \emph{halo model reactions} can produce even higher-quality predictions when supplied with accurate halo properties and \emph{pseudo} non-linear power spectra\footnote{As pointed out earlier in the text, the \emph{reactions} are fractional quantities, that is, as long as the same halo finder and halo concentration algorithm are used for the \emph{real} and \emph{pseudo} cosmologies, the refitted halo-model predictions will match the simulations very well. In the future we will be interested in calibrating the \emph{pseudo} halo properties with the end goal of building emulators. At that stage, the level of convergence in the output of more sophisticated halo finders~\citep[e.g.,][]{Behroozi:2013,Elahi:2019} will be an important indicator of the absolute accuracy attainable by the \emph{reaction} framework.}. For comparison, we also show the calculation based on the standard fits for both the \emph{pseudo} and the \emph{real} halo properties (dashed line). 
Differences on scales $k \gtrsim 5 \, h \, {\rm Mpc}^{-1}$ compared to the same prediction in Fig.~\ref{fig:pk_ratios} are primarily sourced by changes to the concentrations of small halos between the small- and large-volume simulations of the \emph{real} massive neutrino cosmology, which in turn depend on the different $N_\nu/N_{\rm cb}$ particle number ratio used for these two runs (see Sec.~\ref{sec:sims}). \subsection{Comparison to {\sc halofit}}\label{sec:halofit} We shall now assess the validity of the \emph{halo model reactions} for alternative implementations of the gravitational force~\citep[e.g.][]{Springel:2005,Habib:2016} and of massive neutrinos~\citep[e.g.][]{Banerjee:2016,Ali-Haimoud:2018} in $N$-body codes. Ideally, we would carry out this test using the simulation outputs of codes other than {\sc cubep$^3$m}~\citep[e.g.,][]{Castorina:2015,Liu:2018}. However, publicly available snapshots do not include runs for the \emph{pseudo} cosmologies, which means we must resort to our simulations for these cases. Given that the clustering of matter generated by different codes can vary considerably even for dark matter-only simulations~\citep{Schneider:2016,Garrison:2019}, this choice could bias our conclusions in the highly non-linear regime. Instead, we use {\sc halofit} to compute the non-linear matter power spectrum, employing the \citet{Takahashi:2012} calibration for the {\it pseudo} and the massless $\Lambda$CDM cases, and the \citet{Bird:2012} prescription for the massive neutrino cosmologies; these two fitting functions are calibrated to the output of the {\sc gadget-2} and {\sc gadget-3} codes~\citep{Springel:2001,Springel:2005}, respectively. Moreover, for this comparison we use the standard halo mass function and $c$-$M$ relation parameters listed in Sec.~\ref{sec:reactions}, i.e. without refitting to the {\sc cubep$^3$m} simulations. 
We find that our \emph{reaction}-based predictions for the total matter power spectrum of the massive neutrino cosmologies deviate no more than 3\% from the {\sc halofit} outputs. Such departures are comparable to, or smaller than, the typical {\sc halofit} inaccuracies~\citep[see, e.g.,][]{Knabenhans:2019,Smith:2019}, which suggests our method can also satisfactorily reproduce the results of other $N$-body codes, provided that the baseline \emph{pseudo} power spectrum is obtained from simulations run with the same code and the same initial random phases as their \emph{real} massive neutrino counterparts. \section{Discussion}\label{sec:discussion} In this paper we incorporated in the \emph{halo model reaction} framework of~\citet{Cataneo:2019} an effective analytical strategy to accurately describe the non-linear effects induced by massive neutrinos on the total matter power spectrum. Our approach draws from the \emph{cold dark matter prescription} adopted in~\cite{Massara:2014}, with the notable difference that here we treated the clustering of massive neutrinos as purely linear, and worked with the \emph{pseudo} rather than the standard massless neutrino cosmology as the baseline in our halo model power spectrum ratios. In contrast to modified gravity cosmologies~\citep{Cataneo:2019}, we found that the inclusion of high-order perturbative corrections to the two-halo contributions in the \emph{reaction} was unnecessary. We studied the interdependency between halo properties and matter power spectrum \emph{reactions}, and conclusively showed that accurate knowledge of the mean halo abundances and concentrations (both central in cluster cosmology studies) leads to exquisite predictions for the \emph{halo model reactions}. 
Together with the fast emulation method to compute the \emph{pseudo} non-linear matter power spectrum presented in~\citet{Giblin:2019}, the tight connection between halo mass function and matter power spectrum in our approach enables, for instance, the simultaneous analysis of cluster number counts and cosmic shear data in a novel, self-consistent way. In a future work, we will merge in a single \emph{reaction} function both massive neutrino and dark energy/modified gravity cosmologies, which will enable us to predict the combined effects of these extensions on the matter power spectrum in a regime so far only accessible to specially modified $N$-body simulations~\citep{Baldi:2014,Giocoli:2018,Wright:2019}. Finally, poorly understood baryonic processes impact the distribution of matter on scales $k \gtrsim 1 \, h \, {\rm Mpc}^{-1}$, thus limiting our ability to correctly model the power spectrum deep in the non-linear regime~\citep[see][for a review]{Chisari:2019}. It has been shown that it is possible to account for these additional effects within the halo model~\citep{Semboloni:2011,Semboloni:2013,Fedeli:2014,Mohammed:2014,Mead:2015,Schneider:2019,Debackere:2019}, and we leave the implementation of baryonic feedback in the \emph{halo model reactions} to future investigation. \section*{Acknowledgements} MC thanks A. Mead for useful conversations in the early stages of this work. MC, JHD and CH acknowledge support from the European Research Council under grant number 647112. CH acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research. Work at Argonne National Laboratory was supported under U.S. Department of Energy contract DE-AC02-06CH11357. The authors are grateful to Ue-Li Pen for his assistance with computing resources. 
Computations were performed on the BGQ and Niagara supercomputers at the SciNet HPC Consortium \citep{Loken:2010,Ponce:2019}. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. \bibliographystyle{mnras}
\section{Introduction} Materials hosting quantum spin-liquid states have attracted much interest recently~\cite{Sachdev2008NPhys,Balents2010,Norman2016RMP,rau2016anurev,Winter2017JPCM,Takagi2019NRP,Broholm2020Sci}, as in these systems the quantum information may be protected from decoherence, and they can be applied in quantum computing technology~\cite{Nayak2008RMP}. A prime example for a theoretical model to host a quantum spin-liquid state is provided by the exactly solvable Kitaev model on the honeycomb lattice~\cite{Kitaev2006}, which contains frustrated, bond-dependent magnetic interactions that lead to fractionalized quasiparticles: gauge fluxes and Majorana fermions. The investigation and experimental verification of quantum spin-liquid states presents, however, an ongoing challenge that has brought \mbox{$\alpha$-RuCl$_3$}\xspace to the forefront of research as a prime candidate for Kitaev physics. While the honeycomb-layered \mbox{$\alpha$-RuCl$_3$}\xspace orders antiferromagnetically at low temperatures~\cite{Johnson2015PRB,Sears2015PRB}, the possibility of residual physics of fractionalization~\cite{nasu2016fermionic,Do2017NatPhys,Jansa2018NatPhys,Motome2020JPSJ,Li2020PRR} or even of a field-induced Kitaev spin-liquid state~\cite{yadav2016KitaevExchangeFieldinduced,Baek2017PRL,Banerjee2018QM} has been intensively discussed. 
So far, numerous experimental methods have been used for the investigation of \mbox{$\alpha$-RuCl$_3$}\xspace, including neutron and Raman scattering~\cite{Sandilands2015PRL,Do2017NatPhys,Banerjee2018QM,Balz2019PRB,sahasrabudhe2020HighfieldQuantumDisordered,Wulferding2020NatComm,Wang2020QM}, specific heat~\cite{Wolter2017PRB}, Grüneisen parameter~\cite{bachus2020Thermodynamic}, microwave and terahertz absorption~\cite{Baek2017PRL,Wang2017PRL,Wellm2018PRB}, as well as thermal transport measurements~\cite{KasaharaPRL2018,Hentrich2019PRB,HentrichPRL2019,czajka2021OscillationsThermalConductivitya}, notably including reports of a half-integer-quantized thermal Hall conductivity~\cite{Kasahara2018Nat,yokoi2020half,Yamashita2020PRB,bruin2021RobustnessThermalHall}. Various Raman scattering studies have reported pronounced Fano line shapes~\cite{Sandilands2015PRL,Mai2019PRB,sahasrabudhe2020HighfieldQuantumDisordered,Wulferding2020NatComm}, which evidence a significant magnetoelastic coupling between the phonon modes and the magnetic continuum. Indeed, more recent thermal expansion and magnetostriction measurements have probed direct consequences of such coupling in \mbox{$\alpha$-RuCl$_3$}\xspace~\cite{He2018JPCM,Gass2020PRB,Schonemann2020PRB}. Strong magnetostrictive effects are plausible for two reasons. Firstly, magnetoelastic coupling is expected to be especially pronounced in Kitaev materials due to the strongly geometry-dependent exchange mechanisms~\cite{jackeli2009MottInsulatorsStrong,rau2016anurev,Winter2016PRB}. Secondly, the weak van-der-Waals force between the honeycomb layers leads to large changes in the lattice parameters when mechanical stress is applied. We note that magnetoelastic coupling is necessarily in play when measuring a hypothetical spinful chiral edge current (that would be present in the Kitaev spin-liquid) \cite{vinkler-aviv2018ApproximatelyQuantizedThermal,ye2018QuantizationThermalHall}.
Furthermore, magnetoelastic coupling could also lend the phonons themselves a bulk transverse (Hall) current, see e.g. Ref.~\onlinecite{ye2021phonon}. Further understanding of the intrinsic anisotropy in \mbox{$\alpha$-RuCl$_3$}\xspace\ could be gained by field-angle-dependent measurements. So far, significant magnetic torque effects have been found and investigated for various directions of the magnetic ($H$) field \cite{Leahy2017PRL,modic2018ResonantTorsionMagnetometry,modic2018ChiralSpinOrder,Riedl2019PRL}. Additionally, specific heat and thermal conductivity measurements have been performed in magnetic fields applied at various in-plane and out-of-plane angles and revealed anisotropic thermodynamic and transport properties \cite{Kasahara2018Nat,yokoi2020half,Yamashita2020PRB,czajka2021OscillationsThermalConductivitya,bruin2021RobustnessThermalHall,tanaka2020thermodynamic}. Therefore, combined investigations of the magnetoelastic coupling and the magnetic anisotropy, using canted fields (i.e. fields tilted out of the honeycomb plane), can help to unveil the complex behavior of \mbox{$\alpha$-RuCl$_3$}\xspace. In this combined experimental and theoretical study, we focus on the angular, temperature, and magnetic field dependence of the magnetic and magnetoelastic properties of \mbox{$\alpha$-RuCl$_3$}\xspace. Depending on the in-plane field angle, we resolve a phase transition between different antiferromagnetic orders, in accord with recent studies. In the presence of canted magnetic fields, we observe an anomalous increase in the magnetostriction at high fields related to the magnetic torque effects. The combination of magnetic measurements reveals a significant, non-symmetric anisotropy for magnetic fields canted out of the hexagonal $ab$-plane in opposite directions, upwards or downwards, namely towards the $+c^*$ or $-c^*$ axes. This angular anisotropy is related to the co-aligned, edge-sharing RuCl$_6$ octahedra within the hexagonal planes.
The experimentally observed magnetic and magnetoelastic anisotropy is the largest when the $H$ field is rotated within the $ac^*$ plane, and smallest when rotated within the $bc^*$ plane. To model the magnetostriction and the effect of magnetic torque, we employ \textit{ab-initio} derived magnetoelastic couplings \cite{kaib2021magnetoelastic}, allowing us to separate the different contributions of the magnetoelastic interactions. The theoretical model provides a good qualitative description of the experimental observations, namely predicting non-symmetric angular anisotropy for the $ac^*$ plane while excluding it for the $bc^*$ plane. However, our experiments and the theoretical model also point out the significant role of magnetic torque in experiments where the sample can move or deform freely, such as magnetostriction and thermal transport measurements. In the case of magnetostriction measurements, the movement or deformation of the sample is on the sub-$\mu$m scale, while in the case of thermal transport measurements the deformation can be significantly larger. \section{Experimental and Theoretical Methods} \subsection{Experimental details} Single crystals of \mbox{$\alpha$-RuCl$_3$}\xspace were grown using the chemical vapor transport method~\cite{Banerjee2017Sci}. The orientations of the monoclinic $a$ and $b$ axes with respect to the honeycomb plane were determined by angular dependent magnetization measurements with $\mathbf{H}\in{ab}$ fields. The angular dependent magnetization measurements were carried out in a SQUID magnetometer (MPMS-XL, Quantum Design). The field dependent magnetization measurements up to $\mu_0H$=14\,T were measured in a vibrating sample magnetometer (VSM, PPMS, Quantum Design). The precise 45\,deg canting orientation of the crystals was ensured by a pair of appropriately cut quartz pads, between which the sample was fixed with varnish.
The $ab$-plane orientation of the crystals was aligned under a microscope with $\pm$1-2\,deg angular precision. The magnetostriction was measured using a custom-built dilatometer based on the capacitance measurement technique (AH2700A, Andeen-Hagerling)~\cite{Pott1983}. Due to the dimensions of the available single crystals, the length change $\Delta{L}$ of the sample was measured along the $c^*$ axis ($\Delta{L}_{c^*}\parallel{c}^*$, see \cref{RuCl3-1}(a)), while the $H$ field could be applied in arbitrary directions via the rotation of the capacitance cell body or the sample. In this measurement technique, the sample is held in place in the dilatometer by a small uniaxial pressure applied on the sample during the mounting. Therefore, if sufficient torque is applied, the sample may slightly rotate or deform within the dilatometer, which is measured as an apparent length change. This issue is discussed in detail in Sec.~\ref{sec:MandME}, where we also give an estimate of its magnitude. During the magnetostriction measurements, the magnetic field was swept between $\pm$14\,T with 0.01\,T/min or 0.03\,T/min rates at constant temperatures. The linear magnetostriction coefficient along the $c^*$ axis ($\lambda_{c^*}$) was calculated as the $H$-field derivative of the relative length change: \begin{equation} \lambda_{c^*}=\frac{\partial}{\partial(\mu_0H)}\frac{\Delta{L}_{c^*}(T,\mu_0H)}{L_{c^*}(300\,K,0\,T)}. \label{eq:magnetostrictionexp} \end{equation} The measurements were performed on two different pieces of \mbox{$\alpha$-RuCl$_3$}\xspace crystals from the same batch (samples $\#$1 and $\#2$) with thicknesses of $\sim$800\,$\mu$m. \begin{figure} \includegraphics[width=8.5truecm] {RuCl3_uc_and_MTheta_85_v02.pdf} \caption{(Color online) (a)~Single honeycomb layer of \mbox{$\alpha$-RuCl$_3$}\xspace and the crystallographic axes ($a$, $b$, and $c^*$).
We highlight two directions within the $ab$ plane; the $a$ axis is perpendicular ($\perp{bond}$) and the $b$ axis is parallel ($\parallel{bond}$) to one of the Ru-Ru bonds, respectively. Red and blue arrows at the honeycomb sites indicate the zigzag domain with ordering wave vector $\mathbf Q \parallel b$. Dashed line indicates a C$_2$ rotation symmetry around an axis parallel to the $b$ axis. (b)~During the magnetization and magnetostriction measurements, the magnetic field ($\mathbf{H}$) was canted out of the $ab$ plane by an angle of $\vartheta$, while the planar projection of the applied field was either along the $a$ or $b$ axis. (c)~Temperature dependence of the magnetization for fields along the main crystallographic axes in the field cooling runs ($\mu_0H$=1\,T). Note that the data for $\mathbf{H}\parallel{c^*}$ are multiplied by a factor of 5 for better visibility. (d-f)~Angular dependence of the magnetization at $T$=2\,K for fields rotated within the $ab$, $ac^*$, and $bc^*$ planes, respectively. Measured data are plotted with symbols (full circles) for $\mu_0H$=1\,T and 5\,T. In the case of $\mathbf{H}\in{ab}$, the measurements are plotted for $\mu_0H$=2\,T. Dashed curves in panels (e,f) correspond to the theoretical calculations. } \label{RuCl3-1} \end{figure} \subsection{Theoretical details \label{sec:theory}} We compare our measurements to numerical results on extended Kitaev models. In such models, the bonds are labeled as X, Y or Z depending on their orientation.
For a nearest-neighbor Z-bond (parallel to the $b$ axis, see \cref{RuCl3-1}(a)) with local $C_{2h}$ symmetry, the symmetry-allowed magnetic exchange between the $J_{\text{eff}}=\frac12$ pseudospins, labeled $\mathbf S_i$ and $\mathbf S_j$, is \cite{rau2014generic} \begin{align} H_{\mathrm Z} = & K S_{i}^{z} S_{j}^{z} + J \mathbf{S}_{i} \cdot \mathbf{S}_{j} +\Gamma\left(S_{i}^{x} S_{j}^{y}+S_{i}^{y} S_{j}^{x}\right) \notag \\ & +\Gamma^{\prime}\left(S_{i}^{x} S_{j}^{z}+S_{i}^{z} S_{j}^{x}+S_{i}^{y} S_{j}^{z}+S_{i}^{z} S_{j}^{y}\right), \label{eq:Hamil_mag} \end{align} where $K$ and $J$ correspond to the Kitaev and Heisenberg exchanges, respectively, while $\Gamma, \Gamma'$ are symmetric off-diagonal exchanges. The X and Y bond exchanges can be constructed via the cyclic permutation of $(x,y,z)$ in \cref{eq:Hamil_mag}. The magnetic Hamiltonian is then given as the sum of these exchange terms (including possible longer-range terms) and the Zeeman term $H_\text{Zee}=-\mu_B\mu_0 \sum_i \mathbf H \cdot \mathbb G \cdot \mathbf S_i$, where $\mathbb G$ is the gyromagnetic tensor. To solve it, we employ exact diagonalization (ED) on a hexagon-shaped 24-site cluster. As a magnetic model, we discuss the \textit{ab-initio} guided minimal model of Ref.~\onlinecite{winte17}, which has been shown to reproduce many experimental observations in \mbox{$\alpha$-RuCl$_3$}\xspace~\cite{winte17,Wolter2017PRB,winte18,cookmeyer2018SpinwaveAnalysisLowtemperature,Riedl2019PRL,sahasrabudhe2020HighfieldQuantumDisordered,bachus2020Thermodynamic,bachus2021angle}. Here the exchange parameters are \begin{equation} (K,\,J,\,\Gamma,\,\Gamma',\,J_3)=(-5,\,-0.5,\,2.5,\,0,\,0.5)\text{\,meV}, \end{equation} where $J_3$ denotes an additional third-nearest-neighbor Heisenberg exchange, and the components of $\mathbb G$ are $g_{ab}=2.3$ and $g_\ensuremath{{c^{\ast}}} =1.3$, for the in-plane and out-of-plane elements, respectively.
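The cyclic permutation that generates the X- and Y-bond exchanges from \cref{eq:Hamil_mag} can be made concrete in a short numerical sketch (illustrative only, not the ED code used in this work; couplings taken from the minimal model above, in meV):

```python
import numpy as np

# Minimal-model couplings (meV); Gamma' = 0 in this model
K, J, G, Gp = -5.0, -0.5, 2.5, 0.0

# Z-bond exchange matrix M_Z, defined by H_Z = S_i^T M_Z S_j
M_Z = np.array([[J,  G,  Gp],
                [G,  J,  Gp],
                [Gp, Gp, J + K]])

# X- and Y-bond matrices follow by cyclically relabeling (x, y, z)
def permute(M, perm):
    return M[np.ix_(perm, perm)]

M_X = permute(M_Z, [2, 0, 1])   # Kitaev term now on S^x S^x
M_Y = permute(M_Z, [1, 2, 0])   # Kitaev term now on S^y S^y

print(M_X[0, 0], M_Y[1, 1])     # both give J + K = -5.5 meV
```

The same relabeling applied to all couplings is what makes the model $C_3$-symmetric across the three bond types.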
Note that this model is $C_3$-simplified, \textit{i.e.} the coupling magnitudes are equal on X, Y and Z bonds. The $C2/m$ structure of \mbox{$\alpha$-RuCl$_3$}\xspace \cite{Johnson2015PRB} does, however, slightly break $C_3$ symmetry, a property that manifests in the \textit{in-plane} angle-dependent measurements discussed below and which is therefore, by construction, not captured by the present model. To model the spin-lattice coupling, we employ the \textit{ab-initio}-derived linear magnetoelastic couplings of Ref.~\onlinecite{kaib2021magnetoelastic} for \mbox{$\alpha$-RuCl$_3$}\xspace, defined as $\MEC{\mathcal J}=\left(\frac{\partial \mathcal J}{\partial \ensuremath{\epsilon_\cstar}}\right)|_{\ensuremath{\epsilon_\cstar}=0}$, where $\ensuremath{\epsilon_\cstar} = \Delta {L_\ensuremath{{c^{\ast}}}}/L_{\ensuremath{{c^{\ast}}}}$ and $\mathcal{J} \in \{{K},{J},\dots,g_{ab},g_\ensuremath{{c^{\ast}}}\}$. The strongest magnetoelastic exchange couplings are then \begin{equation} (\MEC{K},\MEC{J},\MEC{\Gamma},\MEC{\Gamma'})=(40.5,\,1.3,\,7.5,\,-11.5)\text{\,meV} \label{eq:MECmodel} \end{equation} and the magnetoelastic $g$ couplings $(\MEC{g_{ab}},\MEC{g_{\ensuremath{{c^{\ast}}}}})=(-1.6,\,3.85)$. Note that the predicted large magnetoelastic $\MEC\Gamma'$ coupling in this model is a somewhat unexpected property, as the \textit{magnetic} $\Gamma'$ coupling is generally found to be subdominant or negligible in magnetic models of \mbox{$\alpha$-RuCl$_3$}\xspace (see, e.g., Ref.~\onlinecite{laurell2020DynamicalThermalMagnetic}). Nevertheless, we find this large $\MEC \Gamma'$ to be essential to reproduce the strong anisotropy found in our magnetostriction measurements, as discussed below. In our calculations we also include the weaker longer-range magnetoelastic couplings of Ref.~\onlinecite{kaib2021magnetoelastic}, which, however, do not qualitatively change the results.
For the magnetostriction, we then employ the approximation \cite{kaib2021magnetoelastic} \begin{equation} \lambda_{c^*} \approx \frac{\kappa_{\cstar}}{V} \sum_{\mathcal J\in\{K,J,\dots\}} \MEC{\mathcal J} \, \left(\frac{\partial M}{\partial \mathcal J} \right)_{\ensuremath{\epsilon_\cstar}=0}, \label{eq:lambda_theory} \end{equation} where the sum runs over all strain-dependent interactions and $g$ values. The parameter $\kappa_{\cstar} \equiv - (\partial \ensuremath{\epsilon_\cstar}/\partial p_\ensuremath{{c^{\ast}}})$ is the (unknown) linear compressibility along $\ensuremath{{c^{\ast}}}$ against uniaxial pressure $p_\ensuremath{{c^{\ast}}}$. The field-dependence of $\lambda_\ensuremath{{c^{\ast}}}$ enters through the field-dependencies of the magnetization susceptibilities $\left(\frac{\partial M}{\partial \mathcal J} \right)_{\ensuremath{\epsilon_\cstar}=0}$, which we compute using ED in the magnetic model described above. \section{Magnetic and elastic properties in canted magnetic fields} \label{sec:MandME} Each layer of \mbox{$\alpha$-RuCl$_3$}\xspace consists of edge-sharing RuCl$_6$ octahedra that form a honeycomb network, as shown in Fig.~\ref{RuCl3-1}(a). For the crystal structure, both the rhombohedral $R\bar 3$ \cite{park2016emergence,glamazda2017RelationKitaevMagnetism,janssen2020MagnonDispersionDynamic} and the monoclinic $C2/m$ \cite{Johnson2015PRB,cao2016low} structures are presently discussed in the literature. We employ the axis convention of the $C2/m$ structure, where the honeycomb plane is spanned by the crystallographic $a$ and $b$ axes, while $\ensuremath{{c^{\ast}}}$ is perpendicular to it, see \cref{RuCl3-1}(a,b). Note that the $b$ axis is parallel to one of the honeycomb bonds, while the $a$ axis is perpendicular to the same bond.
The antiferromagnetic \lq\lq{zigzag}\rq\rq\ long-range order \cite{Johnson2015PRB} (Fig.~\ref{RuCl3-1}(a)) develops at $T_{\rm N}$=7.1\,K, as shown by the magnetization data (Fig.~\ref{RuCl3-1}(c)) in moderate $\mu_0H$=1\,T fields applied along the main crystallographic axes. The magnetization curves for $\mathbf{H}\parallel{a}$ and $\mathbf{H}\parallel{b}$ show a sudden decrease at $T_{\rm N}$; however, the weaker temperature dependence for $\mathbf{H}\parallel{b}$ suggests that the ordered moments are perpendicular to the $b$ axis. The particular zigzag domain structure associated with such ordering \cite{chaloupka2016MagneticAnisotropyKitaev}, where the ordered moments lie in the $a\ensuremath{{c^{\ast}}}$-plane, is illustrated in Fig.~\ref{RuCl3-1}(a). Note that the minor transition apparent for $\mathbf{H}\parallel{a}$ at $T$=14\,K is indicative of the so-called ABC/ABAB-stacking faults~\cite{Sears2015PRB,Banerjee2016NatMat}. For fields applied perpendicular to the honeycomb plane ($\mathbf H \parallel \ensuremath{{c^{\ast}}}$), a much smaller susceptibility is found, highlighting the strong easy-plane anisotropy in \mbox{$\alpha$-RuCl$_3$}\xspace. This is further resolved in \cref{RuCl3-1}(e) and \ref{RuCl3-1}(f), where the field is rotated within the $a\ensuremath{{c^{\ast}}}$-plane or $b\ensuremath{{c^{\ast}}}$-plane, respectively (cf.~\cref{RuCl3-1}(b)), at constant field strength and temperature $T$=2\,K. Corresponding theoretical $T$=0\,K results within the magnetic minimal model (see \cref{sec:theory}) agree well with the measurement, see dashed lines in \cref{RuCl3-1}(e,f). In the case of the magnetic properties, the easy-plane anisotropy is primarily facilitated by the strong $\Gamma$-term and the anisotropic $g$-tensor~\cite{janssen2017MagnetizationProcessesZigzag,Riedl2019PRL}.
Note that the theoretical curves in Figs.~\ref{RuCl3-1}(e) and \ref{RuCl3-1}(f) are identical, while the experimental curves differ between $\mathbf{H}\in{ac^*}$ and $\mathbf{H}\in{bc^*}$. This is explained by our $C_3$-symmetrized model, which suppresses the \textit{in-plane} anisotropy, whereas real \mbox{$\alpha$-RuCl$_3$}\xspace is monoclinic. Therefore, the agreement between theory and experiment is excellent only in \cref{RuCl3-1}(f); nevertheless, we obtain overall semi-quantitative agreement. \Cref{RuCl3-1}(d) shows the field-angular dependence of the magnetization within the $ab$ honeycomb plane for a moderate field of $\mu_0H$=2\,T. In the angular dependence, components with clear 2-fold and 6-fold symmetries are identified. Assuming a honeycomb lattice with $C_6$ symmetry, as present in the proposed $R\bar 3$ structure of \mbox{$\alpha$-RuCl$_3$}\xspace, only an angular dependence with 6-fold symmetry is expected. A spontaneous selection of single-domain zigzag magnetic order can break the 6-fold symmetry and give a component with 2-fold symmetry. However, the same angular dependence of $M$ as shown in \cref{RuCl3-1}(d) is reproduced for every measurement, even after heating the sample to room temperature, well above $T_{\mathrm N}$. Hence, the preferred zigzag domain selection with respect to the crystallographic axes is consistent, and is probably related to the crystal structure, compatible with the suggested monoclinic $C2/m$ space group. Accordingly, the zero-field magnetic Hamiltonian energetically favors certain zigzag domains out of the three possible domain directions. From the measured angular preference we infer, analogously to Ref.~\cite{lampenkelley2018fieldinduced}, that in our sample the dominant domain at low field is that with ordering wave vector $\mathbf Q \parallel b$ (as illustrated in \cref{RuCl3-1}(a)).
While this domain is expected to stay stable at finite fields $\mathbf H \parallel b$, a re-orientation to the other zigzag domains is expected at an intermediate field when $\mathbf H \parallel a$ \cite{winte18,lampenkelley2018fieldinduced}. \begin{figure} \includegraphics[width=8.5truecm]{RuCl3_MS-H_T_85_v02.pdf} \caption{(Color online) Magnetic field dependence of the $\lambda_{c^*}$ linear magnetostriction coefficient at selected temperatures. The $\Delta{L}$ length change was measured along the $c^*$ axis, and the magnetic field $\mathbf{H}\parallel{a}$ was applied perpendicular to one of the Ru-Ru bonds.} \label{RuCl3-2} \end{figure} Figure~\ref{RuCl3-2} shows the experimental $c^*$-axis magnetostriction ($\lambda_{c^*}$, \cref{eq:magnetostrictionexp}) as a function of magnetic field $\mathbf H \parallel a$ for selected temperatures within the ordered regime, the short-range correlated Kitaev paramagnet, and the conventional thermal paramagnet. The magnetostrictions measured in increasing and decreasing fields were found to be identical within the accuracy of the measurement. At $T$=3\,K, the magnetostriction $\lambda_{c^*}$ has a positive peak at low fields and a sharp negative double-peak structure at higher fields. The positive peak at $\mu_0H_0$=0.7\,T corresponds to the aforementioned domain re-population of the antiferromagnetic order, and it is present in both the field-increasing and -decreasing runs. We resolve two sharp negative peaks at $\mu_0H_1$=6.4\,T and $\mu_0H_2$=7.2\,T. The former is a phase transition at $\mu_0H_1$ where the inter-plane ordering between the zigzag-ordered honeycomb planes changes \cite{Balz2021intermediate}, while the latter at $\mu_0H_2$ is the transition where the zigzag magnetic order disappears.
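As an illustration of how these critical fields are read off, $\lambda_{c^*}$ follows from \cref{eq:magnetostrictionexp} by numerical differentiation of the relative length change, and $\mu_0H_1$, $\mu_0H_2$ are then located as the negative peaks. A minimal sketch with a synthetic stand-in trace (the numbers below are hypothetical, not the measured data):

```python
import numpy as np

# Synthetic stand-in for a measured lambda(H) trace at T = 3 K:
# two sharp negative peaks at the transition fields (hypothetical shape)
H = np.linspace(0.0, 14.0, 1401)                    # mu_0 H in T
lam = (-5.0 * np.exp(-((H - 6.4) / 0.1) ** 2)
       - 7.0 * np.exp(-((H - 7.2) / 0.1) ** 2))     # in 10^-6 / T

# Local minima: samples lower than both neighbours
i = np.arange(1, len(H) - 1)
minima = i[(lam[i] < lam[i - 1]) & (lam[i] < lam[i + 1])]
H_crit = H[minima]
print(H_crit)   # -> approximately [6.4, 7.2]
```

On real data, $\Delta L/L$ would first be differentiated (e.g. with `np.gradient`) before the peak search.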
In agreement with Refs.~\onlinecite{lampenkelley2018fieldinduced,bachus2021angle,Balz2021intermediate}, the field range between these two transitions is the largest for $\mathbf H \parallel a$ and the smallest or absent for $\mathbf H \parallel b$ (cf.~\cref{RuCl3-3}(a)). At $T$=5\,K, the double-peak structure merges into a single, negative peak at $\mu_0H_1$=6.3\,T. Above $T_{\rm N}=7.1$\,K, the sharp peaks of the low-temperature magnetostriction are replaced by broad field-dependent features. For $T\gtrsim 30$\,K, the magnetostriction shows a linear field dependence, as expected for a conventional thermal paramagnet~\cite{Johannsen2005PRL}. In contrast, for intermediate temperatures $T_{\rm N} < T \lesssim 30$\,K, we find the magnetostriction to show a non-linear and non-monotonic field dependence. This appears to be a property of the short-range correlated Kitaev paramagnet~\cite{Do2017NatPhys,Jansa2018NatPhys,winte18,suzuki2020ProximateFerromagneticState} in this temperature range. \begin{figure*}[t!] \includegraphics[width=17truecm]{RuCl3_lambda_theta_170_v02.pdf} \caption{(Color online) (a) Field dependence of the linear magnetostriction coefficient $\lambda_{c^*}$ at $T$=3\,K. The magnetic field was applied along the main crystallographic axes, as well as canted out of the $ab$ plane with projection perpendicular to the bond ($\mathbf{H}\in ac^*$). The $\vartheta$=0\,deg and 90\,deg angles correspond to the $a$ and $c^*$ axes, respectively. The two peaks in the $\lambda_\ensuremath{{c^{\ast}}}$ data correspond to ($\mu_0 H_1$) a phase transition between different zigzag interplane orderings and ($\mu_0 H_2$) a transition into the field-induced quantum paramagnetic phase. The $\mu_0H_1$ and $\mu_0H_2$ phase boundaries are indicated by triangles. (b) Angular dependence of the $\mu_0H_1$ and $\mu_0H_2$ phase transition fields. The dashed lines indicate the $\sim{1}/\cos{\vartheta}$ field dependence.
(c) The field dependence of the $\lambda_{c^*}/\kappa_{c^*}$ magnetostriction calculated for the $H$ field canted out of the $ab$ plane by an angle $\vartheta$, $\mathbf{H}\in ac^*$. The inset defines the field angle $\vartheta$. Note that panels (a) and (c) are shown for different field scales. (d) Magnetic field dependence of the calculated $\tau$ magnetic torque for selected $\vartheta$ canting angles, $\mathbf{H}\in ac^*$. (e) The effect of magnetic torque on the field-dependence of the magnetostriction is modelled with the $\lambda_{c^*}/\kappa_{c^*} + A\cdot\vert\tau\vert$ relation with the same $A$=2.2$\cdot$10$^5$\,Pa$\cdot$Rad / (T$^2\cdot\mu_B$/f.u.) parameter used for all curves. } \label{RuCl3-3} \end{figure*} The $\lambda_{c^*}$ magnetostriction with fields applied along the main crystallographic axes, as well as for $\mathbf H$ canted out of the $ab$ plane, is shown in Fig.~\ref{RuCl3-3}(a). The experimental configuration and the definition of the $\vartheta$ canting angle are illustrated in Fig.~\ref{RuCl3-3}(c): the $H$ field is canted away from the $a$ axis by an angle $\vartheta$ within the $a\ensuremath{{c^{\ast}}}$-plane. The measurements with $\vartheta$=0\,deg and 90\,deg correspond to $\mathbf{H}\parallel{a}$ and $\mathbf{H}\parallel{c}^*$, respectively. For better comparison and for the sake of completeness, we present the $\mathbf{H}\parallel b$ data reproduced from Ref.~\cite{Gass2020PRB} (dashed pink line in Fig.~\ref{RuCl3-3}(a)). The $\lambda_{c^*}$ magnetostriction for $\mathbf{H}\parallel b$ has one single negative peak at $\mu_0H$=7.5\,T. Unlike $\lambda_\ensuremath{{c^{\ast}}}$ for $\mathbf H\parallel a$, no significant domain re-orientation at low fields is visible, as expected for the identified dominant \mbox{$\mathbf Q\parallel b$} zigzag domain (Fig.~\ref{RuCl3-1}(a)). In contrast to the in-plane field results, the magnetostriction for $\mathbf{H}\parallel{c}^*$ is small and shows a weak, non-monotonic field dependence.
The $\mu_0H_1$ and $\mu_0H_2$ critical fields of the two peaks in the magnetostriction data approximately follow a simple $\sim{1}/\cos{\vartheta}$ angular dependence, as shown in Fig.~\ref{RuCl3-3}(b). Such an angular dependence is expected if the phase transitions are entirely driven by the in-plane component of the magnetic field. We note that the $\mu_0H_0$ critical field does not follow the $\sim{1}/\cos{\vartheta}$ angular dependence. This deviation suggests that the reorientation of the three differently oriented zigzag domains in the presence of canted fields involves a non-trivial energetic competition. Theoretical calculations for the $\lambda_{c^*}/\kappa_{c^*}$ magnetostriction are shown in Fig.~\ref{RuCl3-3}(c) for the same field configurations as in Fig.~\ref{RuCl3-3}(a). The calculated magnetostrictions for fields applied along the main crystallographic axes ($a,b,\ensuremath{{c^{\ast}}}$) qualitatively reproduce the measured data. In the experiments, we ascribed the measured $\mu_0H_0$=0.7\,T peak to the zigzag domain reorientation. However, in our $C_3$-symmetrized model, there is no preferred domain orientation at $\mu_0H$=0\,T, and therefore no reorientation is expected. Moreover, due to the restriction to a two-dimensional finite cluster in the calculations, the results are limited in the reproduction of the lower-field peak $\mu_0H_1$ for $\mathbf{H}\parallel a$ (related to the inter-layer re-ordering~\cite{Balz2021intermediate}), and peaks at phase transitions are generally expected to be broadened. When the magnetic field is tilted out of the $ab$ plane, $\mathbf{H}\in ac^*$, the theoretical calculations show that the peaks in $\lambda_{c^*}/\kappa_{c^*}$ become smaller and appear at higher fields. While the experimental data in Fig.~\ref{RuCl3-3}(a) for $\vartheta$=30\,deg and 45\,deg retain the double-peak-like features in $\lambda_{c^*}$, they differ significantly from the calculations.
In contrast to the theory, the measured $\lambda_{c^*}$ magnetostriction for $\vartheta$=30\,deg and 45\,deg changes sign at intermediate field strengths due to a large positive component added to the measurement. We attribute the observed anomalous component to magnetic torque effects. When the magnetic torque is strong, it can rotate, bend, and deform the \mbox{$\alpha$-RuCl$_3$}\xspace crystal within the dilatometer, as discussed in the Supplementary Material, Fig.~S2. Theoretical calculations for the magnetic torque $\tau=\frac{dF}{d\vartheta}$ ($F$ being the free energy) for $\mathbf{H}\in ac^*$ are shown in Fig.~\ref{RuCl3-3}(d). The torque, facilitated by $\Gamma$-exchange and $g$-anisotropy \cite{Riedl2019PRL}, is small for fields along the main crystallographic axes ($\vartheta=0$\,deg and $\vartheta=90$\,deg), but becomes large for intermediate canting angles, where it strongly increases with field strength. Although the $H$ field points along a main crystallographic axis for $\vartheta$=0\,deg ($\mathbf{H}\parallel a$), a small but nonzero torque nevertheless persists. Note that no symmetry in the Hamiltonian requires the torque to be maximal at $\vartheta$=45\,deg. While the presently employed model parameters predict the magnetic torque to reach its maximum close to $\vartheta$=45\,deg in Fig.~\ref{RuCl3-3}(d), a smaller $g$-tensor anisotropy can further decrease the canting angle needed for maximum torque. This can explain why the positive contribution in the $\lambda_{c^*}$ magnetostriction measurement is larger for $\vartheta$=30\,deg than for $\vartheta$=45\,deg in Fig.~\ref{RuCl3-3}(a). This further demonstrates that the effect of magnetic torque on magnetostriction measurements is a complex issue, which depends on the spring constant, pressure setting, and dimensions of the dilatometer, as well as the dimensions and elastic constants of the sample.
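The torque $\tau=\mathrm{d}F/\mathrm{d}\vartheta$ entering this discussion can be illustrated with a toy easy-plane free energy (a hypothetical linear-response form chosen only to mimic the qualitative behavior of Fig.~\ref{RuCl3-3}(d); it is not the ED free energy of this work):

```python
import numpy as np

# Toy anisotropic free energy: F = -1/2 chi_ab (H cos t)^2 - 1/2 chi_c (H sin t)^2
chi_ab, chi_c = 6.0, 1.0                      # arbitrary units, chi_ab > chi_c

def free_energy(theta, H):
    return (-0.5 * chi_ab * (H * np.cos(theta)) ** 2
            - 0.5 * chi_c * (H * np.sin(theta)) ** 2)

def torque(theta, H, d=1e-6):
    # tau = dF/dtheta via a central finite difference
    return (free_energy(theta + d, H) - free_energy(theta - d, H)) / (2.0 * d)

angles = np.deg2rad([0.0, 30.0, 45.0, 90.0])
tau = torque(angles, 10.0)
# tau vanishes along the crystal axes, peaks at 45 deg for this toy model,
# and grows as H^2 -- the qualitative trend of the calculated torque
```

Analytically this toy gives $\tau=(\chi_{ab}-\chi_{c})H^2\sin\vartheta\cos\vartheta$; in the full model the maximum can shift away from 45\,deg, as discussed above.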
For small rotations (deformations), it is reasonable to assume that the change in the magnetostriction is linear in the torque, $\Delta\lambda_{c^*}/\kappa_{c^*}\sim{A}\cdot\vert\tau\vert$, where $A$ is a constant that depends on the material, the measurement setup, and the pressure setting, but is independent of the field magnitude and angle. Figure~\ref{RuCl3-3}(e) illustrates the modified magnetostriction, calculated with the $\lambda_{c^*}/\kappa_{c^*}+{A}\cdot\vert\tau\vert$ relation, where $A$=2.2$\cdot$10$^5$\,Pa$\cdot$Rad / (T$^2\cdot\mu_B$/f.u.) is a fixed value for all curves. The $A$ parameter was fitted to the $\vartheta$=45\,deg data with the highest magnetic torque $\tau$, so that $\lambda_{c^*}/\kappa_{c^*}(H^*) + A\cdot\vert\tau\vert=0$ is satisfied for the theoretical data at the same $H^*$ field as the measured magnetostriction ($\lambda_{c^*}(H^*)$=0). This qualitatively demonstrates that the strong positive contributions to $\lambda_\ensuremath{{c^{\ast}}}$ in the measurements with canted fields ($\vartheta$=+30\,deg and +45\,deg) are related to the rotational effect of the magnetic torque, and are not intrinsic to the sample. Note, however, that even with these efforts, the effect of torque cannot be removed from the measurement data in a quantitative manner. Focusing back on the crystallographic axes $a$ ($\vartheta$=0\,deg) and $\ensuremath{{c^{\ast}}}$ ($\vartheta$=90\,deg), where torque effects are expected to be much weaker, we point out a much stronger anisotropy in $\lambda_\ensuremath{{c^{\ast}}}$ than expected from the magnetization anisotropy~\cite{Johannsen2005PRL}. In principle, due to the Maxwell relation $\lambda_\ensuremath{{c^{\ast}}} = - \partial M / \partial p_\ensuremath{{c^{\ast}}}$, one can expect $\lambda_\ensuremath{{c^{\ast}}}$ to be roughly proportional to the magnetization $M$ at small field strengths.
However, this does not explain the observed angular dependence of $\lambda_\ensuremath{{c^{\ast}}}$: while the magnetization for $\vartheta$=90\,deg is already reduced by a factor of $\sim 6$ to 10 compared to $\vartheta$=0\,deg (cf.~\cref{RuCl3-1}(e)), this alone cannot account for the much larger $\sim 30$-fold reduction of the magnetostriction between $\vartheta$=90\,deg and $\vartheta$=0\,deg (cf.~\cref{RuCl3-3}(a)). This increased anisotropy effect is also reproduced in our model calculations (\cref{RuCl3-3}(c)). In the calculations, we can trace the unusual reduction in magnetostriction back to contributions from different magnetoelastic couplings, i.e.~from different summands in \cref{eq:lambda_theory}. The largest entering magnetoelastic couplings $\MEC{\mathcal J}$ are the nearest-neighbor anisotropic couplings $\MEC K, \MEC \Gamma, \MEC \Gamma'$, which are field-independent. The field-strength and field-direction dependence enters through the susceptibilities $\partial M / \partial \mathcal J$. \Cref{fig:dissection}(a) shows the largest summands ($\MEC{\mathcal J} \cdot \partial M / \partial \mathcal J$) as a function of in-plane field $\mathbf H \parallel a$ ($\vartheta$=0\,deg), demonstrating a dominant contribution from the $\mathcal J = \Gamma'$ term. The large susceptibility $\partial M/\partial\Gamma'$ for in-plane fields can be understood from the fact that $\Gamma'$ is the exchange that most strongly tunes the easy-plane anisotropy of \mbox{$\alpha$-RuCl$_3$}\xspace \cite{maksimov2020RethinkingRuCl}. However, for out-of-plane fields $\mathbf H \parallel \ensuremath{{c^{\ast}}}$, variations in $\Gamma'$ have little effect on the magnetization. Therefore, the large $\MEC{\Gamma'}$ contribution drops off for $\mathbf H \parallel \ensuremath{{c^{\ast}}}$, leading to a much smaller $\lambda_\ensuremath{{c^{\ast}}}$, as shown in \cref{fig:dissection}(b).
The agreement with experiment therefore confirms the presence of a strong \textit{negative} magnetoelastic $\MEC{\Gamma'}$ coupling in \mbox{$\alpha$-RuCl$_3$}\xspace\ (see \cref{eq:MECmodel}). Note that the large $\MEC\Gamma'<0$ suggests that the application of 3\% to 5\% compressive uniaxial \ensuremath{{c^{\ast}}}-strain may destabilize the zigzag magnetic order~\cite{kaib2021magnetoelastic}. While measurements of \mbox{$\alpha$-RuCl$_3$}\xspace under hydrostatic pressure show dimerization~\cite{Biesner2018PRB,Bastien2018PRB}, the application of uniaxial strain leads to fundamentally different lattice deformations. As an example, compression along the \ensuremath{{c^{\ast}}}~axis expands the lattice within the $ab$ plane, in contrast to the application of hydrostatic pressure, which compresses both the \ensuremath{{c^{\ast}}}~axis and the honeycomb $ab$ plane. Another estimate for the uniaxial pressure dependence of $T_{\rm N}$ comes from the Ehrenfest relation~\cite{Johannsen2005PRL}: \begin{equation} \frac{\partial{T}_{\rm N}}{\partial{p}_\ensuremath{{c^{\ast}}}}=V_{mol} T_{\rm N} \frac{\Delta{\alpha}_\ensuremath{{c^{\ast}}}}{\Delta{C}_p}, \end{equation} where $p_\ensuremath{{c^{\ast}}}$ is the uniaxial pressure applied along the \ensuremath{{c^{\ast}}}~axis, $V_{mol}$ is the molar volume, and $\Delta{C}_p$ and $\Delta{\alpha}_\ensuremath{{c^{\ast}}}$ are the heights of the anomalies in the specific heat and thermal expansion at $T_{\rm N}$, respectively. Using the specific heat data $\Delta{C}_p$=3.0\,J/mol/K from Ref.~\onlinecite{Wolter2017PRB}, $\Delta{\alpha}_\ensuremath{{c^{\ast}}}$=$-$7$\cdot$10$^{-5}$\,1/K from Ref.~\onlinecite{Gass2020PRB}, and $V_{mol}$=5.26$\cdot$10$^{-5}$\,m$^3$/mol, we obtain $\frac{\partial{T}_{\rm N}}{\partial{p}_\ensuremath{{c^{\ast}}}}\approx-$8.8\,K/GPa. This means that a noticeable change in $T_{\rm N}$ can be obtained under experimentally achievable conditions~\cite{Nakajima2015PRL}.
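As a quick arithmetic check (not new analysis), plugging the quoted values into the Ehrenfest relation reproduces the estimate above; the small deviation from $-$8.8\,K/GPa reflects rounding of the quoted inputs:

```python
# Ehrenfest estimate of dT_N/dp_c* from the values quoted in the text
V_mol   = 5.26e-5   # molar volume, m^3/mol
T_N     = 7.1       # Neel temperature, K
d_alpha = -7e-5     # jump of the thermal-expansion coefficient, 1/K
d_Cp    = 3.0       # jump of the specific heat, J/(mol K)

dTdp = V_mol * T_N * d_alpha / d_Cp     # in K/Pa
print(dTdp * 1e9)                       # -> about -8.7 K/GPa
```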
\begin{figure*} \includegraphics[width=17truecm]{RuCl3_lambda_dissection_170_v03.pdf} \caption{(Color online) Dissection of the largest contributions to the theoretical magnetostriction through \cref{eq:lambda_theory}. Each curve corresponds to a component of \cref{eq:lambda_theory} with a different $\mathcal J$. The notation $K$, $J$, $\Gamma$, $\Gamma'$, and $\mathbb{G}$ corresponds to the summand components $\frac{1}{V} \MEC K \left(\frac{\partial M}{\partial{K}}\right)$, $\frac{1}{V}\MEC J\left(\frac{\partial M}{\partial{J}}\right)$, $\frac{1}{V}\MEC \Gamma\left(\frac{\partial M}{\partial{\Gamma}}\right)$, $\frac{1}{V}\MEC \Gamma'\left(\frac{\partial M}{\partial{\Gamma'}}\right)$, and $\frac{1}{V}\widetilde{\mathbb{G}}\left(\frac{\partial M}{\partial{\mathbb{G}}}\right)$, respectively. The line with $\mathcal J = \mathbb G$ corresponds to the magnetoelastic coupling with the $g$-tensor. (a)~In-plane field $\mathbf H \parallel a$ ($\vartheta$=0\,deg), (b)~out-of-plane field $\mathbf H \parallel \ensuremath{{c^{\ast}}}$ ($\vartheta$=90\,deg). } \label{fig:dissection} \end{figure*} \begin{figure*}[t!] \includegraphics[width=17truecm]{RuCl3_pm45deg_MH-lambdaH_170_v02.pdf} \caption{(Color online) (a) Magnetic field dependence of the magnetization at $T$=2\,K for $\mathbf{H}\in ac^*$ with $\vartheta$=$\pm$45\,deg canting out of the $ab$ plane. (b) Magnetic field dependence of the field derivative, and (c) field dependence of the magnetization difference $\Delta{M}$ between the $\vartheta$=$+$45\,deg and $-$45\,deg measurements ($\Delta{M}$=$M_\mathrm{+45\,deg}-M_\mathrm{-45\,deg}$). The phase transitions for $\mathbf{H}\in ac^*$ are indicated by triangles as peaks in $dM/dH$. (d-f) Magnetic field dependence of the magnetization, field derivative, and $\Delta{M}$ at $T$=2\,K for $\mathbf{H}\in bc^*$, $\vartheta$=$\pm$45\,deg.
(g) Magnetic field dependence of the magnetostriction coefficient $\lambda_{c^*}$ at $T$=3\,K for $\mathbf{H}\in ac^*$ and $\mathbf{H}\in bc^*$ measured in the field-increasing runs with $\vartheta$=$\pm$45\,deg canting angles. Three $\lambda_{c^*}$-$H$ curves are shown for each configuration, with numerals indicating the order of the measurements (the complete list is shown in the supplementary material~\cite{Kocsis2021PRB2SM}, Fig.~\ref{RuCl3-1Suppl}). The experimental conditions are illustrated as insets in panels (a) and (d). While panels (a,b,d,e,g) show measurements for the field-increasing runs, panels (c,f,h) show the measurements for both the field-increasing and field-decreasing runs. } \label{RuCl3-4} \end{figure*} While the magnetostriction measurements under canted magnetic fields are strongly affected by the magnetic torque, the magnetization measurements are unaffected, as the sample is firmly fixed to a rigid sample holder. Fig.~\ref{RuCl3-4} reveals an interesting anisotropy found in the magnetization measurements for fields rotated out of the $ab$ plane in opposite directions (i.e.~towards $+\ensuremath{{c^{\ast}}}$ or $-\ensuremath{{c^{\ast}}}$). Magnetization measurements at $T$=2\,K for $\mathbf{H}\in ac^*$ and $\mathbf{H}\in bc^*$, canted out of the $ab$ plane by $\vartheta$=$\pm$45\,deg, are shown in Figs.~\ref{RuCl3-4}(a-c) and \ref{RuCl3-4}(d-f), respectively. Figures~\ref{RuCl3-4}(a,d), \ref{RuCl3-4}(b,e), and \ref{RuCl3-4}(c,f) show the $H$-field dependence of the magnetization, the field derivative, and the field dependence of the magnetization difference $\Delta{M}$ between the $\vartheta$=$+$45\,deg and $-$45\,deg measurements ($\Delta{M}$=$M_\mathrm{+45\,deg}-M_\mathrm{-45\,deg}$), respectively. For $\mathbf{H}\in ac^*$, in Fig.~\ref{RuCl3-4}(a), the field dependence of the magnetization for $\vartheta$=$+$45\,deg and $-$45\,deg shows clear differences above $\mu_0H$=8\,T, while for $\mathbf{H}\in bc^*$ only small differences appear.
Moreover, peaks in the field derivative of the magnetization (Fig.~\ref{RuCl3-4}(b)) indicate two phase transitions for the $\vartheta$=$+$45\,deg measurement, at $\mu_0H_1$=8.6\,T and $\mu_0H_2$=10.3\,T, while in the $\vartheta$=$-$45\,deg measurement only one peak is seen, at $\mu_0H'_2$=10.9\,T. Note that the field derivatives in the $\mathbf{H}\in bc^*$ measurements are similar to those of the $\mathbf{H}\in ac^*$ experiments; however, here the $\vartheta$=$-$45\,deg measurement has two peaks in $dM/dH$ and the $\vartheta$=$+$45\,deg measurement has one, at slightly different fields. The magnetization difference $\Delta{M}$ in Fig.~\ref{RuCl3-4}(c) shows a shoulder-like magnetization change starting from $\mu_0H$=8.5\,T and a peak at around $\mu_0H$=10.7\,T. The magnetization difference $\Delta{M}$ for $\mathbf{H}\in bc^*$ in Fig.~\ref{RuCl3-4}(f) is about 4 times smaller and has the opposite sign compared to that for $\mathbf{H}\in ac^*$. For comparison, Fig.~\ref{RuCl3-4}(g) shows the field dependence of the $\lambda_{c^*}$ magnetostriction for $\mathbf{H}\in ac^*$ and $\mathbf{H}\in bc^*$ fields canted out of the $ab$ plane by $\vartheta$=$\pm$45\,deg for the field-increasing runs. The complete set of measurements for the field-increasing and field-decreasing runs is shown in Fig.~\ref{RuCl3-1Suppl}, while additional measurements on sample $\#$1 are shown in Fig.~\ref{RuCl3-3Suppl}~\cite{Kocsis2021PRB2SM}. During the $\lambda_{c^*}$-$H$ measurements, the $H$ field was swept between $\pm$14\,T several times, then the sample was removed and rotated to the next measurement configuration. Curves labeled as field up and field down refer to measurements in $H$ fields with increasing or decreasing magnitudes, respectively.
In Figs.~\ref{RuCl3-4}(g), \ref{RuCl3-1Suppl}, \ref{RuCl3-2Suppl}, and \ref{RuCl3-3Suppl} we show three $\lambda_{c^*}$-$H$ curves to demonstrate the signal-to-noise level, while Fig.~\ref{RuCl3-3Suppl} addresses the reproducibility when the $\vartheta$ canting angle is changed. Note that these magnetostriction measurements in such canted fields show significant hysteresis. However, the difference between the +45\,deg and -45\,deg magnetostriction, $\Delta\lambda_{c^*}$=$\lambda_{c^*\mathrm{,+45\,deg}}-\lambda_{c^*\mathrm{,-45\,deg}}$ (Fig.~\ref{RuCl3-4}(h)), is found to be rather independent of the direction of the field sweep. Similarly to the magnetization measurements, the magnetostriction shows a significant difference $\Delta\lambda_\ensuremath{{c^{\ast}}}$ for $\mathbf H \in a\ensuremath{{c^{\ast}}}$, but only a small one for $\mathbf H \in b\ensuremath{{c^{\ast}}}$. Furthermore, we also observe a shoulder above $\mu_0H$=8\,T and a peak at $\mu_0H$=10.4\,T for $\mathbf{H}\in ac^*$. Note that the $\Delta{M}$ and $\Delta\lambda_{c^*}$ curves show a different field dependence at low fields, which is related to torque effects not compensated by the subtraction. \begin{figure*} \includegraphics[width=15.5truecm]{RuCl3_theory_MH-lambdaH_155_v05.pdf} \caption{(Color online) (a) Schematic illustration of the measurement configurations for each experiment. For $\mathbf{H}\in ac^*$, $\vartheta$=$+$45\,deg and $\vartheta$=$-$45\,deg, the net magnetization points along the vertices and the edges of the Cl$_6$ octahedra, respectively. In both cases of $\mathbf{H}\in bc^*$, $\vartheta$=$\pm$45\,deg, the net magnetization points along the side of the RuCl$_6$ octahedra; these configurations are connected by the C$_2$ rotation in the honeycomb plane. (b) Magnetic field dependence of the magnetization and (c) magnetization difference $\Delta{M}$ calculated for $\mathbf{H}\in ac^*$ and $\mathbf{H}\in bc^*$ with $\vartheta$=$\pm$45\,deg canting out of the $ab$ plane.
(d) Magnetic field dependence of the magnetic torque $\tau$ for $\mathbf{H}\in ac^*$, $\vartheta$=$\pm$45\,deg. (e) Magnetic field dependence of the $\lambda_{c^*}/\kappa_{c^*}$ magnetostriction and (f) $\Delta\lambda_{c^*}/\kappa_{c^*}$ magnetostriction difference, calculated for $\mathbf{H}\in ac^*$ and $\mathbf{H}\in bc^*$, $\vartheta$=$\pm$45\,deg. The effect of the magnetic torque on the $\Delta\lambda_{c^*}/\kappa_{c^*}$ magnetostriction difference is illustrated in panel (f), where the torque is scaled to units of $\lambda_{c^*}/\kappa_{c^*}$ with $A$=2.2$\cdot$10$^5$\,Pa$\cdot$Rad / (T$^2\cdot\mu_B$/f.u.). } \label{RuCl3-5} \end{figure*} Theoretical calculations for the field dependence of the magnetization are shown in Fig.~\ref{RuCl3-5}(b), with the experimental configurations illustrated in Fig.~\ref{RuCl3-5}(a) for $\mathbf{H}\in ac^*$ as well as for $\mathbf{H}\in bc^*$ with fields canted out of the $ab$ plane by $\vartheta$=$\pm$45\,deg. Note that the theoretical calculations are plotted on a wider field range than that of the measurements. In line with the experimental observations, the theoretical calculations confirm the different $M$-$H$ curves between the $\vartheta$=$+$45\,deg and $-$45\,deg canting angles for $\mathbf{H}\in ac^*$, and the calculated $\Delta{M}$ is shown in Fig.~\ref{RuCl3-5}(c). This non-symmetric difference in the magnetization is related to the orientation of the Ru$^{3+}$ magnetic moment within the Cl$_6$ octahedra, schematically illustrated in Fig.~\ref{RuCl3-5}(a). For $\vartheta$=$+$45\,deg and $-$45\,deg angles with $\mathbf{H}\in ac^*$, the net magnetization points roughly towards the top vertex, or towards the midpoint of the edge of the RuCl$_6$ octahedra, respectively. For $\mathbf{H}\in bc^*$, the calculated $M$-$H$ curves are exactly the same for the $\vartheta$=$+$45\,deg and $-$45\,deg cases. In these cases, the Ru$^{3+}$ magnetic moments point sideways, towards another edge of the RuCl$_6$ octahedra.
Both cases are connected by the C$_2$ rotation symmetry around the $b$ axis and therefore yield an identical response in an ideal crystal. The small $\Delta{M}$ difference observed for the $\mathbf{H}\in bc^*$ measurements can be related to twinning faults, where the honeycomb layers are rotated by 30\,deg with respect to each other. This twinning fault is a different structural defect from the earlier recognized ABC/ABAB-stacking faults~\cite{Sears2015PRB,Banerjee2016NatMat}. Calculations for the field dependence of the magnetic torque $\tau$, the magnetostriction $\lambda_{c^*}/\kappa_{c^*}$, and the magnetostriction difference $\Delta\lambda_{c^*}/\kappa_{c^*}$ for $\mathbf{H}\in ac^*$ and $\vartheta$=$\pm$45\,deg canting angles are shown in Figs.~\ref{RuCl3-5}(d), \ref{RuCl3-5}(e), and \ref{RuCl3-5}(f), respectively. Similarly to the experimental observations, the field dependence of $\lambda_{c^*}/\kappa_{c^*}$ is different for $\vartheta$=$+$45\,deg and $-$45\,deg, and $\Delta\lambda_{c^*}/\kappa_{c^*}$ is finite for $\mathbf{H}\in ac^*$. For $\mathbf{H}\in bc^*$ and $\vartheta$=$\pm$45\,deg, the $\lambda_{c^*}/\kappa_{c^*}$ curves are identical, similarly to the magnetization. In order to account for the effect of the magnetic torque on the experimental data, Fig.~\ref{RuCl3-5}(d) shows calculations for the magnetic torque $\tau$ for $\mathbf{H}\in ac^*$, $\vartheta$=$\pm$45\,deg. While the magnetic torque for both $\vartheta$=$\pm$45\,deg is large, the calculations show only slight differences in the magnitudes of the magnetic torques, \textit{i.e.}, $A\cdot\Delta\tau$=$A\cdot\vert\tau_{\rm +45\,deg}-\tau_{\rm -45\,deg}\vert$ is relatively small. Still, we find that the modelled $A\cdot\Delta\tau$ is comparable in magnitude to $\Delta\lambda_{c^*}/\kappa_{c^*}$, as shown in Fig.~\ref{RuCl3-5}(f) with the same scaling factor as in Fig.~\ref{RuCl3-3}(e) for $\vartheta$=$+$45\,deg.
This means that it is not possible to subtract the effect of torque on the measurements by simply measuring and subtracting the magnetostrictions in the $\pm\vartheta$ configurations. Therefore, we consider the experimentally observed $\Delta\lambda_{c^*}$ in Fig.~\ref{RuCl3-4}(h) as an aggregate of the real non-symmetric $\vartheta$=$\pm$45\,deg anisotropies in the magnetostriction and a finite magnetic torque. Additional theoretical calculations for the angular dependence of the magnetization for $\mathbf{H}\in ac^*$ and $\mathbf{H}\in bc^*$ are shown in Fig.~\ref{RuCl3-4Suppl}~\cite{Kocsis2021PRB2SM}. \section{Summary} \label{sec:Summary} We have studied the magnetic anisotropy in the Kitaev-candidate material \mbox{$\alpha$-RuCl$_3$}\xspace, using field-dependent magnetization and magnetostriction $\lambda_{c^*}$ measurements. During these measurements, the magnetic field was applied either along the main crystallographic axes or canted out of the honeycomb plane, while the length changes in the $\lambda_{c^*}$ experiments were always measured along the $c^*$ axis. The field dependence of the low-temperature $\lambda_{c^*}$-$H$ magnetostriction measurements shows a double-peak structure for $\mathbf H \parallel a$ (perpendicular to one of the Ru-Ru bonds) and a single peak for $\mathbf H \parallel b$ (parallel to that Ru-Ru bond). This is in agreement with the extent of the recently reported intermediate ordered phase with modified inter-plane ordering \cite{Balz2021intermediate,bachus2021angle}. We found that the $\lambda_{c^*}$-$H$ measurements show an unusually enhanced degree of field-angular anisotropy compared to the magnetization measurements ($\mathbf H \parallel a$ and $\mathbf H \parallel b$ experiments compared to $\mathbf H \parallel c^*$). This suggests an additional degree of anisotropy in the magnetoelastic couplings.
Our theoretical calculations based on \textit{ab-initio} derived magnetoelastic couplings show that this effect can be explained through the presence of a strong magnetoelastic $\MEC{\Gamma'}$-type coupling. The presence of the latter implies the possibility of destabilizing the magnetic order via the application of uniaxial compressive strain. Both the $M$-$H$ and $\lambda_{c^*}$-$H$ measurements in the presence of canted fields show large differences and demonstrate a significant angular asymmetry when fields are canted away from the $a$ axis towards the $+\ensuremath{{c^{\ast}}}$ or $-\ensuremath{{c^{\ast}}}$ axes ($\mathbf{H}\in{ac^*}$), i.e.~for positive or negative canting angles of the $H$ field. This angular asymmetry stems from the orientation of the $\mathbf H$ field with respect to the co-aligned RuCl$_6$ octahedra. However, we found that the magnetic torque has a strong influence on our magnetostriction measurements. From theory, the magnetic torque is expected to be large only for canted field directions. We confirmed that the magnetic torque can qualitatively account for the measured field dependence of the magnetostriction and can contribute to the difference between the magnetostrictions measured at positive and negative $\vartheta$ canting angles. This implies that, when performing or comparing experiments in canted magnetic fields where free-standing samples of different sizes are used, such as in the case of dilatometry or thermal Hall measurements, the magnetic torque may add relevant contributions to the measurements via plastic distortion or tilting of the crystals, owing to the very soft mechanical properties of \mbox{$\alpha$-RuCl$_3$}\xspace. \section*{Acknowledgements} The authors are grateful for fruitful discussions with Taro Nakajima, Lukas Janssen, and Matthias Vojta. The structural unit cell of \mbox{$\alpha$-RuCl$_3$}\xspace was illustrated using the software \texttt{VESTA}\cite{Momma2008}. D.~G.~M.
acknowledges support from the Gordon and Betty Moore Foundation’s EPiQS Initiative, Grant GBMF9069. S.~N. was supported by the U.S. Department of Energy Office of Science, Division of Scientific User Facilities. We acknowledge financial support from the German Research Foundation (DFG) through the Collaborative Research Center SFB 1143 (project-id 247310070), the W\"urzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, project-id 390858490), and funding through DFG Project No. 411289067 (VA117/15-1) and TRR 288-422213477 (project A05).
\section{Introduction} Heavy particles with masses above a TeV decaying to top quarks can lead to an enhanced cross section of top quarks compared to the Standard Model expectations. The fact that such a cross section can be more than doubled for top quarks with high transverse momenta ($p_{T}(\mathrm{top})$) was recognized \cite{PhysRevD.49.4454} almost immediately after the discovery of top quarks at the Tevatron. However, the Standard Model predictions for the top quark cross section have not yet been confronted with experimental data for transverse energies close to the TeV scale. According to the Standard Model, inclusive production of top quarks is dominated by the $t\bar{t}$ process. Top-quark production also includes contributions from single top quark processes ($t$- and $s$-channels) and from $Wt$. Top quarks can also be produced via associated Higgs production. Finally, top quarks at very large $p_{T}(\mathrm{jet})$ can originate from fragmentation, but no data exist to constrain this process. Currently, there are several high-$p_T$ measurements of top quarks focusing on the $t\bar{t}$ event topology. The D0 collaboration has reported the $t\bar{t}$ cross section up to $p_{T}(\mathrm{top})=350$~GeV \cite{Abazov:2010js}. The CDF collaboration \cite{cdftop} performed searches for highly-boosted top quarks, but the statistics were insufficient to claim an observation of top-quark production at $p_{T}(\mathrm{top})>400$ GeV. At the LHC, ATLAS performed \cite{:2012qa} searches for $Z'$ bosons extending the reach in $p_{T}(\mathrm{top})$ up to $500$~GeV, but without cross section measurements. CMS recently measured the top quark $p_T$ distribution up to $p_{T}(\mathrm{top})=400$~GeV \cite{:2012qka}. The measurement of top-quark cross sections at very large transverse momenta is challenging.
For large jet transverse momenta, the identification of leptons (muons and electrons) from the $W$ decay is difficult since they are often collimated with the $b$-jets from the top decays. This leads to a reduced electron efficiency due to isolation requirements and to large fake rates for muons due to the presence of $b$-quark decay products. In addition, the $b$-tagging technique suffers from inefficiency at large $p_{T}(\mathrm{jet})$ and from poor separation between the signal and multijet background events. For these reasons, the main focus of this analysis is on the hadronic final-state characteristics of jets, which are expected to be sensitive to the production of hadronically decaying top quarks with large $p_{T}(\mathrm{top})$. For such studies, jet masses and jet shapes are often discussed as a useful tool for the identification of top quarks and for the reduction of the overwhelming rate from conventional QCD processes \cite{Agashe:2006hk,*Lillie:2007yh,*Butterworth:2007ke,*Almeida:2008tp,*Almeida:2008yp, *Kaplan:2008ie,*Brooijmans:2008,*Butterworth:2009qa,*Ellis:2009su,*ATL-PHYS-PUB-2009-081,*CMS-PAS-JME-09-001,*Almeida:2010pa,*Hackstein:2010wk,Chekanov:2010vc,*Chekanov:2010gv}. In this paper, we adopt a strategy based on a high-precision measurement of the shapes of jet masses. Using realistic Monte Carlo (MC) simulations after a fast detector simulation, we show that hadronic decays of highly-boosted top quarks can be observed by performing a data-driven analysis of jet-mass shapes near the 170 GeV region, without any additional technique involving jet substructure variables. This article shows that this method becomes feasible if the top-quark yield in the fiducial region $p_{T}(\mathrm{top})>0.8$~TeV is a factor of two or more larger than the expectation from the best understood $t\bar{t}$ process.
Given the large theoretical uncertainties for the $t\bar{t}$ process at large $p_{T}(\mathrm{top})$ and a number of other poorly understood sources (see Sect.~\ref{sec:theory}) contributing to top quark production at large $p_{T}(\mathrm{top})$, this approach can be promising for the observation of inclusively produced top quarks. Moreover, we also demonstrate that $b$-tagging can substantially increase the signal-over-background ratio, leading to the observation of top jets from $t\bar{t}$. \subsection{Theoretical calculations for inclusive top production} \label{sec:theory} \begin{figure} \begin{center} \includegraphics[scale=0.38, angle=0]{top_pt_summary_theory} \end{center} \caption{ The NLO and aNNLO cross sections for the number of top quarks in the $t\bar{t}$ process as a function of the transverse-momentum cut for $|\eta|<0.8$. The hatched area shows the renormalization scale uncertainty for NLO, while the filled green area shows the PDF uncertainty (see the text of Sect.~\ref{sec:theory}). The dashed line shows the NLO cross section for $\sqrt{s}=8$~TeV. } \label{fig:nlo} \end{figure} There are several Standard Model processes contributing to inclusive top-quark production at large $p_{T}(\mathrm{jet})$. The best studied is the $t\bar{t}$ process, in which each boosted top quark gives rise to a jet. Single-top production ($t$- and $s$-channels) and top-quark associated production are other sources of top-quark jets. In these cases, no second jet originating from a hadronically decaying top quark is expected. Top quarks within a single high-$p_T$ jet can also be produced via flavor-changing processes and fragmentation. Finally, new resonance physics most readily contributes to the high-$p_T$ region. For the present analysis, the theoretical calculation for high-$p_T$ top quarks was performed at next-to-leading order (NLO) using the {\tt MCFM} 6.3 program \cite{Campbell201010} based on the CT10 parton density functions (PDF) \cite{Lai:2010vv}.
The renormalization ($\mu_R$) and factorization ($\mu_F$) scales were varied between $m(\mathrm{top})/2$ and $3\,m(\mathrm{top})/2$, keeping the two scales equal. The PDF uncertainty was calculated from the 53 CT10 PDF sets. A cross-check was performed with the {\tt POWHEG} program \cite{nason2007manual}, which uses a $p_T$-dependent (dynamic) scale, considered to be more appropriate at large $p_{T}(\mathrm{top})$. It was found that this model is in good agreement with the {\tt MCFM} prediction within the estimated renormalization and factorization scale uncertainties. Near the partonic threshold for $t{\bar t}$ production, the contributions from soft-gluon emission become dominant. The soft-gluon corrections to the double-differential top cross section in transverse momentum and rapidity can be resummed at next-to-next-to-leading-logarithm (NNLL) accuracy via the two-loop soft anomalous dimension matrices \cite{Kidonakis:2010dk,*Kidonakis:2012rm}. The resummed result has been expanded at fixed order to next-to-next-to-leading order (NNLO) and, after integration over rapidity, used to calculate the top quark transverse momentum distribution, $d\sigma/dp_T$. This approximate next-to-next-to-leading-order (aNNLO) calculation from NNLL soft-gluon resummation leads to a factor of two larger $t\bar{t}$ cross section at large $p_{T}(\mathrm{top})$ compared to NLO. Figure~\ref{fig:nlo} shows the NLO and aNNLO cross sections for top quarks from the $t\bar{t}$ process in $pp$ collisions at a center-of-mass energy of $\sqrt{s}=14$~TeV. The cross sections are presented as a function of the transverse momentum cut in the pseudorapidity region $|\eta|<0.8$. The expected PDF uncertainty is about $20\%$ (shown as a filled band in Fig.~\ref{fig:nlo}), while the renormalization scale uncertainty is smaller. For comparison, the cross section for $\sqrt{s}=8$~TeV is also shown, but without uncertainties.
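The cross sections of Fig.~\ref{fig:nlo} translate into event yields via $N=\sigma\cdot L$. A minimal sketch of this conversion (the cross-section values below are illustrative, back-computed from the NLO and aNNLO yields quoted in the text for 10~fb$^{-1}$; they are not official program outputs):

```python
def expected_events(sigma_pb, lumi_ifb):
    """Expected event yield for a cross section in pb and an
    integrated luminosity in fb^-1 (1 pb = 1000 fb)."""
    return sigma_pb * 1000.0 * lumi_ifb

# Illustrative cross sections for p_T(top) > 0.8 TeV, |eta| < 0.8,
# back-computed from the quoted yields (assumption, not MCFM output).
n_nlo = expected_events(0.350, 10.0)    # expected yield of about 3500 events
n_annlo = expected_events(0.592, 10.0)  # expected yield of about 5920 events
print(n_nlo, n_annlo)
```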
Assuming an integrated luminosity of 10~fb$^{-1}$, the NLO calculation predicts 3500 top quarks, summed over all decay channels, in the fiducial volume $p_{T}(\mathrm{top})>0.8$~TeV. This number is expected to increase to 5920 top quarks for the aNNLO. The contribution to the top quark yield from single-top production ($t$-channel \cite{Kidonakis:2011wy}, $s$-channel \cite{Kidonakis:2010tc}, and $Wt$ production \cite{Kidonakis:2010ux}) is expected to be smaller \cite{Kidonakis:2012rm} than for the $t\bar{t}$ process. Although the main focus of this study is jets with $p_{T}(\mathrm{jet})>0.8$~TeV, it should be pointed out that contributions to such jets from top quarks with $p_{T}(\mathrm{top})$ lower than $0.8$~TeV are possible due to jet-energy resolution effects. To take such effects into account, top quarks were generated at lower $p_{T}(\mathrm{top})$ than the minimum $p_{T}(\mathrm{jet})=0.8$~TeV used in this analysis. In the following studies, top quarks were generated using MC models with $p_{T}(\mathrm{top})>0.65$~TeV. Their rate was then scaled to 15690 top quarks, as predicted by the aNNLO for the fiducial region $p_{T}(\mathrm{top})>0.65$~TeV assuming an integrated luminosity of 10~fb$^{-1}$. \begin{figure} \begin{center} \includegraphics[scale=0.38, angle=0]{ttbar_cball} \end{center} \caption{ Expectations for the mass distribution of jets initiated by top quarks using {\tt PYTHIA8\,} and {\tt HERWIG++\,} after the fast detector simulation. The jet selection cuts are $p_{T}(\mathrm{jet})>0.8$ TeV and $|\eta(\mathrm{jet})|<0.8$. The number of initial top quarks is normalized to the aNNLO for $p_{T}(\mathrm{top})>0.65$~TeV. The expected number of top jets shown in this figure is 3500, with 2200 in the Gaussian core $140<M_{\mathrm{jet}}<200$ GeV. The jet masses generated with {\tt PYTHIA8\,} were fitted in the mass range 100-210~GeV using a Crystal Ball function \protect\cite{Oreglia}. The bottom plot shows the fit residuals. The fit has $\chi^2/$ndf=1.3.
} \label{fig:tt} \end{figure} \subsection{Monte Carlo simulations} Top-quark jets were modeled using the {\tt PYTHIA8\,} \cite{Sjostrand:2006za} and {\tt HERWIG++\,} \cite{Bahr2008} MC models, assuming $pp$ collisions at a center-of-mass energy of $\sqrt{s}=14$~TeV. As discussed above, the number of top quarks in the fiducial region $p_{T}(\mathrm{top})>0.65$~TeV was scaled to the aNNLO cross section assuming an integrated luminosity of 10~fb$^{-1}$. In addition to the top-quark initiated jets, the QCD background due to jets originating from light-flavor quarks and gluons was considered. Hadronic jets from all QCD processes (excluding $t\bar{t}$ production) were generated using {\tt PYTHIA8\,} and {\tt HERWIG++\,}. The MC inclusive jet cross section was corrected to match the NLO prediction estimated with the {\tt NLOjet++} program \cite{Catani:1996vz,*Nagy:2003tz}. The estimated correction was found to be close to $10\%$. The samples for $t\bar{t}$ and for the QCD dijet background events were processed through a fast detector simulation based on the {\tt DELPHES} 2.0.3 framework \cite{Ovyn:2009tx}, assuming the ATLAS detector geometry. The most crucial ingredients in such a simulation are the detector resolutions for the hadronic and electromagnetic calorimeters of the ATLAS detector. These were taken from the default {\tt DELPHES} settings based on the ATLAS studies \cite{Aharrouche:2006nf,Kulchitsky:2000gg}. \subsection{Jet mass reconstruction} Events after the fast detector simulation were selected if they contained at least one jet reconstructed with the anti-$k_T$ algorithm \cite{Cacciari:2008gp} with a distance parameter of 0.6. This distance parameter is optimal for collecting the decay products of hadronically decaying top quarks inside jets with $p_{T}(\mathrm{jet})>0.8$~TeV \cite{Chekanov:2010vc,*Chekanov:2010gv}.
Jets were reconstructed with the {\tt FastJet} package \cite{Cacciari:2011ma} using the {\tt DELPHES} calorimeter cell positions and energies. The final jets were selected with $p_{T}(\mathrm{jet})>0.8$~TeV and $|\eta(\mathrm{jet})|<0.8$. For the current analysis, the central calorimeter region is used in order to avoid biases in the reconstruction of jet shapes and to increase the signal-over-background ratio for boosted top searches: for $p_{T}(\mathrm{top})>0.8$~TeV, top quarks from the $t\bar{t}$ process are predominantly produced in the very central rapidity region. \section{Results} \subsection{Masses of top jets} As is well known, jet masses are sensitive to the presence of top-quark decays. Figure~\ref{fig:tt} shows the masses ($M_{\mathrm{jet}}$) of jets initiated by top quarks (``top jets'') using {\tt PYTHIA8\,} and {\tt HERWIG++\,} after the fast detector simulation. The jet selection cuts are $p_{T}(\mathrm{jet})>0.8$~TeV and $|\eta(\mathrm{jet})|<0.8$. The jet masses include contributions from all top decays (including leptonic decays of $W$ bosons). The jet mass distribution can be described by a Crystal Ball function \cite{Oreglia}, which has a Gaussian core (with a mean $m_0$ and a width $\sigma$) and a power-law tail with an exponent $n$ to account for energy losses in hadronic decays or leptonic $W$ decays. The parameter $\alpha$ defines the transition between the Gaussian and the power-law functions. Figure~\ref{fig:tt} shows the fit with the Crystal Ball function using {\tt PYTHIA8\,}. The peak position of the Gaussian component, which is intended to describe fully-hadronic decays, is close to 180~GeV, with a width $\sigma \simeq 20$ GeV. Figure~\ref{fig:tt} shows that the difference in shapes between {\tt PYTHIA8\,} and {\tt HERWIG++\,} is small and thus can be neglected.
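For reference, the Crystal Ball parameterization used in the fit of Fig.~\ref{fig:tt} can be written explicitly (in its standard form, with the power-law tail placed on the low-mass side to model the energy losses; $\alpha>0$):
\begin{equation*}
f(M_{\mathrm{jet}}) = N \times
\begin{cases}
\exp\!\left(-t^{2}/2\right), & t > -\alpha, \\[4pt]
\left(\dfrac{n}{\alpha}\right)^{\!n} e^{-\alpha^{2}/2}\left(\dfrac{n}{\alpha}-\alpha-t\right)^{\!-n}, & t \le -\alpha,
\end{cases}
\qquad t=\frac{M_{\mathrm{jet}}-m_{0}}{\sigma},
\end{equation*}
so that the function and its first derivative are continuous at the transition point $t=-\alpha$.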
\begin{figure} \begin{center} \subfigure[Jet mass distribution without the $t\bar{t}$ process.]{ \includegraphics[scale=0.36, angle=0]{fit3param_nsignal} } \subfigure[Jet mass distribution with the $t\bar{t}$ process.]{ \includegraphics[scale=0.36, angle=0]{fit3param_sig} } \end{center} \caption{ Expectations for the jet-mass distributions for the MC models after the fast detector simulation. The rate of light-flavor jets is scaled to the NLO prediction for inclusive jets. The jet masses are shown (a) assuming no $t\bar{t}$ process and (b) with the $t\bar{t}$ process included. A $\chi^2$ fit was performed using the background function $a\cdot M_{\mathrm{jet}}^{-b} \cdot \exp{(-c\cdot M_{\mathrm{jet}})}$ in the mass range $100<M_{\mathrm{jet}}<270$~GeV. The jet mass prediction shown in (b) as shaded histograms is based on {\tt PYTHIA8\,} $t\bar{t}$ scaled to the aNNLO. The fit quality is $\chi^2/$ndf=1.9 for (a) and $\chi^2/$ndf=2.1 for (b). } \label{fig:fit3param} \end{figure} \subsection{Jet masses for light-flavor jets} The mass distribution of jets originating from light quarks and gluons is distinct from that of jets initiated by top quarks. Figure~\ref{fig:fit3param}(a) shows the $M_{\mathrm{jet}}$ distributions for light-flavor QCD jets (without the $t\bar{t}$ process) for {\tt PYTHIA8\,} and {\tt HERWIG++\,} after the fast detector simulation. The number of light-flavor jets was scaled to the expectation from the {\tt NLOjet++} program as discussed before. $W$+jet events were also studied in the context of a possible contribution to the jet-mass shape. It was shown that $W$+jet events do not distort the region near $M_{\mathrm{jet}}\simeq 170-180$ GeV. The jet masses for light jets can reasonably be described by the functional form $a\cdot M_{\mathrm{jet}}^{-b} \cdot \exp{(-c\cdot M_{\mathrm{jet}})}$, where $a$, $b$ and $c$ are free parameters.
A similar function was previously used in the measurement of hadronic $W/Z$ decays in two-jet mass spectra~\cite{Alitti:1990kw}. A fit using this function provides an MC-independent way to search for any significant deviations from the smoothly falling jet mass shape expected in the tails. The fit residuals for {\tt PYTHIA8\,} and {\tt HERWIG++\,} show no significant deviation from zero. \begin{figure}[ht] \begin{center} \subfigure[Masses of jets with the $t\bar{t}$ signal scaled by two.]{ \includegraphics[scale=0.36, angle=0]{fit3param_sig_increased} } \subfigure[Masses of jets with the $t\bar{t}$ signal after $b$-tagging.]{ \includegraphics[scale=0.36, angle=0]{fit3param_sig_btag} } \end{center} \caption{ Expectations for the jet mass distributions using {\tt PYTHIA8\,} and {\tt HERWIG++\,} after the fast detector simulation. For the simulation, top quarks were added to light-flavor jets. The QCD dijet background was scaled to the NLO inclusive jet cross section. A $\chi^2$ fit was performed using the background function $a\cdot M_{\mathrm{jet}}^{-b} \cdot \exp{(-c\cdot M_{\mathrm{jet}})}$ in the mass range $100<M_{\mathrm{jet}}<270$~GeV. (a) The {\tt PYTHIA8\,} expectation with the normalisation from the aNNLO for $t\bar{t}$ scaled by a factor of two. (b) The same distribution using the $t\bar{t}$ signal yield predicted by the aNNLO, after applying the $b$-tagging for background and top jets. The fit quality using the background function is $\chi^2/$ndf=2.7 for (a) and $\chi^2/$ndf=3.5 for (b).
} \label{fit3param_sig} \end{figure} \begin{figure}[ht] \begin{center} \subfigure[ {\tt PYTHIA} jet mass with the $t\bar{t}$ signal scaled by two.]{ \includegraphics[scale=0.36, angle=0]{fit_signalback_pythia} } \subfigure[ {\tt HERWIG} jet mass with the $t\bar{t}$ signal scaled by two.]{ \includegraphics[scale=0.36, angle=0]{fit_signalback_herwig} } \end{center} \caption{ The distributions of jet masses for $p_{T}(\mathrm{jet}) >0.8$~TeV and $|\eta(\mathrm{jet})|<0.8$ for MCs after the fast detector simulation. The jet masses include contributions from $t\bar{t}$ assuming that the $t\bar{t}$ cross section is a factor of two larger than the aNNLO cross section. A simultaneous $\chi^2$ fit was performed in the mass range $100<M_{\mathrm{jet}}<270$~GeV using the function $a\cdot M_{\mathrm{jet}}^{-b} \cdot \exp{(-c\cdot M_{\mathrm{jet}})}$ for the background description plus a Gaussian to describe the excess near 170 GeV. To improve the fit stability, the width of the Gaussian is fixed to 20~GeV as expected for top jets. The bottom plots show the fit residuals with respect to the fitted signal-plus-background function, as well as with respect to the background component of the combined fit. } \label{fit_signalback} \end{figure} The inclusion of top quarks modifies $M_{\mathrm{jet}}$ near the 170 GeV region. Figure~\ref{fig:fit3param}(b) shows the expectation for $M_{\mathrm{jet}}$ including the contribution from the $t\bar{t}$ process. The top jets were simulated using {\tt PYTHIA8\,}, while their yield was scaled to the aNNLO calculation. The fit using $a\cdot M_{\mathrm{jet}}^{-b} \cdot \exp{(-c\cdot M_{\mathrm{jet}})}$ was performed in the range $100<M_{\mathrm{jet}}<270$~GeV. The fit residuals do not show any significant excess above zero, indicating that the extraction of the top signal assuming the nominal aNNLO yield for $t\bar{t}$ can be challenging. The situation is different if the top-quark yield is somewhat larger than the $t\bar{t}$ expectation.
For example, Fig.~\ref{fit3param_sig}(a) shows what happens when the top signal has a cross section a factor of two larger than the aNNLO prediction shown before. The signal is difficult to miss; the residuals of the fit near $M_{\mathrm{jet}}\simeq 180$~GeV show an excess above zero and have a rather characteristic $S$-shaped form due to the pull from the signal region. This is more apparent for {\tt HERWIG++\,} than for {\tt PYTHIA8\,}, indicating a model dependence of this observation. Another way of looking at the effect of top quarks on the jet mass distribution is to reduce the contribution of light-flavor jets using a $b$-tagging technique. Figure~\ref{fit3param_sig}(b) shows the jet masses with the nominal $t\bar{t}$ signal strength, but after applying $b$-tagging using the {\tt DELPHES} \cite{Ovyn:2009tx} settings, which assume a $40\%$ $b$-quark reconstruction efficiency and $10\%$ and $1\%$ mistag rates due to $c$-quark and light-flavor jets, respectively. The $b$-tagging increases the signal-over-background ratio and the $t\bar{t}$ signal is clearly observed. The scenario in which the cross section of boosted top quarks is higher than the $t\bar{t}$ prediction was further studied in Fig.~\ref{fit_signalback}, where a potential excess of top jets near the 170 GeV region is extracted using a signal-plus-background function. As before, light-flavor jets were combined with top jets from the $t\bar{t}$ process. For this hypothetical scenario, the yield of the latter process was scaled by a factor of two with respect to the aNNLO prediction. The signal function is assumed to be a Gaussian with a width of 20 GeV as expected for top jets (see Fig.~\ref{fig:tt}). The number of top quarks included in the simulation for $p_{T}(\mathrm{top})>0.65$~TeV was 11,840 (5920 top quarks from the aNNLO times two, see Sect.~\ref{sec:theory}).
This leads to 4,400 top jets with $p_{T}(\mathrm{jet})>0.8$~TeV contributing to the $M_{\mathrm{jet}}\simeq 170$ GeV region (2,200 top jets in the Gaussian core shown in Fig.~\ref{fig:tt} times two). According to the fit shown in Fig.~\ref{fit_signalback}, the number of extracted top jets is between 2,000 and 3,400, depending on the MC simulation. This number was extracted by integrating the Gaussian component of the background-plus-signal fit. Thus, the extracted number of top jets is close to the number included in the simulation, although there is some indication that the signal-plus-background fit somewhat underestimates it. While the scenario in which the number of top jets is a factor of two larger than the aNNLO prediction for $t\bar{t}$ may seem exotic at first, such an assumption may not be too far from the Standard Model expectation for top quarks produced inclusively within a jet (see the discussion in Sect.~\ref{sec:theory}). Given the large difference between the aNNLO and NLO \cite{Kidonakis:2010dk}, higher-order QCD effects for the $t\bar{t}$ process may play a significant role in an increase of top-quark jets at very large $p_{T}(\mathrm{jet})$. It is also important to mention that theoretical uncertainties, especially those related to PDFs, can be as large as $20\%$ (see Fig.~\ref{fig:nlo}). Less understood contributions from single-top production (about $30\%$ at lower $p_{T}(\mathrm{top})$), flavor-changing processes and fragmentation within jets should also be considered for the inclusive production of top quarks inside jets. Taking all such effects into account, the conjectured factor of two may not be too far from the real situation. Therefore, a better understanding of all Standard Model processes leading to top production at high $p_{T}(\mathrm{jet})$ is needed. One can also consider this result from the point of view of discovery reach.
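The conversion from the fitted Gaussian component to a top-jet yield is simple arithmetic: the Gaussian integral divided by the histogram bin width. In the sketch below only the $20$~GeV width is taken from the fit described above; the peak amplitude, mass position and bin width are hypothetical placeholders.

```python
import numpy as np

# Yield from a fitted Gaussian signal component: integrating
# A*exp(-(m-mu)^2 / (2*sigma^2)) over m and dividing by the bin width.
amplitude = 550.0    # fitted peak height in events per bin (hypothetical)
mu = 172.5           # Gaussian mean in GeV (hypothetical)
sigma = 20.0         # width fixed to 20 GeV, as in the fit
bin_width = 5.0      # GeV per histogram bin (hypothetical)

n_top = amplitude * sigma * np.sqrt(2.0 * np.pi) / bin_width

# Cross-check: summing the Gaussian over the bins gives the same yield.
edges = np.arange(mu - 8 * sigma, mu + 8 * sigma + bin_width, bin_width)
centres = 0.5 * (edges[:-1] + edges[1:])
n_sum = np.sum(amplitude * np.exp(-(centres - mu) ** 2 / (2 * sigma ** 2)))
```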
The approach can be used to exclude any potential source of new physics for a number of models (such as those based on $Z'$ and $W'$ bosons) leading to top quarks at large $p_{T}(\mathrm{jet})$. From the above consideration, any source of new physics can be excluded if it leads to a top-quark cross section above 1184~fb in the fiducial region $p_{T}(\mathrm{jet})>0.8$ TeV and $|\eta(\mathrm{jet})|<0.8$. This cross section is obtained from the aNNLO $t\bar{t}$ prediction multiplied by a factor of two. Note that the approach can exclude a number of exotic processes. For example, models with $Z'$ and Kaluza--Klein gluons may have larger cross sections compared to the Standard Model $t\bar{t}$ process at very large $p_{T}(\mathrm{jet})$. As a consequence, a number of limits have been set \cite{Chatrchyan:2012cx,:2012qa} excluding such models up to $1.5$--$2$~TeV without experimental observation of top quarks from the Standard Model $t\bar{t}$ process at $p_{T}(\mathrm{top})>0.6$~TeV. High-precision studies of jet mass using analytic background templates may seem difficult from the instrumental point of view, since we are looking for a top-quark signal on top of a smoothly falling distribution with a signal-over-background ratio at the level of $10\%$ for the $t\bar{t}$ process. However, the assessment of systematics on the presence of a bump must follow a different strategy than for a typical jet-mass measurement. Unlike in a typical QCD measurement of jet masses, any variation of selection cuts or change in the instrumental procedure should be followed by the data-driven analytic fit to identify a bump after each systematic change. For example, a jet-energy-scale variation should lead to a change of jet masses, but the signal strength after the signal-plus-background fit should not be strongly affected, given the data-driven nature of such an extraction.
Finally, other techniques based on $b$-tagging, jet shapes and jet substructure can be considered, which can also help deal with some experimentally unavoidable effects, such as pileup. These techniques have the potential to increase the signal-over-background ratio for $M_{\mathrm{jet}}$ close to 170 GeV when dealing with high-$p_T$ inclusive jets. This has been illustrated in Fig.~\ref{fit3param_sig}(b), where jets after $b$-tagging were considered. As follows from this study, if the QCD multijet background is reduced by at least a factor of two compared to the top-quark signal, the $t\bar{t}$ process should be clearly observed for the yield expected from the aNNLO calculation. Studies of such techniques are outside the scope of this paper and can be found elsewhere \cite{Agashe:2006hk,*Lillie:2007yh,*Butterworth:2007ke,*Almeida:2008tp,*Almeida:2008yp, *Kaplan:2008ie,*Brooijmans:2008,*Butterworth:2009qa,*Ellis:2009su,*ATL-PHYS-PUB-2009-081,*CMS-PAS-JME-09-001,*Almeida:2010pa,*Hackstein:2010wk,Chekanov:2010vc,*Chekanov:2010gv}. \section{Conclusions} This paper shows that jet masses alone, without any complicated techniques involving substructure variables, already provide a sensitive probe for inclusively produced top quarks within high-$p_T$ jets. Due to the nature of the inclusive measurement, such a technique is not based on tagging of top quarks in the opposite direction. The approach allows one to study top quarks under the assumption that the background fit function has a smoothly falling shape and does not contain a hump near the 170 GeV region, and thus can be modeled analytically. As shown in this paper, the method has the potential to detect highly-boosted top quarks if their yield is a factor of two or more larger than that from the best-understood $t\bar{t}$ process assuming the aNNLO prediction.
This observation also implies that any technique capable of reducing the QCD background near $M_{\mathrm{jet}}\simeq 170$~GeV by at least a factor of two should be sufficient for the observation of boosted top quarks from the Standard Model $t\bar{t}$ process. There are other sources of inclusive production of top quarks at very large $p_{T}(\mathrm{jet})$, but a good understanding of them requires further studies. Once they are understood, any enhancement of the top-quark cross section over the Standard Model prediction would be indicative of the presence of new resonances at the TeV scale. \section*{Acknowledgements} We would like to thank many colleagues for discussions of these results. We thank R.~Blair, T.~LeCompte, J.~Proudfoot and R.~Yoshida for the discussion of the jet-mass fitting technique. We also thank M.~Schulze, E.~Berger and Z.~Sullivan for discussions and their help with the NLO calculations. The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (``Argonne''). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. The work of N. Kidonakis is supported by the National Science Foundation under Grant No. PHY 1212472.
\section{Introduction} \vskip 1pc The study of the potential power of quantum computers has been a major theoretical challenge. There will be an additional incentive to build a quantum computer if we can identify computationally important problems for which quantum computation offers significant speedups over computation on a classical computer. For discrete problems, the best known quantum algorithms are due to Shor and Grover, see \cite{shor,grover}. Shor's algorithm for factorization has an exponential speedup over all {\it known} algorithms on a classical computer. Still, we cannot yet claim that we have an exponential speedup for this problem, since the complexity of factorization on a classical computer is unknown. Grover's algorithm for data search offers a quadratic speedup. For continuous problems, quantum complexity is known for linear problems such as multivariate integration, path integration and multivariate approximation, see \cite{heinrich,H03,H04a,H04b,N01,TW02}. For these problems we have an exponential speedup over the worst case setting, and a polynomial speedup over the randomized setting. The first quantum study of a nonlinear continuous problem was done in \cite{Kacewicz} for ordinary differential equations with polynomial speedups over the classical settings. The purpose of this paper is to present classical and quantum complexity results for another nonlinear continuous problem. This continuous problem is quite natural and computationally important, since it corresponds to the (simplified) univariate Sturm-Liouville eigenvalue problem. The Sturm-Liouville eigenvalue problem is defined in \cite{courant} in full generality. Here it is defined as finding the smallest eigenvalue of the differential operator $$ {\mathbb L}_qu\,(x)\,:=\,-u^{\prime\prime}(x)\,+\,q(x)\,u(x)\qquad\mbox{for}\ \ x\in (0,1), $$ with the boundary conditions $u(0)=u(1)=0$.
We assume that the function $q$ is non-negative and belongs to the class $C^2([0,1])$ of twice continuously differentiable functions whose norm $\|q\|:=\max_{i=0,1,2}\max_{x\in [0,1]}|q^{(i)}(x)|$ is bounded by $1$. The operator ${\mathbb L}_q$ maps $C^2([0,1])$ into $C([0,1])$. The Sturm-Liouville eigenvalue problem has been extensively studied in the literature. The properties of the eigenvalues and the eigenfunctions are well known and so are numerical algorithms for approximating them on a classical computer, see, e.g., \cite{babuska,collatz,courant,keller,strang}. Nevertheless, the complexity of approximating the smallest eigenvalue in the worst case and randomized settings, as well as in the quantum setting, has not yet been addressed. In this paper we study classical and quantum algorithms. We prove bounds on the worst case and randomized complexities on a classical computer, and bounds on the query complexity and on the qubit complexity. We prove that the complexity in the classical settings is a polynomial in ${\varepsilon}^{-1}$. We study the quantum setting with {\em bit} queries and prove polynomial speedups over the classical settings. Bit queries correspond to approximate computation of function values, see~\cite{heinrich}, and are used in all papers dealing with the quantum study of continuous problems. We also study the quantum setting with {\em power} queries. Such queries are formally defined in Section 5.2. Here we only mention that they are used in the phase estimation algorithm, which is the core of many quantum algorithms including Shor's and Grover's algorithms. Power queries are controlled-$W^{p_j}$ queries for some $n\times n$ unitary matrix $W$ and some exponents~$p_j$. For the phase estimation algorithm, we have $p_j=2^{j-1}$ for $j=1,2,\dots,m$, with~$m$ of order $\log\,{\varepsilon}^{-1}$.
For the factoring problem of a large integer $N$, Shor's algorithm uses the unitary matrix $W$ such that power queries can be implemented by at most $O(\log^3N)$ elementary quantum gates. For the Sturm-Liouville eigenvalue problem, as well as for all problems studied in \cite{PW04}, we use power queries with the specific unitary matrix \begin{equation}\label{matrixW} W\,=\,\exp\left({\tfrac12\,\mathrm{i}\,M_q}\right)\qquad \mbox{with}\qquad \mathrm{i}\,=\,\sqrt{-1}, \end{equation} where $M_q$ is an $n\times n$ real symmetric tridiagonal matrix that is a classical approximation of the differential operator ${\mathbb L}_q$, see Section 3.2. The matrix $M_q$ depends on the values of $q(j/(n+1))$ that appear on the diagonal of $M_q$ for $j=1,2,\dots,n$. Unitary matrices similar to (\ref{matrixW}) play a key role in quantum mechanics. They give the solution of the Schr\"odinger equation, are the propagator of a system evolving with Hamiltonian $M_q$, and are important in quantum simulation, see \cite{nielsen}. Zalka \cite{zalka} deals with their implementation. The crucial point about power queries is that we can use a power $W^j$ of the matrix $W$ given by (\ref{matrixW}) as one quantum query for some $j$. Hence, lower bound results for bit queries do not apply to power queries. We prove that in the quantum setting with power queries, the Sturm-Liouville eigenvalue problem requires only roughly $\log\,{\varepsilon}^{-1}$ power queries with the matrix $W$ of (\ref{matrixW}). As shown in \cite{PW04}, many computational problems can be reduced to the solution of the Sturm-Liouville eigenvalue problem, and they can also be solved with a polylog number of power queries. The list of such problems includes Grover's search, NP-complete problems, and many continuous problems. This proves that the quantum setting with power queries with the matrix $W$ of (\ref{matrixW}) is exponentially more powerful than the quantum setting with bit queries.
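A minimal numerical sketch illustrates both ingredients, assuming the standard second-order finite-difference stencil for $M_q$ (the precise discretization is given in Section 3.2). For the constant potential $q\equiv\tfrac12$ the smallest eigenvalue of $M_q$ is close to ${\lambda}(q)=\pi^2+\tfrac12$, and the eigenvectors of $M_q$ are eigenvectors of $W$ with eigenphase ${\lambda}/2$. Here $W$ is built from the spectral decomposition of $M_q$, which is convenient for a small dense illustration but is of course not how a quantum device would apply a power query.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Tridiagonal discretization M_q of L_q on n interior grid points
# (standard second-order stencil assumed), constant potential q = 1/2.
n = 200
h = 1.0 / (n + 1)
q = np.full(n, 0.5)                    # values q(j/(n+1)) on the diagonal

diag = 2.0 / h ** 2 + q
off = -np.ones(n - 1) / h ** 2
lam, V = eigh_tridiagonal(diag, off)   # full spectrum of M_q
lam_min = lam[0]                       # approximates lambda(q) = pi^2 + 1/2

# W = exp(i M_q / 2) via the spectral decomposition of M_q.
W = (V * np.exp(0.5j * lam)) @ V.T

# The ground state of M_q is an eigenvector of W with eigenphase lam_min/2.
v0 = V[:, 0]
phase = np.angle(v0 @ (W @ v0)) % (2.0 * np.pi)
```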
We stress that, contrary to Shor's algorithm, we do {\em not} know if power queries with the $n\times n$ matrix $W$ of (\ref{matrixW}) can be implemented by a number of existing elementary quantum gates that is polylog in $n$. We asked a number of colleagues and most of them doubt whether this can be achieved. If this is indeed the case, then the positive results on the polylog number of such power queries will be of only theoretical interest. Still, if a future quantum computer is able to perform such power queries in a polylog number of, perhaps, more general elementary quantum gates or by some other quantum devices, the polylog number of power queries will lead to efficient quantum algorithms, and will allow us to solve many computational problems exponentially faster than on a classical computer. {}From this point of view, we may interpret the positive results on the number of power queries with the matrix~$W$ of (\ref{matrixW}) as an indication that building a quantum computer with such queries would be a very desirable task, which would give us a very powerful computational device. \section{Survey of the Results} In this section we explain our results in more technical terms. For a classical computer, we study the worst case and randomized settings in the real number model of computation with oracles, see \cite{novak,traub,TW98}. That is, we assume that arithmetic operations (addition, subtraction, multiplication, division, and evaluation of elementary functions), as well as comparisons of real numbers, are performed exactly with cost taken as unity. We also assume that the information about functions $q$ is given by sampling $q$ at finitely many points with the cost of one function evaluation taken as $\cc$. Typically $\cc\gg 1$. We want to approximate the smallest eigenvalue ${\lambda}(q)$ of the operator ${\mathbb L}_q$ to within ${\varepsilon}$.
Let $n({\varepsilon})$ be the smallest number of function values of $q$ needed to compute such an ${\varepsilon}$-approximation in a given setting. The number $n({\varepsilon})$ is called the {\em information complexity}. The {\em complexity}, $\comp({\varepsilon})$, is defined as the minimal total cost of computing an ${\varepsilon}$-approximation in a given setting. Obviously we have $$ \cc\,n({\varepsilon})\,\le\,\comp({\varepsilon}). $$ We prove that in both classical settings, the complexity of the Sturm-Liouville eigenvalue problem is polynomial in ${\varepsilon}^{-1}$, or equivalently is exponential in the number $\lfloor \log\,{\varepsilon}^{-1}\rfloor $ of correct bits of a computed approximation. More precisely, there exist positive numbers ${\alpha}_i$ independent of ${\varepsilon}$ such that: \begin{itemize} \item in the worst case setting, \begin{eqnarray*} {\alpha}_1\,{\varepsilon}^{-1/2}\,\le\,&n({\varepsilon})&\,\le\,{\alpha}_2\,{\varepsilon}^{-1/2},\\ {\alpha}_1\,\cc\,{\varepsilon}^{-1/2}\,\le\,&\comp({\varepsilon})&\,\le\, {\alpha}_2\,\cc\,{\varepsilon}^{-1/2}\,+\,{\alpha}_3\,{\varepsilon}^{-1/2}\,\log\,{\varepsilon}^{-1}, \end{eqnarray*} \item in the randomized setting, \begin{eqnarray*} {\alpha}_4\,{\varepsilon}^{-2/5}\,\le\,&n({\varepsilon})&\,\le\,{\alpha}_5\,{\varepsilon}^{-2/5},\\ {\alpha}_4\,\cc\,{\varepsilon}^{-2/5}\,\le\,&\comp({\varepsilon})&\,\le\,{\alpha}_5\,\cc\,{\varepsilon}^{-2/5}\,+\, {\alpha}_6\,{\varepsilon}^{-1/2}\,\log\,{\varepsilon}^{-1}. \end{eqnarray*} \end{itemize} The lower bounds on $n({\varepsilon})$, and consequently on $\comp({\varepsilon})$, are obtained by relating the eigenvalue problem to the integration problem for functions from the unit ball of $C^2([0,1])$. 
It is well known that the minimal number of function values for this integration problem is bounded from below by roughly ${\varepsilon}^{-1/2}\,$ in the worst case setting and by ${\varepsilon}^{-2/5}$ in the randomized setting; see, e.g., \cite{novak,traub} and the survey of these results in \cite{TW98}. The upper bounds on $n({\varepsilon})$ and $\comp({\varepsilon})$ in the worst case setting follow from the cost of the classical algorithm that computes an ${\varepsilon}$-approximation by the bisection algorithm of the Sturm sequence \cite[p.~300]{wilkinson}, see also \cite[Ch.~5.3.4]{demmel}, applied to an $n\times n$ matrix which is the classical discretization of the operator ${\mathbb L}_q$ with $n=\Theta({\varepsilon}^{-1/2})$. The matrix depends on $n$ function values of $q$ computed at equidistant points of $[0,1]$. Since we need roughly $\log\,{\varepsilon}^{-1}$ bisection steps, and the cost of each step is proportional to $n$, the total cost is of order $(\cc+\log\,{\varepsilon}^{-1}){\varepsilon}^{-1/2}$. Hence, modulo the logarithm of ${\varepsilon}^{-1}$, the worst case complexity is of order $\cc\,{\varepsilon}^{-1/2}$. The upper bounds on $n({\varepsilon})$ and $\comp({\varepsilon})$ in the randomized setting are obtained by the following algorithm. We first approximate the function $q$ by a natural cubic spline $\bar q$ using $n$ deterministic samples of $q$ at equidistant points of $[0,1]$ with $n=\Theta({\varepsilon}^{-2/5})$. The relationship between the smallest eigenvalue and integration problems, see Section~3, states that \begin{equation}\label{1} {\lambda}(q)\,=\,{\lambda}(\bar q)\,+\,\int_0^1\left(q(x)-\bar q(x)\right) u_{\bar q}^2(x)\,dx \,+\,O(n^{-4}). \end{equation} Here $u_{\bar q}$ is the normalized eigenfunction, $\int_0^1u_{\bar q}^2(x)\,dx=1$, corresponding to the smallest eigenvalue ${\lambda}(\bar q)$.
Since we have complete information on the spline $\bar q$, we may approximate ${\lambda}(\bar q)$ and $u_{\bar q}$ with arbitrarily small error. For ${\lambda}(\bar q)$, we achieve an error of order ${\varepsilon}$ as in the worst case setting, with cost proportional to ${\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}$. To obtain an approximation to $u_{\bar q}$, we apply one step of the inverse power algorithm with an appropriately chosen initial vector. In this way we obtain a vector, from which we compute $u_{\bar q}$ via piecewise interpolation. The total cost of computing ${\lambda}(\bar q)$ and $u_{\bar q}$ is of order ${\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}$. We then approximate the second term in (\ref{1}) using the Monte Carlo algorithm for the function $(q(x)-\bar q(x))u_{\bar q}^2(x)$ computed at $n$ randomized points with uniform distribution over $[0,1]$. This leads to an ${\varepsilon}$-approximation in the randomized setting with cost bounded from above by a quantity proportional to $\cc\,{\varepsilon}^{-2/5}+{\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}$, where the first term bounds the information cost and the second term bounds the combinatorial cost of the algorithm. Hence, we have a sharp estimate on the randomized information complexity $n({\varepsilon})$. The ratio of the upper to lower bounds of the randomized complexity is roughly at most ${\varepsilon}^{-1/10}$. In both classical settings, algorithms for which we obtain upper bounds on complexity require space of order ${\varepsilon}^{-1/2}$. This follows from the fact that we need to work on $n\times n$ tridiagonal matrices with $n$ of order ${\varepsilon}^{-1/2}$. We now turn to the quantum setting. Quantum algorithms are described in Section 4. Here we only mention that quantum algorithms work on $2^\nu\times 2^\nu$ unitary matrices, where $\nu$ is the number of qubits. The qubit complexity is defined as the minimal number of qubits needed to solve a problem. 
Roughly speaking, the qubit complexity corresponds to the space complexity for a classical computer. For the foreseeable future, qubits will be a scarce resource. That is why the qubit complexity is especially important, and computationally important problems with relatively small qubit complexity are of special interest. We prove that the qubit complexity, $\mathrm{comp}^{\mathrm{qub}}({\varepsilon})$, of the Sturm-Liouville eigenvalue problem is of order $\log\,{\varepsilon}^{-1}$, which is relatively modest. In this paper $\log$ denotes $\log_2$. More precisely, we prove that $$ \tfrac12\,\log\,{\varepsilon}^{-1}\,+\,\Omega(1)\,\le\,\mathrm{comp}^{\mathrm{qub}}({\varepsilon})\,\le\, \tfrac32\,\log\,{\varepsilon}^{-1}\,+\,O(1). $$ These bounds hold regardless of the kind of queries used. Clearly, the qubit complexity yields a lower bound for the cost of any quantum algorithm solving this problem. We now turn to the quantum setting with bit queries. We show that the bit query complexity is $\Theta({\varepsilon}^{-1/3})$. This result is obtained by using: \begin{itemize} \item equation (\ref{1}) relating the Sturm-Liouville eigenvalue problem to integration, \item a lower bound on bit queries for integration, and \item a modification of the classical randomized algorithm described above that uses a quantum summation algorithm instead of Monte Carlo to approximate the weighted integral in (\ref{1}). \end{itemize} We now discuss the quantum setting with power queries. In this setting, the Sturm-Liouville eigenvalue problem can be solved using the well-known phase estimation algorithm as a basic tool, see, e.g., \cite[Section~5.2]{nielsen}. This algorithm uses power queries and the quantum inverse Fourier transform as its main ingredients. 
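Since the eigenstate is supplied exactly, the system register factors out and the whole phase estimation algorithm can be simulated classically on the $2^m$ ancilla amplitudes: the power queries leave the geometric phases $e^{2\pi\mathrm{i}\varphi j}$ behind, and the inverse Fourier transform concentrates the measurement near $\mathrm{round}(2^m\varphi)$. The eigenphase value below is arbitrary, chosen only for illustration.

```python
import numpy as np

# Phase estimation with an exact eigenstate, simulated on the ancillas.
m = 8                      # number of ancilla qubits
N = 2 ** m
phi = 0.3                  # eigenphase of W on the supplied eigenstate

# After the controlled-W^(2^j) power queries, the ancilla register holds
# the uniform superposition with phases e^(2*pi*i*phi*j).
ancilla = np.exp(2.0j * np.pi * phi * np.arange(N)) / np.sqrt(N)

# Inverse quantum Fourier transform (a unitary DFT with e^(-2*pi*i*j*k/N)).
after_iqft = np.fft.fft(ancilla) / np.sqrt(N)
probs = np.abs(after_iqft) ** 2

estimate = np.argmax(probs) / N   # most probable m-bit estimate of phi
```

The most probable outcome is the integer nearest to $2^m\varphi$, and its probability is at least $4/\pi^2$ for any $\varphi$, which is the familiar success guarantee of phase estimation.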
The power queries have the form controlled-$W^{2^j}$ for $j\in\naturals$, i.e., they use powers of the matrix $W=\exp\left(\frac12\mathrm{i}\,M_q\right)$, with $M_q$ an $n\times n$ real symmetric tridiagonal matrix whose diagonal elements depend on the values of~$q$. The matrix $M_q$ is a well-known discretization of the differential operator ${\mathbb L}_q$, and its size $n$ depends on the necessary accuracy. To obtain an ${\varepsilon}$-approximation we use $n$ of order ${\varepsilon}^{-1/2}$. The phase estimation algorithm uses the exact eigenvector of $M_q$, equivalently of $W$, as part of its initial state, see \cite[Section~5.2]{nielsen}. Abrams and Lloyd \cite{abrams} analyzed the case when the exact eigenvector is replaced by an approximate eigenvector and concluded that as long as the approximation is {\em good enough}, the phase estimation algorithm will still supply a good approximation to the corresponding eigenvalue. Jaksch and Papageorgiou \cite{jaksch} proposed an efficient construction of an approximate eigenvector. Their idea was to solve the problem with low accuracy on a classical computer and obtain a \lq\lq short\rq\rq vector which approximates the eigenfunction $u_q$ at few points. Then the amplitudes of this short vector are replicated on a quantum computer by the Hadamard transform, which yields a \lq\lq long\rq\rq (vector) state that can be used as the approximate initial state in the phase estimation algorithm. We show how the construction of Jaksch and Papageorgiou can be used for the Sturm-Liouville eigenvalue problem. In this way, we compute an ${\varepsilon}$-approximation of the smallest eigenvalue with probability~$\tfrac34$ by the phase estimation algorithm using $\log\,{\varepsilon}^{-1}\,+O(1)$ power queries. The algorithm requires an additional number of quantum operations at most of order $\log^2{\varepsilon}^{-1}$. This additional cost is for the quantum inverse Fourier transform. 
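The effect of the amplitude replication can be illustrated with a purely classical computation. In the sketch below the coarse vector is simply the exact ground state $\sin(\pi x)$ of the $q\equiv 0$ problem sampled at $2^k$ midpoints (a deliberate simplification of the construction in \cite{jaksch}); repeating each coarse amplitude $2^{m-k}$ times, which is what the Hadamard transforms accomplish, already produces a state with a large overlap with the fine-grid eigenvector.

```python
import numpy as np

# Replicating a coarse approximate eigenvector into a long register state.
k, mbits = 4, 10
coarse = np.sin(np.pi * (np.arange(2 ** k) + 0.5) / 2 ** k)
fine = np.sin(np.pi * (np.arange(2 ** mbits) + 0.5) / 2 ** mbits)
fine /= np.linalg.norm(fine)

# Each coarse amplitude is repeated 2^(mbits-k) times (the Hadamard step),
# then the long vector is normalized.
long_vec = np.repeat(coarse, 2 ** (mbits - k)).astype(float)
long_vec /= np.linalg.norm(long_vec)

overlap = float(long_vec @ fine)   # close to 1 for a good coarse solution
```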
Finally, the number of qubits is $\frac32\,\log\,{\varepsilon}^{-1}\,+\,O(1)$. A lower bound on the number of power queries of order $\log\,{\varepsilon}^{-1}$ has been proven in \cite{Bessen}. Comparing these quantum estimates to the classical complexity bounds in the worst case and randomized settings, we see that the quantum setting with power queries yields an exponential speedup: the number of power queries is exponentially smaller than the number of function values needed for the Sturm-Liouville eigenvalue problem. Finally, we point out important consequences of our results, which we study in detail in \cite{PW04}. Knowing that the Sturm-Liouville eigenvalue problem can be solved with polylog power queries, it is natural to study which computational problems can be reduced to this problem. In this respect, we think that the most important result of this paper is the formula that relates this eigenvalue problem to integration. In a particular case, this formula, see~(\ref{333}), states that \begin{equation}\label{444} {\lambda}(q)\,=\,\pi^2+\tfrac12\,+\,2\int_0^1\left(q(x)-\tfrac12\right) \sin^2(\pi x)\, dx\,+\,O\left(\|q-\tfrac12\|_{\infty}^2\right). \end{equation} Hence, the problem of computing the smallest eigenvalue is equivalent, modulo the second order term, to the weighted integration problem. Since ${\lambda}(q)$ can be approximated with polylog power queries, so can the weighted integral of $q$. It turns out that many computational problems can be formulated as an integration problem. Examples include important discrete problems such as Grover's search, the approximation of the Boolean mean, and NP-complete problems. The approximation of the Boolean mean is used as the primary tool to compute multivariate integrals and path integrals. Hence, all these problems can be solved by reducing them to the Sturm-Liouville eigenvalue problem with a polylog number of power queries in the quantum setting.
It is well known that Grover's search and the approximation of the Boolean mean require a number of bit queries polynomial in the problem size, which in our case is a polynomial in ${\varepsilon}^{-1}$. This shows that power queries are exponentially more powerful than bit queries, see \cite{PW04} for details. \vskip 2pc \section{Problem Definition} We deal with functions from the class $$ {\bf Q}\,=\,\big\{\,q:[0,1]\to [0,1]\ \big|\ \ q\in C^2([0,1])\ \ \mbox{and}\ \ \|q\|:=\max_{i=0,1,2}\ \max_{x\in [0,1]}|q^{(i)}(x)|\,\le 1\,\big\}. $$ For a function $q\in {\bf Q}$, we consider the Sturm-Liouville eigenvalue problem ${\mathbb L}_qu={\lambda}\,u$ for a non-zero $u$, or equivalently \begin{equation} u^{\prime\prime}(x) - q(x) u(x) + \lambda u(x) = 0, \quad {\rm for\ } \ x\,\in\,(0,1), \label{eq:SL1} \end{equation} with the boundary conditions \begin{equation}\label{eq:SL2} u(0)=u(1)=0. \end{equation} Let ${\lambda}={\lambda}(q)$ be the smallest eigenvalue of (\ref{eq:SL1}), (\ref{eq:SL2}). Multiplying (\ref{eq:SL1}) by $u$ and integrating by parts, see \cite{babuska,courant,strang}, we conclude that the smallest eigenvalue satisfies \begin{equation}\label{var} {\lambda}(q)\,=\,\min_{0\ne u\in H_0^1} \frac{\int_0^1\left[ (u^\prime(x))^2 + q(x) u^2(x)\right] \, dx} {\int_0^1 u^2(x)\, dx}, \end{equation} where $H_0^1$ is the Sobolev space of absolutely continuous\footnote{ A function $f$ is absolutely continuous if and only if it can be written as $f(x)=f(0)+\int_0^xf'(t)dt$ for all $x\in[0,1]$.} functions for which $u^\prime\, \in L_2([0,1])$ and $u(0)=u(1)=0$. Let $u_q$ be a normalized real eigenfunction corresponding to the smallest eigenvalue. It is known that the eigenvalues of ${\mathbb L}_q$ are simple, and the eigenspace corresponding to ${\lambda}(q)$ is of dimension one. Therefore $u_q$ is uniquely defined up to the sign. In particular, $u_q^2$ is uniquely defined.
Then (\ref{var}) states that \begin{equation} {\lambda}(q)\,=\,\int_0^1\left(\left(u^{\prime}_q(x)\right)^2\,+\,q(x)u^2_q(x)\right) \, dx\qquad\mbox{and}\qquad \|u_q\|_{L_2}\,:=\,\left( \int_0^1u_q^2(x)\,dx\right)^{1/2}\,=\,1. \end{equation} Observe that $q\in {\bf Q}$ implies that $u_q\in C^4([0,1])$. Since $\|q\|\le 1$ and $\|u_q\|_{L_2}=1$ with $u_q(0)=u_q(1)=0$, the derivatives $|u^{(i)}_q(x)|$ are uniformly bounded for all $i=0,1,\dots,4$, $x\in [0,1]$ and $q\in {\bf Q}$, see, e.g., \cite[p.~337]{courant}. The smallest eigenvalue ${\lambda}(q)$ is a non-decreasing function of $q$, i.e., $q_1(x)\le q_2(x)$ for $x\in [0,1]$ implies ${\lambda}(q_1)\le {\lambda}(q_2)$. It is known that for $q\equiv c$ we have $$ {\lambda}(c)\,=\,\pi^2+c\qquad\mbox{and}\qquad u_c(x)\,=\,\sqrt{2}\,\sin(\pi x). $$ This implies that for $q\in {\bf Q}$, we have ${\lambda}(q)\in [{\lambda}(0),{\lambda}(1)]=[\pi^2,\pi^2+1]$. We will need estimates of the smallest eigenvalues and their eigenfunctions for perturbed functions $q$. This is a classical problem and many such estimates can be found in the literature, not only for the simplified Sturm-Liouville problem that we consider in this paper but also for more general eigenvalue problems. In our case, the problem of perturbed eigenvalues and eigenvectors is well-conditioned, since the differential operator ${\mathbb L}_q$ is symmetric and the eigenvalues of ${\mathbb L}_q$ are well separated.
Combining results from \cite{courant,keller,titchmarsh} one can obtain the following estimates for $q,\bar q\in {\bf Q}$: \begin{eqnarray} |{\lambda}(q)-{\lambda}(\bar q)|\,&\le&\,\|q-\bar q\|_{\infty}\,:=\,\max_{x\in [0,1]}|q(x)-\bar q(x)|,\label{111}\\ \|u_q-u_{\bar q}\|_{\infty}\,&=&\,O\left(\|q-\bar q\|_{\infty} \right),\label{222}\\ {\lambda}(q)\,&=&\,{\lambda}(\bar q)\,+\,\int_0^1\left(q(x)-\bar q(x)\right) u_{\bar q}^2(x)\,dx \,+\, O\left(\|q-\bar q\|_{\infty}^2\right).\label{333} \end{eqnarray} We stress that the factors in the big-$O$ notation are independent of $q$ and $\bar q$. These relations follow by elementary arguments. Indeed, (\ref{111}) follows from (\ref{var}) by taking $u=u_{\bar q}$, which leads to ${\lambda}(q)-{\lambda}(\bar q)\le \|q-\bar q\|_{\infty}$. By interchanging the roles of $q$ and $\bar q$ we get ${\lambda}(\bar q)-{\lambda}(q)\le \|q-\bar q\|_{\infty}$, which implies (\ref{111}). The next relation (\ref{222}) can also be proved by a matrix approximation to the operator ${\mathbb L}_q$, which will be done in Section~4. Finally, (\ref{333}) follows by again taking $u=u_{\bar q}$ in (\ref{var}), which leads to \begin{eqnarray*} {\lambda}(q)\,&\le&\,{\lambda}(\bar q)+ \int_0^1\left(q(x)-\bar q(x)\right)\,u_{\bar q}^2(x)\,dx \\ &=&\,{\lambda}(\bar q)+ \int_0^1\left(q(x)-\bar q(x)\right)\,u_{q}^2(x)\,dx + \int_0^1\left(q(x)-\bar q(x)\right)\,\left(u_{\bar q}^2(x)-u_q^2(x)\right)\,dx. \end{eqnarray*} By (\ref{222}), the last term is of order $\|q-\bar q\|^2_{\infty}$. Taking $u=u_q$ in the expression (\ref{var}) defining ${\lambda}(\bar q)$, we obtain $$ {\lambda}(\bar q)\,\le\,{\lambda}(q)+\int_0^1\left(\bar q(x)-q(x)\right)u_q^2(x)\,dx. $$ The last two inequalities imply (\ref{333}). We shall see later that the formula (\ref{333}) will be very useful in deriving lower bounds for classical algorithms. Note that if we take $\bar q\equiv \tfrac12$, then the formula (\ref{333}) becomes (\ref{444}).
\section{Classical Algorithms} In this section we consider classical algorithms, i.e., algorithms on a classical (non-quantum) computer. These algorithms can be either deterministic or randomized. They use information about the functions $q$ from ${\bf Q}$ by computing $q(t_i)$ for some discretization points $t_i\in [0,1]$. Here, $i=1,2,\dots,n_q$, for some $n_q$, and the points $t_i$ can be adaptively chosen, i.e., $t_i$ can be a function $$ t_i\,=\,t_i(t_1,q(t_1),\dots,t_{i-1},q(t_{i-1})), $$ of the previously computed function values and points for $i\ge 2$. The number $n_q$ can also be adaptively chosen, see, e.g., \cite{traub} for details. A classical deterministic algorithm produces an approximation $$ \phi(q)\,=\,\phi (q(t_1),\dots,q(t_{n_q})) $$ to the smallest eigenvalue ${\lambda}(q)$ based on finitely many values of $q$ computed at deterministic points. Let $n=\sup_{q\in {\bf Q}}n_q$. We assume that $n<\infty$. The worst case error of such a deterministic algorithm $\phi$ is given by \begin{equation} e^{\cld}(\phi,n) = \sup_{q\in {\bf Q}}|{\lambda}(q) - \phi(q)|. \label{eq:cde} \end{equation} A classical randomized algorithm produces an approximation to ${\lambda}(q)$ based on finitely many values of $q$ computed at random points, and is of the form $$ \phi_{\omega}(q)\,=\,\phi_{\omega}(q(t_{1,\omega}), \dots,q(t_{n_{q,\omega},\omega})), $$ where $\phi_\omega,t_{i,\omega}$ and $n_{q,\omega}$ are random variables. We assume that the mappings \begin{eqnarray} \omega &\mapsto& t_{i,\omega}\,=\, t_i(t_{1,\omega},q(t_{1,\omega}),\dots,t_{i-1,\omega},q(t_{i-1,\omega})), \nonumber \\ \omega &\mapsto& \phi_\omega \nonumber,\\ \omega &\mapsto& n_{q,\omega} \nonumber \end{eqnarray} are measurable. Let $n_q={\mathbb E}(n_{q,\omega})$ be the expected number of values of the function $q$ with respect to~$\omega$ . As before, we assume that $n\,=\,\sup_{q\in {\bf Q}}n_q<\infty$. 
The randomized error of such a randomized algorithm $\phi$ is given by \begin{equation} e^{\clr}(\phi, n)=\sup_{q\in {\bf Q}}\left( {\mathbb E}[{\lambda}(q) - \phi_\omega(q)]^2 \right)^{1/2}. \label{eq:cre} \end{equation} For simplicity and brevity we consider the error of randomized algorithms in the $L_2$ sense. It is straightforward to extend our results to the error of randomized algorithms defined in the $L_p$-sense with $p\in [1,\infty]$. We denote the minimal number of function values needed to compute an ${\varepsilon}$-approximation of the Sturm-Liouville eigenvalue problem in the worst case and randomized settings by \begin{eqnarray*} \nwor\,&=&\,\min\{\,n:\ \exists\ \phi \ \mbox{such that}\ e^{\cld}(\phi,n)\,\le\,{\varepsilon}\;\}\ \ \mbox{and}\\ \nran\,&=&\,\min\{\,n:\ \exists\ \phi \ \mbox{such that}\ e^{\clr}(\phi,n)\,\le\,{\varepsilon}\;\}, \end{eqnarray*} respectively. \subsection{Lower Bounds} We now prove lower bounds on $\nwor$ and $\nran$. \begin{thm}\label{thm1} $$ \nwor\,=\,\Omega\left({\varepsilon}^{-1/2}\right),\qquad \nran\,=\,\Omega\left({\varepsilon}^{-2/5}\right). $$ \end{thm} \vskip 1pc \noindent {\it Proof.\ } Define \begin{equation}\label{classF} F\,=\,\left\{\,f:\ f \in C^2([0,1]),\ \max\left(\|f\|_{\infty},\|f^{\prime}\|_{\infty}, \|f^{\prime\prime}\|_{\infty}\right)\,\le\,1\,\right\}, \end{equation} and consider the weighted integration problem $$ I(f)\,=\int_0^1f(x)\sin^2(\pi x)\,dx\qquad \forall\,f\in F. $$ It is well-known that any algorithm using $n$ function values for approximating this weighted integration problem has error at least proportional to $n^{-2}$ in the worst case setting, and to $n^{-2.5}$ in the randomized setting, see \cite{novak,traub}\footnote{Formally, these results are proved for $I(f)=\int_0^1f(x)\,dx$. However, the same proofs can be applied for the integration problem with the weight $\sin^2(\pi x)$ and the same lower bounds hold.}.
For $c\,\in\,(0,\tfrac12]$, consider the class \begin{equation}\label{classfc} F_c\,=\,\{\,f\in F\,: \ \|f\|_{\infty}\,\le\,c\,\}. \end{equation} When $n^{-2}$ is much smaller than $c$, the proofs for the class $F$ can be used to deduce the same lower bounds on algorithms approximating the weighted integration problem for the class~$F_c$. For $f\in F_c$ define $q=\tfrac12+f$. Then $q\in {\bf Q}$. {}From (\ref{444}) we have $$ {\lambda}(q)\,=\,\pi^2\,+\,\tfrac12\,+\,2\,I(f)\,+\,O(c^2). $$ For any algorithm $\phi$ using $n$ function values of $q$ for the Sturm-Liouville eigenvalue problem, define the algorithm $\psi(f)=\tfrac12(\phi(q)-\pi^2-\tfrac12)$ for the weighted integration problem. Then $\psi$ uses $n$ function values of $f$, and \begin{equation}\label{777} {\lambda}(q)-\phi(q)\,=\,2\left(I(f)-\psi(f)\right)\,+\,O(c^2). \end{equation} Let $c=n^{-3/2}$. Then $n^{-2}=o(c)$, and therefore the error of $\phi$ is lower bounded by $\Omega(n^{-2})$ in the worst case setting, and by $\Omega(n^{-2.5})$ in the randomized setting. Hence, to guarantee that the error of $\phi$ is at most ${\varepsilon}$ we must have $n=\Omega({\varepsilon}^{-1/2})$ in the worst case setting, and $n=\Omega({\varepsilon}^{-2/5})$ in the randomized setting. Since this holds for an arbitrary algorithm $\phi$, the proof is complete. \qed \subsection{Upper Bounds in the Worst Case Setting} We now discuss upper bounds on $\nwor$, as well as bounds on the complexity in the worst case setting. The worst case cost of an algorithm $\phi$ using $n$ function values is defined as $$ \costwor(\phi)\,=\,\sup_{q\in {\bf Q}}\left(\cc\, n_q + m_q\right) , $$ where $m_q$ is the number of arithmetic operations used by the algorithm for a function $q$ from~${\bf Q}$.
The worst case complexity $\compwor({\varepsilon})$ is defined as the minimal cost of an algorithm whose worst case error is at most ${\varepsilon}$, $$ \compwor({\varepsilon})\,=\,\min\left\{\,\costwor(\phi)\,:\ \phi\ \mbox{such that}\ e^{\cld}(\phi,n)\,\le\, {\varepsilon}\,\right\}. $$ Obviously, $\compwor({\varepsilon})\,\ge\,\cc\,\nwor$. We now discuss the classical algorithm for the Sturm-Liouville eigenvalue problem, see e.g., \cite{demmel,keller}, and show that it is almost optimal in the worst case setting. This algorithm uses $n=\Theta({\varepsilon}^{-1/2})$ function values of $q$ at the equidistant points $i/(n+1)$ for $i=1,2,\dots,n$. Then the operator ${\mathbb L}_q$ is approximated by the tridiagonal $n\times n$ matrix $M_q$ of the form $$ M_q\,=\,(n+1)^2\, \left[ \begin{array}{ccccc} 2 & -1 & & & \\ -1 & 2 & -1 & &\\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{array} \right] + \left[ \begin{array}{ccccc} q(\tfrac1{n+1}) & & & & \\ & q(\tfrac2{n+1}) & & & \\ & & \ddots & & \\ & & & q(\tfrac{n-1}{n+1}) & \\ & & & & q(\tfrac{n}{n+1}) \end{array} \right]. $$ Clearly, $M_q$ is a symmetric and positive definite matrix. Let ${\lambda}_j={\lambda}_j(M_q)$ and $z_j=z_j(M_q)$ be the eigenvalues and eigenvectors of $M_q$, i.e., $M_qz_j={\lambda}_jz_j$ with $$ {\lambda}_1\,\le\,{\lambda}_2\,\le\,\cdots\,\le\,{\lambda}_n, $$ where the vectors $z_j$ are orthogonal and normalized such that $$ \|z_j\|_{L_2}^2\,:=\,\frac1n\sum_{k=1}^nz_{j,k}^2=1 $$ with $z_{j,k}$ being the $k$th component of $z_j$. Note that we use the subscript $L_2$ in the norm of a vector to stress similarity to the $L_2$ norm of functions, and to distinguish from the Euclidean second norm. Clearly, $\|z_j\|_{L_2}=\tfrac1{\sqrt{n}}\|z_j\|_2$.
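A minimal numerical sketch of this discretization, with the smallest eigenvalue of $M_q$ located by bisection on Sturm-sequence (inertia) counts; the function names, the tolerance, and the sample size $n=200$ are our own choices:

```python
import math

def count_below(d, e, x):
    """Number of eigenvalues smaller than x of the symmetric
    tridiagonal matrix with diagonal d and off-diagonal e, via the
    Sturm-sequence (LDL^T inertia) recurrence."""
    count, piv = 0, 1.0
    for k in range(len(d)):
        piv = d[k] - x - (e[k - 1] ** 2 / piv if k > 0 else 0.0)
        if piv == 0.0:
            piv = -1e-30          # nudge an exactly zero pivot
        if piv < 0.0:
            count += 1
    return count

def lambda1(q, n, tol=1e-8):
    """Smallest eigenvalue of the n x n matrix M_q by bisection."""
    h2 = float((n + 1) ** 2)
    d = [2.0 * h2 + q((i + 1) / (n + 1)) for i in range(n)]
    e = [-h2] * (n - 1)
    lo, hi = 0.0, 4.0 * h2 + 1.0  # crude enclosure of the spectrum
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# For q = 1/2 the discretization reproduces lambda(1/2) = pi^2 + 1/2
# up to the O(n^-2) discretization error.
print(lambda1(lambda x: 0.5, 200), math.pi ** 2 + 0.5)
```

Each bisection step costs $O(n)$ arithmetic operations, and roughly $\log\,{\varepsilon}^{-1}$ steps suffice, matching the cost bound discussed below.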
For $q\equiv c$, it is known, see, e.g., \cite{demmel}, that $$ {\lambda}_j(M_c)\,=\,c\,+\,4(n+1)^2\sin^2\left(\frac{j\pi}{2(n+1)}\right), $$ and $z_j(M_c)=[z_{j,1}(M_c),z_{j,2}(M_c),\dots,z_{j,n}(M_c)]^T$ with $$ z_{j,k}(M_c)\,=\,\left(\frac{2n}{n+1}\right)^{1/2}\, \sin\left(\frac{jk\pi}{n+1}\right). $$ It is known, see, e.g., \cite{keller}, that the smallest eigenvalue ${\lambda}_1(M_q)$ of the matrix $M_q$ approximates the smallest eigenvalue ${\lambda}(q)$ of the operator ${\mathbb L}_q$ with error of order $n^{-2}$, i.e., $$ {\lambda}(q)\,-\,{\lambda}_1(M_q)\,=\,O\left(n^{-2}\right)\,=\,O({\varepsilon}). $$ Hence, it is enough to approximate ${\lambda}_1(M_q)$ with error of order ${\varepsilon}$. This can be achieved by using roughly $\log\,{\varepsilon}^{-1}$ bisection steps. Each step consists of computing the $n$ terms of the Sturm sequence, and this can be done in cost proportional to $n$. The total cost is of order $(\cc\,+\log\,{\varepsilon}^{-1}){\varepsilon}^{-1/2}$. For details, see \cite{demmel,wilkinson}. Theorem \ref{thm1} and the cost of this algorithm lead to the following bounds for the minimal number of function values and for the worst case complexity. \begin{thm} $$ \nwor\,=\,\Theta({\varepsilon}^{-1/2}),\qquad \Omega(\cc\,{\varepsilon}^{-1/2})= \compwor({\varepsilon})\,=\, O(\cc\,{\varepsilon}^{-1/2}\,+\, {\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}). $$ \end{thm} \vskip 2pc {\bf Remark 4.1.} \ We now show how (\ref{222}) can be proven, based on the properties of the matrix~$M_q$. First observe that for $q=0$, the eigenvalues ${\lambda}_j(M_0)$ are well separated, since \begin{eqnarray*} {\lambda}_{j+1}(M_0)-{\lambda}_j(M_0)\,&=&\,4(n+1)^2\sin\frac{(2j+1)\pi}{2(n+1)}\, \sin\frac{\pi}{2(n+1)}\\ \,&\ge&\,4(n+1)^2\sin\frac{3\pi}{2(n+1)}\, \sin\frac{\pi}{2(n+1)}\,\approx\, 3\pi^2. 
\end{eqnarray*} For $q\in{\bf Q}$, the Hermitian matrix $M_q$ differs from $M_0$ by the diagonal matrix diag$\,q(i/(n+1))$ whose elements satisfy $q(i/(n+1))\in[0,\|q\|_{\infty}]$ with $\|q\|_{\infty}\le1$. Using the known estimates on the perturbed eigenvalues of Hermitian matrices, see \cite{wilkinson}, we have $$ \min_{i=1,2,\dots,n}\left|{\lambda}_j(M_q)-{\lambda}_i(M_0)\right|\,\le\,\|q\|_{\infty} $$ for all $j=1,2,\dots,n$. Since the intervals $[{\lambda}_i(M_0)-1,{\lambda}_i(M_0)+1]$ are disjoint, we conclude that $$ \left|{\lambda}_j(M_q)-{\lambda}_j(M_0)\right|\,\le\,\|q\|_{\infty}\,\le\,1, $$ and that $$ {\lambda}_{j+1}(M_q)-{\lambda}_j(M_q)\,\ge\,{\lambda}_{j+1}(M_0)-{\lambda}_j(M_0)-2 \,\approx\, 3\pi^2-2. $$ Define $$ \tilde{u}_{q,n}\,=\,\left[u_q\left(\frac1{n+1}\right),\dots, u_q\left(\frac{n}{n+1}\right)\right]^T, $$ where $u_q$ is the normalized real eigenfunction corresponding to the smallest eigenvalue. Then $\|\tilde{u}_{q,n}\|_{L_2}=1+o(1)$. We normalize $\tilde{u}_{q,n}$ and obtain $$ u_{q,n}\,=\,\frac1{\|\tilde{u}_{q,n}\|_{L_2}}\,\tilde{u}_{q,n}. $$ As mentioned in Section 3, the eigenfunction $u_q$ is defined uniquely up to its sign. Obviously, the same is true for the\ eigenvector $z_1(M_q)$. We choose the signs of $u_q$ and $z_1(M_q)$ such that $$ \|u_{q,n}-z_1(M_q)\|_{L_2}\,\le\,\|u_{q,n}+z_1(M_q)\|_{L_2}. $$ All the components of the vector $$ \eta_n\,:=\,M_qu_{q,n}-{\lambda}(q)u_{q,n} $$ are of order $n^{-2}$, and therefore $\|\eta_n\|_{L_2}=O(n^{-2})$. {}From the a posteriori error estimate, see \cite[p.~173]{wilkinson}, we conclude that $$ \|u_{q,n}-z_1(M_q)\|_{L_2}\,=\,O(n^{-2})\qquad \forall\, q\in {\bf Q} $$ with the factor in the big-$O$ notation independent of $q$. Note also that $$ M_qu_{\bar q,n}-{\lambda}(q)u_{\bar q,n}\,=\,M_{\bar q}u_{\bar q,n}-{\lambda}(\bar q)u_{\bar q,n}\,+r_n, $$ with $\|r_n\|_{L_2}=O(\|q-\bar q\|_{\infty})$. Hence $$ \|u_{\bar q,n}-z_1(M_q)\|_{L_2}\,=\,O(\|q-\bar q\|_{\infty}+n^{-2}). 
$$ Finally, we have $$ \|u_{q,n}-u_{\bar q,n}\|_{L_2}\,=\,\|u_{q,n}-z_1(M_q)+z_1(M_q)-u_{\bar q,n}\|_{L_2} \,=\, O(n^{-2}+\|q-\bar q\|_{\infty}). $$ Letting $n$ tend to infinity, we conclude that $$ \|u_q-u_{\bar q}\|_{L_2}\,=O(\|q-\bar q\|_{\infty}). $$ Since both $u_q$ and $u_{\bar q}$ satisfy (\ref{eq:SL1}) for $(q,{\lambda}(q))$ and $(\bar q,{\lambda}(\bar q))$, respectively, we have $$ u^{\prime\prime}_q(x)-u^{\prime\prime}_{\bar q}(x)\,=\, (q(x)-{\lambda}(q))(u_q(x)-u_{\bar q}(x))\,+\,u_{\bar q}(x)\left( (q(x)-\bar q(x))-({\lambda}(q)-{\lambda}(\bar q))\right). $$ Therefore $$ \|u^{\prime\prime}_q-u^{\prime\prime}_{\bar q}\|_{L_2}\,=\, O(\|q-\bar q\|_{\infty}). $$ This and the fact that $u_q-u_{\bar q}$ vanishes at $0$ and $1$ imply $$ \|u_q-u_{\bar q}\|_{\infty}\,=O(\|q-\bar q\|_{\infty}), $$ as claimed. \qed \subsection{Upper Bounds in the Randomized Setting} We now turn to the randomized setting. The cost of a randomized algorithm $\phi$, using $n=\sup_{q\in{\bf Q}}{\mathbb E}(n_{q,\omega})<\infty$ randomized function values, is now defined as $$ \costran(\phi)\,=\,\sup_{q\in {\bf Q}}\left({\mathbb E}\left(\cc\, n_{q,\omega} + m_{q,\omega}\right)^2\right)^{1/2} , $$ where $m_{q,\omega}$ is the number of arithmetic operations used by the algorithm for a function $q$ from~${\bf Q}$ and a random variable $\omega$. The randomized complexity $$ \compran({\varepsilon})\,=\,\min\left\{\,\costran(\phi)\,:\ \phi\ \ \mbox{such that}\ \ e^{\clr}(\phi,n)\,\le\, {\varepsilon}\,\right\}, $$ is the minimal cost of an algorithm whose randomized error is at most ${\varepsilon}$. Obviously, $\compran({\varepsilon})\,\ge\,\cc\,\nran$. We now derive upper bounds on $\nran$ and $\compran({\varepsilon})$ by presenting a randomized algorithm that depends on a number of parameters. Then we find the values of these parameters for which the randomized error is ${\varepsilon}$.
We first compute $m+1$ function values of $q$ at deterministic points $i/m$, for $i=0,1,\dots,m$, and construct a cubic natural spline $q_{{\rm cub}}$ interpolating $q$ at these points, see e.g., \cite{CK} for information about cubic splines. It is well known that this can be done with cost proportional to~$m$, and $\|q-q_{{\rm cub}}\|_{\infty}=O(m^{-2})$. The function $q_{{\rm cub}}$ need not be non-negative. However, since $q\ge0$ and $\|q-q_{{\rm cub}}\|_{\infty}=O(m^{-2})$, there is a constant $c=O(m^{-2})$ such that $\bar q = q_{{\rm cub}}+c\ge0$. We have $\bar q\in {\bf Q}$ and $\|q-\bar q\|_{\infty}=O(m^{-2})$. We apply the formula (\ref{333}) for the function $\bar q$ and obtain \begin{equation}\label{999} {\lambda}(q)\,-\,{\lambda}(\bar q)\,=\,\int_0^1\left(q(x)-\bar q(x)\right) u_{\bar q}^2(x)\,dx \,+\, O\left(m^{-4}\right). \end{equation} This suggests that we can improve the accuracy of approximating ${\lambda}(q)-{\lambda}(\bar q)$ by using the classical Monte Carlo algorithm applied to the first term of the right hand side of (\ref{999}). We will need to know, at least approximately, the eigenvalue ${\lambda}(\bar q)$ and the eigenfunction $u_{\bar q}$. Suppose we approximate ${\lambda}(\bar q)$ by ${\lambda}_{\bar q}$ with the worst case error \begin{equation}\label{1111} \sup_{q\in {\bf Q}}\left|{\lambda}(\bar q)-{\lambda}_{\bar q}\right|\,\le\, \delta_1, \end{equation} and the eigenfunction $u_{\bar q}$ by $z_{\bar q}$ with the worst case error \begin{equation}\label{2222} \sup_{q\in {\bf Q}}\|u_{\bar q}-z_{\bar q}\|_{L_2}\,\le\,\delta_2. \end{equation} Assume for a moment that ${\lambda}_{\bar q}$ and $z_{\bar q}$ have been computed. For a function $v$, define $f_v(x)=(q(x)-\bar q(x))v^2(x)$ and $I(f_v)=\int_0^1f_v(x)\,dx$.
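The integral $I(f_v)$ can be estimated by plain Monte Carlo. A toy sketch with our own illustrative choices $\bar q\equiv\tfrac12$, $q=\tfrac12+a\sin(\pi x)$, and $v=u_{1/2}(x)=\sqrt{2}\,\sin(\pi x)$, for which $I(f_v)=\int_0^1 2a\sin^3(\pi x)\,dx=8a/(3\pi)$ in closed form:

```python
import math, random

def mc_integral(f, k, rng):
    """Plain Monte Carlo estimate of int_0^1 f(x) dx with k samples."""
    return sum(f(rng.random()) for _ in range(k)) / k

# Toy instance: (q - q_bar)(x) = a sin(pi x) and v^2(x) = 2 sin^2(pi x),
# so f_v(x) = 2 a sin^3(pi x) with I(f_v) = 8 a / (3 pi).
a = 0.01
f_v = lambda x: (a * math.sin(math.pi * x)) * 2.0 * math.sin(math.pi * x) ** 2
rng = random.Random(7)
est = mc_integral(f_v, 20000, rng)
print(est, 8.0 * a / (3.0 * math.pi))
```

Since $f_v$ is of size $O(\|q-\bar q\|_\infty)$, $k$ samples give an error of order $\|q-\bar q\|_\infty k^{-1/2}$, which is the effect exploited by the algorithm below.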
The randomized algorithm $\phi$ based on the Monte Carlo with $k$ randomized samples takes the form $$ \phi_{\omega}(q)\,=\,{\lambda}_{\bar q}\,+\,\frac1k\sum_{j=1}^k\bigg(q(x_{j,\omega}) \,-\,\bar q(x_{j,\omega})\bigg)z_{\bar q}^2(x_{j,\omega}), $$ where $x_{j,\omega}$ are independent and uniformly distributed numbers from $[0,1]$. Here $\omega$ represents a random element. We have \begin{eqnarray*} \left|{\lambda}(q)-\phi_{\omega}(q)\right|\,&\le&\,|{\lambda}(\bar q)-{\lambda}_{\bar q}|\,+\, |I(f_{u_{\bar q}})-I(f_{z_{\bar q}})|\\ &+&\, \bigg|I(f_{z_{\bar q}})-\frac1k\sum_{j=1}^kf_{z_{\bar q}} (x_{j,\omega})\bigg|\,+\,O(m^{-4}). \end{eqnarray*} Clearly, $$ \left|I(f_{u_{\bar q}})-I(f_{z_{\bar q}})\right|\,\le\, \int_0^1\left|q(x)-\bar q(x)\right|\left|u_{\bar q}^2(x)-z_{\bar q}^2(x)\right|\,dx\,=\, O(m^{-2}\,\delta_2). $$ Since $\|f_{z_{\bar q}}\|_{L_2}=O(m^{-2})$, the well known formula for the randomized error of Monte Carlo yields that $$ \left({\mathbb E}_{\omega}\left(I(f_{z_{\bar q}}) -\frac1k\sum_{j=1}^k f_{z_{\bar q}}(x_{j,\omega})\right)^2\right)^{1/2}\,=\, \frac{(I(f^2_{z_{\bar q}})-I^2(f_{z_{\bar q}}))^{1/2}}{k^{1/2}}\,=\, O\left(m^{-2}k^{-1/2}\right). $$ We have obtained the bound $$ e^{\clr}(\phi,n)\,=\,O\left(\delta_1+m^{-2}\delta_2+m^{-2}k^{-1/2}+m^{-4} \right) $$ on the randomized error of $\phi$. Hence, to guarantee error at most ${\varepsilon}$, it is enough to take $$ \delta_1=\Theta({\varepsilon}),\ \ m=k=\Theta({\varepsilon}^{-2/5})\ \ \mbox{and}\ \ \delta_2=\Theta({\varepsilon}^{1/5}). $$ We now explain how to achieve (\ref{1111}) and (\ref{2222}). To get ${\lambda}_{\bar q}$ approximating ${\lambda}(\bar q)$ with error of order ${\varepsilon}$, we approximate the operator ${\mathbb L}_{\bar q}$ by the matrix $M_{\bar q}$ as in the worst case setting, now with $n=\Theta({\varepsilon}^{-1/2})$. 
Then ${\lambda}({\bar q})-{\lambda}_1(M_{\bar q})=O(n^{-2})=O({\varepsilon})$, and we compute ${\lambda}_{\bar q}$ as an ${\varepsilon}$-approximation of ${\lambda}_1(M_{\bar q})$ as for the worst case setting. This can be done with cost of order ${\varepsilon}^{-1/2}$ function values of $\bar q$, and of order ${\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}$ arithmetic operations. Since the cost of computing one function value of $\bar q$ is of order $1$, the total cost of computing ${\lambda}_{\bar q}$ is of order ${\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}$. To get $z_{\bar q}$ approximating $u_{\bar q}$ with error of order ${\varepsilon}^{1/5}$ we proceed as follows. Consider the eigenvector $z_1(M_{\bar q})$ of the matrix $M_{\bar q}$, with $n$ not yet specified. By Remark 4.1, we have \begin{equation}\label{3333} \|u_{\bar q,n} - z_1(M_{\bar q})\|_{L_2}\,=\,O(n^{-2}). \end{equation} We approximate the smallest eigenvalue ${\lambda}_1(M_{\bar q})$ by $\bar {\lambda}$, with error $\delta$. This can be achieved with cost of order $n\log\,\delta^{-1}$. Without loss of generality we assume that $\bar {\lambda}\not={\lambda}_1(M_{\bar q})$. Indeed, we can check this condition by computing $\mbox{det}(M_{\bar q}-{\bar {\lambda}}I)$ and if this determinant is zero we perturb $\bar {\lambda}$ a little. Then the matrix $$ A\,=\,\left(M_{\bar q}-{\bar {\lambda}}I\right)^{-1} $$ is non-singular and its eigenvalues are $\beta_j=({\lambda}_j(M_{\bar q})-{\bar {\lambda}})^{-1}$. Note that $|\beta_1|\ge \delta^{-1}$ and $\beta_j=O(1)$ for $j\ge2$. For the $j$th vector $e_j=[0,\dots,0,1,0,\dots,0]^T$ with $1$ in the $j$th position, define $$ x_j\,=\,A\,e_j. $$ We can compute $x_j$ with cost of order $n$ by solving the tridiagonal linear system $(M_{\bar q}-{\bar {\lambda}}I)x_j=e_j$. Then we compute $$ \|x_{j_0}\|_2\,=\,\max_{j=1,2,\dots,n}\|x_j\|_2, $$ and $$ z\,=\,\|x_{j_0}\|^{-1}_2\,x_{j_0}. $$ Observe that the cost of computing $z$ is of order $n^2$. 
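Each vector $x_j$ solves a symmetric tridiagonal system, which costs $O(n)$ arithmetic operations. A minimal sketch of such a solver (Gaussian elimination without pivoting; the names and the small test system are our own choices):

```python
def solve_tridiag(d, e, b):
    """Solve T x = b in O(n) for a symmetric tridiagonal T with
    diagonal d and off-diagonal e (no pivoting; assumes all pivots
    are nonzero)."""
    n = len(d)
    dd, bb = list(d), list(b)
    for k in range(1, n):                  # forward elimination
        w = e[k - 1] / dd[k - 1]
        dd[k] -= w * e[k - 1]
        bb[k] -= w * bb[k - 1]
    x = [0.0] * n
    x[-1] = bb[-1] / dd[-1]
    for k in range(n - 2, -1, -1):         # back substitution
        x[k] = (bb[k] - e[k] * x[k + 1]) / dd[k]
    return x

# Small check: for T = tridiag(-1, 2, -1) and b = (1, 0, 1),
# the solution is x = (1, 1, 1).
print(solve_tridiag([2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0]))
```

Solving all $n$ systems $(M_{\bar q}-\bar{\lambda}I)x_j=e_j$ this way yields the quoted total cost of order $n^2$.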
Since $\{n^{-1/2}z_j(M_{\bar q})\}_{j=1}^n$ is orthonormal, we have $\|x_j\|^2_2=\sum_{\ell=1}^n\beta^2_{\ell}(e_j,n^{-1/2}\,z_{\ell}(M_{\bar q}))^2$ and $\|n^{-1/2}\,z_1(M_{\bar q})\|^2_2=1=\sum_{j=1}^n (e_j,n^{-1/2}\,z_1(M_{\bar q}))^2$. Hence, there exists an index $j$ such that $$ (e_j,n^{-1/2}\,z_1(M_{\bar q}))^2\,\ge\, n^{-1}, $$ and therefore $$ \|x_{j_0}\|_2\,\ge\,\|x_j\|_2\,\ge\,\delta^{-1}n^{-1/2}. $$ We have $$ (M_{\bar q}-{\lambda}_1(M_{\bar q})I)z\,=\, (M_{\bar q}-{\bar {\lambda}}I)z\,+\, (\bar {\lambda} -{\lambda}_1(M_{\bar q}))z\,=\,\frac1{\|x_{j_0}\|_2}e_{j_0}\,+\, (\bar {\lambda} -{\lambda}_1(M_{\bar q}))z, $$ and therefore $$ \|(M_{\bar q}-{\lambda}_1(M_{\bar q})I)z\|_2\,\le\,\delta \sqrt{n}+\delta. $$ {}From \cite[p.~173]{wilkinson}, we conclude that $\|n^{-1/2}\,z_1(M_{\bar q})-z\|_{2}\,=\,O(\delta\,\sqrt{n})$, and \begin{equation}\label{5555} \|z_1(M_{\bar q})-\sqrt{n}\,z\|_{L_2}\,=\,O(\delta \sqrt{n}). \end{equation} We are finally ready to define $z_{\bar q}$ by piecewise linear interpolation from the successive components of the vector $\sqrt{n}\,z=[z_1,z_2,\dots,z_n]^T$. More precisely, for $j=0,1,\dots,n$ let $t_j=j/(n+1)$. For $t\in[t_j,t_{j+1}]$, we set $$ z_{\bar q}(t)\,=\,z_j (1-(n+1)t+j)\,+\,z_{j+1} ((n+1)t-j) $$ with $z_0=z_{n+1}=0$. We need to estimate $u_{\bar q}-z_{\bar q}$ in the $L_2$ norm. Observe that for $t\in[t_j,t_{j+1}]$ we have $$ u_{\bar q}(t)\,=\,u_{\bar q}(t_j)(1-(n+1)t+j)\,+\,u_{\bar q}(t_{j+1})((n+1)t-j)\,+\,O(n^{-2}) $$ since the second derivative of $u_{\bar q}$ is uniformly bounded. Therefore $$ |u_{\bar q}(t)-z_{\bar q}(t)|\,\le\, |u_{\bar q}(t_j)-z_{\bar q}(t_j)|\,+\, |u_{\bar q}(t_{j+1})-z_{\bar q}(t_{j+1})|\,+\,O(n^{-2}). $$ This yields \begin{eqnarray*} \|u_{\bar q}-z_{\bar q}\|^2_{L_2}\,&=&\,\sum_{j=0}^n\int_{t_j}^{t_{j+1}} \left(u_{\bar q}(t)-z_{\bar q}(t)\right)^2dt\\ &=&\,O\left(\frac1{n+1}\sum_{j=0}^n\left(u_{\bar q}(t_j)- z_{\bar q}(t_j)\right)^2\,+\,n^{-4}\right).
\end{eqnarray*} Hence, $$ \|u_{\bar q}-z_{\bar q}\|_{L_2}\,=\,O\left(\|u_{\bar q,n}-\sqrt{n}\,z\|_{L_2}\,+\,n^{-2}\right). $$ Since $\|u_{\bar q,n}-\sqrt{n}\,z\|_{L_2}\,\le \|u_{\bar q,n}-z_1(M_{\bar q})\|_{L_2}\,+\, \|z_1(M_{\bar q})-\sqrt{n}\,z\|_{L_2}$, we use (\ref{3333}) and (\ref{5555}) to see that $$ \|u_{\bar q}-z_{\bar q}\|_{L_2}\,=\,O(\delta \sqrt{n}+n^{-2}). $$ For $\delta=n^{-5/2}$ we obtain $$ \|u_{\bar q}-z_{\bar q}\|_{L_2}\,=\,O(n^{-2}). $$ Setting $n=\Theta({\varepsilon}^{-1/10})$ we obtain (\ref{2222}) with $\delta_2=\Theta({\varepsilon}^{1/5})$. The cost of computing $z_{\bar q}$ is of order $n^2=\Theta({\varepsilon}^{-1/5})$. Theorem \ref{thm1} and the cost of this randomized algorithm lead to the following bounds on the minimal number of function values and the randomized complexity. \vskip 1pc \begin{thm} $$ \nran\,=\,\Theta({\varepsilon}^{-2/5}),\qquad \Omega(\cc\,{\varepsilon}^{-2/5})= \compran({\varepsilon})\,=\, O(\cc\,{\varepsilon}^{-2/5}\,+\, {\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}). $$ \end{thm} \section{Quantum Setting} We now turn our attention to the quantum setting. In this setting, we are using {\it hybrid} algorithms that are combinations of classical algorithms using function values, as explained in the previous sections, and quantum algorithms which we now describe. A quantum algorithm applies a sequence of unitary transformations to an initial state, and the final state is measured, see \cite{beals,cleve,heinrich,nielsen} for the details of the quantum model of computation. We briefly summarize this model to the extent necessary for this paper. The initial state $|\psi_0\rangle$ is a unit vector of the Hilbert space $\Cal{H}_\nu=\complex^2\otimes \cdots\otimes \complex^2$, $\nu$ times, for some appropriately chosen integer $\nu$, where $\complex^2$ is the two dimensional space of complex numbers. Obviously, the dimension of $\Cal{H}_\nu$ is $2^{\nu}$. The number $\nu$ denotes the number of qubits used in quantum computation.
The final state $|\psi\rangle$ is also a unit vector of $\Cal{H}_\nu$ and is obtained from the initial state $|\psi_0\rangle$ by applying a number of unitary $2^{\nu}\times 2^{\nu}$ matrices, i.e., \begin{equation} |\psi\rangle\,:=\,U_TQ_YU_{T-1}Q_Y\cdots U_1Q_YU_0 |\psi_0\rangle. \label{eq:qa} \end{equation} Here, $U_0,U_1,\dots,U_T$ are unitary matrices that do not depend on the input function $q$. The unitary matrix $Q_Y$ with $Y=[q(t_1),\dots,q(t_n)]$ is called a quantum query and depends on $n$, with $n\le 2^{\nu}$, function evaluations of $q$ computed at some non-adaptive points $t_i\in[0,1]$. The quantum query $Q_Y$ is the only source of information about $q$. The integer $T$ denotes the number of quantum queries we choose to use. At the end of the quantum algorithm, a measurement is applied to its final state $|\psi \rangle$. The measurement produces one of $M$ outcomes, where $M\le 2^{\nu}$. Outcome $j\in\{0,1,\dots,M-1\}$ occurs with probability $p_Y(j)$, which depends on $j$ and the input $Y$. For example, if $M=2^{\nu}$ and the final state is $|\psi\rangle=\sum_{j=0}^{2^\nu-1}c_j|j\rangle$, with $\sum_{j=0}^{2^\nu-1}|c_j|^2=1$, then a measurement in the computational orthonormal basis $\{|j\rangle\}$ produces the outcome $j$ with probability $p_Y(j)=|c_j|^2$. Knowing the outcome $j$, we compute an approximation $\hat{\lambda}_Y(j)$ of the smallest eigenvalue on a classical computer. In principle, quantum algorithms may have many measurements applied between sequences of unitary transformations of the form presented above. However, any algorithm with many measurements and a total of $T$ quantum queries can be simulated by a quantum algorithm with only one measurement at the end, for details see e.g., \cite{heinrich}. We stress that classical algorithms in floating or fixed point arithmetic can also be written in the form of (\ref{eq:qa}). Indeed, all classical bit operations can be simulated by quantum computations, see e.g., \cite{bernstein}. 
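A toy classical simulation of the measurement rule above, for a single qubit ($\nu=1$); the Hadamard matrix is our own illustrative choice of unitary:

```python
import math

def apply(U, state):
    """Apply a unitary matrix (list of rows) to a state vector."""
    return [sum(U[r][c] * state[c] for c in range(len(state)))
            for r in range(len(U))]

def probabilities(state):
    """Measurement in the computational basis: p(j) = |c_j|^2."""
    return [abs(c) ** 2 for c in state]

# One-qubit example: the Hadamard gate maps |0> to an equal
# superposition, so each outcome occurs with probability 1/2.
H = [[1.0 / math.sqrt(2.0),  1.0 / math.sqrt(2.0)],
     [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]]
psi = apply(H, [1.0, 0.0])
print(probabilities(psi))
```

Of course, such a classical simulation costs $2^\nu$ arithmetic operations per unitary; the point of the quantum setting is that the quantum computer applies these transformations directly.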
Classically computed function values will correspond to bit queries, which we discuss in Section 5.2. In our case, we formally use the real number model of computation. Since the Sturm-Liouville eigenvalue problem is well conditioned and properly normalized, we obtain practically the same results in floating or fixed point arithmetic. More precisely, it is enough to use $O(\log\,{\varepsilon}^{-1})$ mantissa bits, and the cost of bit operations in floating or fixed point arithmetic is of the same order as the cost in the real number model multiplied by a power of $\log\,{\varepsilon}^{-1}$. Hence, a hybrid algorithm may be viewed as a finite sequence of algorithms of the form (\ref{eq:qa}). It is also known that if we use finitely many algorithms of the form (\ref{eq:qa}) then they can be written as one quantum algorithm of the form (\ref{eq:qa}), see \cite{heinrich,H03}. That is why an arbitrary hybrid algorithm in the quantum setting is of the form (\ref{eq:qa}). This is important when we want to prove lower bounds because it is enough to work with algorithms of the form (\ref{eq:qa}). For upper bounds, it seems to us more natural to distinguish between classical and quantum computations and charge their cost differently. The cost of classical computations is defined as before whereas the cost of quantum computations is defined as the sum of the number of quantum queries multiplied by the cost of one query, and the number of quantum operations besides quantum queries. It will also be important to indicate how many qubits are used by the quantum computations. We now define the error in the quantum setting. In this setting, we want to approximate the smallest eigenvalue ${\lambda}(q)$ with a probability $p>\tfrac12$. For simplicity, we take $p=\tfrac34$ for the rest of this section.
As is common for quantum algorithms, we can achieve an ${\varepsilon}$-approximation with probability arbitrarily close to $1$ by repetition of the original quantum algorithm, and by taking the median as the final approximation. The local error of the quantum algorithm with $T$ queries that computes $\hat {\lambda}_Y(j)$ for the function $q\in {\bf Q}$ and the outcome $j\in\{0,1,\dots,M-1\}$ is defined by \begin{equation*} e(\hat{\lambda}_Y,T)\,=\,\min \bigg\{\, {\alpha} :\quad \sum_{j:\ |{\lambda}(q) - \hat {\lambda}_{Y}(j)|\,\le\, {\alpha}\,}p_Y(j)\geq \tfrac34\,\bigg\}. \label{eq:localperr} \end{equation*} This can be equivalently rewritten as $$ e(\hat{\lambda}_Y, T)\,=\,\min_{A:\, \mu(A)\ge \tfrac34}\max_{j\in A} \big|{\lambda}(q)-\hat {\lambda}_Y(j)\big|, $$ where $A\subset\{0,1,\dots,M-1\}$ and $\mu(A)=\sum_{j\in A}p_Y(j)$. The {\it worst probabilistic} error of a quantum algorithm $\hat{\lambda}$ with $T$ queries for the Sturm-Liouville eigenvalue problem is defined by \begin{equation} e^{\w}(\hat{\lambda}, T)\,=\,\sup\bigg\{\, e(\hat {\lambda}_Y,T)\colon \ Y=[q(t_1),\dots,q(t_n)],\ \ t_i\in [0,1], \ \ \mbox{for}\ \ q\in {\bf Q} \,\bigg\}. \label{eq:wperr} \end{equation} \subsection{Bit Queries} Quantum queries are important in the complexity analysis of quantum algorithms. A quantum query corresponds to a function evaluation in classical computation. By analogy with the complexity analysis of classical algorithms, we analyze the cost of quantum algorithms in terms of the number of quantum queries that are necessary to compute an ${\varepsilon}$-approximation with probability~$\tfrac34$. Clearly, this number is a lower bound on the quantum complexity, which is defined as the minimal total cost of a quantum algorithm that solves the problem. Different quantum queries have been studied in the literature. Probably the most commonly studied query is the {\it bit} query.
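The repetition-and-median amplification mentioned above can be illustrated classically. In the toy sketch below the base run is merely a random model of an algorithm that succeeds with probability $\tfrac34$; all names and constants are our own choices:

```python
import math, random, statistics

def boosted(run, repetitions, rng):
    """Median of independent repetitions of a base algorithm that is
    accurate only with probability 3/4."""
    return statistics.median(run(rng) for _ in range(repetitions))

# Toy base algorithm: returns the true value with probability 3/4 and
# an arbitrary wrong value otherwise.  The median of 2k+1 runs fails
# only when at least k+1 runs fail, an exponentially unlikely event.
true_val, eps = math.pi ** 2, 0.1
def run(rng):
    return true_val if rng.random() < 0.75 else true_val + 10.0 * rng.random()

rng = random.Random(1)
fails = sum(abs(boosted(run, 15, rng) - true_val) > eps
            for _ in range(1000))
print(fails, "failures out of 1000")
```

With $15$ repetitions the failure probability drops from $\tfrac14$ to below a few percent, and it decreases exponentially with the number of repetitions.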
For a Boolean function $f:\{0,1,\dots,2^m-1\}\to\{0,1\}$, the bit query is defined by $$ Q_f|j\rangle|k\rangle\,=\,|j\rangle|k\oplus f(j)\rangle. $$ Here $\nu=m+1$, $|j\rangle\in\Cal{H}_m$, and $|k\rangle\in\Cal{H}_{1}$ with $\oplus$ denoting the addition modulo $2$. For real functions, such as functions $q$, the bit query is constructed by taking the most significant bits of the function $q$ evaluated at some points $t_j$. More precisely, as in \cite{heinrich}, the bit query for $q$ has the form $$ Q_q|j\rangle|k\rangle\,=\,|j\rangle|k\oplus \beta(q(\tau(j)))\rangle, $$ where the number of qubits is now $\nu=m'+m''$ and $|j\rangle\in \Cal{H}_{m'}$, $|k\rangle\in\Cal{H}_{m''}$ with some functions $\beta:[0,1]\to\{0,1,\dots,2^{m''}-1\}$ and $\tau:\{0,1,\dots,2^{m'}-1\}\to[0,1]$. Hence, we compute $q$ at $t_j=\tau(j)\in[0,1]$ and then take the $m''$ most significant bits of $q(t_j)$ by $\beta(q(t_j))$, for details and a possible use of ancilla qubits see again \cite{heinrich}. Using bit queries, the well known quantum algorithm of Grover \cite{grover} requires $\Theta(N^{1/2})$ queries for searching an unordered database of $N$ items. Similarly, the quantum summation algorithm of Brassard et al.~\cite{brassard} computes the mean of a Boolean function defined on the set of $N$ elements with accuracy ${\varepsilon}$ and probability $\tfrac34$ using of order $\min\{N, {\varepsilon}^{-1}\}$ bit queries. Both algorithms are optimal modulo multiplicative factors in terms of the number of bit queries. The quantum summation algorithm can also be used for the approximate computation of the mean of a real function $f:[0,1]\to\reals$ with $|f(x)|\le M$ for all $x\in[0,1]$, see \cite{heinrich,novak}.
More precisely, if we want to approximate $$ \mbox{S}_N(f)\,:=\,\frac1N\sum_{j=0}^{N-1}f(x_j) $$ for some $x_j\in [0,1]$ and $N$, then the quantum summation algorithm $\mbox{QS}_N(f)$ approximates $\mbox{S}_N(f)$ such that \begin{equation}\label{157} |\mbox{S}_N(f)-\mbox{QS}_N(f)|\,\le\,{\varepsilon} \qquad\mbox{with probability}\ \tfrac34 \end{equation} using of order $\min(N,M{\varepsilon}^{-1})$ bit queries, $\min(N,M{\varepsilon}^{-1})\,\log\,N$ quantum operations, and $\log\,N$ qubits. Bit queries have also been used for a number of continuous problems such as multivariate and path integration, multivariate approximation, and ordinary differential equations. Tight bit query complexity bounds are known for a number of such problems, see \cite{heinrich,H03,H04a,H04b,Kacewicz,N01,TW02}. In particular, Novak \cite{N01} proved that for the integration problem $\int_0^1f(x)\,dx$ for functions $f$ from the class $F$ given by (\ref{classF}), the bit query complexity is \begin{equation}\label{novak} n^{{\rm bit-query}}({\varepsilon},\mbox{INT}_F)\,=\,\Theta({\varepsilon}^{-1/3}). \end{equation} Here and elsewhere by the bit query complexity we understand the minimal number of bit queries needed to compute an ${\varepsilon}$-approximation to a given problem with probability~$\tfrac34$. In particular, $n^{\textrm{\rm bit-query}}({\varepsilon})$ denotes the bit query complexity of the Sturm-Liouville eigenvalue problem. Based on the result (\ref{novak}) of Novak and the relationship between the Sturm-Liouville eigenvalue problem and integration, we now prove the following theorem. \begin{thm} \begin{equation*} n^{\textrm{\rm bit-query}}({\varepsilon})\,=\,\Omega({\varepsilon}^{-1/3}).
\end{equation*} \end{thm} \vskip 1pc \noindent {\it Proof.\ } We first prove that the bit query complexity for the weighted integration problem for the class $F_c$ given by (\ref{classfc}) is of the same order as for integration for the class $F$, \begin{equation}\label{6666} n^{\textrm{\rm bit-query}}({\varepsilon},\mbox{INT}_{F_c})\,=\,\Theta({\varepsilon}^{-1/3}). \end{equation} The upper bound follows from (\ref{novak}). To prove the lower bound, we use the standard proof technique of reducing the integration problem to the mean Boolean summation problem for which a lower bound on bit queries is known. Assume then that we use an arbitrary quantum algorithm with $k$ bit queries that computes an ${\varepsilon}$-approximation with probability $\tfrac34$ for the integration problem over the class $F_c$. Without loss of generality we assume that $k^{-2}\le c$. Consider the function $h(x)={\alpha} x^3(1-x)^3$ for $x\in [0,1]$ and $h(x)=0$ for $x>1$. Here, ${\alpha}$ is a positive number chosen such that $h\in F$ with $F$ given by (\ref{classF}). For $j=0,1,\dots,N-1$, with $N>k$, define $h_j(x)=N^{-2}h(N(x-j/N))$. Clearly, $h_j\in F$ and the support of $h_j$ is $(j/N,(j+1)/N)$. Observe that $\|h_j\|_{\infty}\le N^{-2}$. Hence $h_j\in F_c$. We also have $\int_0^1h_j(x)\,dx=N^{-3}\int_0^1h(x)\,dx$. For an arbitrary Boolean function $B:\{0,1,\dots,N-1\}\to\{0,1\}$, define the function $$ f_B(x)\,=\,\sum_{j=0}^{N-1}B(j)h_j(x)\quad \forall\,x\in [0,1]. $$ Then $f_B\in F_c$ and $$ \int_0^1f_B(x)\,dx\,=\, \frac{\int_0^1h(x)\,dx}{N^2}\ \frac 1N\sum_{j=0}^{N-1}B(j). $$ Hence, modulo a factor of order $N^{-2}$, the computation of the Boolean mean is reduced to the integration problem. Note that $f_B(t)=B(j)h_j(t)$ if $t\in [j/N,(j+1)/N]$, and sampling of $f_B$ is equivalent to sampling of $B$.
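The reduction above is easy to check numerically. The following sketch (our illustration; we set $\alpha=1$, whereas the paper chooses $\alpha$ so that $h\in F$, which only rescales both sides of the identity) verifies that $\int_0^1 f_B(x)\,dx$ equals $N^{-2}\int_0^1h(x)\,dx$ times the Boolean mean.

```python
import numpy as np

# Numerical check (illustration only) of the reduction in the proof:
# with h(x) = alpha * x^3 (1-x)^3 and h_j(x) = N^{-2} h(N(x - j/N)),
# the integral of f_B = sum_j B(j) h_j equals
# (int_0^1 h) * N^{-2} * (1/N) * sum_j B(j).
alpha = 1.0  # the paper's alpha only rescales both sides; take alpha = 1
h = lambda x: np.where((x >= 0) & (x <= 1), alpha * x**3 * (1 - x)**3, 0.0)

N = 8
B = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # an arbitrary Boolean input

def f_B(x):
    total = np.zeros_like(x)
    for j in range(N):
        total += B[j] * N**-2 * h(N * (x - j / N))
    return total

# Integrate the left-hand side with a fine midpoint rule.
x = (np.arange(200000) + 0.5) / 200000
lhs = f_B(x).mean()                        # int_0^1 f_B(x) dx
int_h = alpha / 140                        # int_0^1 x^3 (1-x)^3 dx = 1/140
rhs = int_h / N**2 * B.mean()
```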
{}From \cite{nayak} we know that $\Omega(k^{-1})$ is a lower bound for the error of the quantum approximation of the Boolean mean, with $k$ bit queries, and probability $\tfrac34$, where $N\ge \beta k$ for some positive $\beta$. Letting $N=\lceil \beta k \rceil$, we conclude that the corresponding lower bound on the integration problem over the class $F_c$ is $\Omega(k^{-3})$. Hence to achieve the error ${\varepsilon}$ we must have $k=\Omega({\varepsilon}^{-1/3})$, as claimed in (\ref{6666}). The same proof technique allows us to consider the classes $F_{c({\varepsilon})}$ with varying $c({\varepsilon})$, even with $c({\varepsilon})$ tending to zero, although not too fast. We have \begin{equation}\label{ctozero} n^{\text{bit-query}}({\varepsilon},\mbox{INT}_{F_{c({\varepsilon})}})\,=\,\Theta({\varepsilon}^{-1/3}) \qquad \mbox{if}\ \lim_{{\varepsilon}\to 0}c({\varepsilon})\,{\varepsilon}^{-2/3}\,=\,\infty. \end{equation} We now turn to the Sturm-Liouville eigenvalue problem. As in the proof of Theorem~\ref{thm1}, for $f\in F_c$ with $c\in(0,\tfrac12]$, we define $q=\tfrac12+f$ and consider an arbitrary quantum algorithm $\phi$ that uses $k$ quantum bit queries and computes an ${\varepsilon}$-approximation of the smallest eigenvalue with probability $\tfrac34$. Then $\psi(f)=\tfrac12(\phi(q)-\pi^2-\tfrac12)$ is a quantum algorithm for approximating the integration problem over the class $F_c$. We have $$ \left|I(f)-\psi(f)\right|\,=\,\left|\tfrac12\left({\lambda}(q)- \phi(q)\right)\,+\,O(c^2)\right|\,\le\, \tfrac12\,{\varepsilon}+O(c^2). $$ Take now $c=c({\varepsilon})=\Theta({\varepsilon}^{2/3-\delta})$ with $\delta\in(0,\tfrac16)$. Then $$ \left|I(f)-\psi(f)\right|\,\le\, \tfrac12\,{\varepsilon}+O({\varepsilon}^{4/3-2\delta})\,=\,\tfrac12\,{\varepsilon}(1+o(1))\, \le\,{\varepsilon}\quad\mbox{for small}\ {\varepsilon}. $$ Hence, the quantum error of $\psi$ with probability $\tfrac34$ is ${\varepsilon}$, and $\psi$ uses $k$ bit queries.
Due to (\ref{ctozero}), we have $k=\Omega({\varepsilon}^{-1/3})$, which completes the proof. \ \ \qed \vskip 1pc We now derive upper bounds on the bit query complexity $n^{\text{bit-query}}({\varepsilon})$ and on the total quantum complexity ${\rm comp}^{\text{bit-quant}}({\varepsilon})$. The total quantum complexity is defined as the minimal cost of a hybrid algorithm that solves the Sturm-Liouville eigenvalue problem with error at most ${\varepsilon}$ and probability $\tfrac34$. The hybrid algorithm may require some classical computations and the use of function values, whose cost is defined just as before. It may also require some quantum computations, whose cost is defined as the sum of the number of bit queries multiplied by the cost of one such query plus the number of additional quantum operations. The cost of one bit query is denoted by $\cc_{{\rm bit}}$. We present a hybrid algorithm, which will be a combination of the classical algorithm from Section 4 and the quantum summation algorithm $\mbox{QS}_N$ for a properly chosen $N$. We proceed as in Section 4 and use the same notation. {}From (\ref{999}), (\ref{1111}), and (\ref{2222}), we have \begin{equation}\label{bit1} {\lambda}(q)\,=\,{\lambda}_{\bar q}\,+\,\int_0^1(q(x)-\bar q(x))z_{\bar q}(x)\,dx \,+\,O(\delta_1+m^{-2}\delta_2+m^{-4}) \end{equation} with $\delta_1,\,\delta_2$ and $m$ to be specified later. Let $$ f(x)\,=\,(q(x)-\bar q(x))z_{\bar q}(x)\qquad x\in[0,1]. $$ Observe that $f(x)=O(m^{-2})$ and that $f(x)$ depends on $q(x)$ and on $q(i/m)$ for $i=0,1,\dots,m$, which are used in the construction of $\bar q$. Furthermore, we can compute $f(x)$ by computing one function value $q(x)$ and one function value of each of the already computed functions $\bar q$ and $z_{\bar q}$ at $x$. We approximate $\int_0^1f(x)\,dx$ by $$ \mbox{S}_N(f)\,=\,\frac1N\sum_{j=0}^{N-1}f\left(\frac{j+1}N\right) $$ with $N=(m+1)k$, where the parameters $m$ and $k$ will be specified later.
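As a quick numerical sanity check (with our own smooth test function, not the paper's $f$): the right-endpoint rule $\mbox{S}_N$ is only $O(1/N)$ accurate in general, but the error improves to $O(1/N^2)$ when the boundary terms cancel, e.g.\ for an integrand vanishing at both endpoints.

```python
import numpy as np

# Illustration (not the paper's exact f): for a smooth integrand with
# f(0) = f(1) = 0 the right-endpoint rule
#   S_N(f) = (1/N) sum_{j=0}^{N-1} f((j+1)/N)
# has error O(1/N^2); halving the mesh reduces the error ~4x.
f = lambda x: x**2 * (1 - x)        # smooth, vanishes at both endpoints
exact = 1 / 3 - 1 / 4               # int_0^1 f(x) dx = 1/12

def S(N):
    j = np.arange(N)
    return f((j + 1) / N).mean()

err_N = abs(S(100) - exact)
err_2N = abs(S(200) - exact)
ratio = err_N / err_2N              # ~4 indicates O(1/N^2) convergence
```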
Since $f$ is twice continuously differentiable and $f^{\prime\prime}(x)$ is uniformly bounded on the subintervals $(i/m,(i+1)/m)$ for $i=0,1,\dots,m-1$, it is easy to see that $$ \int_0^1f(x)\,dx\,-\,\mbox{S}_N(f)\,=\, O\left( \frac1{N^2}\right). $$ We define $N$ such that $N^{-2}$ is of order ${\varepsilon}$. We now apply the $\mbox{QS}_N(f)$ algorithm to compute an $\Theta({\varepsilon})$-approximation with probability~$\tfrac34$ to $\mbox{S}_N(f)$ or, equivalently, to $\int_0^1f(x)\,dx$. To do this, we need to use the bit query $Q_f$ for the function $f$, although so far we assumed that we can use only bit queries $Q_q$ for the functions~$q$ from ${\bf Q}$. This problem is resolved in Section 2 of \cite{H03} where it is shown that algorithms using the bit query $Q_f$ can be simulated by algorithms using bit queries $Q_q$ at the expense of multiplying the number of bit queries by a factor of $2$. {}From this and (\ref{157}) with $M=O(m^{-2})$, we conclude that it is enough to perform of order $\min({\varepsilon}^{-1/2},m^{-2}{\varepsilon}^{-1})$ bit queries and $\min({\varepsilon}^{-1/2},m^{-2}{\varepsilon}^{-1})\log\,{\varepsilon}^{-1}$ quantum operations, and to use of order $\log\,{\varepsilon}^{-1/2}$ qubits. We finally approximate ${\lambda}(q)$ by the following algorithm \begin{equation}\label{158} \phi(q)\,=\,{\lambda}_{\bar q}\,+\, {\rm QS}_N(f). \end{equation} This algorithm differs from the randomized algorithm of Section 4 since we now apply the $\mbox{QS}_N$ quantum algorithm instead of Monte Carlo to approximate $\int_0^1f(x)\,dx$. Its error is clearly of the form \begin{equation*} e^{\textrm{\rm bit-quant}}(\phi,T)\,=\, O\left(\delta_1+m^{-2}\delta_2+m^{-4}+{\varepsilon}\right). \end{equation*} To guarantee that this error is at most ${\varepsilon}$, we take $$ \delta_1=\Theta({\varepsilon}),\ \ m=\Theta({\varepsilon}^{-1/3}),\ \ k=\Theta({\varepsilon}^{-1/6})\ \ \mbox{and}\ \ \delta_2=\Theta({\varepsilon}^{1/3}).
$$ Using the cost analysis of Section 4 and the results of this section, we conclude the following theorem. \vskip 1pc \begin{thm} The Sturm-Liouville eigenvalue problem can be solved in the quantum setting with bit queries by the algorithm $\phi$ defined by (\ref{158}). This algorithm approximates the smallest eigenvalue ${\lambda}(q)$ with error at most ${\varepsilon}$ and probability $\tfrac34$ using of order \begin{itemize} \item ${\varepsilon}^{-1/3}$ bit queries and function values, \item ${\varepsilon}^{-1/3}\,\log\,{\varepsilon}^{-1}$ quantum operations, \item ${\varepsilon}^{-1/2}\,\log\,{\varepsilon}^{-1}$ classical operations, \item $\log\,{\varepsilon}^{-1}$ qubits. \end{itemize} Furthermore, \begin{equation*} n^{\textrm{\rm bit-query}}({\varepsilon})\,=\,\Theta({\varepsilon}^{-1/3}), \end{equation*} and \begin{equation*} \Omega(\cc_{{\rm bit}}\,{\varepsilon}^{-1/3})\,=\, {\rm comp}^{\textrm{\rm bit-query}}({\varepsilon})\,=\, O\left((\cc+\cc_{{\rm bit}})\,{\varepsilon}^{-1/3}\,+\, {\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}\right). \end{equation*} \end{thm} Hence, we have a sharp bound of order ${\varepsilon}^{-1/3}$ on the number of bit queries whereas the upper bound on the total cost depends, as in the worst case and randomized settings, on ${\varepsilon}^{-1/2}\log\,{\varepsilon}^{-1}$, which is the cost of classical computations. \subsection{Power Queries} In this subsection we study {\em power} queries. We formally define them as follows. For some problems, a quantum algorithm can be written in the form \begin{equation} |\psi\rangle\,:=\,U_m\widetilde W_mU_{m-1}\widetilde W_{m-1}\cdots U_1 \widetilde W_1U_0 |\psi_0\rangle. \label{eq:qac} \end{equation} Here $U_1,\dots,U_m$ denote unitary matrices independent of the function $q$, just as before, whereas the unitary matrices $\widetilde W_j$ are of the form controlled-$W_j$, see \cite[p.~178]{nielsen}.
That is, $W_j=W^{p_j}$ for an $n\times n$ unitary matrix $W$ that depends on the input of the computational problem, and for some non-negative integers $p_j$, $j=1,2,\dots,m$. Without loss of generality we assume that $n$ is a power of two. Let $\{|y_k\rangle\}$ be orthonormalized eigenvectors of $W$, $W|y_k\rangle=\alpha_k|y_k\rangle$ with the corresponding eigenvalue $\alpha_k$, where $|\alpha_k|=1$ and $\alpha_k=e^{\mathrm{i}{\lambda}_k}$ with ${\lambda}_k\in[0,2\pi)$ for $k=1,2,\dots,n$. For the unit vectors $|x_{\ell}\rangle= {\alpha}_{\ell}|0\rangle+\beta_{\ell}|1\rangle\in \complex^2$, $\ell=1,2,\dots,r$, the quantum query $\widetilde W_j$ is defined as \begin{equation}\label{control} \widetilde W_j\, |x_1\rangle|x_2\rangle\cdots|x_r\rangle|y_k\rangle\,=\, |x_1\rangle\cdots|x_{j-1}\rangle\bigg({\alpha}_j|0\rangle+\beta_j e^{\mathrm{i} p_j{\lambda}_k}|1\rangle\bigg)|x_{j+1}\rangle\cdots |x_r\rangle|y_k\rangle. \end{equation} Hence, $\widetilde W_j$ is a $2^{\nu}\times 2^{\nu}$ unitary matrix with $\nu=r+\log\,n$. We stress that the exponent $p_j$ only affects the power of the complex number $e^{\mathrm{i}{\lambda}_k}$. We call $\widetilde W_j$ a {\it power} query since it is derived from a power of $W$. Power queries have been successfully used for a number of problems, see again \cite{nielsen}, including the phase estimation problem that will be discussed in the next subsection. The phase estimation algorithm, see \cite{cleve,nielsen}, is at the core of many quantum algorithms. It plays a central role in the fast quantum algorithms for factoring and discrete logarithms of Shor \cite{shor}. We stress that for Shor's algorithm, power queries can be implemented by a number of elementary quantum gates that is polylog in $n$. The phase estimation algorithm approximates an eigenvalue of a unitary operator $W$ using the corresponding eigenvector, or its approximation, as part of the initial state. The powers of $W$ are defined by $p_i=2^{i-1}$.
Therefore, phase estimation uses queries with $W_1=W$, $W_2=W^{2}$, $W_3=W^{2^2}$, $\dots$, $W_m=W^{2^{m-1}}$. It is typically assumed, see \cite{cleve}, that we do not explicitly know $W$ but we are given quantum devices that perform controlled-$W$, controlled-$W^2$, controlled-$W^{2^2}$, and so on. For the Sturm-Liouville eigenvalue problem, as well as for problems studied in \cite{PW04}, we will use the matrix \begin{equation}\label{eq:W} W\,=\,\exp\left(\mathrm{i}{\gamma} M_q\right)\quad\mbox{with}\ \mathrm{i}=\sqrt{-1}\ \mbox{and a positive}\ {\gamma}, \end{equation} where the $n\times n$ matrix $M_q$ was introduced in Section 3.2 as a discretization of the differential operator ${\mathbb L}_q$. The matrix $W$ is unitary since $M_q$ is symmetric. For the $\widetilde W_j$ with the matrix $W$ of (\ref{eq:W}) we modify the query definition in equation (\ref{eq:qa}) and assume, as in \cite[Ch.~5]{nielsen}, that each $\widetilde W_j$ is one quantum query. Accordingly, for algorithms that can be expressed in the form (\ref{eq:qac}), the number of power queries is $m$, independently of the powers $p_j$. By analogy with (\ref{eq:wperr}), we denote their error by $e^\w(\hat{\lambda},m)$. Allowing quantum algorithms of the form (\ref{eq:qac}) with power queries, we define the power query complexity $n^{\rm power-query}({\varepsilon})$ to be the minimal number of power queries required to approximate the Sturm-Liouville eigenvalue problem with error ${\varepsilon}$, i.e., $$ n^{\rm power-query}({\varepsilon}) = \min \{ m:\; \exists\; \hat\lambda \ \ \mbox{such\ that}\ \ e^\w(\hat\lambda, m)\le {\varepsilon} \}. $$ The cost of one power query is denoted by $\cc_{{\rm power}}$. The total complexity, $\comp^{{\rm power-query}}({\varepsilon})$, is defined as the minimal cost of a hybrid algorithm in the same way as for bit queries. We will use the phase estimation algorithm as a basic module for approximating the smallest eigenvalue ${\lambda}(q)$.
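The phase-kickback mechanism behind a controlled power query can be illustrated with a toy example. The sketch below (assumed forms for illustration, not the paper's construction) uses one control qubit ($r=1$) and a $2\times 2$ diagonal $W$, and checks that controlled-$W^p$ multiplies only the $|1\rangle$ branch of the control by $e^{\mathrm{i}p\lambda_k}$ when the target is an eigenvector $|y_k\rangle$.

```python
import numpy as np

# Toy sketch of a power query: W = diag(e^{i l0}, e^{i l1}) in its
# eigenbasis.  controlled-W^p is the identity on the |0> branch of the
# control and applies W^p on the |1> branch, so an eigenvector |y_k>
# only imprints the phase e^{i p l_k} on the |1> component.
l = np.array([0.7, 2.1])                      # eigenphases lambda_k of W
p = 8                                         # the power p_j
Wp = np.diag(np.exp(1j * p * l))              # W^p in its eigenbasis
ctrl_Wp = np.block([[np.eye(2), np.zeros((2, 2))],
                    [np.zeros((2, 2)), Wp]])  # controlled-W^p, 4x4 unitary

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)  # control state (|0>+|1>)/sqrt 2
y0 = np.array([1.0, 0.0])                     # eigenvector |y_0>
state = ctrl_Wp @ np.kron(np.array([alpha, beta]), y0)
# state = alpha |0>|y_0> + beta e^{i p l_0} |1>|y_0>
```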
As shown by Abrams and Lloyd~\cite{abrams}, the phase estimation algorithm can also be used if a good approximation of the eigenvector corresponding to the smallest eigenvalue is known. Such an approximation is obtained by the algorithm of Jaksch and Papageorgiou~\cite{jaksch}. Combining these algorithms, we obtain a quantum algorithm that computes the smallest eigenvalue with error ${\varepsilon}$ and probability $\tfrac34$ using $\Theta(\log\, {\varepsilon}^{-1})$ power queries and $\Theta(\log\,{\varepsilon}^{-1})$ qubits. For the sake of completeness, we review the phase estimation problem and algorithm, the results of Abrams and Lloyd, and the results of Jaksch and Papageorgiou in the next subsections. \subsection{Phase Estimation} Consider $W$ defined by (\ref{eq:W}) with $\gamma=\tfrac12$, i.e., $$ W\,=\,\exp\left(\frac12\,\mathrm{i}\,M_q\right). $$ The eigenvalues of $W$ are $e^{\mathrm{i}{\lambda}_j(M_q)/2}$ with ${\lambda}_j(M_q)$ being the eigenvalues of the $n\times n$ matrix $M_q$, where $n$ is assumed to be a power of two. These eigenvalues can be written as $e^{2\pi \mathrm{i} \varphi_j}$, where $$ \varphi_j\,=\,\varphi_j(M_q)\,=\,\frac1{4\pi}\,{\lambda}_j(M_q) $$ are called {\it phases}. We are interested in estimating the smallest phase $\varphi_1(M_q)$, which belongs to $(0,1)$ since ${\lambda}_1(M_q)\in[\pi^2,\pi^2+1]$. For convenience, we renumber and normalize the eigenvectors of $M_q$, and also of $W$, as $$ |y_j\rangle\,=\,\sqrt{n}\,|z_{j+1}(M_q)\rangle, $$ for $j=0,1,\dots,n-1$. We will use $\{|y_j\rangle\}$ as the orthonormal basis of the space. Phase estimation, see \cite[Section 5.2]{nielsen}, is a quantum algorithm that approximates the phase $\varphi_1(M_q)$. Note that to compute an ${\varepsilon}$-approximation of ${\lambda}_1(M_q)$, it is enough to compute an ${\varepsilon}/(4\pi)$-approximation of $\varphi_1(M_q)$.
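For a quick numerical illustration (assuming $M_q$ is the standard three-point finite-difference discretization of ${\mathbb L}_q u=-u^{\prime\prime}+qu$, consistent with its use here): for the constant potential $q\equiv\tfrac12$ the smallest eigenvalue of ${\mathbb L}_q$ is exactly $\pi^2+\tfrac12$, and the corresponding phase $\varphi_1$ indeed lies in $(0,1)$.

```python
import numpy as np

# Sketch (our assumption: M_q is the standard three-point finite-difference
# discretization of L_q u = -u'' + q u on mesh h = 1/(n+1)).
# For q = 1/2, lambda(q) = pi^2 + 1/2 exactly, and the discrete
# lambda_1(M_q) is within O(h^2) of it.
n = 512
h = 1.0 / (n + 1)
q = 0.5 * np.ones(n)

M_q = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2 + np.diag(q)
lam1 = np.linalg.eigvalsh(M_q)[0]     # smallest eigenvalue of M_q
phi1 = lam1 / (4 * np.pi)             # the phase estimated by the algorithm
```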
The original phase estimation algorithm has been derived for the initial state $|0^{\otimes m}\rangle|y_0\rangle$, where $m$ is related to the accuracy and will be determined later, and $|y_0\rangle=|y_0(M_q)\rangle$ is the eigenvector of the matrix $M_q$ corresponding to the smallest eigenvalue ${\lambda}_1(M_q)$. Abrams and Lloyd \cite{abrams} showed that phase estimation can still be used if the eigenvector $|y_0\rangle$ is replaced by a {\it good} approximation $|\psi_0\rangle$ as the initial state. More precisely, expanding $\ket{\psi_0}$ in the basis of the eigenvectors $|y_j\rangle$, the initial state takes the form \begin{equation*} \ket{0}^{\otimes m}\ket{\psi_0} = \ket{0}^{\otimes m} \sum_{j=0}^{n-1} d_j\ket{y_j}. \end{equation*} Using $m$ Hadamard gates, we place the first register in an equal superposition, which gives the state \begin{equation*} |\psi_1\rangle\,=\, \frac{1}{\sqrt{2^m}}\sum_{x_1=0}^1\sum_{x_2=0}^1\cdots \sum_{x_m=0}^1\ket{x_1}\ket{x_2}\cdots\ket{x_m}\sum_{j=0}^{n-1}d_j\ket{y_j}. \end{equation*} We now apply the controlled quantum gates, see (\ref{control}), to create the state \begin{eqnarray*} |\psi_2\rangle\,&=&\,\widetilde W_{2^{m-1}}\widetilde W_{2^{m-2}}\cdots \widetilde W_{2^0}\,|\psi_1\rangle\\ &=&\,\frac{1}{\sqrt{2^m}}\sum_{j=0}^{n-1}d_j\ket{\eta_j}|y_j\rangle \end{eqnarray*} with \begin{eqnarray*} |\eta_{j}\rangle\,&=&\, \bigg(|0\rangle+e^{2\pi\mathrm{i}\varphi_j}|1\rangle\bigg)\otimes \bigg(|0\rangle+e^{2\pi\mathrm{i}2\varphi_j}|1\rangle\bigg)\otimes\cdots \otimes \bigg(|0\rangle+e^{2\pi\mathrm{i}2^{m-1}\varphi_j}|1\rangle\bigg)\\ &=&\,\sum_{x_1=0}^1\sum_{x_2=0}^1\cdots\sum_{x_m=0}^1 e^{2\pi\mathrm{i}(x_12^0+x_22^1+\cdots+x_m2^{m-1})\varphi_j}|x_1\rangle |x_2\rangle\cdots|x_m\rangle\\ &=&\,\sum_{\ell=0}^{2^m-1} e^{2\pi\,\mathrm{i}\,\ell\,\varphi_j}|\ell\rangle, \end{eqnarray*} see also \cite[p.~222]{nielsen}.
Hence, $$ |\psi_2\rangle\,=\,\frac{1}{\sqrt{2^m}}\sum_{j=0}^{n-1}d_j \left(\sum_{\ell=0}^{2^m-1}e^{2\pi\,\mathrm{i}\,\ell\,\varphi_j} |\ell\rangle\right) |y_j\rangle. $$ The inverse Fourier transform performed on the first register creates the state \begin{equation*} \sum_{j=0}^{n-1} d_j \left( \sum_{\ell=0}^{2^m-1} g(\varphi_j,\ell) \ket{\ell} \right) \ket{y_j}, \end{equation*} where \begin{equation*} g(\varphi_j,\ell) = \left\{ \begin{array}{ll} \frac{\sin (\pi(2^m\varphi_j -\ell)) e^{\pi\mathrm{i}(\varphi_j - \ell 2^{-m})(2^m-1)}} {2^m\sin(\pi (\varphi_j-\ell2^{-m}))} & \quad \mbox{if}\ \ \varphi_j \neq 2^{-m}\ell, \\ 1, & \quad \mbox{if} \ \ \varphi_j = 2^{-m}\ell. \end{array} \right. \end{equation*} A measurement of the first register produces the outcome $j$ with probability \begin{equation*} p_j = \sum_{\ell=0}^{n-1} |d_{\ell}|^2|g(\varphi_{\ell},j)|^2, \end{equation*} and the second register collapses to the state \begin{equation*} \sum_{\ell=0}^{n-1} \frac{d_{\ell} g(\varphi_{\ell},j)}{\sqrt{p_j}} \ket{y_{\ell}}. \end{equation*} The quantity $$ \Delta(\phi_0,\phi_1)\,=\,\min_{x \in \integers}\{|x + \phi_1 - \phi_0|\} \qquad \mbox{for}\ \ \phi_0, \phi_1 \in \reals $$ is defined in \cite{brassard} and is the fractional part of the distance between two phases $\phi_0$ and $\phi_1$. It is used to derive the relationship between the approximation error and the success probability. A measurement of the first register produces an outcome from the set $$ \mathcal{G}_k\,=\, \{ j: \Delta(j/2^m,\varphi_1(M_q)) \leq k/2^m\, \}, $$ where $k>1$, with probability \begin{equation*} \Pr (\mathcal{G}_k)\,=\, \sum_{j \in \mathcal{G}_k} \sum_{\ell=0}^{n-1} |d_{\ell} g(\varphi_{\ell},j)|^2 \,\ge\, |d|^2 \sum_{j \in \mathcal{G}_k} |g(\varphi_1(M_q) ,j)|^2 \,\ge\, |d|^2 - \frac{|d|^2}{2(k-1)}, \end{equation*} where $d=\langle y_0 | \psi_0\rangle $.
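The amplitudes $g(\varphi,\ell)$ are easy to explore numerically. The sketch below (our illustration) checks that the outcome probabilities $|g(\varphi,\ell)|^2$ sum to one, as unitarity requires, and that at least $8/\pi^2$ of the probability mass falls on the two outcomes closest to $2^m\varphi$, a known property of this distribution.

```python
import numpy as np

# Numerical sketch of the measurement amplitudes g(phi, l) defined above:
# the probabilities |g(phi, l)|^2 sum to one, and most of the mass sits
# on the outcomes l closest to 2^m * phi.
def g(phi, l, m):
    d = phi - l / 2**m
    if abs(np.sin(np.pi * d)) < 1e-15:            # phi = l / 2^m exactly
        return 1.0
    return (np.sin(np.pi * (2**m * phi - l))
            * np.exp(1j * np.pi * d * (2**m - 1))
            / (2**m * np.sin(np.pi * d)))

m, phi = 6, 0.3173                                # an arbitrary phase
probs = np.array([abs(g(phi, l, m))**2 for l in range(2**m)])

total = probs.sum()                               # ~1 by unitarity
lo = int(np.floor(2**m * phi))                    # nearest outcomes to 2^m*phi
near = probs[lo] + probs[(lo + 1) % 2**m]         # known to be >= 8/pi^2
```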
For $k=1$ the probability that \begin{equation}\label{k=1} \Delta(j/2^m,\varphi_1(M_q))\,\leq\, 2^{-m} \quad\mbox{is bounded from below by}\ \frac{8}{\pi^2}|d|^2. \end{equation} The proof of the probability bounds can be found in \cite{brassard,nielsen}. Using this fact, the authors of~\cite{abrams} conclude that as long as $|d|^2$ is {\it large enough} or, equivalently, $\ket{\psi_0}$ is {\it close enough} to $\ket{y_0}$, then phase estimation can be used to approximate the phase $\varphi_1(M_q)$ with probability close to $8/\pi^2=0.81\dots$. We stress that the phase estimation algorithm uses $m$ power queries. In addition to the cost of the queries there is a cost of quantum operations proportional to at most $m^2$, which is an upper bound on the cost of the quantum inverse Fourier transform, see \cite[Section 5.2]{nielsen}. \subsection{Eigenvalue and Eigenvector Approximation} The results of Jaksch and Papageorgiou \cite{jaksch} can be applied to efficiently construct a good approximate eigenvector when $W=e^{\frac{\mathrm{i}}{2}M_q}$ as in the previous subsection. The matrix $M_q=M_q^{(n)}$ has been derived from the discretization of the operator ${\mathbb L}_q$ with mesh size $h_n= (n+1)^{-1}$. Its eigenvectors are also eigenvectors of $W=W^{(n)}$, and we denote them here by $\ket{y_j^{(n)}}$, where $j=0,1,\dots,n-1$. We want to approximate ${\lambda}_1(M_q^{(n)})=4\pi \varphi_1(M_q^{(n)})$ but we do not know the corresponding eigenvector $$ \ket{y^{(n)}}\,:=\,\ket{y_0^{(n)}}.
$$ The expansion of $\ket{y^{(n)}}$ in the computational basis is denoted by \begin{equation} \ket{y^{(n)}} = \sum_{j=0}^{n-1} y_j^{(n)}\ket{j}. \label{compexpansion} \end{equation} Recall that $u_q$ is the normalized, $\|u_q\|_{L_2}= \left(\int_0^1 u_q^2(x)\, dx\right)^{1/2}=1$, eigenfunction of the differential operator ${\mathbb L}_q$ that corresponds to ${\lambda}(q)$, and $u_q$ as well as $u_q^{\prime}$ and $u_q^{\prime\prime}$ are uniformly bounded, i.e., $\|u_q \|_\infty $, $\| u^{\prime}_q \|_\infty $ and $\|u^{\prime\prime}_q\|_{\infty}$ are $O(1)$. Let $\ket{U^{(n)}}=\sum_{j=0}^{n-1} u_q((j+1)h_n) \ket{j}$ be the vector obtained by sampling $u_q$ at the discretization points. Then it is known, see \cite{gary,keller} as well as Remark 4.1, that \begin{eqnarray} \left\| \ket{y^{(n)}} - \frac{\ket{ U^{(n)} }}{\| \ket{ U^{(n)} }\|_2} \right\|_2 &=&O(h_n^2) \quad {\rm and} \label{pointwise} \\ |{\lambda}(q)-{\lambda}_1(M_q^{(n)})|&=&O(h_n^2). \nonumber \end{eqnarray} Consider a coarse discretization of ${\mathbb L}_q$ with mesh size $h_{n_0}=(n_0+1)^{-1}$ with $n_0$ being a power of two. Assume that $$ \ket{\tilde{z}^{(n_0)}}\,=\,\sum_{j=0}^{n_0-1}\tilde{z}^{(n_0)}_j\ket{j}, \quad \|\ket{\tilde{z}^{(n_0)}}\|_2=1, $$ approximates the eigenvector $\ket{y^{(n_0)}}$ that corresponds to the smallest eigenvalue of the matrix $M_q^{(n_0)}$ such that \begin{equation}\label{newapprox} \|\,\ket{\tilde{z}^{(n_0)}}-\ket{y^{(n_0)}}\,\|_2\,=\,O(n_0^{-2}). \end{equation} We place the vector $\ket{\tilde{z}^{(n_0)}}$ in a $\log\,n_0$ qubit register. As explained in Section 4.3, we can compute $\ket{\tilde{z}^{(n_0)}}$ on a classical computer with cost of order $n_0^2$.
For $n=2^s n_0$, we construct an approximation $\ket{\tilde{z}^{(n)}}$ of $\ket{y^{(n)}}$ by first appending $s$ qubits, all in the state $\ket{0}$, to $\ket{\tilde{z}^{(n_0)}}$ and then performing the Hadamard transformation on each one of these $s$ qubits, i.e., \begin{equation} \ket{ \tilde{z}^{(n)}} = \ket{\tilde{z}^{(n_0)}} \left( \frac{\ket{0}+\ket{1}}{\sqrt{2}} \right)^{\otimes s} = \frac{1}{\sqrt{2^s}} \sum_{j=0}^{n-1} \tilde{z}_{g(j)}^{(n_0)}\,\ket{j}, \label{eq:JPalg} \end{equation} where $\tilde{z}_{g(j)}^{(n_0)}$'s denote the coordinates of $\ket{\tilde{z}^{(n_0)}}$ in the computational basis, and $g(j) = \lfloor j/2^s \rfloor$. The effect of $g$ is to replicate $2^s$ times the coordinates of $\ket{\tilde{z}^{(n_0)}}$. As in Jaksch and Papageorgiou \cite{jaksch}, we use the vector $\ket{ \tilde{z}^{(n)} }$ as part of the input to the phase estimation algorithm. Let $d^{(n)}= \langle y^{(n)} | \tilde{z}^{(n)}\rangle$. We show that $|d^{(n)}|^2$ can be made arbitrarily close to one by choosing a sufficiently large $n_0$. Hence, we can make the success probability of the phase estimation algorithm at least equal to $\tfrac34$. Consider two different expansions of $\ket{\tilde{z}^{(n)}}$, \begin{eqnarray} \ket{\tilde{z}^{(n)}} &=& \sum_{j=0}^{n-1} \tilde{u}_{j}^{(n)} \ket{j} \label{approxexpansion}\\ \ket{\tilde{z}^{(n)}} &=& \sum_{j=0}^{n-1} d_{j}^{(n)} \ket{y_j^{(n)}}. \label{eigenexpansion} \end{eqnarray} The first expansion is in the computational basis $\{|j\rangle\}$ and, by (\ref{eq:JPalg}), $$ \tilde{u}_j^{(n)}=2^{-s/2}\tilde{z}_{g(j)}^{(n_0)}\qquad \mbox{for}\ j=0,1,\dots,n-1, $$ while the second expansion is with respect to the eigenvectors of $M_q^{(n)}$. Note that $d^{(n)}=d_0^{(n)}$ and clearly $\sum_{j=0}^{n-1}|d_j^{(n)}|^2=1$. Equation (\ref{eigenexpansion}) implies \begin{equation} \ket{\tilde{z}^{(n)}} - \ket{y^{(n)}} = (d^{(n)} - 1) \ket{y^{(n)}} + \sum_{j=1}^{n-1}d_{j}^{(n)} \ket{y_j^{(n)}}.
\end{equation} Taking norms on both sides we obtain \begin{equation}\label{errorbound} \left| \left|\, \ket{y^{(n)}} - \ket{\tilde{z}^{(n)}}\, \right| \right|^2_2 \,=\, |d^{(n)}-1|^2 + \sum_{j=1}^{n-1}|d_{j}^{(n)}|^2 \,\geq\, \sum_{j=1}^{n-1}|d_j^{(n)}|^2 \,=\, 1-|d^{(n)}|^2. \end{equation} We now bound the left hand side of (\ref{errorbound}) from above. Using the expression (\ref{compexpansion}) for $\ket{y^{(n)}}$ and the definition of $\ket{\tilde{z}^{(n)}}$, see (\ref{eq:JPalg}), (\ref{approxexpansion}), we have \begin{eqnarray*} \left\| \ket{y^{(n)}} - \ket{\tilde{z}^{(n)}} \right\|^2_2 &=& \sum_{j=0}^{n-1} |y_j^{(n)} - 2^{-s/2}z_{g(j)}^{(n_0)}|^2 \\ &=& \sum_{j=0}^{n-1} \Biggl| \frac{u_q((j+1)h_{n})}{\| \ket{U^{(n)}} \|_2} \Biggr. - \frac{u_q((g(j)+1)h_{n_0})}{\sqrt{2^s} \| \ket{U^{(n_0)}} \|_2} + \Delta_{j}^{(n)} - \frac{\Delta_{g(j)}^{(n_0)}}{\sqrt{2^s}} \Biggl. \Biggr|^2, \end{eqnarray*} where, by (\ref{pointwise}) and (\ref{newapprox}), we have $$ \sum_{j=0}^{n-1} |\Delta_{j}^{(n)}|^2 = O(h_n^{4})\qquad\mbox{and}\qquad \sum_{j=0}^{n-1} |\Delta_{g(j)}^{(n_0)}|^2 = 2^s O(h_{n_0}^{4}). $$ Applying the triangle inequality, we get \begin{equation} \left\|\ket{y^{(n)}} - \ket{\tilde{z}^{(n)}} \right\|_2\,\leq\, \left(\sum_{j=0}^{n-1} \left| \frac{u_q((j+1)h_{n})} {\| \ket{U^{(n)}} \|_2} \right. \right. - \left. \left. \frac{u_q((g(j)+1)h_{n_0})}{\sqrt{2^s} \| \ket{U^{(n_0)}} \|_2} \right|^2 \right)^{1/2} + O(h_{n_0}^2). \label{sumestimate} \end{equation} The definition of $\ket{U^{(n)}}$ and the fact that the derivative of $u_q$ is Lipschitz\footnote{ A function $f:[0,1]\to\reals$ is Lipschitz if there is a number $L\ge 0$ such that $|f(x)-f(y)|\le L|x-y|$ for all $x,y\in[0,1]$.} with the uniform Lipschitz constant imply that $\| \ket{U^{(n)}} \|_2 = \sqrt{n}(1+O(h_n))$. Hence, the square of the term in the parentheses above is equal to \begin{equation} \frac{1}{n} \sum_{j=0}^{n-1} | u_q((j+1)h_{n}) (1 + O(h_n)) - u_q((g(j)+1)h_{n_0}) (1 + O(h_{n_0})) |^2. 
\label{uk} \end{equation} Since $u_q$ is continuous with a bounded first derivative, we have that \begin{equation} u_q(x_{2,j}) = u_q(x_{1,j}) + O(|x_{2,j}-x_{1,j}|), \label{meanvalue} \end{equation} where $x_{2,j}=(j+1)h_n$ and $x_{1,j}=(g(j)+1)h_{n_0}$, $j=0,1,\ldots,n-1$. Let $\lfloor j/2^s\rfloor=j/2^s-{\alpha}$ with ${\alpha}\in[0,1)$. Then \begin{eqnarray*} |x_{2,j}-x_{1,j}|\,&=&\,\left|\frac{j+1}{2^sn_0+1}-\frac{j/2^s+1-{\alpha}}{n_0+1} \right|\\ &=&\,j\,\frac{2^s-1}{(2^sn_0+1)2^s(n_0+1)}+O(h_{n_0})\,=\, O(h_{n_0}). \end{eqnarray*} Using (\ref{uk}), (\ref{meanvalue}) and the triangle inequality, we obtain from (\ref{sumestimate}) that $$ \left\| \ket{y^{(n)}} - \ket{\tilde{z}^{(n)}} \right\|_2\,=\, O(h_{n_0}) \,\le\,\frac{c}{n_0+1} $$ for some positive number $c$ independent of $n$ and $n_0$. Combining this with (\ref{errorbound}) we finally conclude that \begin{equation}\label{failure} |d^{(n)}|^2\,\ge\,1\, -\,\frac{c^2}{(n_0+1)^2} \end{equation} and $|d^{(n)}|^2$ can be made arbitrarily close to one by taking a sufficiently large $n_0$. \subsection{Quantum Algorithm for the Smallest Eigenvalue} We combine the results of the previous two subsections to derive a quantum algorithm for computing an ${\varepsilon}$-approximation of the smallest eigenvalue with probability $\tfrac34$. We choose the parameters for the phase estimation algorithm. Without loss of generality we assume that ${\varepsilon}^{-1}$ is an even power of $2$, that is ${\varepsilon}^{-1}=2^{m}$ with an even $m$. We set $n={\varepsilon}^{-1/2}=2^{m/2}$ and we will be working with the matrix $M_q^{(n)}$. The index $n_0=2^{k_0}$ is chosen as the smallest power of two for which \begin{equation} \frac8{\pi^2}\,\left(1-\frac{c^2}{(n_0+1)^2}\right)\,\ge\,\frac34, \label{eq:n0} \end{equation} where the number $c$ is from (\ref{failure}). Clearly, $n_0=O(1)$. Without loss of generality we assume that $\tfrac12m>k_0=\log\,n_0$, i.e., we assume that ${\varepsilon}$ is sufficiently small.
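The replication step and the overlap bound can be illustrated numerically (our sketch, using the $q=0$ eigenfunction $u(x)=\sqrt2\,\sin(\pi x)$ sampled on the grids as a stand-in for the exact discrete eigenvectors): the replicated vector remains a unit vector and its overlap with the finely sampled eigenfunction is close to one already for a small $n_0$.

```python
import numpy as np

# Sketch of the Jaksch--Papageorgiou construction, illustrated with the
# q = 0 eigenfunction u(x) = sqrt(2) sin(pi x) sampled on the grids
# (a stand-in for the exact eigenvectors of M_q, which agree up to O(h^2)).
def sampled_eigvec(n):
    h = 1.0 / (n + 1)
    v = np.sin(np.pi * (np.arange(n) + 1) * h)
    return v / np.linalg.norm(v)

n0, s = 16, 5
n = n0 * 2**s
z_coarse = sampled_eigvec(n0)                 # |z^{(n_0)}>, unit norm
# Appending s qubits in state (|0>+|1>)/sqrt(2) replicates each coordinate
# 2^s times, scaled by 2^{-s/2}:
z_fine = np.kron(z_coarse, np.ones(2**s)) / np.sqrt(2**s)   # |z^{(n)}>

d = np.dot(sampled_eigvec(n), z_fine)         # overlap d^{(n)}
```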
We finally set $s=\tfrac12m-k_0$. We then compute $\ket{\tilde{z}^{(n_0)}}$ on a classical computer as in Section 4.3 with cost $O(1)$ function values and operations. We run the phase estimation algorithm for the matrix $W=e^{\frac{\mathrm{i}}2M_q^{(n)}}$ with the initial state, see (\ref{eq:JPalg}), $$ \ket{0}^{\otimes m}\ket{\tilde{z}^{(n)}}\,=\,\ket{0}^{\otimes m} \ket{\tilde{z}^{(n_0)}} \left( \frac{\ket{0}+\ket{1}}{\sqrt{2}} \right)^{\otimes s}. $$ Let $j$ be the outcome of the phase estimation algorithm. We finally compute $$ \bar {\lambda}_j\,=\,4\pi\,j\,2^{-m} $$ as an approximation of the smallest eigenvalue ${\lambda}(q)$. We have \begin{eqnarray*} \bar {\lambda}_j\,-\, {\lambda}(q)\,&=&\,\bar {\lambda}_j\,-\,{\lambda}_1(M_q^{(n)})\,+\, {\lambda}_1(M_q^{(n)})\,-\,{\lambda}(q)\\ &=&\,4\pi\,\left(\frac{j}{2^m}-\varphi_1(M_q^{(n)})\right)\,+\,O({\varepsilon}). \end{eqnarray*} {}From (\ref{k=1}) we know that $$ \left|\frac{j}{2^m}-\varphi_1(M_q^{(n)})\right|\,\le\,{\varepsilon} \quad \mbox{with probability}\ \frac8{\pi^2}\,|d^{(n)}|^2. $$ By (\ref{failure}) and the definition of $n_0$ we have $$ \frac8{\pi^2}|d^{(n)}|^2\,\ge\, \frac8{\pi^2}\left(1-\frac{c^2}{(n_0+1)^2}\right)\,\ge\,\frac34. $$ Hence, $$ |\bar {\lambda}_j\,-\,{\lambda}(q)|\,=\,O({\varepsilon}) \quad \mbox{with probability at least}\ \frac34. $$ The computation of $\bar {\lambda}_j$ requires $$ m+k_0+s\,=\,\tfrac32\,m\,=\,\tfrac32\,\log\,{\varepsilon}^{-1} $$ qubits, $m=\log\,{\varepsilon}^{-1}$ power queries, plus a number of quantum operations proportional to $m^2=\log^2{\varepsilon}^{-1}$. This yields $n^{\textrm{power-query}}({\varepsilon})=O(\log\,{\varepsilon}^{-1})$. A lower bound on $n^{\textrm{power-query}}({\varepsilon})$ of the same order is proved in \cite{Bessen}. Hence, \begin{equation*} n^{\textrm{power-query}}({\varepsilon})\,=\,\Theta(\log\,{\varepsilon}^{-1}). \end{equation*} We summarize the results of this section in the following theorem.
\begin{thm} The Sturm-Liouville eigenvalue problem can be solved in the quantum setting with power queries by the phase estimation algorithm applied to the discretized matrix of the differential operator ${\mathbb L}_q$ with the initial state given as an approximate eigenvector computed by the Jaksch and Papageorgiou algorithm. This quantum algorithm approximates the smallest eigenvalue ${\lambda}(q)$ with error ${\varepsilon}$ and probability $\tfrac34$ using \begin{itemize} \item $\tfrac32\log\,{\varepsilon}^{-1}+O(1)\ \ \mbox{power queries}$, \item $O(1)$ function values and classical operations, \item $O(\log^2{\varepsilon}^{-1})\ \ \mbox{quantum operations besides the power queries}$, and \item $\tfrac32\log\,{\varepsilon}^{-1}\,+O(1)\ \ \mbox{qubits}$. \end{itemize} Furthermore, \begin{equation*} n^{\textrm{\rm power-query}}({\varepsilon})\,=\,\Theta(\log\,{\varepsilon}^{-1}), \end{equation*} and \begin{equation*} \Omega(\cc_{{\rm power}}\,\log\,{\varepsilon}^{-1})\,=\, {\rm comp}^{\textrm{\rm power-query}}({\varepsilon})\,=\, O\left(\cc_{{\rm power}}\,\log\,{\varepsilon}^{-1}\,+\, \cc\,+ \log^2{\varepsilon}^{-1}\right). \end{equation*} \end{thm} \subsection{Qubit Complexity} In this section we address the qubit complexity, $\mathrm{comp}^{\mathrm{qub}}({\varepsilon})$, which is defined as the minimal number of qubits required to approximate the smallest eigenvalue with error ${\varepsilon}$ and probability~$\tfrac34$ by quantum algorithms of the form (\ref{eq:qa}). Clearly, $\mathrm{comp}^{\mathrm{qub}}({\varepsilon})$ is upper bounded by $\frac32\log\,{\varepsilon}^{-1}+O(1)$ since that many qubits are used by the phase estimation algorithm of Section~5.5. Observe that the cost of the classical algorithm computing $\ket{\tilde{z}^{(n_0)}}$ as well as its quantum simulation \cite[pp.~189--193]{nielsen} is constant since $n_0$ is bounded by a constant due to (\ref{eq:n0}). We turn to a lower bound on $\mathrm{comp}^{\mathrm{qub}}({\varepsilon})$.
Based on the results obtained in this paper, it is easy to see that the number of qubits necessary to solve our problem must be at least roughly $\tfrac12\log\,{\varepsilon}^{-1}$. Indeed, assume that there is a quantum algorithm of the form (\ref{eq:qa}) that computes ${\lambda}(q)$ with error~${\varepsilon}$ and probability $\tfrac34$, and uses $k({\varepsilon})$ qubits. This algorithm can use arbitrary quantum queries, assuming that each quantum query is based on at most $2^{k({\varepsilon})}$ function evaluations of~$q$. Note that this holds for bit queries, as well as for the power queries studied in this paper. Then such an algorithm can be simulated by a classical algorithm that uses at most $2^{k({\varepsilon})}$ function evaluations of $q$. {}From Theorem 3.2 we know that $2^{k({\varepsilon})}=\Omega({\varepsilon}^{-1/2})$ and therefore $k({\varepsilon})\ge\tfrac12\log\,{\varepsilon}^{-1}+\Omega(1)$. Hence, the qubit complexity is lower bounded by $\tfrac12\log\,{\varepsilon}^{-1}+\Omega(1)$. This proves the following theorem. \begin{thm} The qubit complexity of the Sturm-Liouville eigenvalue problem in the quantum setting with bit or power queries is bounded by $$ \tfrac12\,\log\,{\varepsilon}^{-1}\,+\,O(1)\,\le\,\mathrm{comp}^{\mathrm{qub}}({\varepsilon})\,\le\, \tfrac32\,\log\,{\varepsilon}^{-1}\,+\,O(1). $$ \end{thm} \section*{Acknowledgments} \vskip 1pc This research has been supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the Air Force Research Laboratory. We are grateful for valuable comments from Stefan Heinrich, Marek Kwas, Joseph F. Traub and Arthur G. Werschulz. \vskip 2pc
\section{Acoustic word embedding models} \label{sec:awe_models} We first provide an overview of two existing acoustic word embedding (AWE) models. We then introduce a new contrastive model. Each of these models can be trained using labelled word segments (making them supervised) or by using words discovered with an unsupervised term discovery (UTD) system (making them unsupervised); in this section we are agnostic to the training method, but we discuss how we use the different models in detail in Section~\ref{sec:embedding_methods}. \subsection{Correspondence autoencoder RNN} \label{ssec:cae} The correspondence autoencoder recurrent neural network~(\system{CAE-RNN})~\cite{kamper_icassp19} is an extension of an autoencoder RNN~\cite{chung+etal_interspeech16}. Both models consist of an encoder RNN and a decoder RNN. The encoder produces a fixed-dimensional representation of a variable-length word segment which is then fed to the input of the decoder to reconstruct the original input sequence. In the \system{CAE-RNN}, unlike the autoencoder, the target output is not identical to the input, but rather an instance of the same word type. Figure~\ref{fig:cae_rnn} illustrates this model. Formally, the \system{CAE-RNN} is trained on pairs of speech segments $(X, X^\prime)$, with $X = \mathbf{x}_1, \ldots, \mathbf{x}_T$ and $X^\prime = \mathbf{x}^{\prime}_1, \ldots, \mathbf{x}^{\prime}_{T^{\prime}}$, containing different instances of the same word type, with each $\mathbf{x}_t$ an acoustic feature vector. The loss for a single training pair is therefore $J = \sum_{t=1}^{T^{\prime}} \norm{\mathbf{x}_t^{\prime} - \boldsymbol{f}_t(X)}^2$, where $\boldsymbol{f}_t(X)$ is the $t^{th}$ decoder output conditioned on the embedding $\mathbf{z}$. The embedding $\mathbf{z}$ is a projection of the final encoder RNN hidden state. As in~\cite{kamper_icassp19}, we first pretrain the \system{CAE-RNN} as an autoencoder and then switch to the loss function for correspondence training.
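As an illustrative aside, the per-pair loss above can be evaluated numerically (a minimal NumPy stand-in for illustration only; in the actual model $\boldsymbol{f}_t(X)$ is produced by the encoder-decoder RNN, and all names here are our own):

```python
import numpy as np

def cae_pair_loss(X_prime, decoder_outputs):
    """Correspondence loss for one training pair: the sum over target
    frames of the squared Euclidean distance between the other instance
    X' of the word and the decoder outputs f_t(X) conditioned on the
    embedding z.

    X_prime:         (T', D) array of acoustic feature frames.
    decoder_outputs: (T', D) array of decoder predictions.
    """
    return float(np.sum((X_prime - decoder_outputs) ** 2))

# A perfect reconstruction of the target instance gives zero loss.
X_prime = np.zeros((5, 13))  # T' = 5 frames of D = 13 features
assert cae_pair_loss(X_prime, X_prime) == 0.0
```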
\begin{figure}[!t] \centering {\includegraphics[scale=0.24]{figures/CAE_RNN.pdf}} \vspace*{-2pt} \caption{ The \system{CAE-RNN} is trained to reconstruct an instance $X^\prime$ of the same word type as the input sequence $X$. $T^\prime$ and $T$ are the lengths of $X^\prime$ and $X$, respectively. } \label{fig:cae_rnn} \end{figure} \subsection{Siamese RNN} \label{ssec:siamese} Unlike the reconstruction loss used in the \system{CAE-RNN}, the \system{SiameseRNN} model explicitly optimises relative distances between embeddings~\cite{settle+livescu_slt16}. Given input sequences $X_a$, $X_p$, $X_n$, the model produces embeddings $\mathbf{z}_a$, $\mathbf{z}_p$, $\mathbf{z}_n$, as illustrated in Figure~\ref{fig:siamese_rnn}. Inputs $X_a$ and $X_p$ are from the same word type (subscripts indicate anchor and positive) and $X_n$ is from a different word type (negative). For a single triplet of inputs, the model is trained using the triplet loss function,\footnote{Some studies~\cite{he+etal_iclr17,ng+lee_arxiv20,kamper+etal_arxiv2020} refer to this as a \textit{contrastive loss}, but we use \textit{triplet loss} here to explicitly distinguish it from the loss in Section~\ref{ssec:contrastive}.} defined as~\cite{weinberger+saul_jmlr09,chechik+etal_jmlr10}: $J = \text{max} \{0, m + d(\mathbf{z}_a, \mathbf{z}_p) - d(\mathbf{z}_a, \mathbf{z}_n)\}$, with $m$ a margin parameter and $d(\mathbf{u}, \mathbf{v}) = 1 - \mathbf{u}^{\top}\mathbf{v}/\norm{\mathbf{u}}\norm{\mathbf{v}}$ denoting the cosine distance between two vectors $\mathbf{u}$ and $\mathbf{v}$. This loss is at a minimum when all embedding pairs $(\mathbf{z}_a , \mathbf{z}_p)$ of the same type are more similar by a margin $m$ than pairs $(\mathbf{z}_a , \mathbf{z}_n)$ of different types. To sample negative examples we use an online batch hard strategy~\cite{hermans+etal_arxiv2017}: for each item (anchor) in the batch we select the hardest positive and hardest negative example.
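To make the triplet objective concrete, here is a minimal NumPy sketch using the cosine distance defined above (the toy embeddings, the default margin value and all names are our own):

```python
import numpy as np

def cosine_distance(u, v):
    """d(u, v) = 1 - u.v / (||u|| ||v||)."""
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_loss(z_a, z_p, z_n, m=0.25):
    """J = max{0, m + d(z_a, z_p) - d(z_a, z_n)}."""
    return max(0.0, m + cosine_distance(z_a, z_p) - cosine_distance(z_a, z_n))

# The loss vanishes once the negative is further from the anchor than
# the positive by at least the margin m.
z_a = np.array([1.0, 0.0])
z_n = np.array([-1.0, 0.0])  # diametrically opposed: d(z_a, z_n) = 2
assert triplet_loss(z_a, z_a, z_n) == 0.0
```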
\begin{figure}[!t] \centering {\includegraphics[scale=0.24]{figures/SiameseRNN.pdf}} \vspace*{-2pt} \caption{In the \system{SiameseRNN}, three encoder RNNs use the same set of parameters to produce embeddings $\mathbf{z}_a$, $\mathbf{z}_p$, $\mathbf{z}_n$ from input segments $X_a$, $X_p$, $X_n$. The model is trained to minimise the distance between the anchor and the positive item while maximising the distance between the anchor and negative item.} \label{fig:siamese_rnn} \end{figure} \subsection{Contrastive RNN} \label{ssec:contrastive} As an extension of the triplet loss function, we consider a loss that incorporates multiple negative examples for each positive pair. Concretely, given inputs $X_a$ and $X_p$ and multiple negative examples $X_{n_{1}}, \ldots, X_{n_{K}}$, the \system{ContrastiveRNN} produces embeddings $\mathbf{z}_a, \mathbf{z}_p, \mathbf{z}_{n_{1}}, \ldots, \mathbf{z}_{n_{K}}$. Let $\text{sim}(\mathbf{u}, \mathbf{v}) = \mathbf{u}^{\top}\mathbf{v}/\norm{\mathbf{u}}\norm{\mathbf{v}}$ denote the cosine similarity between two vectors $\mathbf{u}$ and $\mathbf{v}$. The loss given a positive pair $(X_a, X_p)$ and the set of negative examples is then defined as~\cite{chen+etal_arxiv2020}: \begin{equation*} J = -\text{log}\frac{\text{exp}\big\{\text{sim}(\mathbf{z}_a, \mathbf{z}_p)/\tau\big\}}{\sum_{j \in \{p, n_1, \hdots, n_K\}}^{}\text{exp}\big\{\text{sim}(\mathbf{z}_a, \mathbf{z}_j)/\tau\big\}}\,\text{,} \label{eqn:contrastive_loss} \end{equation*} where $\tau$ is a temperature parameter. The difference between this loss and the triplet loss used in the \system{SiameseRNN} is illustrated in Figure~\ref{fig:contrastive_rnn}. To sample negative examples we use an offline batch construction process. To construct a single batch, we choose $N$ distinct positive pairs. Given a positive pair $(X_a, X_p)$, the remaining $2(N-1)$ items are then treated as negative examples. 
The final loss is calculated as the sum of the loss over all $N$ positive pairs within the batch. As far as we are aware, the \system{ContrastiveRNN} has not been used as an AWE model in any previous work. \begin{figure}[!t] \begin{minipage}[a]{0.45\linewidth} \centering \centerline{\includegraphics[width=0.95\linewidth]{figures/contrastive_RNN_1.pdf}} \centerline{(a) Single negative example.}\medskip \end{minipage} \hfill \begin{minipage}[a]{0.45\linewidth} \centering \centerline{\includegraphics[width=0.95\linewidth]{figures/contrastive_RNN_2.pdf}} \centerline{(b) Multiple negative examples.}\medskip \end{minipage} \vspace*{-2pt} \caption{A visualisation of the difference in the optimisation of (a) the \system{SiameseRNN} and (b) the \system{ContrastiveRNN} for a single positive pair $(\mathbf{z}_a, \mathbf{z}_p)$ in the embedding space. } \label{fig:contrastive_rnn} \end{figure} \section{Acoustic word embeddings for zero-resource languages} \label{sec:embedding_methods} In Section~\ref{sec:awe_models} we were agnostic to how training targets for the different AWE models are obtained. In this section we describe different strategies for training AWE models, specifically for zero-resource languages where labelled data is not available. One option is to train unsupervised monolingual models directly on unlabelled data (Section~\ref{ssec:monolingual}). Another option is to train a supervised multilingual model on labelled data from well-resourced languages and then apply the model to a zero-resource language (Section~\ref{ssec:multilingual}). All three of the AWE models in Section~\ref{sec:awe_models} can be used in both of these settings, as explained below. Finally, in Section~\ref{ssec:multilingual_adapt} we describe a new combined approach where multilingual models are fine-tuned to a zero-resource language using unsupervised adaptation. 
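For reference alongside the training strategies just outlined, the per-pair loss of the \system{ContrastiveRNN} (Section~\ref{ssec:contrastive}) can be sketched as follows (a minimal NumPy stand-in for illustration only; names are our own):

```python
import numpy as np

def contrastive_loss(z_a, z_p, z_negs, tau=0.1):
    """J = -log[ exp(sim(z_a, z_p)/tau) / sum_j exp(sim(z_a, z_j)/tau) ],
    where sim is cosine similarity and j ranges over the positive example
    and the K negative examples."""
    def sim(u, v):
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([sim(z_a, z_p)] + [sim(z_a, z) for z in z_negs]) / tau
    logits -= logits.max()  # for numerical stability
    return float(-logits[0] + np.log(np.exp(logits).sum()))

# The loss approaches zero when the positive is close to the anchor and
# all negatives are far away.
z_a = np.array([1.0, 0.0])
loss = contrastive_loss(z_a, z_a, [np.array([-1.0, 0.0])])
assert 0.0 < loss < 1e-6
```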
\subsection{Unsupervised monolingual models} \label{ssec:monolingual} For any of the AWE models in Section~\ref{sec:awe_models}, we need pairs of segments containing words of the same type; for the \system{SiameseRNN} and \system{ContrastiveRNN} we additionally need negative examples. In a zero-resource setting there is no transcribed speech to construct such pairs. But pairs can be obtained automatically~\cite{jansen+etal_icassp13b}: we apply an unsupervised term discovery (UTD) system~\cite{jansen+vandurme_asru11} to an unlabelled speech collection from the target zero-resource language. This system discovers pairs of word-like segments, predicted to be of the same unknown type. The discovered pairs can be used to sample positive and negative examples for any of the three models in Section~\ref{sec:awe_models}. Since the UTD system has no prior knowledge of the language or word boundaries within the unlabelled speech data, the entire process can be considered unsupervised. Using this methodology, we consider purely unsupervised monolingual versions of each of the three AWE models in Section~\ref{sec:awe_models}. \subsection{Supervised multilingual models} \label{ssec:multilingual} Instead of relying on discovered words from the target zero-resource language, we can exploit labelled data from well-resourced languages to train a single multilingual supervised AWE model~\cite{kamper+etal_arxiv2020,hu+etal_arxiv20}. This model can then be applied to an unseen zero-resource language. Since a supervised model is trained for one task and applied to another, this can be seen as a form of \textit{transfer learning}~\cite{pan+yang_tkde09,ruder_phd19}. We consider supervised multilingual variants of the three models in Section~\ref{sec:awe_models}. Experiments in~\cite{kamper+etal_arxiv2020} showed that multilingual versions of the \system{CAE-RNN} and \system{SiameseRNN} outperform unsupervised monolingual variants.
A multilingual \system{ContrastiveRNN} has not been considered in a previous study, as far as we know. \subsection{Unsupervised adaptation of multilingual models} \label{ssec:multilingual_adapt} While previous studies have found that multilingual AWE models (Section~\ref{ssec:multilingual}) are superior to unsupervised AWE models (Section~\ref{ssec:monolingual}), one question is whether multilingual models could be tailored to a particular zero-resource language in an unsupervised way. We propose to adapt a multilingual AWE model to a target zero-resource language: a multilingual model's parameters (or a subset of the parameters) are fine-tuned using discovered word pairs. These discovered segments are obtained by applying a UTD system to unlabelled data from the target zero-resource language. The idea is that adapting the multilingual AWE model to the target language would allow the model to learn aspects unique to that language. We consider the adaptation of multilingual versions of all three AWE models in Section~\ref{sec:awe_models}. On development data, we experimented with which parameters to update and which to keep fixed from the source multilingual model. For the \system{CAE-RNN}, we found that it is best to freeze the multilingual encoder RNN weights and only update the weights between the final encoder RNN hidden state and the embedding; we also found that it is best to re-initialise the decoder RNN weights randomly before training on the target language. For the \system{SiameseRNN} and \system{ContrastiveRNN}, we update all weights during adaptation. As far as we know, we are the first to perform \textit{unsupervised} adaptation of multilingual AWE models for the zero-resource setting. However,~\cite{hu+etal_arxiv20} showed the benefit of \textit{supervised} adaptation, where (limited) labelled data from a target language is used to update the parameters of a multilingual AWE model.
\section{Conclusion} \label{sec:conclusion} We have compared a self-supervised contrastive acoustic word embedding approach to two existing methods in a word discrimination task on six zero-resource languages. In a purely unsupervised setting where words from a term discovery system are used for self-supervision, the contrastive model outperformed unsupervised correspondence autoencoder and Siamese embedding models. In a multilingual transfer setting where a model is trained on several well-resourced languages and then applied to a zero-resource language, the contrastive model did not show consistent improvements. However, it performed best in a setting where multilingual models are adapted to a particular zero-resource language using the unsupervised discovered word segments, leading to the best reported results on this data. Analysis shows that the contrastive approach abstracts away from speaker identity more than the other two approaches. Future work will involve extending our analysis and performing comparative experiments in a downstream query-by-example search task. \section{Experimental setup} \label{sec:experiment_setup} We perform experiments using the GlobalPhone corpus of read speech~\cite{schultz+etal_icassp13}. As in~\cite{hermann+etal_csl20, kamper+etal_arxiv2020}, we treat six languages as our target zero-resource languages: Spanish (ES), Hausa (HA), Croatian (HR), Swedish (SV), Turkish (TR) and Mandarin~(ZH). Each language has on average 16 hours of training, 2 hours of development and 2 hours of test data. We apply the UTD system of~\cite{jansen+vandurme_asru11} to the training set of each zero-resource language and use the discovered pairs to train unsupervised monolingual embedding models (Section~\ref{ssec:monolingual}). The UTD system discovers around 36k pairs for each language, where pair-wise matching precisions vary between $32\%$ (SV) and $79\%$ (ZH).
Training conditions for the unsupervised monolingual \system{CAE-RNN}, \system{SiameseRNN} and \system{ContrastiveRNN} models are determined by doing validation on the Spanish development data. The same hyperparameters are then used for the five remaining zero-resource languages. For training supervised multilingual embedding models (Section~\ref{ssec:multilingual}), six other GlobalPhone languages are chosen as well-resourced languages: Czech, French, Polish, Portuguese, Russian and Thai. Each well-resourced language has on average 21 hours of labelled training data. We pool the data from all six well-resourced languages and train a multilingual \system{CAE-RNN}, a \system{SiameseRNN} and a \system{ContrastiveRNN}. Instead of using the development data from one of the zero-resource languages, we use another well-resourced language, German, for validation of each model before applying it to the zero-resource languages. We only use 300k positive word pairs for each model, as further increasing the number of pairs did not give improvements on the German validation data. As explained in Section~\ref{ssec:multilingual_adapt}, we adapt each of the multilingual models to each of the six zero-resource languages using the same discovered pairs as for the unsupervised monolingual models. We again use Spanish development data to determine hyperparameters. All speech audio is parametrised as $D = 13$ dimensional static Mel-frequency cepstral coefficients (MFCCs). All our models have a similar architecture: encoders and decoders consist of three unidirectional RNNs with 400-dimensional hidden vectors, and all models use an embedding size of 130 dimensions. Models are optimised using Adam optimisation~\cite{kingma+ba_iclr15}. The margin parameter $m$ in Section \ref{ssec:siamese} and temperature parameter $\tau$ in Section \ref{ssec:contrastive} are set to $0.25$ and $0.1$, respectively. 
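For convenience, the setup above can be collected in one place (the values are exactly those stated in this section; the dictionary layout and key names are our own):

```python
# A summary of the experimental configuration described in the text.
config = {
    "features": {"type": "MFCC", "dim": 13},
    "encoder": {"rnn": "unidirectional", "layers": 3, "hidden_dim": 400},
    "decoder": {"rnn": "unidirectional", "layers": 3, "hidden_dim": 400},
    "embedding_dim": 130,
    "optimiser": "Adam",
    "margin_m": 0.25,        # SiameseRNN triplet loss margin
    "temperature_tau": 0.1,  # ContrastiveRNN temperature
}
```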
We implement all our models in PyTorch.\footnote{ \url{https://github.com/christiaanjacobs/globalphone_awe_pytorch}} We use a word discrimination task~\cite{carlin+etal_icassp11} to measure the intrinsic quality of the resulting AWEs. To evaluate a particular AWE model, a set of isolated test word segments is embedded. For every word pair in this set, the cosine distance between their embeddings is calculated. Two words can then be classified as being of the same or different type based on some distance threshold, and a precision-recall curve is obtained by varying the threshold. The area under this curve is used as final evaluation metric, referred to as the average precision~(AP). We are particularly interested in obtaining embeddings that are speaker invariant. We therefore calculate AP by only taking the recall over instances of the same word spoken by different speakers, i.e.\ we consider the more difficult setting where a model does not get credit for recalling the same word if it is said by the same speaker. \section{Introduction} \label{sec:intro} A \textit{zero-resource} language is one for which no transcribed speech resources are available for developing speech systems~\cite{jansen+etal_icassp13,dunbar+etal_interspeech19}. Although conventional speech recognition is not possible for such languages, researchers have shown how speech search~\cite{levin+etal_icassp15,huang+etal_arxiv18,yuan+etal_interspeech18}, discovery~\cite{park+glass_taslp08,jansen+vandurme_asru11,ondel+etal_arxiv19,rasanen+blandon_arxiv20}, and segmentation and clustering~\cite{kamper+etal_asru17,seshadri+rasanen_spl19,kreuk+etal_interspeech20} applications can be developed without any labelled speech audio. In many of these applications, a metric is required for comparing speech segments of different durations. This is typically done using dynamic time warping (DTW). 
But DTW is computationally expensive and can be difficult to incorporate directly into downstream systems (see e.g.\ the alterations required in~\cite{anastasopoulos+etal_emnlp16}). \textit{Acoustic word embeddings}~(AWEs) have emerged as an alternative. Instead of using alignment, speech segments are mapped to vectors in a fixed-dimensional space. Proximity in this embedding space should indicate similarity of the original acoustic segments~\cite{levin+etal_asru13}. Several AWE models have been proposed~\cite{bengio+heigold_interspeech14,he+etal_iclr17,audhkhasi+etal_stsp17,wang+etal_icassp18,chen+etal_slt18,holzenberger+etal_interspeech18,chung+glass_interspeech18,haque+etal_icassp19,shi+etal_arxiv19,palaskar+etal_icassp19,settle+etal_icassp19,jung+etal_asru19}. For zero-resource settings, one approach is to train an unsupervised model on unlabelled data from the target language. Chung et al.~\cite{chung+etal_interspeech16} trained an autoencoding encoder-decoder recurrent neural network (RNN) on unlabelled speech segments and used (a projection of) the final encoder hidden state as embedding. Kamper~\cite{kamper_icassp19} extended this approach: instead of reconstructing an input segment directly, the correspondence autoencoder RNN (\system{CAE-RNN}) attempts to reconstruct another speech segment of the same type as the input. Since labelled data is not available for zero-resource languages, the input-output pairs for the \system{CAE-RNN} are obtained from an unsupervised term discovery (UTD) system, which automatically finds recurring word-like patterns in an unlabelled speech collection~\cite{park+glass_taslp08,jansen+vandurme_asru11}. A recent alternative for obtaining embeddings on a zero-resource language is to use multilingual transfer learning~\cite{ma+etal_arxiv20,kamper+etal_icassp20,kamper+etal_arxiv2020,hu+etal_arxiv20}.
The idea is to train a supervised multilingual AWE model jointly on a number of well-resourced languages for which labelled data is available, but to then apply the model to an unseen zero-resource language. This multilingual transfer approach was found to outperform monolingual unsupervised learning approaches in~\cite{kamper+etal_arxiv2020,hu+etal_arxiv20}. One question is whether unsupervised learning and multilingual transfer are complementary. More concretely, can multilingual transfer further benefit from incorporating unsupervised learning? In this paper we answer this question by using unsupervised adaptation: a multilingual AWE model is updated by fine-tuning (a subset of) its parameters to a particular zero-resource language. To obtain training targets, we use the same approach as for the unsupervised model in~\cite{kamper_icassp19}, and apply a UTD system to unlabelled data from the target language. We consider unsupervised adaptation of multilingual \system{CAE-RNN} models, \system{SiameseRNN} models~\cite{settle+livescu_slt16}, and a new AWE approach based on self-supervised contrastive learning. \textit{Self-supervised learning} involves using proxy tasks for which target labels can automatically be obtained from the data~\cite{doersch+zisserman_iccv17,asano+etal_iclr20}. Originally proposed for vision problems~\cite{doersch+etal_iccv15,noroozi+favaro_eccv16,gidaris+etal_arxiv18}, it has since also been used as an effective pretraining step for supervised speech recognition~\cite{pascual+etal_arxiv19,synnaeve+etal_arxiv19,baevski+etal_iclr20,baevski+mohamed_icassp20,wang+etal_icassp20,ravanelli+etal_icassp20}. 
It is somewhat difficult to distinguish self-supervised from unsupervised learning.\footnote{E.g., the unsupervised monolingual \tablesystem{CAE-RNN}~\cite{kamper_icassp19} is referred to as a self-supervised model in~\cite{algayres+etal_arxiv20}, since it fits the definition exactly: training targets are automatically obtained from the data for a reconstruction task. } But, importantly for us, a number of loss functions have been introduced in the context of self-supervised learning which have not been considered for AWEs. Here we specifically consider the contrastive loss of~\cite{chen+etal_arxiv2020, sohn_nips2016}. While a Siamese AWE model~\cite{kamper+etal_icassp16,settle+livescu_slt16} optimises the relative distance between one positive and one negative pair, our contrastive AWE model jointly embeds a number of speech segments and then attempts to select a positive item from among several negative items. We compare the \system{ContrastiveRNN} to \system{CAE-RNN} and \system{SiameseRNN} models in both the purely unsupervised monolingual and the supervised multilingual transfer settings. We use an intrinsic word discrimination task on six languages (which we treat as zero-resource). Our main contributions are as follows. (i)~For purely unsupervised monolingual AWEs, we show that a \system{ContrastiveRNN} using UTD segments as training targets outperforms previous unsupervised models by between 5\% and 19\% absolute in average precision (AP). (ii)~We compare contrastive learning to other supervised AWE models for multilingual transfer (without adaptation) and find that the multilingual \system{ContrastiveRNN} only gives improvements on some (but not all) zero-resource languages compared to the multilingual \system{CAE-RNN} and \system{SiameseRNN}. 
(iii)~However, when performing unsupervised adaptation, adapted multilingual \system{ContrastiveRNN}s outperform the other adapted models on five out of six zero-resource languages, with improvements of up to 12\% absolute in AP on some languages, resulting in the best reported results on these data sets. (iv)~We perform probing experiments which show that the \system{ContrastiveRNN} is generally better at abstracting away from speaker identity. \section{Experimental results} \label{sec:results} We start in Section~\ref{ssec: word_discrimination} by evaluating the different AWE models using the intrinsic word discrimination task described above. Instead of only looking at word discrimination results, it is useful to also use other methods to try and better understand the organisation of AWE spaces~\cite{matusevych+etal_baics20}, especially in light of recent results~\cite{algayres+etal_arxiv20} showing that AP has limitations. We therefore look at speaker classification performance in Section~\ref{ssec: speaker_invariance}, and give a qualitative analysis of adaptation in Section~\ref{ssec: qualitative_analysis}. \subsection{Word discrimination} \label{ssec: word_discrimination} We first consider purely unsupervised monolingual models (Section~\ref{ssec:monolingual}). We are particularly interested in the performance of the \system{ContrastiveRNN}, which has not been considered in previous work. The top section in Table~\ref{tbl:multi} shows the performance for the unsupervised monolingual AWE models applied to the test data from the six zero-resource languages.\footnote{We note that the results for the \tablesystem{CAE-RNN} and \tablesystem{SiameseRNN} here are slightly different to that of~\cite{kamper+etal_icassp20,kamper+etal_arxiv2020}, despite using the same test and training setup.
We believe this is due to the different negative sampling scheme for the \tablesystem{SiameseRNN} and other small differences in our implementation.} As a baseline, we also give the results where DTW is used directly on the MFCCs to perform the word discrimination task. We see that the \system{ContrastiveRNN} consistently outperforms the \system{CAE-RNN} and \system{SiameseRNN} approaches on all six zero-resource languages. The \system{ContrastiveRNN} is also the only model to perform better than DTW on all six zero-resource languages, which is noteworthy since DTW has access to the full sequences for discriminating between words. \input{table2} Next, we consider the supervised multilingual models (Section~\ref{ssec:multilingual}). The middle section of Table~\ref{tbl:multi} shows the performance for the supervised multilingual models applied to the six zero-resource languages. By comparing these supervised multilingual models to the unsupervised monolingual models (top), we see that in almost all cases the multilingual models outperform the purely unsupervised monolingual models, as also in~\cite{kamper+etal_icassp20,kamper+etal_arxiv2020}. However, on Mandarin, the unsupervised monolingual \system{ContrastiveRNN} model outperforms all three multilingual models. Comparing the three multilingual models, we do not see a consistent winner between the \system{ContrastiveRNN} and \system{CAE-RNN}, with one performing better on some languages while the other performs better on others. The multilingual \system{SiameseRNN} generally performs worst, although it outperforms the \system{ContrastiveRNN} on Swedish. Finally, we consider adapting the supervised multilingual models (Section~\ref{ssec:multilingual_adapt}). The results after adapting each multilingual model to each of the zero-resource languages are shown in the bottom section of Table~\ref{tbl:multi}. 
Comparing the middle and bottom sections of the table, we see that most of the adapted models outperform their corresponding source multilingual models, with the \system{ContrastiveRNN} and \system{SiameseRNN} improving substantially after adaptation on some of the languages. The adapted \system{ContrastiveRNN} models outperform the adapted \system{CAE-RNN} and \system{SiameseRNN} models on five out of the six zero-resource languages, achieving some of the best reported results on these data sets~\cite{ann+etal_csl20,kamper+etal_icassp20}. We conclude that unsupervised adaptation of multilingual models to a target zero-resource language is an effective AWE approach, especially when coupled with the self-supervised contrastive loss. \input{table3} One question is whether adapted models close the gap between the zero-resource setting and the best-case scenario where we have labelled data available in a target language. To answer this, Table~\ref{tbl:top-line} compares multilingual models (bottom) to ``oracle'' supervised monolingual models trained on labelled data from the six evaluation languages (top) on development data. Although Table~\ref{tbl:multi} shows that adaptation greatly improves performance in the zero-resource setting, Table~\ref{tbl:top-line} shows that multilingual adaptation still does not reach the performance of supervised monolingual models. \subsection{Speaker classification} \label{ssec: speaker_invariance} \input{table4} To what extent do the different AWEs capture speaker information? How does adaptation affect speaker invariance? To measure speaker invariance, we use a linear classifier to predict a word's speaker identity from its AWE. Specifically, we train a multi-class logistic regression model on 80\% of the development data and test it on the remaining 20\%. 
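A minimal stand-in for this probe is sketched below (pure NumPy rather than an actual library classifier; the learning rate, epoch count and all names are our own):

```python
import numpy as np

def train_speaker_probe(Z, y, n_classes, lr=0.5, epochs=200):
    """Fit a linear multi-class logistic regression probe that predicts
    the speaker label y from a fixed acoustic word embedding, by
    full-batch gradient descent on the softmax cross-entropy.
    Z: (N, D) embeddings; y: (N,) integer speaker labels.
    Returns a weight matrix W of shape (D, n_classes)."""
    N, D = Z.shape
    W = np.zeros((D, n_classes))
    Y = np.eye(n_classes)[y]  # one-hot targets
    for _ in range(epochs):
        logits = Z @ W
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        W -= lr * Z.T @ (probs - Y) / N  # gradient step
    return W

def probe_accuracy(W, Z, y):
    return float(np.mean((Z @ W).argmax(axis=1) == y))
```

Higher probe accuracy means the embeddings retain more speaker information; in this analysis, lower is therefore better.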
The top section of Table~\ref{tbl:monolingual_speaker} shows speaker classification results on development data for the three types of monolingual unsupervised models (Section~\ref{ssec:monolingual}). Since we are interested in how well models abstract away from speaker information, we consider lower accuracy as better (shown in bold). The \system{ContrastiveRNN} achieves the lowest speaker classification performance across all languages, except on Croatian where it performs very similarly to the \system{SiameseRNN}. This suggests that among the unsupervised monolingual models, the \system{ContrastiveRNN} is the best at abstracting away from speaker identity (at the surface level captured by a linear classifier). Next, we consider speaker classification performance for the multilingual models (Section~\ref{ssec:multilingual}). Comparing the middle and top sections of Table~\ref{tbl:monolingual_speaker}, we see that for each multilingual model (middle) the speaker classification performance drops from its corresponding unsupervised monolingual version (top) across all six languages, again indicating an improvement in speaker invariance. Comparing the three multilingual models to each other (middle), the \system{ContrastiveRNN} has the lowest speaker classification performance on four out of the six evaluation languages. Finally, we look at the impact of unsupervised adaptation (Section~\ref{ssec:multilingual_adapt}) on speaker invariance, shown at the bottom of Table~\ref{tbl:monolingual_speaker}. After adaptation (bottom) we see that speaker classification results improve consistently compared to their corresponding source multilingual model (middle). Although this seems to indicate that the adapted AWEs capture more speaker information, these embeddings still lead to better word discrimination performance (Table~\ref{tbl:multi}). 
A similar trend was observed in~\cite{kamper+etal_arxiv2020}: a model leading to better (linear) speaker classification performance does not necessarily give worse AP. Recent results in~\cite{algayres+etal_arxiv20} showed that AP is limited in its ability to indicate downstream performance for all tasks. Further analysis is required to investigate this seeming contradiction. Importantly for us, it seems that unsupervised adaptation using unlabelled data in a target zero-resource language leads to representations which better distinguish between speakers in that language. \subsection{Qualitative analysis of adaptation} \label{ssec: qualitative_analysis} \begin{figure}[!t] \begin{minipage}[a]{.49\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{figures/HA_multi_test_cherry_final.pdf}} \centerline{(a) Before adaptation.}\medskip \end{minipage} \hfill \begin{minipage}[a]{0.49\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{figures/HA_adapt_test_cherry_final.pdf}} \centerline{(b) After adaptation.}\medskip \end{minipage} \vspace*{-2pt} \caption{t-SNE visualisations of acoustic embeddings for the most frequent words in the Hausa data, produced by (a) the multilingual \system{ContrastiveRNN} model and (b) the multilingual \system{ContrastiveRNN} model adapted to Hausa.} \label{fig:tsne} \end{figure} Figure~\ref{fig:tsne} shows t-SNE visualisations~\cite{maaten+hinton_jmlr2008} of the AWEs produced by the \system{ContrastiveRNN} on Hausa data before and after adaptation. In this curated example, we see how some of the words that are clustered together by the multilingual model (e.g. ``amfani'' and ``hankali'') are separated after adaptation.
\section{Introduction and presentation of the results} In this paper, we highlight the special properties of incomparability graphs by considering the behavior of paths. We consider the problem of the existence of infinite paths, either induced or isometric, in the incomparability graph of a poset. We apply one of our results to the theory of hereditary classes of permutation graphs that are well quasi ordered by embeddability. \vskip0.5cm The graphs we consider are undirected, simple and have no loops. That is, a {\it graph} is a pair $G:=(V, E)$, where $E$ is a subset of $[V]^2$, the set of $2$-element subsets of $V$. Elements of $V$ are the {\it vertices} of $G$ and elements of $E$ its {\it edges}. Given a graph $G$, we denote by $V(G)$ its vertex set and by $E(G)$ its edge set. The {\it complement} of a graph $G=(V,E)$ is the graph $G^c$ whose vertex set is $V$ and whose edge set is $E^c:=[V]^2\setminus E$. Throughout, $P:=(V, \leq)$ denotes an ordered set (poset), that is, a set $V$ equipped with a binary relation $\leq$ on $V$ which is reflexive, antisymmetric and transitive. We say that two elements $x,y\in V$ are \emph{comparable} if $x\leq y$ or $y\leq x$; otherwise we say they are \emph{incomparable}. The \emph{comparability graph}, respectively the \emph{incomparability graph}, of a poset $P:=(V,\leq)$ is the undirected graph, denoted by $\comp(P)$, respectively $\inc(P)$, with vertex set $V$ and whose edges are the pairs $\{u,v\}$ of comparable distinct vertices (that is, either $u< v$ or $v<u$), respectively of incomparable vertices. A result of Gallai from 1967 \cite{gallai}, quite famous and nontrivial, characterizes comparability graphs among graphs in terms of obstructions: a graph $G$ is the comparability graph of a poset if and only if it does not contain, as an induced subgraph, a graph belonging to a minimal list of finite graphs.
Since the complement of a comparability graph is an incomparability graph, Gallai's result yields a similar characterization of incomparability graphs. In this paper, we consider incomparability graphs as metric spaces by means of the shortest-path distance. The metric properties of a graph, notably of an incomparability graph, and the metric properties of its complement seem to be far apart. In general, metric properties of graphs are based on paths and cycles. It should be noted that incomparability graphs have no induced cycles of length at least five (\cite{gallai}; for a short proof see after Lemma \ref{lem:inducedpath}) while comparability graphs have no induced odd cycles but can have arbitrarily large induced even cycles. In the sequel, we will illustrate the specificity of the metric properties of incomparability graphs by emphasising the properties of paths. We start with a few definitions. Let $G:=(V, E)$ be a graph. If $A$ is a subset of $V$, the graph $G_{\restriction A}:=(A, E\cap [A]^2)$ is the \emph{graph induced by $G$ on $A$}. A \emph{path} is a graph $\mathrm P$ such that there exists a one-to-one map $f$ from the set $V(\mathrm P)$ of its vertices into an interval $I$ of the chain $\NN$ of nonnegative integers in such a way that $\{u,v\}$ belongs to $E(\mathrm P)$, the set of edges of $\mathrm P$, if and only if $|f(u)-f(v)|=1$ for every $u,v\in V(\mathrm P)$. If $I$ is finite, say $I=\{1,\dots,n\}$, then we denote that path by $\mathrm P_n$; its \emph{length} is $n-1$ (so, if $n=2$, $\mathrm P_2$ is made of a single edge, whereas if $n=1$, $\mathrm P_1$ is a single vertex). We denote by $\mathrm P_\infty$ the one-way infinite path, i.e. $I=\NN$. If $x,y$ are two vertices of a graph $G:= (V, E)$, we denote by $d_G(x,y)$ the length of a shortest path joining $x$ and $y$ if there is one, and set $d_G(x,y):= \infty$ otherwise. This defines a distance on $V$, the \emph{graphic distance}. A graph is \emph{connected} if any two vertices belong to some path.
The \emph{diameter} of $G$, denoted by $\delta_{G}$, is the supremum of the set $\{d_G(x,y) : x,y\in V\}$. If $A$ is a subset of $V$, the graph $G'$ induced by $G$ on $A$ is an \emph{isometric subgraph} of $G$ if $d_{G'}(x,y)=d_G(x,y)$ for all $x,y\in A$. The supremum of the lengths of finite induced paths of $G$, denoted by $D_G$, is sometimes called the (induced) \emph{detour} of $G$ \cite{buckley-harary}.\\ The main results of the paper are presented in the next four subsections. Section \ref{sec:application} is devoted to an application of one of our main results (Theorem \ref{thm:infinitepath-kite}). The remaining sections contain intermediate results and proofs of our main results. \subsection{Induced paths of arbitrarily large length in incomparability graphs and in arbitrary graphs} We now consider the question of the existence of infinite induced paths in incomparability graphs with infinite detour. In order to state our main result of this subsection we need to introduce the notions of direct sum and complete sum of graphs. Let $G_n:=(V_n,E_n)$ for $n\in \NN$ be a family of graphs having pairwise disjoint vertex sets. The \emph{direct sum} of $(G_n)_{n\in \NN}$, denoted $\oplus_n G_n$, is the graph whose vertex set is $\bigcup_{n\in \NN}V_n$ and edge set $\bigcup_{n\in \NN}E_n$. The \emph{complete sum} of $(G_n)_{n\in \NN}$, denoted $\sum_n G_n$, is the graph whose vertex set is $\bigcup_{n\in \NN}V_n$ and edge set $\bigcup_{i\neq j}\{\{v,v'\} : v\in V_i \wedge v'\in V_j\}\cup \bigcup_{n\in \NN}E_n$.\\ A necessary condition for the existence of an infinite induced path in a graph is to have infinite detour. On the other hand, the graphs consisting of the direct sum of finite paths of arbitrarily large length and the complete sum of finite paths of arbitrarily large length are (incomparability) graphs with infinite detour and yet do not have an infinite induced path.
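The notions of incomparability graph and graphic distance can be made concrete with a small computation. The following Python sketch is purely illustrative (the poset, a direct sum of two $3$-element chains, is a hypothetical example): its incomparability graph is the complete bipartite graph $K_{3,3}$, and the graphic distance is computed by breadth-first search.

```python
from collections import deque
from itertools import combinations

# Hypothetical toy poset on {0,...,5}: two disjoint 3-chains 0<1<2 and 3<4<5.
V = range(6)
le = {(x, x) for x in V} | {(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)}

# Incomparability graph: edges are the pairs comparable in neither direction.
inc = {frozenset({x, y}) for x, y in combinations(V, 2)
       if (x, y) not in le and (y, x) not in le}

def d(source):
    """Graphic distance from `source`, by breadth-first search."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in V:
            if frozenset({u, v}) in inc and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

diameter = max(max(d(x).values()) for x in V)
```

On this toy poset the incomparability graph is $K_{3,3}$: vertices in different chains are at distance $1$, vertices in the same chain are at distance $2$, so the diameter is $2$.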
We should mention that \emph{in the case of incomparability graphs, having infinite detour is equivalent to containing a direct sum or a complete sum of finite paths of arbitrarily large length}. This is Theorem 2 from \cite{pouzet-zaguia20}. \begin{theorem}[\cite{pouzet-zaguia20}]\label{thm:pouzet-zaguia-pathmin}Let $G$ be the incomparability graph of a poset. Then $G$ contains induced paths of arbitrarily large length if and only if $G$ contains $\sum_{n\geq 1} \mathrm P_n$ or $\oplus_{n\geq 1} \mathrm P_n$ as an induced subgraph. \end{theorem} For general graphs, the statement of Theorem \ref{thm:pouzet-zaguia-pathmin} is false. Indeed, in \cite{pouzet-zaguia20} we exhibited uncountably many graphs of cardinality $\aleph_0$ containing finite induced paths of unbounded length but neither a direct sum nor a complete sum of finite paths of unbounded length. In particular, these graphs do not have an infinite induced path. \\ In the case of incomparability graphs of posets coverable by two chains, having infinite detour is equivalent to the existence of an infinite induced path. Our first result is the following. \begin{theorem}\label{thm:widthtwo}Let $P$ be a poset coverable by two chains (that is, totally ordered sets). If $\inc(P)$, the incomparability graph of $P$, is connected then the following properties are equivalent: \begin{enumerate}[(i)] \item $\inc(P)$ contains the direct sum of induced paths of arbitrarily large length; \item the detour of $\inc(P)$ is infinite; \item the diameter of $\inc(P)$ is infinite; \item $\inc(P)$ contains an infinite induced path. \end{enumerate} \end{theorem} A proof of Theorem \ref{thm:widthtwo} will be provided in Section \ref{proof:thm:widthtwo}. The implication $(i)\Rightarrow (iv)$ of Theorem \ref{thm:widthtwo} becomes false if the condition ``coverable by two chains'' is dropped (see Figure \ref{width-three} for an example).
Indeed, \begin{example}\label{thm:infi-detour-no-path}There exists a poset with no infinite antichain whose incomparability graph is connected and embeds the direct sum of finite induced paths of arbitrarily large length and yet does not have an infinite induced path (See Figure \ref{width-three}). \end{example} Example \ref{thm:infi-detour-no-path} and a proof that it verifies the required properties will be given in Section \ref{proof:thm:infi-detour-no-path}. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize=2.4in \epsfbox{width-three.eps} \end{center} \caption{The Hasse diagram of a poset of width three and its incomparability graph that has a vertex $y$ of infinite induced detour but no infinite induced path.} \label{width-three} \end{figure} \subsection{Infinite induced paths, combs and kites} We now consider the question of the existence of infinite induced paths in incomparability graphs with infinite diameter. In order to state our main result of this subsection we need to introduce two types of graphs: comb and kite. Let us recall that a graph $G:=(V,E)$ is a \emph{caterpillar} if the graph obtained by removing from $V$ the vertices of degree one is a path (finite or not, reduced to one vertex or empty). A \emph{comb} is a caterpillar such that every vertex is adjacent to at most one vertex of degree one. Incidentally, a path on three vertices is not a comb. It should be mentioned that caterpillars are incomparability graphs of interval orders coverable by two chains (see Lemma 14 of \cite{zaguia2008}). We now give the definition of a \emph{kite}. This is a graph obtained from an infinite path $\mathrm P_\infty :=(x_i)_{i\in \NN}$ by adding a new set of vertices $Y$ (finite or infinite). We distinguish three types of kites (see Figure \ref{fig:comb-kite}) depending on how the vertices of $Y$ are adjacent to the vertices of $\mathrm P_\infty$. 
A \emph{kite of type $(1)$}: every vertex of $Y$ is adjacent to exactly two vertices of $\mathrm P_\infty$ and these two vertices are consecutive in $\mathrm P_\infty$. Furthermore, two distinct vertices of $Y$ share at most one common neighbour in $\mathrm P_\infty$. A \emph{kite of type $(2)$}: every vertex of $Y$ is adjacent to exactly three vertices of $\mathrm P_\infty$ and these three vertices must be consecutive in $\mathrm P_\infty$. Furthermore, for all distinct $x,x'\in Y$, if $x$ is adjacent to $x_i,x_{i+1},x_{i+2}$ and $x'$ is adjacent to $x_{i'},x_{i'+1},x_{i'+2}$ then $i+2\leq i'$ or $i'+2\leq i$. A \emph{kite of type $(3)$}: every vertex of $Y$ is adjacent to exactly two vertices of $\mathrm P_\infty$ and these two vertices must be at distance two in $\mathrm P_\infty$. Furthermore, for all distinct $x,x'\in Y$, if $x$ is adjacent to $x_i$ and $x_{i+2}$ and $x'$ is adjacent to $x_{i'}$ and $x_{i'+2}$ then $i+2\leq i'$ or $i'+2\leq i$. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize=3.5in \epsfbox{comb-kite.eps} \end{center} \caption{A comb and the three types of kites.} \label{fig:comb-kite} \end{figure} \begin{theorem}\label{thm:infinitepath-kite}Let $G$ be a connected incomparability graph with infinite diameter. Then \begin{enumerate}[$(1)$] \item Every vertex of $G$ has an induced path of infinite diameter starting at it. \item If the set of vertices of degree at least $3$ in $G$ has infinite diameter, then $G$ contains an induced comb or an induced kite having an infinite diameter and infinitely many vertices of degree at least $3$. \end{enumerate} \end{theorem} Theorem \ref{thm:infinitepath-kite} will be proved in Section \ref{section:proof-thm:infinitepath-kite} (an important ingredient of its proof is Theorem \ref{thm:orderconvex} below). \subsection{Infinite isometric paths in incomparability graphs} A basic result about the existence of an infinite isometric path in a graph is K\"onig's lemma \cite{konig}.
Recall that a graph is \emph{locally finite} if every vertex has a finite degree. \begin{theorem}[\cite{konig}] \label{thn:konig}Every connected, locally finite, infinite graph contains an isometric infinite path. \end{theorem} Moreover, \begin{theorem}\label{thm:polat} If a connected graph $G$ has an infinite isometric path, then every vertex has an isometric path starting at it. \end{theorem} Theorem \ref{thm:polat} was proved by Watkins in the case of locally finite graphs (see \cite{watkins}, Lemma 3.2). The general case is contained in Theorem 3.5 and Lemma 3.7 of \cite{polat}.\\ A necessary condition for a graph to have an infinite isometric path is to have infinite diameter. Note that a graph has an infinite diameter if and only if it has finite isometric paths of arbitrarily large length. The existence of such paths does not necessarily imply the existence of an infinite isometric path even if the graph is connected. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize=2in \epsfbox{width2-no-isometric.eps} \end{center} \caption{The Hasse diagram of a poset of width two whose incomparability graph is connected, has infinite diameter but no infinite isometric path.} \label{width-two} \end{figure} \begin{example}\label{thm:noisometric} There exists a poset coverable by two chains whose incomparability graph is connected, having infinite diameter and no isometric infinite path (see Figure \ref{width-two}). \end{example} We provide Example \ref{thm:noisometric} and a proof that it verifies the required properties in Section \ref{section:proof-thm:noisometric}. We obtain a positive result in the case of incomparability graphs of interval orders with no infinite antichains. 
A poset $P$ is an {\it interval order} if $P$ is isomorphic to a subset $\mathcal J$ of the set $Int(C)$ of non-empty intervals of a chain $C$, ordered as follows: if $I, J\in Int(C)$, then \begin{equation}\label{ordre-sur-intervalles} I<J \mbox{ if } x<y \mbox{ for every } x\in I \mbox{ and every } y\in J. \end{equation} Interval orders were considered in Fishburn \cite{fishburn-book,fishburn} and Wiener \cite{wiener} in relation to the theory of measurement. \begin{theorem}\label{thm:intervalorder-isometric}If $P$ is an interval order with no infinite antichains such that $\inc(P)$ is connected and has infinite diameter, then $\inc(P)$ has an infinite isometric path. \end{theorem} The proof of Theorem \ref{thm:intervalorder-isometric} will be provided in Section \ref{section:intervalorders}. The conclusion of Theorem \ref{thm:intervalorder-isometric} becomes false if the condition ``no infinite antichains'' is removed. Indeed, \begin{example}\label{thm:intervalorder-non-isometric} There exists an interval order whose incomparability graph is connected, has an infinite diameter and no infinite isometric path. \end{example} Example \ref{thm:intervalorder-non-isometric} and a proof that it verifies the required properties will be provided in Section~\ref{section:intervalorders}. \subsection{Convexity and isometry of metric balls in incomparability graphs} In this subsection we compare the notions of order convexity and metric convexity with respect to the distance on the incomparability graph of a poset. Before stating our result we need a few definitions. An \emph{initial segment} of a poset $P:=(V,\leq)$ is any subset $I$ of $V$ such that $x\in V$, $y\in I$ and $x\leq y$ imply $x\in I$. If $X$ is a subset of $V$, the set $\downarrow X:=\{y\in V: y\leq x \; \text{for some}\; x\in X\}$ is the least initial segment containing $X$; we say that it is \emph{generated} by $X$.
If $X$ is a one-element set, say $X=\{x\}$, we denote by $\downarrow x$, instead of $\downarrow X$, this initial segment and say that it is \emph{principal}. \emph{Final segments} are defined similarly. Let $P:=(V,\leq)$ be a poset. A subset $X$ of $V$ is \emph{order convex} or \emph{convex} if for all $x,y\in X$, $[x,y]:=\{z : x\leq z\leq y\}\subseteq X$. For instance, initial and final segments of $P$ are convex. Note that any intersection of convex sets is also convex. In particular, the intersection of all convex sets containing $X$, denoted $Conv_P(X)$, is convex. This is the smallest convex set containing $X$. Note that \[Conv_P(X)=\{z\in V : x\leq z\leq y \mbox{ for some } x,y\in X\}=\downarrow X\cap \uparrow X.\] Let $G:=(V,E)$ be a graph. We equip it with the graphic distance $d_G$. A \emph{ball} is any subset $B_G(x, r):= \{y\in V: d_G(x,y)\leq r\}$ where $x\in V, r\in \NN$. A subset of $V$ is \emph{convex} w.r.t. the distance $d_G$ if it is an intersection of balls. The \emph{least convex subset} of $G$ containing $X$ is \[Conv_{G}(X):=\displaystyle \bigcap_{X\subseteq B_G(x,r)}B_G(x,r).\] Let $X\subseteq V$ and $r\in \NN$. Define \[B_G(X,r):=\{v\in V : d_G(v,x)\leq r \mbox{ for some } x\in X\}.\] With all needed definitions in hand we are now ready to state the following theorem. \begin{theorem} \label{thm:orderconvex}Let $P:=(V,\leq)$ be a poset, $G$ be its incomparability graph, $X\subseteq V$ and $r\in \NN$. \begin{enumerate}[$(a)$] \item If $X$ is an initial segment, respectively a final segment, respectively an order convex subset of $P$ then $B_G(X,r)$ is an initial segment, respectively a final segment, respectively an order convex subset of $P$. In particular, for all $x\in V$ and $r\in \NN$, $B_G(x,r)$ is order convex; \item \label{lem:convex-connected} If $X$ is order convex then the graph induced by $G$ on $B_G(X,r)$ is an isometric subgraph of $G$.
In particular, if $X$ is included in a connected component of $G$ then the graph induced by $G$ on $B_G(X,r)$ is connected. \end{enumerate} \end{theorem} It follows from Theorem \ref{thm:orderconvex} that \emph{every ball in an incomparability graph $G$ of a poset is order convex and that the graph induced on it is an isometric subgraph of $G$}. The proof of Theorem \ref{thm:orderconvex} is provided in Section \ref{section:proof-thm-orderconvex}. \subsection{An application of Theorem \ref{thm:infinitepath-kite} in the theory of well quasi order}\label{sec:application} The purpose of this subsection is to provide an application of Theorem \ref{thm:infinitepath-kite} in the theory of well quasi order. Let us first recall some notions from the Theory of Relations \cite{fraissetr}. A graph $G$ is \emph{embeddable} in a graph $G'$ if $G$ is isomorphic to an induced subgraph of $G'$. The embeddability relation is a quasi order on the class of graphs. A class $\mathcal C$ of graphs, finite or not, is \emph{hereditary} if it contains every graph which embeds in some member of $\mathcal C$. The \emph{age} of a graph $G$ is the collection of finite graphs, considered up to isomorphy, that embed in $G$ (or alternatively, that are isomorphic to some induced subgraph of $G$). We recall that an age of finite graphs, and more generally a class of finite graphs, is \emph{well quasi ordered} (w.q.o. for short) if it contains no infinite antichain, that is, an infinite set of graphs pairwise incomparable with respect to embeddability. There are several results about w.q.o. hereditary classes of graphs, see for example \cite{korpelainen-lozin-razgon,korpelainen-lozin}, \cite{lozin-mayhill} and \cite{oudrar}. We recall that a graph $G:= (V, E)$ is a \emph{permutation graph} if there is a linear order $\leq $ on $V$ and a permutation $\sigma$ of $V$ such that the edges of $G$ are the pairs $\{x, y\}\in [V]^2$ which are reversed by $\sigma$.
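The definition of a permutation graph can be checked on a small example. The following sketch is illustrative only (it assumes $V=\{0,\dots,n-1\}$ carries its natural order and that $\sigma$ is given in one-line notation): the edges of the permutation graph are exactly the inversions of $\sigma$.

```python
from itertools import combinations

def permutation_graph(sigma):
    """Edges are the pairs {i, j}, i < j, reversed by sigma (its inversions)."""
    n = len(sigma)
    return {frozenset({i, j}) for i, j in combinations(range(n), 2)
            if sigma[i] > sigma[j]}

# The permutation 2,0,3,1 of {0,1,2,3} has inversions {0,1}, {0,3}, {2,3}.
E = permutation_graph([2, 0, 3, 1])
```

In this toy case the resulting graph is the path $1$--$0$--$3$--$2$ on four vertices, in line with the fact that finite paths are bipartite permutation graphs.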
The study of permutation graphs became an important topic due to the Stanley--Wilf Conjecture, formulated independently by Richard P. Stanley and Herbert Wilf in the late 1980s, and solved positively by Marcus and Tardos \cite{marcus-tardos} in 2004. It was proved by Lozin and Mayhill \cite{lozin-mayhill} in 2011 that a hereditary class of finite bipartite permutation graphs is w.q.o. by embeddability if and only if there is a bound on the length of the double-ended forks (see Figure \ref{fig:doublefork}) it may contain (for an alternative proof see \cite{pouzet-zaguia-wqo20}). In \cite{pouzet-zaguia-wqo20}, we extend results of Lozin and Mayhill \cite{lozin-mayhill} and present an almost exhaustive list of properties of w.q.o. ages of bipartite permutation graphs. One of our results is a positive answer, in the case of an age of bipartite permutation graphs, to a long-standing unsolved question by the first author, of whether the following equivalence is true in general: an age is not w.q.o. if and only if it contains $2^{\aleph_0}$ subages (see subsection I-4 Introduction \`a la comparaison des \^ages, page 67, \cite{pouzet-israel}). This result, Theorem \ref{thm:3} below, is a consequence of (2) of Theorem \ref{thm:infinitepath-kite}. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize=3in \epsfbox{double-fork.eps} \end{center} \caption{Double-ended forks: an antichain of finite graphs with respect to embeddability.} \label{fig:doublefork} \end{figure} \begin{theorem}[\cite{pouzet-zaguia-wqo20}] \label{thm:3}Let $\mathcal{C}$ be an age that consists of finite bipartite permutation graphs. Then $\mathcal{C}$ is not w.q.o. if and only if it contains the age of a direct sum $\bigoplus_{i\in I} \mathrm{DF}_i$ of double-ended forks of arbitrarily large length for some infinite subset $I$ of $\NN$. In particular, if $\mathcal{C}$ is not w.q.o., it contains $2^{\aleph_0}$ subages which are not w.q.o. \end{theorem} A proof is given in \cite{pouzet-zaguia-wqo20}.
For completeness we provide the proof here. \begin{proof} The set of double-ended forks forms an infinite antichain, hence if $\mathcal C$ contains the direct sum $\bigoplus_{i\in I} \mathrm{DF}_i$ of double-ended forks of arbitrarily large length for some infinite subset $I$ of $\NN$, it is not w.q.o. Conversely, suppose $\mathcal C$ is not w.q.o. Then it embeds double-ended forks of unbounded length. This important result is due to Lozin and Mayhill (see Theorem 7 in \cite{lozin-mayhill}). Let $G$ be a graph with $\age(G)= \mathcal C$. We consider two cases:\\ $(1)$ Some connected component of $G$, say $G_i$, embeds double-ended forks of unbounded length. In this case, the detour of $G_i$, that is, the supremum of the lengths of induced paths in $G_i$, is unbounded. Since $G_i$ is the incomparability graph of a poset of width at most two, its diameter is unbounded (see Corollary \ref{cor:detour}). In fact, since the vertices of degree $3$ in the forks are end vertices of induced paths, the diameter of the set of vertices of degree $3$ in $G_i$ is unbounded. Thus from $(2)$ of Theorem \ref{thm:infinitepath-kite}, $G_i$ embeds an induced comb or an induced kite with infinitely many vertices of degree at least $3$. Since $G$ is bipartite, it can only embed a kite of type $(3)$. As is easy to see, this comb or kite embeds a direct sum $\bigoplus_{i\in I} \mathrm {DF}_i$ of double-ended forks of arbitrarily large length, as required. \\ $(2)$ If the first case does not hold, there are infinitely many connected components $G_i$, each embedding some double-ended fork $\mathrm {DF}_i$, and the length of these double-ended forks is unbounded. This completes the proof of Theorem \ref{thm:3}. \end{proof} The paper is organised as follows. In Section \ref{section:prequisite} we present some prerequisites on graphs and posets. In Section \ref{section:fund-lemma} we state a fundamental lemma on paths in incomparability graphs and some consequences.
In Section \ref{posetswidth2} we present a few metric properties of posets of width $2$. In Section \ref{proof:thm:widthtwo} we present the proof of Theorem \ref{thm:widthtwo}. In Section \ref{proof:thm:infi-detour-no-path} we present Example \ref{thm:infi-detour-no-path}. In Section \ref{section:convexity} we present various metric properties of incomparability graphs. In Section \ref{section:proof-thm-orderconvex} we present a proof of Theorem \ref{thm:orderconvex} and some consequences. In Section \ref{section:proof-thm:infinitepath-kite} we give a proof of Theorem \ref{thm:infinitepath-kite} (an important ingredient of the proof is Theorem \ref{thm:orderconvex}). In Section \ref{section:proof-thm:noisometric} we present Example \ref{thm:noisometric}. Finally, a proof of Theorem \ref{thm:intervalorder-isometric} and Example \ref{thm:intervalorder-non-isometric} are provided in Section \ref{section:intervalorders}. \section{Graphs and Posets}\label{section:prequisite} \subsection{Posets}Throughout, $P :=(V, \leq)$ denotes an ordered set (poset). The \emph{dual} of $P$, denoted $P^{*}$, is the order defined on $V$ as follows: if $x,y\in V$, then $x\leq y$ in $P^{*}$ if and only if $y\leq x$ in $P$. Let $P :=(V, \leq)$ be a poset. We recall that two elements $x,y\in V$ are \emph{comparable} if $x\leq y$ or $y\leq x$; otherwise, we say they are \emph{incomparable}, denoted $x\parallel y$. A set of pairwise comparable elements is called a \emph{chain}. On the other hand, a set of pairwise incomparable elements is called an \emph{antichain}. The \emph{width} of a poset is the maximum cardinality of its antichains (if the maximum does not exist, the width is set to be infinite). Dilworth's celebrated theorem on finite posets \cite{dilworth} states that the maximum cardinality of an antichain in a finite poset equals the minimum number of chains needed to cover the poset. This result remains true even if the poset is infinite but has finite width.
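Width and Dilworth's theorem can be illustrated on a toy example. The following brute-force sketch is purely hypothetical (the four-element poset, a diamond with two incomparable middle elements, and all names are chosen for illustration): it computes the width and exhibits a cover by that many chains.

```python
from itertools import combinations

# Hypothetical 4-element poset: bot < a, b < top, with a and b incomparable.
V = ['bot', 'a', 'b', 'top']
le = {(x, x) for x in V} | {('bot', 'a'), ('bot', 'b'),
                            ('bot', 'top'), ('a', 'top'), ('b', 'top')}

def width(V, le):
    """Maximum cardinality of an antichain, by brute-force enumeration."""
    incomp = lambda x, y: (x, y) not in le and (y, x) not in le
    return max(len(A) for k in range(1, len(V) + 1)
               for A in combinations(V, k)
               if all(incomp(x, y) for x, y in combinations(A, 2)))

# Dilworth: the poset is covered by width-many chains.
chains = [['bot', 'a', 'top'], ['b']]
```

Here the largest antichain is $\{a,b\}$, so the width is $2$, and the two chains exhibited cover the poset, as Dilworth's theorem predicts.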
If the poset $P$ has width $2$ and the incomparability graph of $P$ is connected, the partition of $P$ into two chains is unique (picking any vertex $x$, observe that the set of vertices at odd distance from $x$ and the set of vertices at even distance from $x$ form a partition into two chains). According to Szpilrajn \cite{szp}, every order on a set has a linear extension. Let $P:=(V,\leq)$ be a poset. A \emph{realizer} of $P$ is a family $\mathcal{L}$ of linear extensions of the order of $P$ whose intersection is the order of $P$. Observe that the set of all linear extensions of $P$ is a realizer of $P$. The \emph{dimension} of $P$, denoted $dim(P)$, is the least cardinal $d$ for which there exists a realizer of cardinality $d$ \cite{dushnik-miller}. It follows from the Compactness Theorem of First Order Logic that an order is the intersection of at most $n$ linear orders ($n\in \NN$) if and only if every finite restriction of the order has this property. Hence the class of posets with dimension at most $n$ is determined by a set of finite obstructions; each obstruction is a poset $Q$ of dimension $n+1$ such that the deletion of any element of $Q$ leaves a poset of dimension $n$; such a poset is said to be \emph{critical}. For $n\geq 2$ there are infinitely many critical posets of dimension $n+1$. For $n=2$ they have been described by Kelly \cite{kelly77}; beyond, the task is considered hopeless. \subsubsection{Comparability and incomparability graphs, permutation graphs} A graph $G:= (V, E)$ is a \emph{comparability graph} if its edge set is the set of comparabilities of some order on $V$. From the Compactness Theorem of First Order Logic, it follows that a graph is a comparability graph if and only if every finite induced subgraph is a comparability graph. Hence, the class of comparability graphs is determined by a set of finite obstructions. The complete list of minimal obstructions was determined by Gallai \cite{gallai}.
A graph $G:= (V, E)$ is a \emph{permutation graph} if there is a linear order $\leq $ on $V$ and a permutation $\sigma$ of $V$ such that the edges of $G$ are the pairs $\{x, y\}\in [V]^2$ which are reversed by $\sigma$. Denoting by $\leq_{\sigma}$ the set of oriented pairs $(x, y)$ such that $\sigma(x) \leq \sigma (y)$, the graph is the comparability graph of the poset whose order is the intersection of $\leq$ and the opposite of $\leq_{\sigma}$. Hence, a permutation graph is the comparability graph of an order that is the intersection of two linear orders, that is, the comparability graph of an order of dimension at most two \cite{dushnik-miller}. If the graph is finite, the converse holds. Hence, as is well known, a finite graph $G$ is a permutation graph if and only if $G$ and $G^c$ are comparability graphs \cite{dushnik-miller}; in particular, a finite graph is a permutation graph if and only if its complement is a permutation graph. Via the Compactness Theorem of First Order Logic, an infinite graph is the comparability graph of an order that is the intersection of two linear orders if and only if each finite induced subgraph is a permutation graph (these graphs are sometimes called permutation graphs, even though no actual permutation need be involved). For more about permutation graphs, see \cite{klazar}. \subsubsection{Lexicographical sum}Let $I$ be a poset such that $|I|\geq 2$ and let $\{P_{i}:=(V_i,\leq_i)\}_{i\in I}$ be a family of pairwise disjoint nonempty posets that are all disjoint from $I$. The \emph{lexicographical sum} $\displaystyle \sum_{i\in I} P_{i}$ is the poset defined on $\displaystyle \bigcup_{i\in I} V_{i}$ by $x\leq y$ if and only if \begin{enumerate}[(a)] \item There exists $i\in I$ such that $x,y\in V_{i}$ and $x\leq_i y$ in $P_{i}$; or \item There are distinct elements $i,j\in I$ such that $i<j$ in $I$, $x\in V_{i}$ and $y\in V_{j}$.
\end{enumerate} The posets $P_{i}$ are called the \emph{components} of the lexicographical sum and the poset $I$ is the \emph{index set}. If $I$ is a totally ordered set, then $\displaystyle \sum_{i\in I} P_{i}$ is called a \emph{linear sum}. On the other hand, if $I$ is an antichain, then $\displaystyle \sum_{i\in I} P_{i}$ is called a \emph{direct sum}. Henceforth we will use the symbol $\oplus$ to indicate a direct sum. The decomposition of the incomparability graph of a poset into connected components is expressed in the following lemma, which belongs to the folklore of the theory of ordered sets. \begin{lemma}\label{lem:folklore} If $P:= (V, \leq)$ is a poset, the order on $P$ induces a total order on the set $Connect(P)$ of connected components of $\inc(P)$, the incomparability graph of $P$, and $P$ is the lexicographical sum of these components indexed by the chain $Connect(P)$. In particular, if $\preceq$ is a total order extending the order $\leq$ of $P$, each connected component $A$ of $\inc(P)$ is an interval of the chain $(V, \preceq)$. \end{lemma} The next two sections introduce the necessary ingredients for the proof of Theorem \ref{thm:widthtwo}. \section{A fundamental lemma}\label{section:fund-lemma} We state an improvement of Lemme I.2.2, p.~5 of \cite{pouzet78}. \begin{lemma}\label{lem:inducedpath} Let $x,y$ be two vertices of a poset $P$ with $x<y$. If $x_0, \dots, x_n$ is an induced path in the incomparability graph of $P$ from $x$ to $y$ then $x_i< x_j$ for all $j-i\geq 2$. \end{lemma} \begin{proof} We proceed by induction on $n$. If $n\leq 2$ the property holds trivially. Suppose $n\geq 3$. Taking out $x_0$, induction applies to $x_1, \dots, x_n$. Similarly, taking out $x_n$, induction applies to $x_0, \dots, x_{n-1}$. Since the path from $x_0$ to $x_n$ is induced, $x_0$ is comparable to every $x_j$ with $j\geq 2$ and $x_n$ is comparable to every $x_j$ with $j<n-1$. In particular, since $n\geq 3$, $x_0$ is comparable to $x_{n-1}$. Necessarily, $x_0< x_{n-1}$.
Otherwise, $x_{n-1}<x_0$ and then by transitivity $x_{n-1}<x_{n}$, which is impossible since $\{x_{n-1}, x_{n}\}$ is an edge of the incomparability graph. Thus, we may apply induction to the path $x_0, \dots, x_{n-1}$ and get $x_0<x_j$ for every $j\geq 2$. Similarly, we get $x_1<x_n$ and, via the induction applied to the path from $x_1$ to $x_n$, $x_j< x_n$ for $j<n-1$. The stated result follows. \end{proof} An immediate corollary is the following. \begin{corollary}\label{cor:cover-distance}Let $P$ be a poset such that $\inc(P)$ is connected and let $a<b$. If $(a,b)$ is a covering relation in $P$, then $2\leq d_{\inc(P)}(a,b)\leq 3$. \end{corollary} Another consequence of Lemma \ref{lem:inducedpath} is that incomparability graphs have no induced cycles of length at least five \cite{gallai}. Indeed, let $P$ be a poset and let $x_0,\dots,x_l, x_0$ be an induced cycle of $\inc(P)$. Suppose for a contradiction that $l\geq 4$. We will apply Lemma \ref{lem:inducedpath} successively to the induced paths $x_0,\dots, x_{l-1}$ and $x_1,\dots, x_{l}$ and derive a contradiction. We may assume without loss of generality that $x_0<x_{l-1}$. It follows from Lemma \ref{lem:inducedpath} applied to $x=x_0$ and $y=x_{l-1}$ that $x_0<x_{l-2}$ (recall that $l\geq 4$) and $x_1<x_{l-1}$. We now consider the induced path $x_1,\dots, x_{l}$. Then $x_1$ and $x_l$ are comparable. It follows from $x_1<x_{l-1}$ and Lemma \ref{lem:inducedpath} applied to $x=x_1$ and $y=x_{l}$ that $x_1<x_l$. Hence, $x_{l-2}<x_l$. By transitivity we get $x_0<x_l$, which is impossible. Here is yet another consequence of Lemma \ref{lem:inducedpath}. \begin{proposition}Let $P:=(V,\leq)$ be a poset. A sequence $a_0,\dots,a_n,\dots$ of vertices of $V$ forms an induced path in $\inc(P)$ originating at $a_0$ if and only if for all $i\in \NN$, $a_i,a_{i+1},a_{i+2},a_{i+3}$ is an induced path of $\inc(P)$ with extremities $a_i,a_{i+3}$. \end{proposition} \begin{proof}$\Rightarrow$ Obvious.
\\ $\Leftarrow$ Suppose that for all $i\in \NN$, $a_i,a_{i+1},a_{i+2},a_{i+3}$ is an induced path with extremities $a_i,a_{i+3}$. We prove by induction that for all $n\in \NN$, $a_0,...,a_n$ is an induced path in $\inc(P)$. Suppose $a_0,...,a_n$ is an induced path in $\inc(P)$ and assume without loss of generality that $a_0<a_n$. Then $a_i<a_{n}$ for all $i\leq n-2$ (follows from Lemma \ref{lem:inducedpath}). Since $a_{n-2},a_{n-1},a_{n},a_{n+1}$ is an induced path with extremities $a_{n-2},a_{n+1}$ and $a_{n-2}<a_{n}$, we deduce that $a_{n-2}<a_{n+1}$ and $a_{n-1}<a_{n+1}$. Therefore, $a_i<a_{n+1}$ for all $i\leq n-1$, proving that $a_0,...,a_n,a_{n+1}$ is an induced path in $\inc(P)$. \end{proof} We should mention that the value $3$ in the previous proposition is best possible. Indeed, if $P$ is the direct sum of two copies of the chain of natural numbers, then $\inc(P)$ is a complete bipartite graph and every path on $3$ vertices is an induced path. Yet an infinite sequence of vertices that alternates between the copies of $\NN$ does not constitute an infinite induced path of $\inc(P)$. \section{Posets of width $2$ and their distances}\label{posetswidth2} \subsection{Posets of width $2$ and bipartite permutation graphs} In this subsection we recall some properties of posets of width at most $2$ and permutation graphs. We start with a characterization of bipartite permutation graphs; next, we give some properties of the graphic distance and the detour in comparability graphs of posets of width at most $2$. We recall the existence of a universal poset of width at most $2$ \cite{pouzet78}. We describe the incomparability graph of a variant of this poset more appropriate for our purpose. \begin{figure}[h!]
\begin{center} \leavevmode \epsfxsize=2in \epsfbox{CriticalPosetsDim3.eps} \end{center} \caption{Critical posets of dimension $3$ and height $2$.} \label{fig:critique3} \end{figure} We note that a poset $P$ of width at most $2$ has dimension at most $2$, hence its comparability graph is an incomparability graph. As previously mentioned, a finite graph $G$ is a comparability and incomparability graph if and only if it is a permutation graph. Incomparability graphs of finite posets of width at most $2$ coincide with bipartite permutation graphs. For arbitrary posets, the characterization is as follows. \begin{lemma}\label{lem:w2}Let $G$ be a graph. The following are equivalent. \begin{enumerate}[(i)] \item $G$ is bipartite and is the comparability graph of a poset of dimension at most two; \item $G$ is bipartite and embeds no even cycles of length at least six and none of the comparability graphs of the posets depicted in Figure $\ref{fig:critique3}$; \item $G$ is the incomparability graph of a poset of width at most $2$; \item $G$ is a bipartite incomparability graph. \end{enumerate} \end{lemma} \begin{proof} $(i)\Leftrightarrow (ii)$. If $G$ is finite, this is Theorem 1 of \cite{moore-trotter}. Hence, the equivalence between $(i)$ and $(ii)$ holds for the restrictions of $G$ to every finite set $F$ of vertices. This immediately gives the implication $(i)\Rightarrow (ii)$. For the converse implication, we get that every finite induced subgraph of $G$ is bipartite and the comparability graph of a poset of dimension at most two. The Compactness Theorem of First Order Logic implies that these properties extend to $G$.\\ $(iii)\Rightarrow (i)$. Suppose $G$ is the incomparability graph of a poset of width at most $2$. Then $G$ has no $3$-element cycles. Also, $G$ has no induced odd cycles of length at least five (see \cite{gallai}, Section 3.8, Table 5). This shows that $G$ is bipartite.
Since $P$ is coverable by two chains, it has order dimension at most two (the dimension of a poset is at most its width \cite{dilworth}) and therefore its incomparability graph is also a comparability graph \cite{dushnik-miller}. Thus $G$ is a comparability graph of a poset of dimension at most two.\\ $(i)\Leftrightarrow (iv)$. Follows from the fact that a graph $G$ is the incomparability graph of a poset of dimension at most $2$ if and only if it is the comparability graph of a poset of dimension at most $2$ \cite{dushnik-miller}. $(iii) \Leftrightarrow (iv)$. Implication $(iii) \Rightarrow (iv)$ is trivial: a partition of a poset into two chains yields a bipartition of its incomparability graph. For the converse, suppose that $G$ is a bipartite incomparability graph of a poset $P$. Apply Dilworth's theorem \cite{dilworth} or, in each connected component of $G$, pick any vertex $x$ and observe that the set of vertices at odd distance from $x$ and the set of vertices at even distance from $x$ form a partition of that component into two chains; since $P$ is the linear sum of these components (Lemma \ref{lem:folklore}), $P$ has width at most $2$. \end{proof} We should mention the following result (this is essentially Lemma 14 from \cite{zaguia2008}), which states that the connected components of a bipartite permutation graph without cycles are caterpillars. A key observation is that if a vertex has at least three neighboring vertices in $\inc(P)$, then at least one has degree one. Otherwise, $\inc(P)$ would have a spider (see Figure \ref{fig:critique3}) as an induced subgraph, which is impossible. \begin{lemma}\label{nospider}Let $P$ be a poset coverable by two chains. Then the following properties are equivalent. \begin{enumerate}[(i)] \item The incomparability graph of $P$ has no cycles of length three or four. \item The incomparability graph of $P$ has no cycle. \item The connected components of the incomparability graph of $P$ are caterpillars. \end{enumerate} \end{lemma} \subsection{Detour of bipartite permutation graphs}\label{subsection:detour-perm-bip} We are going to evaluate the detour of connected components of the incomparability graph of a poset of width at most $2$.
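To fix ideas, here is a small example (ours, included only for illustration; it is not taken from \cite{pouzet78}). Consider the four-element poset $Q$ of width $2$ given by the two chains $a_0<a_1$ and $b_0<b_1$, together with the single extra comparability $a_0<b_1$:

```latex
% The poset Q: chains a_0 < a_1 and b_0 < b_1, plus the relation a_0 < b_1.
% Its incomparable pairs are {a_0,b_0}, {b_0,a_1} and {a_1,b_1}, so
\[
\inc(Q)\colon\quad a_0 \;\text{--}\; b_0 \;\text{--}\; a_1 \;\text{--}\; b_1,
\]
% an induced path of length 3. The pair (a_0,b_1) is a covering relation of Q
% and d_{inc(Q)}(a_0,b_1) = 3, attaining the upper bound of
% Corollary \ref{cor:cover-distance}.
```

Here $\inc(Q)$ has no cycle and its unique component is a path, hence a caterpillar, in accordance with Lemma \ref{nospider}.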
Let $P:=(V,\leq)$ be a poset of width $2$. Suppose that $\inc( P)$ is connected. In this case, the partition of $P$ into two chains is unique. An \emph{alternating sequence} in $P$ is any finite monotonic sequence $(x_0, \dots, x_i, \dots, x_n)$ of elements of $V$ (i.e., increasing or decreasing) such that no two consecutive elements $x_i$ and $x_{i+1}$ belong to the same chain of the partition. The integer $n$ is the \emph{oscillation} of the sequence; $x_0$ and $x_n$ are its \emph{extremities}. We recall that the oscillation of an alternating sequence with extremities $x$, $y$ is either $0$ or at most $d_{\inc( P)}(x,y)$ (see I.2.4. Lemme p.6 of \cite{pouzet78}). This allows us to define the following map. Let $d_P$ be the map from $V\times V$ into $\NN$ defined as follows. \begin{enumerate} \item $d_P(x,x)= 0$ for every $x\in V$; \item $d_P(x,y)= 1$ if $x$ and $y$ are incomparable; \item $d_P(x,y)=2$ if $x$ and $y$ are comparable and there is no alternating sequence from $x$ to $y$; \item $d_P(x,y)=n+2$ if $n\not =0$ and $n$ is the maximum oscillation of an alternating sequence with extremities $x$ and $y$. \end{enumerate} We recall a result of \cite{pouzet78} II.2.5 Lemme, p. 6. \begin{lemma}\label{lem:oscillation-distance}The map $d_P$ is a distance on any poset $P$ of width $2$ whose incomparability graph is connected. Moreover, for every $x,y\in P$ the following inequalities hold: \begin{equation} 0\leq d_{\inc( P)}(x,y)-d_P(x,y) \leq 2\lfloor d_{\inc( P)}(x,y)/3\rfloor. \end{equation} \end{lemma} We give a slight improvement of \cite{pouzet78} I.2.3. Corollaire, p. 5. \begin{lemma}\label{lem:oscillation2} Let $P$ be a poset of width $2$ such that $\inc( P)$ is connected. Let $n\in \NN$, $r\in \{0,1,2\}$ and $x, y\in P$ such that $\inc( P)$ contains an induced path of length $3n+r$ with extremities $x$ and $y$. If $r\not =1$ and $n\geq 1$ (resp. $r=1$ and $n\geq 2$) then there is an alternating sequence with extremities $x$,$y$ and oscillation $n$ (resp. $n-1$).
\end{lemma} \begin{proof}Since $n\geq 1$, $x$ and $y$ are comparable and we may suppose $x<y$. Let $x_0, \dots, x_{3n+r}$ be an induced path with $x_0=x$, $x_{3n+r}=y$. According to Lemma \ref{lem:inducedpath} the sequence $x_0, x_3, \dots, x_{3i}, \dots, x_{3n}$ is alternating. If $r\not =1$, we may replace $x_{3n}$ by $x_{3n+r}$ in the above sequence and get an alternating sequence with extremities $x$,$y$ and oscillation $n$. If $r=1$, we delete $x_{3n}$ and replace $x_{3(n-1)}$ by $x_{3n+r}$ in the above sequence. We get an alternating sequence of oscillation $n-1$. \end{proof} From Lemma \ref{lem:oscillation-distance}, the oscillation between two vertices $x$ and $y$ of $P$ is bounded above. With Lemma \ref{lem:oscillation2}, the length of induced paths between $x$ and $y$ is bounded too; that is, the detour $D_{\inc( P)} (x,y)$ is an integer. In fact we have: \begin{proposition}\label{prop:oscillation} Let $P$ be a poset of width $2$ such that $\inc( P)$ is connected and let $x,y\in P$. Then: \begin{enumerate}[$(1)$] \item $d_{\inc( P)}(x,y)=d_P(x,y)= D_{\inc( P)}(x,y)$ if either $x=y$, in which case this common value is $0$, or $x$ and $y$ are incomparable, in which case this common value is $1$. \item $d_{\inc( P)}(x,y)\geq d_P(x,y)\geq \lfloor D_{\inc( P)}(x,y)/3 \rfloor +\epsilon$ where $\epsilon=1$ if $D_{\inc( P)}(x,y)\equiv 1 \mod 3$ and $\epsilon=2$ otherwise. \end{enumerate} \end{proposition} \begin{proof} Assertion $(1)$ is obvious. For $(2)$, we may suppose $x<y$. The first inequality is embodied in Lemma \ref{lem:oscillation-distance}. As observed above, $D_{\inc( P)}(x,y)$ is bounded. We may write $D_{\inc( P)}(x,y)=3n+r$ with $r$ the remainder of $D_{\inc( P)}(x,y)$ modulo $3$. Let $\alpha:=\lfloor D_{\inc (P)}(x,y)/3 \rfloor +\epsilon$. We have $\alpha=n+1$ if $r=1$ and $\alpha=n+2$ otherwise. If $n=0$ then, since $x<y$, $r\not =1$, hence $\alpha=2$; since $d_P(x,y)\geq 2$, the inequality holds. We may suppose $n\geq 1$.
If $r\not =1$ then $\alpha= n+2$, while by definition of $d_P$ and Lemma \ref{lem:oscillation2}, $d_P(x,y) \geq n+2$. Hence, the second inequality holds. If $r=1$ then $\alpha = n+1$. If $n=1$, then $d_{P}(x,y)\geq 2=n+1$ and the second inequality holds. Suppose $n\geq 2$. Then, by definition of $d_{P}(x,y)$ and by Lemma \ref{lem:oscillation2}, $d_{P}(x,y)\geq n+1$. Thus the second inequality holds. \end{proof} \begin{corollary}\label{cor:detour} If a bipartite permutation graph has diameter at most $k$, then it contains no induced path of length $3k$. \end{corollary} \section{A proof of Theorem \ref{thm:widthtwo}}\label{proof:thm:widthtwo} \begin{proof} The implication $(i) \Rightarrow (ii)$ is obvious. The implication $(ii) \Rightarrow (iii)$ follows from Proposition \ref{prop:oscillation} given in Subsection \ref{subsection:detour-perm-bip}. The implication $(iii) \Rightarrow (iv)$ follows from Theorem \ref{thm:infinitepath-kite}. The implication $(iv) \Rightarrow (i)$ is obvious. \end{proof} \section{A proof of Example \ref{thm:infi-detour-no-path}}\label{proof:thm:infi-detour-no-path} \begin{proof}Let $X:=\{y,x_0,x_1,x_2,\dots\}$ and for every integer $i\geq 0$ let $Z_{i}:=\{z_{0,i},z_{1,i},\dots,z_{i+3,i}\}$ be disjoint sets. We set $V:=\bigcup_{i\geq 0}Z_i\cup X$ and $P:=(V,\leq)$ where $\leq $ is the binary relation on $V$ defined as follows: $X\setminus \{y\}$ is totally ordered by $\leq$ and $x_0<x_1<x_2<\dots <x_i<\dots$. For all $0\leq i<j$, every element of $Z_i$ is below every element of $Z_j$. For all $i\geq 0$, $y$ is smaller than all elements in $Z_i$ and is incomparable to $x_i$. For all $i\geq 0$, $x_i$ is smaller than all elements of $Z_i\setminus \{z_{0,i}\}$ and $x_i$ is incomparable to all elements in $\bigcup_{j<i}Z_j\cup \{z_{0,i}\}$. For all integers $i\geq 0$ and for all $j\geq i+1$, $x_i$ is smaller than all elements in $Z_j$.
Finally, the restriction of $\inc(P)$ to $Z_i$ is the induced path $z_{0,i},z_{1,i},\dots,z_{i+3,i}$ so that $z_{0,i}<z_{2,i}<z_{4,i}<\dots$ and $z_{1,i}<z_{3,i}<z_{5,i}<\dots$ (see Figure \ref{width-three}). It is not difficult to see that $\leq$ is an order relation and that the corresponding poset $P$ can be covered by three chains.\\ \textbf{Claim 1:} The diameter of $\inc(P)$ is $3$.\\ Let $a,b$ be two distinct vertices of $\inc(P)$. If $a,b\in X$, then either $a=y$ or $b=y$ in which case $d_{\inc(P)}(a,b)=1$, or $y\not \in \{a,b\}$ in which case $d_{\inc(P)}(a,b)=2$ (indeed, say $a=x_i$ and $b=x_j$ with $i<j$, then $a,z_{0,i},b$ is an induced path in $\inc(P)$). Suppose now $a\in X$ and $b\not \in X$, say $b\in Z_i$ for some $i\geq 0$. If $a=y$, then $d_{\inc(P)}(a,b)=2$ (indeed, $a,x_{i+1},b$ is an induced path in $\inc(P)$). Else if $a=x_j$ for some $j\geq 0$, then $d_{\inc(P)}(a,b)=1$ if $i<j$ and $d_{\inc(P)}(a,b)\leq 3$ otherwise (indeed, $a,z_{0,j},x_{i+1},b$ is a path joining $a$ to $b$). Next we suppose that $\{a,b\}\cap X=\varnothing$. If $a,b\in Z_i$ for some $i\geq 0$, then $d_{\inc(P)}(a,b)\leq 2$ (indeed, $a,x_{i+1},b$ is a path in $\inc(P)$). Else if $a\in Z_i$ and $b\in Z_j$ for some $i\neq j$, then $d_{\inc(P)}(a,b)=2$ (indeed, $a,x_{\max(i,j)+1},b$ is an induced path in $\inc(P)$). Note also that $d_{\inc(P)}(x_0,z_{2,0})=3$ since $x_0$ and $z_{2,0}$ are comparable and have no common neighbor in $\inc(P)$; hence the diameter of $\inc(P)$ is exactly $3$.\\ \textbf{Claim 2:} An induced infinite path in $\inc(P)$ necessarily contains only finitely many elements of $X$.\\ Suppose an induced infinite path $C$ contains infinitely many vertices from $X$. Since $\inc(P)$ induces an independent set on $X\setminus \{y\}$ and $C$ is connected, we infer that $C$ must meet infinitely many $Z_i$'s.
Hence, there exists some $x_i\in C$ which has degree at least $3$ in $C$, which is not possible.\\ \textbf{Claim 3:} Deleting all vertices of $X$ from $\inc(P)$ leaves a disconnected graph.\\ Clearly, for all $i\geq 0$, $Z_i$ is a connected component of $\inc(P)\setminus X$.\\ \noindent Now suppose for a contradiction that $\inc(P)$ embeds an infinite induced path $C$. It follows from Claim 2 that we can assume $V(C)\cap X=\varnothing$. Hence, $C$ is an induced infinite path of $\inc(P)\setminus X$. We derive a contradiction since all connected components of $\inc(P)\setminus X$ are finite (indeed, the connected components of $\inc(P)\setminus X$ are finite paths, i.e., the subgraphs of $\inc(P)\setminus X$ induced on the $Z_i$'s). \noindent \textbf{Claim 4:} The vertex $y$ has an infinite induced detour.\\ Indeed, $\inc(P)$ induces a path on $\{y,x_i\}\cup Z_i$ of length $i+5$ for all $i\geq 0$. \end{proof} \section{Order and metric convexities of incomparability graphs}\label{section:convexity} In this section we compare the notions of order convexity and metric convexity with respect to the distance on the incomparability graph of a poset. We recall a few definitions already provided in the introduction. Let $P:=(V,\leq)$ be a poset. We recall that $Conv_P(X)$ is the smallest convex set containing $X$ and that \[Conv_P(X)=\{z\in P : x\leq z\leq y \mbox{ for some } x,y\in X\}=\downarrow X\cap \uparrow X.\] Let $G:=(V,E)$ be a graph. We equip it with the graphic distance $d_G$. A \emph{ball} is any subset $B_G(x, r):= \{y\in V: d_G(x,y)\leq r\}$ where $x\in V, r\in \NN$. A subset of $V$ is \emph{convex} with respect to the distance $d_G$ if it is an intersection of balls. The \emph{least convex subset} of $G$ containing $X$ is \[Conv_{G}(X):=\displaystyle \bigcap_{X\subseteq B_G(x,r)}B_G(x,r).\] Let $X\subseteq V$ and $r\in \NN$.
Define \[B_G(X,r):=\{v\in V : d_G(v,x)\leq r \mbox{ for some } x\in X\}.\] The proof of the following lemma is elementary and is left to the reader. \begin{lemma}\label{lem:b_g}Let $G$ be a graph, $X,Y\subseteq V(G)$ and $r\in \NN$. Then \begin{enumerate}[$(1)$] \item $B_G(X,r)=B_G(B_G(X,1),r-1)= B_G(B_G(X,r-1), 1)$ for all $r\geq 1$. \item $B_G(X\cup Y,r)=B_G(X,r)\cup B_G(Y,r)$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem:convex-boule}Let $P:=(V,\leq)$ be a poset and $G$ be its incomparability graph, $X\subseteq V$ and $r\in \NN$. Then \begin{equation}\label{eq1} B_G(\downarrow X,r)=(\downarrow X)\cup B_G(X,r)=\downarrow B_G(X,r). \end{equation} \begin{equation}\label{eq2} B_G(\uparrow X,r)=(\uparrow X)\cup B_G(X,r)=\uparrow B_G(X,r). \end{equation} \begin{equation}\label{eq3} B_G(\uparrow X\cap \downarrow X,r)= B_G(\uparrow X,r)\cap B_G(\downarrow X,r). \end{equation} \begin{equation}\label{eq4} B_G(Conv_P(X),r)=Conv_P(X)\cup B_G(X,r)=Conv_P(B_G(X,r)). \end{equation} \end{lemma} \begin{proof}We first mention that all the above equalities clearly hold for $r=0$. We claim that it is enough to prove (\ref{eq1}). Indeed, (\ref{eq2}) is obtained from (\ref{eq1}) applied to the dual poset $P^{*}$. We now show how to obtain (\ref{eq3}) using (\ref{eq1}) and (\ref{eq2}). The proof is by induction on $r$. \\ Basis step: $r=1$.\\ Clearly, $B_G(\uparrow X\cap \downarrow X,1)\subseteq B_G(\uparrow X,1)\cap B_G(\downarrow X,1).$ Let $x\in B_G(\uparrow X,1)\cap B_G(\downarrow X,1)$. There are $y_1\in \downarrow X$ and $y_2\in \uparrow X$ such that $x$ is equal to $y_1$ or incomparable to $y_1$ and similarly $x$ is equal to $y_2$ or incomparable to $y_2$. Since $y_1\in \downarrow X$ and $y_2\in \uparrow X$ there are $x_1, x_2\in X$ such that $y_1\leq x_1$ and $x_2\leq y_2$. If $x$ is incomparable or equal to $x_1$ or to $x_2$, then $x\in B_G(X, 1)\subseteq B_G(\uparrow X\cap \downarrow X,1)$ as required.
If not, $x_2\leq x\leq x_1$ (since $x$ is equal to $y_1$ or incomparable to $y_1$ and $x$ is equal to $y_2$ or incomparable to $y_2$), hence $x\in \downarrow X\cap \uparrow X\subseteq B_G(\downarrow X\cap \uparrow X, 1)$, as required. Inductive step: Suppose $r>1$. We have \begin{eqnarray*} B_G(\uparrow X\cap \downarrow X,r) &=& B_G(B_G(\uparrow X\cap \downarrow X,r-1),1)\\ &=& B_G(B_G(\uparrow X,r-1)\cap B_G(\downarrow X,r-1),1)\; \mbox{(by the induction hypothesis)}\\ &=& B_G(\uparrow B_G(X,r-1)\cap \downarrow B_G(X,r-1),1)\; \mbox{(by equations (\ref{eq1}) and (\ref{eq2}))}\\ &=& B_G(\uparrow B_G(X,r-1),1)\cap B_G(\downarrow B_G(X,r-1),1)\; \mbox{(follows from the basis step $r=1$)}\\ &=& \uparrow B_G( B_G(X,r-1),1)\cap \downarrow B_G( B_G(X,r-1),1)\; \mbox{(follows from (\ref{eq1}) and (\ref{eq2}))}\\ &=& \uparrow B_G(X,r)\cap \downarrow B_G(X,r)\\ &=& B_G(\uparrow X,r)\cap B_G(\downarrow X,r)\; \mbox{(follows from (\ref{eq1}) and (\ref{eq2}))}. \end{eqnarray*} We now show how to obtain (\ref{eq4}) using (\ref{eq1}), (\ref{eq2}) and (\ref{eq3}).\\ From (\ref{eq1}) and (\ref{eq2}) we obtain \[B_G(\downarrow X,r)\cap B_G(\uparrow X,r) =((\downarrow X)\cup B_G(X,r))\cap((\uparrow X)\cup B_G(X,r))=\downarrow(B_G(X,r))\cap \uparrow(B_G(X,r)).\] This is equivalent to \[B_G(\downarrow X,r)\cap B_G(\uparrow X,r) =(\downarrow X\cap \uparrow X)\cup B_G(X,r)=\downarrow(B_G(X,r))\cap \uparrow(B_G(X,r)).\] Using (\ref{eq3}) we have \[B_G(\downarrow X\cap \uparrow X,r) =(\downarrow X\cap \uparrow X)\cup B_G(X,r)=\downarrow(B_G(X,r))\cap \uparrow(B_G(X,r)).\] The required equalities follow by definition of the operator $Conv_P$. We now prove (\ref{eq1}).\\ Basis step: $r=1$.\\ Since $X\subseteq \downarrow X$ we have $B_G(X,1)\subseteq B_G(\downarrow X,1)$. Hence, we have $B_G(\downarrow X,1)\supseteq (\downarrow X)\cup B_G(X, 1)$. From $X\subseteq B_G(X,1)$ we deduce that $\downarrow X\subseteq \downarrow(B_G(X, 1))$. Hence, $(\downarrow X)\cup B_G(X,1)\subseteq \downarrow B_G(X,1)$.
Next, we prove that $B_G(\downarrow X,1)\subseteq (\downarrow X)\cup B_G(X,1)$. Let $x\in B_G(\downarrow X,1)$. There exists then $y\in \downarrow X$ at distance at most $1$ from $x$, that is, either $y=x$ or $y\parallel x$. If $y= x$ then $x\in \downarrow X$. Otherwise, since $y\in \downarrow X$ there is $y_1\in X$ such that $y\leq y_1$. If $y_1$ is incomparable or equal to $x$ then $x\in B_G(X,1)$. Otherwise $y_1$ is comparable to $x$. Necessarily, $x \leq y_1$ since $x\parallel y$. Hence $x\in \downarrow X$.\\ Inductive step: Let $r>1$. We suppose true the equalities \[B_G(\downarrow X,r-1)=(\downarrow X)\cup B_G(X,r-1)=\downarrow B_G(X,r-1).\] We apply the operator $T\longmapsto B_G(T,1)$ to each term of the previous equalities and obtain \[B_G(B_G(\downarrow X,r-1),1)=B_G((\downarrow X)\cup B_G(X,r-1),1)=B_G(\downarrow B_G(X,r-1),1).\] We have \[B_G(B_G(\downarrow X,r-1),1)= B_G(\downarrow X,r) \mbox{ (see (1) of Lemma \ref{lem:b_g})}.\] Also, \begin{eqnarray*} B_G((\downarrow X)\cup B_G(X,r-1),1) &=& B_G(\downarrow X, 1)\cup B_G(B_G(X,r-1),1)\mbox{ (see (2) of Lemma \ref{lem:b_g})}\\ &=& B_G(\downarrow X,1)\cup B_G(X,r) \mbox{ (see (1) of Lemma \ref{lem:b_g})}\\ &=& (\downarrow X) \cup B_G(X,1) \cup B_G(X,r) \mbox{ (follows from (\ref{eq1}) with $r=1$)}\\ &=& (\downarrow X)\cup B_G(X,r). \end{eqnarray*} Finally we have \begin{eqnarray*} B_G(\downarrow(B_G(X,r-1)),1) &=& \downarrow B_G(B_G(X,r-1),1) \mbox{ (follows from (\ref{eq1}) with $r=1$)}\\ &=& \downarrow (B_G(X,r)). \end{eqnarray*} \end{proof} \section{A proof of Theorem \ref{thm:orderconvex} and some consequences}\label{section:proof-thm-orderconvex} We now proceed to the proof of Theorem \ref{thm:orderconvex}. \begin{proof} $(a)$ Apply successively equations (\ref{eq1}), (\ref{eq2}) and (\ref{eq4}) of Lemma \ref{lem:convex-boule}. $(b)$ Suppose $r=1$. Let $G':=G_{\restriction B_G(X, 1)}$ and $x,y\in B_G(X, 1)$. Let $n:=d_G(x,y)$. Clearly, $n\leq d_{G'}(x,y)$. To prove that the equality holds, we may suppose that $2\leq n<\infty$.
We argue by induction on $n$. Let $u_0, \dots, u_n$ be a shortest path in $G$ connecting $x$ and $y$; such a path is induced. If $n\geq 4$, we have $u_0< u_2<u_n$ by Lemma \ref{lem:inducedpath}. Since $B_G(X, 1)$ is order convex (by $(a)$), it contains $u_2$, hence, by induction, $d_G(x,u_2)= d_{G'}(x,u_2)=2$ and $d_G(u_2,y)= d_{G'}(u_2, y)=n-2$, hence $d_G(x,y)= d_{G'}(x,y)$. Thus, to conclude, it suffices to settle the cases $n=2$ and $n=3$. Let $x',y'\in X$ with $x'$ incomparable or equal to $x$ and $y'$ incomparable or equal to $y$. If $u_{n-1}$ is incomparable or equal to $y'$ then $u_{n-1}\in B_G(X, 1)$. By the induction hypothesis, $d_{G'}(x, u_{n-1})= d_{G}(x, u_{n-1})$, hence $d_{G'}(x, y)= d_{G}(x, y)$ as required. Hence, we may suppose $u_{n-1}$ comparable to $y'$, and similarly $u_{1}$ comparable to $x'$. Also, if $x'$ is incomparable or equal to $u_2$ then $x, x', u_2$ is a path in $B_G(X, 1)$; if $n=2$ we have $d_{G'}(x,y)=2$ as required, and if $n=3$, then $x, x', u_2, y$ is a path in $B_G(X, 1)$ and $d_{G'}(x,y)=3$ as required. Thus we may suppose $x'$ comparable to $u_2$ and, similarly, $y'$ comparable to $u_{n-2}$. Since $x'$ is incomparable or equal to $u_0$ and, by Lemma \ref{lem:inducedpath}, $u_0<u_2$, we have $x'< u_2$. Similarly, we have $u_{n-2}<y'$. Since $u_1$ is comparable to $x'$ and incomparable to $u_2$, we deduce $x' \leq u_1$ from $x'<u_2$. Similarly, we deduce $u_{n-1}\leq y'$. For $n=2$ we have $x'\leq u_1\leq y'$ and for $n=3$, $x'\leq u_1, u_2\leq y'$. By order convexity of $X$, $u_1\in X$ (and also $u_2\in X$ if $n=3$); hence the path $x=u_0,u_1,u_2=y$ if $n=2$, or the path $x=u_0,u_1,u_2, u_3=y$ if $n=3$, lies in $B_{G}(X, 1)$ and thus $d_{G'}(x,y)=n$. Suppose $r>1$. Then from $(a)$ above, $B_G(X, 1)$ is order convex. Via the induction hypothesis, $G_{\restriction B_G(B_G(X, 1), r-1)}$ is an isometric subgraph of $G$. Since $B_G(X, r)=B_G(B_G(X, 1), r-1)$, $G_{\restriction B_G(X,r)}$ is an isometric subgraph of $G$.
\end{proof} As the proof of Lemma \ref{lem:convex-boule} suggests, balls are not necessarily geodesically convex (for an example, look at the ball $B_G(x, 1)$ in a four-element cycle). A consequence of Theorem \ref{thm:orderconvex} is that the order convexity of balls is equivalent to the following inequality: \begin{corollary}\label{lem:monotone-distance}Let $P$ be a poset and let $G$ be its incomparability graph. Then \begin{equation} \label{eq:metric-inequalities} d_{G}(u,v)\leq d_{G}(x,y) \mbox{ for all } x\leq u\leq v\leq y \mbox{ in } P. \end{equation} \end{corollary} \begin{proof} The inequality above amounts to $d_{G}(u,v)\leq d_{G}(x,v)\leq d_{G}(x,y)$. We prove the first inequality; the second inequality follows by the same argument applied to the dual of $P$. We may suppose that $x<u<v$, otherwise there is nothing to prove. Let $n:=d_{G}(v,x)$. By $(a)$ of Theorem \ref{thm:orderconvex}, $B_G(v, n)$ is order convex. Since $x, v\in B_G(v, n)$ and $x\leq u\leq v$, then $u\in B_G(v, n)$, amounting to $d_G(u, v)\leq n=d_{G}(x,v)$. Conversely, assuming that inequality (\ref{eq:metric-inequalities}) holds, observe that every ball $B_G(x,r)$ is order convex. We may suppose $r\geq 1$, otherwise the conclusion is obvious. Let $u,v\in B_G(x,r)$ and $w\in P$ with $u<w<v$. If $x \parallel w$, then $d_G(x, w)=1\leq r$ hence $w\in B_G(x,r)$. If not, then either $x<w$ or $w<x$. In the first case, from $x<w<v$, inequality (\ref{eq:metric-inequalities}) yields $d_G(x, w)\leq d_G(x, v)\leq r$ hence $w\in B_G(x,r)$, whereas in the second case, from $u<w<x$, inequality (\ref{eq:metric-inequalities}) yields $d_G(w, x)\leq d_G(u, x)\leq r$ hence $w\in B_G(x,r)$. \end{proof} \begin{corollary}\label{lem:intermediateinducedpath} $\delta_G(X)= \delta_G(Conv_{P}(X))=\delta_G(Conv_{G}(X))$ for every subset $X$ of a poset $P$. \end{corollary} \begin{proof} Since by $(a)$ of Theorem \ref{thm:orderconvex} each ball $B_G(x,r)$ is order convex, $Conv_{P}(X) \subseteq Conv_{G}(X)$.
Hence $\delta_G(X)\leq \delta_G(Conv_{P}(X))\leq \delta_G(Conv_{G}(X))$. The equality $\delta_G(X)=\delta_G(Conv_{G}(X))$ is a general convexity property of metric spaces. Let $r:= \delta_G(X)$. Let $x,y \in Conv_G(X)$. We prove that $d_G(x,y) \leq r$. First, $X\subseteq B_G(x,r)$. Indeed, let $z\in X$; since $\delta_G(X)=r$, $X \subseteq B_G(z, r)$. Since $Conv_G(X)$ is the intersection of the balls containing $X$, we have $Conv_G(X) \subseteq B_G(z,r)$, hence $x\in B_G(z, r)$, that is, $z\in B_G(x, r)$. Next, from $X\subseteq B_G(x,r)$ we deduce $Conv_G(X) \subseteq B_G(x,r)$, hence $y\in B_G(x,r)$, that is, $d_G(x,y)\leq r$. \end{proof} \begin{lemma}\label{lem:inequality}Let $P:=(V,\leq)$ be a poset and $G$ be its incomparability graph. Let $x,y,z\in V$ be such that $x<z<y$. Then \[\max\{d_G(x,z),d_G(z,y)\}\leq d_G(x,y)\leq d_G(x,z)+d_G(z,y)\leq d_G(x,y) +2.\] \end{lemma} \begin{proof}The first inequality follows from Corollary \ref{lem:monotone-distance}. The second inequality is the triangle inequality. We now prove the third inequality. Let $p:=d_G(x,z)$, $q:=d_G(z,y)$, $r:=d_G(x,y)$. \\ \textbf{Claim:} Let $x_0:=x,\dots, x_r:=y$ be a path from $x$ to $y$. Then there exists $i\not \in \{0,r\}$ such that $z$ is incomparable to $x_i$.\\ \noindent{\bf Proof of the claim.} By induction on $r$. Note that since $x<y$ we have $r\geq 2$. If $r=2$, then necessarily $z$ is incomparable to $x_1$. Suppose $r>2$. Then $z\nleq x_1$ (otherwise $x<z\leq x_1$, contradicting $x\parallel x_1$). If $z$ is incomparable to $x_1$, then we are done. Otherwise $x_1<z$ and we may apply the induction hypothesis to $x_1,y$ and the path $x_1,\dots, x_r=y$. This completes the proof of the claim.\hfill $\Box$ Let $i$ be as in the Claim. Then $x_0:=x,\dots, x_i, z$ is a path from $x$ to $z$ of length $i+1$ and $z, x_i, x_{i+1}, \dots, x_r$ is a path from $z$ to $y$ of length $r-i+1$. Then $p+q\leq i+1+r-i+1=r+2$. The proof of the lemma is now complete. \end{proof} \begin{lemma}Let $x_0,...,x_n$ be an isometric path in a graph $G$ with $n\geq 2$.
There exists a vertex $x_{n+1}$ such that $x_0,...,x_n,x_{n+1}$ is an isometric path in $G$ if and only if $B_G(x_n,1)\nsubseteq B_G(x_0,n)$. \end{lemma} \begin{proof}$\Rightarrow$ is obvious.\\ $\Leftarrow$ Suppose $B_G(x_n,1)\nsubseteq B_G(x_0,n)$ and let $x_{n+1} \in B_G(x_n,1)\setminus B_G(x_0,n)$. \\ \textbf{Claim 1:} $d_G(x_0,x_{n+1})=n+1$.\\ Indeed, since $x_{n+1} \in B_G(x_n,1)\setminus B_G(x_0,n)$ we have $d_G(x_0,x_{n+1})>n$. From the triangle inequality, $d_G(x_0,x_{n+1})\leq d_G(x_0,x_{n})+ d_G(x_n,x_{n+1})=n+1$.\\ \textbf{Claim 2:} $d_G(x_{j},x_{n+1})=n+1-j$ for all $0\leq j\leq n$.\\ Indeed, from the triangle inequality, $d_G(x_{j},x_{n+1})\leq d_G(x_{j},x_{n})+ d_G(x_{n},x_{n+1})=n-j+1$. Similarly, $d_G(x_{0},x_{n+1})\leq d_G(x_{0},x_{j})+ d_G(x_{j},x_{n+1})$ and therefore $d_G(x_{j},x_{n+1})\geq d_G(x_{0},x_{n+1})-d_G(x_{0},x_{j})=n+1-j$. The equality follows. \end{proof} We could restate the previous lemma as follows. There is an isometric path of length $n+1$ starting at some vertex $x_0$ if there is some $x_n\in B_G(x_0,n)$ such that $B_G(x_n,1)\nsubseteq B_G(x_0,n)$. Another consequence of the convexity of balls in an incomparability graph is the following: \begin{lemma} \label{ball-infinitepath}Let $G$ be the incomparability graph of a poset $P$. If a ball contains infinitely many vertices of a one-way infinite induced path, then it contains all but finitely many vertices of that path. \end{lemma} \begin{proof} Let $\mathrm P_{\infty}$ be an infinite induced path of $G$ and $(x_n)_{n\in \NN}$ be an enumeration of its vertices, so that $(x_n, x_{n+1}) \in E(G)$ for $n\in \NN$. Without loss of generality we may suppose that $x_0<x_2$ (otherwise, replace the order of $P$ by its dual). By Lemma \ref{lem:inducedpath} we have $x_i< x_j$ for every $j\geq i+2$. Let $B_G(x, r)$ be a ball of $G$ containing infinitely many vertices of $\mathrm P_{\infty}$. Let $x_i\in \mathrm P_{\infty} \cap B_G(x,r)$.
We claim that $x_j\in \mathrm P_{\infty} \cap B_G(x,r)$ for all $j\geq i+2$. Indeed, due to our hypothesis, we may pick $x_k \in \mathrm P_{\infty} \cap B_G(x,r)$ with $k\geq j+2$. We have $x_i<x_j<x_k$. Due to the order convexity of $B_G(x,r)$ we have $x_j\in B_G(x, r)$. This proves our claim. \end{proof} Said differently: \begin{lemma}\label{ball-infinitepath2} If a one-way infinite induced path $\mathrm P_{\infty}$ has an infinite diameter in the incomparability graph $G$ of a poset, then every ball of $G$ with finite radius contains only finitely many vertices of $\mathrm P_{\infty}$. \end{lemma} \section{Induced infinite paths in incomparability graphs: A proof of Theorem \ref{thm:infinitepath-kite}}\label{section:proof-thm:infinitepath-kite} The proofs of $(1)$ and $(2)$ of Theorem \ref{thm:infinitepath-kite} are similar. We construct a strictly increasing sequence $(y_n)_{n\in \NN}$ of vertices such that $3\leq d_G(y_n,y_{n+1})<+\infty$ for all $n\in \NN$ and we associate to each $n\in \NN$ a finite path $\mathrm P_n:=z_{(n,0)},z_{(n,1)},...,z_{(n,r_n)}$ of $G$ of length $r_n:=d_G(y_n,y_{n+1})$ joining $y_n$ and $y_{n+1}$. We show first that the graph $G':=G_{\restriction \bigcup_{n\in \NN}V(\mathrm P_n)}$ is connected and has an infinite diameter. Next, we prove that it is locally finite. Hence, by K\H{o}nig's Lemma (Lemma \ref{thn:konig}), it contains an infinite isometric path. This path yields an infinite induced path of $G$. The detour via K\H{o}nig's Lemma is needed because the union of two consecutive paths $\mathrm P_n$ and $\mathrm P_{n+1}$ does not necessarily form a path. In the first proof, our paths have length $3$. In the second proof, their end vertices have degree at least $3$. \begin{lemma}\label{lem:oneside}Let $P:=(V,\leq)$ be a poset so that its incomparability graph $G$ is connected and has infinite diameter. Let $x\in V$ be arbitrary. Then at least one of the sets $d_G^+(x):=\{d_G(x, y) : x<y\in V\}$ or $d_G^-(x):=\{d_G(x, y): y\in V \;\text{and}\; y<x\}$ is unbounded in $\NN$.
Furthermore, if $d_G^+(x):=\{d_G(x, y) : x<y\in V\}$ is unbounded in $\NN$ and $z>x$, then $d_G^+(z):=\{d_G(z, y) : z<y\in V \}$ is unbounded in $\NN$ (in particular, $z$ cannot be maximal in $P$). \end{lemma} \begin{proof}Suppose for a contradiction that the sets $d_G^+(x)$ and $d_G^-(x)$ are both bounded. Let $r:=\max d_G^+(x)$ and $r':=\max d_G^-(x)$. Then $V=B_G(x,\max\{2, r,r'\})$ and therefore the diameter of $G$ is bounded, contradicting our assumption. Now let $z>x$ and suppose for a contradiction that $d_G^+(z)$ is bounded and let $r:=\max d_G^+(z)$. Let $x<y$. If $y\leq z$ then $d_G(x,y)\leq d_G(x, z)$ by Lemma \ref{lem:inequality}; if $z\parallel y$ then $d_G(x,y)\leq d_G(x,z)+1$; if $z\leq y$ then we have $d_G(x, y)\leq d_G(x,z)+d_G(z,y)\leq d_G(x,z)+r$; hence, the set $d_G^+(x)$ is bounded, contradicting our assumption. \end{proof} \noindent {\bf Proof of (1) of Theorem \ref{thm:infinitepath-kite}.} We construct a sequence $(x_{n})_{n\in \NN}$ of vertices (see Figure \ref{fig:sequence}). We pick $x_0\in V$. According to Lemma \ref{lem:oneside}, one of the sets $d_G^+(x_0):=\{d_G(x_0, y) : x_0<y\in V\}$ and $d_G^-(x_0):=\{d_G(x_0, y): y\in V \;\text{and}\; y<x_0\}$ is unbounded. We may assume without loss of generality that the set $d_G^+(x_{0} )$ is unbounded. Choose an element $x_3>x_0$ at distance three from $x_0$ in $G$ and let $x_{0}, x_{1}, x_{2}, x_{3}$ be a path joining $x_0$ to $x_3$. Note that necessarily we have $x_0<x_{2}$ and $x_{1}<x_3$. Suppose now that we have constructed a sequence $x_0, x_1, \dots, x_{3n}$ such that $x_0< x_3<\dots <x_{3n}$ and such that $x_{3i}, x_{3i+1}, x_{3i+2}, x_{3i+3}$ is a path with extremities $x_{3i}$ and $x_{3(i+1)}$ for $i<n$. According to Lemma \ref{lem:oneside}, the set $d_G^+(x_{3n})$ is unbounded. Hence, we may choose a vertex $x_{3(n+1)}>x_{3n}$ at distance three from $x_{3n}$.
Let $x_{3n}, x_{3n+1}, x_{3n+2}, x_{3n+3}$ be a path of extremities $x_{3n}$ and $x_{3(n+1)}$. By Lemma \ref{lem:inducedpath} we have necessarily: \begin{equation}\label{equ:inequality inducedpath} x_{3n}<x_{3n+2}\; \text{and}\; x_{3n+ 1}<x_{3n+3}. \end{equation} \begin{figure}[h!] \begin{center} \leavevmode \epsfxsize=1.2in \epsfbox{sequence1.eps} \end{center} \caption{} \label{fig:sequence} \end{figure} Let $P'$ be the poset induced on the set $V':= \{x_n :n\in \NN\}$ and $G'$ be the incomparability graph of $P'$. According to our construction $G'$ contains a spanning path (not necessarily induced), hence it is connected. \begin{claim}\label{claim:inequalityG} $d_{G}(x_0,x_{3n})\geq n+2$ for every $n\geq 1$. \end{claim} Since $d_{G'}(x_0,x_{3n})\geq d_{G}(x_0,x_{3n})$, it follows that the diameter of $G'$ is infinite. \noindent{\bf Proof of Claim \ref {claim:inequalityG}.} We prove the inequality of the claim by induction on $n\geq 1$. By definition, the inequality holds for $n=1$. Suppose the inequality holds for $n$. It follows from Lemma \ref{lem:inequality} that $n+5 \leq d_G(x_0, x_{3n})+ d_G(x_{3n}, x_{3(n+1)})\leq d_G(x_0, x_{3(n+1)})+2$ and therefore the inequality holds for $n+1$. \begin{claim}\label{claim:pathkonig} The incomparability graph of $P'$ is locally finite, that is, for all $x\in V'$, $\iinc_{P'}(x):= \{y\in V': x \parallel y\}$ is finite. \end{claim}In fact, $\iinc_{P'}(x)$ has at most six elements. \noindent{\bf Proof of Claim \ref {claim:pathkonig}.} We have \begin{enumerate}[$(a)$] \item $\iinc_{P'}(x_{3n})\subseteq \{x_{3n-1}, x_{3n+1}\}$ for $n\geq 1$. \item $\iinc_{P'}(x_{3n+1})\subseteq \{x_{3(n-2)+2}, x_{3(n-1)+1}, x_{3(n-1)+2}, x_{3n}, x_{3n+2}, x_{3(n+1)+1}\}$ for $n\geq 2$. \item $\iinc_{P'}(x_{3n+2})\subseteq \{x_{3n-1}, x_{3(n+1)}, x_{3(n+1)+1}, x_{3(n+1)+2}, x_{3(n+2)+1}\}$ for $n\geq 1$.
\item $\iinc_{P'}(x_{0})=\{x_1\}$, $\iinc_{P'}(x_{1})\subseteq \{x_0,x_2,x_4\}$, $\iinc_{P'}(x_2)\subseteq \{x_1,x_3,x_5,x_7\}$ and $\iinc_{P'} (x_4)\subseteq \{x_1,x_3,x_5,x_7\}$. \end{enumerate} \begin{proof} \begin{enumerate}[$(a)$] \item Let $n\in \NN$. By inequalities (\ref{equ:inequality inducedpath}) stated above, we have $x_{3n-2}<x_{3n}< x_{3n+2}$. Let $n'\in \NN$ be such that $n<n'$. By construction, $x_{3n}<x_{3n'}$. By inequalities (\ref{equ:inequality inducedpath}) again we have $x_{3n'}<x_{3n'+2}$, hence $x_{3n}<x_{3n'+2}$. Since $x_{3n}<x_{3n'}$ and $x_{3n'+1}$ is incomparable to $x_{3n'}$ we infer that $x_{3n'+1}\nleq x_{3n}$. We have $d_G(x_{3n},x_{3n'})\geq 3$; indeed, if $n'=n+1$, then $d_G(x_{3n},x_{3n'})=3$ by construction; otherwise apply the first inequality of Lemma \ref{lem:inequality} with $x=x_{3n}$, $z=x_{3(n+1)}$ and $y=x_{3n'}$. Since $d_G(x_{3n},x_{3n'})\geq 3$ and $x_{3n'}$ is incomparable to $x_{3n'+1}$, the vertices $x_{3n}$ and $x_{3n'+1}$ cannot be incomparable; it follows that $x_{3n}<x_{3n'+1}$. Since a poset and its dual have the same incomparability graph, we deduce that if $n'<n$, then $x_{3n'},x_{3n'+1}, x_{3n'+2}<x_{3n}$. Hence, $\iinc_{P'}(x_{3n})\subseteq \{x_{3n-1}, x_{3n+1}\}$ for $n\geq 1$. \item Since $x_{3n-3}<x_{3n}$ and $x_{3n}$ and $x_{3n+1}$ are incomparable we infer that $x_{3n+1}\nleqslant x_{3n-3}$. It follows that $x_{3n-3}<x_{3n+1}$ because otherwise $x_{3n-3},x_{3n+1},x_{3n}$ would be a path of length two contradicting our assumption that $d_G(x_{3n-3},x_{3n})=3$. From Lemma \ref{lem:inducedpath}, we deduce that if $k<3n-4$, then $x_k<x_{3n-3}$ and hence $x_k<x_{3n+1}$. Hence, if $k<3n-1$ and $x_k$ is incomparable to $x_{3n+1}$, then $k\in \{3n-4,3n-2\}$. Since $x_{3n+1}<x_{3n+3}$ it follows from Lemma \ref{lem:inducedpath} that if $k>3n+4$, then $x_k>x_{3n+4}$ and hence $x_k\not \in \iinc_{P'}(x_{3n+1})$. Hence, only the elements listed above can be incomparable to $x_{3n+1}$, and the required inclusion follows.
\item Since $x_{3n}<x_{3n+2}$ it follows from Lemma \ref{lem:inducedpath} that if $x_k$, for $k<3n$, is incomparable to $x_{3n+2}$ then $k=3n-1$. Now observe that $x_{3n+2}<x_{3n+6}$ because otherwise $x_{3n+3},x_{3n+2},x_{3n+6}$ is a path of length two contradicting $d_G(x_{3n+3},x_{3n+6})=3$. By duality we infer that if $k>3n+4$, then $x_k$ incomparable to $x_{3n+2}$ implies $k\in \{3n+5,3n+7\}$. The required inclusion readily follows. \item We have $x_0<x_3$ and $x_0<x_2$. Since $d_G(x_0,x_3)=3$ and $x_3$ is incomparable to $x_4$ we must have $x_0<x_4$. From $\iinc_{P'}(x_{3})\subseteq \{x_{2}, x_{4}\}$ we deduce that $x_1$ is the only element incomparable to $x_0$. From $x_1<x_3$ we deduce that $\iinc_{P'}(x_1)\subseteq \{x_0\}\cup \iinc_{P'}(x_3)$ and therefore $\iinc_{P'}( x_{1})\subseteq \{x_0,x_2,x_4\}$. From $x_2<x_6$ and $\iinc_{P'}( x_{6})\subseteq \{x_{5}, x_{7}\}$ we derive $\iinc_{P'}(x_2)\subseteq \{x_1,x_3,x_5,x_7\}$. Similarly, we have $\iinc_{P'}(x_4)\subseteq \{x_1,x_3,x_5,x_7\}$. \end{enumerate} \hfill $\Box$ From Claim \ref{claim:inequalityG} and Claim \ref{claim:pathkonig}, $\inc(P')$ is connected, locally finite and has an infinite diameter. From K\H{o}nig's Lemma, $G'$ contains an infinite isometric path, hence $G$ contains an infinite induced path. This completes the proof of $(1)$. \end{proof} \noindent {\bf Proof of (2) of Theorem \ref{thm:infinitepath-kite}.} We break the proof into two parts. \begin{claim} \label{claim:part1} If $G$ is a connected incomparability graph of infinite diameter and if the set of vertices of degree at least $3$ in $G$ has infinite diameter, then $G$ contains an infinite induced path such that the set of vertices of this path with degree at least $3$ in $G$ has an infinite diameter. \end{claim} \noindent{\bf Proof of Claim \ref{claim:part1}.} Let $x$ be any vertex in $G$, $I:=\iinc_P(x)\cup \downarrow x$ and $F:=\iinc_P(x)\cup \uparrow x$.
According to Theorem \ref{thm:orderconvex}, $I$ and $F$ are order convex and $G_{\restriction I}$ and $G_{\restriction F}$ are isometric subgraphs of $G$. Since, trivially, $V(G)=I\cup F$, every vertex of degree at least $3$ belongs to $I$ or to $F$. Since the diameter in $G$ of the set of vertices of degree at least $3$ is infinite and $G_{\restriction I}$ and $G_{\restriction F}$ are isometric subgraphs we infer that the diameter in $G_{\restriction I}$ or in $G_{\restriction F}$ of the set of vertices of degree at least $3$ is infinite. We may assume without loss of generality that the diameter in $G_{\restriction F}$ of the set of vertices of degree at least $3$ is infinite. Choose $y$ of degree at least $3$ in $G_{\restriction F}$. We start by showing that $P$ contains an infinite chain of elements whose degree is at least $3$ in $G$. Suppose we have constructed a sequence $y_0:=y<y_1<\dots <y_{n-1}$ of vertices of degree at least $3$ such that $d_G(y_i,y_{i+1})> 3$ for all $i\leq n-2$. Let $y_n\in F$ be a vertex of degree at least $3$ such that $d_G(y_{n-1},y_{n})>\sum^{n-2}_{j=0}d_G(y_j,y_{j+1})$. This choice of $y_n$ is possible since the diameter in $G_{\restriction F}$ of the set of vertices of degree at least $3$ is infinite. Then $y_{n-1}$ and $y_n$ are comparable in $P$. It follows from Corollary \ref{lem:monotone-distance} that $y_{n-1}<y_n$. Hence, the sequence $(y_i)_{i\in \NN}$ forms a chain in $P$. For all $n\in \NN$, let $\mathrm P_n:=z_{(n,0)},z_{(n,1)},...,z_{(n,r_n)}$ be a path in $G$ of length $r_n:=d_G(y_n,y_{n+1})$ joining $y_n$ and $y_{n+1}$. The graph $G':=G_{\restriction \bigcup_{n\in \NN}V(\mathrm P_n)}$ is connected and has infinite diameter. \begin{subclaim}\label{subclaim:part1} $G'$ is locally finite.\end{subclaim} \noindent{\bf Proof of Subclaim \ref{subclaim:part1}.} It suffices to prove that for $n+2\leq m$, every vertex of $\mathrm P_n$ is comparable to every vertex of $\mathrm P_{m}$. Let $z_{(n, i)}\in \mathrm P_n$ and $z_{(m, j)}\in \mathrm P_m$.
$\bullet$ Suppose first $i= r_{n}-1$. \begin{enumerate}[{(a)}] \item $z_{(n, r_n-1)}\leq y_{m}, z_{(m,1)}$. Indeed, $z_{(n,r_n-1)}$ and $y_{m}$ are comparable, otherwise $y_{n+1}$, $z_{(n, r_n-1)}$, $y_{m}$ form a path with extremities $y_{n+1}$ and $y_{m}$, hence $d_G(y_{n+1}, y_{m})\leq 2$. This is impossible since $d_G(y_{n+1}, y_{m})\geq d_G(y_{n+1}, y_{n+2}) \geq 4$. Furthermore, $z_{(n, r_n-1)}<y_{m}$, otherwise, since $y_{n+1} <y_m$, we obtain $y_{n+1} < z_{(n, r_n-1)}$ by transitivity, while these vertices are incomparable. Similarly, $z_{(n, r_n-1)}$ and $z_{(m, 1)}$ are comparable, otherwise $y_{n+1}$, $z_{(n, r_n-1)}$, $z_{(m, 1)}$, $y_{m}$ form a path with extremities $y_{n+1}$ and $y_{m}$, hence $d_G(y_{n+1}, y_{m})\leq 3$, while this distance is at least $4$. Necessarily, $z_{(n, r_n-1)}<z_{(m,1)}$, otherwise since $z_{(n, r_n-1)}< y_{m}$, we have $z_{(m,1)}< y_m$ which is impossible. \item By symmetry, $y_{n+1}, z_{(n, r_n-1)}\leq z_{(m,1)}$. \item $z_{(n, r_n-1)}\leq z_{(m,j)}$. We just proved it for $j=0, 1$. If $j>1$, this follows from $y_m<z_{(m, j)}$ by transitivity. \end{enumerate} $\bullet$ Next, suppose $i= r_n$. In this case $z_{(n,i)}= y_{n+1}$. If $j\geq 2$, we have $z_{(n,i)}= y_{n+1}<y_{m}= z_{(m, 0)} <z_{(m, j)}$. If $j=1$, this is just item (b) above. $\bullet$ Finally, suppose that $i<r_{n}-1$. In this case, $z_{(n, i)} <y_{n+1}<z_{(m,j)}$. \hfill $\Box$ Since $G'$ is connected, locally finite and has an infinite diameter, K\H{o}nig's Lemma ensures that it contains an infinite isometric path $\mathrm P_\infty$. We claim that $\mathrm P_\infty$ contains an infinite number of vertices of degree at least $3$ in $G$. Clearly, $V(\mathrm P_\infty)$ meets infinitely many $\mathrm P_i$'s. For each $i\in \NN$ such that $V(\mathrm P_i)$ meets $V(\mathrm P_\infty)$, let $j_i$ be the largest index such that $z_{(i,j_i)}\in V(\mathrm P_\infty)$. Then the degree of $z_{(i,j_i)}$ is at least $3$ in $G$. Indeed, if $z_{(i,j_i)}\in \{y_i,y_{i+1}\}$, then we are done.
Otherwise $z_{(i,j_i)}$ is not an end vertex of $\mathrm P_i$. Then $z_{(i,j_i)}$ must have a neighbour in $\mathrm P_{\infty}$ which is not in $\mathrm P_i$ and therefore must have degree at least three. So far we have proved that $G'$ contains an infinite isometric path $\mathrm P_\infty$ containing infinitely many vertices of degree at least $3$. Hence, $G$ contains an infinite induced path $\mathrm P_\infty$ containing infinitely many vertices of degree at least $3$. This proves our claim. \hfill $\Box$ \begin{claim}\label{claim:part2}If $G$ is a connected incomparability graph containing an infinite induced path such that the set of vertices of this path with degree at least $3$ in $G$ has an infinite diameter then $G$ contains either a caterpillar or a kite. \end{claim} \noindent{\bf Proof of Claim \ref{claim:part2}.} Let $(x_n)_{n\in \NN}$ be a sequence of vertices of $G$ with $(x_n, x_{n+1})\in E(G)$ for $n\in \NN$ forming an infinite induced path $\mathrm P_{\infty}$. Suppose that this path contains infinitely many vertices with degree at least $3$ in $G$ forming a set of infinite diameter in $G$. \begin{subclaim}\label{subclaim:part2}There is an infinite sequence $(y_n)_{n}$ of vertices in $V\setminus V(\mathrm P_{\infty})$ forming an independent set and a family of disjoint intervals $I_n:= [l(n), r(n)]$ of $\NN$ such that $\{x_{l(n)}, x_{r(n)}\} \subseteq B_G(y_n, 1)\cap V(\mathrm P_{\infty}) \subseteq \{x_k : k\in I_n\}$ for all $n\in \NN$. \end{subclaim} \noindent{\bf Proof of Subclaim \ref{subclaim:part2}.} Pick $x_{i_0}\in \mathrm P_{\infty}$ with degree at least $3$ in $G$ and pick $y_0$ arbitrarily in $B_G(x_{i_0}, 1)\setminus \mathrm P_{\infty}$. According to Lemma \ref{ball-infinitepath2} the ball $ B_G(y_0, 1)$ contains only finitely many vertices of $\mathrm P_{\infty}$. Let $l(0)$, resp., $r(0)$ be the least, resp., the largest integer $k$ such that $x_k\in B_G(y_0, 1)$. Let $n >0$. Suppose that $y_m$ and $I_m:= [l(m), r(m)]$ have been defined for $m<n$.
By Lemma \ref{ball-infinitepath2}, $\mathrm P_{\infty} \cap (\bigcup_{m<n}B_G(y_m, 2))$ is finite, hence there is a vertex $x_{i_n}\in \mathrm P_{\infty}$ with degree at least $3$ such that every vertex in the infinite subpath of $\mathrm P_{\infty}$ starting at $x_{i_n}$ is at distance at least $3$ from any $y_m$. Pick $y_n\in B_G(x_{i_n}, 1)\setminus \mathrm P_{\infty}$ and set $I_n=[l(n), r(n)]$ where $l(n)$, resp., $r(n)$ is the least, resp., the largest integer $k$ such that $x_k\in B_G(y_n, 1)$. \hfill $\Box$ In order to complete the proof of Claim \ref{claim:part2} we show that the graph $G'$ induced on $\mathrm P_{\infty} \cup \{y_n: n\in \NN\}$ contains a caterpillar or a kite. For that, we classify the vertices $y_n$. We say that $y_n$ has \emph{type} $(0)$ if $l(n)=r(n)$ (that is, $y_n$ has just one neighbour on $\mathrm P_{\infty}$). If the set $Y_0$ of vertices of type $(0)$ is infinite then trivially $G_{\restriction \mathrm P_{\infty} \cup Y_0}$ is a caterpillar (see Figure \ref{fig:comb-kite}). We say that $y_n$ has \emph{type} $(1)$ if $r(n)= l(n)+1$. Again, trivially, if the set $Y_1$ of vertices of type $(1)$ is infinite then $G_{\restriction \mathrm P_{\infty} \cup Y_1}$ is a kite of \emph{type} $(1)$. We say that $y_n$ has \emph{type} $(2)$ if $r(n)=l(n)+2$. It has \emph{type} $(2.1)$ if $(y_n, x_{l(n)+1})\in E(G)$ while it has \emph{type} $(2.2)$ if $(y_n, x_{l(n)+1})\not \in E(G)$. If for $i=1,2$ the set $Y_{2.i}$ of vertices of type $(2.i)$ is infinite then $G_{\restriction \mathrm P_{\infty} \cup Y_{2.i}}$ is a kite of type $(i+1)$ (see Figure \ref{fig:comb-kite}). We say that $y_n$ has type $(3)$ if $r(n)\geq l(n)+3$. It has type $(3.1)$ if $(y_n, x_{l(n)+1})\in E(G)$ while it has type $(3.2)$ if $(y_n, x_{l(n)+1})\not \in E(G)$. If the set $Y_{3.i}$ of vertices of type $(3.i)$ is infinite, delete from $\mathrm P_{\infty}$ the set $Y:= \bigcup_{y_n\in Y_{3.i}}\{x_{m}: l(n)+2\leq m\leq r(n)-1 \}$.
Then $G_{\restriction (\mathrm P_{\infty} \cup Y_{3.i})\setminus Y}$ is a kite of type $(2)$ if $i=1$ or a caterpillar if $i=2$ (see Figure \ref{fig:comb-kite}).\hfill $\Box$ \section{Example \ref{thm:noisometric}} \label{section:proof-thm:noisometric} We define the poset satisfying the conditions stated in Example \ref{thm:noisometric}. For a poset $P=(V,\leq)$ and every $x\in V$ we set $\iinc_P(x):= \{y\in V: x\parallel y\}$.\\ Let $P:=(X,\leq)$ be the poset defined on $X:=\NN\times \NN \times \{0,1\}$ as follows. We let $(m,n,i)\leq (m',n',i')$ if \[i=i' \mbox{ and } [ n<n' \mbox{ or } (n=n' \mbox{ and } m\leq m') ],\] \[\mbox{or}\] \[i\neq i' \mbox{ and } [n+1<n' \mbox{ or } (n+1=n' \mbox{ and } m\leq m')].\] We set $A_n:=\{(m,n,1) : m\in \NN\}$ for all $n\geq 0$ and $B_n:=\{(m,n,0) : m\in \NN\}$ and note that $\cup_{n\in \NN}A_n$ and $\cup_{n\in \NN} B_n$ are two chains of order type $\omega^2$. In particular $P$ is coverable by two chains and hence has width two. \\ \textbf{Claim 1:} $\leq$ is an order relation.\\ Reflexivity and antisymmetry are obvious. We now prove that $\leq$ is transitive. Let $(m,n,i)$, $(m',n',i')$, $(m'',n'',i'')$ be such that $(m,n,i)\leq (m',n',i')\leq (m'',n'',i'')$. Note that since $\{i,i',i''\}\subseteq \{0,1\}$ at least two elements of $\{i,i',i''\}$ are equal. If $i=i'=i''$, then clearly $(m,n,i)\leq (m'',n'',i'')$. Next we suppose that exactly two elements of $\{i,i',i''\}$ are equal. There are three cases to consider.\\ $\bullet$ $i=i'$.\\ Since $(m,n,i)\leq (m',n',i')$ we have \begin{equation}\label{1 i=i'} n<n' \mbox{ or } (n=n' \mbox{ and } m\leq m'). \end{equation} Since $i'\neq i''$ and $(m',n',i')\leq (m'',n'',i'')$ we have \begin{equation}\label{2 i' not i''} n'+1<n'' \mbox{ or } (n'+1=n'' \mbox{ and } m'\leq m''). \end{equation} If $n+1<n''$, then since $i\neq i''$ it follows that $(m,n,i)\leq (m'',n'',i'')$. Suppose $n''\leq n+1$. If $n'+1<n''$, then $n'<n$.
This contradicts (\ref{1 i=i'}), which gives $n\leq n'$; hence this case cannot occur. Else, $n''\leq n'+1$. It follows from (\ref{2 i' not i''}) that $n'+1=n''$ and $m'\leq m''$. If $n<n'$, then $n+1<n''$ and once again we have $(m,n,i)\leq (m'',n'',i'')$. Otherwise it follows from (\ref{1 i=i'}) that $n=n'$ and $m\leq m'$. Hence, $n+1=n''$ and $m\leq m''$ proving that $(m,n,i)\leq (m'',n'',i'')$. \\ $\bullet$ $i=i''$.\\ Since $(m,n,i)\leq (m',n',i')$ and $i\neq i'$ we have \begin{equation}\label{3 i not i'} n+1<n' \mbox{ or } (n+1=n' \mbox{ and } m\leq m'). \end{equation} Since $(m',n',i')\leq (m'',n'',i'')$ and $i'\neq i''$ we have \begin{equation}\label{4 i' not i''} n'+1<n'' \mbox{ or } (n'+1=n'' \mbox{ and } m'\leq m''). \end{equation} We prove that $n<n''$. We suppose $n''\leq n$ and we argue to a contradiction. We claim that none of $n+1<n'$ and $n'+1<n''$ can hold. Indeed, suppose $n+1<n'$. Then $n''<n'$ and hence $n'+1<n''$ cannot be true. It follows from (\ref{4 i' not i''}) that $n'+1=n''$. But then $n''=n'+1>n'>n''$ which is impossible. Now suppose $n'+1<n''$. Then $n'+1<n''\leq n$ and, by (\ref{3 i not i'}), $n+1\leq n'$; hence $n'+1<n<n+1\leq n'$, which is impossible. Hence, we have proved our claim that none of $n+1<n'$ and $n'+1<n''$ can hold. It follows from (\ref{3 i not i'}) and (\ref{4 i' not i''}) that $n+1=n'$ and $n'+1=n''$, and in particular $n+2=n''$. This contradicts $n''\leq n$. Hence, $n<n''$ and therefore $(m,n,i)\leq (m'',n'',i'')$ since $i=i''$.\\ $\bullet$ $i'=i''$.\\ Since $(m,n,i)\leq (m',n',i')$ and $i\neq i'$ we have \begin{equation}\label{5 i not i'} n+1<n' \mbox{ or } (n+1=n' \mbox{ and } m\leq m'). \end{equation} \noindent Since $(m',n',i')\leq (m'',n'',i'')$ and $i'= i''$ we have \begin{equation}\label{6 i'=i''} n'<n'' \mbox{ or } (n'=n'' \mbox{ and } m'\leq m'').
\end{equation} If $n+1<n''$, then $(m,n,i)\leq (m'',n'',i'')$ since $i\neq i''$. Suppose now that $n''\leq n+1$. We claim that none of $n+1<n'$ and $n'<n''$ can hold. Suppose $n+1<n'$. Then $n''<n'$ and it follows from (\ref{6 i'=i''}) that $n'=n''$. But then $n''\leq n+1<n'=n''$ which is impossible. Suppose $n'<n''$. Then $n'<n''\leq n+1$ and it follows from (\ref{5 i not i'}) that $n+1=n'$. But then $n+1=n'<n''\leq n+1$ which is impossible. Hence, none of $n+1<n'$ and $n'<n''$ can hold. It follows from (\ref{5 i not i'}) and (\ref{6 i'=i''}) that $(n+1=n' \mbox{ and } m\leq m')$ and $(n'=n'' \mbox{ and } m'\leq m'')$. Therefore, $(n+1=n'' \mbox{ and } m\leq m'')$ proving that $(m,n,i)\leq (m'',n'',i'')$ as required.\\ \textbf{Claim 2:} Let $j\in \NN$. Then for all $x\in A_j$, $|B_{\inc(P)}(x,1)\cap B_{j+1}|$ is finite.\\ Let $x:=(m,j,1)\in A_j$. Then $B_{\inc(P)}(x,1)\cap B_{j+1}=\{(k,j+1,0): 0\leq k\leq m-1\}$.\\ \textbf{Claim 3:} Let $j\in \NN$. Then for all $x\in B_j$, $|B_{\inc(P)}(x,1)\cap A_{j+1}|$ is finite.\\ Let $x:=(m,j,0)\in B_j$. Then $B_{\inc(P)}(x,1)\cap A_{j+1}=\{(k,j+1,1): 0\leq k\leq m-1\}$.\\ \textbf{Claim 4:} Let $j\in \NN$. Then for all $x\in A_j$ and for all $y\in B_{\inc(P)}(x,1)\cap B_{j+1}$, $|B_{\inc(P)}(y,1)\cap A_{j+2}|<|B_{\inc(P)}(x,1)\cap B_{j+1}|$.\\ Let $x:=(m,j,1)\in A_j$. It follows from Claim 2 that $|B_{\inc(P)}(x,1)\cap B_{j+1}|=m$. Let $y\in B_{\inc(P)}(x,1) \cap B_{j+1}$, say $y=(m',j+1,0)$ and note that $m'<m$. Then it follows from Claim 3 that $|B_{\inc(P)}(y,1)\cap A_{j+2}|=m'$. Since $m'<m$ we are done.\\ \textbf{Claim 5:} Let $j\in \NN$.
Then for all $x\in B_j$ and for all $y\in B_{\inc(P)}(x,1)\cap A_{j+1}$, $|B_{\inc(P)}(y,1)\cap B_{j+2}|<|B_{\inc(P)}(x,1)\cap A_{j+1}|$.\\ This follows from Claim 4 by symmetry.\\ \textbf{Claim 6:} If there exists an infinite isometric path $(x_i)_{i\in \NN}$ in $\inc(P)$ starting at $x_0=(0,0,1)$, then $x_{2n}\in A_{2n-1}$ for all $n\geq 1$ and $x_{2n+1}\in B_{2n}$ for all $n\in \NN$.\\ Since $B_0=\iinc_P(x_0):= \{y\in X: y\; \text{incomparable to}\; x_0 \; \text{in}\; P\}$, it follows that $x_1\in B_0$. Suppose for a contradiction that $x_2\not \in A_1$. Then $x_2\in A_0$. In this case $x_3\in B_1$ (this is because $x_0<x_3$). But then $x_4\in A_2$ because otherwise $x_4\in A_1$ and hence the distance from $x_0$ to $x_4$ would be two, which is not possible. By the same token $x_5\in B_3$ and more generally $x_{2n+1}\in B_{2n-1}$ and $x_{2n}\in A_{2n-2}$. This is impossible. Indeed, if $x_2=(i,0,1)$ then $x_3=(j,1,0)$ with $j<i$ and then $x_4=(k,2,1)$ with $k<j$. Continuing this way we obtain an infinite strictly decreasing sequence of nonnegative integers.\\ \textbf{Claim 7:} Let $y\in B_{\inc(P)}(x_0,1)\cap B_0$. Then the lengths of isometric paths starting at $x_0$ and going through $y$ are bounded.\\ This follows from Claims 4, 5 and 6.\\ We conclude that there is no infinite isometric path in $\inc(P)$ starting at $(0,0,1)$. It follows from Theorem \ref{thm:polat} that $\inc(P)$ has no infinite isometric path. \section{Interval orders: A proof of Theorem \ref{thm:intervalorder-isometric} and Example \ref{thm:intervalorder-non-isometric}}\label{section:intervalorders} We recall that an order $P$ is an {\it interval order} if $P$ is isomorphic to a subset $\mathcal J$ of the set $Int(C)$ of non-empty intervals of a chain $C$, ordered as follows: if $I, J\in Int(C)$, then \begin{equation}\label{ordre-sur-intervalles-recall} I<J \mbox{ if } x<y \mbox{ for every } x\in I \mbox{ and every } y\in J. \end{equation} The following proposition encompasses some known equivalent properties of interval orders. Its proof is easy and is left to the reader.
\begin{proposition}Let $P:=(V,\leq)$ be a poset. The following assertions are equivalent. \begin{enumerate}[(i)] \item $P$ is an interval order. \item $P$ does not embed $2\oplus 2$. \item The set $\{(\downarrow{x})\setminus \{x\} : x\in V\}$ is totally ordered by set inclusion. \item The set $\{(\uparrow{x})\setminus \{x\} : x\in V\}$ is totally ordered by set inclusion. \end{enumerate} \end{proposition} \begin{lemma}\label{lemma:neigbour-antichain}Let $P=(V,\leq)$ be an interval order and $x\in V$. Then the neighbours of $x$ (in $\inc( P)$) that lie on an induced path of length at least two in $\inc( P)$ starting at $x$ and whose vertices are in $\iinc_P( x)\cup \uparrow x$ form an antichain in $P$. \end{lemma} \begin{proof} Let $x:=x_0, x_1,\dots, x_{n}$ and $x:=x'_0, x'_1,\dots, x'_{n'}$ be two induced paths in $\inc( P)$ with $n,n'\geq 2$ and whose vertices are in $\iinc_P( x)\cup \uparrow x$. Note that necessarily $x<x_2$ and $x<x'_2$. Suppose for a contradiction that $x_1$ and $x'_1$ are comparable. Suppose first that $x_1<x'_1$. Since $x<x_2$, since $x_1$ is incomparable to $x$ and to $x_2$, since $x$ is incomparable to $x'_1$, and since $P$ is an interval order, we infer that $x'_1$ is comparable to $x_2$; hence $x<x'_1$ or $x_1<x_2$, which is impossible. The case $x'_1<x_1$ can be dealt with similarly by considering the comparabilities $x'_1<x_1$ and $x<x'_2$. \end{proof} \begin{proof}(Of Theorem \ref{thm:intervalorder-isometric}) Let $x_0\in P$ and set $I_0:=\iinc_P( x_0)\cup \downarrow x_0$ and $F_0:=\iinc_P( x_0)\cup \uparrow x_0$. Clearly, $V(G)=I_0\cup F_0$. Furthermore, since the diameter of $G$ is infinite and $G_{\restriction I_0}$ and $G_{\restriction F_0}$ are isometric subgraphs of $G$, we infer that the diameter of $G_{\restriction I_0}$ or of $G_{\restriction F_0}$ is infinite. We may assume without loss of generality that the diameter of $G_0:=G_{\restriction F_0}$ is infinite.
Hence, the lengths of isometric paths in $G_0$ starting at $x_0$ are unbounded.\\ \textbf{Claim 1:} There exists $x_1\in \iinc_P( x_0)$ such that the lengths of isometric paths in $G_0$ starting at $x_0$ and going through $x_1$ are unbounded.\\ Since the antichains of $P$ are finite, there are only finitely many neighbours of $x_0$ in $G_0$ lying on isometric paths starting at $x_0$ and of length at least two. Hence there must be a neighbour $x_1$ of $x_0$ in $G_0$ such that the lengths of isometric paths in $G_{\restriction {F_0}}$ starting at $x_0$ and going through $x_1$ are unbounded.\\ Now suppose we have constructed an isometric path $x_0,...,x_n$ such that $x_i< x_j$ for all $j-i\geq 2$ and that the lengths of isometric paths starting at $x_0$ and going through $x_0,...,x_n$ are unbounded. From Lemma \ref{lemma:neigbour-antichain} we deduce that there are only finitely many neighbours of $x_n$ that lie on such isometric paths. Applying Claim 1 to $x_n$ we deduce that there exists $x_{n+1}>x_{n-1}$ such that $x_0,...,x_n,x_{n+1}$ is an isometric path of length $n+1$. \end{proof} We now proceed to the proof of Example \ref{thm:intervalorder-non-isometric}. \begin{proof} We totally order the set $\NN\times \NN$ as follows: $(n,m)\leq (n', m')$ if $m<m'$ or ($m=m'$ and $n\leq n'$). Consider the set $Q$ of intervals $X_{n, m}:=[(n, m), (n, m+1)[$ ordered as in (\ref{ordre-sur-intervalles-recall}) above and set $G:=\inc(Q)$. Then $X_{n, m} < X_{n', m'}$ if and only if $m+1<m'$ or ($m+1=m'$ and $n\leq n'$). Equivalently, $\{X_{n, m},X_{n', m'}\}$ is an edge of $G$ if and only if $m=m'$ or ($m'=m+1$ and $n'<n$) or ($m=m'+1$ and $n<n'$). \\ \textbf{Claim 1:} $G$ is connected and has infinite diameter.\\ Let $X_{n, m}$ and $X_{n', m'}$ be two elements of $Q$ with $n\leq n'$. We may suppose without loss of generality that $X_{n, m}\cap X_{n', m'}=\varnothing$ and that $m< m'$.
Consider the sequence of intervals $X_{n,m}, X_{n+1,m}, X_{n,m+1}, X_{n+1,m+1}, X_{n,m+2},\dots,X_{n,m'},X_{n',m'}$. This is easily seen to be a path in $G$, proving that $G$ is connected.\\ \textbf{Claim 2:} $G$ has no isometric infinite path starting at $X_{0,0}$.\\ Let $X_{0,0}=:Y_0, \dots, Y_r,\dots$ be an isometric path. Then $Y_1=X_{n_1,0}$ for some $n_1\in \NN$. Now $Y_2$ must intersect $Y_1$ but not $Y_0$. Hence, $Y_2=X_{n_2,1}$ for some $n_2<n_1$. Now $Y_3$ must intersect $Y_2$ but not $Y_1$. Suppose $Y_3=X_{n',1}$. Then $n_1\leq n'$. But then $X_{n'+1,0}$ intersects $Y_3$ and $Y_0$ and therefore the distance in $G$ between $Y_0$ and $Y_3$ is two, contradicting our assumption that $X_{0,0}=:Y_0, \dots, Y_r,\dots$ is isometric. Hence, we must have $Y_3=X_{n_3,2}$ for some $n_3<n_2$. An induction argument shows that $Y_r=X_{n_r,r-1}$ with $n_r<n_{r-1}<...<n_1$. Since there are no infinite strictly decreasing sequences of nonnegative integers the isometric path $X_{0,0}=:Y_0, \dots, Y_r,\dots$ must be finite. This completes the proof of Claim 2.\\ It follows from Theorem \ref{thm:polat} that $G$ has no infinite isometric path. \end{proof} \section*{Acknowledgement} We are indebted to an anonymous referee for their careful examination of the paper and for their comments which improved its presentation.
2,869,038,155,182
arxiv
\section{Introduction}\label{S:intro} A key tool in the algebraic theory of data structures is their specification by operations (constructors) and equations that they ought to satisfy. Hence, the study of models of equational specifications has been of long standing interest both in mathematics and computer science. The seminal result in this field is Birkhoff's celebrated HSP theorem~\cite{Birkhoff35}. It states that a class of algebras over a signature $\Sigma$ is a \emph{variety} (i.e.~closed under \underline{h}omomorphic images, \underline{s}ubalgebras, and \underline{p}roducts) iff it is axiomatizable by equations $s=t$ between $\Sigma$-terms. Birkhoff also introduced a complete deduction system for reasoning about equations. In algebraic approaches to the semantics of programming languages and computational effects, it is often natural to study algebras whose underlying sets are equipped with additional computationally relevant structure and whose operations preserve that structure. An important line of research thus concerns extensions of Birkhoff's theory of equational axiomatization beyond ordinary $\Sigma$-algebras. On the syntactic level, this requires to enrich Birkhoff's notion of an equation in ways that reflect the extra structure. Let us mention a few examples: \begin{enumerate} \item \emph{Ordered algebras} (given by a poset and monotone operations) and \emph{continuous algebras} (given by a complete partial order and continuous operations) were identified by the ADJ group \cite{goguen77} as an important tool in denotational semantics. Subsequently, Bloom \cite{bloom76} and Ad\'amek, Nelson, and Reiterman \cite{adamek85,adamek88} established ordered versions of the HSP theorem along with complete deduction systems. Here, the role of equations $s=t$ is taken over by inequations $s\leq t$. 
\item \emph{Quantitative algebras} (given by an extended metric space and nonexpansive operations) naturally arise as semantic domains in the theory of probabilistic computation. In recent work, Mardare, Panangaden, and Plotkin \cite{Mardare16,MardarePP17} presented an HSP theorem for quantitative algebras and a complete deduction system. In the quantitative setting, equations $s=_\epsilon t$ are equipped with a non-negative real number $\epsilon$, interpreted as ``$s$ and $t$ have distance at most $\epsilon$''. \item \emph{Nominal algebras} (given by a nominal set and equivariant operations) are used in the theory of name binding \cite{pitts_2013} and have proven useful for characterizing logics for data languages \cite{boj13,clp15}. Varieties of nominal algebras were studied by Gabbay~\cite{gabbay09} and Kurz and Petri\c{s}an~\cite{KP10}. Here, the appropriate syntactic concept involves equations $s=t$ with constraints on the support of their variables. \item \emph{Profinite algebras} (given by a profinite topological space and continuous operations) play a central role in the algebraic theory of formal languages \cite{pin09}. They serve as a technical tool in the investigation of \emph{pseudovarieties} (i.e. classes of {finite} algebras closed under homomorphic images, subalgebras, and {finite} products). As shown by Reiterman \cite{Reiterman1982} and Eilenberg and Schützenberger~\cite{es76}, pseudovarieties can be axiomatized by \emph{profinite equations} (formed over free profinite algebras) or, equivalently, by sequences of ordinary equations $(s_i=t_i)_{i<\omega}$, interpreted as ``all but finitely many of the equations $s_i=t_i$ hold''. \end{enumerate} The present paper proposes a general category theoretic framework that allows to study classes of algebras with extra structure in a systematic way. Our overall goal is to isolate the domain-specific part of any theory of equational axiomatization from its generic core. 
Our framework is parametric in the following data: \begin{itemize} \item a category $\mathscr{A}$ with a factorization system $(\mathcal{E},\mathcal{M})$; \item a full subcategory $\mathscr{A}_0\subseteq \mathscr{A}$; \item a class $\Lambda$ of cardinal numbers; \item a class $\mathscr{X}\subseteq \mathscr{A}$ of objects. \end{itemize} Here, $\mathscr{A}$ is the category of algebras under consideration (e.g. ordered algebras, quantitative algebras, nominal algebras). Varieties are formed within $\mathscr{A}_0$, and the cardinal numbers in $\Lambda$ determine the arities of products under which the varieties are closed. Thus, the choice $\mathscr{A}_0 = $ finite algebras and $\Lambda =$ finite cardinals corresponds to pseudovarieties, and $\mathscr{A}_0= \mathscr{A}$ and $\Lambda=$ all cardinals to varieties. The crucial ingredient of our setting is the parameter $\mathscr{X}$, which is the class of objects over which equations are formed; thus, typically, $\mathscr{X}$ is chosen to be some class of freely generated algebras in $\mathscr{A}$. Equations are modeled as $\mathcal{E}$-quotients $e\colon X\twoheadrightarrow E$ (more generally, filters of such quotients) with domain $X\in \mathscr{X}$. The choice of $\mathscr{X}$ reflects the desired expressivity of equations in a given setting. Furthermore, it determines the type of quotients under which equationally axiomatizable classes are closed. More precisely, in our general framework a \emph{variety} is defined to be a subclass of $\mathscr{A}_0$ closed under $\mathcal{E}_\mathscr{X}$-quotients, $\mathcal{M}$-subobjects, and $\Lambda$-products, where $\mathcal{E}_\mathscr{X}$ is a subclass of $\mathcal{E}$ derived from $\mathscr{X}$. Due to its parametric nature, this concept of a variety is widely applicable and turns out to specialize to many interesting cases. The main result of our paper is the \begin{HSP} A subclass of $\mathscr{A}_0$ forms a variety if and only if it is axiomatizable by equations. 
\end{HSP} In addition, we introduce a generic deduction system for equations, based on two simple proof rules (see \autoref{S:logic}), and establish a \begin{CompThm} The generic deduction system for equations is sound and complete. \end{CompThm} The above two theorems can be seen as the generic building blocks of the model theory of algebras with structure. They form the common core of numerous Birkhoff-type results and give rise to a systematic recipe for deriving concrete HSP and completeness theorems in settings such as (1)--(4). In fact, all that needs to be done is to translate our abstract notion of equation and equational deduction, which involves (filters of) quotients, into an appropriate syntactic concept. This is the domain-specific task to fulfill, and usually amounts to identifying an ``exactness'' property for the category $\mathscr{A}$. Subsequently, one can apply our general results to obtain HSP and completeness theorems for the type of algebras under consideration. Several instances of this approach are shown in \autoref{S:app}. Proofs of all results and details for the examples can be found in the Appendix. \paragraph{Related work.} Generic approaches to universal algebra have a long tradition in category theory. They aim to replace syntactic notions like terms and equations by suitable categorical abstractions, most prominently Lawvere theories and monads \cite{arv10,manes76}. Our present work draws much of its inspiration from the classical paper of Banaschewski and Herrlich~\cite{BanHerr1976} on HSP classes in $(\mathcal{E},\mathcal{M})$-structured categories. These authors were the first to model equations as quotients $e\colon X\twoheadrightarrow E$. However, their approach does not feature the parameter $\mathscr{X}$ and assumes that equations are formed over $\mathcal{E}$-projective objects $X$. 
This limits the scope of their results to categories with enough projectives, a property that frequently fails in categories of algebras with structure (including continuous, quantitative or nominal algebras). The introduction of the parameter $\mathscr{X}$ in our paper, along with the identification of the derived parameter $\mathcal{E}_\mathscr{X}$ as a key concept, is therefore a crucial step in order to gain a categorical understanding of such structures. Equational logics on the level of abstraction of Banaschewski and Herrlich's work were studied by Ro\c{s}u \cite{rosu01,rosu06} and Ad\'amek, H\'ebert, and Sousa \cite{ahs07}. These authors work under assumptions on the category $\mathscr{A}$ different from ours, e.g. they require the existence of pushouts. Hence, the proof rules and completeness results in \emph{loc. cit.} are not directly comparable to our approach in \autoref{S:logic}. In the present paper, we opted to model equations as filters of quotients rather than single quotients, which allows us to encompass several HSP theorems for finite algebras \cite{es76,Reiterman1982,PinWeil1996}. The first categorical generalization of such results was given by Ad\'amek, Chen, Milius, and Urbat \cite{camu16, uacm17}, who considered algebras for a monad $\mathbb{T}$ on an algebraic category and modeled equations as filters of finite quotients of free $\mathbb{T}$-algebras (equivalently, as profinite quotients of free profinite $\mathbb{T}$-algebras). This idea was further generalized by Salam\'anca \cite{s16} to monads on concrete categories. However, again, this work only applies to categories with enough projectives, which excludes most of our present applications. \paragraph{Acknowledgement.} The authors would like to thank Thorsten Wi\ss mann for insightful discussions on nominal sets. \section{Preliminaries}\label{S:prelim} We start by recalling some notions from category theory. 
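Before the abstract definitions recalled below, it may help to keep the prototypical example in mind: in $\mathbf{Set}$, every map factors as a surjection onto its image followed by an injective inclusion. A small Python sketch of this (surjective, injective) factorization for finite sets; the encoding of the two factors as dictionaries is our own, purely for illustration:

```python
# Illustrative only: the (surjective, injective) factorization of a map
# between finite sets -- the prototypical proper factorization system on Set.
# Both factors are represented as dicts mapping elements to their values.

def factorize(f, domain):
    """Split f : domain -> ? into a surjection e onto the image of f,
    followed by the injective inclusion m of that image."""
    image = sorted({f(x) for x in domain})   # the object through which f factors
    e = {x: f(x) for x in domain}            # coimage: surjective onto image
    m = {y: y for y in image}                # image part: injective inclusion
    return e, m, image

e, m, img = factorize(lambda x: x % 3, range(10))
assert img == [0, 1, 2]
assert all(m[e[x]] == x % 3 for x in range(10))   # f = m . e
```

Here `e` and `m` play the roles of coimage and image; the diagonal fill-in property in the definition below is what makes such factorizations essentially unique and well behaved in commutative squares.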
A \emph{factorization system} $(\mathcal{E},\mathcal{M})$ in a category $\mathscr{A}$ consists of two classes $\mathcal{E},\mathcal{M}$ of morphisms in $\mathscr{A}$ such that (1) both~$\mathcal{E}$ and $\mathcal{M}$ contain all isomorphisms and are closed under composition, (2)~every morphism $f$ has a factorization $f = m\o e$ with $e\in \mathcal{E}$ and $m\in \mathcal{M}$, and (3)~the \emph{diagonal fill-in} property holds: for every commutative square $g\o e = m\o f$ with $e\in \mathcal{E}$ and $m\in \mathcal{M}$, there exists a unique $d$ with $m\o d = g$ and $d\o e = f$. The morphisms $m$ and $e$ in (2) are unique up to isomorphism and are called the \emph{image} and \emph{coimage} of $f$, resp. The factorization system is \emph{proper} if all morphisms in $\mathcal{E}$ are epic and all morphisms in $\mathcal{M}$ are monic. From now on, we will assume that $\mathscr{A}$ is a category equipped with a proper factorization system $(\mathcal{E},\mathcal{M})$. Quotients and subobjects in $\mathscr{A}$ are taken with respect to $\mathcal{E}$ and $\mathcal{M}$. That is, a \emph{quotient} of an object $X$ is represented by a morphism $e\colon X \twoheadrightarrow E$ in $\mathcal{E}$ and a \emph{subobject} by a morphism $m\colon M \rightarrowtail X$ in $\mathcal{M}$. The quotients of $X$ are ordered by $e \leq e'$ iff $e'$ factorizes through $e$, i.e. there exists a morphism $h$ with $e' = h \o e$. Identifying quotients $e$ and $e'$ which are isomorphic (i.e. $e\leq e'$ and $e'\leq e$), this makes the quotients of $X$ a partially ordered class. Given a full subcategory $\mathscr{A}_0 \subseteq \mathscr{A}$ we denote by $X\mathord{\epidownarrow} \mathscr{A}_0$ the class of all quotients of $X$ represented by $\mathcal{E}$-morphisms with codomain in $\mathscr{A}_0$. The category $\mathscr{A}$ is \emph{$\mathcal{E}$-co-wellpowered} if for every object $X\in \mathscr{A}$ there is only a set of quotients with domain $X$. 
In particular, $X\mathord{\epidownarrow} \mathscr{A}_0$ is then a po\emph{set}. Finally, an object $X\in \mathscr{A}$ is called \emph{projective} w.r.t. a morphism $e\colon A\to B$ if for every $h\colon X\to B$, there exists a morphism $g\colon X\to A$ with $h = e\o g$. \section{The Generalized Variety Theorem}\label{S:hsp} In this section, we introduce our categorical notions of equation and variety, and derive the HSP theorem. For the rest of the paper, we fix the data mentioned in the introduction: a category $\mathscr{A}$ with a proper factorization system $(\mathcal{E}, \mathcal{M})$, a full subcategory $\mathscr{A}_0 \subseteq \mathscr{A}$, a class $\Lambda$ of cardinal numbers, and a class $\mathscr{X} \subseteq \mathscr{A}$ of objects. An object of $\mathscr{A}$ is called \emph{$\mathscr{X}$-generated} if it is a quotient of some object in $\mathscr{X}$. A key role in the following development will be played by the subclass $\mathcal{E}_\mathscr{X}\subseteq \mathcal{E}$ defined by \[ \mathcal{E}_\mathscr{X} = \{\,e\in \mathcal{E} \;:\; \text{every $X \in \mathscr{X}$ is projective w.r.t.~$e$}\,\}. \] Note that $\mathscr{X}\subseteq \mathscr{X}'$ implies $\mathcal{E}_{\mathscr{X}'}\subseteq \mathcal{E}_\mathscr{X}$. The choice of $\mathscr{X}$ is a trade-off between ``having enough equations'' (that is, $\mathscr{X}$ needs to be rich enough to make equations sufficiently expressive) and ``having enough projectives'' (that is, $\mathcal{E}_\mathscr{X}$ needs to generate $\mathscr{A}_0$, as stated in~\ref{A3} below). \begin{assumptions}\label{asm:setting} Our data is required to satisfy the following properties: \begin{enumerate} \item\label{A1} $\mathscr{A}$ has $\Lambda$-products, i.e. for every $\lambda \in \Lambda$ and every family $(A_i)_{i < \lambda}$ of objects in $\mathscr{A}$, the product $\prod_{i< \lambda} A_i$ exists. \item\label{A2} $\mathscr{A}_0$ is closed under isomorphisms, $\Lambda$-products and $\mathscr{X}$-generated subobjects. 
The last statement means that for every subobject $m\colon A \rightarrowtail B$ in $\mathcal{M}$ where $B\in \mathscr{A}_0$ and $A$ is $\mathscr{X}$-generated, one has $A \in \mathscr{A}_0$. \item\label{A3} Every object of $\mathscr{A}_0$ is an $\mathcal{E}_\mathscr{X}$-quotient of some object of $\mathscr{X}$, that is, for every object $A \in \mathscr{A}_0$ there exists some $e\colon X \twoheadrightarrow A$ in $\mathcal{E}_\mathscr{X}$ with domain $X\in \mathscr{X}$. \end{enumerate} \end{assumptions} \begin{examples}\label{ex:running} Throughout this section, we will use the following three running examples to illustrate our concepts. For further applications, see \autoref{S:app}. \begin{enumerate} \item\label{ex:running:birkhoff} \emph{Classical $\Sigma$-algebras.} The setting of Birkhoff's seminal work \cite{Birkhoff35} in general algebra is that of algebras for a signature. Recall that a \emph{(finitary) signature} is a set $\Sigma$ of operation symbols each with a prescribed finite arity, and a \emph{$\Sigma$-algebra} is a set $A$ equipped with operations $\sigma\colon A^n \to A$ for each $n$-ary $\sigma\in \Sigma$. A \emph{morphism} of $\Sigma$-algebras (or a \emph{$\Sigma$-homomorphism}) is a map preserving all $\Sigma$-operations. The forgetful functor from the category $\Alg{\Sigma}$ of $\Sigma$-algebras and $\Sigma$-homomorphisms to $\mathbf{Set}$ has a left adjoint assigning to each set $X$ the \emph{free $\Sigma$-algebra} $T_\Sigma X$, carried by the set of all $\Sigma$-terms in variables from $X$. To treat Birkhoff's results in our categorical setting, we choose the following parameters: \begin{itemize} \item $\mathscr{A} = \mathscr{A}_0 = \Alg{\Sigma}$; \item $(\mathcal{E},\mathcal{M}) =$ (surjective morphisms, injective morphisms); \item $\Lambda =$ all cardinal numbers; \item $\mathscr{X}$ = all free $\Sigma$-algebras $T_\Sigma X$ with $X\in \mathbf{Set}$. 
\end{itemize} One easily verifies that $\mathcal{E}_\mathscr{X}$ consists of all surjective morphisms, that is, $\mathcal{E}_\mathscr{X} = \mathcal{E}$. \item\label{ex:running:eilenschuetz} \emph{Finite $\Sigma$-algebras.} Eilenberg and Schützenberger~\cite{es76} considered classes of finite $\Sigma$-algebras, where $\Sigma$ is assumed to be a signature with only finitely many operation symbols. In our framework, this amounts to choosing \begin{itemize} \item $\mathscr{A}=\Alg{\Sigma}$ and $\mathscr{A}_0 = \FAlg{\Sigma}$, the full subcategory of finite $\Sigma$-algebras; \item $(\mathcal{E},\mathcal{M})=$ (surjective morphisms, injective morphisms); \item $\Lambda =$ all finite cardinal numbers; \item $\mathscr{X}=$ all free $\Sigma$-algebras $T_\Sigma X$ with $X\in\mathbf{Set}_\mathsf{f}$. \end{itemize} As in~\ref{ex:running:birkhoff}, the class $\mathcal{E}_\mathscr{X}$ consists of all surjective morphisms. \item\label{ex:running:mardare} \emph{Quantitative $\Sigma$-algebras.} In recent work, Mardare, Panangaden, and Plotkin \cite{Mardare16,MardarePP17} extended Birkhoff's theory to algebras endowed with a metric. Recall that an \emph{extended metric space} is a set $A$ with a map $d_A\colon A\times A\to [0,\infty]$ (assigning to any two points a possibly infinite distance), subject to the axioms (i) $d_A(a,b)=0$ iff $a=b$, (ii) $d_A(a,b)=d_A(b,a)$, and (iii) $d_A(a,c)\leq d_A(a,b)+d_A(b,c)$ for all $a,b,c\in A$. A map $h\colon A\to B$ between extended metric spaces is \emph{nonexpansive} if $d_B(h(a),h(a'))\leq d_A(a,a')$ for $a,a'\in A$. Let $\mathbf{Met}_\infty$ denote the category of extended metric spaces and nonexpansive maps. Fix a, not necessarily finitary, signature $\Sigma$, that is, the arity of an operation symbol $\sigma\in \Sigma$ is any cardinal number. A \emph{quantitative $\Sigma$-algebra} is a $\Sigma$-algebra $A$ endowed with an extended metric $d_A$ such that all $\Sigma$-operations $\sigma\colon A^n\to A$ are nonexpansive. 
Here, the product $A^n$ is equipped with the $\sup$-metric $d_{A^n}((a_i)_{i<n}, (b_i)_{i<n}) = \sup_{i<n} d_A(a_i,b_i)$. The forgetful functor from the category $\QAlg{\Sigma}$ of quantitative $\Sigma$-algebras and nonexpansive $\Sigma$-homomorphisms to $\mathbf{Met}_\infty$ has a left adjoint assigning to each space $X$ the free quantitative $\Sigma$-algebra $T_\Sigma X$. The latter is carried by the set of all $\Sigma$-terms (equivalently, well-founded $\Sigma$-trees) over $X$, with metric inherited from $X$ as follows: if $s$ and $t$ are $\Sigma$-terms of the same shape, i.e.~they differ only in the variables, their distance is the supremum of the distances of the variables in corresponding positions of $s$ and $t$; otherwise, it is $\infty$. We aim to derive the HSP theorem for quantitative algebras proved by Mardare et al. as an instance of our general results. The theorem is parametric in a regular cardinal number $c>1$. In the following, an extended metric space is called \emph{$c$-clustered} if it is a coproduct of spaces of size $<c$. Note that coproducts in $\mathbf{Met}_\infty$ are formed on the level of underlying sets. Choose the parameters \begin{itemize} \item $\mathscr{A} = \mathscr{A}_0 = \QAlg{\Sigma}$; \item $(\mathcal{E},\mathcal{M})$ given by morphisms carried by surjections and subspaces, resp.; \item $\Lambda = $ all cardinal numbers; \item $\mathscr{X} =$ all free algebras $T_\Sigma X$ with $X\in \mathbf{Met}_\infty$ a $c$-clustered space. \end{itemize} One can verify that a quotient $e\colon A\twoheadrightarrow B$ belongs to $\mathcal{E}_\mathscr{X}$ if and only if for each subset $B_0\subseteq B$ of cardinality $<c$ there exists a subset $A_0\subseteq A$ such that $e[A_0]=B_0$ and the restriction $e\colon A_0\to B_0$ is isometric (that is, $d_B(e(a),e(a')) = d_A(a,a')$ for $a,a'\in A_0$). Following the terminology of Mardare et al., such a quotient is called \emph{$c$-reflexive}. 
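For finite extended metric spaces and finite $c$, the $c$-reflexivity condition just stated can be tested by brute force over all subsets $B_0\subseteq B$ of size $<c$. An illustrative Python sketch (spaces encoded as lists of points plus distance dictionaries; the encoding is ours, not the authors' formalism):

```python
# Illustrative only: brute-force test of c-reflexivity for a surjection e
# between finite extended metric spaces, given as distance dicts d[(p, q)].
from itertools import combinations, product

def sym(d):
    """Symmetric closure of a distance dict, with zero diagonal."""
    pts = {x for pair in d for x in pair}
    full = {(p, p): 0.0 for p in pts}
    for (p, q), r in d.items():
        full[(p, q)] = full[(q, p)] = r
    return full

def is_c_reflexive(e, A, B, dA, dB, c):
    for k in range(1, c):                       # all subsets B0 with |B0| < c
        for B0 in combinations(B, k):
            fibers = [[a for a in A if e[a] == b] for b in B0]
            # search for a choice A0 (one point per fiber) on which e is isometric
            if not any(all(dA[(A0[i], A0[j])] == dB[(B0[i], B0[j])]
                           for i in range(k) for j in range(k))
                       for A0 in product(*fibers)):
                return False
    return True

A, B = ['a0', 'a1', 'b0'], ['x', 'y']
e = {'a0': 'x', 'a1': 'x', 'b0': 'y'}
dB = sym({('x', 'y'): 1.0})
dA = sym({('a0', 'a1'): 3.0, ('a0', 'b0'): 2.0, ('a1', 'b0'): 1.0})
assert is_c_reflexive(e, A, B, dA, dB, c=3)       # {x, y} lifts isometrically via {a1, b0}
dA2 = sym({('a0', 'a1'): 3.0, ('a0', 'b0'): 2.0, ('a1', 'b0'): 2.0})
assert not is_c_reflexive(e, A, B, dA2, dB, c=3)  # no point above x at distance 1 from b0
```

For $c=2$ the loop only inspects singletons, so the test always succeeds, in line with the observation below that every quotient is $2$-reflexive.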
Note that for $c=2$ every quotient is $c$-reflexive, so $\mathcal{E}_\mathscr{X} = \mathcal{E}$. If $c$ is infinite, $\mathcal{E}_\mathscr{X}$ is a proper subclass of $\mathcal{E}$. \end{enumerate} \end{examples} \begin{defn}\label{D:eq} An \emph{equation over $X\in\mathscr{X}$} is a class $\mathscr{T}_X\subseteq X\mathord{\epidownarrow} \mathscr{A}_0$ that is \begin{enumerate} \item\label{D:eq:1} \emph{$\Lambda$-codirected:} every subset $F\subseteq \mathscr{T}_X$ with $\under{F}\in \Lambda$ has a lower bound in $F$; \item\label{D:eq:2} \emph{closed under $\mathcal{E}_\mathscr{X}$-quotients:} for every $e\colon X\twoheadrightarrow E$ in $\mathscr{T}_X$ and $q\colon E\twoheadrightarrow E'$ in $\mathcal{E}_\mathscr{X}$ with $E'\in \mathscr{A}_0$, one has $q\o e\in \mathscr{T}_X$. \end{enumerate} An object $A\in \mathscr{A}$ \emph{satisfies} the equation $\mathscr{T}_X$ if every morphism $h\colon X\to A$ factorizes through some $e\in \mathscr{T}_X$. In this case, we write \[A \models \mathscr{T}_X.\] \end{defn} \begin{rem}\label{rem:singlequot} In many of our applications, one can simplify the above definition and replace classes of quotients by single quotients. Specifically, if $\mathscr{A}$ is $\mathcal{E}$-co-wellpowered (so that every equation is a set, not a class) and $\Lambda =$ all cardinal numbers, then every equation $\mathscr{T}_X\subseteq X\mathord{\epidownarrow} \mathscr{A}_0$ contains a least element $e_X\colon X\twoheadrightarrow E_X$, viz.~the lower bound of all elements in $\mathscr{T}_X$. Then an object $A$ satisfies $\mathscr{T}_X$ iff it satisfies $e_X$, in the sense that every morphism $h\colon X\to A$ factorizes through $e_X$. Therefore, in this case, one may equivalently define an equation to be a morphism $e_X\colon X\twoheadrightarrow E_X$ with $X\in \mathscr{X}$. This is the concept of equation investigated by Banaschewski and Herrlich \cite{BanHerr1976}. 
\end{rem} \begin{examples}\label{ex:running:eq} In our running examples, we obtain the following concepts: \begin{enumerate} \item \emph{Classical $\Sigma$-algebras.} By \autoref{rem:singlequot}, an equation corresponds to a quotient $e_X\colon T_\Sigma X\twoheadrightarrow E_X$ in $\Alg{\Sigma}$, where $X$ is a set of variables. \item \emph{Finite $\Sigma$-algebras.} An equation $\mathscr{T}_X$ over a finite set $X$ is precisely a filter (i.e. a codirected and upwards closed subset) in the poset $T_\Sigma X \mathord{\epidownarrow} \FAlg{\Sigma}$. \item \emph{Quantitative $\Sigma$-algebras.} By \autoref{rem:singlequot}, an equation can be presented as a quotient $e_X\colon T_\Sigma X\twoheadrightarrow E_X$ in $\QAlg{\Sigma}$, where $X$ is a $c$-clustered space. \end{enumerate} \end{examples} We shall demonstrate in \autoref{S:app} how to interpret the above abstract notions of equations, i.e. (filters of) quotients of free algebras, in terms of concrete syntax. \begin{defn}\label{D:var} A \emph{variety\xspace} is a full subcategory $\mathcal{V}\subseteq \mathscr{A}_0$ closed under $\mathcal{E}_\mathscr{X}$-quotients, subobjects, and $\Lambda$-products. More precisely, \begin{enumerate} \item for every $\mathcal{E}_\mathscr{X}$-quotient $e: A\twoheadrightarrow B$ in $\mathscr{A}_0$ with $A \in \mathcal{V}$ one has $B\in \mathcal{V}$, \item for every $\mathcal{M}$-morphism $m: A \rightarrowtail B$ in $\mathscr{A}_0$ with $B \in \mathcal{V}$ one has $A \in \mathcal{V}$, and \item for every family of objects $A_i$ ($i<\lambda$) in $\mathcal{V}$ with $\lambda\in \Lambda$ one has $\prod_{i<\lambda} A_i \in \mathcal{V}$. \end{enumerate} \end{defn} \begin{examples}\label{ex:running:variety} In our examples, we obtain the following notions of varieties: \begin{enumerate} \item \emph{Classical $\Sigma$-algebras.} A \emph{variety of $\Sigma$-algebras} is a class of $\Sigma$-algebras closed under quotient algebras, subalgebras, and products. 
This is Birkhoff's original concept \cite{Birkhoff35}. \item \emph{Finite $\Sigma$-algebras.} A \emph{pseudovariety of $\Sigma$-algebras} is a class of finite $\Sigma$-algebras closed under quotient algebras, subalgebras, and finite products. This concept was studied by Eilenberg and Schützenberger \cite{es76}. \item \emph{Quantitative $\Sigma$-algebras.} For any regular cardinal number $c>1$, a \emph{$c$-variety of quantitative $\Sigma$-algebras} is a class of quantitative $\Sigma$-algebras closed under $c$-reflexive quotients, subalgebras, and products. This notion of a variety was introduced by Mardare et al. \cite{MardarePP17}. \end{enumerate} \end{examples} \begin{construction}\label{constr:var} Given a class $\mathbb{E}$ of equations, put \[ \mathcal{V}(\mathbb{E}) = \{\, A\in \mathscr{A}_0 : \text{$A \models \mathscr{T}_X$ for each $\mathscr{T}_X \in \mathbb{E}$} \,\}. \] A subclass $\mathcal{V}\subseteq\mathscr{A}_0$ is called \emph{equationally presentable} if $\mathcal{V}=\mathcal{V}(\mathbb{E})$ for some $\mathbb{E}$. \end{construction} We aim to show that varieties coincide with the equationally presentable classes (see \autoref{thm:hspeq} below). The ``easy'' part of the correspondence is established by the following lemma, which is proved by a straightforward verification. \begin{lemma}\label{lem:var} For every class $\mathbb{E}$ of equations, $\mathcal{V}(\mathbb{E})$ is a variety\xspace. 
\end{lemma} As a technical tool for establishing the general HSP theorem and the corresponding sound and complete equational logic, we introduce the following concept: \begin{defn}\label{D:eqnth} An \emph{equational theory\xspace} is a family of equations \[ \mathscr{T} = (\,\mathscr{T}_X\subseteq X\mathord{\epidownarrow} \mathscr{A}_0\,)_{X\in \mathscr{X}} \] with the following two properties (illustrated by the diagrams below): \begin{enumerate} \item \emph{Substitution invariance.} For every morphism $h\colon X\to Y$ with $X,Y\in\mathscr{X}$ and every $e_Y\colon Y\twoheadrightarrow E_Y$ in $\mathscr{T}_Y$, the coimage $e_X\colon X\twoheadrightarrow E_X$ of $e_Y\o h$ lies in $\mathscr{T}_X$. \item \emph{$\mathcal{E}_\mathscr{X}$-completeness.} For every $Y\in \mathscr{X}$ and every quotient $e\colon Y\twoheadrightarrow E_Y$ in $\mathscr{T}_Y$, there exists an $X\in\mathscr{X}$ and a quotient $e_X\colon X\twoheadrightarrow E_X$ in $\mathscr{T}_X\cap \mathcal{E}_\mathscr{X}$ with $E_X=E_Y$. \end{enumerate} \[ \xymatrix@=19pt{ X \ar[r]^{\forall h} \ar@{->>}[d]_{ e_X} & Y \ar@{->>}[d]^{\forall e_Y}\\ E_X \ar@{>->}[r] & E_Y } \qquad \xymatrix@=19pt{ X \ar@{.>>}[d]_{\exists e_X} & Y \ar@{->>}[d]^{\forall e_Y}\\ E_X\ar@{=}[r] & E_Y } \] \end{defn} \begin{rem}\label{rem:singlequotth} In many settings, the slightly technical concept of an equational theory can be simplified. First, note that $\mathcal{E}_\mathscr{X}$-completeness is trivially satisfied whenever $\mathcal{E}_\mathscr{X}=\mathcal{E}$. If, additionally, every equation contains a least element (e.g. in the setting of \autoref{rem:singlequot}), an equational theory corresponds exactly to a family of quotients $(e_X\colon X\twoheadrightarrow E_X)_{X\in \mathscr{X}}$ such that $E_X\in \mathscr{A}_0$ for all $X\in \mathscr{X}$, and for every $h\colon X\to Y$ with $X,Y\in \mathscr{X}$ the morphism $e_Y\o h$ factorizes through $e_X$. 
\end{rem} \begin{example_}[Classical $\Sigma$-algebras]\label{ex:theory} Recall that a \emph{congruence} on a $\Sigma$-algebra $A$ is an equivalence relation $\mathord{\equiv}\subseteq A\times A$ that forms a subalgebra of $A\times A$. It is well-known that there is an isomorphism of complete lattices \begin{equation}\label{eq:homtheorem} \text{quotient algebras of $A$} \quad\cong\quad \text{congruences on $A$} \end{equation} assigning to a quotient $e\colon A\twoheadrightarrow B$ its \emph{kernel}, given by $a\equiv_e a'$ iff $e(a)= e(a')$. Consequently, in the setting of Example \ref{ex:running}\ref{ex:running:birkhoff}, an equational theory -- presented as a family of single quotients as in \autoref{rem:singlequotth} -- corresponds precisely to a family of congruences $(\mathord{\equiv_X}\subseteq T_\Sigma X\times T_\Sigma X)_{X\in \mathbf{Set}}$ closed under substitution, that is, for every $s,t\in T_\Sigma X$ and every morphism $h\colon T_\Sigma X\to T_\Sigma Y$ in $\Alg{\Sigma}$, \[ s\equiv_X t \quad\text{implies}\quad h(s)\equiv_Y h(t). \] \end{example_} We saw in \autoref{lem:var} that every class of equations, so in particular every equational theory\xspace $\mathscr{T}$, yields a variety\xspace $\mathcal{V}(\mathscr{T})$ consisting of all objects of $\mathscr{A}_0$ that satisfy every equation in $\mathscr T$. Conversely, to every variety one can associate an equational theory as follows: \begin{construction}\label{constr:eqnth} Given a variety\xspace $\mathcal{V}$, form the family of equations \[ \mathscr{T}(\mathcal{V}) = (\,\mathscr{T}_X \subseteq X\mathord{\epidownarrow}\mathscr{A}_0\,)_{X\in\mathscr{X}}, \] where $\mathscr{T}_X$ consists of all quotients $e_X\colon X\twoheadrightarrow E_X$ with codomain $E_X\in \mathcal{V}$. \end{construction} \begin{lemma}\label{lem:tvistheory} For every variety\xspace $\mathcal{V}$, the family $\mathscr{T}(\mathcal{V})$ is an equational theory\xspace. 
\end{lemma} We are ready to state the first main result of our paper, the HSP Theorem. Given two equations $\mathscr{T}_X$ and $\mathscr{T}_X'$ over $X\in \mathscr{X}$, we put $\mathscr{T}_X\leq \mathscr{T}_X'$ if every quotient in $\mathscr{T}_X'$ factorizes through some quotient in $\mathscr{T}_X$. Theories form a poset with respect to the order $\mathscr{T}\leq \mathscr{T}'$ iff $\mathscr{T}_X\leq \mathscr{T}_X'$ for all $X\in \mathscr{X}$. Similarly, varieties form a poset (in fact, a complete lattice) ordered by inclusion. \begin{theorem}[HSP Theorem]\label{thm:hsp} The complete lattices of equational theories\xspace and varieties\xspace are dually isomorphic. The isomorphism is given by \[ \mathcal{V}\mapsto \mathscr{T}(\mathcal{V}) \quad\text{and}\quad \mathscr{T}\mapsto \mathcal{V}(\mathscr{T}). \] \end{theorem} One can recast the HSP Theorem into a more familiar form, using equations in lieu of equational theories: \begin{theorem}[HSP Theorem, equational version]\label{thm:hspeq} A class $\mathcal{V}\subseteq \mathscr{A}_0$ is equationally presentable if and only if it forms a variety. \end{theorem} \begin{proof} By \autoref{lem:var}, every equationally presentable class $\mathcal{V}(\mathbb{E})$ is a variety. Conversely, for every variety\xspace $\mathcal{V}$ one has $\mathcal{V}=\mathcal{V}(\mathscr{T}(\mathcal{V}))$ by \autoref{thm:hsp}, so $\mathcal{V}$ is presented by the equations $\mathbb{E} = \{\,\mathscr{T}_X: X\in \mathscr{X}\,\}$ where $\mathscr{T} = \mathscr{T}(\mathcal{V})$. 
\end{proof} \takeout{ \section{The Generalized Variety Theorem}\label{S:hsp} For the rest of the paper we fix the data mentioned in the introduction: a category $\mathscr{A}$ equipped with the factorization system $(\mathcal{E}, \mathcal{M})$, a full subcategory $\mathscr{A}_0 \subseteq \mathscr{A}$, a class $\mathscr{X} \subseteq \mathscr{A}$ of objects and a class $\Lambda$ of regular cardinal numbers. An object of $\mathscr{A}$ is \emph{$\mathscr{X}$-generated} if it is an $\mathcal{E}$-quotient of some object in $\mathscr{X}$. We put \[ \mathcal{E}_\mathscr{X} = \{\,e\in \mathcal{E} : \text{every $X \in \mathscr{X}$ is projective w.r.t.~$e$}\,\}. \] Notice that $\mathcal{E}_\mathscr{X}$ clearly contains all isomorphisms of $\mathscr{A}$. \begin{assumptions}\label{asm:setting} For the remainder of this paper, we assume that: \begin{enumerate} \item\label{A1} $\mathscr{A}$ has $\Lambda$-products, i.e., for every $\lambda \in \Lambda$ and every family $(A_i)_{i < \lambda}$ of objects in $\mathscr{A}$, the product $\prod_{i< \lambda} A_i$ exists; \item\label{A2} $\mathscr{A}_0$ is closed under $\Lambda$-products and $\mathscr{X}$-generated subobjects; the latter means that for every $m: A \rightarrowtail B$ where $B\in \mathscr{A}_0$ and $A$ is $\mathscr{X}$-generated, we have $A \in \mathscr{A}_0$. \item\label{A3} every object of $\mathscr{A}_0$ is an $\mathcal{E}_\mathscr{X}$-quotient of some object of $\mathscr{X}$, i.e.,~for every object $A \in \mathscr{A}_0$ there exists a morphism $X \twoheadrightarrow A$ in $\mathcal{E}_\mathscr{X}$ with $X\in \mathscr{X}$. \end{enumerate} \end{assumptions} \begin{examples} Throughout this section we will use the setting of the classical Birkhoff variety theorem, Bloom's version for ordered algebras as well as Mardare et al.'s recent metric variety theorem as our running examples. Further instances of our theory are presented in \autoref{S:app}. 
\begin{enumerate} \item The setting of Birkhoff's classical theory is that of algebras for a signature. A signature is a set $\Sigma$ of operation symbols each with a prescribed finite arity, and a $\Sigma$-algebra is a set $A$ equipped with operations $A^n \to A$ for each $n$-ary operation symbol. Here one takes: \begin{itemize} \item $\mathscr{A} = \mathscr{A}_0 =$ the category $\Alg{\Sigma}$ of all $\Sigma$-algebras, \item $\mathscr{X}$ the class of all free $\Sigma$-algebras, i.e.~all algebras $T_\Sigma X$ of $\Sigma$-terms over any set $X$ of generators, and \item $\Lambda =$ all regular cardinals. \end{itemize} \item Similarly, for Bloom's variety theorem one takes $\mathscr{A} = \mathscr{A}_0 =$ the category $\OAlg{\Sigma}$ of ordered $\Sigma$-algebras, i.e.~posets $A$ equipped with monotone operations, and $\mathscr{X}$ and $\Lambda$ are as above. \item Metric algebras\dots \end{enumerate} \end{examples} \begin{rem} One might expect that $\mathscr{A}_0$ be required to be closed under \emph{all} subobjects rather than just $\mathscr{X}$-generated ones. However, this weaker requirement will allow us to accommodate Wilke's HSP theorem~??? as an instance of our results. Cf.~the corresponding requirement in \autoref{D:var}. \end{rem} We note the following properties of the class $\mathcal{E}_\mathscr{X}$: \begin{lemma}\label{lem:ex} \begin{enumerate} \item\label{lem:ex:1} The class $\mathcal{E}_\mathscr{X}$ is closed under composition. \item\label{lem:ex:2} $p\in \mathcal{E}$ and $q\o p\in \mathcal{E}_\mathscr{X}$ implies $q\in \mathcal{E}_\mathscr{X}$. 
\end{enumerate} \end{lemma} \begin{proof} For~\ref{lem:ex:1} let $p: A \twoheadrightarrow B$ and $q: B \twoheadrightarrow C$ be in $\mathcal{E}_\mathscr{X}$. Since $\mathcal{E}$ is closed under composition, we have $q \o p \in \mathcal{E}$. Projectivity of $q \o p$ easily follows from that of $p$ and $q$: given any morphism $h: X \to C$ with $X \in \mathscr{X}$ we obtain $h': X \to B$ with $q \o h' = h$ by projectivity of $q$, and then we obtain $h'': X \to A$ with $p \o h'' = h'$ by projectivity of $p$. Thus, we have $(q\o p) \o h'' = h$. For~\ref{lem:ex:2}, note that $q\in \mathcal{E}$ by the cancellation law. The projectivity of objects of $\mathscr{X}$ w.r.t.~$q$ follows easily from the corresponding property of $q\o p$: suppose that $h: X \to C$ with $X \in \mathscr{X}$, then we have $h': X \to A$ with $q \o (p \o h') = h$. \end{proof} \begin{rem}\label{rem:atoalgt} The conditions \ref{A1}--\ref{A3} are inherited by Eilenberg-Moore categories over $\mathscr{A}$. Indeed, suppose that $\mathscr{A}$ is a category that satisfies \ref{A1}--\ref{A3} w.r.t.~the parameters $(\mathcal{E},\mathcal{M})$, $\Lambda$, $\mathscr{X}$ and $\mathscr{A}_0$. Let $\mathbb{T}=(T,\eta,\mu)$ be a monad on $\mathscr{A}$ with $T\mathcal{E}\subseteq \mathcal{E}$. Then the category $\mathscr{A}'=\Alg{\mathbb{T}}$ of $\mathbb{T}$-algebras has the factorization system of $\mathcal{E}$-carried and $\mathcal{M}$-carried $\mathbb{T}$-algebra morphisms. Choose $\mathscr{X}'$ to be the class of all free $\mathbb{T}$-algebras $\mathbb{T} X = (TX,\mu_X)$ with $X\in \mathscr{X}$, and let $\mathscr{A}_0'$ be the class of all $\mathbb{T}$-algebras $(A,\alpha)$ with carrier $A\in \mathscr{A}_0$. With respect to these parameters, the category $\mathscr{A}'$ satisfies \ref{A1}--\ref{A3}. For~\ref{A1}, this is clear since the forgetful functor $\Alg{\mathbb{T}} \to \mathscr{A}$ creates all limits. 
Likewise for~\ref{A2} since the factorization system on $\mathscr{A}'$ is a lifting of the one on $\mathscr{A}$. For~\ref{A3}, one easily proves that $\mathcal{E}_\mathscr{X}'$ consists of all $\mathcal{E}_\mathscr{X}$-carried $\mathbb{T}$-algebra morphisms, for suppose we have an $\mathcal{E}_\mathscr{X}$-carried $\mathbb{T}$-algebra morphism $e: (A,\alpha) \twoheadrightarrow (B,\beta)$ and let $h: (TX, \mu_X) \to (B,\beta)$ where $X \in \mathscr{X}$. Then apply the fact that $e: A \to B$ lies in $\mathcal{E}_\mathscr{X}$ to the morphism $h\o \eta_X: X \to B$. Hence, we obtain some $h_0: X \to A$ in $\mathscr{A}$ such that $e \o h_0 = h\o \eta_X$. Using the freeness of $(TX,\mu_X)$ we obtain a unique $\mathbb{T}$-algebra morphism $h': (TX, \mu_X) \to (A,\alpha)$ such that $h' \o \eta_X = h_0$. We obtain $e \o h' = h$ since both $e \o h'$ and $h$ are $\mathbb{T}$-algebra morphisms whose precompositions with the universal morphism $\eta_X$ are equal: $e \o h' \o \eta_X = e \o h_0 = h \o \eta_X$. \end{rem} \begin{defn}\label{D:eq} \begin{enumerate} \item An \emph{equation over $X\in\mathscr{X}$} is a class $\mathscr{T}_X\subseteq X\mathord{\epidownarrow} \mathscr{A}_0$ that is \emph{$\Lambda$-codirected}, i.e. every subset $F\subseteq \mathscr{T}_X$ with $\under{F}\in \Lambda$ has a lower bound in $F$; \item An object $A\in \mathscr{A}$ \emph{satisfies} the equation $\mathscr{T}_X$ if every morphism $h: X\to A$ factorizes through some $e\in \mathscr{T}_X$. We write $A \models \mathscr{T}_X$ if $A$ satisfies $\mathscr{T}_X$. \end{enumerate} \end{defn} \begin{rem} Note that if $\mathscr{A}$ is $\mathcal{E}$-cowellpowered, then every equation is a set (not a class). Otherwise our theory does not need $\mathcal{E}$-cowellpoweredness of $\mathscr{A}$ and we did not assume it in \autoref{asm:setting}. 
\end{rem} \begin{rem} In two important special cases, we can replace sets of quotients by single quotients: \begin{enumerate} \item\label{rem:singlequot:1} For $\Lambda =$ all regular cardinal numbers, every equation $\mathscr{T}_X\subseteq X\mathord{\epidownarrow} \mathscr{A}_0$ contains a least element $e_X: X\twoheadrightarrow E_X$, viz.~the lower bound of all elements in $\mathscr{T}_X$. An object $A$ satisfies the equation iff every morphism $h: X\to A$ factorizes through $e_X$. Thus, in this case, one may equivalently define an equation as a single morphism in $\mathcal{E}$. This notion of an equation was investigated by Banaschewski and Herrlich~\cite{BanHerr1976}. \item\label{rem:singlequot:2} Let $\mathbb{T} = (T,\eta,\mu)$ be a monad on $\Alg{\Sigma,E}$ with $T$ preserving surjections. In \cite{camu16} it was shown that $\mathbb{T}$ induces a monad $\widehat\MT=(\hat{T},\hat\eta,\hat\mu)$ on the category $\PAlg{\Sigma,E}$, called the \emph{profinite monad} of $\mathbb{T}$, such that (a) $\hat{T}$ preserves surjections and (b) the categories of finite $\mathbb{T}$-algebras and finite $\widehat\MT$-algebras are isomorphic. In $\PAlg{\Sigma,E}$, choose $\mathscr{X}$ = free finitely generated algebras, $\mathscr{A}_0=$ finite algebras and $\Lambda=\{\omega\}$, and inherit these parameters to $\Alg{\widehat\MT}$ as in \autoref{rem:atoalgt}. By a \emph{profinite equation} is meant a quotient $p_X: \widehat\MT X\twoheadrightarrow P_X$ where $P_X$ is a profinite $\widehat\MT$-algebra, i.e.~a codirected limit of finite $\widehat\MT$-algebras. Every equation $\mathscr{T}_X\subseteq \widehat\MT X\mathord{\epidownarrow} \mathscr{A}_0^{\widehat\MT}$ yields a profinite equation by forming the codirected limit of the inverse system $\mathscr{T}_X$. 
This yields a limit cone $\pi_e: P_X\twoheadrightarrow A$ (where $e: \widehat\MT X\twoheadrightarrow A$ ranges over $\mathscr{T}_X$) and a unique mediating morphism $p_X: \widehat\MT X\twoheadrightarrow P_X$ with $\pi_e\o p_X = e$ for all $e\in\mathscr{T}_X$. Conversely, every profinite equation $p_X: \widehat\MT X\twoheadrightarrow P_X$ yields an equation $\mathscr{T}_X$, viz.~the set of all finite quotients $e: \widehat\MT X\twoheadrightarrow A$ that factor through $p_X$. It is easy to see that these two constructions are mutually inverse, and that a finite $\widehat\MT$-algebra $A$ satisfies $\mathscr{T}_X$ iff every $\widehat\MT$-homomorphism $h: \widehat\MT X\to A$ factorizes through $p_X$. \end{enumerate} \end{rem} \begin{examples} \begin{enumerate} \item In the classical setting of General Algebra an equation is by virtue of \autoref{rem:singlequot}\ref{rem:singlequot:1} a quotient $e: T_\Sigma X \twoheadrightarrow A$ of $\Sigma$-algebras. Equivalently, we have its kernel pair $E \rightrightarrows T_\Sigma X$, which is the congruence relation \[ E = \{(s,t) : \text{$s,t \in T_\Sigma X$ with $e(s) = e(t)$}\}. \] That means that an equation is, equivalently, a set of pairs of $\Sigma$-terms, viz.~the classical notion of equations in General Algebra. Moreover, it is easy to show that a $\Sigma$-algebra satisfies the equation $e$ if and only if it satisfies the equations in the set $E$ in the classical sense. \item \smnote[inline]{TODO: metric algebras.} \end{enumerate} \end{examples} \begin{defn}\label{D:var} A \emph{variety\xspace} is a full subcategory $\mathcal{V}\subseteq \mathscr{A}_0$ closed under $\mathcal{E}_\mathscr{X}$-quotients, subobjects, and $\Lambda$-products. 
That is, \begin{enumerate}[wide,labelindent=0pt,itemsep=5pt] \item for every $\mathcal{E}_\mathscr{X}$-quotient $e: A\twoheadrightarrow B$ in $\mathscr{A}_0$ with $A \in \mathcal{V}$ one has $B\in \mathcal{V}$, \item for every $\mathcal{M}$-morphism $m: A \rightarrowtail B$ in $\mathscr{A}_0$ with $B \in \mathcal{V}$ one has $A \in \mathcal{V}$, and \item given objects $A_i$ ($i<\lambda$) in $\mathcal{V}$ for some $\lambda\in \Lambda$, one has $\prod_{i<\lambda} A_i\in \mathcal{V}$. \end{enumerate} \end{defn} \begin{construction}\label{constr:var} For any class of equations $\mathbb{E}$ put \[ \mathcal{V}(\mathbb{E}) = \{\, A\in \mathscr{A}_0 \;:\; \text{$A \models \mathscr{T}_X$ for each $\mathscr{T}_X \in \mathbb{E}$} \,\}. \] Such classes of objects of $\mathscr{A}$ are called \emph{equationally presentable}. \end{construction} \begin{lemma}\label{lem:var} For every class $\mathbb{E}$ of equations, $\mathcal{V}(\mathbb{E})$ is a variety\xspace. \end{lemma} \begin{proof} Since $\mathcal{V}(\mathbb{E}) = \bigcap_{\mathscr{T}_X\in \mathbb{E}} \mathcal{V}(\mathscr{T}_X)$ and intersections of varieties are varieties, it suffices to consider the case where $\mathbb{E}$ consists of a single equation $\mathscr{T}_X\subseteq X\epidownarrow \mathscr{A}_0$. \begin{enumerate}[wide,labelindent=0pt,itemsep=5pt] \item \emph{Closure under $\mathcal{E}_\mathscr{X}$-quotients.} Let $q\colon A \twoheadrightarrow B$ be an $\mathcal{E}_\mathscr{X}$-quotient with $A\models \mathscr{T}_X$, and let $h: X \to B$. Since $q$ lies in $\mathcal{E}_\mathscr{X}$, there exists some $h': X \to A$ such that $h = q \o h'$. Then since $A \models \mathscr{T}_X$, there exist $e\colon X\twoheadrightarrow E$ in $\mathscr{T}_X$ and $h''\colon E \to A$ such that $h' = h'' \o e$. Thus $h$ factorizes through $e$ via $h = q \o h' = (q \o h'') \o e$. It follows that $B\models \mathscr{T}_X$. 
\item \emph{Closure under subobjects.} Let $m: A \rightarrowtail B$ be a subobject in $\mathscr{A}_0$ where $B \models \mathscr{T}_X$, and let $h: X \to A$. Then $m \o h$ factorizes through some $e\in\mathscr{T}_X$ since $B \models \mathscr{T}_X$, and we see that $h$ factorizes through $e$ using diagonal fill-in: \[ \xymatrix{ X \ar@{->>}[r]^-{e} \ar[d]_h & E \ar[d] \ar@{-->}[ld] \\ A \ar@{ >->}[r]_-m & B } \] Therefore, $A\models \mathscr{T}_X$. \item \emph{Closure under $\Lambda$-products.} Suppose that $A_i$ ($i< \lambda$) is a family of objects in $\mathscr{A}_0$, where $\lambda\in \Lambda$ and $A_i\models \mathscr{T}_X$ for all $i$. We denote by $p_i\colon \prod_{i < \lambda} A_i \to A_i$ the product projections. First, note that $\prod_{i < \lambda} A_i$ lies in $\mathscr{A}_0$ by Assumption \ref{asm:setting}\ref{A2}. Now let $h: X \to \prod_{i <\lambda} A_i$. Since $A_i\models \mathscr{T}_X$, there exists for every $i<\lambda$ some $e_i\colon X \twoheadrightarrow E_i$ in $\mathscr{T}_X$ and $h_i\colon E_i\to A_i$ with $h_i\o e_i = p_i\o h$. Since $\mathscr{T}_X$ is $\Lambda$-codirected, we obtain a single $e\colon X \twoheadrightarrow E$ in $\mathscr{T}_X$ through which all $p_i \o h$ factorize. Indeed, let $e$ be a lower bound of $F = \{e_i : i < \lambda\} \subseteq \mathscr{T}_X$. Then $e \leq e_i$ means that we have $g_i'$ with $g_i' \o e = e_i$, so that $p_i \o h$ factorizes through $e$ via $k_i = h_i \o g_i'$. Thus, $\langle k_i \rangle\colon E \to \prod_{i <\lambda} A_i$ is the desired factorization since $p_i \o h = p_i \o \langle k_i \rangle \o e$ holds for every $i < \lambda$. This proves $\prod_{i< \lambda} A_i \models \mathscr{T}_X$. 
\end{enumerate} \end{proof} \begin{defn}\label{D:eqnth} An \emph{equational theory\xspace} is a family of equations \[ \mathscr{T} = (\mathscr{T}_X\subseteq X\mathord{\epidownarrow} \mathscr{A}_0)_{X\in \mathscr{X}} \] that is \emph{substitution invariant}: for every morphism $h: X\to Y$ with $X,Y\in\mathscr{X}$ and every $e_Y\in \mathscr{T}_Y$, the morphism $e_Y\o h$ factorizes through some $e_X\in \mathscr{T}_X$. \begin{equation}\label{eq:theory} \vcenter{ \xymatrix{ X \ar[r]^{\forall h} \ar@{->>}[d]_{\exists e_X} & Y \ar@{->>}[d]^{\forall e_Y}\\ E_X \ar[r]_{\exists \overline h} & E_Y }} \end{equation} \end{defn} \begin{rem}\label{rem:singlequotth} Again, we consider the special cases of \autoref{rem:singlequot}, this time under the additional assumption that $\mathcal{E}_\mathscr{X} = \mathcal{E}$. \begin{enumerate} \item\label{rem:singlequotth:1} For $\Lambda =$ all regular cardinal numbers, an equational theory\xspace is uniquely determined by specifying the least element of every $\mathscr{T}_X$, cf.~\autoref{rem:singlequot}\ref{rem:singlequot:1}. In this case, an equational theory is thus given by a family of quotients $\mathscr{Q} = (e_X: X\twoheadrightarrow E_X)_{X\in\mathscr{X}}$ such that, for every $h: X\to Y$, the morphism $e_Y\o h$ factorizes through $e_X$. \item Consider the setting of \autoref{rem:singlequot}\ref{rem:singlequot:2}. A \emph{profinite theory over $\mathbb{T}$} \cite{camu16} is a family of profinite equations $\rho = (p_X: \widehat\MT X\twoheadrightarrow P_X)_{X}$ such that, for every morphism $h: \widehat\MT X\to \widehat\MT Y$ with $X,Y\in\mathscr{X}$, the morphism $p_Y\o h$ factorizes through $p_X$. Every equational theory\xspace yields a profinite theory by replacing the equations $\mathscr{T}_X$ by their corresponding profinite equation $p_X$, and vice versa. This gives a bijective correspondence between equational theories\xspace and profinite theories. 
\end{enumerate} \end{rem} Our aim is to relate equational theories\xspace to varieties\xspace. We have already seen that every class of equations, so in particular every equational theory\xspace $\mathscr{T}$, yields a variety\xspace $\mathcal{V}(\mathscr{T})$ consisting of all algebras that satisfy every equation in $\mathscr T$ (cf.~\autoref{constr:var}). Conversely, every variety induces an equational theory as follows: \begin{construction}\label{constr:eqnth} Given any variety\xspace $\mathcal{V}$, form the family \[ \mathscr{T}(\mathcal{V}) = (\mathscr{T}_X)_{X\in\mathscr{X}}, \] where $\mathscr{T}_X \subseteq X\mathord{\epidownarrow}\mathscr{A}_0$ consists of all quotients $e: X\twoheadrightarrow A$ with codomain $A\in \mathcal{V}$. \end{construction} \begin{lemma} For every variety\xspace $\mathcal{V}$, the family $\mathscr{T}(\mathcal{V})$ is an equational theory\xspace. \end{lemma} \begin{proof} To prove substitution invariance for $\mathscr{T}(\mathcal{V})$, suppose that $e_Y\in \mathscr{T}_Y$ and $h\colon X\to Y$ are given, and take the $\mathcal{E}/\mathcal{M}$-factorization $e_Y\o h = \overline h\o e_X$ of $e_Y\o h$: \[ \xymatrix{ X \ar[r]^-{h} \ar@{->>}[d]_{ e_X} & Y \ar@{->>}[d]^{e_Y} \\ E_X \ar@{>->}[r]_-{ \overline h} & E_Y } \] Since $E_Y\in \mathscr{A}_0$ and $\mathscr{A}_0$ is closed under $\mathscr{X}$-generated subobjects by Assumption \ref{asm:setting}\ref{A2}, we get $E_X\in \mathscr{A}_0$. Moreover, we have $E_Y\in \mathcal{V}$ by definition of $\mathscr{T}_Y$, so the closure of $\mathcal{V}$ under subobjects in $\mathscr{A}_0$ implies that $E_X\in \mathcal{V}$. Thus $e_X\in \mathscr{T}_X$ by definition of $\mathscr{T}_X$. \takeout{ For $\mathcal{E}_\mathscr{X}$-completeness, note first that clearly $\mathscr{T}_X$ is closed under $\mathcal{E}_\mathscr{X}$-quotients because $\mathcal{V}$ is. 
To show that $\mathscr{T}_X$ is $\Lambda$-codirected, let $e_i: X\twoheadrightarrow A_i$ ($i< \lambda$) be a family of quotients in $\mathscr{T}_X$ with $\lambda\in\Lambda$. Form the $\mathcal{E}/\mathcal{M}$-factorization of $\langle e_i \rangle: X\to \prod_i A_i$: \[ \xymatrix{ & X \ar@{->>}[ld]_e \ar[d]^{\langle e_i \rangle} \ar@{->>}[rd]^{e_i} \\ A \ar@{ >->}[r]^-m & \prod_{i <\lambda} A_i \ar[r]^-{p_i} & A_i } \] By \autoref{asm:setting}\ref{A2}, $A$ lies in $\mathscr{A}_0$ and, since $\mathcal{V}$ is closed under subobjects and $\Lambda$-products, one has $A\in \mathcal{V}$. Thus $e\in \mathscr{T}_X$ and $e$ is an upper bound of the $e_i$'s. } \end{proof} \begin{lemma}\label{lem:vtau} Let $\mathscr{T}$ be an equational theory\xspace. For every quotient $e_Y\colon Y\twoheadrightarrow A$ in $\mathscr{T}_Y$, one has $A\in\mathcal{V}(\mathscr{T})$. \end{lemma} \begin{proof} Suppose that $\mathscr{T}_Y$ contains the quotient $e_Y: Y\twoheadrightarrow A$. By $\mathcal{E}_\mathscr{X}$-completeness of $\mathscr{T}$, we may assume that $e_Y\in \mathcal{E}_\mathscr{X}$. Let $h: X\to A$ with $X\in\mathscr{X}$. Since $e_Y\in \mathcal{E}_\mathscr{X}$, there exists a morphism $g: X\to Y$ with $e_Y\o g = h$. By substitution invariance, we can choose $e_X\in\mathscr{T}_X$ and $\overline g$ with $\overline g \o e_X = e_Y\o g$. Hence, $h$ factorizes through $e_X$ via $\overline g$, as shown by the commutative diagram below, and therefore $A\in \mathcal{V}(\mathscr{T})$. \[ \xymatrix{ X \ar[r]^-g \ar[dr]^h \ar@{->>}[d]_{e_X} & Y \ar@{->>}[d]^{e_Y}\\ E_X \ar[r]_-{\overline g}& A } \] \takeout{ For the ``only if'' direction, let $A\in \mathcal{V}(\mathscr{T})$. By \autoref{asm:setting}\ref{A3}, we can express $A$ as an $\mathcal{E}_\mathscr{X}$-quotient $e: Y\twoheadrightarrow A$ of some $Y\in\mathscr{X}$. 
Since $A\in\mathcal{V}(\mathscr{T})$, we know that $A$ satisfies $\mathscr T_Y$, i.e.~there exists $e_Y: Y\twoheadrightarrow E_Y$ in $\mathscr{T}_Y$ and a morphism $\overline e: E_Y\twoheadrightarrow A$ with $\overline e \o e_Y = e$. By \autoref{lem:ex}\ref{lem:ex:2}, we have $\overline e\in \mathcal{E}_X$, and thus $e\in \mathscr{T}_Y$ because $\mathscr{T}_Y$ is closed under $\mathcal{E}_\mathscr{X}$-quotients. } \end{proof} \begin{rem} Given two equations $\mathscr{T}_X, \mathscr{T}_X'\subseteq \epid{X}{\mathscr{A}_0}$ we put $\mathscr{T}_X\leq \mathscr{T}_X'$ if every quotient in $\mathscr{T}_X'$ factorizes through some quotient in $\mathscr{T}_X$. Theories form a poset with respect to the order $\mathscr{T}\leq \mathscr{T}'$ iff $\mathscr{T}_X\leq \mathscr{T}_X'$ for all $X\in \mathscr{X}$. Similarly, varieties form a poset (in fact, a complete lattice) ordered by inclusion. \end{rem} \begin{theorem}\label{thm:galois} There is an antitone Galois connection \[ \xymatrix{ \textbf{Varieties~~~} \ar@<0.5ex>[rr]^<<<<<<<<<<<{\mathscr{T}(\mathord{-})} & & \textbf{~~~Equational theories} \ar@<0.5ex>[ll]^<<<<<<<<<{\mathcal{V}(\mathord{-})} } \] between the posets of varieties and equational theories; that is, the maps $\mathcal{V}(\mathord{-})$ and $\mathscr{T}(\mathord{-})$ are order-reversing, and for all varieties $\mathcal{V}$ and equational theories $\mathscr{T}$, \[ \mathcal{V}(\mathscr{T})\subseteq \mathcal{V} \quad\Longleftrightarrow\quad \mathscr{T}(\mathcal{V})\leq \mathscr{T}. \] \end{theorem} \begin{proof} \begin{enumerate}[wide,labelindent=0pt,parsep=0pt] \item The map $\mathscr{T}(\mathord{-})$ is order-reversing. To see this, suppose that $\mathcal{V}\subseteq \mathcal{V}'$ are varieties, and let $e\colon X\twoheadrightarrow A$ be a quotient in $[\mathscr{T}(\mathcal{V})]_X$. Then $A\in \mathcal{V}$ by definition of $\mathscr{T}(\mathcal{V})$, and thus $A\in \mathcal{V}'$, i.e. the quotient $e$ also lies in $[\mathscr{T}(\mathcal{V}')]_X$. 
This shows $\mathscr{T}(\mathcal{V}')\leq \mathscr{T}(\mathcal{V})$. \item The map $\mathcal{V}(\mathord{-})$ is order-reversing. Indeed, suppose that $\mathscr{T}\leq \mathscr{T}'$ are theories and let $A\in \mathcal{V}(\mathscr{T}')$. To show that $A\in \mathcal{V}(\mathscr{T})$, let $h\colon X\to A$ with $X\in \mathscr{X}$. Since $A\in \mathcal{V}(\mathscr{T}')$, the morphism $h$ factorizes through some $e'\in \mathscr{T}_X'$. Since $\mathscr{T}_X\leq \mathscr{T}_X'$, the quotient $e'$ factorizes through some $e\in \mathscr{T}_X$. Thus $h$ factorizes through $e$, which proves that $A\in \mathcal{V}(\mathscr{T})$. It follows that $\mathcal{V}(\mathscr{T}')\subseteq \mathcal{V}(\mathscr{T})$. \item To show the ``$\Rightarrow$'' implication of the claimed equivalence, suppose that $\mathcal{V}(\mathscr{T})\subseteq \mathcal{V}$, and let $e\colon X\twoheadrightarrow E$ be a quotient in $\mathscr{T}_X$. Then $E\in \mathcal{V}(\mathscr{T})$ by \autoref{lem:vtau}, and thus $E\in\mathcal{V}$ by hypothesis. Hence $e$ lies in $[\mathscr{T}(\mathcal{V})]_X$ and trivially factorizes through itself, which proves $\mathscr{T}(\mathcal{V})\leq \mathscr{T}$. \item For the ``$\Leftarrow$'' implication, suppose that $\mathscr{T}(\mathcal{V})\leq \mathscr{T}$. Since $\mathcal{V}(\mathord{-})$ is order-reversing, this gives $\mathcal{V}(\mathscr{T})\subseteq \mathcal{V}(\mathscr{T}(\mathcal{V}))$, and $\mathcal{V}(\mathscr{T}(\mathcal{V})) = \mathcal{V}$ by \autoref{lem:vtv} below; thus $\mathcal{V}(\mathscr{T})\subseteq \mathcal{V}$. \end{enumerate} \end{proof} \begin{lemma}\label{lem:vtv} For every variety\xspace $\mathcal{V}$, we have $\mathcal{V}=\mathcal{V}(\mathscr{T}(\mathcal{V}))$. \end{lemma} \begin{proof} To prove $\subseteq$, let $A\in \mathcal{V}$. By \autoref{asm:setting}\ref{A3}, there exists a quotient $e: X\twoheadrightarrow A$ with $X\in \mathscr{X}$. Thus $e\in \mathscr{T}_X$ by the definition of $\mathscr{T}_X$ in \autoref{constr:eqnth}, and therefore $A\in \mathcal{V}(\mathscr{T}(\mathcal{V}))$ by \autoref{lem:vtau}. For $\supseteq$, let $A\in \mathcal{V}(\mathscr{T}(\mathcal{V}))$. Then, by \autoref{lem:vtau}, $\mathscr{T}_X$ contains some quotient $e: X\twoheadrightarrow A$ with codomain $A$. By the definition of $\mathscr{T}_X$, this implies $A\in \mathcal{V}$. 
\end{proof} \begin{lemma}\label{lem:tvt} For every equational theory\xspace $\mathscr{T}$, we have $\mathscr{T} = \mathscr{T}(\mathcal{V}(\mathscr{T}))$. \end{lemma} \begin{proof} Let $\mathscr{T} = (\mathscr{T}_X)_{X\in\mathscr{X}}$ and $\mathscr{T}(\mathcal{V}(\mathscr{T})) = (\mathscr{T}_X')_{X\in\mathscr{X}}$. We need to prove $\mathscr{T}_X = \mathscr{T}_X'$ for all $X\in \mathscr{X}$. For $\subseteq$, let $e: X\twoheadrightarrow A$ in $\mathscr{T}_X$. Then $A\in \mathcal{V}(\mathscr{T})$ by \autoref{lem:vtau}, and thus $e\in \mathscr{T}_X'$ by the definition of $\mathscr{T}(\mathcal{V}(\mathscr{T}))$. For $\supseteq$, let $e: X\twoheadrightarrow A$ in $\mathscr{T}_X'$. Then $A\in \mathcal{V}(\mathscr{T})$ by the definition of $\mathscr{T}(\mathcal{V}(\mathscr{T}))$. Thus, by \autoref{lem:vtau} and $\mathcal{E}_\mathscr{X}$-completeness of the theory $\mathscr{T}$, there exist some $Y\in \mathscr{X}$ and an $\mathcal{E}_\mathscr{X}$-quotient $e_Y: Y\twoheadrightarrow A$ in $\mathscr{T}_Y$. Since $X$ is projective w.r.t.~$e_Y$, we can choose a morphism $h: X\to Y$ with $e_Y\o h = e$. Since $\mathscr{T}$ is an equational theory\xspace, the coimage $e_X$ of $e_Y\o h$ lies in $\mathscr{T}_X$: \[ \xymatrix{ X \ar[r]^h \ar@{->>}[d]_{e_X} \ar@{->>}[dr]^e & Y \ar@{->>}[d]^{e_Y} \\ E_X \ar@{>->}[r]_{\overline h} & A } \] Thus $e = \overline h \o e_X$, which implies that $\overline h$ lies in $\mathcal{E}$. Since it also lies in $\mathcal{M}$, we have that $\overline h$ is an isomorphism, and thus, it is contained in $\mathcal{E}_\mathscr{X}$. Hence, $e \in \mathscr{T}_X$ since $\mathscr{T}_X$ is closed under $\mathcal{E}_\mathscr{X}$-quotients. \end{proof} From the two previous lemmas we get the main result of this section. \begin{theorem}[HSP Theorem]\label{thm:hsp} The complete lattices of equational theories\xspace and varieties\xspace are dually isomorphic. 
The isomorphism is given by \[ \mathcal{V}\mapsto \mathscr{T}(\mathcal{V}) \quad\text{and}\quad \mathscr{T}\mapsto \mathcal{V}(\mathscr{T}). \] \end{theorem} \proof By \autoref{lem:vtv} and \autoref{lem:tvt} the two maps above are mutually inverse. Thus, it only remains to show that they are antitone. \begin{enumerate}[wide,labelindent=0pt,parsep=0pt] \item Suppose that $\mathcal{V}\subseteq \mathcal{V}'$ are varieties, and let $e\colon X\twoheadrightarrow A$ be a quotient in $[\mathscr{T}(\mathcal{V})]_X$. Then $A\in \mathcal{V}$ by definition of $\mathscr{T}(\mathcal{V})$, and thus $A\in \mathcal{V}'$, i.e. the quotient $e$ also lies in $[\mathscr{T}(\mathcal{V}')]_X$. This shows $\mathscr{T}(\mathcal{V}')\leq \mathscr{T}(\mathcal{V})$. \item\sloppypar Suppose that $\mathscr{T}\leq \mathscr{T}'$ are theories, and let $A'\in \mathcal{V}(\mathscr{T}')$. Then, by \autoref{lem:vtau}, there exists a quotient $e'\colon X\twoheadrightarrow A'$ in $\mathscr{T}_X'$ with codomain $A'$. By definition of a theory, we may assume that $e'\in \mathcal{E}_\mathscr{X}$. Since $\mathscr{T}\leq \mathscr{T}'$, the quotient $e'$ factorizes through some quotient $e\colon X\twoheadrightarrow A$ in $\mathscr{T}_X$, i.e. $e'=q\o e$ for some $q\colon A\twoheadrightarrow A'$. Since $e'\in \mathcal{E}_\mathscr{X}$ we have $q\in \mathcal{E}_\mathscr{X}$ by \autoref{lem:ex}\ref{lem:ex:2}. Moreover, $A\in \mathcal{V}(\mathscr{T})$ by \autoref{lem:vtau}, and thus $A'\in \mathcal{V}(\mathscr{T})$ because $\mathcal{V}(\mathscr{T})$ is closed under $\mathcal{E}_\mathscr{X}$-quotients. This shows $\mathcal{V}(\mathscr{T}')\subseteq \mathcal{V}(\mathscr{T})$.\qed \end{enumerate} \doendproof One can recast the HSP Theorem into a more familiar form, using equations in lieu of equational theories. Recall from \autoref{constr:var} that an equationally presentable class is a class $\mathcal{V}(\mathbb{E})$ for some class of equations. 
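\medskip\noindent For illustration, consider the classical setting of General Algebra with a signature $\Sigma$ containing a binary operation symbol $\cdot$ (the equations below serve purely as an example, and we use the ad-hoc notation $\langle\mathord{-}\rangle$ for the equational theory generated by the displayed term equations). The theory $\langle x\cdot y = y\cdot x\rangle$ presents the variety of all $\Sigma$-algebras with commutative multiplication, whereas the larger theory $\langle x\cdot y = y\cdot x,\; x=y\rangle$ presents the strictly smaller variety of all $\Sigma$-algebras with at most one element: every map $h\colon \{x,y\}\to A$ satisfies $h(x)=h(y)$ precisely when $A$ has at most one element. This exhibits, in a small instance, the order-reversing nature of the correspondence $\mathscr{T}\mapsto\mathcal{V}(\mathscr{T})$.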
\begin{theorem}[HSP Theorem, equational version]\label{thm:hspeq} A class $\mathcal{V}\subseteq \mathscr{A}_0$ is a variety\xspace iff it is equationally presentable. \end{theorem} \begin{proof} We saw in \autoref{lem:var} that every equationally presentable class $\mathcal{V}(\mathbb{E})$ is a variety. Conversely, by \autoref{lem:vtv}, every variety\xspace $\mathcal{V}$ is presented by the equations $\mathbb{E} = \{\mathscr{T}_X: X\in \mathscr{X}\}$, where $\mathscr{T} = \mathscr{T}(\mathcal{V})$. \end{proof} } \section{Equational Logic} \label{S:logic} The correspondence between theories and varieties gives rise to the second main result of our paper, a generic sound and complete deduction system for reasoning about equations. The corresponding semantic concept is the following: \begin{defn} An equation $\mathscr{T}_X\subseteq \epid{X}{\mathscr{A}_0}$ \emph{semantically entails} the equation $\mathscr{T}_Y'\subseteq \epid{Y}{\mathscr{A}_0}$ if every $\mathscr{A}_0$-object satisfying $\mathscr{T}_X$ also satisfies $\mathscr{T}_Y'$ (that is, if $\mathcal{V}(\mathscr{T}_X)\subseteq \mathcal{V}(\mathscr{T}_Y')$). In this case, we write $\mathscr{T}_X\models \mathscr{T}_Y'$. \end{defn} The key to our proof system is a categorical formulation of term substitution: \begin{defn}\label{def:closure} Let $\mathscr{T}_X\subseteq \epid{X}{\mathscr{A}_0}$ be an equation over $X\in \mathscr{X}$. The \emph{substitution closure} of $\mathscr{T}_X$ is the smallest theory $\overline \mathscr{T} = (\overline{\mathscr{T}}_Y)_{Y\in \mathscr{X}}$ such that $\mathscr{T}_X\leq \overline \mathscr{T}_X$. \end{defn} The substitution closure of an equation can be computed as follows: \begin{lemma}\label{lem:substclosure} For every equation $\mathscr{T}_X\subseteq \epid{X}{\mathscr{A}_0}$ one has $\overline\mathscr{T} = \mathscr{T}(\mathcal{V}(\mathscr{T}_X))$. 
\end{lemma} The deduction system for semantic entailment consists of two proof rules: { \flushleft \begin{tabular}{ l l } (Weakening) & $\mathscr{T}_X\vdash \mathscr{T}_X'$ for all equations $\mathscr{T}_X'\leq \mathscr{T}_X$ over $X\in \mathscr{X}$.\\ (Substitution) & $\mathscr{T}_X\vdash \overline\mathscr{T}_Y$ for all equations $\mathscr{T}_X$ over $X\in \mathscr{X}$ and all $Y\in \mathscr{X}$. \end{tabular} } \smallskip\noindent Given equations $\mathscr{T}_X$ and $\mathscr{T}_Y'$ over $X$ and $Y$, respectively, we write $\mathscr{T}_X\vdash \mathscr{T}_Y'$ if $\mathscr{T}_Y'$ arises from $\mathscr{T}_X$ by a finite chain of applications of the above rules. \begin{theorem}[Completeness Theorem]\label{thm:eqlogicsoundcomplete} The deduction system for semantic entailment is sound and complete: for every pair of equations $\mathscr{T}_X$ and $\mathscr{T}_Y'$, \[ \mathscr{T}_X\models \mathscr{T}_Y' \quad\text{iff}\quad \mathscr{T}_X \vdash \mathscr{T}_Y'. \] \end{theorem} \section{Applications}\label{S:app} In this section, we present some of the applications of our categorical results (see Appendix~\ref{app:B} for full details). Transferring the general HSP theorem of \autoref{S:hsp} into a concrete setting requires performing the following four-step procedure: \medskip\noindent \textbf{Step 1.} Instantiate the parameters $\mathscr{A}$, $(\mathcal{E},\mathcal{M})$, $\mathscr{A}_0$, $\Lambda$ and $\mathscr{X}$ of our categorical framework, and characterize the quotients in $\mathcal{E}_\mathscr{X}$. \medskip\noindent \textbf{Step 2.} Establish an \emph{exactness property} for the category $\mathscr{A}$, i.e. a correspondence between quotients $e\colon A\twoheadrightarrow B$ in $\mathscr{A}$ and suitable relations between elements of $A$. \medskip\noindent \textbf{Step 3.} Infer a suitable syntactic notion of equation, and prove it to be expressively equivalent to the categorical notion of equation given by \autoref{D:eq}. 
\medskip\noindent \textbf{Step 4.} Invoke Theorem \ref{thm:hsp} to deduce an HSP theorem. \medskip \noindent The details of Steps 2 and 3 are application-specific, but typically straightforward. In each case, the bulk of the usual work required for establishing the HSP theorem is moved to our general categorical results and thus comes for free. Similarly, to obtain a complete deduction system in a concrete application, it suffices to phrase the two proof rules of our generic equational logic in syntactic terms, using the correspondence of quotients and relations from Step 2; then \autoref{thm:eqlogicsoundcomplete} gives the completeness result. \subsection{Classical $\Sigma$-Algebras}\label{S:birkhoff} The classical Birkhoff theorem emerges from our general results as follows. \medskip \noindent\textbf{Step 1.} Choose the parameters of Example \ref{ex:running}\ref{ex:running:birkhoff}, and recall that $\mathcal{E}_\mathscr{X} = \mathcal{E}$. \medskip \noindent\textbf{Step 2.} The exactness property of $\Alg{\Sigma}$ is given by the correspondence \eqref{eq:homtheorem}. \medskip \noindent\textbf{Step 3.} Recall from Example \ref{ex:running:eq}\ref{ex:running:birkhoff} that equations can be presented as single quotients $e\colon T_\Sigma X\twoheadrightarrow E_X$. The exactness property \eqref{eq:homtheorem} leads to the following classical syntactic concept: a \emph{term equation} over a set $X$ of variables is a pair $(s,t)\in T_\Sigma X\times T_\Sigma X$, denoted as $s=t$. It is \emph{satisfied} by a $\Sigma$-algebra $A$ if for every map $h\colon X\to A$ we have $\ext h(s) = \ext h(t)$. Here, $\ext h\colon T_\Sigma X\to A$ denotes the unique extension of $h$ to a $\Sigma$-homomorphism. 
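\medskip\noindent For instance (a standard example, included here only for illustration), take the signature $\Sigma=\{\cdot\}$ with a single binary operation symbol and the set of variables $X=\{x,y,z\}$. The term equations \[ (x\cdot y)\cdot z = x\cdot (y\cdot z) \qquad\text{and}\qquad x\cdot y = y\cdot x \] are satisfied by a $\Sigma$-algebra $A$ iff its binary operation is associative resp.~commutative: for every valuation $h\colon X\to A$ with $h(x)=a$, $h(y)=b$ and $h(z)=c$, the condition $\ext h((x\cdot y)\cdot z)=\ext h(x\cdot (y\cdot z))$ amounts to $(a\cdot b)\cdot c = a\cdot (b\cdot c)$.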
Equations and term equations are expressively equivalent in the following sense: \begin{enumerate} \item For every equation $e\colon T_\Sigma X\twoheadrightarrow E_X$, the kernel $\mathord{\equiv_e}\subseteq T_\Sigma X\times T_\Sigma X$ is a set of term equations equivalent to $e$, that is, a $\Sigma$-algebra satisfies the equation $e$ iff it satisfies all term equations in $\equiv_e$. This follows immediately from \eqref{eq:homtheorem}. \item Conversely, given a term equation $(s,t)\in T_\Sigma X\times T_\Sigma X$, form the smallest congruence $\equiv$ on $T_\Sigma X$ with $s\equiv t$ (viz. the intersection of all such congruences) and let $e\colon T_\Sigma X\twoheadrightarrow E_X$ be the corresponding quotient. Then a $\Sigma$-algebra satisfies $s=t$ iff it satisfies $e$. Again, this is a consequence of \eqref{eq:homtheorem}. \end{enumerate} \medskip \noindent\textbf{Step 4.} From \autoref{thm:hspeq} and Example \ref{ex:running:variety}\ref{ex:running:birkhoff}, we deduce the classical \begin{theorem}[Birkhoff \cite{Birkhoff35}] A class of $\Sigma$-algebras is a variety (i.e. closed under quotients, subalgebras, products) iff it is axiomatizable by term equations. \end{theorem} Similarly, one can obtain Birkhoff's complete deduction system for term equations as an instance of \autoref{thm:eqlogicsoundcomplete}; see Appendix \ref{sec:app:birkhoff} for details. \subsection{Finite $\Sigma$-Algebras}\label{S:reiterman} Next, we derive Eilenberg and Schützenberger's equational characterization of pseudovarieties of algebras over a finite signature $\Sigma$ using our four-step plan: \medskip \noindent\textbf{Step 1.} Choose the parameters of Example \ref{ex:running}\ref{ex:running:eilenschuetz}, and recall that $\mathcal{E}_\mathscr{X}=\mathcal{E}$. \medskip \noindent\textbf{Step 2.} The exactness property of $\Alg{\Sigma}$ is given by~\eqref{eq:homtheorem}. 
\medskip \noindent\textbf{Step 3.} By Example \ref{ex:running}\ref{ex:running:eilenschuetz}, an equational theory is given by a family of filters $\mathscr{T}_n\subseteq T_\Sigma n\mathord{\epidownarrow}\FAlg{\Sigma}$ ($n<\omega$). The corresponding syntactic concept involves sequences $(s_i=t_i)_{i<\omega}$ of term equations. We say that a finite $\Sigma$-algebra $A$ \emph{eventually satisfies} such a sequence if there exists $i_0<\omega$ such that $A$ satisfies all equations $s_i=t_i$ with $i\geq i_0$. Equational theories and sequences of term equations are expressively equivalent: \begin{enumerate} \item Let $\mathscr{T}=(\mathscr{T}_n)_{n<\omega}$ be a theory. Since $\Sigma$ is a finite signature, for each finite quotient $e\colon T_\Sigma n\twoheadrightarrow E$ the kernel $\equiv_e$ is a finitely generated congruence~\cite[Prop.~2]{es76}. Consequently, for each $n<\omega$ the algebra $T_\Sigma n$ has only countably many finite quotients. In particular, the codirected poset $\mathscr{T}_n$ is countable, so it contains an $\omega^\mathsf{op}$-chain $e_0^n \geq e_1^n\geq e_2^n\geq \cdots$ that is \emph{cofinal}, i.e., each $e\in \mathscr{T}_n$ is above some $e_i^n$. The $e_i^n$ can be chosen in such a way that, for each $m> n$ and $q\colon m\to n$, the morphism $e_i^n\o T_\Sigma q$ factorizes through $e_i^m$. For each $n<\omega$, choose a finite subset $W_n\subseteq T_\Sigma n\times T_\Sigma n$ generating the kernel of $e_n^n$. Let $(s_i=t_i)_{i<\omega}$ be a sequence of term equations where $(s_i,t_i)$ ranges over $\bigcup_{n<\omega} W_n$. One can verify that a finite $\Sigma$-algebra lies in $\mathcal{V}(\mathscr{T})$ iff it eventually satisfies $(s_i=t_i)_{i<\omega}$. 
\item Conversely, given a sequence of term equations $(s_i=t_i)_{i<\omega}$ with $(s_i,t_i)\in T_\Sigma m_i\times T_\Sigma m_i$, form the theory $\mathscr{T}=(\mathscr{T}_n)_{n<\omega}$ where $\mathscr{T}_n$ consists of all finite quotients $e\colon T_\Sigma n\twoheadrightarrow E$ with the following property: \[ \exists i_0<\omega:\forall i\geq i_0:\forall (g\colon T_\Sigma {m_i}\to T_\Sigma n): e\o g(s_i)=e\o g(t_i). \] Then a finite $\Sigma$-algebra eventually satisfies $(s_i=t_i)_{i<\omega}$ iff it lies in $\mathcal{V}(\mathscr{T})$. \end{enumerate} \medskip \noindent\textbf{Step 4.} The theory version of our HSP theorem (\autoref{thm:hspeq}) now implies: \begin{theorem}[Eilenberg-Schützenberger~\cite{es76}]\\ A class of finite $\Sigma$-algebras is a pseudovariety (i.e. closed under quotients, subalgebras, and finite products) iff it is axiomatizable by a sequence of term equations. \end{theorem} In an alternative characterization of pseudovarieties due to Reiterman~\cite{Reiterman1982}, where the restriction to finite signatures $\Sigma$ can be dropped, sequences of term equations are replaced by the topological concept of a \emph{profinite equation}. This result can also be derived from our general HSP theorem, see Appendix \ref{sec:reiterman}. \subsection{Quantitative Algebras} In this section, we derive an HSP theorem for quantitative algebras. \medskip \noindent\textbf{Step 1.} Choose the parameters of Example \ref{ex:running}\ref{ex:running:mardare}. Recall that we work with a fixed regular cardinal $c>1$ and that $\mathcal{E}_\mathscr{X}$ consists of all $c$-reflexive quotients. \medskip \noindent\textbf{Step 2.} To state the exactness property of $\QAlg{\Sigma}$, recall that an \emph{(extended) pseudometric} on a set $A$ is a map $p\colon A\times A\to [0,\infty]$ satisfying all axioms of an extended metric except possibly the implication $p(a,b)=0\Rightarrow a=b$. 
Given a quantitative $\Sigma$-algebra $A$, a pseudometric $p$ on $A$ is called a \emph{congruence} if (i) $p(a,a')\leq d_A(a,a')$ for all $a,a'\in A$, and (ii) every $\Sigma$-operation $\sigma\colon A^n\to A$ ($\sigma\in\Sigma$) is nonexpansive w.r.t. $p$. Congruences are ordered by $p\leq q$ iff $p(a,a')\leq q(a,a')$ for all $a,a'\in A$. There is a dual isomorphism of complete lattices \vspace{-0.1cm} \begin{equation}\label{eq:homtheoremquant0} \text{quotient algebras of $A$} \quad\cong\quad \text{congruences on $A$} \end{equation} mapping $e\colon A\twoheadrightarrow B$ to the congruence $p_e$ on $A$ given by $p_e(a,b)=d_B(e(a),e(b))$. \medskip \noindent\textbf{Step 3.} By Example \ref{ex:running:eq}\ref{ex:running:mardare}, equations can be presented as single quotients $e\colon T_\Sigma X\twoheadrightarrow E$, where $X$ is a $c$-clustered space. The exactness property \eqref{eq:homtheoremquant0} suggests replacing equations by the following syntactic concept. A \emph{$c$-clustered equation} over the set $X$ of variables is an expression \begin{equation}\label{eq:cbasicgen0} x_i=_{\epsilon_i} y_i\;(i\in I)\; \vdash \; s=_\epsilon t \end{equation} where (i) $I$ is a set, (ii) $x_i,y_i\in X$ for all $i\in I$, (iii) $s$ and $t$ are $\Sigma$-terms over $X$, (iv) $\epsilon_i,\epsilon\in [0,\infty]$, and (v) the equivalence relation on $X$ generated by the pairs $(x_i,y_i)$ ($i\in I$) has all equivalence classes of cardinality $<c$. In other words, the set of variables can be partitioned into subsets of size $<c$ such that only relations between variables in the same subset appear on the left-hand side of \eqref{eq:cbasicgen0}. A quantitative $\Sigma$-algebra $A$ \emph{satisfies}~\eqref{eq:cbasicgen0} if for every map $h\colon X\to A$ with $d_A(h(x_i),h(y_i))\leq \epsilon_i$ for all $i\in I$, one has $d_A(\ext h(s),\ext h(t))\leq \epsilon$. Here $\ext h\colon T_\Sigma X\to A$ denotes the unique $\Sigma$-homomorphism extending $h$. 
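\medskip\noindent Two simple illustrative instances of \eqref{eq:cbasicgen0} (for the second one we assume $c>2$, so that a cluster of two variables is admissible): taking $I=\emptyset$ yields unconditional quantitative equations $\vdash s=_\epsilon t$; and for a unary operation symbol $\sigma\in\Sigma$, $X=\{x,y\}$ and $\epsilon\in[0,\infty]$, the $c$-clustered equation \[ x=_\epsilon y \;\vdash\; \sigma(x)=_\epsilon \sigma(y) \] is satisfied by every quantitative $\Sigma$-algebra $A$: for any $h\colon X\to A$ with $d_A(h(x),h(y))\leq\epsilon$ one has $d_A(\sigma(h(x)),\sigma(h(y)))\leq\epsilon$, since the operations of a quantitative $\Sigma$-algebra are interpreted as nonexpansive maps.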
Equations and $c$-clustered equations are expressively equivalent: \begin{enumerate} \item Let $X$ be a $c$-clustered space, i.e. $X=\coprod_{j\in J} X_j$ with $\under{X_j}<c$. Every equation $e\colon T_\Sigma X\twoheadrightarrow E$ induces a set of $c$-clustered equations over $X$ given by \begin{equation}\label{eq:cbasic0} x=_{\epsilon_{x,y}} y\;(j\in J,\, x,y\in X_j) \;\vdash\; s=_{\epsilon_{s,t}} t\quad(s,t \in T_\Sigma X), \end{equation} with $\epsilon_{x,y} = d_X(x,y)$ and $\epsilon_{s,t}=d_E(e(s),e(t))$. It is not difficult to show that $e$ and \eqref{eq:cbasic0} are equivalent: an algebra satisfies $e$ iff it satisfies all equations \eqref{eq:cbasic0}. \item Conversely, to every $c$-clustered equation \eqref{eq:cbasicgen0} over a set $X$ of variables, we associate an equation in two steps: \begin{itemize} \item Let $p$ be the largest pseudometric on $X$ with $p(x_i,y_i)\leq \epsilon_i$ for all $i$ (that is, the pointwise supremum of all such pseudometrics). Form the corresponding quotient $e_p\colon X\twoheadrightarrow X_p$, see \eqref{eq:homtheoremquant0}. It is easy to see that $X_p$ is $c$-clustered. \item Let $q$ be the largest congruence on $T_\Sigma(X_p)$ with $q(T_\Sigma e_p(s), T_\Sigma e_p(t))\leq \epsilon$ (that is, the pointwise supremum of all such congruences). Form the corresponding quotient $e_q\colon T_\Sigma(X_p)\twoheadrightarrow E_q$. \end{itemize} A routine verification shows that \eqref{eq:cbasicgen0} and $e_q$ are expressively equivalent, i.e. satisfied by the same quantitative $\Sigma$-algebras. \end{enumerate} \medskip \noindent\textbf{Step 4.} From \autoref{thm:hspeq} and Example \ref{ex:running:variety}\ref{ex:running:mardare}, we deduce the following \begin{theorem}[Quantitative HSP Theorem]\label{thm:quanthsp} A class of quantitative $\Sigma$-algebras is a $c$-variety (i.e. closed under $c$-reflexive quotients, subalgebras, and products) iff it is axiomatizable by $c$-clustered equations. 
\end{theorem} The above theorem generalizes a recent result of Mardare, Panangaden, and Plotkin \cite{MardarePP17}, who considered only signatures $\Sigma$ with operations of finite or countably infinite arity and cardinal numbers $c\leq \aleph_1$. \autoref{thm:quanthsp} holds without any restrictions on $\Sigma$ and $c$. In addition to the quantitative HSP theorem, one can also derive the completeness of quantitative equational logic \cite{Mardare16} from our general completeness theorem; see Appendix \ref{sec:app:quant}. \subsection{Nominal Algebras}\label{sec:nomalg} In this section, we derive an HSP theorem for algebras in the category $\mathbf{Nom}$ of nominal sets and equivariant maps; see Pitts \cite{pitts_2013} for the required terminology. We denote by $\mathbb{A}$ the countably infinite set of atoms, by $\mathrm{Perm}(\mathbb{A})$ the group of finite permutations of $\mathbb{A}$, and by $\mathsf{supp}_X(x)$ the least support of an element $x$ of a nominal set $X$. Recall that $X$ is \emph{strong} if, for all $x\in X$ and $\pi\in \mathrm{Perm}(\mathbb{A})$, \[ [\forall a\in \mathsf{supp}_X(x): \pi(a)=a] \quad\text{$\iff$}\quad \pi\o x = x.\] A \emph{supported set} is a set $X$ equipped with a map $\mathsf{supp}_X\colon X\to \mathcal{P}_f(\mathbb{A})$. A \emph{morphism} $f\colon X \to Y$ of supported sets is a function with $\mathsf{supp}_Y(f(x))\subseteq \mathsf{supp}_X(x)$ for all $x\in X$. Every nominal set $X$ is a supported set w.r.t. its least-support map $\mathsf{supp}_X$. The following lemma, whose first part is a reformulation of \cite[Prop. 5.10]{msw16}, gives a useful description of strong nominal sets in terms of supported sets. \begin{lemma}\label{lem:nomreflective0} The forgetful functor from $\mathbf{Nom}$ to $\mathbf{SuppSet}$ has a left adjoint $F\colon \mathbf{SuppSet}\to \mathbf{Nom}$. The nominal sets of the form $FY$ ($Y\in \mathbf{SuppSet}$) are up to isomorphism exactly the strong nominal sets.
\end{lemma} Fix a finitary signature $\Sigma$. A \emph{nominal $\Sigma$-algebra} is a $\Sigma$-algebra $A$ carrying the structure of a nominal set such that all $\Sigma$-operations $\sigma\colon A^n\to A$ are equivariant. The forgetful functor from the category $\NomAlg{\Sigma}$ of nominal $\Sigma$-algebras and equivariant $\Sigma$-homomorphisms to $\mathbf{Nom}$ has a left adjoint assigning to each nominal set $X$ the \emph{free nominal $\Sigma$-algebra} $T_\Sigma X$, carried by the set of $\Sigma$-terms and with group action inherited from $X$. To derive a nominal HSP theorem from our general categorical results, we proceed as follows. \medskip \noindent \textbf{Step 1.} Choose the parameters of our setting as follows: \begin{itemize} \item $\mathscr{A} = \mathscr{A}_0 = \NomAlg{\Sigma}$; \item $(\mathcal{E},\mathcal{M})$ = (surjective morphisms, injective morphisms); \item $\Lambda = $ all cardinal numbers; \item $\mathscr{X} = \{\,T_\Sigma X \;:\; \text{$X$ is a strong nominal set} \,\}$. \end{itemize} One can show that a quotient $e\colon A\twoheadrightarrow B$ belongs to $\mathcal{E}_\mathscr{X}$ iff it is \emph{support-reflecting}: for every $b\in B$ there exists $a\in A$ with $e(a)=b$ and $\mathsf{supp}_A(a)=\mathsf{supp}_B(b)$. \medskip \noindent \textbf{Step 2.} A \emph{nominal congruence} on a nominal $\Sigma$-algebra $A$ is a $\Sigma$-algebra congruence $\mathord{\equiv}\subseteq A\times A$ that forms an equivariant subset of $A\times A$. In analogy to \eqref{eq:homtheorem}, there is an isomorphism of complete lattices \begin{equation}\label{eq:homtheoremnom0} \text{quotient algebras of $A$}\quad\cong\quad \text{nominal congruences on $A$}. \end{equation} \medskip \noindent \textbf{Step 3.} By \autoref{rem:singlequot}, an equation can be presented as a single quotient $e\colon T_\Sigma X\twoheadrightarrow E$, where $X$ is a strong nominal set. Equations can be described by syntactic means as follows.
A \emph{nominal $\Sigma$-term} over a set $Y$ of variables is an element of $T_\Sigma(\mathrm{Perm}(\mathbb{A})\times Y)$. Every map $h\colon Y\to A$ into a nominal $\Sigma$-algebra $A$ extends to the $\Sigma$-homomorphism \[\hat h = (\,T_\Sigma(\mathrm{Perm}(\mathbb{A})\times Y) \xrightarrow{T_\Sigma(\mathrm{Perm}(\mathbb{A})\times h)} T_\Sigma(\mathrm{Perm}(\mathbb{A})\times A) \xrightarrow{T_\Sigma (\mathord{-} \o \mathord{-})} T_\Sigma A \xrightarrow{\ext{\mathit{id}}} A\,)\] where $\ext{\mathit{id}}$ is the unique $\Sigma$-homomorphism extending the identity map $id\colon A\to A$. A \emph{nominal equation} over $Y$ is an expression of the form \begin{equation}\label{eq:nominaleq0}\mathsf{supp}_Y\vdash s=t,\end{equation} where $\mathsf{supp}_Y\colon Y\to \mathcal{P}_f(\mathbb{A})$ is a function and $s$ and $t$ are nominal $\Sigma$-terms over $Y$. A nominal $\Sigma$-algebra $A$ \emph{satisfies} the equation $\mathsf{supp}_Y\vdash s=t$ if for every map $h\colon Y\to A$ with $\mathsf{supp}_A(h(y))\subseteq \mathsf{supp}_Y(y)$ for all $y\in Y$ one has $\hat h(s)=\hat h(t)$. Equations and nominal equations are expressively equivalent: \begin{enumerate} \item Given an equation $e\colon T_\Sigma X\twoheadrightarrow E$ with $X$ a strong nominal set, choose a supported set $Y$ with $X=FY$, and denote by $\eta_Y\colon Y\to FY$ the universal map (see \autoref{lem:nomreflective0}). Form the nominal equations over $Y$ given by \begin{equation}\label{eq:nominaleq20} \mathsf{supp}_Y\vdash s=t \quad (\, s,t\in T_\Sigma (\mathrm{Perm}(\mathbb{A})\times Y)\text{ and } e\o T_\Sigma m(s) = e\o T_\Sigma m(t)\,) \end{equation} where $m$ is the composite $\mathrm{Perm}(\mathbb{A})\times Y \xrightarrow{\mathrm{Perm}(\mathbb{A})\times \eta_Y} \mathrm{Perm}(\mathbb{A})\times X \xrightarrow{\mathord{-}\cdot\mathord{-}} X$. It is not difficult to see that a nominal $\Sigma$-algebra satisfies $e$ iff it satisfies \eqref{eq:nominaleq20}. 
\item Conversely, given a nominal equation \eqref{eq:nominaleq0} over the set $Y$, let $X = FY$ and form the nominal congruence on $T_\Sigma X$ generated by the pair $(T_\Sigma m(s),T_\Sigma m(t))$, with $m$ defined as above. Let $e\colon T_\Sigma X\twoheadrightarrow E$ be the corresponding quotient, see \eqref{eq:homtheoremnom0}. One can show that a nominal $\Sigma$-algebra satisfies $e$ iff it satisfies \eqref{eq:nominaleq0}. \end{enumerate} \medskip\noindent\textbf{Step 4.} We thus deduce the following result as an instance of \autoref{thm:hspeq}: \begin{theorem}[Kurz and Petri\c{s}an \cite{KP10}]\label{thm:hspnominal0} A class of nominal $\Sigma$-algebras is a variety (i.e. closed under support-reflecting quotients, subalgebras, and products) iff it is axiomatizable by nominal equations. \end{theorem} For brevity and simplicity, in this section we restricted ourselves to algebras for a signature. Kurz and Petri\c{s}an proved a more general HSP theorem for algebras over an endofunctor on $\mathbf{Nom}$ with a suitable finitary presentation. This extra generality makes it possible to incorporate, for instance, algebras for binding signatures. \subsection{Further Applications} Let us briefly mention some additional instances of our framework, all of which are given a detailed treatment in the Appendix. \medskip\noindent \textbf{Ordered algebras.} Bloom \cite{bloom76} proved an HSP theorem for $\Sigma$-algebras in the category of posets: a class of such algebras is closed under homomorphic images, subalgebras, and products, iff it is axiomatizable by inequations $s\leq t$ between $\Sigma$-terms. This result can be derived much like the unordered case in \autoref{S:birkhoff}. \medskip\noindent \textbf{Continuous algebras.} A more intricate ordered version of Birkhoff's theorem concerns \emph{continuous algebras}, i.e. $\Sigma$-algebras with an $\omega$-cpo structure on their underlying set and continuous $\Sigma$-operations.
Ad\'amek, Nelson, and Reiterman \cite{adamek85} proved that a class of continuous algebras is closed under homomorphic images, subalgebras, and products, iff it is axiomatizable by inequations between terms with formal suprema (e.g.~$\sigma(x)\leq \vee_{i<\omega}\, c_i$). This result again emerges as an instance of our general HSP theorem. A somewhat curious feature of this application is that the appropriate factorization system $(\mathcal{E},\mathcal{M})$ takes as $\mathcal{E}$ the class of dense morphisms, i.e. morphisms of $\mathcal{E}$ are not necessarily surjective. However, one has $\mathcal{E}_\mathscr{X}$ = surjections, so homomorphic images are formed in the usual sense. \medskip\noindent \textbf{Abstract HSP theorems.} Our results subsume several existing categorical generalizations of Birkhoff's theorem. For instance, \autoref{thm:hsp} yields Manes' \cite{manes76} correspondence between quotient monads $\mathbb{T}\twoheadrightarrow \mathbb{T}'$ and varieties of $\mathbb{T}$-algebras for any monad $\mathbb{T}$ on $\mathbf{Set}$. Similarly, Banaschewski and Herrlich's \cite{BanHerr1976} HSP theorem for objects in categories with enough projectives is a special case of \autoref{thm:hspeq}. \section{Conclusions and Future Work}\label{S:future} We have presented a categorical approach to the model theory of algebras with additional structure. Our framework applies to a broad range of different settings and greatly simplifies the derivation of HSP-type theorems and completeness results for equational deduction systems, as the generic part of such derivations now comes for free using our Theorems \ref{thm:hsp}, \ref{thm:hspeq} and \ref{thm:eqlogicsoundcomplete}. There remain a number of interesting directions and open questions for future work. As shown in \autoref{S:app}, the key to arrive at a syntactic notion of equation lies in identifying a correspondence between quotients and suitable relations, which we informally coined ``exactness''.
The similarity of these correspondences in our applications suggests that there should be a (possibly enriched) notion of \emph{exact category} that covers our examples; cf. Kurz and Velebil's \cite{kv17} $2$-categorical view of ordered algebras. This would allow more of the work to be moved to the generic theory. \autoref{thm:eqlogicsoundcomplete} can be used to recover several known sound and complete equational logics, but it also applies to settings where no such logic is known, for instance, a logic of profinite equations (however, cf.~recent work of Almeida and Kl\'ima \cite{ak17}). In each case, the challenge is to translate our two abstract proof rules into concrete syntax, which requires the identification of a syntactic equivalent of the two properties of an equational theory. While substitution invariance always translates into a syntactic substitution rule in a straightforward manner, $\mathcal{E}_\mathscr{X}$-completeness does not appear to have an obvious syntactic counterpart. In most of the cases where a concrete equational logic is known, this issue is obfuscated by the fact that one has $\mathcal{E}_\mathscr{X}=\mathcal{E}$, so $\mathcal{E}_\mathscr{X}$-completeness becomes a trivial property. Finding a syntactic account of $\mathcal{E}_\mathscr{X}$-completeness remains an open problem. One notable case where $\mathcal{E}_\mathscr{X}\neq \mathcal{E}$ is that of nominal algebras. Gabbay's work \cite{gabbay09} does provide an HSP theorem and a sound and complete equational logic in a setting slightly different from \autoref{sec:nomalg}, and it should be interesting to see whether this can be obtained as an instance of our framework. Finally, in previous work \cite{uacm17} we have introduced the notion of a \emph{profinite theory} (a special case of the equational theories in the present paper) and shown how the dual concept can be used to derive Eilenberg-type correspondences between varieties of languages and pseudovarieties of finite algebras.
Our present results pave the way to an extension of this method to new settings, such as nominal sets. Indeed, a simple modification of the parameters in \autoref{sec:nomalg} yields a new HSP theorem for \emph{orbit-finite} nominal $\Sigma$-algebras. We expect that a dualization of this result in the spirit of \emph{loc.~cit.} leads to a correspondence between varieties of data languages and varieties of orbit-finite nominal monoids, an important step towards an algebraic theory of data languages. \bibliographystyle{splncs03}
\section{Introduction} The key aim of mathematical, computational and systems biology is to achieve a holistic understanding of biological systems~\cite{Kitano2002}. For decades this aim was mainly approached by moving from the study of single molecules to the analysis of biological networks. Starting with the groundbreaking work of Hodgkin \& Huxley~\cite{HodgkinHux1952}, who described the dynamics of individual neurones using ordinary differential equations, a successful revolution began, which resulted in a significantly improved understanding of gene regulation, signalling, metabolism and many other processes~\cite{KlippBook2005}. Large-scale models for individual pathways and collections of pathways have been developed (see e.g.~\cite{SchoeberlEic2002,DuarteHer2004,Klipp2005,AlbeckBur2008b,Schlatter2009b,XuVys2010,BachmannRau2011}) along with powerful theoretical and computational methods. Despite these great successes, it has been recognised that the study of cellular networks is only the first step, as multi-cellular organisms are more than a collection of pathways. A mechanistic understanding of complex biological functions and processes requires the consideration of multiple spatial and temporal scales. During the last two decades, multi-scale models for a variety of processes have been developed, such as the beating of the heart~\cite{Nobel2002,HunterBor2003}, the development of cancer~\cite{AndersonQua2008} in all its subtypes, drug delivery and action~\cite{EissingKue2011,SchallerWil2013}, or data processing in the brain~\cite{PetzoldAlb2008}. These multi-scale and multi-physics models were derived by expanding existing models using top-down, bottom-up or so-called middle-out approaches~\cite{BruggemanWes2007}, as well as by integrating existing models for different aspects.
As the development of coherent multi-scale models requires expertise on different biological scales, large consortia have been formed, bringing together researchers to tackle complex questions. One great example is the World Physiome Initiative with the Physiome Project~\cite{HunterBor2003,PhysiomeProject}, which unites researchers from all around the globe. Among others, the Physiome Project demonstrated that multi-scale models can indeed deepen our insights~\cite{tenTusscherNob2004}, e.g., of the human heart, by integrating models for different processes. Depending on the question, a significant benefit in general does not even require the consideration of all possible (intermediate) biological scales~\cite{SchallerWil2013}. Despite the demonstrated applicability of multi-scale models, many important theoretical results and computational methods for their analysis are still missing. Multi-scale models are often obtained by coupling different model classes, e.g., logical models, ordinary differential equations (ODE), partial differential equations (PDE) or agent-based models, and a coherent theory for model properties such as stability, sensitivity and bifurcations is mostly not present. More importantly from a practical point of view, methods for model-based multi-scale data integration are hardly available. Model-based data integration via parameter estimation~\cite{Tarantola2005} is essential for the evaluation of the consistency of models and the underlying hypotheses with experimental findings. It enables hypothesis testing and generation, uncertainty quantification, as well as the prediction of process dynamics under unknown conditions. Model-based data integration was perhaps the most important reason why mechanistic pathway models were so successful in the past~(see \cite{SchoeberlPac2009}). To exploit the full potential of multi-scale models, the collection of rigorous theoretical and computational methods has to be extended.
The field of multi-scale modelling has grown rapidly in recent years. Recent reviews focus on modelling and simulation \cite{HunterBor2003,MartinsFer2010,DadaMen2011,WalpolePap2013}. Methodological challenges and potential solutions, e.g.~in the context of parameter estimation for multi-scale models, are hardly addressed by the published reviews. As this discussion is, in our opinion, urgently needed, we complement previous reviews by focussing on multi-scale model inference, in particular on methods for optimisation, uncertainty analysis and model selection. Furthermore, we discuss novel approaches which can help to advance inference of multi-scale models. We introduce concepts to decrease the computational complexity associated with multi-scale simulations, including reduced order and surrogate modelling. Furthermore, interesting theoretical concepts for the analysis of interconnected systems are highlighted. The review is structured as follows: In Section~\ref{sec: modelling}, common single- and multi-scale modelling approaches are summarised from a mathematical perspective. Furthermore, dependencies between spatial and temporal scales are discussed, along with a few groundbreaking contributions to the field of multi-scale modelling. In Section~\ref{sec: analysis and simulation}, we outline available analysis tools for multi-scale models, discuss the reasons for the limitations and provide suggestions for future research approaches. In Section~\ref{sec: parameter estimation and model selection} -- the central part of this paper -- we turn our attention to the data-driven modelling of multi-scale biological processes. We outline available optimisation and uncertainty analysis methods for model-based multi-scale data integration, discuss the limitations arising from computational complexity and outline methods which could be used to overcome them. The manuscript concludes in Section~\ref{sec: conclusion}.
\section{Modelling} \label{sec: modelling} The term ``multi-scale models'' is widely used, but for many years the scientific community lacked an appropriate definition. One has recently been provided by the \textit{Inter\-agency Modelling and Analysis Group (IMAG)}~\cite{IMAG2015}: \\[2ex] \textit{``Multi-scale, biomedical modelling uses mathematics and computation to represent and simulate a physiological system at more than one biological scale. Biological scales include atomic, molecular, molecular complexes, sub-cellular, cellular, multi-cell systems, tissue, organ, multi-organ systems, organism, population, and behaviour. These multi-scale biomedical models may also include dynamical processes which span multiple time and length scales.''} \\[2ex] This definition is, like many others, very broad, in order to capture a complete research field. In the following, we will try to provide some more details on modelling approaches for individual biological scales and their integration into multi-scale models. Furthermore, we will briefly discuss the relation of time and length scales. \subsection{Modelling of individual biological scales and processes} In bio- and life-sciences, many different multi-scale modelling approaches are used. This variety of multi-scale modelling approaches is caused by the variety of modelling approaches for the individual biological scales (see~Figure~\ref{fig: models for individual scales} and discussion below). In the following, we outline the most common modelling approaches and model classes used for the study of individual spatial scales. Our understanding of biological scales coincides with the IMAG definition for multi-scale, biomedical modelling, but to ensure stringency some scales are lumped together. For the different modelling approaches, we will outline key mathematical properties. Furthermore, we summarise how these properties are exploited to capture the processes occurring on a particular scale.
\subsubsection{Atomic and molecular scale} The building blocks of biological systems are deoxyribonucleic acids (DNAs), ribonucleic acids (RNAs), proteins, lipids, and metabolites. The structure, formation (e.g., folding) and interaction of these biomolecules are commonly described and analysed using quantum mechanic (QM) models and molecular mechanic (MM) models~\cite{Cramer2004}. Quantum mechanic models provide the most detailed description of atomic and subatomic processes. For their simulation, \textit{ab~initio}, density functional and (semi-)empirical methods are available~\cite{Cramer2004}. As these methods are computationally demanding even on very short time-scales, only small molecules can be analysed using fully quantum mechanic models. To simulate larger biomolecules, molecular mechanic models are used. These models exploit a cruder description of molecules and describe atoms as point masses and point charges which interact via spring-like interactions and van der Waals forces~\cite{BurkertAll1982}. Mathematically, molecular mechanic models can be viewed as systems of interacting agents, whose dynamics are captured by differential equations. To capture important details, e.g., in the reaction centre of an enzyme, while being able to simulate large molecules, quantum mechanic / molecular mechanic (QM/MM) models have been developed~\cite{WarshelLev1976}. To enable the study of larger biomolecules, coarse-grained and lattice models are employed. These models represent groups of atoms by a point mass and charge~\cite{SmithHal2001}. \subsubsection{Sub-cellular and cellular scale} The interaction of biomolecules gives rise to biological pathways and processes, such as gene regulation, signal transduction and metabolism.
These processes have been modelled using, for example, continuous-time discrete-state Markov chains (CTMCs)~\cite{Gillespie1977,McAdamArk1997,Gillespie2000}, stochastic differential equations (SDEs)~\cite{Gillespie2000}, ordinary differential equations (ODEs)~\cite{KlippBook2005}, partial differential equations (PDEs)~\cite{SchaffFin1997,MoraruLoe2005}, agent-based models~\cite{TurnerSch2004}, stochastic or deterministic Boolean models~\cite{KauffmanPet2003,KlamtHau2009,KrumsiekMar2011}, Petri nets~\cite{Chaouiya2007,MarwanWag2011} and constraint-based models~\cite{DuarteHer2004}. Boolean and Petri net models are often referred to as qualitative models, as other model classes enable a more quantitative description. Gene regulation is mostly modelled using CTMCs~\cite{ShahrezaeiSwa2008}, Boolean models~\cite{WittmannBlo2009} and Petri nets~\cite{MarwanWag2011}. These models capture the discreteness and the stochasticity of the processes arising from the small copy number of genes. The dynamics of signalling pathways are mostly described using SDEs~\cite{Wilkinson2009,Fuchs2010} and ODEs~\cite{HodgkinHux1952,SchoeberlEic2002}, as the large abundances of the involved biochemical species support a continuous interpretation in terms of concentrations. For the assessment of the qualitative properties of signalling pathways, also Boolean models are used~\cite{Schlatter2009b}. As the availability of spatial information increases, PDEs, spatially resolved CTMCs and agent-based modelling approaches are also used more commonly~\cite{MoraruLoe2005,KlannLap2009}. For the description of metabolism, ODE-based formulations are often exploited. Under the assumption of fast temporal dynamics, these ODE models can be used to derive constraint-based formulations which merely require information about the stoichiometry of the biochemical reactions~\cite{DuarteHer2004}.
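The CTMC description referenced above can be simulated with Gillespie's direct method~\cite{Gillespie1977}. As a minimal sketch, consider mRNA birth--death (production at rate $k$, degradation at rate $\gamma$); the parameter values below are purely illustrative:

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, m0=0, t_end=5.0, seed=1):
    """Sample one trajectory of the CTMC  0 --k--> mRNA --gamma--> 0
    using Gillespie's direct method."""
    rng = random.Random(seed)
    t, m = 0.0, m0
    times, counts = [t], [m]
    while True:
        a1, a2 = k, gamma * m          # propensities of birth and death
        a0 = a1 + a2
        t += rng.expovariate(a0)       # exponentially distributed waiting time
        if t > t_end:
            break
        if rng.random() * a0 < a1:     # choose the next reaction
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return times, counts

times, counts = gillespie_birth_death()
```

Averaging many such trajectories recovers the moments that ODE models describe deterministically; for small copy numbers, however, individual trajectories fluctuate strongly around the mean.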
\begin{figure*}[p] \centering \includegraphics[width=\textwidth]{fig1} \caption{Visual summary of the types of mathematical models used to describe biological processes on different scales. A coloured box indicates that the respective type of mathematical model is routinely used for models of the respective spatial scale. The spatial scales are split up into subcategories to allow for additional insight.} \label{fig: models for individual scales} \end{figure*} \subsubsection{Multi-cellular and tissue scale} The variability of individual cells and the dependence of cellular dynamics on the current cell state, the history of individual cells, environmental conditions and/or the spatial location in the tissue are captured by cell population models. The first population model was introduced by von Foerster~\cite{vonFoerster1959}. It described populations of non-interacting, proliferating cells. This groundbreaking work on structured population models -- systems of PDEs -- was followed by theoretical contributions and model extensions (see~\cite{Trucco1965,Oldfield1966,SinkoStr1967,LuzyaninaRoo2007,BanksSut2010,HasenauerSch2012b} and references therein). To study stochastic dynamics of gene regulation and signal transduction in non-dividing, heterogeneous cell populations, the Chemical Master Equation (CME)~\cite{Gillespie1992} and the Fokker-Planck equation (FPE)~\cite{Gardiner2004} are employed. To account for mixed, stochastic and deterministic sources of cell-to-cell variability, extensions to the Chemical Master Equation~\cite{ZechnerRue2012,StamatakisZyg2010} and the Fokker-Planck equation~\cite{HasenauerWal2011b,HasenauerPhdThesis2013} have also been introduced. The spatial organisation present in biological tissues is typically captured by using either agent-based or PDE models.
The first model class uses agents to represent cells: either each cell individually (1 agent = 1 cell)~\cite{AndersonQua2008,BuskeGal2011,LangMar2011,WalkerGeo2008,Jagiella2012,HoehmeBru2010}, or coarse-grained compartments of several cells (1 agent = many cells)~\cite{Drasdo2005}, or a single cell by an ensemble of agents (many agents = 1 cell)~\cite{GranerGlazier1992,Newman2005}. The latter, e.g. cellular Potts models, enable a detailed description of cell-cell interaction and cell movement. These models allow for direct interactions of cells. An alternative to agent-based models is provided by cellular automata~\cite{GerhardtSch1989,PeakWes2004,DeutschDor2005}, which abstract the detailed biological interactions to rules. The second model class -- PDE models -- on the other hand provides a more macroscopic view, describing the cell densities as a continuum in space and capturing their temporal evolution. There is significant work available on PDE models for avascular tumour growth~\cite{PreziosiTos2009,WiseLow2008}, angiogenesis~\cite{TosinAmb2006}, developmental processes~\cite{WittmannBlo2009,HockNg2013}, bacterial biofilms~\cite{EfendievBook2013} and many other important processes. \subsubsection{Organ scale} Organs consist of many different tissues and possess a complex spatial organisation; however, the length scales are too large to model individual cells. To enable organ-scale simulation models, homogenisation methods have been developed. These methods originated in the field of porous media and use local averaging to enable a PDE-based description of flow, transport and biomechanics processes in tissues as well as intracellular dynamics~\cite{HunterBor2003,BassingthwaighteRay2010,ChapmanShi2008,ErbertsederRei2012,EhlersWag2013,KarajanRoh2013}. The coupling of intra- and extracellular dynamics yields bidomain models~\cite{Whiteley2008}. Commonly, these bidomain PDE models are coupled with ODE models for the vascular transport~\cite{LiChe1993,ReicholdSta2009}.
As organs possess different functions, their physiological characteristics are diverse. This diversity is also reflected by the models. Models of the heart~\cite{HunterBor2003,BassingthwaighteRay2010} and other muscles~\cite{KarajanRoh2013} focus on tissue mechanics while models of the lung~\cite{LiChe1993,ErbertsederRei2012}, the brain~\cite{ReicholdSta2009,EhlersWag2013}, the craniocervical region~\cite{RutkowskaHau2012}, and solid tumours~\cite{ChapmanShi2008} focus on transport and delivery processes. Accordingly, the underlying mathematical descriptions and equation types differ. \subsubsection{Multi-organ and organism scale} On the organism scale, the properties of and crosstalk between organs are studied, e.g., in the context of drug delivery and treatment. The crosstalk between organs is mostly mediated via the vascular and the lymphatic system, which are used for the transport of substances between organs. Additionally, the nervous system mediates the information exchange of sensory organs and muscles with the brain. Models for the organism scale consider the body physiology~\cite{DerendorfMei1999}, e.g., the volume of organs and large blood vessels as well as the available area for substance exchange and vessel permeability. These physiological parameters, which are often readily accessible, allow for the parameterisation of multi-compartment models. The individual compartments are organs and large blood vessels. In pharmacokinetics (PK), ODE-based multi-compartment models are used to describe liberation, absorption, distribution, metabolism and excretion of drugs~\cite{RodgersLea2005,RodgersRow2006}. These pharmacokinetic models have been combined with pharmacodynamic (PD) models which describe the biochemical and physiological effects of drugs on the body~\cite{Nestorov2007,EdgintonThe2008,EdgintonWil2008}.
The resulting physiology-based pharmacokinetic/pharmacodynamic (PK/PD) models~\cite{EissingKue2011} are widely used in the pharmaceutical industry for drug discovery, decision making and data integration~\cite{KuepferLip2012}. Most PK/PD models rely on a mathematical description in terms of ODEs. To capture fast metabolic processes, hybrid models which combine ODEs and constraint-based modelling have recently been introduced~\cite{KraussSch2012}. Furthermore, in larger consortia like the Physiome Project~\cite{PhysiomeProject} and the Virtual Liver Network~\cite{HolzhutterDra2012,KuepferKer2014}, PK/PD models are integrated with PDE and agent-based models for a refined description of the organs of interest. \subsubsection{Population and behaviour scale} As inter-individual variability is pervasive, personalised PK/PD models have been developed. These personalised models account, among other factors, for differences in gender, age, height, body mass, general health and genetic alterations~\cite{EissingKue2011}. On the population scale, this yields mixed-effect models~\cite{Pinheiro1994,WillmannHoh2007}. These mixed-effect models often rely on an ODE-based description of individuals and provide a description of the population dynamics in terms of a high-dimensional PDE, a Liouville-type equation. As the dimensionality is prohibitive for a direct simulation of the PDE, representative sets of ODE simulations are used to evaluate the population statistics. The corresponding results can nowadays be exploited in personalised medicine~\cite{EissingKue2011}. In the social sciences, mathematical modelling is employed to study the interactions of individuals and the dynamics of groups~\cite{Bonabeau2002,Kennedy2012}. Here, agent-based models are frequently used as they provide a natural description of many processes, e.g., evacuation or traffic. Accordingly, they can be used to capture and to predict emergent phenomena.
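The ensemble approach to mixed-effect models sketched above -- simulating representative individuals and evaluating population statistics -- can be illustrated with a one-compartment PK model whose elimination rate varies log-normally across individuals. The model structure is standard, but the distribution, the rate constants and the dose below are our own illustrative choices:

```python
import math
import random

def simulate_individual(ke, dose=100.0, ka=1.0, dt=1e-2, t_end=24.0):
    """One-compartment PK model with first-order absorption,
    integrated with explicit Euler; returns the central amount over time."""
    gut, central = dose, 0.0
    course = []
    for _ in range(int(t_end / dt)):
        course.append(central)
        d_gut = -ka * gut
        d_cen = ka * gut - ke * central
        gut += dt * d_gut
        central += dt * d_cen
    return course

def population_statistics(n=200, ke_median=0.3, ke_sigma=0.3, seed=0):
    """Mixed-effect-style ensemble: sample individual elimination rates
    from a log-normal population distribution, simulate each individual,
    and evaluate mean and standard deviation of the peak drug amount."""
    rng = random.Random(seed)
    peaks = []
    for _ in range(n):
        ke = ke_median * math.exp(ke_sigma * rng.gauss(0.0, 1.0))
        peaks.append(max(simulate_individual(ke)))
    mean = sum(peaks) / n
    var = sum((p - mean) ** 2 for p in peaks) / (n - 1)
    return mean, var ** 0.5
```

The ensemble of ODE trajectories plays the role of the representative simulations mentioned above: population statistics are evaluated from samples instead of solving the high-dimensional Liouville-type PDE directly.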
\\[3ex] A visualisation of the modelling approaches exploited to describe individual biological scales is provided in Figure~\ref{fig: models for individual scales}. We find that the class of modelling approaches used to capture processes on the cellular scale is the most versatile. This is probably related to the focus of systems biology on the cellular scale. Furthermore, ODE and PDE models are used across scales and are the most common modelling approaches. The reason for this is most likely the existence of well-established, deep theoretical results for these models~\cite{CoddingtonLev1955,Evans1998} and the availability of computationally efficient simulation methods (see~\cite{HindmarshBro2005,WendlandEfe2003} and references therein). \subsection{Multi-scale modelling} In the previous section, various modelling approaches for biological processes were outlined. The reason for the variety is the multi-scale and multi-physics nature of biological systems. To span different scales and processes, different models / model classes are necessary and have to be coupled. In the following, we will briefly outline different multi-scale modelling philosophies. We will discuss simulation and coupling approaches and conclude with a number of well-known multi-scale models. Throughout this section we will use the terms micro-scale and macro-scale to indicate small and large spatial scales, respectively. \subsubsection{Aims of multi-scale modelling} The ultimate aim of multi-scale modelling is to describe the macroscopic behaviour of complex systems using microscopic models derived from first-principles. Accordingly, many scientists argue that: \\[2ex] \textit{``The task of multi-scale modelling is to carry out macro-scale simulations without making use of ad hoc constitutive relations.
Instead the necessary micro-scale constitutive information is extracted from micro-scale models.''}~\cite{EEng2007} \\[2ex] This would allow for a holistic analysis based on fully mechanistic descriptions. While the defined task is appealing, the majority of multi-scale modelling in biology follows a different philosophy. In biology, a variety of different physics have to be covered, ranging from molecule-molecule interactions via signalling to fluid and tissue mechanics. As a simultaneous first-principle modelling of these different physics is currently beyond reach, many research projects focus on the coupling of established macroscopic models to capture and study the multi-physics nature of biological systems~\cite{HunterBor2003}. Examples are multi-scale pharmacokinetic and pharmacodynamic models, which describe fluid transport as well as intracellular signalling~\cite{EissingKue2011,SchallerWil2013}. In a preliminary step, the individual macroscopic models are derived from first principles and then used in coupled multi-scale, multi-physics simulations. As the modelling of individual processes is relatively well established, many systems and computational biologists currently aim at linking models of different processes. The two aforementioned aims \\[1ex] \textbf{Aim\;1:}\;\textit{Multi-scale modelling using first-principles.} \\[1ex] \textbf{Aim\;2:}\;\textit{Multi-scale and multi-physics modelling using coupled macroscopic models.} \\[1ex] possess a significant overlap. However, as the approaches and methods developed to reach these aims differ, we will discuss them separately in the following. In Section~2.2.2 we will discuss simulation methods for multi-scale models, while in Section~2.2.3 we focus on coupled multi-scale, multi-physics models.
\subsubsection{Multi-scale modelling using first-principles} Multi-scale modelling using first principles relies on the simulation of macro-scale processes using models with micro-scale resolution. As this is often computationally infeasible, \textit{multi-resolution methods}~\cite{Barth2001} and \textit{equation-free modelling approaches}~\cite{KevrekidisSam2009} have been developed. Multi-resolution methods, including adaptive mesh refinement and multi-grid methods~\cite{TrottenbergOos2001,AscherHab2003}, are numerical schemes which reduce the number of micro-scale model evaluations by performing a successive (local) refinement. While multi-resolution methods are well established, the consideration of computationally demanding micro-scale models is often problematic. To circumvent long-term micro-scale simulations, equation-free modelling can be used. Equation-free modelling exploits an on-the-fly coupling of microscopic and macroscopic simulations~\cite{EEng2003}. A macroscopic model/solver is used, which might require missing macroscopic information. This missing macroscopic information is obtained using short microscopic simulations. To this end, the microscopic model is first constrained using the available macroscopic data (=~lifting); under these constraints, the microscopic model is evaluated for a short time interval (=~microscopic simulation); and the resulting simulation data are used to constrain the macroscopic model (=~restriction). Several equation-free modelling approaches have been proposed, among others, \textit{heterogeneous multi-scale methods}~\cite{EEng2003}, \textit{(coarse) projective integration} and the \textit{gap-tooth scheme}~\cite{KevrekidisGea2003}. All these methods treat microscopic models as black boxes and merely use simulation results, which led to the term equation-free modelling.
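The lifting--simulation--restriction loop of coarse projective integration can be sketched for a toy problem: a noisy microscopic decay process whose ensemble mean approximately obeys $\dot{X} = -X$. All functions, noise levels and step sizes below are illustrative assumptions, not a reference implementation of the cited schemes:

```python
import random

def micro_step(particles, dt):
    # toy microscopic model: noisy exponential decay of each particle
    return [x - x * dt + random.gauss(0.0, 0.01) * dt ** 0.5 for x in particles]

def lift(X, n=1000):
    # lifting: create a microscopic ensemble consistent with the macro state X
    return [X + random.gauss(0.0, 0.01) for _ in range(n)]

def restrict(particles):
    # restriction: macroscopic observable = ensemble mean
    return sum(particles) / len(particles)

def projective_step(X, dt_micro=1e-3, n_micro=20, dt_macro=0.1):
    particles = lift(X)
    a = restrict(particles)
    for _ in range(n_micro):                 # short microscopic burst
        particles = micro_step(particles, dt_micro)
    b = restrict(particles)
    # estimate the (unavailable) macroscopic time derivative and project forward
    dXdt = (b - a) / (n_micro * dt_micro)
    return b + (dt_macro - n_micro * dt_micro) * dXdt

random.seed(0)
X = 1.0
for _ in range(10):
    X = projective_step(X)   # approximates dX/dt = -X over t = 0..1
```

The macroscopic equation is never written down; its time derivative is estimated from the short microscopic bursts, which is what makes the scheme "equation-free".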
\subsubsection{Multi-scale and multi-physics modelling using coupled macroscopic models} \label{sec: Multi-physics modelling using coupled macroscopic models} While the available mathematical results and computational approaches for the development of first-principles multi-scale models are promising, application examples in biology and medicine are limited. In biology, the modelling of complex multi-physics systems is so far mostly approached by coupling models which describe different aspects of the underlying biological systems. A nice example is the integrated physiologically-based whole-body model recently published by Schaller et al.~\cite{SchallerWil2013}. This model describes the glucose-insulin-glucagon regulatory system by coupling ODE models for whole-body pharmacodynamics, sub-organ distribution and intracellular metabolism. The individual models describe completely different aspects, and unlike the first-principle models described in Section~2.2.2, the models do not provide different resolutions (e.g., micro- and macro-scale). Even the same class of mathematical models, namely ODEs, has been used to capture the different processes. Multi-physics modelling requires the coupling of different models. This can be challenging, even if the individual models describe similar processes and evolve on similar time- and length-scales~\cite{NealCoo2014,KrauseUhl2010}. Beyond interface variables, which are discussed in Section~2.2.4, tailored simulation approaches have to be introduced. Walpole et al.~\cite{WalpolePap2013} introduced the terms \textit{series simulation}, \textit{parallel simulation} and \textit{integrated simulation} to provide a nomenclature for categorising simulation approaches. Series simulations acquire information by simulating one scale and provide this information as input to the next scale (without feedback). Parallel and integrated simulations require communication (of different intensities) between simulations.
Parallel and integrated simulations are often computationally demanding. To circumvent this problem, approximations based on \textit{spatial homogenisation} and \textit{time-scale separation} have been introduced. The underlying assumption of spatial homogenisation is that within a particular volume element the process with the large length scale is roughly constant. A single evaluation of the small-length-scale process can then be used to analyse the respective volume element~\cite{ChapmanShi2008}. On the tissue level, the homogenisation would basically assume that the concentration of a substance in a particular region of an organ is constant. Although this region might be occupied by many cells, all cells perceive the same or at least a similar environment; it is therefore appropriate to assume that all cells behave similarly and to evaluate merely a representative cell. Using this and related homogenisation approaches, the effective resolution required on the smaller scales can be reduced, which decreases the overall complexity of the simulation~\cite{ErbertsederRei2012,RoehrleSpr2013}. An alternative to spatial homogenisation is provided by the \textit{adaptive tabulation multi-scale approach}~\cite{Pope1997,Hedengren2008}. This method exploits the fact that many simulations with similar input sequences and model parameters are required, and reuses stored simulation results to reduce the computational complexity. In addition to the spatial properties, large differences in the time-scales of different processes can be used to simplify simulations. This has already been exploited by Michaelis \& Menten~\cite{MichaelisMen1913} for the study of enzymatic reactions as well as in many other fields, such as stochastic simulations~\cite{HaseltineRaw2002}. In recent decades, these ideas have been extended to multi-scale processes. Multi-scale numerical schemes nowadays allow for the efficient simulation of coupled ODE-PDE systems~\cite{Whiteley2008}.
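The classical example of Michaelis \& Menten can be made explicit; the following is the textbook quasi-steady-state reduction, stated here for illustration rather than taken from the cited works. For the reaction scheme $E + S \rightleftharpoons ES \rightarrow E + P$, with association rate $k_1$, dissociation rate $k_{-1}$ and catalytic rate $k_2$, the complex $ES$ equilibrates on the fast time-scale. Setting $\mathrm{d}[ES]/\mathrm{d}t \approx 0$ and using the conservation relation $[E] + [ES] = E_{\mathrm{tot}}$ yields the reduced slow dynamics
\[
\frac{\mathrm{d}[P]}{\mathrm{d}t} \approx \frac{V_{\max}\,[S]}{K_M + [S]}, \qquad V_{\max} = k_2\,E_{\mathrm{tot}}, \qquad K_M = \frac{k_{-1} + k_2}{k_1},
\]
in which the fast variable $[ES]$ has been eliminated and the dimension of the model is reduced.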
Furthermore, in agent-based modelling, the time-resolved simulation of diffusion is circumvented based on time-scale separation~\cite{Jagiella2012,HoehmeBru2010}. \\[2ex] \textbf{Remark:} Biological processes span many spatial and temporal scales. Intuitively, one might assume that microscopic processes are faster than macroscopic processes and, indeed, we find a correlation (Figure~\ref{fig: time and length scale dependence}). However, if we are interested in a particular biological process this does not have to hold. The example of anti-cancer treatment, involving drug distribution on the organism scale and apoptosis/necrosis induction on the cellular scale, shows that microscopic processes can be slower than macroscopic processes. This has to be appreciated during model and algorithm development. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig2} \caption{Relation of spatial and temporal scales. For a variety of biological processes the spatial and temporal scales are roughly indicated. The list is not comprehensive, but we find a clear correlation of spatial and temporal scales.} \label{fig: time and length scale dependence} \end{figure*} \subsubsection{Coupling of models and interface variables} A key ingredient of multi-scale and multi-physics models developed to achieve Aims~1 and~2 is the coupling of models. Approaches to couple individual scales are as diverse as modelling approaches for individual biological scales. However, despite differences, an essential element of any coupling scheme is the definition of interface variables. These are properties of microscopic models which influence the macroscopic model or vice versa. We propose to distinguish two classes of interface variables and physical coupling types: \textit{input-output coupling} and \textit{direct coupling}.
In input-output coupling, each interface variable depends either on the state of the microscopic system or on the state of the macroscopic system~(Figure~\ref{fig: interface}, left), but not on both. The interface variables are therefore inputs and outputs for the individual scales. In direct coupling, interface variables are states shared between the microscopic and macroscopic models~(Figure~\ref{fig: interface}, right). Examples of interface variables on the cellular and tissue scale are the instantaneous secretion rate of a molecule from a cell, which enters as a source term in the spatial simulation of the tissue, and the tissue-level concentration of a diffusive ligand which is bound by individual cells. The instantaneous secretion rate depends only on the current cellular state. Hence, instantaneous secretion rates provide an input-output coupling. The ligand concentration is directly influenced by tissue-level diffusion and cellular binding. Accordingly, this state is shared between the microscopic and macroscopic models, which establishes an overlap of the models and a direct coupling. This direct coupling is often captured by models for the interfaces. Models only containing input-output couplings are generally easier to simulate and to analyse as modularity is ensured~\cite{Kitano2002b}. The overall problem can be decomposed into subproblems for which tailored methods might be available~\cite{EhlersZin2013}. Even toolboxes for generic (PDE) problems are available for input-output type coupling~\cite{LarsonJac2005}. Multi-scale models including one or more direct couplings require integrated simulation methods. Such methods have been, for instance, developed for coupled quantum mechanic and molecular mechanic simulations~\cite{WarshelLev1976}.
In quantum mechanic / molecular mechanic (QM-MM) models, e.g., for enzymatic reactions, the active centres are resolved using quantum mechanic models, while the rest of the potentially large molecule is formulated using molecular mechanic models. For the combined model, a joint energy function is derived which can be used for simulation. \\[2ex] \textbf{Remark:} The coupling type depends on the choice of interface variables. Hence, (potential) degrees of freedom can be used to render analysis and simulation more tractable. \begin{figure}[t] \centering \includegraphics[scale=0.5]{fig3} \caption{Illustration of input-output coupling and direct coupling. Input-output coupling provides an instantaneous interaction, while in direct coupling the interface possesses dynamics and is shared between micro- and macro-scale models.} \label{fig: interface} \end{figure} \subsubsection{Case studies} The methods introduced in the previous sections have been used to develop multi-scale models for a broad spectrum of biological processes. In this section, we will briefly present five well-known models and discuss their structure. A visual summary of the case studies is depicted in Figure~\ref{fig: case studies}. \\[1ex] \textit{Whole-heart model:} The heart was among the first organs for which holistic models were developed~\cite{HunterBor2003}. As part of the Physiome Project, bidomain models -- coupled ODE-PDE models -- for cardiac electromechanics and cardiac electrophysiology have been developed~\cite{PullanTom2002,Trayanova2011,NielsenLys2013}. Based on 3D imaging data, molecular data and electrocardiograms, these whole-heart models account for tissue deformation and heterogeneities, transport processes, as well as intracellular signalling. The different macroscopic models are mostly linked using input-output coupling and integrated simulations are employed.
A transfer of these models into clinical practice, e.g., to improve diagnosis and risk assessment, seems feasible. Ischemic lesions have been detected in individual patients using model-based multi-scale data integration, and the effect of drugs on pump function can be predicted~\cite{NielsenLys2013}. \\[1ex] \textit{Cancer growth model:} As cancers are among the most common causes of death, the field of integrative mathematical oncology has developed. This field seeks to develop multi-scale models for cancer progression and the delineation of key cancer mechanisms~\cite{AndersonQua2008}. To achieve this aim, hybrid discrete-continuum models describing discrete individual cells and continuous distributions of nutrients and signalling molecules are used. Single-cell dynamics are governed by stochastic or deterministic models, while cell-cell interaction is captured using agent-based on-lattice/lattice-based or off-lattice/lattice-free models~\cite{Jagiella2012}. The properties of this first-principle multi-scale model are evaluated using integrated simulations. Parameterised multi-scale models have already contributed to the understanding of complex spatial phenomena, such as fingering~\cite{AndersonQua2008}, and will probably play a key role in the development of personalised treatment strategies. Key advantages of these model-based approaches are that the impact of genetic alterations on cellular phenotypes and ultimately on cancer progression and treatment response might be predicted, and that patient-specific data about molecule characteristics, tumour form and stage can be accounted for~\cite{HendersonOgi2014}. \\[1ex] \textit{Glucose-insulin-glucagon regulation model:} Our understanding of diabetes -- one of the world's leading health problems -- has recently been improved using a multi-scale model for glucose-insulin-glucagon regulation.
Using a high-dimensional system of ODEs, the pharmacokinetics and pharmacodynamics of glucose, insulin, and glucagon are modelled and coupled~\cite{SchallerWil2013}. This ODE system has been obtained by interface coupling a multi-organ model and a subcellular model. Using multi-scale experimental data, this model has been parameterised and adapted to individual patients. The resulting patient-specific models provided accurate predictions for glucose, insulin and glucagon dynamics after food intake. Model-guided development of novel diabetes treatment strategies, hence, becomes feasible. \\[1ex] \textit{Liver lobule model:} A variety of models have improved our understanding of tissue-scale processes. Models for liver lobules, the smallest organisational units of the liver, are particularly interesting examples~\cite{HoehmeBru2010}. Using high-resolution 3D imaging, realistic hybrid discrete-continuum models (see explanation above) of liver lobules have been developed. These 3D models facilitated an accurate description of toxification and repopulation processes. Predictions about the cell alignment made using these models could be confirmed experimentally and enabled an improved understanding of cell-cell interaction. \\[1ex] \textit{Whole-cell model:} A groundbreaking contribution to cell biology has been made by the whole-cell model for the human pathogen \textit{Mycoplasma genitalium}~\cite{KarrSan2012}. This model captures all essential cellular processes, including gene regulation, metabolism, signalling and cell division. The individual processes are described using Boolean and probabilistic models, constraint-based models and ODEs. For numerical simulation, an integrated simulation approach has been employed, which enables the assessment of single-cell and population dynamics. As a key feature, the whole-cell model could predict transcriptomics, proteomics and metabolomics data as well as the effect of gene knockouts with a surprisingly high accuracy~\cite{KarrSan2012}.
This confirmed that large-scale models derived from already established multi-scale data sources can facilitate biological discovery. \\[3ex] In summary, biological processes are versatile and so are modelling approaches. There is no right or wrong choice; the appropriate approach depends on the purpose of the model (and the personal preferences of the modeller). Multi-scale models are obtained by coupling models for different processes hierarchically or in parallel. The resulting models provide first-principle formulations for macro-scale processes or simply account for different physical processes. In both cases, a more holistic description of biological processes is achieved. In particular, if the mathematical description is rigorously analysed and its parameters are reliable, novel insights can be gained. The rigorous analysis of multi-scale models as well as the estimation of their parameters from experimental data requires sophisticated methods. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig4} \caption{Selection of well-known multi-scale models. Biological scales captured by the model, mathematical modelling approaches, topics and keywords, as well as a reference to an overview paper are provided.} \label{fig: case studies} \end{figure*} \section{Analysis and simulation} \label{sec: analysis and simulation} The mathematical modelling of biological processes requires (and facilitates) a rigorous formulation of hypotheses, the first steps towards mechanistic understanding. The derived models are, in general, studied using analytical and numerical methods to gain deeper insights. Of particular interest are often \begin{itemize} \setlength{\itemsep}{-1mm} \item existence and uniqueness of solutions, \item sensitivity of solutions with respect to parameters, \item local and global stability of steady states, \item bifurcation structures of steady states, and \item dynamic properties.
\end{itemize} The analysis of these properties can be demanding even for individual model classes. While standard simulation frameworks for most model classes are available~\cite{SchaffFin1997,StilesBar2001,HoopsSah2006,KlamtSae2007,EissingKue2011}, for stochastic models even important tasks, such as sensitivity analysis~\cite{PlyasunovArk2006,KomorowskiCos2011}, pose challenges. This limits the study of the models and might explain the frequent usage of ODE and PDE models, for which a broad spectrum of methods is available, across spatial and temporal scales (Figure~\ref{fig: models for individual scales}). An illustration of the availability and sophistication of methods for different model classes is provided by Figure~\ref{fig: methods}. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig5} \caption{Visual summary of the development status of methods available for different model classes. The transparency indicates the degree to which the different tools and methods are established and for which model sizes they are applicable. Problems for which scalable, standardised approaches are available are considered to be ``solved''. The colouring indicates trends but a quantitative interpretation is not intended. For a detailed discussion see Section~3 and~4.1. } \label{fig: methods} \end{figure*} In Section~2.2.5, we reviewed multi-scale modelling approaches and presented a few well-known application examples. Interestingly, a detailed evaluation of these examples revealed that for most multi-scale systems merely simulation studies are performed. A coherent theory, in particular for coupled multi-physics models, is often missing. Methods to study existence and uniqueness, sensitivity, local and global stability and bifurcation structures are urgently needed to gain a more profound understanding of multi-scale processes. Fortunately, some brave researchers work on such methods.
Among others, new theoretical methods for existence and uniqueness analysis~\cite{SonnerEfe2011,KelkelSur2012,EfendievBook2013} and stability analysis~\cite{Efendiev2010} for multi-scale ODE-PDE models have been established. However, despite significant progress, the spectrum of models which can be analysed rigorously is still rather limited. This might be related to the following three challenges: \\[1ex] \textbf{Challenge\;1.1:}\;\textit{The applicability of established methods has not been extended to coupled multi-scale models.} \\[1ex] The list of methods for which this holds is extensive. Prominent examples are methods which exploit properties of the individual (decoupled) models to study the properties of the coupled system, e.g., the small-gain theorem~\cite{KhalilBook2002}. In systems and control theory the small-gain theorem is used to prove stability of coupled ODE models. Although the underlying idea is simple, no general extensions to coupled ODE-PDE systems are available. \\[1ex] \textbf{Challenge\;1.2:}\;\textit{Directions from which the analysis of multi-scale models should be approached are often not obvious.} \\[1ex] Agent-based models for tumour growth~\cite{Jagiella2012} or tissue remodelling~\cite{HoehmeBru2010} can, for instance, either be interpreted as stochastic PDEs with a complex noise term or as coupled Markov jump processes. These different perspectives require different mathematical and computational tools. For a satisfactory analysis, combinations of existing tools and theoretical results will probably be necessary and have to be established. \\[1ex] \textbf{Challenge\;1.3:}\;\textit{Classical concepts and tools are often not applicable to novel classes of multi-scale models.} \\[1ex] Properties can easily lose their meaning if alternative model classes are considered.
While stability is doubtless one of the most important properties of deterministic systems, even for simple stochastic processes stability is not easily defined~\cite{Socha1986}. For stochastic systems, properties such as uni- or multi-modality of the instantaneous probability density~\cite{ThomasPop2014} or the statistics of noise-induced oscillations~\cite{LangWal2009} are often more essential. As models and model classes change, the properties of interest have to be adapted and generalised. \\[2ex] Despite the multilayer nature of these challenges, there is reason for hope. Walpole et al.~\cite{WalpolePap2013} outlined in a recent review that most multi-scale models capture only a small subset of the scales. Accordingly, not all combinations of model classes have to be considered; a few combinations might already be sufficient. As coupled ODE-PDE systems and coupled CTMC-PDE systems are widely used, these classes of multi-scale models might be appropriate starting points. In addition to analytical results, equation-free stability, sensitivity and bifurcation analysis methods have been developed~\cite{SchroffKel1993,GearKev2002}. These methods allow for the simulation-based analysis of quantitative and qualitative properties of multi-scale models. While the use of simulations allows for a certain independence from the underlying equations, the detailed mathematical understanding is lost. Furthermore, instead of analysing the mathematical model, often the numerical implementation is studied. To circumvent this, the simulation uncertainty can be quantified and taken into account~\cite{IntervalAnalysis,ChkrebtiiCam2014}. \\[2ex] In summary, the scarcity of the available methods limits the analysis of multi-scale models. While we cannot provide a comprehensive overview here, the need for the development of novel tools is apparent. One possibility might be to exploit modularity and merely rely on the input-output characteristics of individual models.
Although potentially conservative, methods relying on this idea might be widely applicable. \section{Parameter estimation and model selection} \label{sec: parameter estimation and model selection} Mathematical modelling allows for the description of biological processes and the study and prediction of their properties. However, reliable quantitative results require accurate model structures and model parameters (e.g., reaction rates and diffusion coefficients). Due to experimental constraints, structures and parameters of biological systems can often not be assessed directly. Instead, they have to be inferred from limited, noise-corrupted data. Therefore, parameter estimation and model selection methods are required. Parameter estimation and model selection are means to integrate data from different data sources. This becomes increasingly important as the availability and variety of data increase continuously due to new technologies. It has been shown that model-based data integration on individual scales can already significantly improve our biological understanding. As multi-scale models are significantly more complex, a further improvement has to be expected. Furthermore, information on one scale can help to understand other scales. \subsection{Challenges for data-driven multi-scale modelling} \label{sec: Challenges for data-driven multi-scale modelling} Multi-scale models are obtained by coupling models for individual biological scales (Sections~2.2.2 and~2.2.3). Accordingly, the most naive approach is to perform parameter estimation and model selection for the individual scales separately.
As parameter estimation and model selection have been fields of active research for decades (see~\cite{Tarantola2005} and references therein), a number of breakthroughs have been made for individual model classes: \begin{itemize} \item \textbf{Boolean and constraint-based models:} Boolean and constraint-based models are parameter-free and neither parameter estimation nor uncertainty analysis methods are required. For the reconstruction of Boolean models from experimental data, optimisation-based methods are commonly used. Their computational complexity scales reasonably with the network size and they are applicable to networks of several hundred components~\cite{Porreca2010}. Constraint-based models are mostly derived from (literature) knowledge about the biochemistry of the underlying process. If sufficient data are available, statistical approaches, such as Gaussian graphical models, can also be used~\cite{TohHor2002,KrumsiekSuh2011}. \item \textbf{ODEs:} For ODE models, robust and scalable inference methods are available~\cite{BockPli1984,Banga2008,Engl2009}. Optimisation~\cite{RaueSch2013} and uncertainty analysis~\cite{HugRau2013} of ODE models with a few hundred parameters are feasible on standard computer infrastructures. The methods are established for a wide range of applications and implemented in several software packages (see~\cite{RaueSch2013,SchmidtJir2006} and references therein). The results allow for model selection and optimal experimental design. Model selection can become time-consuming if many model alternatives have to be tested, and genome-scale models with thousands of state variables and parameters are currently beyond reach~\cite{StanfordLub2013,HendersonOgi2014}. \item \textbf{PDEs:} Inference for PDE models~\cite{Banks1989,OptimizationwithPDEconstraints} is generally more challenging than for ODE models.
Nonetheless, rigorous optimisation and uncertainty analysis of low- and medium-dimensional PDE systems with dozens of parameters are feasible~\cite{MuellerTim2004,LuzyaninaRoo2009,MenshykauGer2013,HockHas2013}. Based on existing simulation environments~\cite{LoggMar2012}, toolboxes for PDE-constrained optimisation are being introduced~\cite{FarrellHam2013} and are expected to replace problem-specific implementations. \item \textbf{SDEs and CTMCs:} Standard Bayesian and frequentist approaches to parameter estimation consider the conditional probability of the data given the parameters, commonly known as the likelihood~\cite{Tarantola2005}. For stochastic models, such as CTMCs and SDEs, evaluating the likelihood requires marginalisation over all possible paths of the stochastic process. This is often computationally intractable and limits the applicability of optimisation methods. Approximations to likelihood functions~\cite{Fuchs2010} as well as likelihood-free estimation methods have been introduced~\cite{MarjoramMol2003,SissonFan2007}. In particular, Approximate Bayesian Computation (ABC) methods are nowadays widely used~\cite{ToniWel2009}. These flexible methods enable the sampling of posterior distributions, uncertainty analysis and model selection without evaluation of the likelihood function. Due to the large number of simulations, mostly problems with rather low and medium computational complexity have so far been studied~\cite{ToniWel2009,LiepeBar2010,LillacciKha2013}. For certain subclasses of models, software packages are also available~\cite{LiepeBar2010}. \item \textbf{Agent-based models:} In agent-based modelling, one often distinguishes interacting and non-interacting agents. Models consisting of non-interacting agents are widely used in pharmacokinetics and pharmacodynamics, and are -- as mentioned above -- also denoted as mixed-effect models.
For mixed-effect models, standard inference methods are available~\cite{Pinheiro1994,ZechnerPel2012} and implemented in free and commercially available software packages~\cite{TornoeAge2004}.\\ Models of interacting agents are more difficult to parameterise than models of non-interacting agents. The reason is the need for integrated simulations of the complete system. The resulting computational complexity is challenging and mainly problem-specific solutions are currently employed~\cite{ThorneHay2011}. One reason is the variety of experimental data and their corresponding objective function types. \end{itemize} The integration of data-driven models for individual scales into multi-scale models is reasonable and rather tractable. Therefore, this approach has been successfully applied in the literature several times~\cite{tenTusscherNob2004,HayengaTho2011}. However, this approach also suffers from several limitations. It is clear that interface variables have to be consistent across scales. This cannot be guaranteed if model fitting is done separately. Furthermore, for multi-scale models with direct coupling, a separate data-driven modelling might not even be feasible. Beyond these technical aspects, the experimental data used for the data-driven modelling of the individual scales might have been collected under different conditions. Accordingly, the models might not be valid for the regimes required in a multi-scale model. Extensive studies of the transferability of models are currently not available. The aforementioned limitations establish the need for an integrated approach to data-driven multi-scale modelling. Beyond the data-driven modelling of individual scales, an important first step (if feasible), the multi-scale model has to be fitted to and validated with multi-scale data~\cite{WalpolePap2013}. Examples for this have been provided in the context of blood cell mechanics~\cite{FedosovLei2011} and heart electromechanics~\cite{NielsenLys2013}.
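The likelihood-free (ABC) estimation mentioned above can be illustrated by a minimal rejection sampler for a toy stochastic decay model. The model, prior and tolerance are illustrative choices, not taken from the cited studies:

```python
import math
import random

def simulate(k, n0=100, t=1.0):
    # toy stochastic model: each of n0 molecules independently survives
    # until time t with probability exp(-k * t)
    p = math.exp(-k * t)
    return sum(1 for _ in range(n0) if random.random() < p)

random.seed(42)
observed = simulate(1.0)                  # synthetic data, true rate k = 1.0

# ABC rejection sampling: keep parameters whose simulations match the data,
# without ever evaluating the (intractable) likelihood
accepted = []
while len(accepted) < 200:
    k = random.uniform(0.0, 3.0)          # prior
    if abs(simulate(k) - observed) <= 3:  # distance threshold epsilon
        accepted.append(k)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted parameters form an approximate posterior sample; shrinking the tolerance trades acceptance rate against approximation quality, which is exactly why the method becomes expensive for computationally demanding multi-scale simulators.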
Parameter estimation, uncertainty analysis and model selection for multi-scale models is considerably more challenging than data-driven modelling of the individual scales. The dimension of the parameter space increases and the handling of the models becomes more demanding. Key challenges are in particular: \\[1ex] \textbf{Challenge\;2.1:}\;\textit{Computational complexity for multi-scale simulation (and sensitivity analysis) is often large.} \\[1ex] \textbf{Challenge\;2.2:}\;\textit{Stochasticity of many multi-scale models establishes the need for a large number of simulations.} \\[1ex] \textbf{Challenge\;2.3:}\;\textit{Reproducibility of parameter estimation and model selection results has to be ensured.} \\[1ex] \textbf{Challenge\;2.4:}\;\textit{Lack of tailored and generic computational tools for data-driven multi-scale modelling.} \\[1ex] Some of these challenges are inherited from models of particular scales. In the following, we will discuss novel approaches and ideas which could in the future be used to address these challenges. In doing so, we will provide a comprehensive list of potential methods helpful for data-driven multi-scale models. \subsection{Optimisation} To determine unknown model parameters, statistically motivated measures for the goodness of fit are used. These objective functions are optimised with respect to the parameters. For dynamical systems, the respective optimisation problems are in general nonlinear and non-convex. Hence, sophisticated optimisation schemes are required. Common tools employ multi-start local optimisation~\cite{RaueSch2013}, multiple shooting~\cite{Biegler2007,BockPli1984}, evolutionary algorithms~\cite{Back1996,Balsa-Canto2008c}, pattern search~\cite{Vaz2007} and particle swarm optimisation~\cite{Vaz2007,Yang2010}. Banga~\cite{Banga2008} and Weise~\cite{Weise2009} provide comprehensive surveys of local and global optimisation procedures.
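Multi-start local optimisation can be sketched on a toy non-convex objective standing in for a negative log-likelihood. The objective, step size and start distribution are illustrative assumptions:

```python
import random

def f(theta):
    # toy non-convex objective with two local minima
    return (theta ** 2 - 1.0) ** 2 + 0.3 * theta

def grad_f(theta):
    return 4.0 * theta * (theta ** 2 - 1.0) + 0.3

def local_descent(theta, step=0.01, n_iter=1000):
    # simple gradient descent as a stand-in for a local optimiser
    for _ in range(n_iter):
        theta -= step * grad_f(theta)
    return theta

random.seed(0)
starts = [random.uniform(-2.0, 2.0) for _ in range(20)]  # e.g. Latin hypercube in practice
fits = [local_descent(s) for s in starts]
best = min(fits, key=f)   # global optimum near theta = -1.04
```

Starts in the right-hand basin converge to the inferior local minimum near $\theta \approx 0.96$; only the multi-start strategy recovers the global optimum, which is the motivation for this scheme in parameter estimation.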
In the course of the optimisation, the goodness of fit is assessed at different points in parameter space. This requires the simulation of the model and -- depending on the optimisation method -- the evaluation of sensitivities (derivatives of the model output with respect to the parameters). The repeated simulation and sensitivity evaluation can be time-consuming (Challenge~2.1); stochasticity (Challenge~2.2) and reproducibility (Challenge~2.3) are, however, even more intricate. Parameter optimisation for a number of multi-scale models has been approached in recent years. In particular for PK/PD models great successes have been reported (see, e.g.,~\cite{SchallerWil2013}). Here, different ODE models are coupled, and simulation as well as parameter estimation remains efficient. The optimisation-based integration of experimental data collected on different scales provided novel insights into diseases such as diabetes. Promising results have been obtained for mixed-effect models describing the population and single-cell level~\cite{KallenbergerBea2014}, but reproducibility is often an issue. For coupled PDE models novel optimisation methods resulted in great successes, e.g., in the context of the Virtual Heart~\cite{NielsenLys2013}. For models composed of different model types, optimisation seems more challenging. This is also indicated by the contributions discussed above (Section~2.2.5). Hoehme et al.~\cite{HoehmeBru2010} introduced and validated a sophisticated agent-based model for liver regeneration. The available measurement data were, however, merely used to determine realistic simulation domains and rough parameter values. A parameter optimisation was not approached. The same is true for the whole-cell model developed by Karr et al.~\cite{KarrSan2012}. Reasons for this are the stochasticity and computational complexity of model simulations. There is a series of promising mathematical methods which tackle Challenges~2.1--2.3.
These methods split the underlying optimisation problem into smaller subproblems or decrease the computational complexity associated with model evaluations. We will shortly outline the key ideas underlying these methods. A visual summary is depicted in Figure~\ref{fig: optimisation methods}. \subsubsection{Decoupling using dependent input approach} The parameter estimation for models with high-dimensional parameter and state spaces is often challenging. An intuitive idea is therefore to exploit the modularity of biological systems. The overall system can be decomposed into a set of interconnected subsystems~\cite{EdererSau2003}. The subsystems, for instance, describe individual biological scales or span different scales. A subsystem is interfaced with other subsystems. The \textit{dependent input approach} regards these other subsystems as unmodelled dynamics and replaces them by fictitious `dependent inputs'~\cite{vanRielSon2006}. This enables the decoupling of subsystems, assuming that the dependent inputs can be measured in experiments. The subsystems can then be fitted separately, assuming that they do not share parameters. This eliminates the need for multi-scale simulations and addresses Challenge~2.1. Furthermore, it provides a tailored tool for data-driven multi-scale modelling (Challenge~2.4). The dependent input approach is rather flexible and in principle not limited to a particular class of models. It however requires that the dependent inputs are measured. This limits the decomposition and also renders it measurement dependent. Furthermore, subsystems require continuous input signals, while measurement data are mostly collected at discrete time points. Interpolation and filtering approaches can be used to close these gaps~\cite{GeorgoulasCla2012}; this can, however, be error-prone. More sophisticated approaches fit input data and subsystem dynamics simultaneously~\cite{KaschekTim2012,SchelkerRau2012}.
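To make the idea concrete, the following minimal sketch fits a subsystem in isolation after replacing its upstream neighbour by an interpolated dependent input. The two-subsystem model, the interface signal and all parameter values are entirely hypothetical and serve only to illustrate the decoupling step.

```python
import numpy as np

# Hypothetical decoupling: subsystem 1 produces an interface signal u(t),
# which drives subsystem 2 via dx/dt = k*u(t) - d*x. In the dependent
# input approach, u(t) is "measured" at discrete times and interpolated,
# so subsystem 2 can be fitted without simulating subsystem 1.

def simulate_subsystem2(k, d, t_grid, u_of_t, x0=0.0):
    """Explicit Euler integration of dx/dt = k*u(t) - d*x on t_grid."""
    x = np.empty_like(t_grid)
    x[0] = x0
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        x[i] = x[i - 1] + dt * (k * u_of_t(t_grid[i - 1]) - d * x[i - 1])
    return x

# "Measured" interface signal at discrete time points (synthetic here).
t_data = np.linspace(0.0, 10.0, 21)
u_data = np.exp(-0.3 * t_data)          # fictitious dependent input

def u_interp(t):
    """Continuous input obtained by interpolating the discrete measurements."""
    return np.interp(t, t_data, u_data)

# Synthetic observations of subsystem 2 with true parameters k=2, d=0.5.
t_grid = np.linspace(0.0, 10.0, 201)
x_obs = simulate_subsystem2(2.0, 0.5, t_grid, u_interp)

# Fit k on the decoupled subsystem by a simple grid search (d held fixed).
k_candidates = np.linspace(0.5, 4.0, 36)
sse = [np.sum((simulate_subsystem2(k, 0.5, t_grid, u_interp) - x_obs) ** 2)
       for k in k_candidates]
k_hat = k_candidates[int(np.argmin(sse))]
print(k_hat)
```

The interpolation step mirrors the gap between continuous input signals and discrete measurements discussed above; in practice more careful input-estimation schemes are used.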
The dependent input approach is widely used for optimisation~\cite{KaschekTim2012,SchelkerRau2012} as well as uncertainty analysis~\cite{WaldherrHas2011}. Open questions are mainly related to ensuring consistency between input and output signals of individual subsystems. \subsubsection{Reduced order modelling} The computational effort associated with numerical simulations is frequently the bottleneck for optimisation. Model order reduction methods reduce the complexity of high-dimensional models, while preserving their input-output behaviour as much as possible~\cite{Schilders2008}. The resulting reduced models, which can be simulated more efficiently, mimic the behaviour of the full model. For linear models efficient and reliable model order reduction methods are available~\cite{Antoulas2005book}, for instance, SVD-based~\cite{Sirovich1987} and Krylov subspace methods~\cite{Grimme1997}. In the last decade these methods have been extended to linear~\cite{HaasdonkOhl2008,HaasdonkOhl2011,BaurBen2009,RozzaHuy2008} and non-linear~\cite{Lall2002,WirtzHaa2011} parametric (partial) differential equations. This enabled the use of reduced order models in optimisation~\cite{Benner2009}. Simulations of the full model are simply substituted by simulations of the reduced order model. To account for the approximation error, a-posteriori error bounds can be used~\cite{Benner2009,HasenauerLoh2012,DihlmannHaa2013}. In multi-scale modelling, reduced order models can readily be used to decrease the computational complexity of models for individual scales or processes. Furthermore, promising multi-scale model reduction methods have been developed which consider several scales simultaneously~\cite{YvonnetHe2007,GeersKou2010}. Using these approaches a speed-up of the numerical calculations by several orders of magnitude has been achieved, rendering parameter estimation feasible by addressing Challenge~2.1.
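As an illustration of SVD-based projection, the following sketch performs a POD-Galerkin reduction of a synthetic linear system: snapshots of the full model are compressed with an SVD, and the projected operator is simulated in place of the full one. The system matrix and dimensions are illustrative and not taken from any cited application.

```python
import numpy as np

# Minimal projection-based model order reduction (POD-Galerkin) sketch for
# a linear ODE dx/dt = A x. The dominant left singular vectors V of a
# snapshot matrix define a reduced model dz/dt = (V^T A V) z with x ~ V z.

rng = np.random.default_rng(0)
n, r = 40, 4
# Stable synthetic system whose dynamics live in an r-dimensional subspace.
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
A = U @ np.diag([-0.5, -1.0, -2.0, -4.0]) @ U.T
x0 = rng.standard_normal(n)

def integrate(M, z0, dt=0.01, steps=500):
    """Explicit Euler for dz/dt = M z, returning the snapshot matrix."""
    Z = np.empty((len(z0), steps + 1))
    Z[:, 0] = z0
    for i in range(steps):
        Z[:, i + 1] = Z[:, i] + dt * (M @ Z[:, i])
    return Z

X = integrate(A, x0)                                     # full-model snapshots
V = np.linalg.svd(X, full_matrices=False)[0][:, :r + 1]  # POD basis
Ar = V.T @ A @ V                                         # Galerkin projection
Zr = integrate(Ar, V.T @ x0)                             # reduced simulation
err = np.linalg.norm(V @ Zr - X) / np.linalg.norm(X)
print(err)
```

Because the synthetic dynamics are exactly low-dimensional, the reduced trajectory reproduces the full one up to round-off; for genuine multi-scale models the truncated singular values quantify the unavoidable approximation error, which a-posteriori bounds can control.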
\subsubsection{Surrogate modelling} Model order reduction methods exploit the structure of the governing equations. This limits the applicability of these methods -- as the equations might become high-dimensional or highly non-linear -- and led to the development of surrogate modelling approaches. Surrogate models, also known as metamodels, response surface models and emulators, are scalable analytical models for the approximation of the multivariate input-output behaviour of complex systems. Surrogate models are derived from simulated input-output data and do not consider the internal structure of the original model. Popular surrogate models are polynomial response surfaces~\cite{BoxDra2007}, Kriging~\cite{Wilkinson2011}, space mapping~\cite{BandlerDak2004}, support vector machines and radial basis function approximations~\cite{RegisShoe2007}. Similar to reduced order modelling, surrogate modelling can be used to reduce the computational complexity of a single model evaluation (Challenge~2.1). Surrogate models have been used to approximate objective functions~\cite{RegisShoe2007} as well as time-courses of (multi-scale) models~\cite{WirtzKar2015,BandlerDak2004,Wilkinson2011}. Both types of surrogates have been used in surrogate-based optimisation. Surrogate-based optimisation methods circumvent the evaluation of the computationally demanding model by evaluating the surrogate model. Based on a first space-filling sampling, e.g., Latin hypercube sampling, an initial surrogate model is derived (Step~1). This surrogate model is optimised (Step~2). At the new optimal point the full model is evaluated (Step~3) and the surrogate model is updated (Step~4) before returning to Step~2. The individual steps in surrogate-based optimisation are involved and there exists a variety of different approaches. For details we refer to~\cite{RegisShoe2007,MullerSho2014} and references therein.
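The four-step loop can be sketched in one dimension as follows. The "expensive" objective, the quadratic surrogate and all numerical choices are purely illustrative stand-ins, not a method from a specific cited work.

```python
import numpy as np

def expensive_objective(x):
    """Stand-in for a costly model-based objective (hypothetical)."""
    return (x - 1.3) ** 2 + 0.1 * (x - 1.3) ** 4

lo, hi = -2.0, 3.0
xs = list(np.linspace(lo, hi, 5))        # Step 1: space-filling initial design
ys = [expensive_objective(x) for x in xs]

for _ in range(15):
    coef = np.polyfit(xs, ys, deg=2)     # fit a quadratic surrogate
    grid = np.linspace(lo, hi, 2001)
    # Step 2: optimise the cheap surrogate instead of the full model.
    x_new = grid[int(np.argmin(np.polyval(coef, grid)))]
    ys.append(expensive_objective(x_new))  # Step 3: evaluate the full model
    xs.append(x_new)                       # Step 4: update the surrogate data

x_best = xs[int(np.argmin(ys))]
print(x_best)
```

Only the handful of full-model evaluations appended in Step~3 are "expensive"; all optimisation effort is spent on the surrogate, which is the essential saving of the scheme.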
\subsubsection{Multi-level Monte-Carlo methods} For stochastic models a single evaluation of the objective function requires averaging over stochastic simulations. This averaging can be computationally demanding because it normally requires many simulations. To decrease the necessary number of simulations and to address Challenge~2.2, multi-level Monte-Carlo methods have been developed~\cite{Heinrich2001}. Multi-level methods employ series of increasingly complex models. For SDEs, this series of models could be, for instance, different numerical SDE solvers with increasing accuracy~\cite{Giles2008}. Instead of estimating the mean behaviour of the full model using Monte-Carlo integration, e.g., a numerical SDE solver with a very high accuracy, the difference of adjacent models in the series is assessed. As this difference has a smaller variance, fewer evaluations are necessary, often resulting in a speed-up and a lowered variance of estimates~\cite{Heinrich2001}. Similar approaches have recently been introduced for CTMCs~\cite{AndersonHig2012}. Multi-level Monte-Carlo methods often use approximate stochastic simulations. These methods on their own can already accelerate simulations. The simulation of CTMCs is, for instance, often approximated using tau-leaping~\cite{Gillespie2001}, time-scale separation~\cite{HaseltineRaw2005} or diffusion approximation~\cite{Fuchs2010}. Monte-Carlo integration induces stochasticity of the objective function: two evaluations of the objective function for the same parameter will provide slightly different results. This is a severe issue for the commonly used deterministic optimisers, and therefore stochastic optimisation procedures have to be employed to ensure robustness~\cite{Weise2009}, e.g., implicit filtering~\cite{Kelley2011}. Such stochastic optimisers use involved update schemes, and tailoring of these schemes to the structure of the considered multi-scale model might be beneficial.
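The multi-level idea described above can be illustrated for a scalar SDE. The following sketch estimates the mean of a geometric Brownian motion with a hierarchy of Euler-Maruyama discretisations whose step sizes halve from level to level; coupled fine/coarse paths share their Brownian increments so that the level differences have small variance. All parameter values and sample counts are illustrative.

```python
import numpy as np

# Multi-level Monte-Carlo sketch for E[X_T] of dX = a*X dt + b*X dW,
# using Euler-Maruyama with step T/2^l as the model hierarchy (level l).

def coupled_level_difference(l, n_samples, a, b, T, x0, rng):
    """Mean of P_l - P_{l-1} with shared Brownian increments (P_{-1} = 0)."""
    nf = 2 ** l                       # number of fine steps on level l
    dtf = T / nf
    xf = np.full(n_samples, x0)       # fine paths
    xc = np.full(n_samples, x0)       # coarse paths (step 2*dtf)
    for _ in range(nf // 2 if l > 0 else nf):
        if l > 0:
            dw1 = rng.normal(0.0, np.sqrt(dtf), n_samples)
            dw2 = rng.normal(0.0, np.sqrt(dtf), n_samples)
            xf += a * xf * dtf + b * xf * dw1          # two fine steps
            xf += a * xf * dtf + b * xf * dw2
            xc += a * xc * 2 * dtf + b * xc * (dw1 + dw2)  # one coarse step
        else:
            dw = rng.normal(0.0, np.sqrt(dtf), n_samples)
            xf += a * xf * dtf + b * xf * dw
    return np.mean(xf - xc) if l > 0 else np.mean(xf)

rng = np.random.default_rng(1)
a, b, T, x0 = 0.05, 0.2, 1.0, 1.0
levels, n0 = 5, 40000
# Fewer samples on finer levels, where the difference variance is small.
estimate = sum(coupled_level_difference(l, max(n0 // 4 ** l, 200), a, b, T, x0, rng)
               for l in range(levels + 1))
print(estimate)   # close to the exact mean exp(a*T)
```

The telescoping sum reproduces the finest-level mean, but most samples are spent on the cheap coarse level, which is where the computational saving comes from.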
In particular, block updates might improve the search performance~\cite{WilkinsonYeu2002}. Block update schemes, currently used in Markov-chain Monte-Carlo methods, can exploit the model structure to virtually reduce the problem size by updating only the parameters belonging to the same scale or the same process. \subsubsection{Moment equations and system-size expansions} Alternatives to Monte-Carlo integration are provided by moment equations~\cite{Engblom2006} and system-size expansions~\cite{Grima2010}. Moment equations and system-size expansions are deterministic models describing the statistics -- mean, variance and higher-order moments -- of the solutions of stochastic processes, i.e., CTMCs, which eliminates the need for repeated stochastic simulations (Challenge~2.2). A further advantage is that moment equations and system-size expansions are deterministic models which allow for the use of efficient deterministic optimisers. For these optimisers the reproducibility is in general good, addressing Challenge~2.3. The key disadvantage of moment equations and system-size expansions is that they merely provide approximations, as moment closure and/or truncations of infinite series are required~\cite{Gillespie2009}. Detailed studies of the influence of the approximation error on the parameter estimation are so far missing. To minimise the effects, hybrid methods have been developed, which use a fully stochastic description for low-copy number species and a moment-based description for high-copy number species~\cite{MenzLat2011,Jahnke2011,HasenauerWol2014,ThomasPop2014}. The resulting hybrid models are often more accurate, but their simulation is computationally also more demanding. Moment equations are available for concentrated as well as distributed processes. Already in the 1990s, spatial moment equations were developed for individual-based models~\cite{BolkerPac1997,LawDie2000}.
These equations have been successfully used to study population dynamics on regular lattices, irregular networks, and continuous spatial domains. Different closure schemes have been developed to provide appropriate approximation accuracies~\cite{GandhiLev2000}. The consideration of long-range correlations, which improves the approximation accuracy but also increases the computation time, turned out to be particularly critical. \\[3ex] \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig6} \caption{ Visual summary of methods which could potentially improve optimisation of multi-scale models: \textbf{(A)} decoupling using the dependent input approach; \textbf{(B)} reduced order modelling; \textbf{(C)} surrogate modelling; and \textbf{(D)} computing solution statistics for stochastic processes using multi-level Monte-Carlo methods, moment equations and system-size expansions. } \label{fig: optimisation methods} \end{figure*} \subsection{Identifiability and uncertainty analysis} As experimental data are limited and noise-corrupted, parameter estimation has to be complemented by structural and practical identifiability analysis. Structural identifiability is concerned with the structure of the system~\cite{CobelliDiS1980} and can be used to assess a priori whether the parameters can in principle be determined from noise-free data. In contrast, practical identifiability is related to a particular dataset and its information content~\cite{Raue2009}. Using practical identifiability analysis, the parameter and prediction uncertainties are quantified in terms of asymptotic and finite-sample confidence intervals. Asymptotic confidence intervals can be computed via local sensitivity-based methods, e.g., the Wald approximation~\cite{Meeker1995} or the Fisher information matrix (FIM)~\cite{MurphyVaa2000}. While these asymptotic confidence intervals are already provided by standard optimisers, they are not reliable.
Therefore, finite-sample confidence intervals derived using bootstrapping~\cite{Joshi2006}, profile likelihoods~\cite{Raue2009} and Markov chain Monte-Carlo methods~\cite{Wilkinson2007} are preferred in practice. A recent study showed, however, that the use of bootstrapping-based confidence intervals is also problematic~\cite{FroehlichThe2014}. While profile likelihoods and bootstrapping require a large number of local or global optimisations, Bayesian methods exploit sampling of posterior distributions using, e.g., Markov chain Monte-Carlo methods. For multi-scale models using a coherent modelling of the scales, this is often feasible using current methods~\cite{SchallerWil2013}, while for most multi-scale models this is so far intractable. These limitations can be partially overcome using the aforementioned reduced order and surrogate models, as well as multi-level Monte-Carlo methods and moment equations. In addition, tailored and efficient identifiability and uncertainty analysis methods are being developed. A few of those will be introduced in the following. \subsubsection{Structural identifiability analysis for interconnected systems} Identifiability analysis uses the governing equations of dynamical systems to study whether parameters can in principle be determined~\cite{CobelliDiS1980}. Therefore, several approaches based on Taylor and generating series, the implicit function theorem and differential algebra have been developed~\cite{ChisBan2011}. These methods are applicable to ODE and PDE models. The key advantage of these methods is that they exploit symbolic calculations and do not rely on numerics. This makes them robust, but also limits their application to small- and medium-size systems. To overcome these challenges, approaches for interconnected systems have been developed~\cite{Glad2006,GerdinGla2007}.
These methods exploit the modular structure of interconnected / large-scale systems and the input-output structure of subsystems to decide about global structural identifiability. Accordingly, these methods are well-suited for the study of multi-scale models and address Challenge~2.4. \subsubsection{Simulation-based profile likelihood calculation} In addition to the development of novel methods designed for multi-scale models, the efficiency of existing methods is continuously increasing. This allows for a more rigorous assessment of problems at hand and improves the reproducibility of results (Challenge~2.3). Among others, fast simulation-based profile likelihood calculation methods have been proposed~\cite{ChenJen2002}. These methods circumvent the repeated local optimisation by formulating dynamical systems evolving along the profiles. Using the gradient and Hessian of the objective function, the update direction for the parameters is determined directly. The resulting path in parameter space describes the profile likelihoods. First (unpublished) results indicate a significant acceleration. \subsubsection{Efficient sampling methods} To improve the efficiency of Bayesian uncertainty analysis, structure-exploiting sampling schemes have been introduced. In particular, Riemann manifold and Hamiltonian Monte-Carlo methods provide a performance boost for a variety of applications~\cite{Neal2011,GirolamiCal2011}. These methods use the gradient and Hessian of the objective function to construct a local approximation. By sampling from this local approximation, a significant reduction of the autocorrelation of the Markov chain samples can be achieved compared to standard Markov chain Monte-Carlo methods. Similar efficiency increases could also be achieved using adaptive single- and multi-chain Monte-Carlo methods~\cite{HaarioLai2006,MiasojedowMou2012,HugRau2013}. These adaptive samplers use the available sample path to construct an approximation of the local structure.
Accordingly, the potentially demanding evaluation of gradient and Hessian is circumvented. Samplers can also be combined with surrogates, e.g., radial basis function approximations of the posterior distribution~\cite{FroehlichHro2014}. Furthermore, surrogate-based sampling schemes have been developed~\cite{HigdonRee2011}. For computationally intensive problems these approaches can outperform standard methods. This natural extension of surrogate-based optimisation should be explored in the future, in particular as information collected during optimisation can be reused. \subsubsection{Approximate Bayesian Computing} The evaluation of likelihood functions, which quantify the statistical distance between model and data, is difficult for many models. Therefore, likelihood-free methods have been proposed~\cite{MarjoramMol2003}, also known as Approximate Bayesian Computing (ABC) methods. ABC methods circumvent the evaluation of the likelihood function using model simulations~\cite{ToniWel2009} and thereby address Challenge~2.2. A simulation for a particular parameter value is accepted or rejected based on simple distance measures. For appropriate choices of the distance measure, ABC methods sample approximately from the Bayesian posterior distribution. These samples enable a direct assessment of parameter and prediction uncertainties. To ensure efficiency and reliability of ABC methods, a variety of sophisticated sampling schemes have been developed. Popular choices are ABC Markov chain Monte-Carlo and ABC sequential Monte-Carlo methods~\cite{ToniStu2010}. For models with time-consuming simulations, surrogate-based approaches have also been developed, i.e., approximate ABC (AABC)~\cite{BuzbasRos2013}. \subsection{Model selection} In biology not only model parameters but also model structures are often unknown. Hence, subsequent to parameter estimation, model selection has to be performed to evaluate competing hypotheses.
Therefore, a set of alternative models is defined and the best model is selected using the likelihood ratio~\cite{Wilks1938}, the Akaike information criterion~\cite{Akaike1973}, the Bayesian information criterion~\cite{Schwarz1978}, Bayes factors~\cite{KassRaf1995} or flavours of these criteria. If the set of alternative models is too large, forward and backward selection methods can be used to avoid a full enumeration and exploration. The aim of model selection is to determine the underlying biological and biochemical mechanism and to avoid over- and underfitting. Model selection requires parameter estimation for all model alternatives. An acceleration of parameter estimation methods therefore also results in accelerated model selection. In addition, more sophisticated model selection methods are being developed, e.g., more reliable algorithms for evaluating Bayes factors~\cite{Vyshemirsky2008}. This is also important, as detailed studies of ABC-based model selection methods have revealed. For stochastic models ABC-based model selection is the method of choice~\cite{ToniStu2010}; however, if insufficient statistics (i.e., error norms) are used, Bayes factors can select incorrect models~\cite{RobertCor2011}. Sufficiency conditions have been derived to avoid this problem~\cite{MarinPil2014}. Case studies showed that the combination of data collected on different levels can enhance model selection. In a recent paper, it has been shown that model selection using gene expression and metabolite data allows for an improved inference of regulation mechanisms compared to the individual models/datasets~\cite{ChandrasekaranPri2013}. The reason for this improvement is that information about adjacent levels provides additional constraints for the model. Hence, model selection on individual scales has to be complemented by multi-scale model selection. In our opinion, appropriate methods to tackle this problem are currently not available.
Multi-scale models are mostly selected based on heuristic arguments and visual inspection. \\[3ex] In summary, this section provided an overview of state-of-the-art methods for data-driven multi-scale modelling. Challenges have been outlined and novel methods and ideas have been discussed. The discussed methods and ideas might revolutionise multi-scale data integration by shifting the bounds and allowing for much higher-dimensional problems. \section{Conclusions and outlook} \label{sec: conclusion} A multitude of modelling approaches is used to describe biological processes and to unravel the underlying working principles. This review provides an overview of the spectrum of different (multi-scale) modelling concepts, thereby not aiming for complete comprehensiveness. Qualitative and quantitative modelling approaches have been outlined along with methods to integrate them in multi-scale models. To analyse multi-scale models, sophisticated tools are necessary, whose development poses interesting mathematical challenges. Existing theory has to be extended (Challenge~1.1) and different mathematical disciplines have to be linked (Challenges~1.2 \& 1.3). To facilitate this process and to bundle research activities, standard classes of multi-scale models should be defined. Beyond modelling and model analysis, we should also pursue model-based multi-scale integration of experimental data as a field of active research. The development of parameter optimisation, uncertainty analysis and model selection for multi-scale models is in its early phase. Computational complexity (Challenge~2.1), stochasticity (Challenge~2.2), reproducibility (Challenge~2.3) and the lack of computational tools (Challenge~2.4) provide limitations to what is currently feasible. There are, however, many interesting ideas and approaches waiting to be explored. In particular, the use of approximations, reduced order models and surrogate models is promising.
The derivation of appropriate surrogate models might even be automated, as no detailed analysis of the governing equations is required but merely input-output data are needed. However, to use surrogate-based approaches with stochastic models, additional robustness improvements are required. In addition to the development of appropriate inference methods, it has to be decided which complexity is truly necessary. Sometimes simplistic models might be more appropriate than detailed models, although the latter are more realistic. If the detailed models cannot be parameterised using available data and inference methods, it is unclear what we can truly learn using them. We share the opinions of two well-known scientists: \textit{``Simplicity is the ultimate sophistication'' (Leonardo da Vinci)}, and \textit{``everything should be made as simple as possible, but no simpler'' (Albert Einstein)}. Accordingly, we might not search for the most appropriate and detailed model, but for the model with the highest reliable informativeness~\cite{ChehreghaniBus2012}. Extensions of such concepts to multi-scale dynamical models would be very interesting. In summary, in the last decade multi-scale modelling has already contributed significantly to improving our understanding of complex biological systems. Flagship projects, like the Virtual Heart and the Virtual Liver, illustrated how multi-scale modelling can be exploited. Nowadays, there is a broad spectrum of simulation environments for multi-scale modelling; however, methods to parameterise the models are mostly missing. Mathematical and computational research has to be fostered to solve this problem, and initiatives, e.g., DREAM challenges, are needed to carefully evaluate the developed methods. The availability of inference tools might turn multi-scale modelling into a standard tool in biological sciences, similar to powerful new measurement devices.
Paraphrasing the idea of Galileo Galilei: We should ``measure what can be measured and make the rest measurable'' using multi-scale models. \vspace*{2mm} \section*{\normalsize Acknowledgments} The authors would like to acknowledge financial support from the German Federal Ministry of Education and Research (BMBF) within the SYS-Stomach project (Grant No. 01ZX1310B), the European Union within the ERC grant ``LatentCauses'', and the Postdoctoral Fellowship Program (PFP) of the Helmholtz Zentrum M\"unchen. \bibliographystyle{unsrt}
\section{Introduction} The top-quark, with $m_t= 172.76\pm 0.30$ GeV~\cite{Data2020}, is the most massive of all observed elementary particles in the Standard Model (SM). The large top-quark mass corresponds to a Yukawa coupling to the Higgs boson close to unity. This suggests that the top-quark may play a special role within the SM and that its precise characterization may shed light on the electroweak symmetry breaking mechanism~\cite{PRL81-1998,PRD59-1999}. Because of the large top-quark mass, its couplings are expected to be more sensitive to new physics Beyond the SM (BSM) than those of other particles~\cite{Cao:2020npb}. New physics can manifest itself in different forms. One possibility is that the new physics may lead to the appearance or a huge enhancement of new types of interactions like $tH^+b$ or anomalous Flavor Changing Neutral Current $tqg$, $tq\gamma$ and $tqZ$ ($q=u, c$) interactions. Another possibility is the modification of the SM couplings that involve the $t\bar t \gamma$, $t\bar t Z$, $t\bar t g$ and $tW b$ vertices. The top-quark is a key particle in various extensions of the SM and is considered a laboratory for many experimental or simulation aspects in searches for new physics. In particular, the top-quark anomalous couplings to bosons in the $t\bar t \gamma$ and $t\bar t Z$ vertices have made the top-quark one of the most attractive particles for new physics searches. In this regard, the study of top-quark physics at the Tevatron collider at Fermilab \cite{PLB713-2012, PLB693-2010,PRL102-2009} and by the ATLAS and CMS Collaborations \cite{ATLAS-CONF-2012-126,PRL110-2013,EPJC79-2019,JHEP03-2020,Data2020} at the Large Hadron Collider (LHC) has developed significantly in recent years and now represents a very active physics program. One aspect of top-quark physics which is far less explored is its interactions with neutral electroweak gauge bosons and the Higgs boson.
The study of anomalous couplings in electroweak corrections is relatively unexplored, and therefore more detailed studies are warranted. Sensitivity to these couplings arises in hadronic collisions through the partonic subprocess $q\bar q \to (Z, \gamma) \to t\bar t$. At the LHC the $Z$-mediated process is overwhelmed by the strong production mechanism, rendering the tree-level sensitivity vanishingly small. Electroweak loop corrections are another possible source of sensitivity to anomalous couplings. Currently, little is known about anomalous couplings of the top-quark with the $Z$ boson. There are no direct measurements of these couplings; indirect measurements, using LEP and SLC data, tightly constrain only the $t\bar tZ$ vector and axial-vector couplings. On the other hand, the measurement of the $t\bar t Z$ production cross-section at the LHC \cite{ATLAS-CONF-2012-126,PRL110-2013,EPJC79-2019,JHEP03-2020,Data2020} offers a direct test of anomalous $t\bar t Z$ couplings. The experimental detection of a non-zero Anomalous Weak Magnetic Dipole Moment (AWMDM) or Weak Electric Dipole Moment (WEDM) of heavy fermions such as $\tau$, $b$, and $t$ at the current sensitivity of the LHC would be clear evidence of new physics BSM. With these motivations, we carried out a study of the weak dipole moments of the top-quark in the context of the Bestest Little Higgs Model (BLHM) \cite{JHEP09-2010}. The purpose of the BLHM is to solve the hierarchy problem without fine-tuning. This is achieved through the incorporation of one-loop corrections to the Higgs boson mass through heavy top-quark partners and heavy gauge bosons. This extension of the SM predicts the existence of new neutral and charged physical scalar bosons $h_0, H_0, A_0, \phi^{0},\eta^{0}, \sigma, H^{\pm}, \phi^{\pm}, \eta^{\pm}$, new heavy gauge bosons $Z', W'$ and new heavy quarks $B, T,T_5, T_6, T^{2/3},T^{5/3}$.
At the one-loop level, the AWMDM $a^W_t$ and WEDM $d^W_t$ of the top-quark are induced via the Feynman diagrams depicted in Fig.~\ref{dipolo}, where $S_i$ represent scalar bosons, $V_i$ neutral and charged gauge bosons and $q_i$ heavy quarks. Therefore, among the new contributions of the model are those arising from the vertices involving scalar bosons, vector bosons and heavy quarks, that is to say, vertices of the form $tq_iS_i$, $S_i=h_0, H_0, A_0, \phi^{0}, \eta^{0}, \sigma, H^{\pm}, \phi^{\pm}, \eta^{\pm}$; $tq_iV_i$, $V_i= \gamma, Z, W, Z', W'$; and $Zq_i\bar q_i$, $q_i=b, t, B, T, T_5, T_6, T^{2/3}, T^{5/3}$, respectively. With these vertices we calculate the one-loop contributions to the weak dipole moments $a^W_t$ and $d^W_t$ of the top-quark in several scenarios with $m_{A_0}= 1000, 1500$ GeV, $m_{\eta^0}= 100, 500$ GeV, $F=[3000,6000]$ GeV, $f=[1000, 3000]$ GeV and $\tan\beta=3$. The paper is structured as follows. In Section II, we give a brief review of the BLHM. In Section III, we present the predictions of the BLHM for the weak dipole moments of the top-quark, $a^W_t$ and $d^W_t$. Finally, we present our conclusions in Section IV. In Appendix A, we present the complete set of Feynman rules for the study of the weak dipole moments of the top-quark in the context of the BLHM. In Appendix B, we provide all the numerical contributions of the particles that induce the AWMDM of the top-quark. \section{Brief review of the Bestest Little Higgs Model} Various extensions of the SM have been proposed in order to solve the mass hierarchy problem. Among the proposed extensions are the Little Higgs models (LHM)~\cite{Arkani1,Arkani2}, which employ a mechanism named collective symmetry breaking. Its main idea is to represent the SM Higgs boson as a pseudo-Nambu-Goldstone boson of an approximate global symmetry which is spontaneously broken at a scale in the TeV range.
In these models, the collective symmetry breaking mechanism is implemented in the gauge sector, the fermion sector and the Higgs sector, which predict new particles within the mass range of a few TeV. These new particles play the role of partners of the top-quark, the gauge bosons and the Higgs boson, the effect of which is to generate radiative corrections for the mass of the Higgs boson and thus cancel the divergent corrections induced by SM particles. LHM~\cite{Arkani1,Arkani2,Arkani3}, on the other hand, already face strong constraints from electroweak precision data. These constraints typically require the new gauge bosons of LHM to be quite heavy \cite{PRD67-2003, PRD68-2003}. In most LHM, the top partners are heavier than the new gauge bosons, and this can lead to significant fine-tuning in the Higgs potential \cite{JHEP03-2005}. An interesting and relatively recent model is the BLHM \cite{JHEP09-2010}, which overcomes these difficulties by including separate symmetry breaking scales at which the heavy gauge bosons and top partners obtain their masses. This model features a custodial $SU(2)$ symmetry~\cite{Schmaltz:2008vd,Diaz:2001yz}, has heavy gauge boson partner masses above the already excluded mass range, and has relatively light top partners below the upper bound from fine-tuning. The BLHM is based on two independent non-linear sigma models. With the first field, $\Sigma$, the global symmetry $SO(6)_A\times SO(6)_B$ is broken to the diagonal group $SO(6)_V$ at the energy scale $f$, while with the second field, $\Delta$, the global symmetry $SU(2)_C \times SU(2)_D$ is broken to the diagonal subgroup $SU(2)$ at the scale $F> f$.
In the first stage, 15 pseudo-Nambu-Goldstone bosons are generated, parameterized as \begin{equation}\label{Sigma} \Sigma=e^{i\Pi/f} e^{2i\Pi_{h}/f}e^{i\Pi/f}, \end{equation} \noindent where $\Pi$ and $\Pi_h$ are complex and antisymmetric matrices given by \begin{eqnarray}\label{Pi} \Pi= \left( \begin{array}{c c c} i(\phi_a T^{a}_L +\eta_{a} T_{R}^{a})_{4\times 4} & 0 & 0 \\ 0 & 0 & i\sigma/\sqrt{2} \\ 0 &-i\sigma/\sqrt{2} & 0 \end{array} \right), \hspace{1cm} \Pi_h=\frac{i}{\sqrt{2}} \left( \begin{array}{c c c} 0_{4\times4} & h_1 & h_2 \\ -h_{1}^{T} & 0 & 0 \\ -h_{2}^{T} & 0 & 0 \end{array} \right), \end{eqnarray} \noindent where $\phi_{a}$ and $\eta_{a}$ ($ a = 1,2,3 $) are real triplets, $h_{1}$ and $h_{2}$ are Higgs fields transforming as ${\bf 4}$'s of $SO(4)$, and $\sigma$ is a real singlet. The explicit representation of the Higgs fields is $h_{i}^{T}=(h_{i1}, h_{i2}, h_{i3}, h_{i4})$, while $T^{a}_{L, R}$ denote the generators of the group $SO(6)$, which are provided in~\cite{JHEP09-2010}. In the second stage of spontaneous symmetry-breaking, the pseudo-Nambu-Goldstone bosons of the field $\Delta$ are parameterized as follows \begin{equation}\label{Delta} \Delta=F e^{2i \Pi_d/F},\, \,\, \, \, \Pi_d=\chi_a \frac{\tau^{a}}{2} \ \ (a=1,2,3), \end{equation} \noindent where the $\chi_a$ represent the Nambu-Goldstone fields and the $\tau^{a}$ are the Pauli matrices, the generators of the $SU(2)$ group. \subsection{The scalar sector} The BLHM Higgs fields, $h_1$ and $h_2$, form the Higgs potential that undergoes spontaneous symmetry-breaking~\cite{JHEP09-2010,Kalyniak,Erikson}: \begin{equation}\label{Vhiggs} V_{Higgs}=\frac{1}{2}m_{1}^{2}h^{T}_{1}h_1 + \frac{1}{2}m_{2}^{2}h^{T}_{2}h_2 -B_\mu h^{T}_{1} h_2 + \frac{\lambda_{0}}{2} (h^{T}_{1}h_2)^{2}. \end{equation} \noindent The potential reaches a minimum when $m_1, m_2 >0$, while spontaneous electroweak symmetry-breaking requires that $B_\mu > m_1 m_2$.
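As a quick numerical illustration of this last condition, a minimal sketch in plain Python (with purely illustrative sample values for $m_1$, $m_2$, $B_\mu$ and $\lambda_0$, not a fit to the model) can scan the potential restricted to the VEV directions:

```python
# Grid-search check that the Higgs potential above develops a nontrivial
# minimum only when B_mu > m1*m2. Parameter values are illustrative only.

def V(v1, v2, m1, m2, Bmu, lam0):
    """Potential of the text restricted to <h1>=(v1,0,0,0), <h2>=(v2,0,0,0)."""
    return 0.5*m1**2*v1**2 + 0.5*m2**2*v2**2 - Bmu*v1*v2 + 0.5*lam0*(v1*v2)**2

def grid_minimum(m1, m2, Bmu, lam0, vmax=300.0, steps=300):
    """Coarse minimization of V over the quadrant v1, v2 >= 0."""
    best = (0.0, 0.0, V(0.0, 0.0, m1, m2, Bmu, lam0))
    for i in range(steps + 1):
        for j in range(steps + 1):
            v1, v2 = vmax*i/steps, vmax*j/steps
            val = V(v1, v2, m1, m2, Bmu, lam0)
            if val < best[2]:
                best = (v1, v2, val)
    return best

m1, m2, lam0 = 100.0, 300.0, 0.74   # sample values (GeV, GeV, dimensionless)

v1a, v2a, Va = grid_minimum(m1, m2, 35000.0, lam0)  # B_mu > m1*m2 = 30000
v1b, v2b, Vb = grid_minimum(m1, m2, 25000.0, lam0)  # B_mu < m1*m2
```

For $B_\mu > m_1 m_2$ the scan finds a nontrivial minimum with $V<0$ and $v_1/v_2 \simeq m_2/m_1$, while for $B_\mu < m_1 m_2$ the origin remains the global minimum.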
The symmetry-breaking mechanism is implemented in the BLHM when the Higgs doublets acquire their vacuum expectation values (VEVs), $\langle h_1\rangle ^{T}=(v_1,0,0,0)$ and $ \langle h_2 \rangle ^{T}=(v_2,0,0,0)$. By demanding that these VEVs minimize the Higgs potential of Eq.~(\ref{Vhiggs}), the following relations are obtained \begin{eqnarray}\label{v12} &&v^{2}_1=\frac{1}{\lambda_0}\frac{m_2}{m_1}(B_\mu-m_1 m_2),\\ &&v^{2}_2=\frac{1}{\lambda_0}\frac{m_1}{m_2}(B_\mu-m_1 m_2). \end{eqnarray} \noindent From these, one obtains \begin{equation}\label{vvacio} v^{2}\equiv v^{2}_1 +v^{2}_2= \frac{1}{\lambda_0}\left( \frac{m^{2}_1 + m^{2}_2}{m_1 m_2} \right) \left(B_\mu - m_1 m_2\right)\simeq \left(246\ \ \text{GeV}\right)^{2}, \end{equation} \begin{equation}\label{beta} \text{tan}\, \beta=\frac{v_1}{v_2}=\frac{m_2}{m_1}. \end{equation} \noindent From the diagonalization of the mass matrix of the scalar sector, the unphysical Goldstone fields $G_0$ and $G^{\pm}$, a pair of charged physical scalar fields $H^{\pm}$ and three neutral physical scalar fields $h_0$, $H_0$ and $A_0$ are generated~\cite{Kalyniak,PhenomenologyBLH}. The lightest state, $h_0$, is identified with the scalar boson of the SM. The masses of these fields are given as \begin{eqnarray}\label{masaAGH} m_{G_0}&=&m_{G^{\pm}}=0,\\ m^{2}_{A_{0}}&=&m^{2}_{H^{\pm}} =m^{2}_1+m^{2}_2,\label{mHmas} \\ m^{2}_{h_0,H_{0}} &=& \frac{B_\mu}{\text{sin}\, 2\beta}\mp \sqrt{\frac{B^{2}_{\mu}}{\text{sin}^{2}\, 2\beta} -2\lambda_0 B_\mu v^{2} \text{sin}\, 2\beta +\lambda^{2}_{0} v^{4} \text{sin}^{2}\, 2\beta } \label{mh0H0}. \end{eqnarray} \noindent The four parameters present in the Higgs potential, $ m_1, m_2, B_\mu$ and $\lambda_0 $, can be replaced by a more phenomenologically accessible set.
That is, the masses of the states $h_0$ and $A_0$, the angle $\beta$ and the VEV $v$~\cite{Kalyniak}: \begin{eqnarray}\label{parametros} B_\mu &=&\frac{1}{2}(\lambda_0 v^{2} + m^{2}_{A_{0}} )\, \text{sin}\, 2\beta,\\ \lambda_0 &=& \frac{m^{2}_{h_{0}}}{v^{2}}\Big(\frac{ m^{2}_{h_{0}}- m^{2}_{A_{0}} }{m^{2}_{h_{0}}-m^{2}_{A_{0}} \text{sin}^{2}\, 2\beta }\Big),\\ \text{tan}\, \alpha &=& \frac{ B_\mu \text{cot}\, 2\beta+ \sqrt{(B^{2}_\mu/\text{sin}^{2}\, 2\beta)-2\lambda_0 B_\mu v^{2} \text{sin}\, 2\beta+ \lambda^{2}_{0} v^{4}\text{sin}^{2}\, 2\beta } }{B_\mu -\lambda_0 v^{2} \text{sin}\, 2\beta},\label{alpha} \\ m^{2}_{H_{0}} &=& \frac{B_\mu}{\text{sin}\, 2\beta}+ \sqrt{\frac{B^{2}_{\mu}}{\text{sin}^{2}\, 2\beta} -2\lambda_0 B_\mu v^{2} \text{sin}\, 2\beta +\lambda^{2}_{0} v^{4} \text{sin}^{2}\, 2\beta }, \label{mH0}\\ m^{2}_{\sigma}&=&(\lambda_{56} + \lambda_{65})f^{2}=2\lambda_0 f^{2} \text{K}_\sigma. \label{masaescalar} \end{eqnarray} In Eq.~(\ref{masaescalar}), $\lambda_{56}$ and $\lambda_{65}$ represent coefficients of the quartic potential defined in \cite{JHEP09-2010}; both must take nonzero values to achieve the collective breaking of the symmetry and to generate a quartic coupling of the Higgs boson \cite{Kalyniak}. The BLHM also contains scalar triplet fields, which receive a contribution to their masses from the explicit symmetry-breaking terms of the model, defined in Ref.~\cite{JHEP09-2010}, that depend on the parameter $m_4$:
\begin{eqnarray} m^{2}_{\phi^{0}}&=& \frac{16}{3}F^{2} \frac{3 g^{2}_{A} g^{2}_{B}}{32 \pi^{2}} \log \left( \frac{\Lambda^{2}}{m^{2}_{W'}}\right) + m^{2}_{4} \frac{f^{4}+ F^{4}}{F^{2}(f^{2}+F^{2})},\\ m^{2}_{\phi^{\pm}}&=& \frac{16}{3}F^{2} \frac{3 g^{2}_{A} g^{2}_{B}}{32 \pi^{2}} \log \left( \frac{\Lambda^{2}}{m^{2}_{W'}}\right) + m^{2}_{4} \frac{f^{4}+f^{2}F^{2}+F^{4}}{F^{2}(f^{2}+F^{2})},\\ m^{2}_{\eta^{\pm}}&=& m^{2}_{4}+ \frac{3 f^{2} g^{2}_{Y}}{64 \pi^{2}}\frac{\Lambda^{2}}{F^{2}},\\ m^{2}_{\eta^{0}}&=&m^{2}_{4}. \end{eqnarray} \subsection{The gauge sector} In the BLHM, the new gauge bosons develop masses proportional to $\sqrt{f^2+F^2}\sim F$. This makes these bosons heavy relative to the other new particles, whose masses are proportional to $f$. The kinetic terms of the gauge fields in the BLHM are given as follows: \begin{equation}\label{Lcinetico} \mathcal{L}=\frac{f^{2}}{8} \text{Tr}(D_{\mu} \Sigma^{\dagger} D^{\mu} \Sigma) + \frac{F^{2}}{4} \text{Tr}(D_\mu \Delta^{\dagger} D^{\mu} \Delta), \end{equation} \noindent where \begin{eqnarray}\label{derivadasC} D_{\mu}\Sigma&=&\partial_{\mu} \Sigma +i g_A A^{a}_{1\mu} T^{a}_L \Sigma- i g_B \Sigma A^{a}_{2\mu} T^{a}_L+ i g_{Y} B^{3}_{\mu}(T^{3}_{R}\Sigma-\Sigma T^{3}_{R}),\\ D_{\mu}\Delta&=&\partial_{\mu} \Delta +i g_A A^{a}_{1\mu} \frac{\tau^{a}}{2} \Delta- i g_B \Delta A^{a}_{2\mu} \frac{\tau^{a}}{2}. \end{eqnarray} \noindent $T^{a}_{L}$ are the generators of the group $SO(6)_A$ corresponding to the subgroup $SU(2)_{LA}$, while $T^3_R$ represents the third component of the $SO(6)_B$ generators corresponding to the $SU(2)_{LB} $ subgroup; these matrices are provided in~\cite{JHEP09-2010}. $g_A$ and $A^{a}_{1\mu}$ denote the gauge coupling and field associated with the gauge bosons of $SU(2)_{LA}$, $g_B$ and $A^{a}_{2\mu}$ represent the gauge coupling and the field associated with $SU(2)_{LB}$, while $g_Y$ and $B^{3}_{\mu}$ denote the hypercharge gauge coupling and its associated field.
When $\Sigma$ and $\Delta$ get their VEVs, the gauge fields $A^{a}_{1\mu}$ and $A^{a}_{2\mu}$ are mixed to form a massless triplet $A^{a}_{0\mu}$ and a massive triplet $A^{a}_{H\mu}$, \begin{equation}\label{AA} A^{a}_{0\mu}=\text{cos}\, \theta_g A^{a}_{1\mu} + \text{sin}\, \theta_g A^{a}_{2\mu}, \hspace{5mm} A^{a}_{H\mu}= \text{sin}\, \theta_g A^{a}_{1\mu}- \text{cos}\, \theta_g A^{a}_{2\mu}, \end{equation} \noindent with the mixing angles \begin{equation}\label{gagb} s_g\equiv \sin \theta_g=\frac{g_A}{\sqrt{g_{A}^{2}+g_{B}^{2}} },\ \ c_g \equiv \cos \theta_g=\frac{g_B}{\sqrt{g_{A}^{2}+g_{B}^{2}} }, \end{equation} \noindent which are related to the electroweak gauge coupling $g$ through \begin{equation}\label{g} \frac{1}{g^{2}}=\frac{1}{g^{2}_A}+\frac{1}{g^{2}_B}. \end{equation} After the breaking of the electroweak symmetry, when the Higgs doublets, $h_1$ and $h_2$ acquire their VEVs, the masses of the gauge bosons of the BLHM are generated. In terms of the model parameters, the masses are given by \begin{eqnarray}\label{masaBoson} m^{2}_{\gamma} &=0&, \\ m^{2}_{Z}&=&\frac{1}{4}\left(g^{2}+g^{2}_Y \right)v^{2} \left(1-\frac{v^{2}}{12 f^2} \left(2+\frac{3f^2}{f^2+F^2} \left( s^{2}_g -c^{2}_g \right)^{2} \right) \right), \\ m^{2}_{W}&=& \frac{1}{4} g^{2} v^{2} \left(1- \frac{v^{2}}{12 f^2} \left(2+ \frac{3f^2}{f^2+F^2} \left(s^{2}_g -c^{2}_g \right)^{2}\right) \right),\\ m^{2}_{Z'}&=&m^{2}_{W'} + \frac{g^2 s^{2}_W v^4}{16 c^{2}_W (f^2+F^2)} \left(s^{2}_g -c^{2}_g \right)^{2}, \label{mzprima} \\ m^{2}_{W'}&=& \frac{g^2}{4 c^{2}_{g} s^{2}_{g}} \left(f^2+F^2 \right) - m^{2}_{W}. \label{mwprima} \end{eqnarray} The weak mixing angle is defined as \begin{eqnarray}\label{angulodebil} s_W&&\equiv\sin \theta_W = \frac{g_Y}{\sqrt{g^2+ g^{2}_Y }}, \\ c_W&&\equiv\cos \theta_W= \frac{g}{\sqrt{g^2+ g^{2}_Y }},\\ x_W&=&\frac{1}{2 c_W} s_g c_g (s^{2}_g -c^{2}_g). 
\end{eqnarray} \subsection{The fermion sector} \label{subsecfermion} To construct the Yukawa interactions in the BLHM, the fermions must transform under the group $SO(6)_A$ or $SO(6)_B$. In this model, the fermion sector is divided into two parts. First, the sector of massive fermions is represented by Eq.~(\ref{Ltop}). This sector includes the top and bottom quarks of the SM and a series of new heavy quarks arranged in four multiplets: $Q$ and $Q'$, which transform under $SO(6)_A$, and $U^c$ and $U^{'c}_5$, which transform under the group $SO(6)_B$. Second, the sector of light fermions is contained in Eq.~(\ref{Lligeros}); this expression generates all the interactions of the remaining SM fermions with the exotic particles of the BLHM. The Lagrangian that describes the massive fermions is given by \cite{JHEP09-2010} \begin{equation}\label{Ltop} \mathcal{L}_t=y_1 f Q^{T} S \Sigma S U^{c} + y_2 f Q'^{T} \Sigma U^{c} +y_3 f Q^{T} \Sigma U'^{c}_{5} +y_b f q_{3}^{T}(-2 i T^{2}_{R} \Sigma) U^{c}_{b}+ H.c., \end{equation} \noindent where $ S = \text{diag} (1,1,1,1, -1, -1) $. The multiplets are arranged as follows \begin{eqnarray}\label{camposf} Q^{T}&=&\frac{1}{\sqrt{2}}\left( \left(-Q_{a_1} -Q_{b_2}\right), i\left(Q_{a_1} -Q_{b_2} \right), \left(Q_{a_2} -Q_{b_1}\right), i\left(Q_{a_2} -Q_{b_1}\right), Q_{5},Q_{6} \right),\\ Q'^{T}&=&\frac{1}{\sqrt{2}} (-Q'_{a_1}, iQ'_{a_1},Q'_{a_2},iQ'_{a_2},0,0 ),\\ q_{3}^{T}&=& \frac{1}{\sqrt{2}} (-\bar{t}_L, i\bar{t}_L,\bar{b}_L,i\bar{b}_L,0,0 ),\\ U^{cT}&=& \frac{1}{\sqrt{2}} \left( (-U^{c}_{b_1} -U^{c}_{a_2}), i (U^{c}_{b_1} -U^{c}_{a_2}), (U^{c}_{b_2} -U^{c}_{a_1}), i (U^{c}_{b_2} -U^{c}_{a_1}), U^{c}_{5},U^{c}_{6} \right),\\ U'^{cT}&=&(0,0,0,0,U'^{c}_5,0),\\ U_{b}^{cT}&=&(0,0,0,0,b^{c},0). \end{eqnarray} \noindent The explicit forms of the components of the multiplets are defined in Ref.~\cite{JHEP09-2010}. For simplicity, the Yukawa couplings are assumed to be real, $y_1, y_2, y_3 \in \mathbb{R}$.
The Yukawa coupling of the top-quark is defined as \begin{equation}\label{yt} y_t= \frac{3 y_1 y_2 y_3}{ \sqrt{ (y^{2}_1 +y^{2}_2)(y^{2}_1 +y^{2}_3)}}=\frac{m_{t}}{v \sin \beta}. \end{equation} \noindent For the light fermions, the corresponding Lagrangian is \begin{equation}\label{Lligeros} \mathcal{L}_{light}= \sum_{i=1,2} y_u f q^{T}_i \Sigma u^{c}_{i} + \sum_{i=1,2} y_{d} f q^{T}_{i}(-2i T^{2}_{R} \Sigma) d^{c}_i +\sum_{i=1,2,3} y_e f l^{T}_i (-2i T^{2}_{R} \Sigma) e^{c}_i + h.c., \end{equation} \noindent with \begin{eqnarray}\label{qligeros} q^{T}_i &=&\frac{1}{\sqrt{2}} (-\bar{u}_{iL}, i \bar{u}_{iL}, \bar{d}_{iL}, i \bar{d}_{iL},0,0),\\ l^{T}_i &=&\frac{1}{\sqrt{2}} (-\bar{\nu}_{iL}, i \bar{\nu}_{iL}, \bar{e}_{iL}, i \bar{e}_{iL},0,0),\\ u^{cT}_i &=&(0,0,0,0,u^{c}_i,0),\\ d^{cT}_i &=&(0,0,0,0,d^{c}_i,0),\\ e^{cT}_i &=&(0,0,0,0,e^{c}_i,0). \end{eqnarray} After the breaking of the electroweak symmetry, the resulting mass terms are expanded in a power series up to order $\frac{v^2}{f^2}$, and the mass matrices are diagonalized using perturbation theory. The fermion mass eigenstates are calculated under the assumption that $y_{2} \neq y_{3}$; otherwise, the masses of $T$ and $T_{5}$ are degenerate at lowest order~\cite{PhenomenologyBLH}: \begin{eqnarray} m^{2}_t &=&y^{2}_t v^{2}_1,\label{mt} \\ m^{2}_T &=& (y^{2}_1 + y^{2}_2)f^2 + \frac{9 v^{2}_1 y^{2}_1 y^{2}_2 y^{2}_3 }{(y^{2}_1 + y^{2}_2) (y^{2}_2 - y^{2}_3)}, \label{mT} \\ m^{2}_{T_5} &=& (y^{2}_1 + y^{2}_3)f^2 - \frac{9 v^{2}_1 y^{2}_1 y^{2}_2 y^{2}_3 }{(y^{2}_1 + y^{2}_3) (y^{2}_2 - y^{2}_3)},\label{MT5} \\ m^{2}_{T_6} &=&m^{2}_{T^{2/3}_b}=m^{2}_{T^{5/3}_b} =y^{2}_1 f^2,\label{mT6} \\ m^{2}_B & =&(y^{2}_1 + y^{2}_2)f^2,\label{mB} \end{eqnarray} \noindent with $v_1=v\sin \beta$ and $v_2=v\cos \beta$. \subsection{The currents sector} In this sector, the interactions of the fermions with the gauge bosons are determined.
The vertices are obtained from the following Lagrangian \cite{PhenomenologyBLH}, \begin{eqnarray}\label{LbaseW} \mathcal{L} &=& \bar{Q} \bar{\tau}^{\mu} D_{\mu}Q + \bar{Q}' \bar{\tau}^{\mu} D_{\mu}Q'- U^{c\dagger} \tau^{\mu} D_{\mu}U^{c}- U'^{c\dagger} \tau^{\mu} D_{\mu}U'^{c} - U_{b}^{c\dagger} \tau^{\mu} D_{\mu}U_{b}^{c} +\sum_{i=1,2} q^{\dagger}_i \tau^{\mu} D_{\mu} q_i \nonumber \\ &+& \sum_{i=1,2,3} l^{\dagger}_i \tau^{\mu} D_{\mu} l_i - \sum_{i=1,2,3} e_i^{c\dagger} \tau^{\mu} D_{\mu} e^{c}_i - \sum_{i=1,2} u_{i}^{c\dagger} \tau^{\mu} D_{\mu} u^{c}_{i} - \sum_{i=1,2} d_{i}^{c\dagger} \tau^{\mu} D_{\mu} d^{c}_i, \end{eqnarray} \noindent where $\tau^{\mu}$ and $\bar{\tau}^{\mu}$ are defined according to~\cite{Spremier}. The respective covariant derivatives are \begin{eqnarray}\label{dcovariantes} D_{\mu}Q & =&\partial_{\mu}Q+ \sum_{a} (i g_A A^{a}_{1\mu} T^{a}_{L} Q )+ ig_Y B_{3\mu} (T^{3}_{R} +T^{+}_{X} )Q, \\ D_{\mu}Q' & =&\partial_{\mu}Q'+\sum_{a} (i g_A A^{a}_{1\mu} T^{a}_{L} Q' )+ ig_Y B_{3\mu} \left(\frac{1}{6} \right)Q',\\ D_{\mu}U^{c} & =&\partial_{\mu}U^{c}+ \sum_{a} (i g_B A^{a}_{2\mu} T^{a}_{L} U^{c} )+ ig_Y B_{3\mu} (T^{3}_{R} +T^{-}_{X} )U^{c},\\ D_{\mu}U'^{c} & =&\partial_{\mu}U'^{c}+ ig_Y B_{3\mu} T^{-}_{X} U'^{c}, \\ D_{\mu}U_{b}^{c} & =&\partial_{\mu}U_{b}^{c}+ ig_Y B_{3\mu} \left(\frac{1}{3} \right) U_{b}^{c}, \\ D_{\mu}q_i & =&\partial_{\mu}q_i+ \sum_{a} (i g_A A^{a}_{1\mu} T^{a}_{L} q_i )+ ig_Y B_{3\mu} (T^{3}_{R} +T^{+}_{X} )q_i,\\ D_{\mu}l_i & =&\partial_{\mu}l_i +\sum_{a} (i g_B A^{a}_{2\mu} T^{a}_{L} l_i )+ ig_Y B_{3\mu} T^{3}_{R} l_i,\\ D_{\mu}e^{c}_i & =&\partial_{\mu}e_{i}^{c} + ig_Y B_{3\mu} T^{e}_{X} e^{c}_i,\\ D_{\mu}u^{c}_i & =&\partial_{\mu}u_{i}^{c}+ ig_Y B_{3\mu} T^{-}_{X} u^{c}_i,\\ D_{\mu}d^{c}_i & =& \partial_{\mu}d_{i}^{c}+ ig_Y B_{3\mu} T^{d}_{X} d^{c}_i.\label{dcovariantes70} \end{eqnarray} The Feynman rules of the BLHM involved in our calculation are obtained by transforming the gauge eigenstate in terms of the mass 
eigenstates for the fermions, the gauge bosons, and the scalar bosons (see Appendix C of Ref.~\cite{Martin:2012kqb}). \section{Sensitivity limits on the AWMDM $a^W_t$ in the BLHM} The weak properties of the top-quark arise in quantum field theory from its interaction with the $Z$ boson. In this regard, the most general Lorentz-invariant vertex function describing the interaction of a $Z$ boson with two top-quarks can be written in terms of ten form factors \cite{NPB551-1999,NPB812-2009}, which are functions of the kinematic invariants. In the low energy limit, these correspond to couplings that multiply dimension-four or -five operators in an effective Lagrangian, and they may be complex. If the $Z$ boson couples to effectively massless fermions, the number of independent form factors is reduced to eight. In addition, if both top-quarks are on-shell, the number is further reduced to four. In this case, the $t\bar tZ$ vertex can be written in the form \begin{eqnarray}\label{verticeZtt} ie\bar{u}(p') \Gamma^{\mu}_{t\bar tZ}(q^{2}) u(p) &=&ie \bar{u}(p')\big\{ \gamma^{\mu}\left[ F_{V}(q^{2})-F_{A}(q^{2})\gamma^{5}\right] \nonumber \\ &+& i \sigma^{\mu \nu} q_{\nu} \left[ F_{M}(q^{2})- iF_{E}(q^{2})\gamma^{5}\right] \big\}u(p), \end{eqnarray} \noindent where $e$ is the proton charge. In the low energy limit, $F_{V}(0)$ and $F_{A}(0)$ are the $t\bar tZ$ vector and axial-vector form factors of the SM, while $F_M(q^2)$ and $F_E(q^2)$ are the form factors associated with the weak dipole moments. The latter appear due to quantum corrections and are a valuable tool to study the effects of new physics indirectly, through virtual corrections of new particles predicted by extensions of the SM. Another characteristic of the form factors is that they depend on a single independent kinematic variable, $q^2$, where $q = p-p'$ denotes the incoming momentum of the $Z$ boson.
In this work, the $Z$ boson is off shell, since to produce a top-quark pair the $Z$ boson must necessarily be off resonance. In this case, the well-known gauge-dependence problem arises when one studies the radiative corrections to fermion-pair production at colliders with center-of-mass energy beyond the mass of the $Z$ boson, that is, $\sqrt{q^{2}}>2m_t$~\cite{Bernabeu:1995gs}. The pinch technique~\cite{Papavassiliou:1993qe,Cornwall:1989gv,Cornwall:1981zr,Alkofer:2000wg,Papavassiliou:1996zn} can then be used to remove the gauge dependence. Our results are useful for determining effects of new physics that could be potentially very important. The form factors $F_M (q^2)$ and $F_E(q^2)$ are related to the AWMDM $a^{W}_t$ and the WEDM $d^{W}_t $: \begin{eqnarray}\label{MDD} F_{M}(q^{2})&=&-\frac{a^{W}_t}{2 m_t}, \\ F_{E}(q^{2})&=&-\frac{d^{W}_t}{ e}. \end{eqnarray} It is worth mentioning that the weak magnetic dipole form factor $F_M (q^2)$ receives contributions at the one-loop level in the SM. However, there is no such contribution to the weak electric dipole form factor $F_E(q^2)$ \cite{NPB551-1999}. For this reason, we only estimate limits on the AWMDM of the top-quark. \subsection{ Contribution of new scalar bosons, gauge bosons and heavy quarks to the AWMDM of the top-quark} The AWMDM of the top-quark carries important information about its interactions with other particles. Its small magnitude in the SM makes this coupling ideal for probing new physics interactions and for exploring the potential role of the top-quark in electroweak symmetry breaking, which has yet to be elucidated. In this way, the top-quark is expected to be a window to any new physics at the TeV energy scale. In this subsection, we evaluate the AWMDM of the top-quark in the context of the BLHM.
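For orientation, the dipole-moment relations above are straightforward to evaluate numerically. The following sketch (the input form-factor values are hypothetical placeholders, not BLHM predictions) converts a magnetic form factor $F_M$ into the dimensionless $a^W_t$, and expresses an electric dipole moment in the conventional $e\cdot$cm units using $\hbar c \simeq 1.9733\times 10^{-14}$ GeV$\cdot$cm:

```python
# Illustrative conversion of the weak-dipole form factors into a^W_t and d^W_t,
# following F_M(q^2) = -a^W_t/(2 m_t) and F_E(q^2) = -d^W_t/e.
# The input form-factor values below are hypothetical placeholders.

HBARC_GEV_CM = 1.9733e-14  # hbar*c in GeV*cm, converts GeV^-1 to a length
m_t = 172.76               # top-quark mass in GeV

def awmdm_from_FM(FM):
    """Dimensionless AWMDM a^W_t from a magnetic form factor F_M in GeV^-1."""
    return -2.0 * m_t * FM

def wedm_in_e_cm(FE):
    """WEDM d^W_t in units of e*cm from an electric form factor F_E in GeV^-1."""
    return -FE * HBARC_GEV_CM

FM_example = -1.0e-7            # hypothetical value in GeV^-1
a_t = awmdm_from_FM(FM_example)  # ~3.5e-5, dimensionless
```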
All the possible one-loop contributions to the $F_M (q^2)$ and $F_E(q^2)$ form factors can be classified in terms of the two classes of triangle diagrams depicted in Fig.~\ref{dipolo}. From this figure we can see that $S_{i}$, $V_{i}$ and $ q_{i}$ are the particles circulating in the loop: $S_{i}$ represents the scalars $h_0$ (SM Higgs boson), $H_0, A_0, \phi^{0}, \eta^{0}, \sigma, H^{\pm}, \phi^{\pm}, \eta^{\pm}$; $V_{i}$ stands for the gauge bosons $\gamma, Z, W, Z', W'$; and $q_i$ denotes the quarks $b, t, B, T, T_5, T_6, T^{2/3}, T^{5/3}$. To obtain the amplitude of each contribution we need to know the Feynman rules involved in the diagrams shown in Fig.~\ref{dipolo}; these vertices are provided in Appendix A. In the unitary gauge, there are 52 diagrams that contribute to the vertex $t\bar{t}Z$. We classify these contributions into two categories, which are written in the following compact form: \begin{figure}[H] \center \subfloat[]{\includegraphics[width=5.5cm]{dipolo-top-Si.eps}} \subfloat[]{\includegraphics[width=5.5cm]{dipolo-top-Vi.eps}} \caption{ \label{dipolo} Feynman diagrams contributing to the AWMDM of the top-quark at one-loop. a) Scalar bosons $S_i=h_0, H_0, A_0, \phi^{0}, \eta^{0}, \sigma, H^{\pm}, \phi^{\pm}, \eta^{\pm}$, b) Vector bosons $V_i= \gamma, Z, W, Z', W'$, and heavy quarks $q_i=b, t, B, T, T_5, T_6, T^{2/3}, T^{5/3}$.} \end{figure} \begin{eqnarray}\label{amplitudesSV1} \mathcal{M}^{\mu}_{t}(S_{i})&=& \int \frac{d^{4}k}{(2\pi)^{4}} \bar{u}(p_{2}) \left(f^{*}_{S_{i}} +f^{*}_{P_{i}} \gamma^{5}\right) \left[i \frac{\not\! k + \not\!p_{2}+m_{q_i} }{(k+p_{2})^{2}-m^{2}_{q_{i}}} \right] \left( \gamma^{\mu} (F_{V_i}+F_{A_{i}}\gamma^{5}) \right) \nonumber \\ &\times & \left[i \frac{ \not\!
k + \not\!p_{1}+m_{q_i} }{(k+p_{1})^{2}-m^{2}_{q_{i}}} \right] \left(f_{S_{i}} +f_{P_{i}} \gamma^{5}\right) u(p_{1}) \left(\frac{i}{k^{2}- m^{2}_{S_{i}}} \right), \\ \mathcal{M}^{\mu}_{t}(V_{i}) &=& \int \frac{d^{4}k}{(2\pi)^{4}} \bar{u}(p_{2}) \gamma^{\alpha} \left(f^{*}_{V_{i}} +f^{*}_{A_{i}} \gamma^{5}\right) \left[i \frac{\not\! k + \not\! p_{2}+m_{q_i} }{(k+p_{2})^{2}-m^{2}_{q_{i}}} \right] \left( \gamma^{\mu} (F_{V_{i}}+F_{A_{i}}\gamma^{5}) \right) \nonumber \\ &\times & \left[i \frac{ \not\! k + \not\! p_{1}+m_{q_i} }{(k+p_{1})^{2}-m^{2}_{q_{i}}} \right] \gamma^{\beta}\left(f_{V_{i}} +f_{A_{i}} \gamma^{5}\right) u(p_{1}) \left[\frac{i}{k^{2}- m^{2}_{V_{i}}} \left(-g_{\alpha \beta}+ \frac{k_{\alpha}k_{\beta} }{m^{2}_{V_{i}}}\right) \right],\label{amplitudesSV2} \end{eqnarray} \noindent where $f_{S_{i}}, f_{P_{i}}, f_{V_{i}}$ and $ f_{A_{i}}$ denote the scalar, pseudoscalar, vector and axial-vector form factors, respectively. For the virtual photon case, the longitudinal term of the propagator in Eq.~(\ref{amplitudesSV2}) is absent. For each amplitude we have to pick out only the coefficients of $\sigma^{\mu \nu} q_{\nu}$ and $\sigma^{\mu \nu} q_{\nu} \gamma^{5}$ shown in Eq.~(\ref{verticeZtt}). Therefore, the contributions of the new physics to the AWMDM and WEDM of the top-quark are given as follows \begin{eqnarray}\label{awt} a^{W}_{t}\equiv [a^{W}_{t}]^{BLHM} &=& [a^{W}_{t}]^{S_i} + [a^{W}_{t}]^{V_i}, \\ d^{W}_{t}\equiv [d^{W}_{t}]^{BLHM} &=&[d^{W}_{t}]^{S_i} + [d^{W}_{t}]^{V_i}. \end{eqnarray} In the context of the BLHM, $d^{W}_{t}$ vanishes, i.e., it does not receive radiative corrections at the one-loop level. \subsection{Parameter space} We consider the following input parameters: $m_{A_{0}}$, $m_{\eta^{0}}$ and $\tan \beta$. The mass of the pseudoscalar $A_{0}$, which is fixed at around 1000 GeV, is in strict agreement with the most recent experimental data on searches for new scalar particles~\cite{ATLAS:2020gxx}.
On the other hand, the free parameters $m_{4, 5, 6}$~\cite{JHEP09-2010} are introduced to break all the axial symmetries in the Higgs potential, giving positive masses to all the scalars. Specifically, the $\eta^{0}$ scalar receives a mass $m_{\eta^{0}}=m_{4}=100$ GeV; in the BLHM, the restriction $m_{4}\gtrsim 10$ GeV must be satisfied~\cite{JHEP09-2010}. Theoretical constraints on the BLHM parameters, arising primarily from perturbativity requirements, restrict the mixing angle $\beta$ to satisfy \begin{eqnarray}\label{cotabeta} 1 < \text{tan}\ \beta < \sqrt{ \frac{2+2 \sqrt{\big(1-\frac{m^{2}_{h_0} }{m^{2}_{A_0}} \big) \big(1-\frac{m^{2}_{h_0} }{4 \pi v^{2}}\big) } }{ \frac{m^{2}_{h_0}}{m^{2}_{A_0}} \big(1+ \frac{m^{2}_{A_0}- m^{2}_{h_0}}{4 \pi v^{2}} \big) } -1 }. \end{eqnarray} \noindent From this inequality we can extract values for the $\tan \beta$ parameter. In particular, for $m_{A_{0}}=1000$ GeV, one obtains $1 < \tan \beta < 10.45$. For our analysis we choose $\tan \beta=3$, $m_{A_{0}}=1000$ GeV, $m_{\eta^{0}}=100$ GeV and $F= 4000$ GeV. Due to the characteristics of the BLHM, avoiding fine-tuning requires light exotic quarks, whereas precision electroweak constraints require heavy new gauge bosons~\cite{JHEP09-2010}. In this way, the energy scale $F$ is chosen to be large enough to ensure that the new gauge bosons are much heavier than the exotic quarks. Recall that the new heavy gauge bosons develop masses proportional to a combination of $f$ and $F$ (see Eqs.~(\ref{mzprima}) and~(\ref{mwprima})). \begin{figure}[H] \subfloat[$y_{2} < y_{3}$]{\includegraphics[width=8.0cm]{masaQ.eps}} \subfloat[$y_{2} > y_{3}$]{\includegraphics[width=8.0cm]{masaQy2mayor.eps}}\\ \centerline{ \subfloat[]{\includegraphics[width=8.0cm]{masaSV.eps}} } \caption{ \label{massSVQ} Behavior of particle masses as a function of the energy scale $f$ in the BLHM. a) Heavy quarks in the region $y_{2} < y_{3}$.
b) Heavy quarks in the region $y_{2} > y_{3}$. c) Scalars and vector bosons.} \end{figure} In Fig.~\ref{massSVQ}, we show the spectrum of the new particles whose masses are proportional to the energy scale $f$. As mentioned in Subsection~\ref{subsecfermion}, the analytical expressions for the mass eigenvalues, Eqs.~(\ref{mt})~-~(\ref{mB}), are valid only in the region $|y_{2}-y_{3}|>0$. In this way, the exotic quarks obey the following mass hierarchies in the corresponding sub-regions~\cite{PhenomenologyBLH,Godfrey:2012tf}, \begin{eqnarray} y_{2} < y_{3}, \ m_{T^{2/3}} &=& m_{T^{5/3}}= m_{T_{6}} < m_{T} < m_{B} < m_{T_5}, \end{eqnarray} \begin{eqnarray} y_{2} > y_{3}, \ m_{T^{2/3}} &=& m_{T^{5/3}}= m_{T_{6}}< m_{T_5} < m_{B} < m_{T} . \end{eqnarray} \noindent When $y_{2} < y_{3}$, the mass difference between the $T_{5}$ and $T_{6}$ quarks is large, which increases the decay modes available to the $T_{5}$ state through cascade decays into non-SM particles. For $y_{2} > y_{3}$, the mass splitting between $T_{5}$ and $T_{6}$ is relatively small, so the $T_{5}$ state decays predominantly into SM particles~\cite{PhenomenologyBLH,Godfrey:2012tf}. Since the parameter space of this model is large, we restrict our study to two sample sub-regions that characterize the mass range of the new heavy quarks; for this purpose we choose $y_{1}=0.61$, $y_2=0.35$, $y_3=0.84$ ($y_2 < y_3$) and $y_{1}=0.61$, $y_2=0.84 $, $y_3=0.35$ ($y_{2}>y_{3}$). The values of the three Yukawa couplings are fixed through the top-quark mass (see Eq.~(\ref{yt})) and correspond to a perturbative scenario. With these values of $y_{1,2,3}$, the masses of the heavy quarks are generated; these are provided in Eqs.~(\ref{mT1})~-~(\ref{mB2}). In the established scenarios, the lightest quarks are $T^{2/3}$, $T^{5/3}$ and $T_{6}$, whose masses are bounded from below at about 610 GeV~\cite{CMS:2018ubm,CMS:2013wkd,CDF:2009gat,Contino:2008hi}.
\begin{itemize} \item \text{Prediction with} $y_{2} < y_{3}$: \begin{eqnarray} m_{T} &=& [663.3, 2096.8]\ \text{GeV},\label{mT1} \\ m_{T_{5}} &=& [1050.1, 3118.4]\ \text{GeV},\label{mT51} \\ m_{T^{2/3}} &=& m_{T^{5/3}}= m_{T_{6}} = [610.0, 1830.0]\ \text{GeV}, \label{mT231} \\ m_{B} &=& [703.3, 2109.8]\ \text{GeV}.\label{mB1} \end{eqnarray} \item \text{Prediction with} $y_{2} > y_{3}$: \begin{eqnarray} m_{T} &=& [1050.1, 3118.4]\ \text{GeV}, \label{mT2} \\ m_{T_{5}} &=& [663.3, 2096.8]\ \text{GeV},\label{mT52} \\ m_{T^{2/3}} &=& m_{T^{5/3}}= m_{T_{6}} = [610.0, 1830.0]\ \text{GeV}, \label{mT232} \\ m_{B} &=& [1038.1, 3114.4]\ \text{GeV}.\label{mB2} \end{eqnarray} \end{itemize} \noindent With respect to the masses of the scalar and vector particles, in Fig.~\ref{massSVQ} (c) we observe that $\sigma$ is the heaviest scalar, while $\eta^{\pm}$ is the lightest one. The new gauge bosons $Z'$ and $W'$ acquire equal masses. Due to the fine-tuning constraints, the scale $f$ is varied from 1000 to 3000 GeV, thus obtaining a range of masses for each of the new quarks, scalar bosons, and vector bosons. \begin{eqnarray} m_{\sigma} &=& [2108.7, 6326.0]\ \text{GeV},\\ m_{\phi^{0}} &=& [722.3, 975.7]\ \text{GeV},\\ m_{\eta^{\pm}} &=& [315.3, 902.7]\ \text{GeV},\\ m_{\phi^{\pm}} &=& [722.5, 976.7]\ \text{GeV}, \\ m_{Z'} &=& m_{W'}=[3831.7, 4225.9]\ \text{GeV}. \end{eqnarray} The masses of the scalars $H^{\pm}$ and $H_0$ do not depend on the energy scale $f$ but are calculated from the input parameters of the BLHM, $m_{A_{0}}$ and $\tan \beta$ (see Eqs.~(\ref{mHmas}) and~(\ref{mH0})). \begin{eqnarray} m_{H^{\pm}} &=& m_{A_{0}}, \\ m_{H_{0}} &\approx & 1010\ \text{GeV}, \ \ \ \ \text{for}\ m_{A_{0}}=1000\ \text{GeV}.\label{mH00} \end{eqnarray} \subsection{Constraints on model parameters} The structure of the BLHM was constructed to solve some problems that occur in most Little Higgs models. 
The reason this model succeeds is that it is built on two separate symmetry-breaking scales, $f$ and $F>f$, at which the exotic quarks and heavy gauge bosons, respectively, obtain their masses. Thus, the new gauge bosons are relatively heavy, consistent with electroweak precision measurements, since their masses lie above the already excluded range. For the fermion sector of the model, the most stringent theoretical constraint on the masses of the exotic quarks comes from the fine-tuning of the Higgs potential due to fermion loops. It is therefore important to determine realistic values of the three Yukawa couplings, $y_{1,2,3}$, and the top-quark Yukawa coupling, $y_{t}$, that evade the fine-tuning constraints. In this sense, a fit of the Yukawa coupling parameters is required. In the BLHM, the size of the fine-tuning can be computed in the following way~\cite{JHEP09-2010,PhenomenologyBLH} \begin{eqnarray} \label{fine-tuning} \Psi=\frac{| \delta m^{2}_{1} |}{\lambda_0 v^{2} \cos^{2} \beta}, \ \delta m^{2}_{1} = -\frac{27 f^2}{8 \pi^{2} } \frac{ y^{2}_{1} y^{2}_{2} y^{2}_{3} }{y^{2}_{2}-y^{2}_{3} }\, \text{log} \left( \frac{y^{2}_{1} + y^{2}_{2}}{y^{2}_{1} + y^{2}_{3}} \right). \end{eqnarray} \noindent If $\Psi\sim 1$, there is no fine-tuning in the model. On the other hand, the top-quark Yukawa coupling is determined by \begin{eqnarray} y_{t}=\frac{m_{t}}{v \sin \beta}=\frac{3 y_1 y_2 y_3}{\sqrt{\left(y^{2}_{1}+ y^{2}_{2} \right) \left(y^{2}_{1} + y^{2}_{3} \right) }}, \end{eqnarray} \noindent where $m_{t}=172.76$ GeV is the top-quark mass, which yields $y_{t}=0.74$. With this fixed value of $y_t$, we can generate random perturbative values for the $y_{1,2,3}$ parameters. In order to obtain a numerical estimate of the fine-tuning and an upper limit on the $f$ scale for which the new physics does not require significant fine-tuning, we choose the following values: $y_1=0.61$, $y_2=0.35$ and $y_3=0.84$, with $f<3100$ GeV.
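As a numerical cross-check of this benchmark, the following sketch (assuming $m_{h_0}=125$ GeV and $v=246$ GeV, and using the tree-level relations quoted in Sections II and III) reproduces the quoted values $y_t\simeq 0.74$ and $m_{H_0}\approx 1010$ GeV, as well as the heavy-quark masses at $f=1000$ GeV:

```python
from math import atan, sin, sqrt

# Cross-check of the benchmark spectrum: tan(beta)=3, m_A0=1000 GeV,
# y1=0.61, y2=0.35, y3=0.84 and f=1000 GeV. Assumes m_h0=125 GeV, v=246 GeV.

v, mh0, mA0, tanb = 246.0, 125.0, 1000.0, 3.0
beta = atan(tanb)
s2b = sin(2*beta)                       # sin(2*beta) = 0.6 for tan(beta) = 3

# Invert the Higgs sector: lambda_0 and B_mu from (m_h0, m_A0, beta, v)
lam0 = (mh0**2/v**2)*(mh0**2 - mA0**2)/(mh0**2 - mA0**2*s2b**2)
Bmu = 0.5*(lam0*v**2 + mA0**2)*s2b

root = sqrt(Bmu**2/s2b**2 - 2*lam0*Bmu*v**2*s2b + lam0**2*v**4*s2b**2)
mh0_check = sqrt(Bmu/s2b - root)        # closes the loop: should give ~125 GeV
mH0 = sqrt(Bmu/s2b + root)              # the text quotes ~1010 GeV

# Heavy-quark masses at f = 1000 GeV in the y2 < y3 region
f, y1, y2, y3 = 1000.0, 0.61, 0.35, 0.84
v1 = v*sin(beta)
num = 9*v1**2*y1**2*y2**2*y3**2
mT = sqrt((y1**2 + y2**2)*f**2 + num/((y1**2 + y2**2)*(y2**2 - y3**2)))
mT5 = sqrt((y1**2 + y3**2)*f**2 - num/((y1**2 + y3**2)*(y2**2 - y3**2)))
mT6 = y1*f
mB = sqrt(y1**2 + y2**2)*f

# Top Yukawa coupling from the three BLHM Yukawas
yt = 3*y1*y2*y3/sqrt((y1**2 + y2**2)*(y1**2 + y3**2))
```

The recovered masses agree with the lower endpoints of the ranges quoted for the $y_2<y_3$ region to within the rounding of the inputs.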
In this scenario we carry out our analysis of the AWMDM of the top-quark; the $y_2>y_3$ scenario will be the subject of a future study~\cite{EA}. Finally, the gauge couplings $g_{A}$ and $g_{B}$, associated with the $SU(2)_{LA}$ and $SU(2)_{LB}$ gauge bosons, can be parametrized in a more phenomenological form in terms of a mixing angle $\theta_{g}$ and the $SU(2)_{L}$ gauge coupling: $\tan \theta_{g}=g_{A}/g_{B}$ and $g=g_{A} g_{B}/\sqrt{g^{2}_{A}+ g^{2}_{B}} $. For simplicity, we assume that $\tan \theta_{g}=1$, which implies that the gauge couplings $g_{A}$ and $g_{B}$ are equal. The $g_{A,B}$ values are generated using the restriction $g=0.6525$. \subsection{Feynman rules} In order to facilitate the phenomenological study of the weak dipole moments of the top-quark in the BLHM, we provide in Appendix A all the Feynman rules of the interaction vertices, obtained in the unitary gauge. These three-point vertices refer to the couplings between gauge bosons and fermions, and between scalars and fermions. The complete set of Feynman rules presented in this study was determined using perturbation theory and expanded up to $\mathcal{O}(\frac{1}{f^{2}})$. \subsection{ The top-quark at the ILC} Top-quark production in the process $e^{+} e^{-} \rightarrow Z^{*}/\gamma \rightarrow t\bar{t}$ at the International Linear Collider (ILC)~\cite{Behnke:2013xla,Baer:2013cma,Adolphsen:2013kya} is a powerful tool to determine indirectly the scale of new physics. Such a machine offers several advantages over a hadron collider such as the LHC, especially in performing SM precision measurements~\cite{Baer:2013cma}, since it provides an experimentally clean environment without hadronic activity in the initial state, and the collision energy is accurately known. The ILC is designed to operate in phase II at a center-of-mass energy of $\sqrt{s}=500$ GeV; at this energy, top-quark pairs are produced copiously, well above threshold~\cite{Cao:2015qta}.
Thus, in order to give numerical results for $a^{W}_{t}$, we adopt the collider parameters of this $e^{+} e^{-}$ linear collider, that is, $\sqrt{s}=\sqrt{q^{2}} =500$ GeV. We have therefore computed the contributions to the AWMDM $a^{W}_t$ of the on-shell top-quark with the $Z$ boson at the center-of-mass energy expected for the ILC. On the other hand, in this same scenario, we find that the WEDM $d^{W}_t $ does not receive contributions at one loop. In the SM, $d^{W}_t $ only receives contributions at three loops~\cite{Hollik:1998vz,Czarnecki:1996rx}. \subsection{Numerical results} To solve the integrals involved in the generic amplitudes, the Passarino-Veltman reduction scheme was implemented in the Mathematica packages FeynCalc~\cite{Mertig:1990an} and Package-X~\cite{Patel:2015tea}. The kinematic conditions were imposed in these packages, and the Gordon identity was used to eliminate the terms proportional to ($p_1 + p_2 $)$^{\mu}$. After this, the AWMDM of the top-quark is obtained through the relation $a^{W}_{t}= -2 m_t F_{M}(q^{2})$. In this study, we do not report the analytical expressions for $a^{W}_{t}$ because they are very lengthy; for this reason, we only report our numerical results. \begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FSiReal.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FSiImaginary.eps}} \caption{ \label{Si} Individual scalar contributions to $a^{W}_{t}$ in the BLHM for fixed values of $m_{A_{0}}=1000\, \text{GeV}$, $m_{\eta^{0}}=100\, \text{GeV}$ and $F=4000\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} From the 52 diagrams contributing to the $t\bar{t}Z$ vertex, we start by extracting the contributions to $a^{W}_{t}$ due to the different scalar and vector bosons. Figs.~(\ref{Si})~-~(\ref{Vi}) show $a^{W}_{t}$ as a function of the new physics scale $f$ in the interval $f=[1000,3000]$ GeV.
From Fig.~\ref{Si} we can appreciate the individual contributions of the scalars involved, and we observe that the main contributions to the real and imaginary parts of $a^{W}_{t}$ are generated by the Higgs bosons $h_0$ and $H_0$: $\text{Re}[ a^{W}_{t} (h_{0} ) ] =[1.51 \times 10^{-4}, 1.70 \times 10^{-5}]$ and $\text{Im} [ a^{W}_{t} (H_{0})] =[2.65,2.62 ]\times 10^{-6}$. In contrast, the smallest contributions are provided by the charged scalar $H^{\pm}$ and the neutral scalar $\eta^{0}$: $\text{Re}[ a^{W}_{t} (H^{\pm} ) ] = - [2.38, 2.33] \times 10^{-6}$ and $\text{Im}[ a^{W}_{t}(\eta^{0} ) ] = -[7.28 \times 10^{-6}, 8.09 \times 10^{-7}] $. The remaining scalar contributions are suppressed by one to three orders of magnitude compared to the absolute value of the largest or smallest contributions in their class. In Fig.~\ref{Vi}, all the contributions to the AWMDM of the top-quark coming from virtual vector particles are displayed. In this figure, the dominant contributions to $a^{W}_{t}$ are generated when the intermediate particles are the $Z$ and $W'$ gauge bosons, that is, $ \text{Re}[ a^{W}_{t} (Z) ] = [3.12,2.51] \times 10^{-5} $ and $ \text{Im}[ a^{W}_{t} (W' ) ] = [6.42,2.99]\times 10^{-8}$. The smallest contributions acquire negative values and occur for the $Z'$ boson: $ \text{Re}[ a^{W}_{t}(Z' ) ] = - [7.39, 6.22]\times 10^{-7} $ and $ \text{Im}[ a^{W}_{t} (Z' ) ] = - [1.38 \times 10^{-8},6.40 \times 10^{-9}] $. For the real part of $a^{W}_{t}$, the virtual bosons $W$, $\gamma$ and $W'$ contribute at the order of $10^{-5}$ to $10^{-8}$, while for the imaginary part, the numerical contributions of the vector bosons $\gamma$, $Z$ and $W$ vanish.
Note that the mediator particles $h_0$ and $Z$ are both SM particles, and they provide the largest positive contributions to $ \text{Re}[ a^{W}_{t}]$; on the other hand, for $ \text{Im}[ a^{W}_{t}]$ the new exotic particles $H_0$ and $W'$ are the ones that contribute significantly more than the others. In Appendix B, we present all BLHM contributions to the AWMDM in the $t\bar{t}Z$ vertex. \begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FViReal.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FViImaginary.eps}} \caption{ \label{Vi} Individual vector contributions to $a^{W}_{t}$ in the BLHM for a fixed value of $F=4000\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} The total contribution to the AWMDM of the top-quark, involving scalar bosons, vector bosons, and exotic quarks in the loop, is given in Fig.~\ref{FS}. In the range of analysis established for the $f$ scale, the total contribution to the AWMDM arises mainly from two sectors: scalar and vector. Each sector corresponds to the sum of all scalar and vector contributions, respectively. In this manner, in Fig.~\ref{FS} (a) we can appreciate the real contributions to $a^{W}_{t}$, and we find that the dominant part of the total contribution comes from the scalar sector, which contributes at the order of $10^{-4}$ to $10^{-5}$. On the other hand, the subdominant contribution is generated by the vectors: $\text{Re}[a^{W}_{t}(\text{vector})]\sim 10^{-5}$ for $f \in [1000,3000]$ GeV. Concerning Fig.~\ref{FS} (b), the relevant contribution to the imaginary part of the AWMDM of the top-quark arises from the scalar sector, which contributes at the order of $10^{-5}$ to $10^{-6}$. The vector contribution is of the order of $10^{-8}$. The values of the total contribution to $a^{W}_{t}$ are listed in Table~\ref{C1000}.
\begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FSVTReal.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FSVTImaginary.eps}} \caption{ \label{FS} Scalar, vector and total contributions to $a^{W}_{t}$ in the BLHM for fixed values of $m_{A_{0}}=1000\, \text{GeV}$, $m_{\eta^{0}}=100\, \text{GeV}$ and $F=4000\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} Since the masses of the states $A_{0}$ and $\eta^{0}$ are input parameters of the BLHM, in order to measure the sensitivity of the AWMDM of the top-quark we vary the masses of these states. For the pseudoscalar $A_{0}$ we have chosen the values $m_{A_{0}}=1000, 1500$ GeV, while for the neutral scalar $\eta^{0}$ we chose $m_{\eta^{0}}=100, 500$ GeV. In this way, we generate Figs. \ref{FmA} and \ref{Fmeta}, which show the behavior of the real and imaginary parts of $a^{W}_{t}$ when the energy scale $f$ varies from $1000$ GeV to $3000$ GeV, while the other free parameters of the model are kept fixed. With the two fixed values of $m_{A_{0}}$, identical plots are generated, as shown in Fig. \ref{FmA} (a); this indicates that Re[$a^{W}_{t}$] is insensitive to the chosen values of the mass of the pseudoscalar $A_0$. Fig. \ref{FmA} (b) shows plots with a very slight difference, since both contribute at the same order of magnitude. For the established values of $m_{\eta^{0}}$, in Fig. \ref{Fmeta} (a) we observe that $|\text{Re}\, [a^{W}_{t}]|$ takes larger values when the mass of the $\eta^{0}$ scalar is small; specifically, when $m_{\eta^{0}}=100$ GeV, $|\text{Re}\, [a^{W}_{t}]| \in [2.49 \times 10^{-4}, 5.01 \times 10^{-5}]$. The same pattern occurs in Fig. \ref{Fmeta} (b); in this case $|\text{Im}\, [a^{W}_{t}]|\in [1.26 \times 10^{-5}, 2.03 \times 10^{-6}]$ for $m_{\eta^{0}}=100$ GeV. This result is to be expected, since the contribution to the AWMDM of the top-quark decouples as $m_{\eta^{0}}$ increases.
For the different scenarios considered above, we provide the values of $a^{W}_{t}$ in Tables \ref{C1000}-\ref{C1500}. \begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FmAReal.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FmAImaginary.eps}} \caption{ \label{FmA} Total contribution to $a^{W}_{t}$ in the BLHM for different values of the mass of $A_0$ but fixed values of $m_{\eta^{0}}=100\, \text{GeV}$ and $F=4000\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} \begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FEtaReal.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FEtaImaginary.eps}} \caption{ \label{Fmeta} Total contribution to $a^{W}_{t}$ in the BLHM for different values of the mass of $\eta^{0}$ and fixed values of $m_{A_{0}}=1000\, \text{GeV}$ and $F=4000\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} Another input parameter of the BLHM is $\tan \beta$; the values it acquires are restricted to the intervals generated according to Eq. (\ref{cotabeta}). Therefore, if $m_{A_{0}}$ varies from 1000 GeV to 1500 GeV, $\tan \beta \in (1,10.45)$. For certain fixed values of $\tan \beta$, we obtain the behavior of $a^{W}_{t}$ as a function of the mass of the pseudoscalar $A_0$. In Fig. \ref{Ftan}, we observe that $|\text{Re}\, [a^{W}_{t}]|$ takes large values when $\tan \beta=3$, while $\tan \beta=6$ and $\tan \beta=10$ yield suppressed contributions to $a^{W}_{t}$. With respect to $|\text{Im}\, [a^{W}_{t}]|$, it also acquires its largest values when $\tan \beta=3$. According to Figs.~\ref{Ftan} (a)~and~\ref{Ftan} (b), it can be seen that for the different fixed values of $\tan \beta$, $|\text{Re}\, [a^{W}_{t}]|\sim 10^{-4}$ and $|\text{Im}\, [a^{W}_{t}]| \sim 10^{-5}$ are obtained.
\begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FTangenteRe.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FTangenteIm.eps}} \caption{ \label{Ftan} Total contribution to $a^{W}_{t}$ in the BLHM for different values of $\tan \beta$ and fixed values of $f=1000\, \text{GeV}$, $m_{\eta^{0}}=100\, \text{GeV}$ and $F=4000\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} The two symmetry-breaking energy scales are also free and important parameters of the BLHM, which we can vary. First, it can be seen from Fig. \ref{Ffi} (a) that $|\text{Re}\, [a^{W}_{t}]|$ is essentially independent of variations of the $F$ parameter, while in Fig. \ref{Ffi} (b), $|\text{Im}\, [a^{W}_{t}]|$ does depend slightly on $F$. These plots show a variation of the $F$ parameter from $3000$ GeV to $6000$ GeV, for three distinct energy scales, i.e. $f= 1, 2, 3$ TeV. In these figures, the main contributions to $|a^{W}_{t}|$ are generated for $f= 1$ TeV, while secondary contributions arise for $f= 2$ TeV. Second, Fig. \ref{FFi} visualizes the behavior of $|\text{Re}\, [a^{W}_{t}]|$ and $|\text{Im}\, [a^{W}_{t}]|$ as a function of the $f$ scale. In this case, the graphs do depend on the variations of the parameter $f$. We have adopted the following specific values for the parameter $F$, $F= 4, 5, 6$ TeV, so that the contribution to $|a^{W}_{t}|$ is enhanced. However, there is no significant difference in the behavior of these plots. According to Eq.~(\ref{fine-tuning}), the energy scale $f$ is also intimately related to the measure of the fine-tuning, and we observe that for $f=1$ TeV not only are large contributions to $|a^{W}_{t}|$ generated, but the absence of fine-tuning is also ensured.
\begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FfiReal.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FfiImaginary.eps}} \caption{ \label{Ffi} Total contribution to $a^{W}_{t}$ in the BLHM for different values of the energy scale $f$ with fixed values of $m_{A_{0}}=1000\, \text{GeV}$ and $m_{\eta^{0}}=100\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} \begin{figure}[H] \subfloat[]{\includegraphics[width=8.0cm]{FFi-Re.eps}} \subfloat[]{\includegraphics[width=8.0cm]{FFi-Im.eps}} \caption{ \label{FFi} Total contribution to $a^{W}_{t}$ in the BLHM for different values of the energy scale $F$ with fixed values of $m_{A_{0}}=1000\, \text{GeV}$ and $m_{\eta^{0}}=100\, \text{GeV}$. a) Re($a^{W}_{t}$). b) Im($a^{W}_{t}$).} \end{figure} We have calculated the one-loop contributions to the AWMDM of the top-quark in several scenarios. We found that the real and imaginary parts of $a^W_t$ lie in the range $10^{-4}$ to $10^{-12}$ when the parameters $f$, $F$ and $m_{A_{0}}$ are varied in the corresponding intervals: $f=[1000,3000]$ GeV, $F=[3000,6000]$ GeV and $m_{A_{0}}=[1000,1500]$ GeV. The scalar sector provides the largest numerical contributions to $|\text{Re}\, [a^{W}_{t}]|$ and $|\text{Im}\, [a^{W}_{t}]|$. In general, $|\text{Im}\, [a^{W}_{t}]|$ is suppressed compared to $|\text{Re}\, [a^{W}_{t}]|$. In addition, we have found that our results are sensitive to changes in the value of $m_{\eta^{0}}$. The contribution to the AWMDM of the top-quark decouples for large masses of the $ \eta^{0} $ scalar, as shown in Fig.~\ref{Fmeta}. Since our main goal in this work is to study the effect of the new particles generated in the BLHM framework, our results for $a^{W}_{t}$ depend on the energy scales $f$ and $F$, which represent the scales of the new physics. In this scenario, we have found that the numerical values obtained for the AWMDM of the top-quark are comparable with the predictions of the SM and of BSM models.
In the context of the SM, Bernabeu et al.~\cite{Bernabeu:1995gs} found that the numerical predictions for $a^{W}_{t}$ are on the order of $10^{-3}$. In Ref.~\cite{Bernreuther:2005gq} the one-loop QCD contribution to the AWMDM of the top-quark in the SM scenario has also been calculated, finding $a^{W}_{t}=5.2\times 10^{-3}$ for a renormalization scale $\mu=m_{t}=175$ GeV. Within the Minimal Supersymmetric Standard Model (MSSM), the contributions found for the AWMDM of the top-quark are of the order of $10^{-4}$~\cite{Bartl:1997iq}. In models with Two Higgs Doublets (2HDM), the effects induced by new scalars in the $t\bar{t}Z$ vertex loop were also investigated, obtaining $a^{W}_{t} \sim 10^{-3}$~\cite{Bernabeu:1995gs}. In other studies performed in extended models that predict the existence of a new $Z'$ gauge boson, $a^{W}_{t} \in [10^{-6},10^{-10}]$ is obtained~\cite{Vivian:2019zfa}. On the experimental side, the weak dipole moments of the top-quark have not yet been tested directly. There are promising projects at colliders such as the LHC~\cite{ATLAS:2016wgc,Rontsch:2015una}, the future ILC~\cite{Behnke:2013xla,Baer:2013cma,Adolphsen:2013kya}, the CLIC~\cite{deBlas:2018mhx,Robson:2018enq,Roloff:2018dqu,CLIC:2016zwp,Abramowicz:2016zbo} and the hadron-hadron Future Circular Collider (FCC-hh)~\cite{Barletta:2014vea,Koratzinos:2015fya} that include, as an important part of their physics program, the investigation and constraint of the dipole moments of the top-quark, in particular the AWMDM. Currently, at the LHC, the most promising avenue for studying the top-quark electroweak couplings is through the $pp\rightarrow \bar{t}tV$ ($V=Z,\gamma,H$) processes, which provide direct sensitivity without intrinsic dilution by QCD effects. At the FCC with proton-proton collisions, SM particles will also be produced in great abundance, and in this case the study of the AWMDM can be carried out through the $pp\rightarrow \bar{t}tZ$ process.
At a leptonic collider such as the ILC or CLIC, the focus of investigation will be the $e^{+}e^{-}\rightarrow Z^{*}/ \gamma \rightarrow \bar{t}t $ process, which is extremely sensitive to the top-quark electroweak couplings. From the phenomenological point of view, for instance, through the production cross-section of $t\bar{t}ZZ$, limits on $a^{W}_{t}$ are estimated at $95\ \%$ C.L. corresponding to $3000$ fb$^{-1}$ of integrated luminosity at the LHC: $-0.1 \leq a^{W}_{t} \leq 0.09$~\cite{Etesami:2017ufk}. In this same integrated-luminosity scenario, and through the $p^{Z}_{T}$ distribution in $\bar{t}tZ$ production, it is found that $ -0.08 \lesssim a^{W}_{t} \lesssim 0.08$~\cite{Rontsch:2015una}. The FCC-hh will provide collisions at a center-of-mass energy of 100 TeV, a factor of 7 higher than the LHC. At this energy and with 10 000 fb$^{-1}$ of data, about $10^{8}$ $\bar{t}tZ$ events will be produced. As a rough estimate, the FCC-hh will improve the limits on $a^{W}_{t}$ by a factor of 3 to 10 compared to the 3000 fb$^{-1}$ LHC~\cite{Rontsch:2015una,Barletta:2014vea,Koratzinos:2015fya}. Finally, at the ILC with $\sqrt{s}=500$ GeV and 500 fb$^{-1}$, limits of $-0.02 \lesssim a^{W}_{t} \lesssim 0.04$ at $95 \ \%$ C.L. are expected to be reached; they are derived by exploiting the total cross-section of top-quark pair production~\cite{Rontsch:2015una}. The ILC offers the possibility of extending the LHC top-quark program~\cite{Baer:2013cma} and is one of the most advanced proposals for an $e^{+}e^{-}$ collider. In the case of the ILC there is an improvement in the limits on $a^{W}_{t}$ by a factor of three (through $\bar{t}tZ$ production) or four (through $\bar{t}tZZ$ production) compared to the LHC constraints. For these reasons, we believe that our results can be verified at the ILC in the future, because it has the potential to reach the required level of sensitivity.
\section{Conclusions} In this paper, we present a new comprehensive study of the sensitivity limits on the AWMDM of the top-quark in the context of the BLHM at the one-loop level. In our study, we have taken into account all contributions from the scalar sector, gauge sector, and heavy quarks, for which we deduced the allowed ranges of the masses of the new quarks, scalars, and vector bosons (see Eqs.~(\ref{mT1})~-~(\ref{mH00})), as well as the corresponding Feynman rules of the BLHM expanded up to $\mathcal{O}(\frac{1}{f^{2}})$ (see Tables~\ref{FeyRul-A0}~-~\ref{FeyRul-current-4}). These results are an original contribution. The sensitivity of the AWMDM of the top-quark has been explored in the region of the parameter space allowed by the fine-tuning constraints, and measured by varying the main free parameters of the BLHM: $m_{A_{0}}$, $m_{\eta^{0}}$, $f$, $F$ and $\tan \beta$. Our results are summarized in Figs.~(\ref{massSVQ})~-~(\ref{FFi}), Tables~\ref{C1000}~-~\ref{C1500} (sensitivity limits on the AWMDM in the BLHM) and Tables~\ref{FeyRul-A0}~-~\ref{FeyRul-current-4} (Feynman rules for the BLHM). We find that with appropriate parameters of the BLHM it is possible to put limits on the AWMDM of the top-quark with a sensitivity of the order of $a^{W}_{t}= 2.49 \times 10^{-4}-1.26 \times 10^{-5}i$, $1.26 \times 10^{-4}-5.41\times 10^{-6}i$, where the main contribution comes from the scalars $h_0$ and $H_{0}$. The sensitivity limits on $a^W_{t}$ obtained in the context of the BLHM (see Tables \ref{C1000}~-~\ref{C1500}) are competitive with those reported in Refs.~\cite{Bernabeu:1995gs,Bernreuther:2005gq,Bartl:1997iq,Vivian:2019zfa}, and in some cases compare favorably. We should remark that our results for $a^W_{t}$ fall within the phenomenological bounds provided by colliders such as the LHC, FCC-hh and ILC~\cite{Etesami:2017ufk,Rontsch:2015una,Barletta:2014vea,Koratzinos:2015fya}.
At present, there are no precision experimental measurements of the AWMDM of the top-quark. However, future proposed experiments are expected to reach sensitivity to the values predicted for observables in the BLHM. In addition, since this topic is worthwhile yet underexplored, sustained theoretical, experimental and phenomenological interest is of great importance in order to motivate experimental collaborations to measure this very intriguing sector of the SM, which could give evidence of new physics BSM. \vspace{7cm} \begin{table}[H] \caption{Expected sensitivity limits on $a^W_t$ in the context of the BLHM with $\sqrt{q^{2}}=500\ \text{GeV}$, \ $\text{m}_{A_0}=1000\ \text{GeV}$, $\text{m}_{\eta_0}=100\ \text{GeV} $, $F=4000\ \text{GeV}$ and $f=1, 1.5, 2, 2.5, 3\ \text{TeV}$. All new contributions are considered: scalar bosons, vector bosons and heavy quarks. \label{C1000}} \centering \begin{tabular}{|c|c|} \hline \hline \multicolumn{2}{|c|}{$\sqrt{q^{2}}=500\ \text{GeV}$,\ $\text{m}_{A_0}=1000\ \text{GeV}$, $\text{m}_{\eta_0}=\bf{ 100}\ \text{GeV} $,\ $F=4000\ \text{GeV}$}\\ \hline $f\ [\text{TeV}]$ & $(a^{W}_{t})^{\rm Total} $ \\ \hline \hline $1.0$ & $2.49 \times 10^{-4} - 1.26 \times 10^{-5}\ i $ \\ \hline $1.5 $ & $ 1.26 \times 10^{-4} - 5.41 \times 10^{-6}\ i$ \\ \hline $2.0 $ & $ 8.26 \times 10^{-5} - 3.33 \times 10^{-6}\ i$ \\ \hline $2.5 $ & $ 6.18 \times 10^{-5} - 2.47\times 10^{-6}\ i$ \\ \hline $3.0 $ & $ 5.01 \times 10^{-5} - 2.03 \times 10^{-6}\ i$ \\ \hline \end{tabular} \end{table} \begin{table}[H] \caption{Expected sensitivity limits on $a^W_t$ in the context of the BLHM with $\sqrt{q^{2}}=500\ \text{GeV}$, \ $\text{m}_{A_0}=1000\ \text{GeV}$, $\text{m}_{\eta_0}=500\ \text{GeV}$, $F=4000\ \text{GeV}$ and $f=1, 1.5, 2, 2.5, 3\ \text{TeV}$. All new contributions are considered: scalar bosons, vector bosons and heavy quarks.
\label{eta100}} \centering \begin{tabular}{|c|c|} \hline \hline \multicolumn{2}{|c|}{$\sqrt{q^{2}}=500\ \text{GeV}$,\ $\text{m}_{A_0}=1000\ \text{GeV}$, $\text{m}_{\eta_0}=\bf{500}\ \text{GeV}$, $\ F=4000\ \text{GeV}$}\\ \hline $f\ [\text{TeV}]$ & $(a^{W}_{t})^{\rm Total} $ \\ \hline \hline $1.0$ & $ 1.31 \times 10^{-4} - 3.99 \times 10^{-6}\ i$ \\ \hline $1.5 $ & $ 7.40 \times 10^{-5} - 2.23 \times 10^{-6}\ i$ \\ \hline $2.0 $ & $ 5.32 \times 10^{-5} - 1.68 \times 10^{-6}\ i$ \\ \hline $2.5 $ & $ 4.29 \times 10^{-5} - 1.45 \times 10^{-6}\ i$ \\ \hline $3.0 $ & $ 3.69 \times 10^{-5} - 1.33 \times 10^{-6}\ i$ \\ \hline \end{tabular} \end{table} \begin{table}[H] \caption{Expected sensitivity limits on $a^W_t$ in the context of the BLHM with $\sqrt{q^{2}}=500\ \text{GeV}$, \ $\text{m}_{A_0}=1500\ \text{GeV}$, $\text{m}_{\eta_0}=100\ \text{GeV}$, $F=4000\ \text{GeV}$ and $f=1, 1.5, 2, 2.5, 3\ \text{TeV}$. All new contributions are considered: scalar bosons, vector bosons and heavy quarks. \label{C1500}} \centering \begin{tabular}{|c|c|} \hline \hline \multicolumn{2}{|c|}{$\sqrt{q^{2}}=500\ \text{GeV}$,\ $\text{m}_{A_0}=\bf{1500}\ \text{GeV}$, $\text{m}_{\eta_0}=100\ \text{GeV}$, $\ F=4000\ \text{GeV}$}\\ \hline $f\ [\text{TeV}]$ & $(a^{W}_{t})^{\rm Total} $ \\ \hline \hline $1.0$ & $ 2.38 \times 10^{-4} -1.17 \times 10^{-5}\ i$ \\ \hline $1.5 $ & $ 1.23 \times 10^{-4} - 4.53\times 10^{-6}\ i$ \\ \hline $2.0 $ & $ 8.17 \times 10^{-5} - 2.46\times 10^{-6}\ i$ \\ \hline $2.5 $ & $ 6.20\times 10^{-5} - 1.60 \times 10^{-6}\ i$ \\ \hline $3.0 $ & $ 5.09 \times 10^{-5} - 1.16 \times 10^{-6}\ i$ \\ \hline \end{tabular} \end{table} \vspace{3.5cm} \begin{center} {\bf Acknowledgements} \end{center} E. C. A. appreciates the post-doctoral stay. A. G. R. thanks SNI and PROFEXCE (M\'exico).
\vspace{3cm}
\subsection{Introduction} The theoretical prediction of high TMR in the Fe/MgO/Fe MTJ due to the so-called symmetry spin-filtering mechanism \cite{Butler01,Mathon01} and its quick experimental verification \cite{Parkin04,Yuasa04} revolutionized the hard disk drive (HDD) industry during the last decade. But despite a lot of theoretical and experimental attention to this MTJ, the dependence of the TMR on the number of MgO layers, $N$, arising from the native symmetry spin-filtering mechanism is still not fully understood and somewhat controversial \cite{Parkin04}. Theoretical calculations based on density functional theory (DFT) predict that in an ideal Fe/MgO/Fe junction the TMR should increase very fast with increasing $N$. More specifically, the TMR is predicted to change by as much as two orders of magnitude when $N$ changes from 4 to 12 \cite{Butler01,Belashchenko05}. In contrast, experimental measurements show that the TMR does not depend much on the thickness of MgO \cite{Parkin04,Yuasa04}. We explain the controversy between theoretical calculations and experimental results by the fact that the fast increase of the TMR predicted previously \cite{Butler01,Belashchenko05} is a consequence of the contribution to the transmission function from the interface resonance states (which exist in the minority Fe channel in a very narrow energy window near the Fermi energy, $E_F$ \cite{Butler01,Belashchenko05,Rungger09,Tiusan04,Feng09,Faleev12}), while the native symmetry spin-filtering effect, in general, leads to a modest linear increase of the TMR with $N$ in the asymptotic limit $N \to \infty$. Thus, the absence of a strong dependence of the measured TMR on $N$ could be explained by interface roughness that destroys the interface resonance states (IRS). In this paper we also describe features of the band structure of the \emph{bulk} electrode material that give a stronger, $\propto N^2$, dependence of the TMR on $N$ in the limit $N \to \infty$.
The analysis proposed below of the strength of the symmetry-filtering effect, based on the bulk band structure of a candidate electrode material, could serve as a tool for quick material-discovery searches for suitable electrodes in the context of emerging technologies that require high TMR. As an example of such a technology that critically depends on the discovery of novel MTJs with high TMR, we mention the spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology (which has the potential to become a `universal memory' \cite{Akerman05}), where the pool of candidate electrode materials includes several hundred Heusler alloys, magnetic multilayers, etc. For the Fe/MgO/Fe MTJ with sufficiently large MgO thickness, the transmission function for electrons with in-plane wave vector $\mathbf{k}$ and energy $E$ inside the MgO band gap is determined by the single surviving evanescent state inside the MgO barrier at this $\mathbf{k}$ and $E$, $\psi_e(\mathbf{k},E)$, that has the smallest attenuation constant, $\gamma(\mathbf{k},E)$. The transmission function in the limit $N \to \infty$ is given by \cite{Belashchenko04} \begin{equation} T_{\sigma \sigma'}(\mathbf{k},E)=t_{\sigma \mathbf{k} E} \times e^{-\gamma(\mathbf{k},E) N} \times t_{\sigma' \mathbf{k} E} \ , \label{eqTT} \end{equation} where the subindexes $\sigma$ and $\sigma'$ denote the spin channels of the left and right electrodes, correspondingly. (We use notations where $\sigma$ takes two values, $u$ and $d$ (short for ``up'' and ``down'') for the majority and minority spin channels, correspondingly. Thus, $T_{uu}$ and $T_{dd}$ are the majority-majority and minority-minority transmissions in the parallel configuration (PC) of the electrodes, and $T_{ud}$ and $T_{du}$ are the majority-minority and minority-majority transmissions in the antiparallel configuration (APC) of the electrodes.)
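A direct consequence of the factorized form of Eq. (\ref{eqTT}) is that, channel by channel in $\mathbf{k}$, the APC transmission is the geometric mean of the two PC transmissions, $T_{ud}(\mathbf{k},E)=[T_{uu}(\mathbf{k},E)\,T_{dd}(\mathbf{k},E)]^{1/2}$. The toy Python check below illustrates this; the surface transmission factors and attenuation constants are made-up values, since only the factorized structure matters here:

```python
import math
import random

random.seed(0)

# Hypothetical surface transmission factors t_u(k), t_d(k) and attenuation
# constants gamma(k) on a toy k-mesh, with barrier thickness N layers.
nk, N = 50, 10
t_u = [random.uniform(0.0, 1.0) for _ in range(nk)]
t_d = [random.uniform(0.0, 1.0) for _ in range(nk)]
gam = [random.uniform(0.5, 1.5) for _ in range(nk)]

# Factorized transmissions: T_ss'(k) = t_s * exp(-gamma*N) * t_s'.
T_uu = [tu * math.exp(-g * N) * tu for tu, g in zip(t_u, gam)]
T_dd = [td * math.exp(-g * N) * td for td, g in zip(t_d, gam)]
T_ud = [tu * math.exp(-g * N) * td for tu, td, g in zip(t_u, t_d, gam)]

# Per-channel identity T_ud(k) = sqrt(T_uu(k) * T_dd(k)), summed over k.
lhs = sum(T_ud)
rhs = sum(math.sqrt(a * b) for a, b in zip(T_uu, T_dd))
print(lhs, rhs)  # the two sums agree whenever Eq. (eqTT) holds exactly
```

Summing over $\mathbf{k}$ yields exactly the combination $T'_{ud}(E)$ used below, which is why $T'_{ud}(E)\to T_{ud}(E)$ once the single-evanescent-state regime is reached.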
The coefficient $t_{\sigma \mathbf{k} E}$ in (\ref{eqTT}) is the so-called surface transmission function (STF), defined for each electrode separately (in the case of different electrodes) by the solution of the scattering problem at the electrode-barrier interface \begin{equation} t_{\sigma \mathbf{k} E} = \sum_p |B_e/A_p|^2 \ . \label{eq2} \end{equation} Here the summation is taken over all eigenstates $p$ of the electrode with given $\sigma$, $\mathbf{k}$ and $E$, $A_p$ is the amplitude of the eigenstate $p$ incoming from the electrode, and $B_e$ is the corresponding amplitude of the scattering wavefunction inside the barrier taken at the reference plane, the plane located at a sufficient distance from the interface where the scattering wavefunctions for all $p$ are already indistinguishable from the surviving evanescent state $\psi_e(\mathbf{k},E)$. Strictly speaking, with this definition of $t_{\sigma \mathbf{k} E}$, $N$ in Eq. (\ref{eqTT}) is the number of MgO layers between the reference planes corresponding to the two electrode-barrier interfaces, but we will use the total number of MgO layers, $N$, in Eq. (\ref{eqTT}), assuming a proper re-definition of $t_{\sigma \mathbf{k} E}$. In general, for different electrodes, $t_{\sigma \mathbf{k} E}$ should also carry an electrode index (left or right), but for the Fe/MgO/Fe MTJ the two electrodes are the same, so the notation $t_{\sigma \mathbf{k} E}$ without reference to the left or right electrode is used in (\ref{eqTT}). The total transmission of the MTJ is given by the $\mathbf{k}$-integral over the 2D surface Brillouin zone (SBZ) \begin{equation} T_{\sigma \sigma'}(E)=\int \frac{d^2\mathbf{k}A}{(2\pi)^2} T_{\sigma \sigma'}(\mathbf{k},E) = \sum_{\mathbf{k}} T_{\sigma \sigma'}(\mathbf{k},E) \ , \label{eq1} \end{equation} where $A$ is the in-plane cross-sectional area of the device. We emphasize two important features of Eq.
(\ref{eqTT}) for the transmission function: (1) due to flux conservation, the same STF $t_{\sigma \mathbf{k} E}$ describes two different processes, transmission from the electrode to the barrier and transmission from the barrier to the electrode, and (2) the STFs of the two electrodes are independent of each other (the electrodes are decoupled). One nontrivial consequence of the decoupling of the two electrodes and of the transmission through a single channel inside the barrier, as described by Eq. (\ref{eqTT}), is that in the limit $N \to \infty$ the transmission in APC can be expressed in terms of the transmission functions for majority and minority electrons in PC, $\lim_{N \to \infty}T'_{ud}(E) = T_{ud}(E)$, where \begin{equation} T'_{ud}(E) = \sum_{\mathbf{k}} [T_{uu}(\mathbf{k},E) \times T_{dd}(\mathbf{k},E) ]^{1/2} \ . \label{TAP1} \end{equation} In order to determine at what barrier thickness the asymptotic expression (\ref{eqTT}) becomes valid in the case of the Fe/MgO/Fe MTJ, we performed calculations of the transmission functions $T_{ud}(E)$ and $T'_{ud}(E)$ for different $N$ using an \emph{ab-initio} tight-binding linear muffin-tin orbital (TB-LMTO) method in its atomic-spheres approximation (ASA) [\onlinecite{Schilfgaarde98,Turek97,Faleev05}]. Results of these calculations for $N=4$, 6, 8, 10, and 12 are shown in Fig.~\ref{fig1}(a). One can see that $T_{ud}(E)$ defined by Eq. (\ref{eq1}) and $T'_{ud}(E)$ defined by Eq. (\ref{TAP1}) are indeed very close to each other even for $N=4$. For larger $N$ the agreement between $T_{ud}(E)$ and $T'_{ud}(E)$ becomes better, and at $N=12$ $T_{ud}(E)$ and $T'_{ud}(E)$ are almost indistinguishable. We conclude that the asymptotic behaviour described by Eq. (\ref{eqTT}) is reached for the Fe/MgO/Fe MTJ already at $N=4$. \begin{figure}[t] \includegraphics*[trim={0.3cm 16.0cm 1.4cm 2.4cm},clip,width=8.5cm]{TT.dd.uu.eps} \caption{(color online).
(a) Comparison of two expressions for the APC transmission, $T_{ud}(E) = \sum_{\mathbf{k}} T_{ud}(\mathbf{k},E)$ (red lines), and $T'_{ud}(E)= \sum_{\mathbf{k}} [T_{uu}(\mathbf{k},E) \times T_{dd}(\mathbf{k},E)]^{1/2} $ (blue lines) for the Fe/MgO/Fe MTJ with the number of MgO layers $N=4$, 6, 8, 10, and 12. (b) Majority-majority transmission in PC, $T_{uu}(E)$, for the Fe/MgO/Fe MTJ with different $N$. } \label{fig1} \end{figure} The evanescent state of MgO with the smallest attenuation constant is the state $\psi_e(\mathbf{k},E)$ with $\mathbf{k}=0$ \cite{Butler01}. This state has $\Delta_1$ symmetry (the function $\psi_e(0,E)$ stays invariant with respect to the square-symmetry transformations of the $(x,y)$ coordinates). For small $\mathbf{k}$ the attenuation constant can be approximated as \begin{equation} \gamma(\mathbf{k},E) = \gamma(0,E) + \alpha k^2 \ . \label{att} \end{equation} It is well known \cite{Butler01} that Fe has a $\Delta_1$-symmetry \emph{majority} electron band along the $\Gamma-$H line in the 3D BZ (the $\Gamma-$H line in the 3D BZ corresponds to $\mathbf{k}=0$ in the 2D SBZ) with energies near $E_F$ (see Fig.~\ref{fig4}(b)). This $\Delta_1$-symmetry state can couple to the $\Delta_1$-symmetry evanescent state of MgO, so the majority STF $t_{u, \mathbf{k}=0, E}$ is not zero. On the other hand, Fe does not have \emph{minority} electron bands with $\Delta_1$ symmetry along the $\Gamma-$H line with energies near $E_F$ (see Fig.~\ref{fig4}(c)). Thus, the minority STF $t_{d, \mathbf{k}=0, E} = 0$ (for $\mathbf{k}=0$ the overlap integral at the Fe/MgO interface between Fe minority states with symmetries other than $\Delta_1$ and the evanescent state of MgO with $\Delta_1$ symmetry vanishes by symmetry). The $\Delta_1$-symmetry evanescent state $\psi_e(0,E)$ of MgO consists mostly of $s$-orbitals of Mg and $p_z$ orbitals of O.
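With the quadratic expansion (\ref{att}) and a surface transmission function that vanishes as $t\propto k^{2m}$ at small $\mathbf{k}$, the SBZ integral of Eq. (\ref{eq1}) is controlled by Gaussian moments $\int d^2k\, k^{2m}\, e^{-\alpha k^2 N}\propto N^{-(m+1)}$; this power counting is what turns the symmetry filtering into a power-law prefactor multiplying the common $e^{-\gamma(0,E) N}$ decay. A numerical sketch of the scaling (illustrative only, with $\alpha=1$ in arbitrary units):

```python
import math

# I_m(N) = ∫ d^2k k^(2m) exp(-k^2 N), evaluated as a radial Riemann sum
# (alpha = 1, arbitrary units); analytically I_m(N) = pi * m! / N^(m+1).
def moment_integral(m, N, kmax=10.0, n=20000):
    dk = kmax / n
    return sum(2 * math.pi * k * k**(2 * m) * math.exp(-k * k * N) * dk
               for k in (dk * (i + 0.5) for i in range(n)))

# Doubling N suppresses I_m by 2^(m+1), confirming I_m ∝ N^-(m+1).
for m in (0, 1, 2):
    ratio = moment_integral(m, 1.0) / moment_integral(m, 2.0)
    print(m, ratio)  # ~2, ~4, ~8
```

The case $m=0$ corresponds to a channel that stays open at $\mathbf{k}=0$, while $m\ge 1$ corresponds to a channel that is closed at $\mathbf{k}=0$ by symmetry, as for the minority STF discussed below.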
In the spirit of perturbation theory, for small but non-zero $\mathbf{k}$ the eigenstate function $\psi_e(\mathbf{k},E)$ can be represented as $\psi_e(0,E)$ plus small terms proportional to $\mathbf{k}$ (we consider here changes in the orbital composition of the eigenstate function inside the unit cell and factor out the global unit-cell-to-unit-cell translation factor $\exp{(\mathbf{kr})}$). In the linear-in-$\mathbf{k}$ approximation the $p_z$ orbital of O that is aligned along the \emph{z}-axis will rotate by a small angle proportional to $|k|$, so $\psi_e(\mathbf{k},E)$ will include $p_x k_x$ and $p_y k_y$ terms, where $p_x$ and $p_y$ are the \emph{p}-orbitals of the O atom. At the Fe/MgO interface the $p_x$ and $p_y$ orbitals of O have a non-zero overlap integral with the two minority Fe $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ eigenstates composed primarily of $d_{xz}$ and $d_{yz}$ orbitals of Fe atoms (the two $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ Fe minority bands are the extensions to non-zero $\mathbf{k}$ of the two degenerate $\Delta_5$-symmetry bands $\psi^{\Delta_5}_d(0,k_z)$ that exist at energies near $E_F$ along the $\Gamma-$H line ($\mathbf{k}=0$ in the SBZ), see Fig. 4(c)). Analogously, these two minority Fe eigenstates $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ for small but non-zero $\mathbf{k}$ can be represented as $\psi^{\Delta_5}_d(0,k_z)$ plus small terms proportional to $\mathbf{k}$. In the linear-in-$\mathbf{k}$ approximation the $d_{xz}$ and $d_{yz}$ orbitals of the two $\psi^{\Delta_5}_d(0,k_z)$ eigenfunctions will rotate by small angles in order to accommodate the new propagation direction, $(k_x,k_y,k_z)$ instead of $(0,0,k_z)$, of the $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ eigenfunctions (here $k_z$ is the wave-vector that corresponds to the energy $E$ of the $\Delta_5$-symmetry band at $\mathbf{k}=0$). 
(Note that the rotation of the $d_{xz}$ and $d_{yz}$ orbitals of the two $\psi^{\Delta_5}_d$ eigenstates in accordance with the propagation direction is consistent with the fact that wave functions $\psi^{\Delta_5}_d$ propagating along a direction rotated by $90^\circ$, e.g. along the $x$-axis, are composed of $d_{xz}$ and $d_{xy}$ orbitals rotated by $90^\circ$.) It is straightforward to show that a $d_{xz}$ orbital rotated around the $y$-axis by a small angle $\theta=k_x/k_z$ will have a $d_{zz}$ component with weight proportional to $k_x$. Similarly, a $d_{yz}$ orbital rotated around the $x$-axis by a small angle $\theta=k_y/k_z$ will have a $d_{zz}$ component with weight proportional to $k_y$. This $d_{zz}$ component is invariant with respect to rotation around the \emph{z}-axis and, at the Fe/MgO interface, it has a non-zero overlap integral with the $s$-orbitals of Mg and $p_z$-orbitals of O of the $\psi_e(\mathbf{k},E)$ MgO eigenstate. Both contributions described above to the overlap integral at the Fe/MgO interface between either of the two $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ eigenstates and the $\psi_e(\mathbf{k},E)$ eigenstate are proportional to $\mathbf{k}$ for small $\mathbf{k}$. Thus, the scattering amplitude $B_e$ inside the barrier originating from the incoming $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ is proportional to $\mathbf{k}$. Since the expression for $t_{\sigma,\mathbf{k}, E}$ (\ref{eq2}) includes the square of the scattering amplitude, $|B_e|^2$, the STF $t_{d,\mathbf{k}, E}$ is proportional to $k^2$ for small $\mathbf{k}$. Using Eq. 
(\ref{eqTT}) with the above estimates for the majority and minority STFs, $t_{u,\mathbf{k}, E} \propto 1$ and $t_{d,\mathbf{k}, E} \propto k^2$, we can write expressions for the $\mathbf{k}$-resolved transmission functions in the limit of small $\mathbf{k}$: \begin{eqnarray} T_{uu}(\mathbf{k},E) &=& A_{uu} e^{-(\gamma(0,E) + \alpha k^2)N} \ \label{T.1} \\ T_{ud}(\mathbf{k},E) &=& A_{uu}f_{ud}(\mathbf{k}/|k|) k^2 e^{-(\gamma(0,E) + \alpha k^2)N} \ \label{T.2} \\ T_{dd}(\mathbf{k},E) &=& A_{uu}f^2_{ud}(\mathbf{k}/|k|) k^4 e^{-(\gamma(0,E) + \alpha k^2)N} \label{T.3} \ \end{eqnarray} Here $A_{uu}$ is a constant (for fixed energy), and $f_{ud}(\mathbf{k}/|k|)$ is, in general, a function of the $\mathbf{k}$-direction, $\mathbf{k}/|k|$. Note that in Eq. (\ref{T.3}) the square of the function $f_{ud}(\mathbf{k}/|k|)$ is used, as prescribed by Eq. (\ref{eqTT}), which demands the equality $T_{uu}(\mathbf{k},E) \times T_{dd}(\mathbf{k},E) = T^2_{ud}(\mathbf{k},E)$. The large-$N$ asymptotic behavior of the transmission functions for energies near $E_F$ can now be easily obtained by performing the $\mathbf{k}$ integration in Eq. (\ref{eq1}): \begin{eqnarray} T_{uu}(E) &\propto & \int d^2\mathbf{k} e^{-(\gamma(0,E) + \alpha k^2)N} \propto \frac{ e^{-\gamma(0,E)N}}{N} \ \label{T3.1} \\ T_{ud}(E) &\propto & \int d^2\mathbf{k} k^2 e^{-(\gamma(0,E) + \alpha k^2)N} \propto \frac{ e^{-\gamma(0,E)N}}{N^2} \ \label{T3.2} \\ T_{dd}(E) &\propto & \int d^2\mathbf{k} k^4 e^{-(\gamma(0,E) + \alpha k^2)N} \propto \frac{ e^{-\gamma(0,E)N}}{N^3} \label{T3.3} \ \end{eqnarray} From Eqs. (\ref{T3.1}-\ref{T3.3}) we can finally derive the large-$N$ asymptotic behaviour of the TMR for energies near $E_F$: \begin{eqnarray} TMR = \frac{T_{uu} + T_{dd} - 2 T_{ud}}{2 T_{ud}} = C N + D + O(1/N) \ . \label{TMR} \end{eqnarray} Eq. (\ref{TMR}) gives the \emph{native} asymptotic behavior of the TMR enhanced due to the symmetry spin filtering effect (in other words, the strength of the effect) in the Fe/MgO/Fe MTJ. 
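The $1/N$, $1/N^2$, $1/N^3$ prefactors in Eqs. (\ref{T3.1}-\ref{T3.3}) follow from the Gaussian $\mathbf{k}$-integral $\int d^2\mathbf{k}\, k^{2m} e^{-\alpha k^2 N} \propto N^{-(m+1)}$ for $m=0,1,2$. A numerical sketch of this scaling (the value of $\alpha$ is arbitrary; the scaling does not depend on it):

```python
import numpy as np

# Verify int d^2k k^(2m) exp(-alpha*k^2*N) ~ 1/N^(m+1), the scaling
# behind Eqs. (T3.1)-(T3.3); alpha is an illustrative value.
alpha = 2.0

def kint(m, N, kmax=5.0, npts=200001):
    k = np.linspace(0.0, kmax, npts)
    # d^2k = 2*pi*k dk for an isotropic integrand; trapezoidal rule
    f = 2.0 * np.pi * k**(2 * m + 1) * np.exp(-alpha * N * k**2)
    return float(np.dot((f[1:] + f[:-1]) / 2.0, np.diff(k)))

# Doubling N from 10 to 20 must divide the integral by 2^(m+1)
ratios = [kint(m, 10) / kint(m, 20) for m in (0, 1, 2)]
print(ratios)  # close to [2.0, 4.0, 8.0]
```

The extra factor of $1/N$ per power of $k^2$ is exactly what turns the common exponential decay of $T_{uu}$ and $T_{ud}$ into the linear growth of the TMR in Eq. (\ref{TMR}).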
The $\mathbf{k}$-resolved transmission $T_{ud}(\mathbf{k},E)$ is not equal to zero at $\mathbf{k}=0$ if contributions of the higher-order channels inside the MgO barrier with attenuation constants $\gamma'(\mathbf{k},E) > \gamma(\mathbf{k},E)$ (and symmetry other than the $\Delta_1$ symmetry at $\mathbf{k}=0$) are taken into account. Such contributions to $T_{ud}(\mathbf{k},E)$ at $\mathbf{k}=0$ were used to explain the nature of the symmetry spin filtering effect in the existing literature \cite{Butler01}. We note, however, that the contribution to the $\mathbf{k}$-integrated transmission $T_{ud}(E)$ from such terms is proportional to $\exp{(-\gamma'(0,E)N)}$ and negligible compared to Eq. (\ref{T3.2}) in the limit of large $N$. The authors of the recent work \cite{Heiliger08} correctly found that the $T_{uu}(\mathbf{k},E)$ and $T_{ud}(\mathbf{k},E)$ transmissions in the Fe/MgO/Fe MTJ have the same decay rate for sufficiently thick MgO barriers and obtained the correct linear increase of the TMR with $N$ for $15<N<30$ at the Fermi energy (the effect of the IRS was not discussed in that paper). However, they did not consider the effect of the difference of the majority and minority STFs at $|k|\sim 0$ on the $\mathbf{k}$-integrated transmissions and concluded, in disagreement with their own numerical results, that in the limit $N \to \infty$ the TMR will eventually saturate to a constant. Linear-in-$N$ behavior of the TMR for $10<N<20$ in the Fe/MgO/Fe MTJ has also been obtained within a tight-binding framework in \cite{Mathon01} and later reproduced in \cite{Mathon06}, where the effect of chemical disorder at the interface was studied. The asymptotic behavior of $TMR(E)$ described by Eq. (\ref{TMR}) exists in the energy window from $E_F-0.4$ eV to $E_F+1.4$ eV. For $E > E_F+1.4$ eV the $\Delta_1$-symmetry band appears in the minority channel at the $\Gamma-$H line (see Fig. 4(c)), so $TMR(E)$ will decrease to $TMR\propto 1$ at $E > E_F+1.4$ eV. 
For $E < E_F-1.0$ eV the $\Delta_1$-symmetry band disappears in the majority channel (see Fig. 4(b)), so $TMR(E)$ again will decrease to $TMR(E)\propto 1$ at $E < E_F-1.0$ eV. For $E < E_F-0.4$ eV the $\Delta_5$-symmetry band disappears in the minority channel (see Fig. 4(c)). Thus, in the range of energies $E_F - 1.0$ eV $ < E < E_F-0.4$ eV the $\Delta_1$-symmetry band still exists in the majority channel and only the $\Delta_2$-symmetry band exists in the minority channel along the $\Gamma-$H line (see Fig. 4(b,c)). In the spirit of perturbation theory, the minority Fe eigenstate $\psi^{\Delta_2}_d(\mathbf{k},k_z)$ for small but non-zero $\mathbf{k}$ can be represented as $\psi^{\Delta_2}_d(0,k_z)$ plus small terms proportional to $\mathbf{k}$ (the $\psi^{\Delta_2}_d(\mathbf{k},k_z)$ Fe minority band is the extension to non-zero $\mathbf{k}$ of the $\Delta_2$-symmetry band $\psi^{\Delta_2}_d(0,k_z)$ shown in Fig. \ref{fig4}(c) along the $\Gamma-$H line). The eigenstate $\psi^{\Delta_2}_d(0,k_z)$ is composed primarily of the $d_{x^2-y^2}$ orbitals of Fe. In the linear-in-$\mathbf{k}$ approximation the $d_{x^2-y^2}$ orbital will rotate by a small angle in order to accommodate the new propagation direction $(k_x,k_y,k_z)$ instead of $(0,0,k_z)$ (here $k_z$ is the wave-vector that corresponds to energy $E$ for the $\Delta_2$-symmetry band at $\mathbf{k}=0$). It is straightforward to show that the $d_{x^2-y^2}$ orbital rotated around the $x$-axis or $y$-axis by a small angle proportional to $|k|$ will, in the linear-in-$|k|$ approximation, have $d_{xz}$ and $d_{yz}$ components, but not a $d_{zz}$ component. The $d_{zz}$ component appears only in second order in $|k|$. Schematically, the expansion in orders of $|k|$ of the orbital composition of the $\psi^{\Delta_2}_d(\mathbf{k},k_z)$ Fe eigenstate and the $\psi_e(\mathbf{k},E)$ MgO eigenstate can be presented as: \begin{eqnarray} \psi^{\Delta_2}_d(\mathbf{k},k_z) &\sim & d_{x^2-y^2} + |k|(d_{xz} + d_{yz}) + |k|^2(d_{zz}+..) 
\nonumber \\ \psi_e(\mathbf{k},E) &\sim & (s+p_z) + |k|(p_x+p_y) + O(|k|^2) \ . \label{schem} \end{eqnarray} One can see that the overlap integral at the Fe/MgO interface between the $\psi^{\Delta_2}_d(\mathbf{k},k_z)$ and $\psi_e(\mathbf{k},E)$ eigenfunctions is non-zero only in the second order in $|k|$. Thus, the scattering amplitude $B_e$ inside the barrier originating from the incoming $\psi^{\Delta_2}_d(\mathbf{k},k_z)$ eigenstate is proportional to $|k|^2$, and the STF $t_{d,\mathbf{k}, E}$ is proportional to $k^4$ for small $\mathbf{k}$ in the energy window $E_F - 1.0$ eV $ < E < E_F-0.4$ eV. In this energy window $T_{uu}(\mathbf{k},E)$ is still given by Eq. (\ref{T.1}), while $T_{ud}(\mathbf{k},E)$ and $T_{dd}(\mathbf{k},E)$ will contain higher orders of $|k|^2$ at small $|k|$: \begin{eqnarray} T_{ud}(\mathbf{k},E) &=& A_{uu}f_{ud}(\mathbf{k}/|k|) k^4 e^{-(\gamma(0,E) + \alpha k^2)N} \ \label{T.4} \\ T_{dd}(\mathbf{k},E) &=& A_{uu}f^2_{ud}(\mathbf{k}/|k|) k^8 e^{-(\gamma(0,E) + \alpha k^2)N} \label{T.5} \ \end{eqnarray} Performing the $\mathbf{k}$ integration (\ref{eq1}) we can obtain the asymptotic expression for $TMR(E)$ at energies $E_F - 1.0$ eV $ < E < E_F-0.4$ eV: \begin{eqnarray} TMR(E) \propto N^2 \ , \label{TMR2} \end{eqnarray} which, at practical MgO thicknesses, $N \sim 10$, gives an additional order-of-magnitude enhancement of the TMR compared to the $TMR\propto N$ case described by Eq. (\ref{TMR}). In order to confirm the theoretical formulas derived above we performed \emph{ab initio} calculations of the transmission functions for the Fe/MgO/Fe MTJ with $N=4,6,8,10,$ and $12$ by using the TB-LMTO-ASA Green's function approach [\onlinecite{Schilfgaarde98,Turek97,Faleev05}]. We used relaxed nuclear coordinates of the Fe/MgO interface from Ref. [\onlinecite{Worthmann04}]. In Fig. 2(a) we show the attenuation constant $\gamma(0,E)$ estimated from Eq. 
(\ref{T3.1}) using $T_{uu}(E)$ for $N=10$ and $12$: \begin{eqnarray} \gamma(0,E) = \frac{1}{2} \ln{\left( \frac{10T_{uu}(E,N=10)}{12T_{uu}(E,N=12)} \right)} \label{gamma} \end{eqnarray} In order to verify the convergence of the calculated $\gamma(0,E)$ with respect to $N$ we plotted the product $T_{uu}(E)N \exp{[\gamma(0,E)N]}$ for $N=6,8,10$ and $12$ in Fig. 2(b). As can be seen, the curves for $N=8,10$ and $12$ are indistinguishable on the figure, confirming both the validity of the asymptotic formula (\ref{T3.1}) and the convergence of the calculated $\gamma(0,E)$ with respect to $N$ for a broad range of energies. The decline of $\gamma(0,E)$ at $E = E_F - 0.85$ eV can be explained by approaching the edge of the $\Delta_1$-symmetry majority band that occurs at an energy slightly below $E = E_F - 0.85$ eV (see Fig. 4(b)). \begin{figure}[h] \includegraphics*[trim={0.4cm 8.4cm 0.5cm 3.0cm},clip,width=8.5cm]{Fig.Tuu.N.exp.gamma.eps} \caption{(color online). (a) Attenuation constant $\gamma(0,E)$ estimated from Eq. (\ref{gamma}). (b) $T_{uu}(E)N \exp{[\gamma(0,E)N]}$ calculated for $N=6,8,10$ and $12$. Curves with $N=8,10$ and $12$ are indistinguishable on the figure. } \label{fig2} \end{figure} The $\mathbf{k}$-resolved transmission functions $T_{uu}(\mathbf{k},E)$, $T_{ud}(\mathbf{k},E)$, and $T_{dd}(\mathbf{k},E)$ calculated for the Fe/MgO/Fe MTJ with $N=10$ for 6 energy points $E-E_F=-0.8, -0.4, 0, 0.05, 0.4$ and $0.8$ eV are presented in the 6 panels of Fig. 3 as functions of the absolute value of the wave-vector $|k|$ (shown in units of $2\pi/a$, where $a$ is the lattice constant of Fe). A mesh of $128\times 128$ divisions of the full SBZ was used, resulting in 2145 $\mathbf{k}$-points in the irreducible wedge of the SBZ (ISBZ). (These 2145 $\mathbf{k}$ points of the ISBZ were used for plotting Fig. 3.) For each transmission function the corresponding theoretical curve (shown by a red dashed line) that describes the small-$|k|$ behavior of the transmission is also plotted. 
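Equation (\ref{gamma}) is just the inversion of the asymptotic form $T_{uu}(E,N) = A\,e^{-\gamma(0,E)N}/N$ of Eq. (\ref{T3.1}) evaluated at two thicknesses; the $N$-prefactors in the logarithm remove the $1/N$ factor. A sketch with made-up values of $A$ and $\gamma$ (not the calculated Fe/MgO data):

```python
import math

# If T_uu follows the asymptotic form of Eq. (T3.1), T_uu = A*exp(-g*N)/N,
# then Eq. (gamma) recovers g exactly from the N=10 and N=12 values.
# A and g_true are illustrative numbers, not fitted Fe/MgO values.
A, g_true = 3.7, 1.25

def T_uu(N):
    return A * math.exp(-g_true * N) / N

g_est = 0.5 * math.log(10 * T_uu(10) / (12 * T_uu(12)))
print(g_est)  # recovers g_true
```

Since the prefactor $A$ cancels in the ratio, the estimate is insensitive to it; residual deviations for the real junction measure how far $T_{uu}(E,N)$ still is from the asymptotic regime.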
Theoretical curves for the $T_{uu}(\mathbf{k})$ transmission were fitted according to Eq. (\ref{T.1}) using $\gamma(0,E)$ shown in Fig. 2(a) and two fitting constants: $A_{uu}$ and $\alpha$. Theoretical curves for the $T_{ud}(\mathbf{k})$ transmission were fitted according to Eq. (\ref{T.4}) for $E-E_F=-0.8$ eV and according to Eq. (\ref{T.2}) for the other energy points with one additional fitting constant $f_{ud}$ that corresponds to the maximum value of the function $f_{ud}(\mathbf{k}/|k|)$, $f_{ud}=\max_{\mathbf{k}}f_{ud}(\mathbf{k}/|k|)$. Theoretical curves for the $T_{dd}(\mathbf{k})$ transmission were plotted according to Eq. (\ref{T.5}) for $E-E_F=-0.8$ eV and according to Eq. (\ref{T.3}) for the other energy points \emph{without any additional fitting constants}. \begin{figure}[h] \includegraphics*[trim={0.5cm 9.6cm 0.2cm 1.2cm},clip,width=8.5cm]{Fig.jzk1.eps} \includegraphics*[trim={0.5cm 9.6cm 0.2cm 1.2cm},clip,width=8.5cm]{Fig.jzk2.eps} \includegraphics*[trim={0.5cm 9.6cm 0.2cm 1.2cm},clip,width=8.5cm]{Fig.jzk3.eps} \caption{(color online). Transmission functions $T_{uu}(\mathbf{k},E)$ (green dots), $T_{ud}(\mathbf{k},E)$ (cyan dots), and $T_{dd}(\mathbf{k},E)$ (blue dots) shown for the Fe/MgO/Fe MTJ with $N=10$ for 6 energies $E-E_F=-0.8, -0.4, 0, 0.05, 0.4$ and $0.8$ eV as functions of the absolute value of the wave-vector $|k|$ (shown in units of $2\pi/a$). Red dashed lines are theoretical curves that describe the behaviour of the transmission functions at small $|k|$. Theoretical curves were plotted using Eqs. (\ref{T.1},\ref{T.4}-\ref{T.5}) for $E-E_F=-0.8$ eV and Eqs. (\ref{T.1}-\ref{T.3}) for all other energy points (see text for details). } \label{fig3} \end{figure} One can see that the theoretical curves describe the small-$|k|$ behavior of all transmission functions rather well in a broad range of energies, including the $E-E_F=-0.8$ eV energy where the behavior of $T_{ud}(\mathbf{k})$ and $T_{dd}(\mathbf{k})$ changes from that described by Eqs. 
(\ref{T.2}-\ref{T.3}) to that described by Eqs. (\ref{T.4}-\ref{T.5}). We stress that the behavior of the $T_{dd}(\mathbf{k})$ transmission is very well described by the corresponding theoretical curve that was plotted without any additional fitting -- by using only the constants derived from fitting the $T_{uu}(\mathbf{k})$ and $T_{ud}(\mathbf{k})$ functions (which provides yet another confirmation of the validity of Eq. (\ref{eqTT})). For all six energy points the theoretical curves correctly predict the small-$|k|$ behavior of the $T_{uu}(\mathbf{k})$ function up to $|k|\sim 0.2$, where $T_{uu}(\mathbf{k})$ is reduced by many orders of magnitude from its maximum. Theoretical curves for the $T_{ud}(\mathbf{k})$ and $T_{dd}(\mathbf{k})$ functions start to deviate from the actual transmissions at $|k|\sim 0.1$, where the small-$|k|$ approximation becomes invalid. The theoretical curves correctly describe the local maximum of $T_{ud}(\mathbf{k})$ for all considered energy points except the $E-E_F=-0.4$ eV energy, which is a transitional point where the $\Delta_5$ minority band disappears (see Fig. 4(c)). Due to the corresponding Van Hove singularity in the density of states (DOS) at this energy, the maximum of $T_{ud}(\mathbf{k})$ is the largest for $E-E_F=-0.4$ eV as compared to the maxima of $T_{ud}(\mathbf{k})$ for the other five energy points (which leads to the smallest TMR at $N=10$ compared to the other energy points, see Fig. 4(a) and Fig. 5). The global maximum of the function $T_{ud}(\mathbf{k})$ does not coincide with the local maximum described by the theoretical curves also for two other energy points: $E-E_F=-0.8$ eV and $E=E_F$. For $E-E_F=-0.8$ eV the small-$|k|$ region is strongly suppressed by the $|k|^4$ factor (see Eq. \ref{T.4}), so $T_{ud}(\mathbf{k})$ near the $M$ point (the $M$ point in Fig. 3 corresponds to the largest $|k|=1/\sqrt{2}$) is larger compared to $T_{ud}(\mathbf{k})$ near $|k|=0$. 
At sufficiently large $N$ the contribution from the $|k|=0$ region will eventually become dominant, but this asymptotic regime has not yet been reached at $N=10$ for $E-E_F=-0.8$ eV. The global maximum of $T_{ud}(\mathbf{k})$ at the $E=E_F$ energy point, reached at $|k|\sim 0.15$, is not described by Eq. (\ref{T.2}) and corresponds to the interface resonance states existing in a narrow energy window near $E_F$ \cite{Butler01,Belashchenko05,Rungger09,Tiusan04,Feng09,Faleev12}. The IRS are very sensitive to small changes of the energy and, as can be seen in Fig. 3, the peak in $T_{ud}(\mathbf{k})$ associated with the IRS disappears already at $E-E_F=0.05$ eV. The IRS contribution to the APC transmission can be seen as a narrow peak in Fig. 1(a) with maximum at $E-E_F=-0.009$ eV and width $\sim 0.02$ eV (at $N = 10$), and also as a narrow dip in the TMR in Fig. 4(a). We note that the energy position of the IRS states is very sensitive to the details of the Fe/MgO interface and depends on the DFT functional used for relaxation of the interface structure \cite{Feng09}. In addition, recent beyond-DFT QSGW calculations show that the IRS peak in the DOS is shifted from $E=E_F$ (as predicted by DFT) to $E=E_F+0.12$ eV \cite{Faleev12}, which is in agreement with experimental measurements \cite{Zermatten08}. \begin{figure}[h] \includegraphics*[trim={0.6cm 1.8cm 3.6cm 2.0cm},clip,width=4.6cm]{Fig.ratio.withN.eps} \includegraphics*[trim={0.6cm 0.4cm 3.8cm 0.4cm},clip,width=4.0cm]{fe.bands.eps} \caption{(color online). (a) $T_{uu}(E)/(N \times T_{ud}(E))$ shown as a function of the energy for $N=4,6,8,10$ and $12$. (b) Majority and (c) minority Fe bands plotted along the $\Gamma-$H symmetry line. } \label{fig4} \end{figure} Figure 4(a) shows $T_{uu}(E)/(N \times T_{ud}(E))$ as a function of the energy for $N=4,6,8,10$ and $12$. 
It is seen that for all considered $N$ the functions $T_{uu}(E)/(N \times T_{ud}(E))$ are very close to each other in the energy range $E> E_F+0.4$ eV, thus confirming that the linear-in-$N$ asymptotic behavior of the TMR (\ref{TMR}) resulting from the \emph{native} symmetry filtering effect is established in the Fe/MgO/Fe MTJ already at $N=4$ for $E> E_F+0.4$ eV. The established linear asymptotic behavior is also seen in Fig. 5(b), where the TMR is plotted as a function of $N$ for several energy points with $E \geqslant E_F+0.4$ eV. For energies between, approximately, $E_F-0.2$ eV and $E_F+0.2$ eV the asymptotic behavior is reached at larger $N$ due to two factors: (1) the contribution of the IRS to $T_{ud}(E)$, and (2) the generally smaller STF of minority electrons, $t_{d \mathbf{k} E}$, at $|k| \sim 0$ in this energy window (as compared, for example, to $t_{d \mathbf{k} E}$ at $E> E_F + 0.4$ eV), which leads to an increased relative contribution to $T_{ud}(E)$ from parts of the SBZ other than $|k|\sim 0$ (although the contribution of the $|k|\sim 0$ region to $T_{ud}(E)$ still increases with increasing $N$). As seen in Fig. 4(a) and also in Fig. 1(a), the width of the IRS peak reduces with increasing $N$ due to the decay of the IRS states with $|k| > 0$ inside the barrier with attenuation constant larger than $\gamma(0,E)$. As a result, the curves with $N=10$ and $12$ shown in Fig. 4(a) are very close to each other for the whole range $E>E_F-0.4$ eV, except for a small region of width $\sim 0.02$ eV near $E_F$ where the contribution of some IRS states (states with $|k| \sim 0$) to $T_{ud}(E)$ still survives. The fact that the minority STF $t_{d \mathbf{k} E}$ at $|k| \sim 0$ for $E$ between $E_F-0.2$ eV and $E_F+0.2$ eV is smaller compared to that outside of this energy window can also be seen by comparing the $\mathbf{k}$-resolved transmission $T_{ud}(\mathbf{k})$ in Fig. 3 for $E-E_F=0$ and $0.05$ eV with that for $E-E_F=-0.4, 0.4$ and $0.8$ eV. 
(Note that the majority STF $t_{u \mathbf{k} E}$ does not change much in the broad energy range $E>E_F-0.4$ eV, as can be concluded from comparing $T_{uu}(\mathbf{k})$ on the panels corresponding to different energy points in Fig. 3 and from the smooth behavior of $T_{uu}(E)$ shown in Fig. 1(b) and Fig. 2(b).) The small STF of minority electrons for energies between $E_F-0.2$ eV and $E_F+0.2$ eV results in larger values of the TMR for $N \geqslant 6$ in this energy window (see Fig. 4(a) and Fig. 5) as compared to the TMR outside of this window but within the broader window $E>E_F-0.4$ eV where the $\Delta_5$ minority Fe state still exists. Finally, in combination with the symmetry filtering effect, the reduced STF of minority electrons at the Fe/MgO interface at $|k| \sim 0$ is responsible for the large $TMR>10,000\%$ predicted for the Fe/MgO/Fe MTJ at $E=E_F$ for $N \geqslant 8$ in this and previous works. In the energy window from $E_F-1.0$ eV to $E_F-0.4$ eV there is no $\Delta_5$-symmetry state along the $\Gamma-$H line in the minority Fe channel (see Fig. 4(c)), so the $TMR \propto N$ asymptotic behavior changes to the $TMR \propto N^2$ asymptotic behavior (see Fig. 4(a) and Fig. 5(a,c)). The maximum of $T_{uu}(E)/(N \times T_{ud}(E))$ occurs at $E=E_F-0.85$ eV, where $T_{ud}(E)$ is small due to the $|k|^4$ factor in $T_{ud}(\mathbf{k})$, while $T_{uu}(\mathbf{k})$ is enhanced due to the Van Hove singularity at the edge of the $\Delta_1$ majority Fe band (see Fig. 1(b) and Fig. 4(b)). \begin{figure}[h] \includegraphics*[trim={1.6cm 8.6cm 2.0cm 2.2cm},clip,width=8cm]{TMR.all.eps} \caption{ (color online). Calculated TMR shown as a function of $N$ (a) for 6 energy points with $E \leqslant E_F+0.2$ eV and (b) for 5 energy points with $E\geqslant E_F+0.4$ eV. (c) TMR for $E = E_F-0.4$ eV (shown on a larger scale) and $TMR^{1/2}$ for the $E - E_F=-0.8$ and $-0.6$ eV energy points. 
} \label{fig5} \end{figure} The calculated TMR is shown as a function of $N$ in Fig. 5(a) for 6 energy points with $E \leqslant E_F+0.2$ eV and in Fig. 5(b) for 5 energy points with $E\geqslant E_F+0.4$ eV. Fig. 5(c) shows the TMR for $E = E_F-0.4$ eV (on a scale larger than that of Fig. 5(a)) and $TMR^{1/2}$ for $E - E_F=-0.8$ and $-0.6$ eV. The TMR shown in Fig. 5 is calculated using the definition $TMR=T_{uu}/(2T_{ud}) -1$ that neglects the $T_{dd}$ contribution to the transmission in PC. In general, $T_{dd}$ is much smaller compared to $T_{uu}$, except in the case of small $N$ where at energies near $E_F$ the IRS contribution to $T_{dd}$ is significant and $T_{dd}$ becomes comparable with (or even larger than) $T_{uu}$. As was noted in \cite{Belashchenko05}, the contribution of the IRS to $T_{dd}$ is significant due to the energy matching of the surface resonances at the left and right Fe/MgO surfaces that occurs "only for ideal, symmetric junctions, and only at zero bias". Slight non-ideality in either electrode or a bias voltage as small as 0.01 V is sufficient to destroy this resonance matching \cite{Belashchenko05}. The TMR curves shown in Fig. 5 have linear-in-$N$ asymptotic behaviour for all energy points except $E-E_F=-0.8,-0.6 $ and $0$ eV. For energy points with $E\geqslant E_F+0.4$ eV the linear-in-$N$ behavior starts already at $N=4$. For $E-E_F=-0.4$ eV the linear behavior starts somewhat later, at $N=8$, due to approaching the Fe minority $\Delta_5$ band edge (see Fig. 4(c)) and the corresponding reduction of the $\mathbf{k}$ integration range where the $\psi^{\Delta_5}_d(\mathbf{k},k_z)$ bands still exist (see the fast drop of $T_{ud}(\mathbf{k},E)$ at $|k|\sim 0.08$ at this energy shown in Fig. 3). 
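The simplified definition used for Fig. 5 differs from the full ratio of Eq. (\ref{TMR}) only by the neglected $T_{dd}/(2T_{ud})$ term. A sketch comparing the two definitions (the magnitudes below are illustrative only, chosen so that $T_{dd} \ll T_{ud} \ll T_{uu}$, not calculated values):

```python
# Compare the full TMR of Eq. (TMR) with the simplified definition
# TMR = T_uu/(2*T_ud) - 1 used for Fig. 5; the values below are
# illustrative magnitudes only.
T_uu, T_ud, T_dd = 1.0e-8, 4.0e-11, 2.0e-13

tmr_full = (T_uu + T_dd - 2.0 * T_ud) / (2.0 * T_ud)
tmr_simple = T_uu / (2.0 * T_ud) - 1.0
rel_diff = abs(tmr_full - tmr_simple) / tmr_full
print(tmr_full, tmr_simple, rel_diff)  # the two definitions nearly agree
```

The discrepancy is exactly $T_{dd}/(2T_{ud})$, which is negligible whenever $T_{dd} \ll T_{ud}$, i.e. away from the small-$N$, near-$E_F$ regime where the IRS enhance $T_{dd}$.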
For $E-E_F=-0.2$ and $0.2$ eV the linear-in-$N$ behavior also begins somewhat later, at $N=8$, due to the generally small STF $t_{d \mathbf{k} E}$ at $|k|\sim 0$ for $E$ between $E_F-0.2$ eV and $E_F+0.2$ eV and, therefore, the enhanced weight of the contributions from parts of the BZ other than $|k|\sim 0$ at smaller $N$. For $E=E_F$ the linear asymptotic regime is not established yet even at $N=12$ due to the narrow IRS-related peak in $T_{ud}(E)$ (although, as seen in Fig. 4(a), the linear asymptotic behavior is established already at $N=10$ for energies just $0.1$ eV smaller or larger than $E_F$). (Here we stress again that in a real experiment the contribution of the IRS will be suppressed due to interface roughness.) As can be concluded from the linear behavior of $TMR^{1/2}$ as a function of $N$ shown in Fig. 5(c) for the energy $E-E_F=-0.6$ eV, the asymptotic behavior $TMR\propto N^2$ starts already at $N=6$. For $E-E_F=-0.8$ eV the asymptotic behavior $TMR\propto N^2$ (or $TMR^{1/2}\propto N$) begins somewhat later, at $N=8$, due to the enhanced weight of the contributions from parts of the BZ other than $|k|\sim 0$ at smaller $N$, as we noted in the discussion of Fig. 3. In conclusion, from the analysis of the band structure of bulk Fe and the complex band structure of MgO we predicted the asymptotic behavior of the TMR in the Fe/MgO/Fe MTJ \emph{native} to the symmetry spin filtering effect: $TMR \propto N$ for energies from $E_F-0.4$ eV to $E_F+1.4$ eV and $TMR \propto N^2$ for energies from $E_F-1.0$ eV to $E_F-0.4$ eV. \emph{Ab initio} calculations of transmission functions performed for the Fe/MgO/Fe MTJ confirm these theoretical predictions in a broad range of energies and $N$. 
Large TMR obtained at energies near $E_F$ ($TMR> 10,000\%$ for $N\geqslant 8$) is attributed to the \emph{combination} of the symmetry spin filtering effect and the small surface transmission function of the minority Fe electrons at the Fe/MgO interface for $|k|\sim 0$, which leads to an additional $\times 10$ enhancement of the TMR at energies near $E_F$ compared to that at $E>E_F+0.4$ eV or $E \sim E_F-0.4$ eV. The super-linear behavior of the TMR at energies near $E_F$ obtained in this and previous theoretical works \cite{Butler01,Belashchenko05} is associated with the contribution of the interfacial resonance states (quickly decaying with $N$) to the APC transmission. In a real experiment the IRS contribution is suppressed due to surface roughness, thus providing a natural explanation of why no strong dependence of the TMR on $N$ has been found experimentally. Moreover, since the overlap integral at the Fe/MgO interface between Fe minority eigenstates and the $\Delta_1$-symmetry MgO eigenstate is proportional to $|k|^2$ at $|k|\sim 0$ \emph{only} because of the mismatching symmetry of these eigenfunctions, surface roughness and/or interface chemical disorder that breaks the symmetry of the wave functions at the interface will inevitably lead to a non-zero value of the overlap integral at $|k|=0$ and therefore to saturation of the TMR at large $N$, which is observed experimentally \cite{Parkin04,Yuasa04}. In addition, a non-ideal interface (due to interface chemical disorder or steps in surface layers) induces scattering of $|k|>0$ Fe minority states into the $|k|=0$ MgO barrier eigenstate, which also leads to the saturation of the TMR at large $N$ \cite{Mathon06}. Our prediction for the strength of the symmetry filtering effect is based on a simple analysis of the band structure of the bulk electrode material. 
Thus, such an analysis could be used as a tool for rapid materials-discovery searches for novel MTJs in the context of emerging technologies that require high TMR, including STT-MRAM technology, where the pool of candidate electrode materials includes several hundred Heusler alloys and magnetic multilayers. S.F. and O.N.M. acknowledge CNMS user support by the Oak Ridge National Laboratory Division of Scientific User Facilities. O.N.M. acknowledges partial support by C-SPIN, one of the six centers of STARnet, a Semiconductor Research Corporation program, sponsored by MARCO and DARPA. S.F. would like to thank Barbara Jones for useful discussions.
\section{Introduction} \noindent Nowadays, many engineering applications can be posed as convex quadratic problems (QPs). Several important applications that can be modeled in this framework, such as model predictive control for a linear dynamical system \cite{DomZgr:12,NecNed:13,NedNec:12,PatBem:12,StaSzu:14} and its dual, called moving horizon estimation \cite{FraVuk:14}, the DC optimal power flow problem for a power system \cite{ZimMur:11}, linear inverse problems arising in many branches of science \cite{BecTeb:09,WanLin:13}, or network utility maximization problems \cite{WeiOzd:10}, have attracted great attention lately. Since computational power has increased by many orders of magnitude in the last decade, highly efficient and reliable numerical optimization algorithms have been developed for solving the optimization problems arising from these applications in a very short time. For example, these recent hardware and numerical advances have made it possible to solve linear predictive control problems of nontrivial sizes within the range of microseconds, even on hardware platforms with limited computational power and memory \cite{JerLin:12}. \noindent The theoretical foundation of quadratic programming dates back to the work by Frank \& Wolfe \cite{FraWol:56}. After the publication of \cite{FraWol:56}, many numerical algorithms were developed in the literature that efficiently exploit the structure arising in this class of problems. Basically, we can identify three popular classes of algorithms for solving quadratic programs: active set methods, interior point methods, and (dual) first order methods. \noindent \textit{Active set methods} are based on the observation that quadratic problems with equality constraints are equivalent to solving a linear system. 
Thus, the iterations in these methods are based on solving a linear system and updating the active set (the term active set refers to the subset of constraints that are satisfied as equalities by the current estimate of the solution). Active set general purpose solvers are adequate for small-to-medium scale quadratic problems, since the numerical complexity per iteration is cubic in the dimension of the problem. Matlab's \textit{quadprog} function implements a primal active set method. Dual active set methods are available in the codes \cite{BarBie:06,FerKir:14}. \noindent \textit{Interior point methods} remove the inequality constraints from the problem formulation using a barrier term in the objective function that penalizes constraint violations. Usually a logarithmic barrier term is used, and the resulting equality-constrained nonlinear convex problem is solved by the Newton method. Since the iteration complexity also grows cubically with the dimension, interior-point solvers are likewise the standard for small-to-medium scale QPs. However, structure-exploiting interior point solvers have also been developed for particular large-scale applications: e.g., several solvers exploit the sparse structure of the quadratic problem arising in predictive control (CVXGEN \cite{MatBoy:09}, FORCES \cite{DomZgr:12}). A parallel interior point code that exploits special structures in the Hessian of large-scale structured quadratic programs has been developed in~\cite{GonGro:07}. \noindent \textit{First order methods} use only gradient information at each iterate by computing a step towards the solution of the unconstrained problem and then projecting this step onto the feasible set. Augmented Lagrangian algorithms for solving general nonconvex problems are presented in the software package Lancelot \cite{ConGou:92}. For convex QPs with simple constraints we can use primal first order methods for solving the quadratic program, as in \cite{Ulm:11}. 
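The projected gradient iteration described above can be sketched in a few lines for a box-constrained convex QP; the problem data, step size, and iteration count below are illustrative, not taken from any of the cited solvers:

```python
import numpy as np

# Minimal projected gradient sketch for a box-constrained convex QP
#   min 0.5 x'Qx + q'x   s.t.  lb <= x <= ub,
# the kind of primal first order iteration described in the text.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite Hessian
q = np.array([-1.0, -1.0])
lb, ub = np.zeros(2), np.ones(2)

L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of the gradient
x = np.zeros(2)
for _ in range(500):
    grad = Q @ x + q                     # gradient step...
    x = np.clip(x - grad / L, lb, ub)    # ...then projection onto the box

# First-order optimality: the projected gradient residual vanishes
residual = np.linalg.norm(x - np.clip(x - (Q @ x + q), lb, ub))
print(x, residual)
```

The per-iteration cost is dominated by the matrix-vector product $Qx$, which is the point made later about the computational bottleneck of such methods; the projection is cheap only because the feasible set is a box.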
In this case the main computational effort per iteration consists of a matrix-vector product. When the projection onto the primal feasible set is hard to compute, an alternative to primal first order methods is to use the Lagrangian relaxation to handle the complicated constraints and then to apply dual first order algorithms for solving the dual. The computational complexity certification of first order methods for solving the (augmented) Lagrangian dual of general convex problems is studied e.g. in \cite{LanMon:08,NecNed:13,NedNec:12,NecFer:15,NedOzd:09}, and of quadratic problems in \cite{Gis:14,PatBem:12,StaSzu:14}. In these methods the main computational effort consists of solving at each iteration a Lagrangian QP problem with simple constraints for a given multiplier, which allows us to determine the value of the dual gradient for that multiplier, and then updating the dual variables using matrix-vector products. For example, the toolbox FiOrdOs \cite{Ulm:11} auto-generates code for primal or dual fast gradient methods as proposed in \cite{RicMor:11}. The algorithm in \cite{PatBem:12} dualizes only the inequality constraints of the QP and assumes available a solver for linear systems that is able to solve the Lagrangian inner problem. However, neither implementation \cite{Ulm:11,PatBem:12} considers the important aspect that the Lagrangian inner problem cannot be solved exactly in practice. The effect of inexact computations in dual gradient values on the convergence of dual first order methods has been analyzed in detail in \cite{NecNed:13,NedNec:12,NecFer:15}. Moreover, most of these papers generate approximate primal solutions through averaging \cite{NecNed:13,NedNec:12,NedOzd:09}. On the other hand, in practice the last primal iterate is usually employed, since these methods typically converge faster in the last primal iterate than in an average of primal iterates. These issues motivate our work~here. \noindent \textit{Contributions}.
In this paper we analyze the computational complexity of several (augmented) dual first order methods implemented in DuQuad for solving convex quadratic problems. Contrary to most of the results from the literature \cite{NedOzd:09,PatBem:12}, our approach allows us to use inexact dual gradient information (i.e. it allows the (augmented) Lagrangian inner problem to be solved approximately) and is therefore able to tackle more general quadratic convex problems and to solve practical applications. Another important feature of our approach is that we also provide complexity results for the last primal iterate, while in much of the previous literature convergence rates are given for an average of primal iterates \cite{NecNed:13,NedNec:12,NedOzd:09,PatBem:12}. We derive in a unified framework the computational complexity of the dual and augmented dual (fast) gradient methods in terms of primal suboptimality and feasibility violation using inexact dual gradients and two types of approximate primal solutions: the last primal iterate and an average of primal iterates. To our knowledge, this is the first paper in which both approaches, dual and augmented dual first order methods, are analyzed in a unified manner. These algorithms are also implemented in the efficient programming language C in DuQuad, and optimized for low iteration complexity and low memory footprint. The toolbox has a dynamic Matlab interface which makes the process of testing, comparing, and analyzing the algorithms simple. The algorithms are implemented using only basic arithmetic and logical operations and are thus suitable to run on low cost hardware. The main computational bottleneck in the methods implemented in DuQuad is the matrix-vector product. Therefore, this toolbox can be used for solving either QPs on hardware with limited resources or sparse QPs of large dimension. \noindent \textit{Contents}. The paper is organized as follows.
In Section \ref{sec_pf} we describe the optimization problem that we solve in DuQuad. In Section \ref{sec_duquad} we describe the main theoretical aspects on which DuQuad is based, while in Section \ref{numerical_tests} we present some numerical results obtained with DuQuad. \noindent \textit{Notation}. For $x,y \in \rset^n$ denote the scalar product by $\langle x,y \rangle = x^T y$ and the Euclidean norm by $\|x\|=\sqrt{x^T x}$. Further, $[u]_X$ denotes the projection of $u$ onto the convex set $X$ and $\text{dist}_{X}(u) =\|u -[u]_X\|$ its distance. For a matrix $G \in \rset^{m \times n}$ we use the notation $\|G\|$ for the spectral norm. \section{Problem formulation} \label{sec_pf} \noindent In DuQuad we consider a general convex quadratic problem (QP) in the form: \begin{align}\label{original_primal} F^* = & \min_{u \in U} F(u) \quad \left(= \frac{1}{2} u^T Q u + q^T u\right) \\ & \text{s.t:} \;\;\; G u + g \in \mathcal{K}, \nonumber \end{align} where $F:\rset^n \rightarrow \rset^{}$ is a convex quadratic function with the Hessian $Q \succeq 0$, $G \in \rset^{p \times n}$, $U \subseteq \rset^n$ is a simple compact convex set, i.e. a box $U= [\text{lb} \; \text{ub}]$, and ${\mathcal{K}}$ is either the cone $\rset^p_{-}$ or the cone $\{0\}$. Note that our formulation allows us to incorporate in the QP either linear inequality constraints $\mathcal{K}=\rset^p_{-}$ (arising e.g. in the sparse formulation of predictive control and network utility maximization) or linear equality constraints $\mathcal{K}=\{0\}$ (arising e.g. in the condensed formulation of predictive control and DC optimal power flow). In fact, the user can define linear constraints of the form $\bar{\text{lb}} \leq \bar{G} u+ \bar{g} \leq \bar{\text{ub}}$, and depending on the values of $\bar{\text{lb}}$ and $\bar{\text{ub}}$ we obtain linear inequalities or equalities.
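To make the formulation \eqref{original_primal} concrete, the following minimal sketch builds a toy instance with a strongly convex Hessian; all names here are illustrative and are not part of the DuQuad interface:

```python
import numpy as np

# A toy instance of the QP above: minimize 0.5 u'Qu + q'u subject to
# G u + g in K and u in the box U = [lb, ub].  Illustrative data only;
# these names are not the DuQuad interface.
n, p = 4, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = A.T @ A + 0.1 * np.eye(n)      # Hessian, here Q > 0
q = rng.standard_normal(n)
G = rng.standard_normal((p, n))    # complicating constraints G u + g in K
g = rng.standard_normal(p)
lb, ub = -np.ones(n), np.ones(n)   # the simple box set U

def F(u):
    """Quadratic objective of the QP."""
    return 0.5 * u @ Q @ u + q @ u
```

Such a small synthetic instance is convenient for unit-testing the formulas of the next sections before moving to application data.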
Throughout the paper we assume that there exists a finite optimal Lagrange multiplier $\lambda^*$ for the QP \eqref{original_primal} and it is difficult to project on the feasible set of problem~\eqref{original_primal}: $${\cal X}_{QP} = \{u \in U: \; G u+ g \in \mathcal{K}\}.$$ Therefore, solving the primal problem \eqref{original_primal} approximately with primal first order methods is numerically difficult and thus we usually use (augmented) dual first order methods for finding an approximate solution for \eqref{original_primal}. By moving the complicating constraints $G u+ g \in \mathcal{K}$ into the cost via Lagrange multipliers we define the (augmented) dual function: \begin{align} \label{dual_fc} d_\rho(\lambda) = \min_{u \in U} \mathcal{L}_\rho(u,\lambda), \end{align} where $\mathcal{L}_\rho(u,\lambda)$ denotes the (augmented) Lagrangian w.r.t.~the complicating constraints $G u +g \in \mathcal{K}$, i.e.: \begin{equation} \label{auglag} \mathcal{L}_\rho(u,\lambda) = \min\limits_{s \in \mathcal{K}} \; F(u) + \langle \lambda, G u + g - s \rangle + \frac{\rho}{2}\norm{Gu + g - s}^2 \end{equation} where the regularization parameter $\rho \geq 0$. We denote $s(u,\lambda) = \arg\min\limits_{s \in \mathcal{K}} \langle \lambda, G u + g -s \rangle + \frac{\rho}{2}\norm{Gu+g - s}^2$ and observe that: \begin{equation*} s(u,\lambda) = \begin{cases} \left[ Gu+g + \frac{1}{\rho}\lambda \right]_{\mathcal{K}} & \text{if} \;\; \rho>0 \\ 0 &\text{if} \;\; \rho=0. \end{cases} \end{equation*} Using this observation in the formulation \eqref{auglag}, we obtain: \begin{align} \label{auglag1} \mathcal{L}_{\rho}(u, \lambda) = \begin{cases} F(u) + \frac{\rho}{2} \text{dist}_{\mathcal{K}} \left( Gu+g + \frac{1}{\rho} \lambda \right)^2 - \frac{1}{2\rho}\norm{\lambda}^2, & \text{if} \; \rho > 0\\ F(u) + \langle \lambda, G u + g \rangle, & \text{if}\; \rho = 0. 
\end{cases} \end{align} In order to tackle general convex quadratic programs, in DuQuad we consider the following two options: \noindent \textbf{Case 1}: if $Q \succ 0$, i.e. $Q$ has the smallest eigenvalue $\lambda_{\min}(Q) >0$, then we consider $\rho=0$ and recover the ordinary Lagrangian function. \noindent \textbf{Case 2}: if $Q \succeq 0$, i.e. $Q$ has the smallest eigenvalue $\lambda_{\min}(Q) =0$, then we consider $\rho >0$ and recover the augmented Lagrangian function. \noindent Our formulation of the (augmented) Lagrangian \eqref{auglag1} and the previous two cases allow us to treat in a unified framework both approaches, dual and augmented dual first order methods, for general convex QPs. We denote by $u(\lambda)$ the optimal solution of the \textit{inner problem} with simple constraints $u \in U$: \begin{align} \label{ul} u(\lambda) = \arg \min_{u \in U} \mathcal{L}_\rho(u,\lambda). \end{align} Note that for both cases described above the (augmented) dual function is differentiable everywhere. Moreover, the gradient of the (augmented) dual function $d_\rho(\lambda)$ is $L_{\text{d}}$-Lipschitz continuous and given by \cite{NecNed:13,NedNec:12,Nes:04,Roc:76}: \begin{equation} \label{Ld} \nabla d_{\rho} (\lambda) = Gu(\lambda) + g - s(u(\lambda),\lambda) \;\; \text{and} \;\; L_{\text{d}}= \frac{\|G\|^2}{\lambda_{\min}(Q) + \rho\|G\|^2} \end{equation} for all $\lambda \in \rset^p$. Since the dual function has a Lipschitz continuous gradient, we can derive bounds on $d_\rho$ in terms of a linear and a quadratic model (the so-called \textit{descent lemma}) \cite{Nes:04}: \begin{align} \label{descentlemma} 0 \leq [d_\rho(\mu) + \langle {\nabla} d_\rho(\mu),\lambda - \mu \rangle] - d_\rho(\lambda) \leq \frac{L_{\text{d}}}{2} \|\mu - \lambda\|^2 \quad \forall \lambda, \mu \in \rset^p. \end{align} The descent lemma is essential in proving convergence rates for first order methods \cite{Nes:04}.
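As a quick sanity check of the closed form \eqref{auglag1}, the following sketch (assuming $\mathcal{K}=\rset^p_{-}$; illustrative Python, not the DuQuad C implementation) evaluates $s(u,\lambda)$ and $\mathcal{L}_\rho(u,\lambda)$ so that the explicit formula can be compared against the direct minimization over $s$ in \eqref{auglag}:

```python
import numpy as np

# Evaluate s(u, lam) and the (augmented) Lagrangian for K = R^p_-.
# Hedged sketch: function names are illustrative, not DuQuad's API.
def s_of(u, lam, G, g, rho):
    v = G @ u + g
    if rho > 0:
        return np.minimum(v + lam / rho, 0.0)  # projection onto K = R^p_-
    return np.zeros_like(v)                    # s(u, lam) = 0 when rho = 0

def aug_lagrangian(u, lam, Q, q, G, g, rho):
    Fu = 0.5 * u @ Q @ u + q @ u
    v = G @ u + g
    if rho > 0:
        w = v + lam / rho
        # dist_K(w)^2 for K = R^p_- equals ||max(w, 0)||^2
        return Fu + 0.5 * rho * np.sum(np.maximum(w, 0.0) ** 2) \
                  - (lam @ lam) / (2.0 * rho)
    return Fu + lam @ v                        # ordinary Lagrangian
```

Evaluating the explicit form and the inner minimization over $s$ at the same point should agree, which is a useful unit test when porting these formulas.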
Since we assume the existence of a finite optimal Lagrange multiplier for \eqref{original_primal}, strong duality holds and thus the \textit{outer problem} is smooth and satisfies: \begin{align} \label{dual_pr} F^* = \max_{\lambda \in \mathcal{K}_d} d_\rho(\lambda), \end{align} where $$\mathcal{K}_d = \begin{cases} \rset^p_+, & \text{if} \;\; Q \succ 0 \;\; \text{and} \;\; \mathcal{K} = \rset^p_{-} \\ \rset^p, & \;\; \text{otherwise}. \end{cases}$$ Note that, in general, the smooth (augmented) dual problem \eqref{dual_pr} is not a QP, but has simple constraints. We denote a primal optimal solution by $u^*$ and a dual optimal solution by $\lambda^*$. We introduce $\Lambda^* \subseteq \mathcal{K}_d$ as the set of optimal solutions of the smooth dual problem \eqref{dual_pr} and define for some $\lambda^0 \in \rset^p$ the following finite quantity: \begin{equation} \label{eq_multipleirs_bounded} \mathcal{R}_{\text{d}} = \min \limits_{\lambda^* \in \Lambda^*} \|\lambda^* - \lambda^0\|. \end{equation} In the next section we present a general first order algorithm for convex optimization with simple constraints that is used frequently in our toolbox. \subsection{First order methods} \noindent In this section we present a framework for first order methods generating an approximate solution for a smooth convex problem with simple constraints in the form: \begin{equation} \label{aux_prob} \phi^* = \min_{x \in X} \; \phi(x), \end{equation} where $\phi: \rset^n \to \rset^{}$ is a convex function and $X$ is a simple convex set (i.e. the projection on this set is easy). Additionally, we assume that $\phi$ has Lipschitz continuous gradient with constant $L_{\phi} >0$ and is strongly convex with constant $\sigma_{\phi} \geq 0$. This general framework covers important particular algorithms \cite{Nes:04,BecTeb:09}: e.g. gradient algorithm, fast gradient algorithm for smooth problems, or fast gradient algorithm for problems with smooth and strongly convex objective function. 
Thus, we will analyze the iteration complexity of the following general first order method that updates two sequences $(x^k, y^k)_{k \geq 0}$ as follows: \begin{center} \framebox{ \parbox{7cm}{ \begin{center} \textbf{ Algorithm {\bf FOM ($\phi,X$)} } \end{center} {Given $x^0 = y^1 \in X$, for $k\geq 1$ compute:} \begin{enumerate} \item ${x}^{k}= \left[ y^k - \frac{1}{L_{\phi}} \nabla \phi(y^k)\right]_X$, \item $y^{k+1} = x^k + \beta_k (x^k - x^{k-1})$, \end{enumerate} }} \end{center} where $\beta_k$ is the parameter of the method and we choose it in an appropriate way depending on the properties of the function $\phi$. More precisely, $\beta_k$ can be updated as follows: \noindent \textbf{GM}: in the Gradient Method $\beta_k= \frac{\theta_k -1 }{\theta_{k+1}}$, where $\theta_k=1$ for all $k$. This is equivalent to $\beta_k=0$ for all $k$. In this case $y^{k+1} = x^k$ and thus we have the classical gradient update: ${x}^{k+1}= [ x^k - \frac{1}{L_{\phi}} \nabla \phi(x^k)]_X$. \noindent \textbf{FGM}: in the Fast Gradient Method for smooth convex problems $\beta_k=\frac{\theta_k -1 }{\theta_{k+1}}$, where $\theta_{k+1} = \frac{1 + \sqrt{1 + 4 \theta_k^2 }}{2}$ and $\theta_1=1$. In this case we get a particular version of Nesterov's accelerated scheme \cite{Nes:04} that updates two sequences $(x^{k},y^k)$ and has been analyzed in detail in \cite{BecTeb:09}. \noindent \textbf{FGM}$_\sigma$: in the Fast Gradient Method for smooth convex problems with a strongly convex objective function, with constant $\sigma_{\phi} > 0$, we choose $\beta_k=\frac{\sqrt{L_{\phi}} - \sqrt{\sigma_{\phi}}}{\sqrt{L_{\phi}} + \sqrt{\sigma_{\phi}}}$ for all $k$. In this case we get a particular version of Nesterov's accelerated scheme \cite{Nes:04} that also updates two sequences $(x^{k},y^k)$.
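For a box-constrained quadratic objective $\phi(x)=\frac{1}{2}x^TQx+q^Tx$, Algorithm {\bf FOM} with the {\bf GM} and {\bf FGM} choices of $\beta_k$ can be sketched compactly as follows (illustrative Python, not the DuQuad C sources):

```python
import numpy as np

# Sketch of Algorithm FOM(phi, X) for phi(x) = 0.5 x'Qx + q'x and a box X.
# variant="GM" uses beta_k = 0; variant="FGM" uses the theta_k recursion.
def fom(Q, q, lb, ub, x0, iters, variant="FGM"):
    L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of grad(phi)
    proj = lambda z: np.clip(z, lb, ub)      # [.]_X for a box
    x = x0.copy()
    y = x0.copy()
    theta = 1.0
    for _ in range(iters):
        x_new = proj(y - (Q @ y + q) / L)    # step 1 of FOM
        if variant == "GM":
            beta, theta_new = 0.0, 1.0       # reduces to projected gradient
        else:
            theta_new = (1.0 + np.sqrt(1.0 + 4.0 * theta * theta)) / 2.0
            beta = (theta - 1.0) / theta_new # Nesterov momentum weight
        y = x_new + beta * (x_new - x)       # step 2 of FOM
        x, theta = x_new, theta_new
    return x
```

For instance, with $Q=I$ and $q=-c$ the method reduces to projecting $c$ onto the box, which makes it easy to check the implementation.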
\noindent The convergence rate of Algorithm {\bf FOM}($\phi,X$) in terms of function values is given in the next lemma: \begin{lemma} \cite{BecTeb:09,Nes:04} \label{lemma_sublin_dfg} For the smooth convex problem \eqref{aux_prob} assume that the objective function $\phi$ is strongly convex with constant $\sigma_\phi \geq 0$ and has Lipschitz continuous gradient with constant $L_{\phi} >0$. Then, the sequences $\left(x^k, y^k\right)_{k\geq 0}$ generated by Algorithm {\bf FOM}($\phi,X$) satisfy: \begin{equation} \label{bound_gen_dfg1} \phi(x^k) - \phi^* \!\leq\! \min \!\left(\!\! \left( \!1 \!-\! \sqrt{\frac{\sigma_\phi}{L_\phi}}\right)^{k-1} \!\!\!\!\!\! L_{\phi} \mathcal{R}^2_{\phi}, \frac{2 L_{\phi} \mathcal{R}^2_{\phi}}{(k\!+\!1)^{p(\beta_k)}} \!\right) \end{equation} where $\mathcal{R}_{\phi} = \min\limits_{x^*_{\phi} \in X^*_{\phi}} \| x^0 - x^*_{\phi}\|$, with $X^*_{\phi}$ the optimal set of \eqref{aux_prob}, and $p(\beta_k)$ is defined as follows: \begin{align} \label{pbetak} p(\beta_k) = \begin{cases} 1 & \mbox{if } \beta_k = 0 \\ 2 & \mbox{otherwise}. \end{cases} \end{align} \end{lemma} Thus, Algorithm {\bf FOM} has linear convergence provided that $\sigma_\phi>0$. Otherwise, it has sublinear convergence. \section{Inexact (augmented) dual first order methods} \label{sec_duquad} \noindent In this section we describe an inexact (augmented) dual first order framework implemented in DuQuad, a solver able to find an approximate solution for the quadratic program \eqref{original_primal}. For a given accuracy $\epsilon >0$, $u_\epsilon \in U$ is called an $\epsilon$-\textit{primal solution} for problem \eqref{original_primal} if the following inequalities hold: \[ \text{dist}_{\mathcal{K}} (G u_\epsilon + g) \leq {\cal O}(\epsilon) \quad \text{and} \quad |F(u_\epsilon) - F^*| \leq {\cal O}(\epsilon). \] \noindent The main function in DuQuad is the one implementing the general Algorithm {\bf FOM}.
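The $\epsilon$-primal solution test can be coded directly; the sketch below assumes $\mathcal{K}=\rset^p_{-}$ and, for illustration only, that $F^*$ is known (in practice one would use a dual bound instead):

```python
import numpy as np

# Check the eps-primal conditions for K = R^p_-:
# dist_K(G u + g) <= eps  and  |F(u) - F*| <= eps (up to constants).
# F_star is assumed known here purely for illustration.
def is_eps_primal(u, Q, q, G, g, F_star, eps):
    infeas = np.linalg.norm(np.maximum(G @ u + g, 0.0))  # distance to R^p_-
    subopt = abs(0.5 * u @ Q @ u + q @ u - F_star)
    return infeas <= eps and subopt <= eps
```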
Note that if the feasible set ${\cal X}_{QP}$ of \eqref{original_primal} is simple, then we can call directly {\bf FOM}($F,{\cal X}_{QP}$) in order to obtain an approximate solution for \eqref{original_primal}. However, in general the projection onto ${\cal X}_{QP}$ is as difficult as solving the original problem. In this case we resort to the (augmented) dual formulation \eqref{dual_pr} for finding an $\epsilon$-primal solution for the original QP \eqref{original_primal}. The main idea in DuQuad is based on the following observation: from \eqref{ul}--\eqref{Ld}, computing the gradient value of the dual function at some multiplier $\lambda$ requires solving the inner problem \eqref{ul} exactly; although, in some cases, the (augmented) Lagrangian $\mathcal{L}_\rho(u,\lambda)$ is quadratic and the feasible set $U$ in \eqref{ul} is simple, this inner problem generally cannot be solved exactly. Therefore, the main iteration in DuQuad consists of two steps: \vspace{0.2cm} \noindent \textbf{Step 1}: for a given inner accuracy $\epsilon_{\text{in}}$ and a multiplier $\mu \in \rset^p$ solve approximately the inner problem \eqref{ul} with accuracy $\epsilon_{\text{in}}$ to obtain an approximate solution $\bar{u}(\mu)$ instead of the exact solution $u(\mu)$, i.e.: \begin{align} \label{duquad_in} 0 \le \mathcal{L}_\rho(\bar{u}(\mu),\mu) - d_\rho(\mu) \leq \epsilon_{\text{in}}. \end{align} In DuQuad, we obtain an approximate solution $\bar{u}(\mu)$ using Algorithm {\bf FOM}($\mathcal{L}_\rho(\cdot,\mu),U$). From Lemma \ref{lemma_sublin_dfg} we can tightly estimate the number of iterations that we need to perform in order to get an $\epsilon_{\text{in}}$-solution $\bar{u}(\mu)$ for \eqref{ul}: the Lipschitz constant is $L_{\mathcal{L}} = \lambda_{\max} (Q) + \rho \|G\|^2$, the strong convexity constant is $\sigma_{\mathcal{L}} = \lambda_{\min} (Q + \rho G^T G)$ (provided that e.g.
$\mathcal{K}=\{0\}$) and ${\cal R}_{\mathcal{L}} \leq D_{U}$ (the diameter of the box set $U$). Then, the number of iterations that we need to perform for computing $\bar{u}(\mu)$ satisfying \eqref{duquad_in} can be obtained from \eqref{bound_gen_dfg1}. \vspace{0.2cm} \noindent \textbf{Step 2}: Once an $\epsilon_{\text{in}}$-solution $\bar{u}(\mu)$ for \eqref{ul} has been found, we update the Lagrange multipliers at the outer stage, again using Algorithm {\bf FOM}($d_\rho,{\mathcal{K}}_d$). Note that for updating the Lagrange multipliers we use, instead of the true value of the dual gradient $\nabla d_\rho(\mu) = G u(\mu) + g - s(u(\mu),\mu)$, an approximate value given by: $\bar{\nabla} d_\rho(\mu) = G \bar{u}(\mu) + g - s(\bar{u}(\mu),\mu)$. \noindent In \cite{DevGli:14,NedNec:12,NecNed:13,NecFer:15} it has been proved separately, for dual and augmented dual first order methods, that using an appropriate value for $\epsilon_{\text{in}}$ (depending on the desired accuracy $\epsilon$ to which we want to solve the QP \eqref{original_primal}) we can still preserve the convergence rates of Algorithm {\bf FOM}($d_\rho,{\mathcal{K}}_d$) given in Lemma \ref{lemma_sublin_dfg}, although we use inexact dual gradients. In the sequel, we derive in a unified framework the computational complexity of the dual and augmented dual (fast) gradient methods. To our knowledge, this is the first time that both approaches, dual and augmented dual first order methods, are analyzed in a unified manner.
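For the ordinary Lagrangian case ($\rho=0$, $\mathcal{K}=\{0\}$, $Q \succ 0$), one pass through this two-step scheme can be sketched as follows; the inner solver here is a plain projected gradient loop and all names are illustrative, not the DuQuad API:

```python
import numpy as np

# One inexact dual gradient evaluation for rho = 0, K = {0}, Q > 0.
# Step 1: approximately minimize L_0(u, mu) = 0.5 u'Qu + (q + G'mu)'u + g'mu
# over the box U by projected gradient.  Step 2: since s(u, mu) = 0 here,
# the inexact dual gradient is G u_bar + g.
def inexact_dual_gradient(Q, q, G, g, lb, ub, mu, inner_iters=200):
    q_mu = q + G.T @ mu                      # linear term of the inner QP
    L = np.linalg.eigvalsh(Q).max()          # inner Lipschitz constant
    u = np.clip(np.zeros_like(q), lb, ub)
    for _ in range(inner_iters):
        u = np.clip(u - (Q @ u + q_mu) / L, lb, ub)
    return u, G @ u + g                      # (u_bar(mu), approx gradient)
```

When the box is inactive, the exact inner minimizer is $u(\mu) = -Q^{-1}(q + G^T\mu)$, which provides a convenient correctness check.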
First, we show that, by introducing inexact values for the dual function and for its gradient, given by the following expressions: \begin{equation}\label{inexact_framework} \bar{d}_\rho(\mu) = \mathcal{L}_\rho(\bar{u}(\mu),\mu) \;\; \text{and} \;\; \bar{\nabla} d_\rho(\mu) = G \bar{u}(\mu) + g - s(\bar{u}(\mu),\mu), \end{equation} we obtain a descent relation similar to the one in \eqref{descentlemma}, stated in the following lemma: \begin{lemma} Let $\epsilon_{\text{in}} > 0$ be such that \eqref{duquad_in} holds. Then, based on the definitions \eqref{inexact_framework}, the following inequalities hold: \begin{align} \label{descentlemmainexact} 0 \leq [\bar{d}_\rho(\mu) \!+ \langle \bar{\nabla} d_\rho(\mu),\lambda - \mu \rangle] - d_\rho(\lambda) \leq\! L_{\text{d}} \|\mu - \lambda\|^2 +\! 2 \epsilon_{\text{in}} \quad \forall \lambda,\mu \!\in\! \rset^p. \end{align} \end{lemma} \begin{proof} From the definition of $d_{\rho}$, \eqref{duquad_in} and \eqref{inexact_framework} we derive: \begin{align*} d_{\rho} (\lambda) & = \min\limits_{u \in U, \; s \in \mathcal{K}} F(u) + \langle \lambda, Gu+g -s \rangle + \frac{\rho}{2}\norm{Gu+g-s}^2\\ &\le F(\bar{u}(\mu)) + \left\langle \lambda, G \bar{u}(\mu) + g - s(\bar{u}(\mu),\mu) \right\rangle + \frac{\rho}{2}\left \| G \bar{u}(\mu) + g - s(\bar{u}(\mu),\mu) \right \|^2 \\ & = \mathcal{L}_{\rho}(\bar{u}(\mu),\mu) + \langle G\bar{u}(\mu) + g - s(\bar{u}(\mu),\mu), \lambda - \mu \rangle \\ & = \bar{d}_{\rho}(\mu) + \langle \bar{\nabla} d_{\rho}(\mu), \lambda - \mu\rangle, \end{align*} which proves the first inequality. In order to prove the second inequality, let $\tilde{u} \in U$ be a fixed primal point such that $\mathcal{L}_{\rho}(\tilde{u},\mu) \ge d_{\rho}(\mu)$.
Then, we note that the nonnegative function $h(\mu) = \mathcal{L}_{\rho}(\tilde{u},\mu) -d_{\rho}(\mu) \ge 0$ has Lipschitz gradient with constant $L_d$ and thus we have \cite{Nes:04}: \begin{align*} \frac{1}{2 L_d} & \left\| \left(G \tilde{u} + g - s(\tilde{u},\mu) \right) - \nabla d_{\rho}(\mu)\right\|^2 = \frac{1}{2 L_d} \norm{\nabla h(\mu)}^2 \\ & \le h(\mu) - \min_{\nu \in \rset^p} h(\nu) \le \mathcal{L}_{\rho}(\tilde{u},\mu) - d_{\rho}(\mu). \end{align*} Taking now $\tilde{u} = \bar{u}(\mu)$ and using \eqref{duquad_in}, then we obtain: \begin{equation}\label{inexact_gradient_rel} \norm{\bar{\nabla} d_{\rho} (\mu) - \nabla d_{\rho}(\mu)} \le \sqrt{2 L_d \epsilon_{\text{in}}} \qquad \forall \mu \in \rset^p. \end{equation} Furthermore, combining \eqref{inexact_gradient_rel} with \eqref{descentlemma} and \eqref{duquad_in} we have: \begin{align*} & d_{\rho}(\lambda) \ge \bar{d}_{\rho}(\mu) + \langle \nabla d_{\rho}(\mu), \lambda -\mu \rangle - \frac{L_d}{2}\norm{\lambda - \mu}^2 - \epsilon_{\text{in}} \\ & \ge \bar{d}_{\rho}(\mu) + \langle \bar{\nabla} d_{\rho}(\mu), \lambda - \mu \rangle - \frac{L_d}{2}\norm{\lambda - \mu}^2 + \langle \nabla d_{\rho} (\mu) - \bar{\nabla} d_{\rho}(\mu), \lambda - \mu \rangle - \epsilon_{\text{in}}\\ & \ge \bar{d}_{\rho}(\mu) \!+ \! \langle \bar{\nabla} d_{\rho}(\mu), \lambda - \mu \rangle - \frac{L_d}{2}\norm{\lambda - \mu}^2 - \norm{\bar{\nabla} d_{\rho}(\mu) \!-\! \nabla d_{\rho}(\mu)} \norm{\lambda - \mu} - \epsilon_{\text{in}}\\ & \overset{\eqref{inexact_gradient_rel}}{\ge} \bar{d}_{\rho}(\mu) + \langle \bar{\nabla} d_{\rho} (\mu), \lambda - \mu \rangle - \frac{L_d}{2}\norm{\lambda - \mu}^2 - \sqrt{2L_d \epsilon_{\text{in}}}\norm{\lambda -\mu} - \epsilon_{\text{in}}. 
\end{align*} Using the relation $\sqrt{ab} \le (a + b)/2$ we have: \begin{equation*} d_{\rho}(\lambda) \ge \bar{d}_{\rho}(\mu) + \langle \bar{\nabla} d_{\rho} (\mu), \lambda - \mu \rangle - L_d\norm{\lambda - \mu}^2 - 2 \epsilon_{\text{in}}, \end{equation*} which shows the second inequality of our lemma. \qed \end{proof} \noindent This lemma will play a major role in proving rates of convergence for the methods presented in this paper. Note that in \eqref{descentlemmainexact} $\epsilon_{\text{in}}$ enters linearly, while in \cite{DevGli:14,NedNec:12} $\epsilon_{\text{in}}$ enters quadratically in the context of the augmented Lagrangian, and thus in the sequel we obtain better convergence estimates than those in the previous papers. In conclusion, for solving the dual problem \eqref{dual_pr} in DuQuad we use the following inexact (augmented) dual first order algorithm: \begin{center} \framebox{ \parbox{7.7cm}{ \begin{center} \textbf{ Algorithm {\bf DFOM ($d_\rho,\mathcal{K}_d$)} } \end{center} {Given $\lambda^0 = \mu^1 \in \mathcal{K}_d$, for $k\geq 1$ compute:} \begin{enumerate} \item $\bar{u}^k$ satisfying \eqref{duquad_in} for $\mu = \mu^k$, i.e. $\bar{u}^k = \bar{u}(\mu^k)$, \item ${\lambda}^{k}= \left[ \mu^k + \frac{1}{2L_{\text{d}}} \bar{\nabla} d_\rho(\mu^k)\right]_{\mathcal{K}_d}$, \item $\mu^{k+1} = \lambda^k + \beta_k (\lambda^k - \lambda^{k-1})$. \end{enumerate} }} \end{center} Recall that $\bar{u}^k = \bar{u}(\mu^k)$ satisfies the inner criterion \eqref{duquad_in} and that $\bar{\nabla} d_\rho(\mu^k) = G \bar{u}^k + g - s(\bar{u}^k,\mu^k)$. Moreover, $\beta_k$ is chosen as follows: \begin{itemize} \item \textbf{DGM}: in the (augmented) Dual Gradient Method $\beta_k= \frac{\theta_k -1 }{\theta_{k+1}}$, where $\theta_k=1$ for all $k$, or equivalently $\beta_k=0$ for all~$k$, i.e. the ordinary gradient algorithm.
\item \textbf{DFGM}: in the (augmented) Dual Fast Gradient Method $\beta_k=\frac{\theta_k -1 }{\theta_{k+1}}$, where $\theta_{k+1} = \frac{1 + \sqrt{1 + 4 \theta_k^2 }}{2}$ and $\theta_1=1$, i.e. a variant of Nesterov's accelerated scheme. \end{itemize} \noindent Therefore, in DuQuad we can solve the smooth (augmented) dual problem \eqref{dual_pr} either with the dual gradient method \textbf{DGM} ($\beta_k =0$) or with the dual fast gradient method \textbf{DFGM} ($\beta_k$ is updated based on $\theta_k$). Recall that for computing $\bar{u}^k$ in DuQuad we use Algorithm {\bf FOM}($\mathcal{L}_\rho(\cdot,\mu^k),U$) (see the discussion of Step 1). When applied to the inner subproblem \eqref{ul}, Algorithm {\bf FOM}($\mathcal{L}_\rho(\cdot,\mu^k),U$) will converge linearly provided that $\sigma_{\mathcal{L}} > 0$. Moreover, when applying Algorithm {\bf FOM}($\mathcal{L}_\rho(\cdot,\mu^k),U$) we use a warm start, i.e. we start the iteration from the previously computed $\bar{u}^{k-1}$. Combining the inexact descent relation \eqref{descentlemmainexact} with Lemma \ref{lemma_sublin_dfg} we obtain the following convergence rate for the general Algorithm {\bf DFOM}($d_\rho,\mathcal{K}_d$) in terms of dual function values of \eqref{dual_pr}: \begin{theorem} \cite{DevGli:14,NedNec:12,NecNed:13} \label{lemma_sublin_dfg_inexact} For the smooth (augmented) dual problem \eqref{dual_pr} the dual sequences $\left(\lambda^k, \mu^k\right)_{k\geq 0}$ generated by Algorithm {\bf DFOM}($d_\rho,\mathcal{K}_d$) satisfy the following convergence estimate on dual suboptimality: \begin{equation} \label{bound_gen_dfg} F^* - d_\rho(\lambda^k) \leq \frac{4 L_{\text{d}} \mathcal{R}^2_{\text{d}}}{(k+1)^{p(\beta_k)}} + 2(k+1)^{p(\beta_k)-1} \epsilon_{\text{in}}, \end{equation} where we recall that $\mathcal{R}_{\text{d}} = \min \limits_{\lambda^* \in \Lambda^*} \|\lambda^* - \lambda^0\|$ and $p(\beta_k)$ is defined as in \eqref{pbetak}.
\qed \end{theorem} \noindent Note that in \cite[Theorem 2]{DevGli:14}, the convergence rate of the \textbf{DGM} scheme is provided in the average dual iterate $\hat{\lambda}^k = \frac{1}{k+1} \sum\limits_{j=0}^k \lambda^j$ and not in the last dual iterate $\lambda^k$. However, for a uniform treatment in Theorem \ref{lemma_sublin_dfg_inexact} we redefine the dual final point (the dual last iterate $\lambda^k$ when some stopping criterion is satisfied) as follows: $\lambda^k = \left[\hat{\lambda}^k + \frac{1}{2L_{\text{d}}} \bar{\nabla} d_\rho(\hat{\lambda}^k)\right]_{\mathcal{K}_d}$. \subsection{How to choose the inner accuracy $\epsilon_{\text{in}}$ in DuQuad} \noindent We now show how to choose the inner accuracy $\epsilon_{\text{in}}$ in DuQuad. From Theorem \ref{lemma_sublin_dfg_inexact} we conclude that in order to get $\epsilon$-dual suboptimality, i.e. $ F^* - d_\rho(\lambda^k) \leq \epsilon$, the inner accuracy $\epsilon_{\text{in}}$ and the number of outer iterations $k_{\text{out}}$ (i.e. the number of updates of the Lagrange multipliers) have to be chosen as follows: \begin{equation} \label{choose_ein} \epsilon_{\text{in}} = \begin{cases} \frac{\epsilon}{4} & \mbox{if } \;\; \textbf{DGM} \\ \frac{\epsilon\sqrt{\epsilon}}{8 \mathcal{R}_{\text{d}} \sqrt{2L_{\text{d}}}} & \mbox{if } \;\; \textbf{DFGM}, \end{cases} \hspace{20pt} k_{\text{out}} = \begin{cases} \frac{8 L_d \mathcal{R}_d^2}{\epsilon} & \text{if} \;\; \textbf{DGM} \\ \sqrt{\frac{8 L_d \mathcal{R}_d^2}{\epsilon}} & \text{if} \;\; \textbf{DFGM}. \end{cases} \end{equation} Indeed, by enforcing each term of the right hand side of \eqref{bound_gen_dfg} to be smaller than $\frac{\epsilon}{2}$ we first obtain the bound on the number of outer iterations $k_{\text{out}}$. By substituting this bound into the expression of $\epsilon_{\text{in}}$, we also obtain how to choose $\epsilon_{\text{in}}$, i.e. the estimates \eqref{choose_ein}.
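The choices \eqref{choose_ein} are easy to evaluate numerically; the helper below (illustrative only, not part of the DuQuad interface) returns $\epsilon_{\text{in}}$ and $k_{\text{out}}$ for a given target accuracy $\epsilon$, constant $L_{\text{d}}$ and an estimate of $\mathcal{R}_{\text{d}}$:

```python
import numpy as np

# Inner accuracy and outer iteration budget from the estimates above.
# eps: target accuracy, Ld: dual Lipschitz constant, Rd: estimate of R_d.
def inner_accuracy_and_outer_iters(eps, Ld, Rd, method="DFGM"):
    if method == "DGM":
        eps_in = eps / 4.0
        k_out = 8.0 * Ld * Rd**2 / eps
    else:  # DFGM requires a more accurate inner solve
        eps_in = eps * np.sqrt(eps) / (8.0 * Rd * np.sqrt(2.0 * Ld))
        k_out = np.sqrt(8.0 * Ld * Rd**2 / eps)
    return eps_in, int(np.ceil(k_out))
```

For $\epsilon=0.01$, $L_{\text{d}}=\mathcal{R}_{\text{d}}=1$ this gives a much smaller $\epsilon_{\text{in}}$ (but far fewer outer iterations) for \textbf{DFGM} than for \textbf{DGM}, in line with the discussion below.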
We conclude that the inner QP \eqref{ul} has to be solved with higher accuracy in the dual fast gradient algorithm \textbf{DFGM} than in the dual gradient algorithm \textbf{DGM}. This shows that the dual gradient algorithm \textbf{DGM} is robust to inexact (augmented) dual first order information, while the dual fast gradient algorithm \textbf{DFGM} is sensitive to inexact computations (see also Fig. \ref{dfgm_sensitivity}). In DuQuad the user can choose either Algorithm \textbf{DFGM} or Algorithm \textbf{DGM} for solving the (augmented) dual problem \eqref{dual_pr} and can also choose the inner accuracy $\epsilon_{\text{in}}$ for solving the inner problem (in the toolbox the default values for $\epsilon_{\text{in}}$ are of the same order as in \eqref{choose_ein}). \begin{figure}[h!] \centering \includegraphics[width=0.52\textwidth,height=4.5cm]{dfgm_last_subopt1} \hspace*{-0.7cm} \includegraphics[width=0.52\textwidth,height=4.5cm]{dfgm_last_sensitivity} \caption{Behavior of Algorithms \textbf{DGM} (left) and \textbf{DFGM} (right) in terms of primal suboptimality w.r.t. inner accuracy $\epsilon_{\text{in}}$ for a strongly convex QP with $\epsilon=0.01$.} \label{dfgm_sensitivity} \end{figure} \section{How to recover an $\epsilon$-primal solution in DuQuad} \noindent It is natural to investigate how to recover an $\epsilon$-primal solution for the original QP \eqref{original_primal}. Since dual suboptimality is given in the last dual iterate $\lambda^k$, a natural choice for an approximate primal solution is the last primal iterate generated by Algorithm {\bf DFOM} at $\lambda^k$, i.e.: \begin{equation}\label{last_primal} \bar{u}_\epsilon^k = \bar{u}(\lambda^k). \end{equation} Note that the last primal iterate $\bar{u}_\epsilon^k = \bar{u}(\lambda^k)$ coincides with $\bar{u}^k = \bar{u}(\mu^k)$ for Algorithm \textbf{DGM}. However, for Algorithm \textbf{DFGM} these two sequences are different, i.e. $\bar{u}_\epsilon^k \not = \bar{u}^k$.
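To illustrate how the last primal iterate \eqref{last_primal} is produced, the following self-contained sketch runs {\bf DFOM} in its {\bf DGM} form ($\beta_k=0$) on an equality-constrained instance ($\mathcal{K}=\{0\}$, $\rho=0$, $Q \succ 0$) with a warm-started projected gradient inner loop; everything here is illustrative and not taken from the DuQuad sources:

```python
import numpy as np

# DFOM with beta_k = 0 (DGM) for K = {0}, rho = 0, Q > 0, returning the
# last primal iterate u_bar(lam^k) and the final multiplier.
def dgm_last_iterate(Q, q, G, g, lb, ub, outer=500, inner=100):
    Ld = np.linalg.norm(G, 2) ** 2 / np.linalg.eigvalsh(Q).min()  # dual Lipschitz
    Lin = np.linalg.eigvalsh(Q).max()                             # inner Lipschitz
    lam = np.zeros(G.shape[0])
    u = np.zeros(Q.shape[0])
    for _ in range(outer):
        # Step 1: approximately minimize L_0(u, lam) over the box U,
        # warm-started from the previous inner solution.
        q_lam = q + G.T @ lam
        for _ in range(inner):
            u = np.clip(u - (Q @ u + q_lam) / Lin, lb, ub)
        # Steps 2-3 with beta_k = 0: dual gradient ascent; K_d = R^p,
        # so no projection of the multipliers is needed.
        lam = lam + (G @ u + g) / (2.0 * Ld)
    return u, lam
```

On the instance below (minimum-norm point on the hyperplane $u_1+u_2=1$) the last primal iterate is both nearly feasible and nearly optimal.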
We will show below that the last primal iterate $\bar{u}_\epsilon^k$ is a $\sqrt{\epsilon}$-primal solution for the original QP \eqref{original_primal}, provided that $ F^* - d_\rho(\lambda^k) \leq \epsilon$. We can also construct an approximate primal solution based on an average of all previous primal iterates generated by Algorithm {\bf DFOM}, i.e.: \begin{equation}\label{average_primal} \hat{u}_\epsilon^k = \sum_{j=1}^k \frac{\theta_j \bar{u}^j}{S_k}, \quad S_k=\sum_{j=1}^k \theta_j. \end{equation} Recall that $\theta_j=1$ in Algorithm \textbf{DGM} and $\theta_j$ is updated according to the rule $\theta_{j+1} = \frac{1 + \sqrt{1 + 4 \theta_j^2 }}{2}$ and $\theta_1=1$ in Algorithm \textbf{DFGM}. In the sequel, we prove that the averaged primal sequence $\hat{u}_\epsilon^k$ is an $\epsilon$-primal solution for the original QP \eqref{original_primal}, provided that $ F^* - d_\rho(\lambda^k) \leq \epsilon$. \noindent Before proving primal rates of convergence for Algorithm \textbf{DFOM}, we derive a bound on $\norm{\lambda^{k+1} - \lambda^*}$, where $\lambda^k$ is generated by Algorithm \textbf{DFOM}; this bound will be used in the proofs of our convergence results.
In the case of \textbf{DGM}, using its particular iteration, for any $\lambda \in \mathcal{K}_d$, we have: \begin{align} \norm{\lambda^{k+1}-\lambda}^2 &= \norm{\lambda^k - \lambda}^2 + 2\langle \lambda^{k+1} - \lambda^k, \lambda^k - \lambda \rangle + \norm{\lambda^{k+1} - \lambda^k}^2 \nonumber\\ & = \norm{\lambda^k - \lambda}^2 + 2\langle \lambda^{k+1} - \lambda^k, \lambda^{k+1}- \lambda \rangle - \norm{\lambda^{k+1} - \lambda^k}^2 \nonumber\\ & \le \norm{\lambda^k - \lambda}^2 + \frac{1}{L_{\text{d}}} \langle \bar{\nabla} d_{\rho}(\lambda^k), \lambda^{k+1} - \lambda \rangle - \norm{\lambda^{k+1} - \lambda^k}^2 \nonumber\\ & = \norm{\lambda^k - \lambda}^2 + \frac{1}{L_{\text{d}}} \langle \bar{\nabla} d_{\rho}(\lambda^k) , \lambda^k - \lambda \rangle \label{rel_seq1}\\ & \quad \quad + \frac{1}{L_{\text{d}}} \left(\langle \bar{\nabla} d_{\rho}(\lambda^k), \lambda^{k+1} - \lambda^{k} \rangle - L_{\text{d}} \norm{\lambda^{k+1} - \lambda^k}^2\right) \nonumber\\ & \le \norm{\lambda^k -\lambda}^2 + \frac{1}{L_{\text{d}}}(d_{\rho}(\lambda^{k+1}) - d_{\rho}(\lambda)) + \frac{\epsilon_{\text{in}}}{L_{\text{d}}}.\nonumber \end{align} Taking now $\lambda = \lambda^*$ and using an inductive argument, we get: \begin{equation}\label{bound_seq_dgm} \norm{\lambda^{k} - \lambda^*} \le R_d + \sqrt{\frac{k\epsilon_{\text{in}}}{L_{\text{d}}}}. \end{equation} On the other hand, for the scheme \textbf{DFGM}, we introduce the notation $l^k = \lambda^{k-1} + \theta_{k}(\lambda^k - \lambda^{k-1})$ and present an auxiliary result: \begin{lemma}\cite{NecPat:15,Tse:08} \label{th_tseng_2} Let $(\lambda^k, \mu^k)$ be generated by Algorithm \textbf{DFOM}($d_{\rho},\mathcal{K}_d$) with $\theta_{k+1} = \frac{1 + \sqrt{1 + 4 \theta_k^2 }}{2}$, then for any Lagrange multiplier $\lambda \in \rset^p$ we have: \begin{align*} \theta_{k}^2 (d_{\rho}(\lambda) - d_{\rho}(\lambda^{k})) \!+\! \sum\limits_{j=0}^{k}\theta_{j}\Delta(\lambda,\mu^j) \!+\! L_\text{d} \norm{l^{k} - \lambda}^2 \!\le\! 
L_\text{d} \norm{\lambda^0 - \lambda}^2 + 2 \! \sum\limits_{j=0}^{k}\theta_j^2 \epsilon_{\text{in}}, \end{align*} for all $k \ge 0$, where $\Delta(\lambda,\mu) = \bar{d}_{\rho}(\mu) + \langle \bar{\nabla} d_{\rho}(\mu), \lambda - \mu\rangle - d_{\rho}(\lambda)$. \end{lemma} \noindent Using this result and reasoning similar to that in \cite{NecPat:15}, we obtain the same relation \eqref{bound_seq_dgm} for the scheme \textbf{DFGM}. Moreover, for simplicity, in the sequel we also assume $\lambda^0=0$. In the next two sections we derive rate of convergence results for Algorithm \textbf{DFOM} in both primal sequences: the last primal iterate \eqref{last_primal} and the average of primal iterates \eqref{average_primal}. \subsection{The $\sqrt{\epsilon}$ convergence in the last primal iterate $\bar{u}_{\epsilon}^k$} In this section we present rate of convergence results for Algorithm \textbf{DFOM}, in terms of primal suboptimality and infeasibility, for the last primal iterate $\bar{u}_{\epsilon}^k$ defined in \eqref{last_primal}, provided that the relations \eqref{choose_ein} hold. \begin{theorem} Let $\epsilon> 0$ be some desired accuracy and $\bar{u}_{\epsilon}^k = \bar{u}(\lambda^k)$ be the last primal iterate sequence generated by Algorithm \textbf{DFOM}($d_{\rho}, \mathcal{K}_d$) using the inner accuracy from \eqref{choose_ein}. Then, after the number of outer iterations $k_{\text{out}}$ given in \eqref{choose_ein}, $\bar{u}_{\epsilon}^{k_{\text{out}}}$ is $\sqrt{\epsilon}$-primal optimal for the original QP \eqref{original_primal}. \end{theorem} \begin{proof} Using the Lipschitz property of the gradient of $d_{\rho}(\cdot)$, it is known that the following inequality holds \cite{Nes:04}: \begin{equation*} d_{\rho}(\lambda) \le d_{\rho}(\mu) + \langle \nabla d_{\rho}(\mu), \lambda - \mu \rangle - \frac{1}{2L_d} \norm{\nabla d_{\rho}(\lambda) - \nabla d_{\rho} (\mu)}^2 \quad \forall \lambda, \mu \in \rset^p.
\end{equation*} Taking $\mu = \lambda^*$ and using the optimality condition $\langle \nabla d_{\rho}(\lambda^*), \mu - \lambda^*\rangle \le 0$ for all $\mu \in \mathcal{K}_d$, we further have: \begin{equation}\label{grad_funcbound} \norm{\nabla d_{\rho}(\lambda) - \nabla d_{\rho}(\lambda^*)} \le \sqrt{2L_d (F^* - d_{\rho}(\lambda))} \qquad \forall \lambda \in \mathcal{K}_d. \end{equation} Considering $\lambda = \lambda^k$ and observing that $s(\bar{u}(\lambda^k),\lambda^k) + Gu^* + g \in \mathcal{K}$, we obtain a link between primal feasibility and dual suboptimality: \begin{align}\label{aux_feas_bound} \text{dist}_{\mathcal{K}}(G \bar{u}^k_{\epsilon}+g) & \le \left\| G \bar{u}(\lambda^k) + g - s(\bar{u}(\lambda^k),\lambda^k) - G u^* - g \right\| \nonumber\\ & = \norm{\bar{\nabla} d_{\rho}(\lambda^k) - \nabla d_{\rho}(\lambda^*)} \nonumber \\ &\le \norm{\bar{\nabla} d_{\rho}(\lambda^k) - \nabla d_{\rho} (\lambda^k)} + \norm{\nabla d_{\rho}(\lambda^k) - \nabla d_{\rho}(\lambda^*)} \nonumber\\ & \overset{\eqref{grad_funcbound} + \eqref{inexact_gradient_rel}}{\le} \sqrt{2 L_d \epsilon_{\text{in}}} + \sqrt{2L_d (F^* - d_{\rho}(\lambda^k))}. \end{align} \noindent Provided that $F^* - d_{\rho}(\lambda^{k_{\text{out}}}) \le \epsilon$ and using $\epsilon_{\text{in}}$ as in \eqref{choose_ein}, we obtain: \begin{equation}\label{pfeas_subopt_final} \text{dist}_{\mathcal{K}}(G \bar{u}^{k_{\text{out}}}_{\epsilon} + g) \le \max \left\{ \frac{(L_d \epsilon)^{1/2}}{\sqrt{2}}, \frac{L_d^{1/4}}{(3R_d)^{1/2}} \epsilon^{3/4} \right\} + (2L_d \epsilon)^{1/2}. \end{equation} \noindent Secondly, we find a link between primal and dual suboptimality.
Indeed, since $\lambda^* \in \mathcal{K}_d$, we have: \begin{align}\label{aux_subopt_left} F^* & = \min_{u \in U, s \in \mathcal{K}} F(u) + \langle \lambda^*, Gu + g - s\rangle \nonumber \\ &\leq F(\bar{u}(\lambda^k)) + \left \langle \lambda^*, G \bar{u}(\lambda^k) + g - \left[G \bar{u}(\lambda^k) + g\right]_{\mathcal{K}} \right \rangle. \end{align} Further, using the Cauchy-Schwarz inequality, we derive: \begin{align} \label{psubopt_left} F(\bar{u}^{k_{\text{out}}}_{\epsilon}) - F^* & \geq - \|\lambda^*\| \text{dist}_{\mathcal{K}}(G \bar{u}(\lambda^{k_{\text{out}}}) + g) \nonumber \\ & \ge - R_d\max \left\{ \frac{(L_d \epsilon)^{1/2}}{\sqrt{2}}, \frac{L_d^{1/4}}{(3R_d)^{1/2}} \epsilon^{3/4} \right\} - R_d(2L_d \epsilon)^{1/2}. \end{align} \noindent On the other hand, from the concavity of $d_{\rho}(\cdot)$ we obtain: \begin{align}\label{aux_subopt_right} &F(\bar{u}(\lambda^k)) - F^* \le \bar{d}_{\rho}(\lambda^k) - F^* - \langle \bar{\nabla} d_{\rho}(\lambda^k), \lambda^k \rangle \nonumber\\ &\le d_{\rho}(\lambda^k) -F^* - \langle \nabla d_{\rho}(\lambda^*), \lambda^k \rangle + \langle \nabla d_{\rho}(\lambda^*) - \bar{\nabla} d_{\rho}(\lambda^k), \lambda^k \rangle \nonumber + \epsilon_{\text{in}} \nonumber \\ &\le d_{\rho}(\lambda^k) -F^* - \langle \nabla d_{\rho}(\lambda^*), \lambda^k -\lambda^*\rangle + \norm{\nabla d_{\rho}(\lambda^*) - \bar{\nabla} d_{\rho}(\lambda^k)}\norm{ \lambda^k} + \epsilon_{\text{in}} \nonumber \\ & \le \norm{\lambda^k} \norm{\bar{\nabla} d_{\rho}(\lambda^k) - \nabla d_{\rho}(\lambda^*)} + \epsilon_{\text{in}} \nonumber \\ & \overset{\eqref{aux_feas_bound}}{\le} \norm{\lambda^k}\sqrt{2 L_d \epsilon_{\text{in}}} + \norm{\lambda^k}\sqrt{2L_d (F^* - d_{\rho}(\lambda^k))} + \epsilon_{\text{in}}.
\end{align} Taking $k = k_{\text{out}}$ and $\epsilon_{\text{in}}$ as in \eqref{choose_ein}, based on \eqref{bound_seq_dgm} and on the implicit assumption that $k_{\text{out}} \ge 1$, we observe that $\norm{\lambda^{k_{\text{out}}}} \le \norm{\lambda^{k_{\text{out}}} - \lambda^*} + \norm{\lambda^*} \le 4 R_d$ for both schemes \textbf{DGM} and \textbf{DFGM}. Therefore, \eqref{aux_subopt_right} implies: \begin{equation*} F(\bar{u}^{k_{\text{out}}}_{\epsilon}) - F^* \overset{\eqref{pfeas_subopt_final}}{\le} 4R_d \max \left\{ \frac{(L_d \epsilon)^{1/2}}{\sqrt{2}}, \frac{L_d^{1/4}}{(3R_d)^{1/2}} \epsilon^{3/4} \right\} + 4R_d(2L_d \epsilon)^{1/2} + \epsilon_{\text{in}}. \end{equation*} \noindent In conclusion, from \eqref{psubopt_left} and the previous inequality, we get the bound: \begin{align*} | F(\bar{u}_{\epsilon}^{k_{\text{out}}}) - F^* | \le 4R_d \max \left\{ \frac{(L_d \epsilon)^{1/2}}{\sqrt{2}}, \frac{L_d^{1/4}}{(3R_d)^{1/2}} \epsilon^{3/4} \right\} + 4R_d(2L_d \epsilon)^{1/2} + \epsilon_{\text{in}}, \end{align*} which implies $|F(\bar{u}^{k_{\text{out}}}_{\epsilon}) - F^*| \le \mathcal{O}(\sqrt{\epsilon})$. Using this fact and the feasibility bound \eqref{pfeas_subopt_final}, which also implies $\text{dist}_{\mathcal{K}} (G \bar{u}^{k_{\text{out}}}_{\epsilon} + g) \le \mathcal{O}(\sqrt{\epsilon})$, we finally conclude that the last primal iterate $\bar{u}^{k_{\text{out}}}_{\epsilon}$ is $\sqrt{\epsilon}$-primal optimal. \qed \end{proof} \noindent We can also prove linear convergence for Algorithm \textbf{DFOM} provided that $\lambda_{\min} (Q) > 0$ (i.e. the objective function is smooth and strongly convex) and $U = \rset^n$ (i.e. the inner problem is unconstrained). In this case we can show that the dual problem satisfies an error bound property \cite{NecNed:15,NecPat:15}. Under these settings, \textbf{DFOM} converges linearly (see \cite{NecNed:15,NecPat:15,WanLin:13} for more details).
\subsection{The $\epsilon$ convergence in the average of primal iterates $\hat{u}_{\epsilon}^k$} Further, we analyze the convergence of the algorithmic framework \textbf{DFOM} in the average of primal iterates $\hat{u}^k_{\epsilon}$ defined in \eqref{average_primal}. Since we consider different primal average iterates for the schemes \textbf{DGM} and \textbf{DFGM}, we analyze separately the convergence of these methods in $\hat{u}^k_{\epsilon}$. \begin{theorem} Let $\epsilon> 0$ be some desired accuracy and $\hat{u}_{\epsilon}^k$ be the primal average iterate given in \eqref{average_primal}, generated by Algorithm \textbf{DGM}, i.e. Algorithm \textbf{DFOM}($d_{\rho}, \mathcal{K}_d$) with $\theta_k = 1$ for all $k \ge 0$, using the inner accuracy from \eqref{choose_ein}. Then, after the number of outer iterations $k_{\text{out}}$ given in \eqref{choose_ein}, $\hat{u}^{k_{\text{out}}}_{\epsilon}$ is $\epsilon$-primal optimal for the original QP \eqref{original_primal}. \end{theorem} \begin{proof} First, we derive sublinear estimates for primal infeasibility for the average primal sequence $\hat{u}^k_{\epsilon}$ (recall that in this case $\hat{u}^k_{\epsilon} = \frac{1}{k+1}\sum\limits_{j=0}^{k} \bar{u}^j$). Given the definition of $\lambda^{j+1}$ in Algorithm \textbf{DFOM}($d_{\rho}, \mathcal{K}_d$) with $\theta_j = 1$, we get: \[ \lambda^{j+1} = \left[ \lambda^j + \frac{1}{2L_{\text{d}}} \bar{\nabla} d_{\rho}(\lambda^j) \right]_{\mathcal{K}_d} \quad \forall j \geq 0.\] Subtracting $\lambda^j$ from both sides and summing the resulting equality for $j=0$ to $j=k$, we obtain: \begin{align}\label{bound_feas} \left\|\frac{1}{k+1}\sum_{j=0}^k \left( \left[\lambda^j + \frac{1}{2L_d} \bar{\nabla} d_{\rho}(\lambda^j) \right]_{\mathcal{K}_d} -\lambda^j \right) \right\| = \frac{1}{k+1}\norm{\lambda^{k+1} - \lambda^0}.
\end{align} If we denote $z^j = \lambda^j + \frac{1}{2L_d}\bar{\nabla} d_{\rho}(\lambda^j) - \left[ \lambda^j + \frac{1}{2L_d}\bar{\nabla} d_{\rho}(\lambda^j) \right]_{\mathcal{K}_d}$, then we observe that $z^j \in \mathcal{K}$. Thus, we have $\frac{2L_d}{k+1}\sum\limits_{j=0}^{k} z^j \in \mathcal{K}$. Using the definition of $\bar{\nabla} d_{\rho}(\lambda^j)$, we obtain: \begin{align*} \text{dist}_{\mathcal{K}}(G\hat{u}^k_{\epsilon} + g) &\le \left\| \frac{1}{k+1}\sum_{j=0}^k (G \bar{u}^j + g) - \frac{1}{k+1}\sum\limits_{j=0}^k \left(2L_d z^j + s(\bar{u}^j,\lambda^j) \right) \right\| \nonumber \\ & = \left\| \frac{1}{k+1} \sum \limits_{j=0}^k (\bar{\nabla} d_{\rho}(\lambda^j) - 2L_d z^j) \right\| \overset{\eqref{bound_feas}}{=} \frac{2 L_{\text{d}}}{k+1}\norm{\lambda^{k+1} - \lambda^0}. \end{align*} Using $\norm{\lambda^k - \lambda^0} \le \norm{\lambda^k - \lambda^*} + R_d$ and the bound \eqref{bound_seq_dgm} for the values $\epsilon_{\text{in}}$ and $k = k_{\text{out}}$ from \eqref{choose_ein} in the previous inequality, we get: \begin{equation}\label{feasibility_final} \text{dist}_{\mathcal{K}}(G\hat{u}^{k_{\text{out}}}_{\epsilon}+g) \le \frac{4L_{\text{d}} R_{\text{d}}}{k_{\text{out}}} + 2\sqrt{\frac{L_{\text{d}}\epsilon_{\text{in}}}{k_{\text{out}}}} \le \frac{\epsilon}{R_d}. \end{equation} \noindent It remains to estimate the primal suboptimality. First, to bound $F(\hat{u}^{k_{\text{out}}}_{\epsilon}) - F^*$ from below, we proceed as follows: \begin{align*} F^* &= \min\limits_{u \in U, s \in \mathcal{K}} F(u) + \langle \lambda^*, Gu+g -s \rangle \nonumber\\ & \le F(\hat{u}^k_{\epsilon}) + \langle \lambda^*, G\hat{u}^k_{\epsilon} + g - \left[G\hat{u}^k_{\epsilon} + g\right]_{\mathcal{K}} \rangle \nonumber\\ & \le F(\hat{u}^k_{\epsilon}) + \norm{\lambda^*} \norm{G\hat{u}^k_{\epsilon}+g - \left[G\hat{u}^k_{\epsilon} + g\right]_{\mathcal{K}}}\nonumber\\ & \le F(\hat{u}^k_{\epsilon}) + R_d \; \text{dist}_{\mathcal{K}} \left(G\hat{u}^k_{\epsilon} + g\right).
\end{align*} Combining the last inequality with \eqref{feasibility_final}, we obtain: \begin{equation}\label{left_subopt} - \epsilon \le F(\hat{u}^{k_{\text{out}}}_{\epsilon}) - F^*. \end{equation} Secondly, we observe that $d_{\rho}(\lambda) \le F^*$ for any $\lambda \in \mathcal{K}_d$ and that the following identity holds: \begin{align}\label{identity_lag} \bar{d}_{\rho}(\lambda) - \langle \bar{\nabla} d_{\rho}(\lambda), \lambda \rangle = F(\bar{u}(\lambda)) + \frac{\rho}{2}\norm{\bar{\nabla} d_{\rho}(\lambda)}^2 \ge F(\bar{u}(\lambda)). \end{align} \noindent Based on the previous discussion, \eqref{rel_seq1} and \eqref{identity_lag}, we derive that \begin{align*} & \norm{\lambda^{k+1} - \lambda}^2 \\ & \overset{\eqref{rel_seq1}}{\le} \norm{\lambda^k - \lambda}^2 + \frac{1}{L_{\text{d}}} \left( d_{\rho}(\lambda^{k+1}) - \bar{d}_{\rho}(\lambda^k) + \langle \bar{\nabla} d_{\rho}(\lambda^k), \lambda^k - \lambda \rangle + \epsilon_{\text{in}} \right)\\ & \overset{\eqref{identity_lag}}{\le} \norm{\lambda^k - \lambda}^2 + \frac{1}{L_{\text{d}}}\left( F^* - F(\bar{u}^k) - \frac{\rho}{2}\norm{\bar{\nabla} d_{\rho}(\lambda^k)}^2 + \epsilon_{\text{in}} - \langle \bar{\nabla} d_{\rho}(\lambda^k), \lambda \rangle \right). \end{align*} Taking now $\lambda=0$, $k = k_{\text{out}}$ and using an inductive argument, we obtain: \begin{equation}\label{right_subopt} F(\hat{u}^{k_{\text{out}}}_{\epsilon}) - F^* \le \frac{L_{\text{d}}\norm{\lambda^0}^2}{k_{\text{out}}} + \epsilon_{\text{in}} \le \frac{\epsilon}{4}, \end{equation} provided that $\lambda^0=0$. From \eqref{feasibility_final}, \eqref{left_subopt} and \eqref{right_subopt}, we obtain that the average primal iterate $\hat{u}^{k_{\text{out}}}_{\epsilon}$ is $\epsilon$-primal optimal.
\qed \end{proof} \noindent Further, we analyze the primal convergence rate of Algorithm \textbf{DFGM} in the average primal iterate $\hat{u}_{\epsilon}^k$: \begin{theorem} Let $\epsilon> 0$ be some desired accuracy and $\hat{u}_{\epsilon}^k$ be the primal average iterate given in \eqref{average_primal}, generated by Algorithm \textbf{DFGM}, i.e. Algorithm \textbf{DFOM}($d_{\rho}, \mathcal{K}_d$) with $\theta_{k+1} = \frac{1 + \sqrt{1 + 4 \theta_k^2}}{2}$ for all $k \geq 0$, using the inner accuracy from \eqref{choose_ein}. Then, after the number of outer iterations $k_{\text{out}}$ given in \eqref{choose_ein}, $\hat{u}^{k_{\text{out}}}_{\epsilon}$ is $\epsilon$-primal optimal for the original QP \eqref{original_primal}. \end{theorem} \begin{proof} Recall that we have defined $S_k = \sum\limits_{j=0}^k \theta_j$. Then it follows that: \begin{equation}\label{fg_step} \frac{k+1}{2} \le \theta_k \le k \qquad \text{and} \qquad S_k = \theta_k^2. \end{equation} \noindent For any $j \ge 0$ we denote $z^j = \mu^{j} + \frac{1}{2L_{\text{d}}} \bar{\nabla} d_{\rho}(\mu^{j})$ and thus we have $\lambda^j = \left[ z^j \right]_{\mathcal{K}_d}$. In these settings, we have the following relations: \begin{align} \label{feasibility_aux1} \theta_j & \left( \frac{1}{2L_{\text{d}}} \bar{\nabla} d_{\rho}(\mu^j)- (z^j - [z^j]_{\mathcal{K}_d})\right) \nonumber\\ & = \theta_j \left(\left[ \mu^{j} + \frac{1}{2L_{\text{d}}} \bar{\nabla} d_{\rho}(\mu^{j}) \right]_{\mathcal{K}_d} - \mu^{j}\right) \nonumber\\ & = \theta_{j}(\lambda^{j} - \mu^{j})\nonumber\\ & = \theta_{j}(\lambda^{j} - \lambda^{j-1}) + (\theta_{j-1} -1)( \lambda^{j-2} - \lambda^{j-1})\nonumber\\ & = \underbrace{\lambda^{j-1} + \theta_{j}(\lambda^{j} - \lambda^{j-1})}_{=l^{j}} - \underbrace{(\lambda^{j-2} + \theta_{j-1}(\lambda^{j-1} - \lambda^{j-2}))}_{=l^{j-1}}. \end{align} \noindent For simplicity, consider $\lambda^{-2} = \lambda^{-1} = \lambda^0$ and $\theta_{-1} = \theta_0 = 0$.
Adding up the above equality for $j = 0$ to $j = k$, multiplying by $\frac{2L_{\text{d}}}{S_k}$, and observing that $s(\bar{u}^j,\mu^j) + z^j - [z^j]_{\mathcal{K}_d} \in \mathcal{K}$ for all $j \ge 0$, we obtain: \begin{align*} \text{dist}_{\mathcal{K}}\left(G\hat{u}^k_{\epsilon} + g \right) &\le \left\| \sum\limits_{j=0}^k \frac{\theta_j}{S_k} \left( G \bar{u}^j + g - s(\bar{u}^j,\mu^j) - 2 L_{\text{d}}(z^j-[z^j]_{\mathcal{K}_d}) \right)\right\| \\ & = \left\| \sum\limits_{j=0}^k \frac{\theta_j}{S_k} \left( \bar{\nabla} d_{\rho}(\mu^j) - 2 L_{\text{d}}(z^j-[z^j]_{\mathcal{K}_d}) \right)\right\| \\ & \overset{\eqref{feasibility_aux1}}{=} \frac{L_\text{d}}{S_k}\norm{l^{k}-l^0} \le \frac{4L_\text{d}}{(k+1)^2} \norm{l^k -l^{0}}. \end{align*} \noindent Taking $\lambda =\lambda^*$ in Lemma \ref{th_tseng_2} and using that the two terms $\theta_{k}^2 (F^* - d_{\rho}(\lambda^{k}))$ and $\sum\limits_{j=0}^{k}\theta_{j} \Delta(\lambda^*,\mu^j)$ are nonnegative, we get: \begin{align*} \| l^k - \lambda^* \| &\le \sqrt{\| \lambda^0 - \lambda^*\|^2 + \sum\limits_{i=1}^{k} \frac{2\theta_i^2\epsilon_{\text{in}}}{L_{\text{d}}}} \le \| \lambda^0 - \lambda^*\| + \sqrt{\frac{8\epsilon_{\text{in}}}{3L_{\text{d}}} (k+1)^3}\\ & \le \| \lambda^0 - \lambda^* \| + \left(\frac{8\epsilon_{\text{in}}}{3L_{\text{d}}}\right)^{1/2} (k+1)^{3/2} \qquad \forall k \geq 0. \end{align*} \noindent Thus, we can further bound the primal infeasibility as follows: \begin{align} \text{dist}_{\mathcal{K}}\left(G\hat{u}^k_{\epsilon} + g \right) &\le \frac{4L_\text{d}}{(k+1)^2} \|l^k - l^0\| \le \frac{4L_\text{d}}{(k+1)^2}(\|l^k - \lambda^*\| + R_d) \nonumber\\ & \le \frac{8L_\text{d}R_{\text{d}}}{(k+1)^2} + 8 \left( \frac{L_d \epsilon_{\text{in}} }{k+1} \right)^{1/2}.
\label{infes_av} \end{align} Therefore, using $k_{\text{out}}$ and $\epsilon_{\text{in}}$ from \eqref{choose_ein}, it can be derived that: \begin{equation}\label{infes_av_final} \text{dist}_{\mathcal{K}}(G \hat{u}^{k_{\text{out}}}_{\epsilon} + g) \le \frac{8L_{\text{d}} R_{\text{d}}}{k_{\text{out}}^2} + 8 \left(\frac{L_d \epsilon_{\text{in}}}{k_{\text{out}}} \right)^{1/2} \le \frac{3 \epsilon}{R_d}. \end{equation} \noindent Further, we derive sublinear estimates for primal suboptimality. First, note the following relations: \begin{align*} \Delta(\lambda, & \mu^{k}) = \bar{d}_{\rho}(\mu^{k}) + \langle \bar{\nabla} d_{\rho}(\mu^{k}), \lambda - \mu^{k}\rangle - d_{\rho}(\lambda) \\ &= \mathcal{L}_{\rho}(\bar{u}^{k},\mu^{k}) + \langle \bar{\nabla}d_{\rho}(\mu^{k}) , \lambda - \mu^{k}\rangle - d_{\rho} (\lambda) \\ & = F(\bar{u}^k) + \langle \lambda, G\bar{u}^k + g - s(\bar{u}^k,\mu^k) \rangle + \frac{\rho}{2}\norm{G \bar{u}^k +g - s(\bar{u}^k,\mu^k)}^2 - d_{\rho}(\lambda)\\ & \ge \min\limits_{s \in \mathcal{K}} \; \; F(\bar{u}^k) + \langle \lambda, G\bar{u}^k + g - s\rangle + \frac{\rho}{2}\norm{G \bar{u}^k +g - s}^2 - d_{\rho}(\lambda)\\ & = \mathcal{L}_{\rho}(\bar{u}^{k},\lambda) - d_{\rho} (\lambda). \end{align*} \noindent Summing on the history and using the convexity of $\mathcal{L}_{\rho}(\cdot,\lambda)$, we get: \begin{align} \sum\limits_{j=0}^{k}&\theta_j \Delta(\lambda,\mu^j) \ge \sum\limits_{j=1}^{k}\theta_j ( \mathcal{L}_{\rho}(\bar{u}^{j},\lambda) - d_{\rho}(\lambda))\nonumber\\ &\ge S_{k} \left( \mathcal{L}_{\rho} (\hat{u}^{k}_{\epsilon},\lambda) -d_{\rho} (\lambda)\right) = \theta_{k}^2 \left( \mathcal{L}_{\rho} (\hat{u}^{k}_{\epsilon} , \lambda) - d_{\rho} (\lambda)\right). 
\label{sum_theta_aux_ag} \end{align} Using \eqref{sum_theta_aux_ag} in Lemma \ref{th_tseng_2}, and dropping the term $L_{\text{d}}\norm{l^{k}- \lambda}^2$, we have: \begin{align}\label{subopt_right_aux} F(\hat{u}^{k}_{\epsilon}) + \langle G\hat{u}^{k}_{\epsilon} +g - s(\hat{u}^k_{\epsilon},\lambda),& \lambda \rangle - d_{\rho}(\lambda^{k}) \le \frac{L_\text{d}}{\theta_{k}^2}\norm{\lambda^0- \lambda}^2 + \frac{2\sum\limits_{j=0}^{k} \theta_j^2}{\theta_{k}^2} \epsilon_{\text{in}}. \end{align} Moreover, we have that: \[\frac{1}{\theta_{k}^2}\sum\limits_{j=0}^{k} \theta_j^2 = \frac{1}{S_{k}} \sum\limits_{j=0}^{k} \theta_j^2 \le \max\limits_{0\le j \le k} \theta_j \le k \quad \text{and} \quad d_{\rho} (\lambda^{k}) \le F^*. \] Now, by choosing the Lagrange multiplier $\lambda = 0$ and $k = k_{\text{out}}$ in \eqref{subopt_right_aux}, we have: \begin{align}\label{subopt_right_av} F(&\hat{u}^{k_{\text{out}}}_{\epsilon}) - F^* \le F(\hat{u}^{k_{\text{out}}}_{\epsilon}) - d_{\rho} (\lambda^{k_{\text{out}}}) \le \frac{2L_\text{d}R_{\text{d}}^2}{k^2_{\text{out}}} + 2k_{\text{out}}\epsilon_{\text{in}} \le \frac{5\epsilon}{4}. \end{align} \noindent On the other hand, we have: \begin{align*} F^* &= \min_{u \in U, s \in \mathcal{K}} F(u) + \langle \lambda^*, G u + g -s \rangle \leq F(\hat{u}^k_{\epsilon}) + \langle \lambda^*, G \hat{u}^k_{\epsilon} +g - \left[G \hat{u}^k_{\epsilon} + g \right]_{\mathcal{K}} \rangle\nonumber\\ & \le F(\hat{u}^k_{\epsilon}) + R_d \; \text{dist}_{\mathcal{K}}(G \hat{u}^k_{\epsilon} + g). \end{align*} Taking $k = k_{\text{out}}$ and $\epsilon_{\text{in}}$ from \eqref{choose_ein}, and using \eqref{infes_av_final}, we obtain: \begin{equation}\label{subopt_left_final} -3 \epsilon \le F(\hat{u}^{k_{\text{out}}}_{\epsilon}) - F^*. \end{equation} Finally, from \eqref{infes_av_final}, \eqref{subopt_right_av} and \eqref{subopt_left_final}, we get that the primal average sequence $\hat{u}^{k_{\text{out}}}_{\epsilon}$ is $\epsilon$-primal optimal.
\qed \end{proof} \noindent In conclusion, in DuQuad we generate two approximate primal solutions $\bar{u}_\epsilon^k$ and $\hat{u}_\epsilon^k$ for each of the algorithms \textbf{DGM} and \textbf{DFGM}. From the previous discussion it can be seen that, theoretically, the averaged primal sequence $\hat{u}_\epsilon^k$ behaves better than the last-iterate sequence $\bar{u}_\epsilon^k$. On the other hand, from our practical experience (see also Section \ref{numerical_tests}) we have observed that dual first order methods usually converge faster in the last primal iterate than in a primal average sequence. Moreover, from our unified analysis we can conclude that for both approaches, ordinary dual with $Q \succ 0$ and augmented dual with $Q \succeq 0$, the rates of convergence of Algorithm \textbf{DFOM} are the same. \section{Total computational complexity in DuQuad} \noindent In this section we derive the total computational complexity of the algorithmic framework \textbf{DFOM}. Without loss of generality, we make the following assumptions: $R_d>1$, $\epsilon<1$, $\lambda_{\text{max}}(Q) \ge \norm{G}^2$. However, if any of these assumptions does not hold, then our results remain valid with minor changes in the constants. Now, we are ready to derive the total number of iterations for \textbf{DFOM}, i.e. the total number of projections on the set $U$ and of matrix-vector multiplications $Qu$ and $G^T \lambda$. \begin{theorem}\label{in:th_last} Let $\epsilon> 0$ be some desired accuracy and the inner accuracy $\epsilon_{\text{in}}$ and the number of outer iterations $k_{\text{out}}$ be as in \eqref{choose_ein}.
By setting $\rho = \frac{8 R_d^2}{\epsilon}$ and assuming that the primal iterate $\bar{u}^k$ is obtained by running Algorithm \textbf{FOM}($\mathcal{L}_{\rho}(\mu^k, \cdot), U$), the iterate $\bar{u}_{\epsilon}^k$ ($\hat{u}_{\epsilon}^k$) is $\sqrt{\epsilon}$-primal ($\epsilon$-primal) optimal after a total number of projections on the set $U$ and of matrix-vector multiplications $Qu$ and $G^T \lambda$ given by: \begin{equation*} k_{\text{total}} = \begin{cases} \left\lfloor \frac{24 \norm{G} D_U R_d}{\epsilon} \right\rfloor &\text{if} \quad \sigma_{\mathcal{L}} = 0 \\ \left\lfloor \frac{16 \norm{G} R_d}{\sqrt{\sigma_{\mathcal{L}}\epsilon }} \log \left( \frac{8 \norm{G} D_U R_d}{\epsilon} \right) \right\rfloor &\text{if} \quad \sigma_{\mathcal{L}} > 0. \end{cases} \end{equation*} \end{theorem} \begin{proof} From Lemma \ref{lemma_sublin_dfg} we have that the inner problem (i.e. finding the primal iterate $\bar{u}^k$) for a given $\mu^k$ can be solved in sublinear (linear) time using Algorithm {\bf FOM}($\mathcal{L}_\rho(\mu^k,\cdot),U$), provided that the inner problem has a smooth (strongly) convex objective function, i.e. $\mathcal{L}_\rho(\mu^k,\cdot)$ has $\sigma_{\mathcal{L}} =0$ ($\sigma_{\mathcal{L}} >0$). More precisely, from Lemma \ref{lemma_sublin_dfg}, it follows that, regardless of whether we apply Algorithm \textbf{DFGM} or~\textbf{DGM}, we need to perform the following number of inner iterations for finding the primal iterate $\bar{u}^k$ for a given $\mu^k$: \[ k_{\text{in}} = \begin{cases} \sqrt{\frac{2 L_{\mathcal{L}} D_U^2}{\epsilon_{\text{in}}}}, & \text{if}\;\; \sigma_{\mathcal{L}} =0 \\ \sqrt{\frac{L_{\mathcal{L}} }{\sigma_{\mathcal{L}}}} \log\left(\frac{L_{\mathcal{L}} D_U^2}{\epsilon_{\text{in}}}\right) + 1, & \text{if}\;\; \sigma_{\mathcal{L}}>0.
\end{cases} \] \noindent Combining these estimates with the expressions \eqref{choose_ein} for the inner accuracy $\epsilon_{\text{in}}$, we obtain, in the first case $\sigma_{\mathcal{L}} = 0$, the following inner complexity estimates: \begin{equation*} k_{\text{in}} = \begin{cases}\left(\frac{8 L_{\mathcal{L}} D_U^2}{\epsilon}\right)^{1/2}, & \text{if} \;\; \textbf{DGM} \\ \frac{4 (L_{\mathcal{L}} D_U^2)^{1/2} (2L_d R_d^2)^{1/4} }{\epsilon^{3/4}}, & \text{if} \;\; \textbf{DFGM}. \end{cases} \end{equation*} Multiplying $k_{\text{in}}$ by the number of outer iterations $k_{\text{out}}$ from \eqref{choose_ein} and minimizing the product $k_{\text{in}} k_{\text{out}}$ over the smoothing parameter $\rho$ (recall that $L_{\mathcal{L}} = \lambda_{\max} (Q) + \rho \|G\|^2$ and $L_{\text{d}}= \frac{\|G\|^2}{\lambda_{\min}(Q) + \rho\|G\|^2}$), we obtain the following optimal computational complexity estimate (number of projections on the set $U$ and evaluations of $Qu$ and $G^T \lambda$): \begin{equation*} k_{\text{total}}^* = (k_{\text{out}} k_{\text{in}})^* = \frac{24 \norm{G} D_U R_d}{\epsilon}, \end{equation*} which is attained for the optimal parameter choice: \[ \rho^* = \frac{8 R_d^2}{\epsilon}. \] \noindent Using the same reasoning for the second case when $\sigma_{\mathcal{L}} > 0$, we observe that the value $\rho = \frac{8 R_d^2}{\epsilon}$ is also optimal for this case in the following sense: the estimates obtained with the exact optimal $\rho$ and with the value $\frac{8 R_d^2}{\epsilon}$ differ only in minor changes to the constants. Therefore, when $\sigma_{\mathcal{L}} > 0$, the total computational complexity (number of projections on the set $U$ and evaluations of $Qu$ and $G^T \lambda$) is: \begin{equation*} k_{\text{total}}^* = (k_{\text{out}} k_{\text{in}})^* = \frac{16 \norm{G} R_d}{\sqrt{\sigma_{\mathcal{L}}\epsilon }} \log \left( \frac{8 \norm{G} D_U R_d}{\epsilon} \right).
\end{equation*} \qed \end{proof} \noindent In conclusion, the last primal iterate $\bar{u}_{\epsilon}^k$ is $\sqrt{\epsilon}$-primal optimal after ${\cal O} (\frac{1}{\epsilon})$ (${\cal O}(\frac{1}{\sqrt{\epsilon}} \log \frac{1}{\epsilon})$) total number of projections on the set $U$ and of matrix-vector multiplications $Qu$ and $G^T \lambda$, provided that $\sigma_{\mathcal{L}} = 0$ ($\sigma_{\mathcal{L}} > 0$). Similarly, the average of primal iterates $\hat{u}_{\epsilon}^k$ is $\epsilon$-primal optimal after ${\cal O} (\frac{1}{\epsilon})$ (${\cal O}(\frac{1}{\sqrt{\epsilon}} \log \frac{1}{\epsilon})$) total number of projections on the set $U$ and of matrix-vector multiplications $Qu$ and $G^T \lambda$, provided that $\sigma_{\mathcal{L}} = 0$ ($\sigma_{\mathcal{L}} > 0$). Moreover, the optimal choice for the parameter $\rho$ is of order ${\cal O}(\frac{1}{\epsilon})$, provided that $\lambda_{\min}(Q) =0$. \subsection{What is the main computational bottleneck in DuQuad?} \noindent Let us now analyze the computational cost per inner and outer iteration of Algorithm {\bf DFOM}($d_\rho,\mathcal{K}_d$) for approximately solving the original QP \eqref{original_primal}: \vspace{0.2cm} \noindent \textbf{Inner iteration}: When we solve the inner problem with Nesterov's algorithm {\bf FOM}($\mathcal{L}_\rho(\mu,\cdot),U$), the main computational effort lies in computing the gradient of the augmented Lagrangian $\mathcal{L}_\rho(\mu,\cdot)$ defined in \eqref{auglag}, which has the form: \[ \nabla \mathcal{L}_\rho(\mu,u) = (Q + \rho G^T G)u + (q + G^T \mu + \rho G^T g). \] In DuQuad these matrix-vector operations are implemented efficiently in C (matrices that do not change along iterations are computed once, and only $G^T \mu$ is recomputed at each outer iteration). The cost for computing $\nabla \mathcal{L}_\rho(\mu,u)$ for general QPs is ${\cal O} (n^2)$. However, when the matrices $Q$ and $G$ are sparse (e.g.
network utility maximization problem) the cost ${\cal O} (n^2)$ can be reduced substantially. The other operations in Algorithm {\bf FOM}($\mathcal{L}_\rho(\mu,\cdot),U$) are just vector operations and are thus of order ${\cal O} (n)$. Thus, the dominant operation at the inner stage is the matrix-vector product. \vspace{0.2cm} \noindent \textbf{Outer iteration}: When solving the outer (dual) problem with Algorithm {\bf DFOM}($d_\rho,\mathcal{K}_d$), the main computational effort lies in computing the inexact gradient of the dual function: \[ \bar{\nabla} d_\rho(\mu) = G \bar{u}(\mu) + g - s(\bar{u}(\mu),\mu). \] The cost for computing $\bar{\nabla} d_\rho(\mu)$ for general QPs is ${\cal O} (np)$. However, when the matrix $G$ is sparse, this cost can be reduced. The other operations in Algorithm {\bf DFOM}($d_\rho,\mathcal{K}_d$) are of order ${\cal O} (p)$. Thus, the dominant operation at the outer stage is also the matrix-vector product. \noindent Fig. \ref{fig:gprof_n150_dfgm_case1} displays the result of profiling the code with gprof. In this simulation, a standard QP with inequality constraints and dimensions $n = 150$ and $p = 225$ was solved by Algorithm \textbf{DFGM}. The profiling summary is listed in the order of the time spent in each file. This figure shows that almost all the time for executing the program is spent in the library module \textit{math-functions.c}. Furthermore, \textit{mtx-vec-mul} is by far the dominating function in this list. This function multiplies a matrix with a vector. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth,height=6cm]{gprof_n150_dfgm_case1} \caption{Profiling the code with gprof.} \label{fig:gprof_n150_dfgm_case1} \end{figure} \noindent In conclusion, in DuQuad the main operations are the matrix-vector products.
Therefore, DuQuad is adequate for solving QP problems on hardware with limited resources and capabilities, since it does not require any solver for linear systems or other complicating operations, while most of the existing solvers for QPs from the literature implementing e.g. active set or interior point methods require the capability of solving linear systems. On the other hand, DuQuad can also be used for solving large-scale sparse QP problems since the iterations are very cheap in this case (only sparse matrix-vector products). \section{Numerical simulations} \label{numerical_tests} DuQuad is mainly intended for small- to medium-size dense QP problems, but it is of course also possible to use DuQuad to solve (sparse) QP instances of large dimension. \subsection{Distribution of DuQuad} The DuQuad software package is available for download from:\\ \textcolor{blue}{http://acse.pub.ro/person/ion-necoara}\\ \noindent and distributed under a general public license to allow linking against proprietary codes. Proceed to the menu point ``Software'' to obtain a zipped archive of the most current version of DuQuad. The user's manual and extensive source code documentation are available here as well.\\ \noindent An overview of the workflow in DuQuad is illustrated in Fig. \ref{fig:duquad_workflow}. A QP problem is constructed using a Matlab script called \textit{test.m}. Then, the function \textit{duquad.m} is called with the problem as input and is regarded as a preprocessing stage for the online optimization. The binary MEX file is called, with the original problem and the extra information as input. The \textit{main.c} file of the C-code includes the MEX framework and is able to convert the MATLAB data into C format. Furthermore, the converted data gets bundled into a C struct and passed as input to the algorithm that solves the~problem. \begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{duquad_workflow} \caption{DuQuad workflow.} \label{fig:duquad_workflow} \end{figure} \subsection{Numerical tests: case $\mathcal{K}=\{0\}$} \noindent We plot in Fig. \ref{fig:qp_cpu} the average CPU time for several solvers, obtained by solving $50$ random QPs with equality constraints ($Q \succeq 0$ and $\mathcal{K}=\{0\}$) for each dimension $n$, with accuracy $\epsilon=0.01$ and the stopping criterion that both $\abs{F(\hat{u}^k_\epsilon) - F^{*}}$ and $\norm{G \hat{u}^k_\epsilon + g}$ be less than the accuracy $\epsilon$. In both algorithms \textbf{DGM} and \textbf{DFGM} we consider the average of iterates $\hat{u}^k_\epsilon$. Since $Q \succeq 0$, we have chosen $\rho = {\cal O}(1/\epsilon)$. In the case of Algorithm \textbf{DGM}, at each outer iteration the inner problem is solved with accuracy $\epsilon_{\text{in}} = \epsilon$. For Algorithm \textbf{DFGM} we consider two scenarios: in the first one, the inner problem is solved with accuracy $\epsilon_{\text{in}} = 0.001$, while in the second one we use the theoretical inner accuracy \eqref{choose_ein}. We observe good behavior of Algorithm \textbf{DFGM}, comparable to that of Cplex and Gurobi. \begin{figure}[ht!] \begin{center} \vskip-0.1cm \includegraphics[width=0.55\textwidth,height=5cm]{fig_qp_cpu} \caption{Average CPU time (ms) for solving QPs ($Q \succeq 0$, $\mathcal{K}=\{0\}$) of different dimensions with several solvers.} \label{fig:qp_cpu} \end{center} \end{figure} \subsection{Numerical tests: case $\mathcal{K}=\rset^p_{-}$} \noindent We plot in Fig. \ref{fig:comparison_dfo} the number of iterations of Algorithms \textbf{DGM} and \textbf{DFGM} in the primal last and average iterates for $25$ random QPs with inequality constraints ($Q \succ 0$ and $\mathcal{K}=\rset^p_{-}$) of variable dimension ranging from $n=10$ to $n = 500$.
We choose the accuracy $\epsilon=0.01$, and the stopping criteria were $\abs{F(u) - F^{*}}$ and $\text{dist}_{\mathcal{K}}(G u + g)$ less than the accuracy $\epsilon$. From this figure we observe that the number of iterations does not vary much across the different test cases and depends only mildly on the problem's dimension. Finally, we observe that dual first order methods usually perform better in the primal last iterate than in the average of primal iterates. \begin{figure}[ht!] \begin{center} \vskip-0.1cm \includegraphics[width=1.05\textwidth,height=5cm]{comparison_bun} \caption{Number of outer iterations on random QPs ($Q \succ 0$, $\mathcal{K}=\rset^p_{-}$) for \textbf{DGM} and \textbf{DFGM} in primal last/average of iterates for different test cases of the same dimension (left) and of variable dimension~(right). } \label{fig:comparison_dfo} \end{center} \end{figure}
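The dual gradient iterations discussed above are cheap because each step requires only matrix-vector products plus a gradient update of the multipliers. The following is a minimal pure-Python sketch of dual gradient ascent for a tiny equality-constrained QP; the problem data are illustrative only, and the inner minimizer is available in closed form here because $Q$ is diagonal (this is not DuQuad's actual implementation):

```python
# Dual gradient method sketch for: min 1/2 u'Qu + q'u  s.t.  Gu + g = 0.
# Illustrative data (not from DuQuad): Q diagonal, one equality constraint.
Q = [2.0, 2.0]            # diagonal of Q
q = [-2.0, -4.0]
G = [1.0, 1.0]            # constraint: u1 + u2 - 1 = 0
g = -1.0

def inner_solution(x):
    """Minimizer of the Lagrangian L(u, x) for a fixed multiplier x."""
    return [-(q[i] + G[i] * x) / Q[i] for i in range(2)]

x = 0.0                   # dual multiplier
for _ in range(200):
    u = inner_solution(x)
    grad = sum(G[i] * u[i] for i in range(2)) + g   # dual gradient = Gu + g
    x += 0.5 * grad       # gradient ascent on the (concave) dual function

u = inner_solution(x)     # converges to u = [0.0, 1.0] with x = 2.0
```

The step size $0.5$ is below $2\lambda_{\min}(Q)/\|G\|^2$, which guarantees convergence of the dual iteration for this example.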
\section{INTRODUCTION} As the energy demand multiplies the world over and conventional energy resources deplete alarmingly, wind energy utilisation has attained greater importance from the perspective of sustainable development, as it is renewable, clean and comparatively cost effective. As a result, wind energy has already emerged as an essential constituent of the global energy mix and is set to grow to its maximum potential across countries and regions in the years to come. As wind power is directly proportional to the cube of wind speed, even the slightest variations in wind speed significantly affect the power output of wind energy generators. Since wind is a fluctuating resource in terms of availability and speed, precise wind forecasting becomes essential, especially as wind power penetration grows, for the effective management of the electricity grid to ensure quality power supply. Accurate wind forecasts on different lead time scales help wind farms in real-time grid operations, economic load dispatch planning, reserve requirement decisions, market trading, maintenance planning and the like. Wind forecasting continues to be an area of high research interest owing to its practical relevance in the ever-expanding wind energy industry. Costa et al. \cite{costa2008review} have reviewed the research in short-term wind prediction over 30 years, giving attention to forecasting methods; mathematical, statistical and physical models; as well as meteorology. Foley et al. \cite{foley2012current} and Okumus et al. \cite{okumus2016current} have conducted extensive reviews of the current methods and improvements in the field of wind power forecasting. Based on the methodology adopted, wind forecasting models are grouped mainly into physical, statistical, data learning and hybrid models. The physical models, which utilize different atmospheric parameters, are useful for identifying recurring patterns and making long-term predictions.
Statistical models assume that wind speed fluctuations are stochastic. However, it has recently been demonstrated that the underlying dynamics of the apparently random-like fluctuations of wind speed measurements is deterministic, low-dimensional and chaotic \cite{sreelekshmi2012deterministic, drisya2014deterministic,drisya2018diverse}. Hybrid models have been developed recently by combining different methods such as physical, statistical and machine learning methods to enhance prediction accuracy \cite{meng2016wind,han2017non}. Models based on artificial neural networks and other data learning techniques have also been gaining increased attention in the literature in the recent past. Mohandes et al. \cite{mohandes2004support} have compared a support vector regression (SVR) approach for wind speed prediction favourably against a multi-layer perceptron (MLP) for systems with orders 1 to 11. Liu et al. \cite{liu2014short} attempted short-term wind speed forecasting using wavelet transform and Support Vector Machines, applying a genetic algorithm for parameter optimization. Fugon et al. \cite{fugon2008data} applied linear and non-linear data mining algorithms for short-term wind power forecasting at three distinct wind farm locations in France. Lahouar et al. \cite{lahouar2017hour} tried out a Random Forest model for hour-ahead wind power prediction, tested the model with measured wind data and showed good improvement of forecast accuracy compared to classical neural network prediction. Mohandes et al. \cite{mohandes2012spatial} conducted a study within Saudi Arabia to estimate the mean monthly wind speed at certain locations using the historic mean monthly wind speed data from a number of other locations and reported good agreement between estimated and measured monthly mean wind speed values. Browell et al.
\cite{browell2018improved} attempted very short-term wind forecasting by incorporating large-scale meteorological information into a vector autoregressive model and showed improved forecasting accuracy in different case studies conducted in the United Kingdom. Various machine learning algorithms have been tested successfully to predict wind speed variations. However, almost all reported studies are location specific, as training and testing data are sourced from the same location. Apart from that, the application of such models has not been tested for time independence, as the test data considered are adjacent in time to the training data. In this work, we investigate the possibility of developing time and location independent models based on machine learning algorithms for wind speed forecasting for the wind energy industry. Such models hold practical value for the wind energy industry at locations where sufficient past data are not available for model training. The first objective of this study is to investigate the accuracy of models applied to data moving away from the training data set. The second objective is to analyse the accuracy of cross-location predictions, in which models trained using data from one location are tested against data from another location. Proper training of the forecast model with the available data is a crucial factor for its performance. In the case of wind speed time series, this raises the question of deciding the optimum size of the past data required to train the model for producing accurate time ahead predictions, since the fluctuation characteristics vary remarkably over different seasons. The theory of mutual information has been applied to estimate the optimum size of the preceding data set to be applied in the training of models and testing of forecasts.
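The lag beyond which mutual information becomes negligible can be estimated with a simple histogram-based estimator. The sketch below (pure Python; the bin count and AR(1) test series are illustrative choices, not the paper's data) computes $I(x_t; x_{t+\tau})$ between a series and its lagged copy:

```python
import math
import random

def mutual_information(series, lag, bins=8):
    """Histogram (plug-in) estimate, in nats, of the mutual information
    between x_t and x_{t+lag}; the bin count is an illustrative choice."""
    x, y = series[:len(series) - lag], series[lag:]
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0

    def bin_of(v):
        return min(int((v - lo) / width), bins - 1)

    n = len(x)
    joint, px, py = {}, [0] * bins, [0] * bins
    for a, b in zip(x, y):
        i, j = bin_of(a), bin_of(b)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    # I(X;Y) = sum_{i,j} p(i,j) * log( p(i,j) / (p(i) p(j)) )
    return sum((c / n) * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in joint.items())

# Illustrative strongly autocorrelated series: MI decays as the lag grows.
random.seed(0)
x = [0.0]
for _ in range(1999):
    x.append(0.9 * x[-1] + random.gauss(0, 1))
```

For such a series the mutual information at lag 1 is large and decays towards the estimator's bias floor at long lags, which is the behaviour used in the paper to pick the 72-point window.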
The study reported here investigates the efficacy of two uniquely trained and tested machine learning models based on (i) Support Vector Machine (SVM) and (ii) Random Forest (RF) algorithms for wind speed forecasting in the same-location as well as cross-location scenarios, wherein promising results are obtained. \begin{table} \begin{center} \begin{tabular}{ |c|c|c| } \hline Location & Latitude & Longitude \\ \hline L1 & $09^\circ 45' 30.2''$ & $77^\circ 10' 41.3''$ \\ \hline L2 & $09^\circ 59' 09.5''$ & $77^\circ 11' 50.0''$ \\ \hline L3 & $12^\circ 01' 32.7''$ & $75^\circ 20' 32.4''$\\ \hline L4 & $09^\circ 39' 11.5''$ & $76^\circ 53' 3.0''$\\ \hline L5 & $10^\circ 48' 57.7''$ & $76^\circ 40' 10.3''$\\ \hline \end{tabular} \caption{\label{tab.loc} Geographic coordinates of the wind masts used for wind data collection. } \end{center} \end{table} \begin{figure} \centering\includegraphics[width=\columnwidth]{./figures/locations.JPG} \caption{\label{fig.locations} Geographical locations of the wind measuring masts indicated on a 3D map (Courtesy: Google Maps).} \end{figure} \section{MACHINE LEARNING MODELS} Machine learning is a method by which a computational program improves its own performance with experience. In machine learning, past experience is fed to the machine as input, and it outputs a model capable of solving future problems of the same nature. Here, past experience is collected for the purpose of imparting training. An abstract target function is determined that well describes the relationship between the existing input and the desired output. Subsequently, a machine learning model is selected to approximate the target function. In the end, a suitable algorithm is used to build the model from the training examples. In this paper, two different machine learning models, namely Support Vector Machine and Random Forest, have been used for the investigations. The e1071 package in R is used in this research to develop, train and test the models \cite{e1071}.
\subsection{Support Vector Machine model} Support Vector Machine (SVM) is a supervised learning method derived from Vapnik's work on statistical learning theory, which was initially used for classification problems and later generalized for regression \cite{cristianini2000introduction,cortes1995support}. It is an optimization technique for finding a surface which maximizes the margin between two classes, based on two main ideas: the maximization of the distance between the classifying surface and the nearest elements, called support vectors, and the transformation of the input space into a higher dimensional space using a kernel function. The SVM method performs classification tasks by constructing hyperplanes in a multidimensional space that separate cases of different class labels. To construct an optimal hyperplane, SVM employs an iterative training algorithm which is used to minimize an error function. In the case of a linear classification problem, the hyperplane constraints can be expressed as \begin{eqnarray*} y_i(w \cdot x_i+b) \geq 1-\psi_i \end{eqnarray*} where $x_i \in R^n$, $y_i \in \{-1,1\}$ are the training data pairs, $w$ the coefficient vector of the classification hyperplane, $b$ the offset of the hyperplane from the origin and $\psi_i$ are the positive slack variables \cite{cortes1995support}. The optimum hyperplane is obtained by solving the dual optimization problem \begin{eqnarray*} \max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) \\ \text{subject to} \;\; \sum_{i=1}^{n} \alpha_i y_i = 0 \;\; \text{and} \;\; 0 \leq \alpha_i \leq C \end{eqnarray*} where $\alpha_i$ are Lagrange multipliers and $C$ the penalty \cite{samui2008slope}. An RBF kernel has been used in our present analysis. The SVM can also be used for regression without sacrificing its main features, and it is resistant to overfitting.
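For a tiny separable data set the dual problem above can be solved in closed form. The following pure-Python sketch uses a hypothetical two-point, hard-margin example (large $C$, linear kernel; not the paper's wind data) and recovers $w$ and $b$ from the optimal multipliers:

```python
# Closed-form SVM dual solution for a minimal two-point, hard-margin example.
x1, y1 = (1.0, 1.0), 1.0     # positive-class point
x2, y2 = (-1.0, -1.0), -1.0  # negative-class point

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

# The equality constraint a1*y1 + a2*y2 = 0 forces a1 = a2 = a, so the dual
# objective reduces to 2a - (S/2) a^2 with S as below; its maximizer is a = 2/S.
S = dot(x1, x1) + 2 * y1 * y2 * dot(x1, x2) + dot(x2, x2)
a = 2.0 / S

# Recover the primal solution: w = sum_i a_i y_i x_i, b from a support vector.
w = tuple(a * y1 * u + a * y2 * v for u, v in zip(x1, x2))
b = y1 - dot(w, x1)

# Both points sit exactly on the margin: y_i (w . x_i + b) == 1.
margin1 = y1 * (dot(w, x1) + b)
margin2 = y2 * (dot(w, x2) + b)
```

Here $w = (0.5, 0.5)$ and $b = 0$; both training points are support vectors with functional margin exactly $1$.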
\subsection{Random Forest model} The Random Forest (RF) algorithm is a non-parametric ensemble-based learning technique used for both classification and regression \cite{breiman2001random}. The decision tree algorithm works on a set of rules and the possible outcomes to form a tree-like structure. However, such a system is prone to error propagation, whereby an incorrect rule adds impurity to the subsequent nodes. The Random Forest algorithm mitigates the error diffusion process inherent in decision trees by constructing multiple trees. Random samples of the given data set are generated and fed to several tree-based learners to form a random forest. The splitting condition for each node in a tree is based only on randomly selected predictor attributes, which lowers the error rate by avoiding correlation among the trees. The successful application of the random forest regression algorithm has already been reported in many fields such as cheminformatics, speech recognition, bioinformatics, and classification and prediction in ecology \cite{svetnik2003random,xu2004random,jiang2004joint}. Random forest regression, which is non-parametric, captures the functional relationship between dependent and independent variables from the features of the data. From a given data set, the algorithm generates a forest of $n$ trees as $\{T_1(X),T_2(X),\hdots, T_n(X)\}$, using an $m$-dimensional input vector $X=(x_1,x_2,\hdots ,x_m)$. Every tree $T_i(X)$ generates an outcome $W_i=T_i(X)$. The average of all individual outputs is taken as the response of the random forest. The bagging process of selection with replacement is applied both to the samples and the attributes. Normally, bootstrap samples comprise about two thirds of the data, and the rest are known as out-of-bag samples. The combined effect of bootstrap and attribute bagging helps the algorithm to reduce misclassification error.
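The averaging of tree outputs $W_i = T_i(X)$ and the bootstrap selection with replacement can be illustrated with bagged decision stumps, the simplest possible regression trees. This is a minimal pure-Python sketch on made-up one-dimensional data, not the paper's R/e1071 implementation:

```python
import random

def fit_stump(xs, ys):
    """One-split regression tree: choose the threshold that minimizes the
    squared error, and predict the mean of each side."""
    best = (float("inf"), None, 0.0, 0.0)
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def fit_forest(xs, ys, n_trees=25):
    """Bagging: each stump is trained on a bootstrap sample (selection with
    replacement); the forest response is the average of the outputs W_i."""
    trees = []
    for _ in range(n_trees):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(t(x) for t in trees) / len(trees)

random.seed(1)
xs = list(range(10))            # made-up 1-D training data
ys = [2.0 * x for x in xs]      # noise-free increasing target
forest = fit_forest(xs, ys)
```

A real random forest also subsamples the attributes at each split; with a single attribute that step is omitted here.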
\begin{figure} \centering\includegraphics[width=\columnwidth]{figures/mutual.jpeg} \caption{\label{fig.mutual} The mutual information of the hourly wind speed time series as a function of delay.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/sameloc_3hr_kul.jpeg} \caption{\label{fig.sameloc_3hr_kul} Comparison between the actual and forecast wind speeds with repeated 3 hour ahead predictions for the same-location forecasting for location L1, using the SVM model.} \end{figure} \section{RESULTS AND DISCUSSIONS} Hourly wind speeds at a height of 80 m above ground level, measured using wind masts over a continuous period of two years (2012 and 2013) at five windy locations in the Indian state of Kerala, represented by L1 to L5 as shown in the 3D map given in Fig.~\ref{fig.locations} and summarised in Table~\ref{tab.loc}, have been utilised for the analysis in this work. These locations are geographically distributed in such a way that the locations L2, L3, L4 and L5 are at radial distances of 25 km, 321 km, 34 km and 130 km respectively from the location L1, and the triangular area formed by the locations L1, L2 and L4 comes to 385 square kilometres. Two different machine learning forecast models, built on the SVM and RF algorithms, have been employed for the analysis. The wind speed data for the first year (2012) have been used for training and validating the models, in order to ensure effective learning of the dynamics of wind flow fluctuations over a complete seasonal cycle. The wind speed data for the second year (2013) have been used for the testing of forecast results.
\begin{figure} \centering\includegraphics[width=\columnwidth]{figures/sameloc_5hr_kul.jpeg} \caption{\label{fig.sameloc_5hr_kul} Comparison between the actual and forecast wind speeds with repeated 5 hour ahead predictions for the same-location forecasting for location L1, using the SVM model.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/same_svmrmse1.jpeg} \caption{\label{fig.same_svmrmse1} Mean of RMSEs of predictions versus prediction time, for predictions at the locations L1 to L5 using the SVM model trained with data from the respective locations. } \end{figure} The extent of dependence of a value in a time series on the previous values can be estimated by calculating the mutual information between delayed time series \cite{fraser1986using}. Relying on this, the optimum length of dependency of wind speed values on past data has been determined by computing values of the mutual information, and this knowledge has been made use of in the training of models and testing of forecasts. The mutual information in the present analysis becomes negligible by 72 data points, after which the plot starts to level off, as observed in Fig.~\ref{fig.mutual}. It is therefore taken that every wind speed data point is a function of its previous 72 data points. Hence, the training of the SVM model using past data has been done for all the wind speed values in the training data segment, by inputting the 72 consecutive preceding values for each wind speed value. The same approach has been adopted while inputting the previous set of data for training the RF models as well. In the first stage, same-location predictions (where data from the same location are used in parts for training and testing) have been examined for all five locations, for all hours ahead predictions from 1 hour up to 48 hours ahead, using the SVM model. Each of the one step ahead predictions is generated by inputting the immediately preceding 72 data points into the trained model.
For 2-hour and higher hour ahead predictions, one step ahead predictions are repeated, each time using the most recently predicted value as the last of the preceding 72 input points. The predicted time series segments have been compared with the actual values along the one-year test period, and the deviations analysed using the statistical measure of Root Mean Square Error (RMSE). In the next step, all possible combinations of cross-location predictions have been examined, wherein an SVM model trained with data from one location has been employed to generate wind speed forecasts at the other four locations by inputting test data from those four locations respectively. In the subsequent stage, all the above investigations have been repeated by developing and using the RF machine learning model, and the results obtained with both models have been analysed and compared. In the typical prediction scenario, we input the past 72 wind speed data points and obtain a one step ahead prediction. Further values are predicted recursively using one step ahead predictions. Fig.~\ref{fig.sameloc_3hr_kul} shows a typical 3-hour ahead prediction up to 1000 hours ahead of the training data set at location L1 using the SVM model. A similar plot of the 5-hour ahead prediction is given in Fig.~\ref{fig.sameloc_5hr_kul}. In both these plots, the predictions are seen to be remarkably close to the original measured data, except for some over-predictions at peaks. With the SVM model trained using the data of 2012, we obtained predictions up to 48 hours ahead corresponding to each data point in 2013 by inputting the 72 previous values. These predictions were compared with the actual data to find the RMSE. Fig.~\ref{fig.same_svmrmse1} shows the mean RMSE versus prediction time, averaged over predictions with respect to each data point in 2013, for all five locations.
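The recursive multi-step scheme described above can be sketched in a few lines of pure Python; `predict_one` is a hypothetical stand-in for the trained SVM/RF model (here it simply returns the window mean), and `rmse` is the error measure used to score the predicted segments:

```python
def predict_one(window):
    # Hypothetical stand-in for the trained model: mean of the 72 inputs.
    return sum(window) / len(window)

def forecast(history, steps, window=72):
    """h-step ahead recursive forecast: each one step ahead prediction is
    fed back as the last element of the next 72-point input window."""
    buf = list(history[-window:])
    preds = []
    for _ in range(steps):
        y = predict_one(buf)
        preds.append(y)
        buf = buf[1:] + [y]        # slide the window, append the prediction
    return preds

def rmse(actual, predicted):
    """Root Mean Square Error between measured and predicted segments."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted))
            / len(actual)) ** 0.5
```

Swapping `predict_one` for a trained regressor reproduces the paper's scheme: a 3-hour ahead value is obtained by three chained one step ahead predictions.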
It may be noted that the RMSE is less than 2 m/s up to 48-hour ahead prediction for locations L1 and L3, whereas it remains below about 3 m/s for the other locations. However, up to 22 hours ahead, predictions show RMSEs of less than 2.5 m/s at all locations. \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/mal_kul_3hr.jpeg} \caption{\label{fig.mal_kul_3hr} Comparison between the actual and forecast wind speeds with 3-hour ahead predictions for the cross-location forecasting for location L1, using SVM model trained with data from location L5.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/mal_kul_5hr.jpeg} \caption{\label{fig.mal_kul_5hr} Comparison between the actual and forecast wind speeds with 5-hour ahead predictions for the cross-location forecasting for location L1, using SVM model trained with data from location L5.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/kul_rmse1.jpeg} \caption{\label{fig.kul_rmse1} Mean of RMSEs of predictions versus prediction time, for cross-location predictions at the locations L2, L3, L4 and L5 using SVM model trained with data from the location L1.
} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/precentage_SVM1.jpeg} \caption{\label{fig.precentage_SVM1} Percentage of cross-location predictions with RMSEs below 1.5 m/s, 2.5 m/s and 3.5 m/s versus prediction time, when using SVM model.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/RF_mal_kul_3hr.jpeg} \caption{\label{fig.RF_mal_kul_3hr} Comparison between the actual and forecast wind speeds with 3 hour ahead predictions for the cross-location forecasting for location L1, using RF model trained with data from location L5.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/RF_mal_kul_5hr.jpeg} \caption{\label{fig.RF_mal_kul_5hr} Comparison between the actual and forecast wind speeds with 5 hour ahead predictions for the cross-location forecasting for location L1, using RF model trained with data from location L5.} \end{figure} \begin{figure} \centering\includegraphics[width=\columnwidth]{figures/percentage1.jpeg} \caption{\label{fig.percentage1} Percentage of cross-location predictions with RMSEs below 1.5 m/s, 2.5 m/s and 3.5 m/s versus prediction time, when using SVM model (circular points) and RF model (squared points).} \end{figure} As a next step, we trained the model with data of one location for the year 2012 and applied it to predict, corresponding to each data point in 2013, up to 48 hours ahead for each of the other locations. Two sample predictions, 3-hour and 5-hour ahead, up to 1000 hours ahead of the time of the training data, are given in Figs.~\ref{fig.mal_kul_3hr} and \ref{fig.mal_kul_5hr}. When the above investigation is carried out under the cross-location scenario for the five locations, a total of 5 prediction scenarios, each with predictions for four locations using the SVM model trained with the data from the remaining location, are available for analysis.
Fig.~\ref{fig.kul_rmse1} depicts one such scenario, wherein predictions are generated for the locations L2 to L5 using the SVM model trained with the historic data from the location L1. In this case, it can be seen that the mean RMSE is less than 2.5 m/s for up to 48-hour predictions, even in the worst case of location L4. In a similar fashion, we repeated the cross-prediction sequence by selecting one location for modelling and the other locations for prediction. In order to have a better understanding of the efficacy of the cross-location predictions, a percentage-wise representation of all predictions throughout the year 2013, for all possible cross-location prediction situations, that show individual RMSE values less than 1.5 m/s, 2.5 m/s and 3.5 m/s is plotted against the prediction time in Fig.~\ref{fig.precentage_SVM1}. This plot is useful in two different ways, as it helps to determine (i) the length of time ahead prediction that is achievable for a given accuracy of prediction and (ii) the accuracy levels for a given length of time ahead prediction. For a prediction accuracy with RMSE less than 1.5 m/s, 75\% of predictions extend up to 2 hours ahead in time. For a prediction accuracy with RMSE less than 2.5 m/s, up to 16-hour ahead predictions constitute up to 75\% of the prediction samples. More than 85\% of the 48-hour ahead cross-location predictions have an RMSE of less than 3.5 m/s. Another way of interpreting the above plot is to say that, for example, if a 10-hour ahead prediction is considered, around 94\% of prediction samples show individual RMSE values below 3.5 m/s, 80\% of samples show values below 2.5 m/s and 45\% of samples show values below 1.5 m/s. In the final phase of the research, the above investigations have been repeated by employing the RF model in place of the SVM model.
The results show the predicted values to match closely with the actual wind time series dynamics, much as in the case of the SVM model, both for same-location as well as cross-location forecasting. The typical cross-location predictions presented in Fig.~\ref{fig.mal_kul_3hr} and Fig.~\ref{fig.mal_kul_5hr} are reproduced in Fig.~\ref{fig.RF_mal_kul_3hr} and Fig.~\ref{fig.RF_mal_kul_5hr} respectively, with the RF model in place of the SVM model for the forecasting, which again show comparable results in terms of wind flow dynamics and forecast accuracy. In Fig.~\ref{fig.percentage1}, the scenario in Fig.~\ref{fig.precentage_SVM1} for the SVM model is reproduced with the corresponding results generated by the RF model overlaid. As can be seen from Fig.~\ref{fig.percentage1}, both models show similar behaviour, especially for shorter time ahead predictions. If more stringent levels of forecast accuracy are desired, the SVM model shows a marginal advantage over the RF model in the shorter time ahead predictions, and vice versa in the longer time ahead predictions. \section{CONCLUSION} In this work, we investigated the prospect of employing machine learning based predictive models for the cross-location prediction of wind speed variations. We analysed wind speed data for a period of two years from 5 locations, separated by a minimum of 25 km and a maximum of 321 km, using both Support Vector Machine (SVM) and Random Forest (RF) models. The dependency of wind speed on past data has been assessed using the theory of mutual information, and this estimate has been made use of in the intelligent training of models with a proper time delay embedding matrix, and also in inputting the proper length of past data in the testing of forecasts. The results indicate that the models developed and trained here can be effectively used for wind speed forecasting on the same time series at points far away in time from the training data.
This time-independent characteristic helps avoid the need, in the usual methods, to retrain models using the immediate past data every time predictions are attempted. The research further proves that both these models, together with the methods of training and testing followed here, can generate reliable and quality cross-location short-term wind speed forecasts for a duration of 16 to 17 hours from a given point in time, across a geographical area as wide as in the present case, with not less than 75\% of such time ahead prediction samples from all along the testing period showing individual RMSE values below 2.5 m/s. In the case of one hour ahead predictions along the one-year test period of all the 20 cross-location prediction scenarios from the 5 locations, almost 95\% of such predictions show RMSE values below the same threshold. The results obtained further show that in cross-location forecasting, the RF model slightly outperforms the SVM model in the longer time ahead predictions when higher levels of accuracy are desired. The promising results obtained in the cross-location forecasting of wind speed point to the possible existence of certain collective characteristics hidden within the surface wind flow dynamics, which deserve to be studied further. From the practical perspective, the research outcome is very promising for supporting the growing wind energy industry, by helping to develop hardware instruments embedded with trained models for time as well as location independent wind speed forecasting tasks. The cross-location forecast capability makes it possible to predict wind speeds at newly identified locations, where sufficient past data are not available for model-building, by employing a model trained with historical wind speed data from a different location within that geographical area.
\section*{ACKNOWLEDGEMENT} The authors acknowledge with gratitude the use of wind data, measured by NIWE (National Institute of Wind Energy under the Ministry of New and Renewable Energy, Government of India) for and on behalf of ANERT (Agency for Non-conventional Energy and Rural Technology under the state government of Kerala in India), in this research work. The authors are also grateful to the campus computing facility of University of Kerala set up under DST-PURDE programme for providing computational facilities. The authors also state that they have no conflicts of interest to declare. \section*{REFERENCES} \bibliographystyle{unsrt}
\section{Introduction} Web applications are among the most frequent targets of attackers. The Symantec Internet Security Threat Report \cite{istr2019} gives the interesting statistic that 1 in 10 URLs is identified as being malicious. Web applications typically use the HTTP/HTTPS protocols supported by other backend and frontend interfaces. According to the Imperva Web Application Vulnerability Report \cite{istr2019}, the high severity attacks are injection attacks, which are exploited by injecting payloads into HTTP/HTTPS web queries using the GET, POST and PUT methods. CSRF (Cross Site Request Forgery) attacks, SQL injection attacks, XSS (Cross Site Scripting) attacks, and widely used vulnerable JS libraries account for 51 percent, 27 percent, 33 percent and 36 percent, respectively. This paper focuses on the most frequent types of web-based injection attacks, which include SQL injection, XSS (Cross Site Scripting), RFI (Remote File Inclusion), XXE (XML External Entity), CSRF (Cross Site Request Forgery), and SSRF (Server Side Request Forgery).\vspace{1 mm}\\ A Network Intrusion Detection System (NIDS) monitors the network traffic of web applications. A web IDS acts as an intermediary between the web application and its users, as it analyzes web traffic to detect any anomaly or malicious activity \cite{ying2017}. Generally, there are two types of detection approaches: anomaly-based detection and signature-based detection. A signature-based IDS uses the concept of signatures, much as an antivirus detects a virus when the antivirus database contains that specific virus signature; if attackers create a new virus, the antivirus is of no use when the signature/pattern is not present. Anomaly-based detection relies on recognizing unique behavior patterns and flags any activity that differs from the previously observed data.
When comparing the signature-based detection method with the anomaly-based detection method, the anomaly-based method performs better at detecting unknown attacks, but it comes at a cost: it suffers from high false positive alarm rates. After an anomaly is detected and stored in the database, it becomes a ``signature''. Furthermore, there are two detection methods which come under anomaly detection, namely adaptive detection and constant detection \cite{lstm}. An adaptive detection algorithm analyzes the HTTP traffic on port 80 of the web network, continuously taking traffic as input and analyzing it in a timely manner, while a constant detection method analyzes stored incoming traffic or the logs of collected traffic.\vspace{1 mm}\\ The conventional patching approach to mitigating most network layer vulnerabilities does not work well for web application vulnerabilities such as SQLi, RCE, or XSS. The reason behind all these attacks is that modern web applications are poorly designed, with insecure coding. One can follow OWASP's secure coding guidelines to prevent most of the attacks. An adaptive detection model is effective at detecting anomalies and classifying which type of attack they belong to, so that the developer at the backend can fix that patch or prevent it; for example, Django uses CSRF tokens in the framework to prevent CSRF attacks, which account for 51 percent of web attacks. At the same time, the model should learn patterns over time to detect unknown web attacks and identify which types of attack vectors are being exploited. Anomaly-based detection approaches \cite{ying2017} usually rely on an adaptive model to identify anomalous web requests, but with a high false positive rate. In this paper, we come up with a solution to handle false positives, where the IDS monitors a system based on its behavior patterns.
There are several reasons why a conventional IDS or web application firewall does not work, as follows: \begin{itemize} \item Limited Dataset: To collect and capture a large amount of anomalous data, one has to set up a system, or an automated system, that captures the attack requests and classifies whether they are anomalous or normal requests. This is much like an attack-defense simulation system, but smart enough to classify on its own, so that data do not need to be labeled manually, which could save a lot of time.\vspace{2 mm} \item High False Positives: Conventional systems use unsupervised learning algorithms such as PCA \cite{nn1} and SVM \cite{nn2} to detect web attacks; these approaches require manual selection of attack-specific features \cite{yp}. These conventional methods may achieve acceptable performance, but they face high false positive rates.\vspace{2 mm} \item Labeled Dataset: A conventional IDS uses rule-based or conditional strategies, or supervised algorithms like support vector machines or decision trees, to separate normal traffic requests from attack requests, which requires a large database to get accurate results \cite{yp}. \end{itemize} In this paper, we present a web application attack detection model, SWAD, based on a deep learning technique that detects web application attacks autonomously in real time. The model uses an autoencoder that can learn from sequences of words and weight each word or character accordingly. The classification engine is trained on the ECML-KDD dataset for classification of anomalous queries with respect to specific attack types. We have implemented the model as a sequence-to-sequence model, which consists of an encoder and a decoder, and sets its target values equal to its input values. The proposed SWAD model first uses 40,000 web requests, of both anomalous and benign nature, for training and then 20,000 anomalous web requests and responses for training the model.
The experimental results show that the proposed model can detect web application attacks with a true positive rate of 1 and a low false positive rate.\vspace{2 mm}\\ The paper is organized as follows: Section II summarizes the background and related work. Section III describes the system design. Section IV evaluates the performance of the proposed model. Section V concludes the paper. \section{Background and Related work} \subsection{Deep Learning for Web Attack Detection} There are two categories of machine learning approaches for detecting web attacks: supervised and unsupervised learning. Supervised learning feeds labeled data to a model, which learns a mapping from inputs to their expected outputs and then applies the learned labels to new inputs. Supervised learning is the most common approach for classification, training a model to identify data using the labels to which it is mapped. The idea is to learn the mapping function from a given input, denoted by the variable $X$, to the output, denoted by the variable $Y$: \[ Y = f(X) \] If a labeled web attack dataset is trained with supervised algorithms such as SVM (Support Vector Machine) \cite{svm} or Naive Bayes \cite{nb}, the resulting model can separate anomalous from normal web requests. However, such a model cannot handle new types of attack requests, and it requires a large amount of labeled data.\vspace{1 mm}\\ Unsupervised learning is used mainly with unlabeled datasets. A model trained this way finds patterns in previous sequences or data and identifies or predicts the next one. Unsupervised methods are used in exploratory analysis to automatically identify patterns in data structures, and they can reduce the number of dimensions (columns or features) used to represent the data.
Principal Component Analysis (PCA) \cite{pca} computes the eigenvectors of the covariance matrix, the ``principal axes'', and sorts them by eigenvalue. For dimensionality reduction, the centered data are projected onto the leading principal axes. Principal component (PC) scores are the coordinates of the data in this new basis: the first principal component is the direction maximally correlated with the original group of variables, each subsequent component is less correlated than the previous one, and so on until the remaining components explain almost no additional variance. The data can then be reconstructed as \[ \text{PCA Reconstruction} = \text{PC Scores} \times \text{Eigenvectors}^{T} + \text{Mean} \, . \] When all $p$ eigenvectors are used, $VV^T$ is the identity matrix, the input is reconstructed perfectly, and no dimensionality reduction takes place. When working with large feature sets, whether image, text, or video data, one cannot apply machine learning algorithms directly; preprocessing steps are required to clean the dataset and reduce the training time. Note that PCA is restricted to a linear map, whereas autoencoders \cite{acd} can have nonlinear encoders and decoders; a single-layer autoencoder with a linear encoder, a linear decoder, and a squared error loss on normalized inputs is nearly equivalent to PCA. We use a sequence-to-sequence autoencoder in our proposed detection model. \begin{figure}[h!] \centering \includegraphics[width=7cm, height=4cm]{encoder_decoder} \caption{Autoencoder basic architecture} \end{figure} Unlike PCA, however, autoencoders are not restricted to linear maps.
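The reconstruction identity above can be checked with a short NumPy sketch (the function and variable names are our own, not part of the paper):

```python
import numpy as np

def pca_reconstruct(X, k):
    """Reconstruct X from its top-k principal components:
    PCA Reconstruction = PC Scores x Eigenvectors^T + Mean."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal axes = eigenvectors of the covariance matrix,
    # obtained here via the SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                # (n_features, k) principal axes
    scores = Xc @ Vk             # PC scores
    return scores @ Vk.T + mean  # map back and un-center

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X_full = pca_reconstruct(X, 4)   # all axes kept: perfect reconstruction
```

With $k$ equal to the number of features, $V_k V_k^T$ is the identity and the reconstruction is exact; with smaller $k$ the reconstruction is the best rank-$k$ approximation in the squared-error sense.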
The proposed model is optimized and trained to minimize the reconstruction loss between the input and the output layer. We use nonlinear functions in the encoder to obtain higher accuracy when reconstructing the data; the activation functions used in the autoencoder, ReLU and sigmoid, are nonlinear. \begin{equation} \Phi : \chi \rightarrow F \end{equation} \begin{equation} \Psi : F \rightarrow \chi \end{equation} \begin{equation} \Phi , \Psi = \arg\min_{\Phi, \Psi} \| X - (\Psi \circ \Phi)\, X \| ^2 \end{equation} The encoder function, denoted by \( \Phi \), maps the original data $X$ to a latent space $F$. The decoder function, denoted by \( \Psi \), maps the latent space $F$ back to the output. In essence, we recreate the original input after a generalized nonlinear compression. The encoding network can be represented by a standard neural network function passed through an activation function, where $z$ is the hidden representation; the target output is the same as the input. \begin{equation} z = \sigma(Wx + b) \end{equation} With different weights, bias, and activation function, the output function, or decoder network, is represented in the same way. \begin{equation} x^{'} = \sigma^{'}(W^{'}z + b^{'}) \end{equation} The model is trained with the back-propagation method to obtain optimized results for the loss function in the following equation. \begin{equation} L(x, x^{'}) = \| x - x^{'} \|^2 = \| x -{\sigma^{'}}({W^{'}}(\sigma(Wx+b))+{b^{'}})\|^2 \end{equation} Through this optimization the autoencoder selects encoder and decoder functions such that the minimal information needed to reconstruct the output is encoded from the input data. \section{The Proposed Model}\label{AA} The proposed detection engine uses an autoencoder based on a sequence-to-sequence architecture made up of LSTM (Long Short-Term Memory) \cite{lstm} cells.
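A minimal sketch of these equations, assuming a one-hidden-layer network with a sigmoid encoder and a linear decoder (i.e., $\sigma'$ is the identity) trained by plain gradient descent on toy data; this is our own illustration, not the SWAD architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 samples of 8 features lying near a 3-D subspace
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8)) * 0.5

d_in, d_hid, lr = 8, 3, 0.05
W,  b  = rng.normal(scale=0.1, size=(d_hid, d_in)), np.zeros(d_hid)
Wp, bp = rng.normal(scale=0.1, size=(d_in, d_hid)), np.zeros(d_in)

def loss():
    Z = sigmoid(X @ W.T + b)          # encoder  z  = sigma(W x + b)
    Xr = Z @ Wp.T + bp                # decoder  x' = W' z + b'
    return 0.5 * np.mean(np.sum((X - Xr) ** 2, axis=1))

loss0 = loss()
for _ in range(2000):                 # back-propagation / gradient descent
    Z = sigmoid(X @ W.T + b)
    Xr = Z @ Wp.T + bp
    err = (Xr - X) / len(X)           # d(loss)/d(x'), averaged over samples
    dZ = err @ Wp * Z * (1 - Z)       # backprop through the sigmoid
    Wp -= lr * (err.T @ Z); bp -= lr * err.sum(axis=0)
    W  -= lr * (dZ.T @ X);  b  -= lr * dZ.sum(axis=0)
final_loss = loss()
```

After training, `final_loss` is well below the initial reconstruction loss, which is the behavior the thresholding step later relies on: inputs resembling the training data reconstruct with small error.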
LSTM networks are recurrent neural networks trained on ordered sequences of inputs so as to remember and re-create them. The proposed model uses an LSTM network that is fed the input sequences. After the entire input has been read, the network outputs an internal learned representation of the fed input sequences as a fixed-length vector. This vector is then fed to the decoder, which interprets it step by step and generates the output sequence. \begin{figure*}[h!] \includegraphics[width=\textwidth, height=12cm]{system} \caption{The Proposed System Architecture} \end{figure*} The proposed detection and classification model works as follows: \begin{enumerate} \item For training, large amounts of unlabeled normal HTTP requests are collected from the open-source Vulnbank organization, comprising 40k normal HTTP requests using the GET, POST, and PUT methods.\vspace{2 mm} \item For the autoencoder's (encoder-decoder) architecture, the hyper-parameters are tuned by casting the problem as a grid search. Each hyper-parameter combination requires training the neuron weights of the hidden layer(s), so the computational complexity grows with the number of layers and the number of nodes within each layer. To deal with these parameter and training issues, stacked autoencoders have been proposed, which train each layer separately to obtain pre-trained weights; the model is then fine-tuned using the obtained weights. This approach significantly improves the training performance over the conventional mode of training.
For the implementation of the proposed model, we use the following parameters.\vspace{1 mm}\\ Batch Size = 128, Embed Size = 64, Hidden Size = 64, Number of Layers = 2, Dropout Rate = 0.7\\ \item Reconstruction of requests is done by the decoder, \(x^{'}=\sigma^{'}(W^{'}z+b^{'})\), which reconstructs the given input and evaluates the loss function and accuracy.\vspace{2 mm} \item When a new request is given as input to the trained autoencoder, it encodes and decodes the request vector and calculates the reconstruction (loss) error. If the loss error is larger than the learned threshold \(\theta\), the request is categorized as anomalous; if it is smaller than \(\theta\), it is categorized as normal.\vspace{2 mm} \item After categorizing requests into \textit{normal} and \textit{anomalous}, normal requests are sent to the database for retraining, so that over time the detection model learns new request patterns. Anomalous requests are sent to the classification model, which further categorizes them by the type of attack exploited in the request, such as SQLi, XSS, or CSRF.\vspace{2 mm} \item The classification model is trained on a larger number of labeled HTTP requests containing attack vectors. It covers 7 classes of attacks: OS-Commanding, Path-Traversal, SQLi, X-Path Injection, LDAP Injection, SSI, and XSS. \end{enumerate} We use LSTM layers to train the classification model and fine-tune it with hyper-parameters. Every LSTM layer is accompanied by a dropout layer, which helps prevent over-fitting by ignoring randomly selected neurons during training and hence reduces sensitivity to the specific weights of individual neurons. \begin{figure}[h!] \includegraphics[width=7cm,height=4cm]{raw_http} \caption{HTTP Requests with XSS attack Vector} \end{figure} Figure 3 shows a raw anomalous HTTP request with an XSS attack vector.
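The thresholding step described above can be sketched as follows (a toy illustration with made-up error values; in practice the threshold \(\theta\) would be learned from the reconstruction errors of benign training traffic):

```python
import numpy as np

def classify_requests(recon_errors, theta):
    """Flag a request as anomalous (True) when its reconstruction
    error exceeds the learned threshold theta."""
    return np.asarray(recon_errors) > theta

# Hypothetical reconstruction errors measured on benign training traffic
train_errors = np.array([0.010, 0.020, 0.015, 0.030, 0.025])
theta = np.quantile(train_errors, 0.99)     # a high quantile as threshold

# New traffic: a benign request, an attack, another benign request
new_errors = np.array([0.018, 0.900, 0.012])
flags = classify_requests(new_errors, theta)   # [False, True, False]
```

Requests flagged `True` would then be handed to the classification model, while the rest are stored for retraining.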
In the data pre-processing step, the raw HTTP data is converted to a single string and parsed as input to the LSTM cell, which is then passed to the training phase to train the model. \section{Experimental Results and Evaluation} We experimented with the proposed model on 40,000 web requests followed by 20,000 anomalous web requests and responses. The classification engine is trained on the ECML-KDD dataset to classify anomalous queries by specific attack type. We evaluated the proposed model using the ROC curve, a graph showing the performance of a classification model at all classification thresholds. The ROC curve plots two parameters: the true positive rate and the false positive rate. A false positive (FP), or false alarm, refers to the detection of benign traffic as an attack; a false negative (FN) refers to detecting attack traffic as benign. A key goal of an intrusion detection system is to minimize both the FP rate and the FN rate. We use the following parameters to evaluate the proposed model's performance: \newcounter{5} \begin{list}{-} {\usecounter{5}} \item True Positive (TP): the number of observations correctly assigned to the positive class.\vspace{1 mm} \item False Positive (FP): the number of observations incorrectly assigned by the model to the positive class.\vspace{1 mm} \item True Positive Rate (TPR): reflects the classifier's ability to detect members of the positive class \[ TPR = \frac{TP} {(TP + FN)} \]\vspace{1 mm} \item False Positive Rate (FPR): reflects the frequency with which the classifier mistakenly classifies a normal state as pathological \[ FPR = \frac {FP}{(FP + TN)} \] \end{list} An ROC curve plots TPR versus FPR at different classification thresholds. Lowering the classification threshold classifies more items as positive, thus increasing both false positives and true positives. \begin{figure}[hbt!]
\includegraphics[width=9cm, height=6cm]{ROC} \caption{ROC Curve of the Proposed Model} \end{figure} Since defining normality with a descriptive feature set is difficult, anomalies raised by such systems can sometimes be reported as false alarms (false positives) or missed alerts (false negatives). On an ROC curve, the closer the graph is to the top and left-hand borders, the more accurate the test; conversely, the closer the graph is to the diagonal, the less accurate the test. The experimental results obtained with the proposed model are as follows: Precision: 0.9979; Recall: 1.00; Number of True Positives: 1097 out of 1097 samples (True Positive Rate: 1.00); Number of False Positives: 7 out of 2200 samples (False Positive Rate: 0.0032). \section{Conclusion} We discussed an intrusion detection model using deep learning. The proposed model detects web application attacks autonomously in real time. It uses an autoencoder that learns from sequences of words and weights each word or character accordingly. The experimental results show that the proposed model can detect web application attacks with a true positive rate of 1 and a low false positive rate. Because of the small volume of labeled, categorized anomalous data, the proposed classification engine is not 100 percent accurate; however, the classification can be improved by optimized training on a larger dataset, which is left as future work.
\section{Introduction} \label{s:intro} Previously used in niche applications and by a small group of enthusiasts, general purpose computing on graphics processing unit (GPU) cards has gained widespread popularity after the release in 2007 of the CUDA programming environment \cite{cudaProgGuide}. Owing also to the release of the OpenCL specification \cite{openCL} in 2008, GPU computing has been rapidly adopted by numerous groups with computing needs originating in a broad spectrum of application areas. In several of these areas though, when compared to the library ecosystem enabling sequential and/or parallel computing on x86 chips, GPU computing library support continues to be spotty. This observation motivated an effort whose outcomes are reported in this paper, which is concerned with solving sparse linear systems of equations on the GPU. Developing an approach and implementing parallel code for solving sparse linear systems is not trivial. This, and the relative novelty of GPU computing explain the scarcity of solutions for solving $\bA \bx = \bb$ on the GPU, when $\bA \in {\mathbb{R}}^{N \times N}$ is possibly nonsymmetric, sparse, and moderately large; i.e., $\SI{10000}{} \leq N \leq \SI{500000}{}$. An inventory of software solutions as of 2015 produced a short list of codes that solved $\bA \bx = \bb$ on the GPU: {\cuSOLVER}~\cite{cuSOLVER}, {\Paralution}~\cite{paralution}, and {\SuperLU}~\cite{demmel2011superlu}, the latter focused on distributed memory architectures and leveraging GPU computing at the node level only. Several CPU multi-core approaches exist and are well established, see for instance \cite{HSL,schenk2004solving,mumps,demmel2011superlu}. For a domain-specific application implemented on the GPU that calls for solving ${\bf A} {\bf x} = {\bf b}$, one alternative would be to fall back on one of these CPU-based solutions. 
This strategy usually impacts the overall performance of the algorithm due to the back-and-forth data movement across the PCI host--device interconnect, which in practice supports bandwidths of the order of 10 GB/s. Herein, the focus is not on this strategy. Instead, we are interested in carrying out the LU factorization on the GPU when the possibly nonsymmetric matrix ${\bf A}$ is sparse or dense banded with narrow bandwidth. There are pros and cons to having a linear solver on the GPU. On the upside, since a parallel implementation of an LU factorization is memory bound, particularly for sparse systems, the GPU is attractive owing to its high bandwidths and relatively low latencies. At main-memory bandwidths of roughly 300 GB/s, the GPU is four to five times faster than a modern multicore CPU. On the downside, the irregular memory access patterns associated with sparse matrix factorization ablate this GPU-over-CPU advantage, which is further eroded by the intense logic and integer arithmetic requirements associated with existing algorithms. The approach discussed herein alleviates these two pitfalls by embracing a splitting strategy described for CPU-centric multicore and/or multi-node computing in \cite{PoSa2006}. Two successive row--column permutations attempt to increase the diagonal dominance of the matrix and reduce its bandwidth, respectively. Ideally, the reordered matrix would be ($i$) diagonally dominant, and ($ii$) dense banded. If ($i$) is accomplished, no LU factorization row/column pivoting is necessary, thus avoiding tasks at which the GPU does not shine: logic and arithmetic operations. Additionally, if ($ii$) holds, coalesced memory access patterns associated with dense matrix operations can capitalize on the GPU's high bandwidth. The overall solution strategy adopted herein solves ${\bf A} {\bf x} = {\bf b}$ using a Krylov-subspace method and employs LU preconditioning with work-splitting and drop-off.
Specifically, each outer Krylov-subspace iteration takes at least one preconditioner solve step that involves solving ${\hat {\bf A}} {\bf y} = {\hat{\bf b}}$ on the GPU, where ${\hat {\bf A}} \in \R^{N \times N}$ is a {\em dense} banded matrix obtained from ${\bf A}$ after a sequence of possibly two reordering stages that can include element drop-off. Regardless of whether ${\bf A}$ is sparse or not, the salient attribute of the approach is the casting of the preconditioning step as a {\em dense} linear algebra problem. Thus, a reordering process is employed to obtain a narrow--band, dense ${\hat {\bf A}}$, which is subsequently LU--factored. For the reordering, a strategy that combines two stages, namely diagonal dominance boosting and bandwidth reduction, has yielded well balanced coefficient matrices that can be factored fast on the GPU leveraging a single instruction multiple data (SIMD)--friendly underlying data structure. The LU factorization relies on a splitting of the matrix ${\hat {\bf A}}$ in several diagonal blocks that are factored independently and a correction process to account for the inter-diagonal block coupling. The implementation takes advantage of the GPU's deep memory hierarchy, its multi-SM layout, and its predilection for SIMD computation. This paper is organized as follows. Section \ref{s:description} summarizes the solution algorithm. The discussion covers first the work-splitting-based LU factorization of dense banded matrices. Subsequently, the ${\bf A} {\bf x} = {\bf b}$ sparse case brings into focus strategies for matrix reordering. Section \ref{s:implementation} summarizes aspects related to the GPU implementation of the solution approaches proposed. Results of a series of numerical experiments for both dense banded and sparse linear systems are reported in Section \ref{s:experiments}. 
Since reordering strategies play a pivotal role in the sparse linear system solution, we present benchmarking results in which we compared the reordering strategies adopted herein to established solutions/implementations. The paper concludes with a series of final remarks and a summary of lessons learned and directions of future work. \section{Description of the methodology} \label{s:description} \subsection{The dense banded linear system case} \label{ss:denseLinSysExp} Assume that the banded dense matrix ${\bA}\in{\mathbb{R}}^{N\times N}$ has half-bandwidth $K\ll N$. Following an approach discussed in \cite{SaKu1978, PoSa2006, PoSa2007}, we partition the banded matrix $\bA$ into a block tridiagonal form with $P$ diagonal blocks $\bA_i \in \R^{N_i \times N_i}$, where $\sum_i^P N_i = N$. For each partition $i$, let $\bB_i$, $i=1,\ldots,P-1$ and $\bC_i$, $i=2,\ldots,P$ be the super- and sub-diagonal coupling blocks, respectively -- see Figure \ref{f:matrix_partitioning}. Each coupling block has dimension $K\times K$ for banded matrices with half-bandwidth $K=\max\limits_{i,j,a_{ij} \ne 0}|i - j|$. As illustrated in Fig.~\ref{f:matrix_partitioning}, the banded matrix $\bA$ is expressed as the product of a block diagonal matrix $\bD$ and a so-called {\em spike matrix} $\bS$ \cite{SaKu1978}. The latter is made up of identity diagonal blocks of dimension $N_i$, and off-diagonal spike blocks, each having $K$ columns. 
Specifically, \begin{equation} \bA = \bD \bS \, , \end{equation} where $\bD = \mbox{diag}(\bA_1,\ldots,\bA_P)$ and, assuming that $\bA_i$ are non-singular, the so-called left and right spikes $\bW_i$ and $\bV_i$ associated with partition $j$, each of dimension $N_i \times K$, are given by \begin{subequations}\label{eq:LeftRightSpikes} \begin{alignat}{2} \label{eq:RightSpike_1} \bA_1 \bV_1 &= \left[\begin{matrix} \zero \\ \zero \\ \bB_1 \end{matrix}\right] & &{}\\ \label{eq:LeftRightSpikes_i} \bA_i \left[ \bW_i \mid \bV_i \right] &= \left[\begin{matrix} \bC_i & \zero \\ \zero & \zero \\ \zero & \bB_i \end{matrix}\right] \, ,& &\quad i= 2, \ldots, P-1\\ \label{eq:LeftSpike_P} \bA_P \bW_P &= \left[\begin{matrix} \bC_P \\ \zero \\ \zero \end{matrix}\right] . & &{} \end{alignat} \end{subequations} \begin{figure}[ht] \centering {\includegraphics[width=0.85\textwidth]{Figs/partition.png}} \caption{Factorization of the matrix $\bA$ with $P=3$.} \label{f:matrix_partitioning} \end{figure} Solving the linear system $\bA \bx = \mathbf{b}$ is thus reduced to solving \begin{align} \bD \bg &= \mathbf{b} \label{eq:diag-sys} \\ \bS \bx &= \bg \label{eq:spike-sys} \end{align} Since $\bD$ is block-diagonal, solving for the modified right-hand side $\bg$ from (\ref{eq:diag-sys}) is trivially parallelizable, as the work is split across $P$ processes, each charted to solve $\bA_i \bg_i = \bb_i$, $i=1,\ldots,P$. Note that the same decoupling is manifest in Eq.~(\ref{eq:LeftRightSpikes}), and the work is spread over $P$ processes. The remaining question is how to solve quickly the linear system in (\ref{eq:spike-sys}). This problem can be reduced to one of smaller size, $\hat\bS \hat\bx = \hat\bg$. 
To that end, the spikes $\bV_i$ and $\bW_i$, as well as the modified right-hand side $\bg_i$ and the unknown vectors $\bx_i$ in (\ref{eq:spike-sys}) are partitioned into their top $K$ rows, the middle $N_i - 2K$ rows, and the bottom $K$ rows: \begin{subequations} \begin{alignat}{2} \bV_i &= \left[\begin{matrix} \T{\bV}{i} \\ \M{\bV}{i} \\ \B{\bV}{i}\end{matrix}\right], & \quad \bW_i &= \left[\begin{matrix} \T{\bW}{i} \\ \M{\bW}{i} \\ \B{\bW}{i}\end{matrix}\right], \\ \bg_i &= \left[\begin{matrix} \T{\bg}{i} \\ \M{\bg}{i} \\ \B{\bg}{i}\end{matrix}\right], & \quad \bx_i &= \left[\begin{matrix} \T{\bx}{i} \\ \M{\bx}{i} \\ \B{\bx}{i}\end{matrix}\right]. \end{alignat} \end{subequations} A block-tridiagonal reduced system is obtained by excluding the middle partitions of the spike matrices as: \begin{equation}\label{eq:SPIKE_reduced_system} \left[\begin{matrix} \bR_1 & \bM_1 & & & \\ & \ddots & & & \\ & \bN_i & \bR_i & \bM_i & \\ & & & \ddots & \\ & & & \bN_{P-1} & \bR_{P-1} \end{matrix}\right] \left[\begin{matrix} \hat\bx_1 \\ \vdots \\ \hat\bx_i \\ \vdots \\ \hat\bx_{P-1} \end{matrix}\right] = \left[\begin{matrix} \hat\bg_1 \\ \vdots \\ \hat\bg_i \\ \vdots \\ \hat\bg_{P-1} \end{matrix}\right] , \end{equation} where the linear system above, denoted $\hat\bS \hat\bx = \hat\bg$, is of dimension $2K(P-1) \ll N$, \begin{subequations} \begin{alignat}{2} \bN_i & = \left[\begin{matrix} \B{\bW}{i} & \zero \\ \zero & \zero \end{matrix}\right] \, ,& &\quad i=2,\ldots,P-1 \\ \bR_i &= \left[\begin{matrix} {\bf I}_M & \B{\bV}{i} \\ \T{\bW}{i+1} & {\bf I}_M \end{matrix}\right] \, ,& &\quad i=1,\ldots,P-1 \\ \bM_i &= \left[\begin{matrix} \zero & \zero \\ \zero & \T{\bV}{i+1} \end{matrix}\right] \, ,& &\quad i=1,\ldots,P-2 \end{alignat} \end{subequations} and \begin{equation} \hat\bx_i = \left[\begin{matrix} \B{\bx}{i} \\ \T{\bx}{i+1} \end{matrix}\right] , \, \hat\bg_i = \left[\begin{matrix} \B{\bg}{i} \\ \T{\bg}{i+1} \end{matrix}\right] , \quad i=1,\ldots,P-1 \; .
\end{equation} Two strategies are proposed in \cite{PoSa2006} to solve (\ref{eq:SPIKE_reduced_system}): ({\textit{i}}) an exact reduction; and, ({\textit{ii}}) an approximate reduction, which sets $\bN_i \equiv {\bf 0}$ and $\bM_i \equiv {\bf 0}$ and results in a block diagonal matrix $\hat\bS$. The solution approach adopted herein is based on ($ii$) and therefore each sub-system $\bR_i {\hat \bx}_i = {\hat \bg}_i$ is solved independently using the following steps: \begin{subequations} \label{eq:solveTopBottom} \begin{align} \mbox{Form } &{\bar \bR}_i = {\bf I}_M - \T{\bW}{i+1} \B{\bV}{i} \\ \mbox{Solve } &{\bar \bR}_i \T{\tilde{\bx}}{i+1} = \T{\bg}{i+1} - \T{\bW}{i+1} \B{\bg}{i} \\ \mbox{Calculate } &\B{\tilde{\bx}}{i} = \B{\bg}{i} - \B{\bV}{i} \T{\tilde{\bx}}{i+1} \end{align} \end{subequations} Note that a tilde was used to differentiate between the actual and approximate values $\T{\tilde{\bx}}{i}$ and $\B{\tilde{\bx}}{i}$ obtained upon dropping the $\bN_i$ and $\bM_i$ terms. An approximation of the solution of the original problem is finally obtained by solving independently and in parallel $P$ systems using the available LU factorizations of the $\bA_i$ matrices: \begin{subequations} \begin{alignat}{8} &\:\bA_1 \bx_1 & &= & &\:\mathbf{b}_1 & &{} & &{} & &\:- & &\left[\begin{matrix} \zero \\ \zero \\ \bB_1 \T{\tilde{\bx}}{2} \end{matrix}\right] & &{} \\ &\:\bA_i \bx_i & &=& &\:\mathbf{b}_i & &-& &\left[\begin{matrix} \bC_i \B{\tilde{\bx}}{i-1} \\ \zero \\ \zero \end{matrix}\right] & &\:-& &\left[\begin{matrix} \zero \\ \zero \\ \bB_i \T{\tilde{\bx}}{i+1} \end{matrix}\right] \, , & &\quad i = 2,\ldots,P-1 \\ &\:\bA_P \bx_P & &= & &\:\mathbf{b}_P& &-& &\left[\begin{matrix} \bC_P \B{\tilde{\bx}}{P-1} \\ \zero \\ \zero \end{matrix}\right] \; .& &{} & &{} & &{} \end{alignat} \end{subequations} Computational savings can be made by noting that if an LU factorization of the diagonal blocks $\bA_i$ is available, the bottom block of the right spike; i.e. 
$\B{\bV}{i}$, can be obtained from (\ref{eq:RightSpike_1}) using only the bottom $K \times K$ blocks of L and U. However, obtaining the top block of the left spike requires calculating the entire spike $\bW_i$. An effective alternative is to perform an additional UL factorization of $\bA_i$, in which case $\T{\bW}{i}$ can be obtained using only the top $K \times K$ blocks of the new U and L. Next, note that the decision to set $\bN_i \equiv {\bf 0}$ and $\bM_i \equiv {\bf 0}$ relegates the resulting algorithm to preconditioner status. Embracing this path is justified by the observation that, although the dimension of the reduced linear system in (\ref{eq:SPIKE_reduced_system}) is smaller than that of the original problem, its half-bandwidth is at least three times larger. The memory footprint of exactly solving (\ref{eq:SPIKE_reduced_system}) is large, thus limiting the size of problems that can be tackled on the GPU. Specifically, at each recursive step, additional memory that is required to store the new reduced matrix cannot be deallocated until the global solution is fully recovered. Finally, it becomes apparent that the quality of the preconditioner is correlated with neglecting the $\bN_i$ and $\bM_i$ terms. For the sake of this discussion, assume that the matrix $\bA$ is diagonally dominant with a degree of diagonal dominance $d \ge 1$; i.e., \begin{equation} \label{eq:diagDominanceDef} |a_{ii}| \ge d \sum\limits_{j \ne i} |a_{ij}| \; , \forall i = 1,\ldots,N \; . \end{equation} \noindent When $d>1$, the elements of the left spikes $\bW_i$ decay in magnitude from top to bottom, while those of the right spikes $\bV_i$ decay from bottom to top \cite{MiMa2008}. This decay, which is more pronounced the larger the degree of diagonal dominance of $\bA$, justifies the approximation $\bN_i \equiv {\bf 0}$ and $\bM_i \equiv {\bf 0}$.
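To make the splitting concrete, the following NumPy sketch (our own illustration, using $P=2$ partitions, in which case the reduced system consists of the single block $\bR_1$ and the truncation incurs no error) carries out the partitioned solve described above:

```python
import numpy as np

def sap_two_partition_solve(A, b, K):
    """Solve A x = b for a dense banded A (half-bandwidth K) split into
    P = 2 diagonal blocks; with a single pair of coupling blocks the
    reduced system is the lone R_1 block and the method is exact."""
    N = A.shape[0]
    h = N // 2
    A1, A2 = A[:h, :h], A[h:, h:]
    B1 = A[h-K:h, h:h+K]              # super-diagonal coupling block
    C2 = A[h:h+K, h-K:h]              # sub-diagonal coupling block

    # Spikes: A1 V1 = [0; B1] and A2 W2 = [C2; 0]
    rhs1 = np.zeros((h, K)); rhs1[-K:, :] = B1
    V1 = np.linalg.solve(A1, rhs1)
    rhs2 = np.zeros((N - h, K)); rhs2[:K, :] = C2
    W2 = np.linalg.solve(A2, rhs2)

    # Modified right-hand side: D g = b
    g1, g2 = np.linalg.solve(A1, b[:h]), np.linalg.solve(A2, b[h:])

    # Reduced system R_1 [x1_bottom; x2_top] = [g1_bottom; g2_top]
    R = np.block([[np.eye(K), V1[-K:, :]],
                  [W2[:K, :], np.eye(K)]])
    t = np.linalg.solve(R, np.concatenate([g1[-K:], g2[:K]]))
    x1b, x2t = t[:K], t[K:]

    # Recover the full solution, correcting for the coupling
    b1 = b[:h].copy(); b1[-K:] -= B1 @ x2t
    b2 = b[h:].copy(); b2[:K] -= C2 @ x1b
    return np.concatenate([np.linalg.solve(A1, b1), np.linalg.solve(A2, b2)])

# A small diagonally dominant banded test matrix
rng = np.random.default_rng(0)
N, K = 20, 2
A = np.zeros((N, N))
for i in range(N):
    for j in range(max(0, i - K), min(N, i + K + 1)):
        A[i, j] = rng.normal()
    A[i, i] += 10.0                   # boost the diagonal
b = rng.normal(size=N)
x = sap_two_partition_solve(A, b, K)
```

For $P>2$, the truncated variant would drop the $\bN_i$ and $\bM_i$ couplings and solve each $\bR_i$ block independently, as in the text.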
However, note that diagonal dominance of $\bA$, although desirable, is not a prerequisite, as demonstrated by numerical experiments reported herein. Truncating when $d<1$ will lead to a preconditioner of lesser quality. \subsubsection{Nomenclature, solution strategies} \label{sss:nomenclature} Targeted for execution on the GPU, the methodology outlined above becomes the foundation of a parallel implementation called herein ``split and parallelize'' (SaP). The matrix $\bA$ is split into block diagonal matrices ${\bA}_i$, which are processed in parallel. The code implementing this strategy is called {\SpikeHyb}. Several flavors of {\SpikeHyb} can be envisioned. At one end of the spectrum, one solution path would implement the exact reduction, a strategy that is not considered herein. At the other end of the spectrum, {\SpikeHyb} solves the block-diagonal linear system in (\ref{eq:diag-sys}) and for preconditioning purposes uses the approximation ${\bf x} \approx {\bf g}$. In what follows, this will be called the decoupled approach, {\SaPD}. The middle ground is the approximate reduction, which sets $\bN_i \equiv {\bf 0}$ and $\bM_i \equiv {\bf 0}$. This will be called the coupled approach, {\SaPC}, owing to the coupling that occurs through the truncated spikes; i.e., $\B{\bV}{i}$ and $\T{\bW}{i+1}$. Neither the coupled nor the decoupled path qualifies as a direct solver, and {\SpikeHyb} employs an outer Krylov subspace scheme to solve $\bA \bx = \bb$. The solver uses {\BCG}($\ell$) \cite{SlFo1993} and left-preconditioning, unless the matrix $\bA$ is symmetric and positive definite, in which case the outer loop implements a conjugate gradient method \cite{Saad2003}. {\SpikeHyb} is open source and available at \cite{SaP_git,SaPWebsite}. \subsection{The sparse linear system case} The discussion focuses next on solving $\bA_s \bx = \bb$, where ${\bA_s \in {\mathbb{R}}^{N \times N}}$ is assumed to be a sparse matrix.
The salient attribute of the solution strategy is its fallback on the dense banded approach described in \S\ref{ss:denseLinSysExp}. Specifically, an aggressive row and column permutation process is employed to transform $\bA_s$ into a matrix $\bA$ that has a large $d$ and small $K$. Although the reordered matrix will remain sparse within the band, it will be regarded as dense banded and LU- and/or UL-factored accordingly. For matrices $\bA_s$ that are either nonsymmetric or have low $d$, a first set of row permutations is applied as $\bQ \bA_s \bx = \bQ \mathbf{b}$, to either maximize the number of nonzeros on the diagonal (maximum traversal search) \cite{Duff1981}, or maximize the product of the absolute values of the diagonal entries \cite{DuKo1999, DuKo2001}. Both reordering algorithms are implemented using a depth first search with a look-ahead technique similar to the one in the Harwell Software Library (HSL) \cite{HSL}. While the purpose of the first reordering $\bQ \bA_s$ is to render the permuted matrix diagonally ``heavy'', a second reordering seeks to reduce $K$ by using the traditional Cuthill-McKee {\CM} algorithm \cite{CuMc1969}. Since the diagonal entries should not be relocated, the second permutation is applied to the symmetric matrix $\bQ\bA_s + \bA_s^T\bQ^T$. Following these two reorderings, the resulting matrix $\bA$ is split to obtain ${\bA}_1$ through ${\bA}_P$. A third {\CM} reordering is then applied to each ${\bA}_i$ for further reduction of bandwidth. While straightforward to implement in {\SaPD}, this third stage reordering in {\SaPC} mandates computation of the entire spikes, an operation that can significantly increase the memory footprint and flop count of the numerical solution. Note that the third stage reordering in {\SaPC} renders the UL factorization superfluous, since computing only the top of a spike is insufficient.
If $\bA_i$ is diagonally dominant, the LU and/or UL factorization can be safely carried out without pivoting \cite{golubMatrixBook96}. Adopting the strategy used in {\Pardiso} \cite{pardiso}, we always perform factorizations of the diagonal blocks $\bA_i$ {\emph{without}} pivoting but with {\em pivot boosting}. Specifically, if a pivot becomes smaller than a threshold value, it is boosted to a small, user controlled value $\epsilon$. This yields a factorization of a slightly perturbed diagonal block, $\bL_i \bU_i = \bA_i + \delta\bA_i$, where $\| \delta\bA_i \| = \mathcal{O}(u \| \bA\|)$ and $u$ is the unit roundoff~\cite{MSC2009}. \subsubsection{Brief comments on the reordering algorithms} \label{sss:reordering} {\SpikeHyb} employs two reordering strategies, namely Diagonal Boosting ({\DB}) and Cuthill-McKee ({\CM}), possibly multiple times, to reduce $K$ and increase the degree of diagonal dominance. {\DB} is applied first at the matrix $\bA_s$ level, followed by {\CM} applied at matrix level, and possibly followed by a set of $P$ third-stage {\CM} reorderings applied at the sub-matrix $\bA_i$ level. \noindent \textbf{Diagonal Boosting.} The {{\DB}} algorithm seeks to improve diagonal dominance in $\bA_s$ and draws on a minimum bipartite perfect matching~\cite{carpaneto1980algorithm,kuhn1955hungarian,burkhard1980assignment,carraresi1986efficient,derigs1986efficient,jonker1987shortest}. There are several variants of the algorithm aimed at different outcomes, e.g., maximizing the absolute value of bottleneck, the sum, the product or other metrics that factor in the diagonal entries. As a proxy for diagonal dominance, {\SpikeHyb} maximizes the absolute value of the product of all diagonal entries. The algorithm that seeks to leverage GPU computing is as follows. Given a matrix $\{a_{ij}\}_{n \times n}$, find a permutation $\sigma$ that maximizes $\prod_{i=1}^{n}|a_{i\sigma_{i}}|$. 
Denote $a_i = \max_{j}|a_{ij}|$ and note that $a_i$ does not depend on $\sigma$. Then, maximizing the product above is equivalent to minimizing \[\log\prod\limits_{i=1}^{n}\frac{a_i}{|a_{i\sigma_{i}}|} = \sum\limits_{i=1}^n\log\frac{a_i}{|a_{i\sigma_{i}}|} = \sum\limits_{i=1}^{n}(\log a_i - \log |a_{i \sigma_{i}}|)\, .\] The reordering problem is reduced to minimum bipartite perfect matching in the following way: given a bipartite graph $G_C = (V_R, V_C, E)$, we define the weight $c_{ij}$ of the edge between nodes $i \in V_R$ and $j \in V_C$ as \begin{equation} \label{eq:DBrelated} c_{ij} = \begin{cases} \log a_i - \log |a_{ij}| & (a_{ij} \ne 0) \\ \infty & (a_{ij} = 0) \end{cases} \,. \end{equation} By the reduction above, if we find a minimum bipartite perfect matching $\sigma$, i.e., one that minimizes $\sum c_{i \sigma_{i}}$, then $\prod_{i=1}^n |a_{i \sigma_i}|$ is maximized. \noindent \textbf{Bandwidth reduction.} Whether ${\bf Q}\bA_s$ is sparse or not, there are $P-1$ pairs of always \emph{dense} spikes, each of dimension $N_i \times K$. They need to be stored unless one employs an LU and UL factorization of $\bA_i$ to retain only the appropriate bottom and top components. Large $K$ values pose memory challenges, related to storage and data movement, that limit the size of the problems that can be tackled. Moreover, the spikes need to be computed by solving multiple right-hand side linear systems with $\bA_i$ coefficient matrices. There are $2K$ such systems for each of the $P-1$ pairs of spikes. Evidently, a low $K$ is highly desirable. However, finding the lowest half-bandwidth $K$ by symmetrically reordering a sparse matrix is NP-hard. The {\CM} reordering provides simple and oftentimes effective heuristics to tackle this problem. Moreover, as the {\CM} reordering yields symmetric permutations, it will not displace the ``heavy'' diagonal terms obtained during the {\DB} step. However, to obtain a symmetric permutation, one has to start with a symmetric matrix.
To this end, unless $\bA$ is already symmetric and does not call for a {\DB} step (which is the case, for instance, when $\bA$ is symmetric positive definite), the matrix passed on to the {\CM} reordering is $(\bA + \bA^T)/2$. Given a symmetric $n \times n$ matrix with $m$ non-zero entries, {\CM} works on the associated adjacency graph. {\CM} first picks a random node and adds it to the work list. The algorithm then repeatedly removes the front node from the list, visits it, and appends its unvisited neighbors in non-descending order of vertex degree, until every vertex has been added to and removed from the work list exactly once. In other words, {\CM} is essentially a {\BFS} where neighboring vertices are visited in order from lowest to highest vertex degree. \noindent \textbf{Third-stage reordering.} The {\DB}--{\CM} reordering sequence yields diagonally-heavy matrices of smaller bandwidth. The band itself, however, can be very sparse. The purpose of the third-stage {\CM} reordering is to further reduce the bandwidth within each $\bA_i$ and reduce the sparsity within the band. Consider, for instance, the matrix ANCF88950 that comes from structural dynamics \cite{serban2015}. It has \SI{513900}{} nonzeros, $N= 88\,950$, and an average of $5.78$ non-zero elements per row. After {\DB}--{\CM} reordering with no drop-off, the resulting banded matrix has a half-bandwidth $K = 205$. The band itself is very sparse, with a fill-in of only $0.7\%$ within the band. In its default solution, {\SpikeHyb} constructs a block banded matrix where each diagonal block $\bA_i$, obtained after the initial {\DB}--{\CM} reorderings, is allowed to have a different bandwidth. This is achieved using another {\CM} pass, independently and in parallel for each $\bA_i$. Applying this strategy to ANCF88950, using $P = 16$ partitions, the half-bandwidth is reduced for all partitions to values no higher than $K = 141$, while the fill-in within the band becomes approximately $3\%$.
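The degree-sorted {\BFS} behind {\CM} fits in a few lines. The sketch below (plain Python/SciPy on the CPU, pattern-only, with a minimum-degree start node instead of a random one) applies it to a small ``arrow'' pattern and reports the half-bandwidth before and after; {\SaP}'s hybrid implementation, and the per-block third-stage pass, follow the same idea.

```python
import numpy as np
import scipy.sparse as sp
from collections import deque

def cuthill_mckee(A):
    """CM ordering: BFS in which the unvisited neighbors of each dequeued
    vertex are appended in non-descending degree order."""
    G = sp.csr_matrix(A)
    deg = np.diff(G.indptr)            # nonzeros per row (diagonal included)
    order, seen = [], np.zeros(G.shape[0], dtype=bool)
    for start in np.argsort(deg):      # min-degree start; loop covers all components
        if seen[start]:
            continue
        seen[start] = True
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            nbrs = [v for v in G.indices[G.indptr[u]:G.indptr[u + 1]] if not seen[v]]
            for v in sorted(nbrs, key=lambda v: deg[v]):
                seen[v] = True
                q.append(v)
    return np.array(order)

def half_bw(M):
    M = sp.coo_matrix(M)
    return int(np.max(np.abs(M.row - M.col)))

A = sp.lil_matrix((6, 6))              # "arrow" pattern: full first row/column
A[0, :] = 1
A[:, 0] = 1
A.setdiag(1)
p = cuthill_mckee(A)
B = A.tocsr()[p][:, p]                 # symmetric permutation
print(half_bw(A), half_bw(B))          # 5 4
```

Note that the counted ``degree'' includes the diagonal entry; since every row of the example has one, all degrees are shifted uniformly and the ordering is unaffected.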
Note that this third-stage reordering does nothing to reduce the column-width of the spikes. However, it helps in two respects: a smaller memory footprint for the LU/UL factors, and less factorization effort. These are important side effects, since the LU/UL GPU factorization is currently done in-core considering $\bA_i$ to be \emph{dense} within the band. \section{Brief implementation details} \label{s:implementation} \subsection{Dense banded matrix factorization details} \label{ss:impl-details} \input{spikegpu_impl.tex} \subsection{{\DB} reordering implementation details} \label{ss:db-impl} \input{db_impl.tex} \subsection{{\CM} reordering implementation details} \label{ss:cm-impl} \input{cm_impl.tex} \subsection{{\SpikeHyb}--components and computational flow} \label{ss:overallAlg} \input{overall_algorithm.tex} \section{Numerical Experiments} \label{s:experiments} The next three subsections summarize results from three numerical experiments concerned, in this order, with the solution of dense banded linear systems, sparse matrix reordering, and the solution of sparse linear systems. The subsection order is meant to emphasize that dense banded linear system solution and matrix reordering are two prerequisites for an effective sparse linear system implementation in {\SpikeHyb}. The hardware/software setup for these numerical experiments is as follows. The GPU used was Tesla K20X \cite{TeslaK20,TeslaK20-datasheet}. {\SpikeHyb} uses {\texttt{CUDA 7.0}} \cite{cudaProgGuide}, {\texttt{cusp}} \cite{Cusp2012}, and {\texttt{Thrust}} \cite{thrust}. The CPU used was the 3GHz, 25 MB last level cache, Intel Xeon E5-2690v2. The node used hosted two such CPUs, which is the maximum possible for this type of chip, for a total of 20 cores executing up to 40 HTT threads. The two-CPU node was used to run Intel's MKL version 13.0.1, {\Pardiso} \cite{schenk2004solving}, {\MUMPS} \cite{mumps}, {\SuperLU} \cite{demmel2011superlu}, and Harwell's {\MCs} and {\MCsf} \cite{HSL}. 
Unless otherwise stated, all times reported are in seconds and were obtained on a dedicated machine. In an attempt to avoid warm-up overhead, the results reported represent averages that drew on multiple successive identical runs. When reporting below the results of several numerical experiments, one legitimate question is whether it makes sense to compare performance results obtained on one GPU with results obtained on two multicore CPUs. The multicore CPU is not the fastest, as Intel chips with more cores are presently available. Additionally, the Intel chip's microarchitecture is not Haswell, which is more recent than the Ivy Bridge microarchitecture of the Xeon E5-2690v2. Likewise, on the GPU side, one could have used a Tesla K80 card, which has roughly four times more memory than K20x and twice its memory bandwidth. Moreover, price-wise, the K80 would have been closer to the cost of two CPUs than K20x is. Finally, Kepler is not the latest microarchitecture either, since Maxwell currently enjoys that status. We do not attempt to settle this question and hope that the interested reader will modulate this study's conclusions by factoring in unavoidable CPU--GPU hardware differences. No claim is made herein of one architecture being superior, since such a claim could be easily proved wrong by moving from algorithm to algorithm or from discipline to discipline. The sole and narrow purpose of this section is to report on how apt {\SpikeHyb} is in tackling linear algebra tasks. To that end, its performance is compared to that of established solutions running on CPUs, as well as to that of a recent GPU library.
\subsection{Numerical experiments related to dense banded linear systems} \label{ss:theDBcase} The discussion in this subsection draws on a subset of results reported in \cite{AngSPIKEMKL-2014} and presents results pertaining to the influence on {\SaP}'s time to solution of the number of partitions $P$ and of the diagonal dominance $d$ of the coefficient matrix, as well as a comparison against Intel's MKL solver over a spectrum of problem dimensions $N$ and half bandwidth values $K$. \subsubsection{Sensitivity with respect to $P$} \label{sss:sensitivityWRT-P} The entire {\SpikeHyb} solution for dense banded linear systems is implemented on the GPU. We first carried out a sensitivity analysis of the time to solution with respect to the number of partitions. The results are summarized in Fig. \ref{f:P-sweep}. This behavior; i.e., relatively small gains after a threshold value of $P$, is typical. As a rule of thumb, some experimentation is necessary to find an optimal $P$ value. Otherwise, a conservatively large value should be picked in the neighborhood of 50 or above. For {\SaPD}, larger values of $P$ help with load balancing, particularly for GPUs with many stream multiprocessors. The same argument can be made for {\SaPC}, with the caveat that the spike truncation factor comes into play in a fashion that is modulated by the value of $d$. \begin{figure} \centering \input{p_sweep_tikz} \caption{Time to solution as a function of the number of partitions $P$. Study carried out for a dense banded linear system with $N=\SI{200000}{}$, $K=200$, and $d=1$.} \label{f:P-sweep} \end{figure} \input{sweepP4DvsCdense} \subsubsection{Sensitivity with respect to $d$} \label{sss:sensitivityWRT-d} Next, we report on the performance of {\SpikeHyb} for a dense banded linear system with $N=\SI{200000}{}$ and $K=200$, for degrees of diagonal dominance in the range $0.06 \le d \le 1.2$, see Eq.~(\ref{eq:diagDominanceDef}). The entries in the matrix are randomly generated and $P=50$. 
The findings are summarized in Fig.~\ref{f:SPIKE_MKL_banded:d}, where {\SaPC} and {\SaPD} are compared against the banded linear solver in {\MKL}. When $d>1$, the impact of the truncation becomes increasingly irrelevant, a situation that places {\SpikeHyb} at an advantage. As such, there is no reason to go beyond $d=1.2$ since, if anything, the results only get better. The more interesting range is $d<1$, when the diagonal dominance requirement is violated. The {\SpikeHyb} solver demonstrates uniform performance over a wide range of degrees of diagonal dominance. For instance, {\SaPC} typically required less than one Krylov iteration for all $d > 0.08$. As the degree of diagonal dominance decreases further, the number of iterations and hence the time to solution increase significantly as a consequence of truncating the spikes that now contain non-negligible values. \begin{figure}[ht] \centering \input{d_sweep_tikz} \caption{Influence of the diagonal dominance $d$, with $0.06 \le d \le 1.2$, for fixed values $N=\SI{200000}{}$, $K=200$ and $P = 50$.} \label{f:SPIKE_MKL_banded:d} \end{figure} \input{sweepd4DvsCdense} \subsubsection{Comparison with Intel's MKL over a spectrum of $N$ and $K$} \label{sss:sensitivityWRT-NK} \input{sweep2D_DvsC_dense.tex} \subsection{Numerical experiments related to sparse matrix reorderings} \label{ss:reorderingNumExp} When solving sparse linear systems, {\SaP} reformulates the sparse problem as a dense banded linear system that is subsequently solved using {\SaPC} or {\SaPD}. Ideally, the ``sparse--to--dense'' transition yields a coefficient matrix that is diagonally heavy; i.e., has a large $d$, and has a small bandwidth $K$. Two matrix reorderings are applied in an attempt to meet these two objectives. The first one; i.e., the diagonal boosting reordering, is assessed in \S\ref{sss:diagBoostReord}. The second one; i.e., the bandwidth reduction reordering, is evaluated in \S\ref{sss:BWreduction}.
\subsubsection{Assessment of the diagonal boosting reordering solution} \label{sss:diagBoostReord} The first set of results, summarized in Fig. \ref{fig:db-speedup}, corresponds to an efficiency comparison between the hybrid CPU--GPU implementation of \S\ref{ss:db-impl} and the Harwell Subroutine Library (HSL) {\MCsf} algorithm \cite{HSL}. The hybrid implementation outperformed {\MCsf} for 96 out of the 116 matrices selected from the Florida Sparse Matrix Collection~\cite{davis2011university}. The left pane in Fig.~\ref{fig:db-speedup} presents results of a statistical analysis that used a median-quartile method to measure the spread of the {\MCsf} and {\DB} times to solution. Assume that $T_\alpha^{\DB}$ and $T_\alpha^{\MCsf}$ represent the times required by {\DB} and {\MCsf}, respectively, to complete the diagonal boosting reordering in test $\alpha$. A relative speedup is computed as \begin{equation} {\cal S}_\alpha^{\DB-\MCsf} = \log_2 \frac{T_\alpha^{\MCsf}}{T_\alpha^{\DB}} \, . \label{eq:spreadDefinitionDB} \end{equation} \noindent These ${\cal S}_\alpha^{\DB-\MCsf}$ values, which can be either positive or negative, are collected in a set ${\cal S}^{\DB-\MCsf}$, which is used to generate the left box plot in Fig.~\ref{f:boxPlotSparseSolvers}. The number of tests used to produce these statistical results was 116. Note that a positive value means that {\DB} is faster than {\MCsf}, with the opposite outcome being the case for negative values of ${\cal S}_\alpha^{\DB-\MCsf}$. The median value of ${\cal S}^{\DB-\MCsf}$ was $1.2423$; since $2^{1.2423} \approx 2.37$, this indicates that half of the 116 tests ran more than 2.3 times faster using the {\DB} implementation. On average, it turns out that the larger the matrix, the faster the {\DB} solution becomes. Indeed, as a case study, we analyzed a subset of larger matrices. The ``large'' attribute was defined in two ways: first, by considering the matrix size, and second, by considering the number of nonzero elements.
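The metric of Eq.~(\ref{eq:spreadDefinitionDB}) and its median are straightforward to reproduce; a small Python/NumPy sketch on hypothetical timings (the actual 116 timing pairs are not listed here):

```python
import numpy as np

# hypothetical per-test reordering times, in seconds
T_mc64 = np.array([0.9, 2.4, 10.0, 0.5])
T_db   = np.array([0.4, 1.0,  3.8, 0.7])

S = np.log2(T_mc64 / T_db)   # positive entries: DB faster than MC64
med = np.median(S)
print(S.round(3), med.round(4), (2.0 ** med).round(3))
```

A median $m$ means that half of the tests ran at least $2^m$ times faster with {\DB}; the quoted median of $1.2423$ thus corresponds to the factor $2^{1.2423}\approx 2.37$.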
For the 116 matrices considered, we picked the largest 24 of them; i.e., approximately the largest 20\%. To this end, in the first case, we selected all matrices whose dimension was higher than $N=$\SI{150000}{}. In the second case, we selected all matrices whose number of nonzero elements was larger than \SI{4350000}{}. For large $N$, the median was 1.6255, while for matrices with many nonzero elements, the median was 1.7276. In other words, half of the large tests ran more than three times faster in {\DB}. Finally, the statistical results in Fig.~\ref{f:boxPlotSparseSolvers} indicate that, for large tests, with the exception of two outliers, there were no tests for which ${\cal S}_\alpha^{\DB-\MCsf}$ was negative; i.e., aside from these outliers, {\DB} was faster. When all 116 tests were considered, {\MCsf} was faster in several cases, with an outlier for which {\MCsf} was four times faster than {\DB}. Two facts emerged at the end of this analysis. First, as discussed in \cite{AngMC64-2014}, the bottleneck in the diagonal boosting reordering was either the {\DBS{2}} stage; i.e., finding the initial match, or the {\DBS{3}} stage; i.e., finding a perfect match, with an approximately equal split between them. Second, the quality of the reordering turned out to be identical: almost all matrices displayed the same grand product of the diagonal entries regardless of whether the reordering was carried out using {\MCsf} or {\DB}. \begin{figure} \centering \includegraphics[width=\textwidth]{Figs/largeMatResDB.png} \caption{Results of a statistical analysis that uses a median-quartile method to measure the spread of the {\MCsf} and {\DB} times to solution.
The speedup factor, or performance metric, is computed as in Eq.~(\ref{eq:spreadDefinitionDB}).} \label{fig:db-speedup} \end{figure} \subsubsection{Assessment of the bandwidth reduction solution} \label{sss:BWreduction} The performance of the {\CM} solution implemented in {\SaP} was evaluated on a set of 125 sparse matrices from various applications. These matrices were the 116 used in the previous section plus several other matrices, such as ANCF31770, ANCF88950, and NetANCF\_40by40, that arise in granular dynamics and the implicit integration of flexible multi-body dynamics~\cite{luningThesis2015,LuningTechReport2014,serban2015}. Figure~\ref{fig:allMatResCM} presents results of a statistical analysis that used a median-quartile method to compare ($i$) the half bandwidths of the matrices obtained by Harwell's {\MCs} and {\SaP}'s {\CM}; and ($ii$) the time to solution; i.e., time to complete a band-reducing reordering. For ($i$), the quantity reported is the relative difference between the resulting bandwidths, \[ r_K \equiv 100 \times \frac{K_{\MCs}-K_{\CM}}{K_{\CM}} \, , \] where $K_{\MCs}$ and $K_{\CM}$ are, respectively, the half bandwidths $K$ of the matrices produced by {\MCs} and {\CM}. For ($ii$), the metric used was identical to the one introduced in Eq.~(\ref{eq:spreadDefinitionDB}). Note that {\CM} is superior when $r_K$ assumes large positive values, which are also desirable for the time-to-solution plot. As far as $r_K$ is concerned, the median value is $0\%$; i.e., out of 125 matrices, about half are better off being reordered by Harwell's {\MCs}, while the other half is better off reordered by {\SaP}'s {\CM}. On the positive side, the number of outliers for {\CM} is higher, indicating that there is a propensity for {\CM} to ``win big''. In terms of times to solution, {\MCs} is marginally faster than {\CM}'s hybrid CPU/GPU solution.
Indeed, the median value of the performance metric is $-0.1057$; i.e., it takes half of the tests run with {\CM} at least $1.076$ times longer to complete the bandwidth reduction task. \begin{figure} [ht] \centering \includegraphics[width=0.8\textwidth]{Figs/allMatResCM.png} \caption{Comparison of the Harwell {\MCs} and {\SaP}'s {\CM} implementations in terms of resulting half bandwidth $K$ and time to solution.} \label{fig:allMatResCM} \end{figure} It is insightful to discuss what happens when this statistical analysis is controlled to only consider larger matrices. The results of this analysis are captured in Fig.~\ref{fig:largeMatResCM}. Just as in \S\ref{sss:diagBoostReord}, the focus is on the largest 20\% matrices, where ``large'' is understood to mean large matrix dimension $N$, and then separately, large number of nonzeros $nnz$. Incidentally, the cut-off value for the dimension was $N=$\SI{215000}{}, while that for the number of nonzeros was $nnz=$\SI{7800000}{}. When the statistical analysis included the 25 largest matrices based on size $N$, the median value for the half bandwidth metric $r_K$ was yet again $0.0\%$. The median value for time to solution changed, however, from $-0.1057$ to $0.6964$, indicating that for half of these large tests {\SaP} ran more than $1.6$ times faster than the Harwell solution. Qualitatively, the same conclusions were reached when the 25 large matrices were selected on the grounds of $nnz$ count. The median for $r_K$ was $0.4182\%$, which again suggested that the relative difference in the resulting bandwidth $K$ yielded by {\CM} and {\MCs} was practically negligible. The median time to solution was the same $0.6964$. Note though that according to the results shown in Fig.~\ref{fig:largeMatResCM}, there is no large--$nnz$ test for which the Harwell implementation is faster than the {\CM}. In fact, 25\% of the large tests; i.e., about five tests, ran at least three times faster in {\CM}.
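Both comparison metrics are one-liners; a sketch with hypothetical per-matrix bandwidths (Python; the value $0.1057$ comes from the median quoted above):

```python
# hypothetical half-bandwidths produced by MC60 and by SaP's CM for one matrix
K_mc60, K_cm = 190, 141
r_K = 100.0 * (K_mc60 - K_cm) / K_cm       # positive: CM produced the smaller K
slowdown = 2.0 ** 0.1057                   # time-metric median of -0.1057 for CM
print(round(r_K, 1), round(slowdown, 3))   # 34.8 1.076
```

This also confirms the arithmetic in the text: a median of $-0.1057$ for the $\log_2$ time metric corresponds to a $2^{0.1057}\approx 1.076$ slowdown.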
\begin{figure} [ht] \centering \includegraphics[width=0.8\textwidth]{Figs/largeMatResCM.png} \caption{Comparison of the Harwell {\MCs} and {\SaP}'s {\CM} implementations in terms of resulting half bandwidth $K$ and time to solution. Statistical analysis of large matrices only.} \label{fig:largeMatResCM} \end{figure} Finally, it is worth pointing out the correlations between times to solution and $K$ values, on the one hand, and $N$ and $nnz$, on the other hand. Herein, the correlation used is the Pearson product-moment correlation coefficient \cite{Box1978}. As a rule of thumb, a Pearson correlation coefficient of 0.01 to 0.19 suggests a negligible relationship, while a coefficient between 0.7 and 1.0 indicates a strong positive relationship. The correlation coefficient between the bandwidth and the dimension $N$ of the matrix turns out to be small; i.e., $0.15$ for {\MCs} and $0.16$ for {\CM}. Indeed, the fact that a matrix is large does not say much about what $K$ value one can expect upon reordering this matrix. The correlation between the matrix dimension $N$ and the time needed to compute the reordering is, however, very high. In other words, the larger the matrix size $N$, the longer the time to produce the reordering. For instance, the correlation coefficient was $0.91$ for {\MCs} and $0.81$ for {\CM}. The same observation holds for the number of nonzero entries: when there are many of them, the time to produce a reordering is large. The Pearson correlation coefficient is $0.71$ for {\MCs} and $0.83$ for {\CM}. These correlation coefficients were obtained on a sample size of 125 matrices. Yet the same trends are manifest for the reduced set of 25 large matrices that we worked with. For instance, the correlation between dimension $N$ and resulting $K$ is very small at large $N$ values: $0.04$ for {\MCs} and $0.05$ for {\CM}. For the time to solution, the correlation coefficients with respect to $N$ are $0.89$ for {\MCs} and $0.76$ for {\CM}.
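The Pearson coefficients above are computed in the standard way; a Python/NumPy sketch on synthetic data mimicking the observed behavior (the data, scaling constants, and seed are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.integers(10_000, 500_000, size=125).astype(float)   # matrix dimensions
t = 2e-6 * N * (1 + 0.2 * rng.standard_normal(125))         # reordering times grow with N
K = rng.integers(50, 5_000, size=125).astype(float)         # bandwidths, unrelated to N

corr_time = np.corrcoef(N, t)[0, 1]   # strong positive relationship
corr_K    = np.corrcoef(N, K)[0, 1]   # negligible relationship
print(round(corr_time, 2), round(corr_K, 2))
```

The contrast between the two coefficients reproduces, qualitatively, the time-versus-$N$ and $K$-versus-$N$ behavior reported above.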
\FloatBarrier \subsection{Numerical experiments related to sparse linear systems} \label{ss:sparseLinSysExp} \subsubsection{Profiling results} \label{sss:profilingStudy} \input{profilingResultsSaP.tex} \subsubsection{The impact of the third-stage reordering} \label{sss:TSR} \input{3rdstageReordering.tex} \subsubsection{Comparison against state of the art} \label{sss:compareAgainstOtherSS} \input{spSolvComp-vs-CPUsols.tex} \subsubsection{Comparison against another GPU solver} \label{sss:compareAgainst-cuSolver} \input{spSolvComp-vs-GPUsols.tex} \FloatBarrier \section{Conclusions and future work} \label{s:conclusions} This contribution discusses parallel strategies to ($i$) solve dense banded linear systems; ($ii$) solve sparse linear systems; and ($iii$) perform matrix reorderings for diagonal boosting and bandwidth reduction. The salient feature shared by these strategies is that they are designed to run in parallel on GPU cards. BSD3 open-source implementations of all these strategies are available at \cite{SaP_git,SaPWebsite} as part of a software package called {\SaP}. As far as the parallel solution of linear systems is concerned, the strategies discussed are in-core; i.e., there is no host--device (CPU--GPU) memory swapping, which somewhat limits the size of the problems that can be presently solved by {\SaP}. Over a broad range of dense matrix sizes and bandwidths, {\SaP} is likely to run two times faster than Intel's MKL. This conclusion should be modulated by hardware considerations and by the observation that the diagonal dominance of the dense banded matrix is a performance factor. On the sparse linear system side, the most surprising result was the robustness of {\SaP}. Out of a set of 114 tests, most of them using matrices from the University of Florida sparse matrix collection, {\SaP} failed only 28 times, of which 23 were ``out-of-memory'' failures owing to a 6 GB limit on the size of the GPU memory.
In terms of performance, {\SaP} was compared against {\Pardiso}, {\MUMPS}, and {\SuperLU}. We noticed a perfect negative correlation between robustness and time to solution: the faster a solver, the less robust it was. In this context, {\Pardiso} was the fastest, followed by {\MUMPS}, {\SaP}, and {\SuperLU}. Surprisingly, the straight split-and-parallelize strategy, without the coupling involved in the SPIKE-type strategy, emerged as the solution approach more often adopted by {\SaP}. The implementation of {\SaP} is somewhat peculiar in that the sparse solver builds on top of the dense banded one. The sparse--to--dense transition occurs via two reorderings: one that boosts the diagonal entries and one that reduces the matrix bandwidth. Herein, they were implemented as CPU/GPU hybrid solutions, which were compared against Harwell's implementations and found to be twice as fast for the diagonal boosting reordering, and of comparable speed for the bandwidth reduction. Many issues remain to be investigated at this point. First, given that more than 50\% of the time to solution is spent in the iterative solver, it is worth considering the techniques analyzed in \cite{SpMV-TR2015}, which sometimes double the flop rate in sparse matrix-vector multiplication operations upon changing the matrix storage scheme; i.e., moving from CSR to ELL or hybrid. Second, an out-of-core and/or multi-GPU implementation would enable {\SaP} to handle larger problems while possibly reducing time to solution. Third, the {\CM} bandwidth reduction strategy implemented is dated; spectral and/or hyper-graph partitioning for load balancing should lead to superior splitting of the coefficient matrix. Finally, as it stands, with the exception of parts of the matrix reordering, {\SaP} is entirely a GPU solution. It would be worth investigating how the CPU can be involved in other phases of the implementation.
Such an investigation would be well justified given the imminent tight integration of the CPU and GPU memories. \section*{Acknowledgments} This work was funded through National Science Foundation grant SI2-SSE 1147337 and benefited from many discussions the authors had with Matt Knepley and Ahmed Sameh.
\section{Introduction} The analogy between graphs and algebraic curves has been a source of inspiration both in combinatorics and algebraic geometry. In this frame of mind, M. Kotani and T. Sunada (see \cite{KS}) introduced the Albanese torus, $\operatorname{Alb}(\Gamma)$, and the Jacobian torus, $\operatorname{Jac}(\Gamma)$, of a graph $\Gamma$; see section~\ref{at} for the precise definition. By \cite{KS}, $\operatorname{Alb}(\Gamma)$ and $\operatorname{Jac}(\Gamma)$ are dual flat tori of dimension equal to $b_1(\Gamma)$, the first Betti number of $\Gamma$. As $b_1(\Gamma)$ is the maximum number of linearly independent cycles in $\Gamma$, it can be viewed as the analog for a graph of the genus of a Riemann surface. In analogy with the classical Torelli theorem for curves, it is natural to ask the following question: \begin{prob} \label{p1} When are two graphs $\Gamma$ and $\Gamma'$ such that $\operatorname{Alb}(\Gamma)\cong \operatorname{Alb}(\Gamma')$? \end{prob} There exist in the literature other versions of such a problem (see for example \cite{BdlHN}, or \cite{BN}); the statement of Problem~\ref{p1} is due to T. Sunada. One of the goals of this paper is to answer the above question. In our Theorem~\ref{main-thm}, we prove that $\operatorname{Alb}(\Gamma)\cong \operatorname{Alb}(\Gamma')$ if and only if the two graphs obtained from $\Gamma$ and $\Gamma'$ by contracting all of their separating edges are cyclically equivalent (or 2-isomorphic, cf. Definition~\ref{cyc-equiv}). Using a result of Whitney, we obtain that the Torelli theorem is true for $3$-connected graphs; see Corollary \ref{main-cor}. This answers a problem implicitly posed in \cite[Page 197]{BdlHN}, where the authors ask, albeit indirectly, whether there exist two non isomorphic, $3$-connected graphs with isomorphic Albanese torus. Let us now turn to another, recently discovered aspect of the analogy between graphs and curves, that is, the tight connection between tropical curves and graphs. 
By results of G. Mikhalkin and I. Zharkov, see \cite{MIK3} and \cite{MZ}, there exists a natural bijection between the set of tropical equivalence classes of compact tropical curves and metric graphs all of whose vertices have valence at least 3. Observe now that compact tropical curves, just like compact Riemann surfaces, are endowed with a Jacobian variety, which is a principally polarized tropical Abelian variety; see Section~\ref{trp} for details. The following Torelli-type question arises \begin{prob} \label{p2} Can two compact tropical curves have isomorphic Jacobian varieties? If so, when? \end{prob} It is well known (see \cite[Sect. 6.4]{MZ}) that the answer to the first part of this question is ``yes''. In Theorem~\ref{main-thmt} we precisely characterize which tropical curves have the same Jacobian variety. In particular, we prove that for curves whose associated graph is 3-connected, the Torelli theorem holds in strong form, i.e. two such curves are tropically equivalent if and only if their polarized Jacobians are isomorphic. The proof of Theorem~\ref{main-thmt} is based on a Torelli theorem for metric graphs, Theorem~\ref{main-thml}, which is interesting in its own right, and uses essentially the same ideas as the proof of Theorem~\ref{main-thm}. The statement of Theorem~\ref{main-thml} is slightly more technical, but can be phrased as follows: two metric graphs have the same Albanese torus if and only if they have the same 3-edge connected class (defined in \ref{3ec} and \ref{3ecl}). A key ingredient turns out to be the Delaunay decomposition ${\rm Del}(\Gamma)$ of a graph $\Gamma$. ${\rm Del}(\Gamma)$ is well known to be a powerful tool, and has been investigated in, among others, \cite{nam}, \cite{OS} and \cite{alex}, which have been quite useful in the writing of this paper. In Proposition \ref{Del-equ}, we characterize when two graphs have the same Delaunay decomposition.
The last section of the paper gives other characterizations of a graph, or rather, of the 3-edge connected class of a graph. These characterizations, given in Theorem~\ref{final-thm}, use three remarkable posets (i.e. partially ordered sets), $\mathcal{SP}_{\Gamma}$, $\mathcal{OP}_{\Gamma}$ and $\overline{\mathcal{OP}_{\Gamma}}$. The poset $\mathcal{SP}_{\Gamma}$ is the set of spanning subgraphs of $\Gamma$ that are free from separating edges. The maximal elements of $\mathcal{SP}_{\Gamma}$ are the so-called C1-sets (see Definition~\ref{C1}), which play a crucial role in the previous sections. The two posets $\mathcal{OP}_{\Gamma}$ and $\overline{\mathcal{OP}_{\Gamma}}$, defined in Section~\ref{op}, are associated to totally cyclic orientations; we conjecture a geometric interpretation for them in \ref{geo-conj}, relating to an interesting question posed in \cite{BdlHN}. Not only is this last section related to the Torelli theorems in the previous parts, but our interest in it is also motivated by a different, open, Torelli problem. The material of Section~\ref{pos} will in fact be applied in our ongoing project, \cite{CV}, in order to describe the combinatorial structure of the compactified Jacobian of a singular algebraic curve, and generalize the Torelli theorem to stable curves. In the Appendix, assuming some natural facts about the Torelli map $t_g^{\rm trop}:M_g^{\rm trop}\to A_g^{\rm trop}$ (facts that are commonly expected, yet still to be fully settled in the literature), we prove that $t_g^{\rm trop}$ is of tropical degree one to its image, even though it is not injective; see Theorem \ref{MZ-conj}. This proves a conjecture of Mikhalkin-Zharkov (see \cite[Sect. 6.4]{MZ}). \emph{Acknowledgements.} We thank L. Babai for a stimulating e-mail correspondence, M. Baker for pointing us to the paper \cite{art}, and G. Mikhalkin and I. Zharkov for precious comments on the tropical Torelli map, which prompted us to add the Appendix.
The second author would like to thank T. Sunada for a series of lectures at Humboldt University of Berlin, during which he learnt about the Torelli problem for graphs, and G. Mikhalkin for a series of lectures at the INdAM workshop ``Geometry of projective varieties'', during which he learnt about the Torelli problem for tropical curves. Finally, we benefitted from a very thoughtful report by an anonymous referee, to whom we are grateful. \section{Preliminaries} \subsection{The Albanese torus of a graph} \label{at} Throughout the paper $\Gamma$ will be a finite graph (loops and multiple edges are allowed); we denote by $V(\Gamma)$ its set of vertices and by $E(\Gamma)$ its set of edges. We recall the definition of the Albanese torus, from \cite{KS}. Fix an orientation of $\Gamma$ and let $s, t: E(\Gamma)\to V(\Gamma)$ be the two maps sending an oriented edge to its source and target point, respectively. Notice that the Albanese torus will not depend on the chosen orientation. Consider the spaces of chains of $\Gamma$ with values in an abelian group $A$: $$ C_0(\Gamma, A):=\oplus_{v\in V(\Gamma)} A \cdot v ,\hskip.8in C_1(\Gamma, A):=\oplus_{e\in E(\Gamma)}A \cdot e . $$ Define, as usual, a boundary map $$\begin{aligned} \partial : C_1(\Gamma, A) & \longrightarrow C_0(\Gamma, A) \\ e & \mapsto t(e)-s(e). \end{aligned}$$ The first homology group of $\Gamma$ with values in $A$ is $H_1(\Gamma, A):=\ker\partial$. If $A=\mathbb{R}$, we define the scalar product $(,)$ on $C_1(\Gamma,\mathbb{R})$ by $$(e,e')=\begin{cases} 1 & \text{ if } e=e', \\ 0 & \text{ otherwise.} \end{cases} $$ We continue to denote by $(,)$ the induced scalar product on $H_1(\Gamma,\mathbb{R})$. The subspace $H_1(\Gamma,\mathbb{Z})$ is a lattice inside $H_1(\Gamma,\mathbb{R})$.
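As a quick illustration of these definitions (a standard computation, not taken from \cite{KS}), consider the smallest graph with two independent cycles:

```latex
\begin{example}
Let $\Gamma$ be the ``theta graph'': two vertices $v,w$ joined by three
edges $e_1,e_2,e_3$, each oriented from $v$ to $w$. Since
$\partial e_i=w-v$ for all $i$, the cycles $\alpha_1=e_1-e_2$ and
$\alpha_2=e_2-e_3$ form a basis of $H_1(\Gamma,\mathbb{Z})$. The Gram
matrix of $(,)$ in this basis is
$$\begin{pmatrix} (\alpha_1,\alpha_1) & (\alpha_1,\alpha_2) \\ (\alpha_2,\alpha_1) & (\alpha_2,\alpha_2) \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},$$
so the flat torus $H_1(\Gamma,\mathbb{R})/H_1(\Gamma,\mathbb{Z})$ is the
two-dimensional torus associated to the hexagonal lattice.
\end{example}
```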
\begin{defi}\cite{KS} \label{alb} The {\it Albanese torus} $\operatorname{Alb}(\Gamma)$ of $\Gamma$ is $$ \operatorname{Alb}(\Gamma):=\Bigl(H_1(\Gamma,\mathbb{R})/H_1(\Gamma,\mathbb{Z}); (,)\Bigr) $$ with the flat metric derived from the scalar product $(,)$. \end{defi} We have $\dim \operatorname{Alb}(\Gamma)=b_1(\Gamma)$ where $b_1(\Gamma)$ is the first Betti number: $$b_1(\Gamma)={\rm{rank}_{\mathbb{Z}}} H_1(\Gamma, \mathbb{Z})=\#\{\text{connected components of } \Gamma\} - \#V(\Gamma) +\#E(\Gamma). $$ There is also the cohomological version of the previous construction (we refer to \cite{KS} for the details). One obtains another torus, called the {\it Jacobian torus} $\operatorname{Jac}(\Gamma)$, which has the following form: $ \operatorname{Jac}(\Gamma):=(H^1(\Gamma,\mathbb{R})/H^1(\Gamma,\mathbb{Z});\langle,\rangle). $ As we said, $\operatorname{Jac}(\Gamma)$ and $\operatorname{Alb}(\Gamma)$ are dual flat tori. There exist in the literature several definitions of the Albanese and Jacobian torus of a graph, related to one another by means of standard dualities. In particular, we need to briefly explain the relation with \cite{BdlHN}. Our lattice $H^1(\Gamma,\mathbb{Z})$ is the dual lattice, in $(H^1(\Gamma,\mathbb{R});\langle,\rangle)$, of the so-called lattice of integral flows $\Lambda^1(\Gamma)\subset H^1(\Gamma,\mathbb{R})$ studied in \cite{BdlHN}. In particular, the Albanese torus $\operatorname{Alb}(\Gamma)$ determines the lattice $\Lambda^1(\Gamma)$ and conversely (see Proposition 3 of loc.cit.). \subsection{Cyclic equivalence and connectivity} \label{Sec2.2} \begin{nota} \label{notgraph} We set some notation that will be used throughout. Let $S\subset E(\Gamma)$ be a subset of edges of a graph $\Gamma$. 
We associate to $S$ two graphs, denoted $\Gamma\smallsetminus S$ and $\Gamma(S)$, as follows $\bullet$ The graph $\Gamma\smallsetminus S$ is, as the notation indicates, obtained from $\Gamma$ by removing the edges in $S$ and by leaving the vertices unchanged. Thus $V(\Gamma\smallsetminus S)=V(\Gamma)$ (so that $\Gamma\smallsetminus S$ is a spanning subgraph) and $E(\Gamma\smallsetminus S)=E(\Gamma)\smallsetminus S$. $\bullet$ The graph $\Gamma(S)$ is obtained from $\Gamma$ by contracting all the edges not in $S$, so that the set of edges of $\Gamma(S)$ is equal to $S$. There is a surjective contraction map $\Gamma \to \Gamma(S)$ which contracts to a point every connected component of $\Gamma \smallsetminus S$. Notice that $\Gamma(S)$ is connected if and only if so is $\Gamma$. For example, $\Gamma(E(\Gamma))=\Gamma$, and, if $c$ is the number of connected components of $\Gamma$, then $\Gamma (\emptyset)$ is a set of $c$ isolated points (i.e. $\Gamma (\emptyset)$ has $c$ vertices and no edges). \begin{example}\label{disGamma(S)} Here is an example of a $\Gamma(S)$, with the contraction map $\Gamma\to\Gamma(S)$, where $S=\{e_1, e_2\}\subset E(\Gamma)$: \begin{figure}[!htp] $$\xymatrix@=1pc{ &*{\bullet} \ar@{-}[rr] \ar@{-}[dd] \ar@{-}@/_/[dd] & &*{\bullet} \ar@{-}[dd] \ar@{-}@/_/[ll]\ar@{-}[rr]^{e_1}& & *{\bullet} \ar@{-}[dd] \ar@{-}@/^.5pc/[rrd]\ar@{-}[drr]&&&&&& & & &\\ \Gamma \, = &&&& & &&*{\bullet} \ar@{-}@(ur,dr) &&\ar[rr] & &&*{\bullet} \ar@{-}@/^.5pc/[rr]^{e_1} \ar@{-}@/_.5pc/[rr]_{e_2} &&*{\bullet} & \, = \Gamma(S) \\ & *{\bullet} \ar@{-}[rr] & &*{\bullet} \ar@{-}[rr]^{e_2}& & *{\bullet} \ar@{-}[urr]&& &&&& & && }$$ \caption{Example of $\Gamma(S)$ with $S=\{e_1, e_2\}$.} \end{figure} \end{example} We have the useful additive formula \begin{equation} \label{b1} b_1(\Gamma)=b_1(\Gamma\smallsetminus S)+b_1(\Gamma(S)). 
\end{equation} If $\Gamma $ is a connected graph, a {\it separating edge} is an $e\in E(\Gamma)$ such that $\Gamma \smallsetminus e$ is not connected. If $\Gamma$ is not connected we say that an edge is separating if it is separating for the connected component containing it. We denote by $E(\Gamma)_{{\rm sep}}$ the set of separating edges of $\Gamma $. We say that a graph $\Delta$ is a {\it cycle} if it is connected, free from separating edges and if $b_1(\Delta)=1$. We call $\#E(\Delta)=\#V(\Delta)$ the length of $\Delta$. \end{nota} \begin{defi}\label{cyc-equiv} Let $\Gamma$ and $\Gamma'$ be two graphs. We say that a bijection between their edges, $\epsilon:E(\Gamma)\to E(\Gamma ')$, is {\it cyclic} if it induces a bijection between the cycles of $\Gamma$ and the cycles of $\Gamma'$. We say that $\Gamma$ and $\Gamma'$ are {\it cyclically equivalent} or {\it 2-isomorphic}, and we write $\Gamma\equiv_{\rm cyc} \Gamma'$, if there exists a cyclic bijection $\epsilon:E(\Gamma)\to E(\Gamma')$. The cyclic equivalence class of $\Gamma$ will be denoted by $[\Gamma]_{\rm cyc}$. \end{defi} $[\Gamma]_{\rm cyc}$ is described by the following result of Whitney (see also \cite[Sec. 5.3]{Oxl}). \begin{thm}[\cite{Whi}]\label{cycequ-moves} Two graphs $\Gamma$ and $\Gamma'$ are cyclically equivalent if and only if they can be obtained from one another via iterated applications of the following two moves: \begin{enumerate} \item[(1)] Vertex gluing: $v_1$ and $v_2$ are identified to the separating vertex $v$, and conversely (so that $\Gamma_1\coprod \Gamma _2 \equiv_{\rm{cyc}}\Gamma$). 
\begin{figure}[!htp] $$\xymatrix@=1pc{ &&*{\bullet} \ar@{-}[dd]\ar@{-}@/_.5pc/[dd]\ar@{-}[drr]^>{v_1}&& &&*{\bullet}\ar@{-}[dd] \ar@{-}[dl]_>{v_2} \ar@{-}[dr] & & && & *{\bullet} \ar@{-}[dd]\ar@{-}@/_.5pc/[dd]\ar@{-}[drr]^>{v}&& &*{\bullet}\ar@{-}[dd] \ar@{-}[dl] \ar@{-}[dr] & &\\ \Gamma_1\ar@{=}[r]&&&&*{\bullet}&*{\bullet} \ar@{-}[rr] |!{[r]}\hole & &*{\bullet} \ar@{=}[r]& \Gamma_2 &\equiv_{\rm{cyc}} && &&*{\bullet} \ar@{-}[rr] |!{[r]}\hole & &*{\bullet}\ar@{=}[r] &\Gamma \\ &&*{\bullet} \ar@{-}[urr]&&&&*{\bullet}\ar@{-}[ru]\ar@{-}[lu]& && & &*{\bullet} \ar@{-}[urr]&& & *{\bullet}\ar@{-}[ru]\ar@{-}[lu] }$$ \caption{Two graphs $\Gamma_1$ and $\Gamma_2$ attached at $v_1\in V(\Gamma_1)$ and $v_2\in V(\Gamma_2)$.} \label{vert-gluing} \end{figure} \item[(2)] Twisting: the double arrows below mean identifications. \begin{figure}[!htp] $$\xymatrix@=1pc{ *{\bullet} \ar@{-}[rr] \ar@{-}[dd] \ar@{-}@/_/[dd] & &*{\bullet} \ar@{-}[dd] \ar@{-}@/_/[ll]_<{u_1}\ar@{<->}[rr]& & *{\bullet} \ar@{-}[dd] \ar@{-}@/^.5pc/[rrd]^<{u_2}\ar@{-}[drr]&&&&&& *{\bullet} \ar@{-}[rr] \ar@{-}[dd] \ar@{-}@/_/[dd] & &*{\bullet} \ar@{-}[dd] \ar@{-}@/_/[ll]_<{u_1} \ar@{<->}[ddrr]&& *{\bullet} \ar@{-}[dd]\ar@{-}@/^.5pc/[rrd]^<{u_2}\ar@{-}[drr]&& \\ &&& & &&*{\bullet} \ar@{-}@(ur,dr) &&\equiv_{\rm{cyc}} &&&&& & &&*{\bullet} \ar@{-}@(ur,dr) \\ *{\bullet} \ar@{-}[rr]_>{v_1} & &*{\bullet} \ar@{<->}[rr]& & *{\bullet} \ar@{-}[urr]_<{v_2}&& &&&& *{\bullet} \ar@{-}[rr]_>{v_1} & &*{\bullet} \ar@{<->}[uurr]&& *{\bullet} \ar@{-}[urr]_<{v_2}&& }$$ \caption{A twisting at a separating pair of vertices.} \label{twist} \end{figure} \end{enumerate} \end{thm} Let us describe the above twisting move more precisely. Let $u, v$ be a pair of separating vertices of $\Gamma$. Then $\Gamma$ is obtained from two graphs, $\Gamma_1$ and $\Gamma_2$, by identifying two pairs of vertices as follows: Let $u_i, v_i\in V(\Gamma_i)$ for $i=1,2$. 
Then $\Gamma$ is given by attaching $\Gamma_1$ to $\Gamma _2$ by the two identifications $u_1=u_2=u$ and $v_1=v_2=v$. The twisting at the pair $u, v$ is the graph $\Gamma '$ obtained by attaching $\Gamma_1$ to $\Gamma _2$ by the two identifications $u_1=v_2$ and $v_1=u_2$. \ We now recall the definitions of connectivity (see for example \cite[Chap. 3]{Die}). Let $k\geq 1$ be an integer. A graph $\Gamma$ having at least $k+1$ vertices is said to be $k$-{\it connected} if the graph obtained from $\Gamma$ by removing any $k-1$ vertices, and all the edges adjacent to them, is connected. A graph $\Gamma$ having at least $2$ vertices is said to be $k$-{\it edge connected} if the graph obtained from $\Gamma$ by removing any $k-1$ edges is connected. If $\Gamma $ is $k$-connected it is also $k$-edge connected, but the converse fails. $\Gamma$ is $1$-connected, or $1$-edge connected, if and only if it is connected. $\Gamma$ is $2$-edge connected if and only if it is connected and $E(\Gamma)_{\rm sep}=\emptyset$. 3-edge connected graphs will play an important role, and will be characterized in Corollary~\ref{cor3}. \begin{remark} We shall frequently consider edge-contracting maps, for which we make the following useful observation. Let $\Gamma\to \Gamma '$ be a (surjective) map contracting some edge of $\Gamma$ to a point. Then, if $\Gamma$ is $k$-edge connected, so is $\Gamma'$. \end{remark} \begin{remark} \label{3v} If $\Gamma$ is 3-connected, the cyclic equivalence class of $\Gamma$ contains only $\Gamma$. Indeed, by Theorem~\ref{cycequ-moves} a move of type (1) can be performed only in the presence of a separating vertex, and a move of type (2) only in the presence of a separating pair of vertices. \end{remark} \subsection{C1-sets and connectivizations} \begin{defi}\label{C1} Let $\Gamma$ be a graph and $S\subset E(\Gamma)$. 
Suppose $\Gamma$ connected and $E(\Gamma)_{{\rm sep}}=\emptyset$; we say that $S$ is a \emph{C1-set} of $\Gamma$ if $\Gamma(S)$ is a cycle and if $\Gamma\smallsetminus S$ has no separating edge. In general, let $\widetilde{\Gamma}:=\Gamma \smallsetminus E(\Gamma)_{{\rm sep}}$. We say that $S$ is a C1-set of $\Gamma$ if $S$ is a C1-set of a connected component of $\widetilde{\Gamma}$. We denote by ${\operatorname{Set}}^1 \Gamma$ the set of C1-sets of $\Gamma$. \end{defi} For instance, the set $S$ in Example~\ref{disGamma(S)} is a C1-set. The terminology ``C1" stands for ``Codimension 1", and will be justified in \ref{codim}. The following Lemma summarizes some useful properties of C1-sets. \begin{lemma}\label{C1lm} Let $\Gamma$ be a graph and $e, e'\in E(\Gamma)$. Then \begin{enumerate}[(i)] \item \label{C11} Every C1-set $S$ of $\Gamma$ satisfies $S\cap E(\Gamma)_{\rm sep}=\emptyset$. \item \label{C12} Every non-separating edge $e$ of $\Gamma$ is contained in a unique C1-set, $S_e$. If $E(\Gamma)_{{\rm sep}}=\emptyset$, then $S_e=E(\Gamma\smallsetminus e)_{\rm sep}\cup\{e\}$. \item \label{C13} $e$ and $e'$ belong to the same C1-set if and only if they belong to the same cycles of $\Gamma$. \item \label{C14} Assume $\Gamma$ connected and $e$ and $ e'$ non-separating. Then $e$ and $e'$ belong to the same C1-set if and only if $\Gamma\smallsetminus \{e, e'\}$ is disconnected ($(e,e')$ is called a {\emph {separating pair of edges}}). \end{enumerate}\end{lemma} \begin{proof} The first assertion follows trivially from Definition~\ref{C1}. Notice that a C1-set of $\Gamma$ is entirely contained in the set of edges of a unique connected component of $\widetilde{\Gamma}$. Therefore we can assume that $\Gamma $ is connected, and, for parts (\ref{C12}) and (\ref{C14}), free from separating edges. Fix an edge $e\in E(\Gamma)$, let $\Gamma_e=\Gamma \smallsetminus e$ and set \begin{equation} \label{Se} S_e:=E(\Gamma_e)_{\rm sep}\cup\{e\}\subset E(\Gamma). 
\end{equation} We claim that $S_e$ is the unique C1-set containing $e$. We have that $\Gamma(S_e)$ is connected and free from separating edges (as $\Gamma$ is). Therefore, to prove that $S_e$ is a C1-set it suffices to prove that $b_1(\Gamma(S_e))=1$. Let $\Gamma'$ be the graph obtained from $\Gamma(S_e)$ by removing $e$; then $b_1(\Gamma ')=0$ (by construction all its edges are separating). Now, $ \#E(\Gamma(S_e))=\#E(\Gamma ')+1 $, and, of course, $\Gamma(S_e)$ and $\Gamma '$ have the same vertices. Therefore $ b_1(\Gamma(S_e)) = b_1(\Gamma ')+1=1 $. So $S_e$ is a C1-set. Finally, let $\widetilde{S}$ be a C1-set containing $e$. It is clear that $S_e\subset \widetilde{S}$ (any $e'\in S_e$ such that $e'\not\in \widetilde{S}$ would be a separating edge of $\Gamma \smallsetminus \widetilde{S}$). To prove that $S_e= \widetilde{S}$, consider the map $\Gamma \to \Gamma(\widetilde{S})$ contracting all the edges not in $\widetilde{S}$. Suppose, by contradiction, that there is an edge $\widetilde{e}\in \widetilde{S}\smallsetminus S_e$; since $\Gamma(\widetilde{S})$ is a cycle, $\widetilde{e}$ is a separating edge of $\Gamma(\widetilde{S})\smallsetminus e$. Therefore $\widetilde{e}$ is a separating edge of $\Gamma \smallsetminus e=\Gamma_e$, and hence $\widetilde{e}$ must lie in $S_e$, by \ref{Se}. This is a contradiction, and (\ref{C12}) is proved. Now part (\ref{C13}). We can assume that $e$ and $e'$ are non-separating, otherwise the statement is obvious. Suppose $S_{e}=S_{e'}$; then, by definition, we can assume that $E(\Gamma)_{{\rm sep}}=\emptyset$. Let $\Delta\subset \Gamma$ be a cycle containing $e'$. By part (\ref{C12}) we have that $e'$ is a separating edge of $\Gamma \smallsetminus e$; therefore, if $\Delta$ does not contain $e$, then $e'$ is a separating edge of $\Delta$, which is impossible. Conversely, if $e'\not\in S_{e}$ then (as $e'$ is non-separating for $\Gamma \smallsetminus S_{e}$) there exists a cycle $\Delta \subset \Gamma \smallsetminus S_{e}$ containing $e'$. 
So $e$ and $e'$ do not lie in the same cycles. Finally part (\ref{C14}). If $(e,e')$ is a separating pair then $e$ is a separating edge of $\Gamma \smallsetminus e'$ and $e'$ is a separating edge of $\Gamma \smallsetminus e$. By part (\ref{C12}) $e$ and $e'$ belong to the same C1-set. The converse follows from the fact that a cycle with two edges removed is disconnected. \end{proof} \begin{remark} \label{decdelta} Let $\Delta\subset \Gamma$ be a cycle. By Lemma~\ref{C1lm} the set $E(\Delta)$ is a disjoint union of C1-sets. We define $ {\operatorname{Set}}^1_{\Delta}\Gamma :=\{S\in {\operatorname{Set}}^1 \Gamma: \ S\subset E(\Delta)\} $ so that $$ E(\Delta)=\coprod_{S\in {\operatorname{Set}}^1_{\Delta}\Gamma}S. $$ \end{remark} \begin{cor} \label{cor3} A graph $\Gamma$ is 3-edge connected if and only if it is connected and there is a bijection $E(\Gamma) \to {\operatorname{Set}}^1 \Gamma$ mapping $e\in E(\Gamma)$ to $\{ e\}\in{\operatorname{Set}}^1 \Gamma$. \end{cor} \begin{proof} If $\Gamma$ is 3-edge connected it is free from separating edges; hence every $e\in E(\Gamma)$ belongs to a unique $S\in {\operatorname{Set}}^1\Gamma$. So it suffices to prove that every $S\in {\operatorname{Set}}^1 \Gamma$ has cardinality 1. Suppose there are two distinct edges $e,e'\in S$. Then Lemma~\ref{C1lm}(\ref{C14}) yields that $\Gamma \smallsetminus \{e,e'\}$ is not connected, which is a contradiction. Conversely, if every edge lies in a C1-set, then $\Gamma$ has no separating edges. If $\Gamma$ is not 3-edge connected, it admits a separating pair of edges $(e,e')$. Then $e$ and $e' $ belong to the same $S\in {\operatorname{Set}}^1 \Gamma$ (by Lemma~\ref{C1lm}). So we are done. \end{proof} In the next statement we use the notation of \ref{C1lm}(\ref{C12}) and \ref{cyc-equiv}. \begin{cor} \label{CC1} Let $\Gamma $ and $\Gamma '$ be cyclically equivalent; then $\#E(\Gamma)_{{\rm sep}} = \#E(\Gamma')_{\rm{sep}}$. 
Let $\epsilon:E(\Gamma)\to E(\Gamma')$ be a cyclic bijection; then $\epsilon$ induces a bijection $$ \begin{aligned} \beta_{\epsilon}: &\ {\operatorname{Set}}^1 \Gamma & \longrightarrow &{\operatorname{Set}}^1 \Gamma ' \\ &\ S_e &\longmapsto & \ S_{\epsilon (e)}\ \end{aligned} $$ such that $\#S =\#\beta_{\epsilon}(S)$ for every $S\in {\operatorname{Set}}^1 \Gamma$. \end{cor} \begin{proof} An edge is separating if and only if it is not contained in any cycle. Therefore $\epsilon$ maps $E(\Gamma)_{{\rm sep}} $ bijectively to $E(\Gamma')_{\rm{sep}}$, so the first part is proved. The second part follows immediately from Lemma~\ref{C1lm} (\ref{C12}) and (\ref{C13}). \end{proof} We introduce two types of edge contractions that will be used extensively later: \begin{enumerate}[(A)] \item \label{A} Contraction of a separating edge: \begin{figure}[!htp] $$\xymatrix@=1pc{ &&*{\bullet} \ar@{-}[dd]\ar@{-}@/_.5pc/[dd]\ar@{-}[drr]&&&&&&& *{\bullet} \ar@{-}[dd]\ar@{-}@/_.5pc/[dd]\ar@{-}[drr]&&&&\\ \Gamma\ar@{=}[r]&&&&*{\bullet}\ar@{-}[r]^{e}&*{\bullet} \ar@{-}@(ur,dr)&&\ar@{->}[r]&& &&*{\bullet}\ar@{-}@(ur,dr)& \ar@{=}[r]& \overline{\Gamma}\\ &&*{\bullet} \ar@{-}[urr]&&&&&&& *{\bullet} \ar@{-}[urr]&&&& }$$ \caption{The contraction of the separating edge $e\in E(\Gamma)$.}\label{cont-sep} \end{figure} \item \label{B} Contraction of one of two edges of a separating pair of edges: \begin{figure}[!htp] $$\xymatrix@=1.pc{ &&*{\bullet} \ar@{-}[rr]^{e_1} \ar@{-}[dd] \ar@{-}@/_/[dd] & &*{\bullet} \ar@{-}[dd] \ar@{-}@/^/[dd] & &&& *{\bullet} \ar@{-}[ddl] \ar@{-}@/_/[ddl] \ar@{-}@/^/[ddr] \ar@{-}[ddr]&& \\ \Gamma\ar@{=}[r]&&&&&\ar@{->}[r]&& &&\ar@{=}[r]&\overline{\Gamma}\\ &&*{\bullet} \ar@{-}[rr]_{e_2} & &*{\bullet}& && *{\bullet} \ar@{-}[rr]_{\overline{e}} & & *{\bullet}&\\ } $$ \caption{The contraction of the edge $e_1$ of the separating pair $(e_1, e_2)$.}\label{con-pair} \end{figure} \end{enumerate} To a graph $\Gamma$ we shall associate two types of graphs. 
\begin{defi}\label{3-conn} The {\it $2$-edge connectivization} of a connected graph $\Gamma$ is the $2$-edge connected graph $\Gamma^2$ obtained from $\Gamma$ by iterating the above operation (A) (for all the separating edges of $ \Gamma$). A {\it $3$-edge connectivization} of a connected graph $\Gamma$ is a $3$-edge connected graph $\Gamma^3$ which is obtained from $\Gamma^2$ by iterating the above operation (B). If $\Gamma$ is not connected, we define $\Gamma^2$ (resp. $\Gamma^3$) as the disjoint union of the $2$-edge connectivizations (resp. $3$-edge connectivizations) of its connected components. \end{defi} \begin{remark} It is clear that $\Gamma^2$ is uniquely determined, while $\Gamma^3$ is not. If $\Gamma$ is not connected $\Gamma^2$ (resp. $\Gamma^3$) is not $2$-edge (resp. $3$-edge) connected. There is a (surjective) {\it contraction map} $\sigma:\Gamma \to \Gamma^2 \to \Gamma^3$ obtained by composing the contractions defining $\Gamma^2$ and $\Gamma^3$. \end{remark} \begin{lemma} \label{3lm} Let $\Gamma$ be a graph. \begin{enumerate}[(i)] \item \label{3lmb} $b_1(\Gamma^3)=b_1(\Gamma^2)=b_1(\Gamma)$. \item \label{3lmC1} There are canonical bijections $$ {\operatorname{Set}}^1 \Gamma^3 \leftrightarrow E(\Gamma^3)\leftrightarrow {\operatorname{Set}}^1 \Gamma. $$ \item \label{3lmcyc} Two 3-edge connectivizations of $\Gamma$ are cyclically equivalent. \item $\Gamma^2\equiv_{\rm cyc}\Gamma\smallsetminus E(\Gamma)_{\rm sep}.$ \end{enumerate} \end{lemma} \begin{proof} The first Betti number is invariant under the operations (\ref{A}) and (\ref{B}) above, because no loop gets contracted. So, part (\ref{3lmb}) is done. Notice also (which will be used later) that the contraction map $\sigma:\Gamma \to \Gamma^3$ induces a natural bijection between the cycles of $\Gamma$ and those of $\Gamma^3$. Now part (\ref{3lmC1}). The bijection ${\operatorname{Set}}^1 \Gamma^3 \leftrightarrow E(\Gamma^3)$ is described in \ref{cor3}. 
Let $S\in {\operatorname{Set}}^1 \Gamma$ and set \begin{equation} \label{Sset} S=\{ e_{S,1},\ldots, e_{S,\# S} \}. \end{equation} Consider again the contraction map $\sigma:\Gamma \to \Gamma^3$. Clearly $\sigma$ contracts all the edges of $S$ but one, which gets mapped to an edge $e_S\in E(\Gamma^3)$. We have thus defined a map \begin{equation} \label{Smap} \psi:{\operatorname{Set}}^1 \Gamma \longrightarrow E(\Gamma^3);\ \ \ S\longmapsto e_S. \end{equation} By \ref{C1lm} and by the definition of $\Gamma^3$, the above map is a bijection. So (\ref{3lmC1}) is proved. Let $\Gamma^3$ and $\widetilde{\Gamma^3}$ be two 3-edge connectivizations of $\Gamma$. By part (\ref{3lmC1}) there is a natural bijection $E(\Gamma^3)\leftrightarrow E(\widetilde{\Gamma^3})$. Moreover, by what we said before, the two contraction maps $$ \sigma: \Gamma \longrightarrow {\Gamma^3} \hskip.7in \widetilde{\sigma}: \Gamma \longrightarrow \widetilde{\Gamma^3}\ $$ induce natural bijections between cycles that are compatible with the bijection $E(\Gamma^3)\leftrightarrow E(\widetilde{\Gamma^3})$. Therefore $\Gamma^3 \equiv_{\rm cyc} \widetilde{\Gamma^3}$, and part (\ref{3lmcyc}) is proved. For the last part, it suffices to observe that $\Gamma^2$ can be obtained from $\Gamma\smallsetminus E(\Gamma)_{\rm sep}$ by moves of type (1) (vertex gluing) in Theorem \ref{cycequ-moves}. \end{proof} \begin{prop}\label{lift-cyc} Let $\Gamma$ and $\Gamma '$ be two graphs. \begin{enumerate}[(i)] \item \label{lift1} Assume $ \Gamma^2 \equiv_{\rm cyc} \Gamma'^2$. Then $ \Gamma \equiv_{\rm cyc} \Gamma' $ if and only if $ \# E(\Gamma)_{\rm sep}= \# E(\Gamma')_{\rm sep}$. \item \label{lift2} Assume $ \Gamma^3 \equiv_{\rm cyc} \Gamma'^3$ and $E(\Gamma)_{{\rm sep}} = E(\Gamma ')_{\rm{sep}}=\emptyset$. 
Then $ \Gamma \equiv_{\rm cyc} \Gamma' $ if and only if the natural bijection $$ \beta: {\operatorname{Set}}^1 \Gamma \stackrel{\psi}{\longrightarrow} E( \Gamma^3) \stackrel{\epsilon^3}{\longrightarrow} E( \Gamma'^3) \stackrel{(\psi')^{-1}}{\longrightarrow} {\operatorname{Set}}^1 \Gamma' $$ satisfies $\#S=\#\beta (S)$ for every $S\in {\operatorname{Set}}^1 \Gamma$, where $\psi$ and $\psi' $ are the bijections defined in (\ref{Smap}), and $\epsilon^3$ is a cyclic bijection. \end{enumerate} \end{prop} \begin{proof} The ``only if" part for both (\ref{lift1}) and (\ref{lift2}) holds in general, by Corollary~\ref{CC1}. It suffices to add (for part (\ref{lift2})) that any cyclic bijection $\epsilon:E(\Gamma)\to E(\Gamma ')$ induces a canonical cyclic bijection $\epsilon^3:E( \Gamma^3) \to E( \Gamma'^3)$, and it is clear that $(\psi')^{-1}\circ \epsilon^3 \circ \psi =\beta_{\epsilon}$ defined in \ref{CC1}. Let us prove the sufficiency for part (\ref{lift1}). The point is that we can identify the edges of $\Gamma^2$ with the non-separating edges of $\Gamma$ so that we have $E(\Gamma)=E( \Gamma^2)\coprod E(\Gamma)_{{\rm sep}}$; the same holds for $\Gamma '$ of course. So, pick a cyclic bijection $\epsilon^2:E( \Gamma^2) \to E( \Gamma'^2)$ and any bijection $\epsilon_{\rm{sep}}:E( \Gamma )_{\rm{sep}} \to E( \Gamma' )_{\rm{sep}}$. Then we can glue $\epsilon^2$ with $\epsilon_{\rm{sep}}$ to a bijection $\epsilon:E(\Gamma)\to E(\Gamma ')$ which is easily seen to be cyclic. Now we prove the sufficiency in part (\ref{lift2}). Recall that the contraction map $\sigma:\Gamma \to \Gamma ^3$ induces a natural bijection between the cycles of $\Gamma$ and the cycles of $\Gamma ^3$; and the same holds for $\Gamma '$. Therefore $\epsilon^3$ induces a bijection, call it $\eta$, between the cycles of $\Gamma$ and the cycles of $\Gamma '$. On the other hand, the bijection $\beta$ in the statement induces a (non-unique) bijection $\epsilon:E(\Gamma)\to E(\Gamma')$. 
Indeed, as $\Gamma$ and $\Gamma' $ have no separating edges, every edge belongs to a unique C1-set (\ref{C1lm}). As $\beta$ preserves the cardinality of the C1-sets, we easily obtain our $\epsilon$. To show that $\epsilon$ is cyclic, it suffices to observe that, because of the naturality of the various maps, $\epsilon$ induces the above bijection $\eta$ between cycles of $\Gamma$ and $\Gamma '$. \end{proof} \begin{remark} \label{3ec} By the previous results, the class $[\Gamma^3]_{\rm cyc}$ depends solely on $[\Gamma]_{\rm cyc}$. Moreover, every representative in the class $ [\Gamma^3]_{\rm cyc}$ is such that each of its connected components is 3-edge-connected; therefore we shall refer to $[\Gamma^3]_{\rm cyc}$ as the {\it 3-edge connected class of $\Gamma$}. \end{remark} \subsection{Totally cyclic orientations} \begin{defi} \label{tot} Let $\Gamma$ be a graph and $V(\Gamma)$ its set of vertices. If $\Gamma$ is connected, we say that an orientation of $\Gamma$ is {\it totally cyclic} if there exists no proper non-empty subset $W\subset V(\Gamma)$ such that the edges between $W$ and its complement $V(\Gamma)\smallsetminus W$ all go in the same direction, i.e. either all from $W$ to $V(\Gamma)\smallsetminus W$, or all in the opposite direction. If $\Gamma$ is not connected, we say that an orientation of $\Gamma$ is totally cyclic if the orientation induced on each connected component of $\Gamma$ is totally cyclic. \end{defi} Other names for these orientations are ``strongly connected" and ``stable" (the latter is used in algebraic geometry). \begin{remark} \label{orex} A cycle $\Delta$ admits exactly two totally cyclic orientations, which are usually called just cyclic, for obvious reasons. On the other hand, if $E(\Gamma)_{{\rm sep}}\neq \emptyset$ then $\Gamma$ admits no totally cyclic orientations. Indeed, suppose $\Gamma$ connected for simplicity and let $e\in E(\Gamma)_{{\rm sep}}$. 
Then the graph $\Gamma\smallsetminus e$ is the disjoint union of two graphs $\Gamma_1$ and $\Gamma_2$. Hence, for any orientation of $\Gamma$, all the edges between the set $W=V(\Gamma_1)\subset V(\Gamma)$ and its complement (namely, the single edge $e$) go in the same direction, so that $W$ violates the requirement of Definition~\ref{tot}. \end{remark} The following lemma, the first part of which is already known, will be very useful. \begin{lemma}\label{chso} Let $\Gamma$ be a graph. \begin{enumerate}[(1)] \item \label{c2}$\Gamma$ admits a totally cyclic orientation if and only if $E(\Gamma)_{\rm sep}=\emptyset$. \item \label{c1} Assume $E(\Gamma)_{\rm sep}=\emptyset$ and fix an orientation on $\Gamma$. The following conditions are equivalent: \begin{enumerate}[(a)] \item \label{totcyc} The orientation is totally cyclic. \item \label{vw} For any distinct $v, w\in V(\Gamma)$ belonging to the same connected component of $\Gamma$, there exists a path oriented from $w$ to $v$. \item \label{bto} $H_1(\Gamma,\mathbb{Z})$ has a basis of cyclically oriented cycles. \item \label{cc} Every edge $e\in E(\Gamma)$ is contained in a cyclically oriented cycle. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} Part (\ref{c2}). We already observed, in \ref{orex}, that if $\Gamma$ has a separating edge it does not admit a totally cyclic orientation. The converse, which is the nontrivial part, was proved in \cite{robbins}, or later in \cite[Lemma 1.3.5]{ctheta}. We now prove the equivalence of the four conditions in Part (\ref{c1}). (\ref{totcyc})$\Rightarrow$ (\ref{vw}). Pick $w\in V(\Gamma)$. Let $W\subset V(\Gamma)$ be the set of all vertices $v$ such that $\Gamma$ contains a path oriented from $w$ to $v$. We want to prove that $W=V(\Gamma)$. By contradiction, suppose that $V(\Gamma)\smallsetminus W$ is not empty. Then every edge $e$ joining a vertex $w'$ in $W$ with a vertex $v$ in $V(\Gamma)\smallsetminus W$ must be oriented from $v$ to $w'$ (otherwise the path obtained by attaching $e$ to an oriented path from $w$ to $w'$ would be oriented from $w$ to $v$, so that $v\in W$, which is not the case). 
But then every edge between $V(\Gamma)\smallsetminus W$ and $W$ goes from the former to the latter, hence the orientation is not totally cyclic, a contradiction. (\ref{vw})$\Rightarrow $ (\ref{bto}). Let $d\in H_1(\Gamma, \mathbb{Z})$ be an element corresponding to a cycle $\Delta$. We claim that $d$ can be expressed as $d=\sum n_id_i$ with each $d_i$ corresponding to a cyclically oriented cycle $\Delta_i\subset \Gamma$, and $n_i \in \mathbb{Z}$. Suppose that $\Delta $ is not cyclically oriented (otherwise there is nothing to prove). Clearly every edge of $\Delta$ is contained in a unique maximal oriented (connected) path contained in $\Delta$. This enables us to express $\Delta$ as a union of maximal oriented paths, call them $p_1,\ldots, p_c$, such that every $p_i$ is adjacent to $p_{i-1}$ and $p_{i+1}$ (with the cyclic convention $p_0=p_c$; note that $c\geq 2$). More precisely, call $v_1,\ldots , v_c$ the vertices of this decomposition, so that $v_i, v_{i+1}$ are the end points of $p_i$ for every $i<c$ and $v_c, v_1$ are the end points of $p_c$. Call $s_i$, respectively $t_i$, the starting, respectively the ending, vertex of each path. With no loss of generality, we may assume that $s_1=v_1$ (i.e. $p_1$ starts from $v_1$) and that $d = \sum _{i=1}^c(-1)^{i-1}p_i$ (abusing notation slightly). By the maximality assumption, we obtain that every odd vertex is the source of both its adjacent paths, and every even vertex is the target of its adjacent paths, i.e.: $$ v_{2i+1}=s_{2i}=s_{2i+1},\ \ \ v_{2i}=t_{2i}=t_{2i-1}. $$ Notice that the number of paths, $c$, is necessarily even. Now, by (\ref{vw}) we can pick a set of paths, $q_1,\ldots, q_{c-1}$, in $\Gamma$ such that $q_i$ joins $v_1$ and $v_{i+1}$ and is oriented as follows. For every odd $i$ the path $q_i$ starts from $v_{i+1}$ and ends in $v_1$. For every even $i$ the path $q_i$ starts from $v_1$ and ends in $v_{i+1}$. 
With this choice, we have the following cyclically oriented cycles $\Delta_1,\ldots \Delta_c$. The cycle $\Delta_1$ is obtained by composing the paths $p_1$ and $q_1$; for all $1<i<c$ the cycle $\Delta_i$ is obtained by composing the paths $p_i$, $q_i$ and $q_{i-1}$; finally $\Delta_c$ is the composition of $p_c$ with $q_{c-1}$. We have $$ d=\sum _{i=1}^c(-1)^{i-1}p_i= p_1+q_1-\sum_{i=2}^{c-1}(-1)^i(p_i+q_i+q_{i-1})-p_c-q_{c-1}= \sum_{i=1}^c(-1)^{i-1}d_i $$ where $d_i\in H_1(\Gamma, \mathbb{Z})$ corresponds to $\Delta_i$. This proves that the $\mathbb{Z}$-span of the set of cyclically oriented cycles is the entire $H_1(\Gamma, \mathbb{Z})$. (\ref{bto})$\Rightarrow $ (\ref{cc}). Pick a basis of cyclically oriented cycles for $H_1(\Gamma, \mathbb{Z})$. By contradiction, let $e\in E(\Gamma)$ be such that there exists no cyclically oriented cycle containing it. Then there exists no basis element containing $e$, and hence $e$ is not contained in any cycle, which is obviously impossible (as $E(\Gamma)_{\rm sep}=\emptyset$). (\ref{cc})$\Rightarrow $ (\ref{totcyc}). By contradiction, assume there exists a set of vertices $W$ such that $\emptyset\subsetneq W \subsetneq V(\Gamma)$ and such that every edge between $W$ and $V(\Gamma)\smallsetminus W$ goes from $W$ to $V(\Gamma)\smallsetminus W$. Let $e$ be any such edge; every cycle $\Delta$ containing $e$ must contain another edge $e'$ between $W$ and $V(\Gamma)\smallsetminus W$, and therefore (as $e'$ is also oriented from $W$ to $V(\Gamma)\smallsetminus W$) $\Delta$ is not cyclically oriented. We conclude that no cycle containing $e$ is cyclically oriented, and this contradicts part (\ref{cc}). \end{proof} We shall use the following notation. 
For any edge $e\in E(\Gamma)$, we denote by $e^*\in C_1(\Gamma, \mathbb{R})^*$ the functional on $C_1(\Gamma, \mathbb{R})$ defined, for $e'\in E(\Gamma)$, by \begin{equation} \label{e*} e^*(e')=\begin{cases} 1 & \text{ if } e'=e, \\ 0 & \text{ otherwise.} \end{cases} \end{equation} We shall constantly abuse notation by calling $e^*\in H_1(\Gamma, \mathbb{R})^*$ also the restriction of $e^*$ to $ H_1(\Gamma, \mathbb{R})$. \begin{remark} \label{e0sep} $e\in E(\Gamma)_{{\rm sep}}$ if and only if the restriction of $e^*$ to $H_1(\Gamma, \mathbb{R})$ is zero. Indeed, $e\in E(\Gamma)_{{\rm sep}}$ if and only if $e$ is not contained in any cycle of $\Gamma$. \end{remark} Recall that for any $S\in {\operatorname{Set}}^1 \Gamma$ we denote $S=\{ e_{S,1},\ldots, e_{S,\# S} \}.$ \begin{cor} \label{cH} Let $\Gamma$ be a graph and fix an orientation inducing a totally cyclic orientation on $\Gamma\smallsetminus E(\Gamma)_{\rm sep}$. Then the following facts hold. \begin{enumerate} \item \label{cH1} For every $c\in H_1(\Gamma, \mathbb{Z})$ we have $$ c=\sum_{S\in {\operatorname{Set}}^1 \Gamma} r_S(c)\sum _{i=1}^{\#S} e_{S,i},\ \ \ \ r_S(c)\in \mathbb{Z}. $$ \item \label{cH2} Let $e_1, e_2\in E(\Gamma)\smallsetminus E(\Gamma)_{{\rm sep}}$. There exists $u\in \mathbb{R}$ such that $e_1^*=u e_2^*$ on $H_1(\Gamma, \mathbb{R})$ if and only if $e_1$ and $e_2$ belong to the same C1-set of $\Gamma$; moreover, in this case $u=1$. \end{enumerate} \end{cor} \begin{proof} Let $\Delta\subset \Gamma$ be a cyclically oriented cycle. Then $\sum_{e\in E(\Delta)}e\in H_1(\Gamma, \mathbb{Z})$. By Lemma~\ref{C1lm} (\ref{C13}), if a C1-set intersects the set of edges of a cycle, then it is entirely contained in it. So part (\ref{cH1}) follows from Lemma~\ref{chso} (\ref{bto}). For the second part, if $e_1$ and $e_2$ belong to the same C1-set then $e_1^*=e_2^*$ by the first part. Conversely, suppose $e_1$ and $e_2$ belong to different C1-sets, $S_1$ and $S_2$. 
Then by Lemma~\ref{C1lm} (\ref{C13}) there exists a cycle containing $e_1$ and not $e_2$. Hence there exists $c\in H_1(\Gamma, \mathbb{Z})$ such that $r_{S_1}(c)\neq 0$ and $r_{S_2}(c) =0$. But then $e_1^*(c)=r_{S_1}(c)\neq 0$ and $e_2^*(c)=r_{S_2}(c)= 0$; therefore $e_1^*\neq u e_2^*$ for any $u\in \mathbb{R}$. \end{proof} \section{Torelli theorem for graphs} \subsection{Statement of the theorem} The aim of this section is to prove the following Torelli theorem for graphs. \begin{thm}\label{main-thm} Let $\Gamma$ and $\Gamma'$ be two graphs. Then $\operatorname{Alb}(\Gamma)\cong \operatorname{Alb}(\Gamma')$ if and only if $ \Gamma^2\equiv_{\rm cyc} \Gamma'^2$. \end{thm} We deduce that the Torelli theorem is true in a stronger form for $3$-connected graphs. More generally: \begin{cor}\label{main-cor} Let $\Gamma$ be $3$-connected and let $\Gamma'$ have no vertex of valence 1. Then $\operatorname{Alb}(\Gamma)\cong \operatorname{Alb}(\Gamma')$ if and only if $ \Gamma \cong \Gamma'$. \end{cor} \begin{proof} By hypothesis $\Gamma^2= \Gamma$. Assume $\operatorname{Alb}(\Gamma)\cong \operatorname{Alb}(\Gamma')$; then Theorem \ref{main-thm} yields $\Gamma\equiv_{\rm cyc} \Gamma'^2$. By Remark~\ref{3v} we obtain $\Gamma\cong\Gamma'^2$. If $\Gamma'^2\not\cong \Gamma'$, then the contraction map $\Gamma '\to \Gamma'^2$ certainly produces some separating vertex, given by the image of a separating edge of $\Gamma'$ (because $\Gamma '$ has no vertex of valence 1). But $\Gamma'^2\cong\Gamma$ has no such vertices, as $\Gamma$ is $3$-connected. Hence we necessarily have $\Gamma'\cong\Gamma'^2\cong\Gamma$. \end{proof} \begin{proof}[Proof of Theorem \ref{main-thm}: sufficiency.] The ``if" direction of Theorem~\ref{main-thm} is not difficult, and it follows from the subsequent statement, part (\ref{Alb1}) of which is already known; see \cite[Prop. 5]{BdlHN} (where a different language is used). \begin{prop}\label{Alb-eq} Let $\Gamma$ be a graph. 
\begin{enumerate}[(i)] \item \label{Alb1} $\operatorname{Alb}(\Gamma)$ depends only on $[\Gamma]_{\rm cyc}$. \item \label{Alb2} $\operatorname{Alb}(\Gamma)=\operatorname{Alb}(\Gamma^2)$. \end{enumerate} \end{prop} \begin{proof} Part (\ref{Alb1}) follows from the fact that $(H_1(\Gamma,\mathbb{Z});(,))$ is defined entirely in terms of the inclusion $H_1(\Gamma, \mathbb{Z})\subset C_1(\Gamma, \mathbb{Z})$ and of the basis $E(\Gamma)$ of $C_1(\Gamma,\mathbb{Z})$, which is clearly invariant by cyclic equivalence. For the second part, first note that we can naturally identify \begin{equation} \label{inc} E(\Gamma^2)=E(\Gamma)\smallsetminus E(\Gamma)_{\rm sep}\subset E(\Gamma). \end{equation} We fix orientations on $\Gamma^2$ and $\Gamma$ that are compatible with respect to the above (\ref{inc}). It is clear that there is a natural commutative diagram \begin{equation}\label{diag1} \xymatrix{ H_1(\Gamma^2,\mathbb{Z}) \ar@{^{}->}_{\cong}^{\tilde{j}}[r] \ar@{_{(}->}[d] & H_1(\Gamma,\mathbb{Z})\ar@{^{(}->}[d] \\ C_1(\Gamma^2,\mathbb{Z}) \ar@{^{(}->}^j[r] & C_1(\Gamma,\mathbb{Z}), } \end{equation} where the vertical maps are the inclusions, $j$ is induced by the inclusion (\ref{inc}), and $\tilde{j}$ denotes the restriction of $j$. Part (\ref{Alb2}) follows from the diagram and the fact that the inclusion $j$ is compatible with the scalar products $(,)$ on both sides. \end{proof} From Proposition~\ref{Alb-eq} we derive that if $\Gamma^2\equiv_{\rm {cyc}}\Gamma'^2$ then $\operatorname{Alb}(\Gamma)=\operatorname{Alb} (\Gamma')$. Hence the sufficiency in Theorem~\ref{main-thm} is proved. \end{proof} In order to prove the other half of the theorem, we need some preliminaries. \subsection{The Delaunay decomposition} Consider the lattice $H_1(\Gamma,\mathbb{Z})$ inside the real vector space $H_1(\Gamma,\mathbb{R})$. Observe that the scalar product induced on $C_1(\Gamma,\mathbb{R})$ by $(,)$ coincides with the Euclidean scalar product. We denote the norm $\sqrt{(x,x)}$ by $||x||$. 
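To fix ideas, here is a simple illustrative example (it is not needed in the sequel). Let $\Gamma$ be the graph consisting of two vertices joined by three edges $e_1, e_2, e_3$, all oriented in the same direction. Then $H_1(\Gamma,\mathbb{Z})\subset C_1(\Gamma,\mathbb{Z})$ is the rank-$2$ lattice spanned by the cycles $c_1=e_1-e_2$ and $c_2=e_2-e_3$, with Gram matrix $$ \begin{pmatrix} (c_1,c_1) & (c_1,c_2) \\ (c_2,c_1) & (c_2,c_2) \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}. $$ Writing $c=xc_1+yc_2$ we have $e_1^*(c)=x$, $e_2^*(c)=y-x$ and $e_3^*(c)=-y$, so the hyperplanes $e_i^*=n$, $n\in \mathbb{Z}$, cut $H_1(\Gamma,\mathbb{R})$ into the standard triangulation of the plane by unit triangles; this is the Delaunay decomposition of $\Gamma$, defined below.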
\begin{defi} \label{Deldef} For any $\alpha\in H_1(\Gamma,\mathbb{R})$, a lattice element $x \in H_1(\Gamma,\mathbb{Z})$ is called {\it $\alpha$-nearest} if $$||x-\alpha||={\rm min}\{||y-\alpha||\: : \: y\in H_1(\Gamma,\mathbb{Z})\}.$$ A {\it Delaunay cell} is defined as the closed convex hull of all elements of $H_1(\Gamma,\mathbb{Z})$ which are $\alpha$-nearest for some fixed $\alpha \in H_1(\Gamma,\mathbb{R})$. Together, all the Delaunay cells constitute a locally finite decomposition of $H_1(\Gamma,\mathbb{R})$ into infinitely many bounded convex polytopes, called the {\it Delaunay decomposition} of $\Gamma$, denoted ${\rm Del} (\Gamma)$. Let $\Gamma$ and $\Gamma'$ be two graphs. We say that ${\rm Del}(\Gamma)\cong {\rm Del}(\Gamma')$ if there exists a linear isomorphism $H_1(\Gamma,\mathbb{R}) \to H_1(\Gamma',\mathbb{R})$ sending $H_1(\Gamma,\mathbb{Z})$ into $H_1(\Gamma',\mathbb{Z})$ and mapping the Delaunay cells of $ {\rm Del}(\Gamma)$ isomorphically into the Delaunay cells of ${\rm Del}(\Gamma')$. \end{defi} \begin{remark} \label{Del} It is well known that an equivalent, and for us very useful, definition is the following. The Delaunay decomposition ${\rm Del} (\Gamma)$ is the restriction to $H_1(\Gamma, \mathbb{R})$ of the decomposition of $C_1(\Gamma, \mathbb{R})$ consisting of the standard cubes cut out by all hyperplanes of equation $e^*=n$ for $e\in E(\Gamma)$ and $n\in \mathbb{Z}$; see \cite[Prop. 5.5]{OS}. These hyperplanes of $H_1(\Gamma, \mathbb{R})$, having equations $e^*=n$, are called the {\it generating hyperplanes} of the Delaunay decomposition. Notice that an isomorphism ${\rm Del}(\Gamma)\cong {\rm Del}(\Gamma')$ induces a bijection between the sets of generating hyperplanes. \end{remark} \begin{prop}\label{Del-equ} Let $\Gamma$ and $\Gamma'$ be two graphs. \begin{enumerate}[(i)] \item\label{Del1} ${\rm Del}(\Gamma)$ depends only on $[\Gamma]_{\rm cyc}$. 
\item \label{Del2} ${\rm Del}(\Gamma)\cong {\rm Del}(\Gamma^3)$ for any choice of $\Gamma^3$. \item \label{Del3} ${\rm Del}(\Gamma)\cong {\rm Del}(\Gamma')$ if and only if $\Gamma^3\equiv_{\rm cyc}\Gamma'^3$. \end{enumerate} \end{prop} \begin{proof} It is clear that the Delaunay decomposition is completely determined by the inclusion $H_1(\Gamma,\mathbb{Z})\subset C_1(\Gamma,\mathbb{Z})$ together with the basis $E(\Gamma)$ of $C_1(\Gamma,\mathbb{Z})$ defining the scalar product $(,)$. This proves part (\ref{Del1}). Let us now prove part (\ref{Del2}). First note that ${\rm Del}(\Gamma)\cong{\rm Del}(\Gamma^2)$, as it follows easily from diagram (\ref{diag1}) and Remark~\ref{e0sep}. We can therefore assume that $\Gamma$ is $2$-edge connected. Consider the natural bijection (cf. (\ref{Smap})) $$ \begin{aligned} \psi: {\operatorname{Set}}^1 \Gamma &\longrightarrow & E(\Gamma^3) \\ S &\longmapsto & e_S \ \end{aligned} $$ where $e_S$ is the only edge in $S$ which is not contracted by the contraction map $\sigma:\Gamma \to \Gamma^3$. We can thus define an injection $$ \begin{aligned} C_1(\Gamma^3,\mathbb{Z}) & \stackrel{\iota}{\longrightarrow} &C_1(\Gamma,\mathbb{Z})\\ e_S &\longmapsto &\sum_{i=1}^{\#S}e_{S,i},\ \end{aligned} $$ where for any $S\in {\operatorname{Set}}^1 \Gamma$ we denote, as in (\ref{Sset}), $S=\{ e_{S,1},\ldots, e_{S,\# S} \}$. Fix now a totally cyclic orientation on $\Gamma$ and the induced orientation on $\Gamma^3$; consider the corresponding spaces $H_1(\Gamma,\mathbb{Z})$ and $H_1(\Gamma^3,\mathbb{Z})$. We claim that the above injection induces a natural diagram \begin{equation}\label{diag2} \xymatrix{ H_1(\Gamma^3,\mathbb{Z}) \ar@{^{}->}_{\cong}^{\tilde{\iota}}[r] \ar@{_{(}->}[d] & H_1(\Gamma,\mathbb{Z})\ar@{^{(}->}[d] \\ C_1(\Gamma^3,\mathbb{Z}) \ar@{^{(}->}^{\iota}[r] & C_1(\Gamma,\mathbb{Z}), } \end{equation} where the vertical maps are the inclusions, and $\tilde{\iota}$ is the restriction of $\iota$. 
Indeed, the image of $\iota$ is clearly the subset $K_2\subset C_1(\Gamma, \mathbb{Z})$ defined by $$ K_2:=\bigcap_{S\in {\operatorname{Set}}^1 \Gamma} \bigcap _{i,j=1}^{\#S}\ker (e^*_{S,i}-e^*_{S,j}). $$ Moreover, by Corollary~\ref{cH} we get that $H_1(\Gamma,\mathbb{Z})\subset K_2$. On the other hand, the contraction map $\sigma:\Gamma \to \Gamma^3$ induces a bijection between cycles, therefore $H_1(\Gamma^3,\mathbb{Z})$ maps into $H_1(\Gamma,\mathbb{Z})$. It remains to prove that $H_1(\Gamma^3,\mathbb{Z})$ surjects onto $H_1(\Gamma,\mathbb{Z})$. We use again Corollary~\ref{cH}, according to which any $c\in H_1(\Gamma,\mathbb{Z})$ has the form $c=\sum_{S\in {\operatorname{Set}}^1 \Gamma} r_S(c)\sum _{i=1}^{\#S} e_{S,i}$, with $r_S(c)\in \mathbb{Z}$. Hence $$ c=\iota\bigl(\sum_{S\in {\operatorname{Set}}^1 \Gamma} r_S(c) e_S\bigr). $$ At this point (\ref{Del2}) follows from diagram (\ref{diag2}) and the fact that, by Corollary~\ref{cH}, $e, f$ belong to the same C1-set if and only if $e^*_{|H_1(\Gamma,\mathbb{Z})}=f^*_{|H_1(\Gamma,\mathbb{Z})}$. The ``if" implication of part (\ref{Del3}) follows from the previous parts. In order to prove the other implication, we can assume that $\Gamma$ and $\Gamma'$ are 3-edge connected. We claim that, as $\Gamma$ is 3-edge connected, the functionals $e^*$ restricted to $H_1(\Gamma, \mathbb{R})$ are all nonzero and distinct, as $e$ varies in $E(\Gamma)$ (and the same holds for $\Gamma'$ of course). That $e^*$ is nonzero follows from the fact that $E(\Gamma)_{{\rm sep}}$ is empty (cf. \ref{e0sep}). Let $e\neq f$; then $\{e\}$ and $\{f\}$ are C1-sets (by \ref{cor3}). By Corollary~\ref{cH} the restrictions of $e^*$ and $f^*$ to $H_1(\Gamma, \mathbb{R})$ are different. The claim is proved. The claim means that the intersections of the hyperplanes $\{e^*=0\}_{e\in E(\Gamma)}$ with $H_1(\Gamma, \mathbb{R})$ are all proper and distinct, and similarly for $\Gamma'$.
Now, an isomorphism ${\rm Del}(\Gamma)\cong {\rm Del}(\Gamma')$ induces a bijection between the sets of generating hyperplanes passing through the origin; hence, by the claim, we get a bijection $E(\Gamma)\cong E(\Gamma')$ (which extends to an isomorphism $C_1(\Gamma, \mathbb{Z})\cong C_1(\Gamma',\mathbb{Z})$). To conclude, we now use a basic fact from graph theory (see for example \cite[Sect. 5.1]{Oxl} or \cite[Thm. 3.11]{alex}), according to which the $0$-skeleton of the hyperplane arrangement $\{e^*=n,\ \ e\in E(\Gamma), n\in \mathbb{Z}\}$ in $H_1(\Gamma,\mathbb{R})$ is the lattice $H_1(\Gamma,\mathbb{Z})$ itself. Therefore, we deduce that the above bijection $E(\Gamma)\cong E(\Gamma')$ induces an isomorphism $H_1(\Gamma,\mathbb{Z})\cong H_1(\Gamma',\mathbb{Z})$, from which we conclude that $ \Gamma\equiv_{\rm cyc} \Gamma'$. \end{proof} \begin{remark}\label{Artam} A special case of Proposition~\ref{Del-equ} has been proved by Artamkin using a different language. In \cite{art}, he associates to a graph $\Gamma$ a convex integral polytope $\Delta(\Gamma)$ in $H_1(\Gamma,\mathbb{R})$, called the ``simple cycle polytope", and he proves that a 3-connected graph $\Gamma$ is uniquely determined by $\Delta(\Gamma)$ (see \cite[Thm. 1]{art}). $\Delta(\Gamma)$ turns out to be the union of the maximal dimensional Delaunay cells that have a vertex in the origin; hence knowing $\Delta(\Gamma)$ is equivalent to knowing ${\rm Del}(\Gamma)$. Using this observation and \ref{3v}, \cite[Thm. 1]{art} is equivalent to Proposition \ref{Del-equ}(\ref{Del3}), provided that $\Gamma$ and $\Gamma'$ are 3-connected. \end{remark} \subsection{Proof of Theorem \ref{main-thm}: necessity.} \begin{proof} Assume that $\operatorname{Alb}(\Gamma)\cong\operatorname{Alb}(\Gamma')$. Using \ref{Alb-eq}, we can assume that $\Gamma$ and $\Gamma'$ are $2$-edge connected. We fix a totally cyclic orientation on them. 
Since the Delaunay decomposition is completely determined by $(H_1(\Gamma,\mathbb{Z});(,))$, i.e. by $\operatorname{Alb}(\Gamma)$ (see \cite[Sec. 2]{KS}), we have ${\rm Del}(\Gamma)\cong{\rm Del}(\Gamma')$. We can thus apply Proposition \ref{Del-equ}(\ref{Del3}), getting that $\Gamma^3\equiv_{\rm cyc} \Gamma'^3$. Therefore, by Proposition~\ref{lift-cyc}(\ref{lift2}) there is a natural bijection, \begin{equation} \label{beta} {\operatorname{Set}}^1 \Gamma \stackrel{\beta}{\longrightarrow}{\operatorname{Set}}^1 \Gamma' ;\ \ \ S\mapsto S':=\beta(S). \end{equation} To prove the theorem it suffices to show that $\beta$ preserves the cardinalities. In fact by Proposition~\ref{lift-cyc}(\ref{lift2}), this implies that $ \Gamma\equiv_{\rm cyc} \Gamma'$. First, note that by hypothesis there is an isomorphism, denoted \begin{equation} \label{Hiso} H_1(\Gamma,\mathbb{Z})\stackrel{\cong}{\longrightarrow}H_1(\Gamma',\mathbb{Z});\ \ \ c\mapsto c' \end{equation} such that $(c_1,c_2)=(c'_1,c'_2)$ for all $c_i\in H_1(\Gamma,\mathbb{Z})$. Pick $c\in H_1(\Gamma,\mathbb{Z})$; by Corollary~\ref{cH} we can write $c=\sum_{S\in {\operatorname{Set}}^1 \Gamma} r_S(c)\sum _{i=1}^{\#S} e_{S,i}$, with $r_S(c)\in \mathbb{Z}$; hence we can define (consistently with \ref{decdelta}) the set $$ {\operatorname{Set}}^1_c\Gamma:=\{S\in {\operatorname{Set}}^1 \Gamma: r_S(c)\neq 0\}. $$ We claim that for every $S\in {\operatorname{Set}}^1 \Gamma$ and every $c\in H_1(\Gamma, \mathbb{Z})$ we have \begin{equation} \label{divr}r_{S'}(c')=u(S)r_S(c) ,\ \ \ u(S):=\pm 1; \end{equation} in particular, \begin{equation} \label{div} S\in {\operatorname{Set}}^1_c \Gamma \Leftrightarrow S'\in{\operatorname{Set}}^1_{c'} \Gamma' . \end{equation} To prove the claim, consider the affine function $f^n_S:C_1(\Gamma, \mathbb{Z})\to \mathbb{Z}$ defined as $$ f^n_S:=e_S^*-n,\ \ \ n\in \mathbb{Z}. $$ By what we said before we have $$ r_S(c)=n \Leftrightarrow c\in \ker f^n_S. 
$$ Observe that the bijections (\ref{Hiso}) and (\ref{beta}) are compatible with one another. In other words, for every $c\in H_1(\Gamma, \mathbb{Z})$, the set ${\operatorname{Set}}^1_c \Gamma$ is mapped to ${\operatorname{Set}}^1_{c'} \Gamma '$ by $\beta$. Therefore the isomorphism between $\operatorname{Alb}(\Gamma)$ and $\operatorname{Alb} (\Gamma ')$ induces a bijection between the hyperplanes generating ${\rm Del} (\Gamma)$ and those generating ${\rm Del} (\Gamma ')$ such that $f_{S}^n$ is mapped either to $f_{S'}^n$ or to $f_{S'}^{-n}$ (see Remark~\ref{Del}). So, the claim is proved. To ease the notation, in the sequel for any $S\in {\operatorname{Set}}^1 \Gamma$ we denote $$ e(S):=\sum _{i=1}^{\#S} e_{S,i}. $$ Moreover, if $S\in {\operatorname{Set}}^1_c\Gamma$ for some $c$, we denote \begin{equation} \label{rc} \lambda(c-S):= \sum _{T\in {\operatorname{Set}}^1_c\Gamma \smallsetminus \{S\}}\# T. \end{equation} Observe that for any cycle $\Delta\subset \Gamma$ of length $\lambda$ and any $c:=\sum_{e\in E(\Delta)}\pm e\in H_1(\Gamma,\mathbb{Z})$ (such a $c$ exists for a suitable choice of signs), we have ${\operatorname{Set}}^1_{\Delta}\Gamma={\operatorname{Set}}^1_{c}\Gamma$ and $$ \lambda=||c||^2=\sum_{S\in {\operatorname{Set}}^1_{\Delta}\Gamma}\#S=\#S+\lambda (c-S) $$ for any $S\in {\operatorname{Set}}^1_{\Delta}\Gamma$. We shall now prove that the map (\ref{beta}) preserves cardinalities. By contradiction, suppose there exists $S\in {\operatorname{Set}}^1 \Gamma$ such that \begin{equation} \label{cardS} \# S>\# S'. \end{equation} By Lemma \ref{intS}, we can find two cycles $\Delta_1$ and $\Delta_2$ of $\Gamma$ such that $S=E(\Delta_1)\cap E(\Delta_2)$. For $i=1,2$, there exists an element $c_i\in H_1(\Gamma,\mathbb{Z})$ given by the formula $$ c_i= e(S)+\sum_{T\in {\operatorname{Set}}^1_{c_i}\Gamma \smallsetminus \{S\}}\pm e(T) $$ (by Corollary~\ref{cH}). The sign before $e(T)$ will play no role, so we can ignore it. 
To fix ideas, suppose that $u(S)=1$ in (\ref{divr}). The case $u(S)=-1$ is treated in a trivially analogous way (we omit the details). By (\ref{divr}), we have $$ c_i'= e(S')+\sum_{T\in {\operatorname{Set}}^1_{c_i}\Gamma \smallsetminus \{S\}}\pm e(T') $$ (recall that $T$ determines $T'$ uniquely). Therefore, as $||c_i||^2=(c_i,c_i)=(c_i',c_i')= ||c_i'||^2$, using notation (\ref{rc}) $$ ||c_i||^2=\#S+\lambda(c_i-S) = ||c_i'||^2=\#S'+\lambda(c_i'-S'). $$ By (\ref{cardS}) we get for $i=1,2$ \begin{equation} \label{cardT} \lambda(c_i-S)< \lambda(c_i'-S'). \end{equation} Now, $c_1-c_2$ lies, of course, in $H_1(\Gamma, \mathbb{Z})$; we have $$ c_1-c_2= \sum_{T\in {\operatorname{Set}}^1_{c_1}\Gamma \smallsetminus \{S\}}\pm e(T)- \sum_{U\in {\operatorname{Set}}^1_{c_2}\Gamma \smallsetminus \{S\}}\pm e(U). $$ Since $ {\operatorname{Set}}^1_{c_1}\Gamma\cap {\operatorname{Set}}^1_{c_2}\Gamma =\{S\}$, we have \begin{equation}\label{diff1} ||c_1-c_2||^2=\lambda(c_1-S)+\lambda(c_2-S). \end{equation} Arguing in the same way for $c_1'-c_2'$, we get \begin{equation}\label{diff2} ||c_1'-c_2'||^2=\lambda(c_1'-S')+\lambda(c_2'-S'). \end{equation} Therefore, using (\ref{diff2}), (\ref{cardT}) and (\ref{diff1}), $$ ||c_1'-c_2'||^2=\lambda(c_1'-S')+\lambda(c_2'-S')> \lambda(c_1-S)+\lambda(c_2-S)=||c_1-c_2||^2 $$ which contradicts the fact that the isomorphism (\ref{Hiso}) preserves the scalar products. \end{proof} In the proof we applied the next lemma, which will be used again later on. \begin{lemma} \label{intS} Let $S\in {\operatorname{Set}}^1 \Gamma$. For every cycle $\Delta\subset \Gamma$ such that $S\subset E(\Delta)$ there exists a cycle $\hat{\Delta}\subset \Gamma$ such that $S=E(\Delta)\cap E(\hat{\Delta})$. \end{lemma} \begin{proof} It is clear that it suffices to assume $\Gamma$ free from separating edges. We begin by reducing to the case $\#S=1$. Choose an edge $e\in S$ and consider the contraction map $\sigma:\Gamma \to \overline{\Gamma}$ contracting all edges of $S$ but $e$.
Then $\sigma$ induces a bijection between the cycles of $\Gamma$ and those of $\overline{\Gamma}$, and it is clear that if the statement holds on $\overline{\Gamma}$ it also holds on ${\Gamma}$. So, let $S=\{e\}$ and let $\Delta$ be a cycle containing $e$. We shall exhibit an iterated procedure which yields, at its $i$-th step, a cycle $\Delta_{i}$ containing $e$ and such that $\# E(\Delta)\cap E(\Delta_i)$ decreases at each step. Set $\Delta_1=\Delta$ and $S_1:=S=\{e\}$; if $\Delta$ has length 1 we take $\hat{\Delta}=\Delta$ and we are done. So, suppose $\#E(\Delta) \geq 2$; we can decompose $E(\Delta)$ as a disjoint union of C1-sets $E(\Delta)=\{e\}\cup S_2\cup\ldots\cup S_{h}$, with $S_i\in {\operatorname{Set}}^1 \Gamma$ (cf. Remark~\ref{decdelta}). For the second step consider $\Gamma_2:=\Gamma \smallsetminus S_2$; then $\Gamma_2$ has no separating edges, therefore there exists a cycle $\Delta_2\subset \Gamma_2$ containing $e$. Obviously $\Delta_2$ does not contain $S_2$, hence $\# E(\Delta)\cap E(\Delta_2)< \# E(\Delta)\cap E(\Delta_1)$. If $\Delta_2$ does not contain any other edge of $\Delta$ we take $\Delta_2=\hat{\Delta}$ and we are done. Otherwise we repeat the process within $\Gamma_2$. Namely, we have $E(\Delta_2)=\{e\}\cup S_2^2\cup\ldots\cup S^2_{h}$, with $S_i^2\in {\operatorname{Set}}^1 \Gamma_2$, set $\Gamma_3:=\Gamma_2\smallsetminus S_2^2$. There exists a cycle $\Delta_3\subset \Gamma_3$ containing $e$, and it is clear that $\# E(\Delta)\cap E(\Delta_3)< \# E(\Delta)\cap E(\Delta_2)$. Obviously this process must terminate after, say $m$, steps, when we necessarily have $E(\Delta)\cap E(\Delta_m)=\{e\}$. \end{proof} \section{Torelli theorem for metric graphs and tropical curves} \label{trp} In this section we apply the methods and results of the previous part to study the Torelli problem for tropical curves. We refer to \cite{MIK3}, or to \cite{MZ}, for details about the theory of tropical curves and their Jacobians. 
\subsection{Tropical curves, metric graphs and associated tori}\label{Sec4.1} Let $C$ be a compact tropical curve; $C$ is endowed with a Jacobian variety, $\operatorname{Jac} (C)$, which is a principally polarized tropical Abelian variety (see \cite[Sec. 5]{MZ} and \cite[Sect 5.2]{MIK3}); we shall denote by $(\operatorname{Jac} (C), \Theta_C)$ the principally polarized Jacobian of $C$, where $\Theta_C$ denotes the principal polarization (see Remark~\ref{albjac} below). Observe that two tropically equivalent curves have isomorphic Jacobians. As we stated in the introduction, we want to study the following \ \noindent {\bf{Problem.}} {\it {For which compact tropical curves $C$ and $C'$ is there an isomorphism $(\operatorname{Jac} (C), \Theta_C) \cong (\operatorname{Jac} (C'), \Theta_{C'})$?}} \ \noindent We will answer this question in Theorem \ref{main-thmt}. As we already mentioned, the connection with the earlier sections of this paper comes from a result of G. Mikhalkin and I. Zharkov, establishing that tropical curves are closely related to metric graphs. \begin{defi} \label{mg} A metric graph $(\Gamma, l)$ is a finite graph $\Gamma$ endowed with a function $l:E(\Gamma)\to \mathbb{R}_{>0}$ called the {\it length function}. \end{defi} \begin{remark} \label{leaves} Our definition of metric graph coincides with that of \cite{MZ} only if the graph has valence at least 2. The difference occurs in the length function, whereas the graph is the same. More precisely, the definition of length function used in \cite{MZ} differs from ours, as it assigns the value $+\infty$ to every edge adjacent to a vertex of valence 1; such edges are called {\it leaves}. With this definition, metric graphs are in bijection with tropical curves. To avoid trivial cases, we shall always assume that our tropical curves have genus at least 2. Under this assumption, by \cite[Prop.
3.6]{MZ}, there is a one-to-one correspondence between tropical equivalence classes of compact tropical curves and metric graphs with valence at least 3 (i.e. such that every vertex has at least three incident edges). Therefore, from now on, we identify compact tropical curves, up to tropical equivalence, with metric graphs of valence at least 3. \end{remark} \begin{remark} \label{termtrop} Since to every compact tropical curve $C$ we associate a unique finite graph $\Gamma$, we will use for $C$ the graph theoretic terminology. In particular, we shall say that $C$ is $k$-connected if so is $\Gamma$. \end{remark} Given a metric graph $(\Gamma, l)$, we define the scalar product $(,)_l$ on $C_1(\Gamma,\mathbb{R})$ as follows: $$(e,e')_l=\begin{cases} l(e)& \text{ if } e=e', \\ 0 & \text{ otherwise. } \end{cases} $$ In analogy with Definitions~\ref{alb}, \ref{cyc-equiv} and \ref{3-conn} we shall define the Albanese torus, the cyclic equivalence, and the $3$-edge connectivization for metric graphs. \begin{defi}\label{Alb-torusl} The Albanese torus $\operatorname{Alb}(\Gamma,l)$ of the metric graph $(\Gamma,l)$ is $$ \operatorname{Alb}(\Gamma,l):=\bigl(H_1(\Gamma,\mathbb{R})/H_1(\Gamma,\mathbb{Z}); (,)_l\bigr) $$ (with the flat metric derived from the scalar product $(,)_l$). \end{defi} \begin{remark} \label{albjac} By \cite[Sect. 6.1 p. 218]{MZ} we can naturally identify $(\operatorname{Jac} (C),\Theta_C)$ with the Albanese torus ${\rm Alb}(\Gamma,l)$. \end{remark} \begin{defi}\label{cyc-equivl} Let $(\Gamma,l)$ and $(\Gamma',l')$ be two metric graphs. We say that $(\Gamma,l)$ and $(\Gamma',l')$ are {\it cyclically equivalent}, and we write $(\Gamma,l)\equiv_{\rm cyc} (\Gamma',l')$, if there exists a cyclic bijection $\epsilon:E(\Gamma)\to E(\Gamma')$ such that $l(e)=l'(\epsilon(e))$ for all $e\in E(\Gamma)$. The cyclic equivalence class of $(\Gamma,l)$ will be denoted by $[(\Gamma,l)]_{\rm cyc}$.
\end{defi} \begin{defi}\label{3-connl} A {\it $3$-edge connectivization} of a metric graph $(\Gamma,l)$ is a metric graph $(\Gamma^3, l^3)$, where $\Gamma^3$ is a $3$-edge connectivization of $\Gamma$, and $l^3$ is the length function defined as follows, $$l^3(e_S)=\sum_{e\in \psi^{-1}(e_S)} l(e)=\sum_{e\in S } l(e) $$ where, with the notation of (\ref{Smap}), $\psi:{\operatorname{Set}}^1 \Gamma \to E(\Gamma^3)$ is the natural bijection mapping $S$ to $e_S$. \end{defi} \begin{remark} \label{3ecl} Using Lemma~\ref{3lm}(\ref{3lmcyc}) we see that all the $3$-edge connectivizations of a metric graph $(\Gamma,l)$ are cyclically equivalent. Observe also that $[(\Gamma^3, l^3)]_{\rm cyc}$ is completely independent of the separating edges of $\Gamma$, and of the values that $l$ takes on them. Therefore, $(\Gamma^3, l^3)$ is well defined even if $l$ takes the value $+\infty$ on the leaves of $\Gamma$. This enables us to define $[(\Gamma^3, l^3)]_{\rm cyc}$ for a graph $(\Gamma, l)$, metric in the sense of \cite{MZ}, associated to a tropical curve $C$ (see Remark~\ref{leaves}). Consistently with Remark~\ref{3ec}, we call $[(\Gamma^3, l^3)]_{\rm cyc}$ the {\it 3-edge connected class of $C$}. With this terminology, we state the main result of this section: \end{remark} \begin{thm}\label{main-thmt} Let $C$ and $C'$ be compact tropical curves. Then $(\operatorname{Jac} C, \Theta_C)\cong (\operatorname{Jac} C', \Theta_{C'})$ if and only if $C$ and $C'$ have the same 3-edge connected class. Suppose that $C$ is 3-connected. Then $(\operatorname{Jac} C, \Theta_C)\cong (\operatorname{Jac} C', \Theta_{C'})$ if and only if $C$ and $C'$ are tropically equivalent. \end{thm} \begin{proof} The first statement is a straightforward consequence of the next Theorem~\ref{main-thml}. We let $\Gamma$ and $\Gamma '$ be the metric graphs associated to $C$ and $C'$ respectively. Suppose now that $C$ is 3-connected. This means (cf.
\ref{termtrop}) that the associated graph is 3-connected (and hence 3-edge connected). By the previous part $\Gamma =\Gamma^3\cong \Gamma'^3$. Recall that, by convention (cf. Remark~\ref{leaves}), the graph $\Gamma'$ has valence at least $3$. To finish the proof it suffices to show that the map $\sigma :\Gamma'\to \Gamma'^3$ is the identity map; to do that we will use the fact that $\Gamma '^3$ is 3-connected, as $\Gamma$ is. Suppose $\sigma $ contracts a separating edge $e$ of $\Gamma '$; observe that the two vertices adjacent to $e$ are both separating vertices for $\Gamma '$, because $\Gamma '$ has no vertices of valence 1. But then $\sigma(e)$ would be a separating vertex of $\Gamma'^3$, which is impossible. If $\sigma $ contracts one edge of a separating pair, arguing in a similar way we obtain that $\Gamma '$ has a separating pair of vertices which is mapped by $\sigma$ to a separating pair of vertices of $ \Gamma'^3$, which is impossible. Therefore $\sigma$ is the identity and we are done. \end{proof} \begin{thm}\label{main-thml} Let $(\Gamma,l)$ and $(\Gamma',l')$ be two metric graphs. Then $\operatorname{Alb}(\Gamma,l)\cong \operatorname{Alb}(\Gamma',l')$ if and only if $ [(\Gamma^3,l^3)]_{\rm cyc}=[(\Gamma'^3, l'^3)]_{\rm cyc}$. \end{thm} \subsection{Proof of the Torelli theorem for metric graphs} The proof of Theorem~\ref{main-thml} follows the same steps as the proof of Theorem~\ref{main-thm}. The ``if" part follows easily from the following \begin{prop}\label{Alb-eql} Let $(\Gamma,l)$ be a metric graph. \begin{enumerate}[(i)] \item \label{Alb1l} $\operatorname{Alb}(\Gamma,l)$ depends only on $[(\Gamma,l)]_{\rm cyc}$. \item \label{Alb2l} $\operatorname{Alb}(\Gamma,l)\cong \operatorname{Alb}(\Gamma^3,l^3)$ for any $3$-edge connectivization of $(\Gamma,l)$. 
\end{enumerate} \end{prop} \begin{proof} Part (\ref{Alb1l}) follows from the fact that $(H_1(\Gamma,\mathbb{Z});(,)_l)$ is defined entirely in terms of the inclusion $H_1(\Gamma, \mathbb{Z})\subset C_1(\Gamma, \mathbb{Z})$ and of the values of $(,)_l$ on the orthogonal basis $E(\Gamma)$ of $C_1(\Gamma,\mathbb{Z})$, all of which is clearly invariant by cyclic equivalence. To prove part (\ref{Alb2l}) we use the proof of Proposition~\ref{Del-equ}(\ref{Del2}), to which we now refer for the notation. Consider the diagram (\ref{diag2}). The point is that the inclusion $\iota$ is compatible with the scalar product $(,)_l$ on the right and the scalar product $(,)_{l^3}$ on the left. More precisely, for every edge $e_S$ of $\Gamma^3$ (so that $S\in {\operatorname{Set}}^1 \Gamma$) we have (by definition of $l^3$) $$ (e_S,e_S)_{l^3}=l^3(e_S)=\sum_{e\in S}l(e)= \bigl(\sum_{i=1}^{\#S}e_{S,i},\sum_{i=1}^{\#S}e_{S,i}\bigr)_l =\bigl(\iota(e_S),\iota(e_S)\bigr)_l. $$ On the other hand if $T\in {\operatorname{Set}}^1 \Gamma$ with $T\neq S$ we have $0=(e_S,e_T)_{l^3}=(\iota(e_S),\iota(e_T))_l$ (since $S\cap T=\emptyset$). Therefore (\ref{Alb2l}) is proved, and with it the sufficiency part of Theorem~\ref{main-thml}. \end{proof} To prove the opposite implication of Theorem~\ref{main-thml}, we need the following \begin{defi}\label{del-metr} The Delaunay decomposition ${\rm Del}(\Gamma,l)$ associated to the metric graph $(\Gamma,l)$ is the Delaunay decomposition (cf. Definition~\ref{Deldef}) associated to the scalar product $(,)_l$ on $H_1(\Gamma,\mathbb{R})$ with respect to the lattice $H_1(\Gamma,\mathbb{Z})$. \end{defi} \begin{lemma}\label{Del-lemma} Let $(\Gamma,l)$ be a metric graph. Then \begin{enumerate}[(i)] \item \label{Di} ${\rm Del}(\Gamma,l)$ is determined by $\operatorname{Alb}(\Gamma,l)$. \item \label{Dii} ${\rm Del}(\Gamma,l)={\rm Del}(\Gamma)$. 
\end{enumerate} \end{lemma} \begin{proof} Clearly, ${\rm Del}(\Gamma,l)$ is determined by the lattice $H_1(\Gamma,\mathbb{Z})\subset H_1(\Gamma,\mathbb{R})$ and the scalar product $(,)_l$, and therefore by $\operatorname{Alb}(\Gamma,l)$. This shows part (\ref{Di}). Part (\ref{Dii}) follows from a well-known theorem of Mumford, \cite[Thm 18.2]{nam}. \end{proof} \noindent {\it Proof of Theorem~\ref{main-thml}: necessity.} Suppose that $\operatorname{Alb}(\Gamma,l)\cong\operatorname{Alb}(\Gamma',l')$. By Lemma~\ref{Del-lemma}, $(\Gamma,l)$ and $(\Gamma',l')$ have the same Delaunay decompositions and ${\rm Del} (\Gamma) = {\rm Del} (\Gamma ')$. We can assume that $\Gamma$ and $\Gamma'$ are $3$-edge connected. By Proposition~\ref{Del-equ}(\ref{Del3}), we have $\Gamma \equiv_{\rm cyc} \Gamma'$. Denote by \begin{equation} \label{epsl} E( \Gamma) \stackrel{\epsilon}{\longrightarrow} E(\Gamma') ;\ \ \ e\mapsto e':=\epsilon(e) \end{equation} a cyclic bijection. It remains to prove that $l(e)= l'(e')$ for every $e\in E( \Gamma)$. We will proceed in strict analogy with the proof of the necessity of Theorem~\ref{main-thm}. First, note that there is an isomorphism, denoted \begin{equation} \label{Hisol} H_1(\Gamma,\mathbb{Z})\stackrel{\cong}{\longrightarrow}H_1(\Gamma',\mathbb{Z});\ \ \ c\mapsto c' \end{equation} such that $(c_1,c_2)_l=(c'_1,c'_2)_{l'}$ for all $c_i\in H_1(\Gamma,\mathbb{Z})$. Pick $c\in H_1(\Gamma,\mathbb{Z})$ and write $c=\sum_{e\in E( \Gamma)} r_e(c) e $, with $r_e(c)\in \mathbb{Z}$; similarly $c'=\sum_{e'\in E( \Gamma')} r_{e'}(c') e' $ with $r_{e'}(c')\in \mathbb{Z}$. We claim that for every $e\in E(\Gamma)$ and every $c\in H_1(\Gamma, \mathbb{Z})$ we have \begin{equation} \label{divrl}r_{e'}(c')=u(e) r_e(c),\ \ \ u(e):=\pm 1. \end{equation} To prove the claim, notice that $ r_e(c)=n \Leftrightarrow e^*(c)=n.
$ On the other hand, the isomorphism between ${\rm Del} (\Gamma)$ and ${\rm Del} (\Gamma ')$ maps the hyperplane of equation $e^*=n$ either to $ e'^*=n$ or to $ e'^*=-n$. So, the claim is proved. Now define $ E_c(\Gamma):=\{e\in E(\Gamma): r_e(c)\neq 0\}. $ For any $c\in H_1(\Gamma, \mathbb{Z})$ and $e\in E_c(\Gamma)$ we shall denote \begin{equation} \label{rcl} \lambda(c-e):= \sum _{f\in E_c(\Gamma) \smallsetminus \{e\}}l(f). \end{equation} We can now prove that the map (\ref{epsl}) preserves the lengths, i.e. that $l(e)=l'(e')$ for every $e\in E(\Gamma)$. By contradiction, suppose there exists an edge $e$ of $ \Gamma$ such that \begin{equation} \label{cardSl} l(e)>l'(e'). \end{equation} By Lemma \ref{intS}, there exist two cycles $\Delta_1$ and $\Delta_2$ of $\Gamma$ such that $\{e\}=E(\Delta_1)\cap E(\Delta_2)$. As in the proof of Theorem \ref{main-thm}, consider $c_1 $ and $c_2 $ in $H_1(\Gamma,\mathbb{Z})$ associated to the above two cycles (so that $ E_{c_i}(\Gamma)={\operatorname{Set}}^1_{\Delta_i}\Gamma$ for $i=1, 2$): $$ c_i= e+\sum_{f\in E_{c_i}(\Gamma) \smallsetminus \{e\}}\pm f. $$ The sign before $f$ will play no role, hence we are free to ignore it. Suppose that $u(e)=1$ (the case $u(e)=-1$ is treated similarly). By (\ref{divrl}) we have $$ c_i'= e'+\sum_{f\in E_{c_i}(\Gamma) \smallsetminus \{e\}}\pm f'. $$ Therefore, as $||c_i||^2:=(c_i,c_i)_l=(c_i',c_i')_{l'}=: ||c_i'||^2$, using notation (\ref{rcl}) we have $$ ||c_i||^2=l(e)+\lambda(c_i-e) =||c_i'||^2=l'(e')+\lambda(c_i'-e').$$ By (\ref{cardSl}) we get \begin{equation} \label{cardTl} \lambda(c_i-e)< \lambda(c_i'-e'). \end{equation} Now consider $c_1-c_2\in H_1(\Gamma, \mathbb{Z})$. We have $$ c_1-c_2= \sum_{f\in E_{c_1}(\Gamma) \smallsetminus \{e\}}\pm f- \sum_{g\in E_{c_2}(\Gamma) \smallsetminus \{e\}}\pm g.
$$ Since $ E_{c_1}(\Gamma)\cap E_{ {c_2}}(\Gamma) =\{e\}$, we have $||c_1-c_2||^2=\lambda(c_1-e)+\lambda(c_2-e).$ Arguing similarly for $c_1'-c_2'$, we get $||c_1'-c_2'||^2=\lambda(c_1'-e')+\lambda(c_2'-e').$ Hence, by (\ref{cardTl}) $$ ||c_1'-c_2'||^2=\lambda(c_1'-e')+\lambda(c_2'-e')>\lambda(c_1-e)+\lambda(c_2-e)= ||c_1-c_2||^2, $$ contradicting the fact that (\ref{Hisol}) preserves the scalar products. $\qed$ \section{Further characterizations of graphs} \label{pos} The Torelli theorems proved in the previous sections are based on the notion of 3-edge connected class, $[\Gamma^3]_{\rm{cyc}}$, of a graph $\Gamma$. The aim of this section, whose main result is Theorem~\ref{final-thm}, is to give some other characterizations of $[\Gamma^3]_{\rm{cyc}}$. \subsection{The poset $\mathcal{SP}_{\Gamma}$} \label{op} \begin{defi} Let $\Gamma$ be a graph. The poset $\mathcal{SP}_{\Gamma}$ is the set of all the subsets $S\subset E(\Gamma)$ such that the subgraph $\Gamma\smallsetminus S$ is free from separating edges, endowed with the following partial order: $$ S\geq T \Longleftrightarrow S\subseteq T .$$ \end{defi} \begin{remark} \label{SPsep} It is clear that for every $S\in \mathcal{SP}_{\Gamma}$ we have $E(\Gamma)_{{\rm sep}} \subset S$. Therefore any map $\sigma:\Gamma \to \overline{\Gamma}$ contracting some separating edges of $\Gamma$ induces a bijection of posets $\mathcal{SP}_{\Gamma} \longrightarrow \mathcal{SP}_{\overline{\Gamma}}$. \end{remark} We will use some notions and facts from graph theory. \begin{defi}\cite[Sect. 2.3]{Oxl} \label{coma} The {\it cographic matroid} $M^*(\Gamma)$ of $\Gamma$ is the matroid on the set $E(\Gamma)$ whose independent sets are the subsets $S\subset E(\Gamma)$ such that the vectors $\{e^*\: : \: e\in S\}$ are linearly independent in $H_1(\Gamma, \mathbb{R})^*$.
\end{defi} \begin{remark} \label{mator} It is well known that the cographic matroid $M^*(\Gamma)$ is independent of the choice of the orientation of $\Gamma$ used to define $H_1(\Gamma,\mathbb{Z})\subset C_1(\Gamma,\mathbb{Z})$. \end{remark} \begin{thm}\cite[Sect. 5.3]{Oxl}\label{matroid} $M^*(\Gamma)\cong M^*(\Gamma')$ if and only if $\Gamma\equiv_{\rm cyc} \Gamma'$. \end{thm} We are going to show that the poset $\mathcal{SP}_{\Gamma}$ is determined by $M^*(\Gamma)$. Before doing that we recall the notion of a flat of the matroid $M^*(\Gamma)$ (see for example \cite[Sec. 1.7]{Oxl}). First, for any $S=\{ e_{S,1},\ldots, e_{S,\# S} \}\subset E(\Gamma)$ we denote by $$ \langle S^*\rangle={\rm span}( e_{S,1}^*,\ldots, e_{S,\# S}^*)\subset H_1(\Gamma,\mathbb{Z})^*. $$ We say that $S$ is a \emph{flat} of $M^*(\Gamma)$ if for every $e\in E(\Gamma)\smallsetminus S$ we have $$\dim\langle S^*\rangle< \dim {\rm span}(S^*, e^*)=\dim {\rm span}( e_{S,1}^*,\ldots, e_{S,\# S}^*,e^*). $$ \begin{lemma}\label{flats} $\mathcal{SP}_{\Gamma}$ is the set of flats of the matroid $M^*(\Gamma)$. \end{lemma} \begin{proof} Given any subset $T\subset E(\Gamma)$, its closure ${\rm cl}(T)$ is defined as the subset of $E(\Gamma)$ formed by all the $e\in E(\Gamma)$ such that $e^*\in {\rm span}_{f\in T}( f^*)$. It is clear that $T\subset E(\Gamma)$ is a flat if and only if $T={\rm cl}(T)$. We have the following commutative diagram \begin{equation*}\xymatrix{ H_1(\Gamma\smallsetminus T,\mathbb{Z})\ar@{^{(}->}[r] \ar@{^{(}->}[d]& C_1(\Gamma\smallsetminus T,\mathbb{Z}) \ar@{^{(}->}[d]\\ H_1(\Gamma,\mathbb{Z}) \ar@{^{(}->}[r] & C_1(\Gamma,\mathbb{Z}). }\end{equation*} The left vertical injective map induces a surjective map $H_1(\Gamma,\mathbb{Z})^* \twoheadrightarrow H_1(\Gamma\smallsetminus T,\mathbb{Z})^*$ whose kernel is equal to ${\rm span}_{f\in T}(f^*)$. Therefore $e\in {\rm cl}(T)$ if and only if the image, $[e^*]$, of $e^*$ under the above surjection is zero. 
If $e\not\in T$ then $[e^*]=0$ if and only if $e$ does not belong to any cycle of $\Gamma\smallsetminus T$, if and only if $e$ is a separating edge of $\Gamma\smallsetminus T$. This shows that $T={\rm cl}(T)$ if and only if $\Gamma\smallsetminus T$ does not have separating edges; in other words, $T$ is a flat if and only if $T\in \mathcal{SP}_{\Gamma}$. \end{proof} \begin{remark}\label{arrang} It is easy to see, from the above definitions, that the poset of flats of $M^*(\Gamma)$ is isomorphic to the poset of intersections of the arrangement of hyperplanes $\{e^*=0\}_{e\in E(\Gamma)\smallsetminus E(\Gamma)_{\rm sep}}.$ \end{remark} Recall that a poset $(P,\leq)$ is called {\it graded} if it has a monotone function $\rho:P\to \mathcal N$, called the {\it rank function}, such that if $x$ covers $y$ (i.e. $y\lneq x$ and there does not exist a $z$ such that $y\lneq z\lneq x$) then $\rho(x)=\rho(y)+1$. If our poset has a minimum element $\underline{0}$, we say that it is bounded from below. If this is the case, $(P,\leq)$ is graded if and only if for every element $x\in P$ all the maximal chains from $\underline{0}$ to $x$ have the same length. We can define a rank function $\rho:P\to \mathcal N$ by setting $\rho(x)$ equal to the length of any chain from $\underline{0}$ to $x$. This is the unique rank function on $(P,\leq)$ such that $\rho(\underline{0})=0$ and we call it the {\it normalized rank function}. \begin{cor} \label{rank} The poset $\mathcal{SP}_{\Gamma}$ is a graded poset with minimum element equal to $E(\Gamma)$ and normalized rank function given by $S\mapsto b_1(\Gamma\smallsetminus S)$. \end{cor} \begin{proof} It is well known (see \cite[Thm. 1.7.5]{Oxl}) that the poset of flats of a matroid is a geometric lattice, hence in particular a graded poset. 
The minimum element is clearly $E(\Gamma)$ and the length of a chain in $\mathcal{SP}_{\Gamma}$ from $E(\Gamma)$ to $S$ is exactly equal to the number of independent cycles in $\Gamma\smallsetminus S$, that is to $b_1(\Gamma\smallsetminus S)$. \end{proof} \begin{remark} \label{codim} We like to think of the number $b_1(\Gamma\smallsetminus S)$ as the {\it codimension} of the set $S\in \mathcal{SP}_{\Gamma}$. If $E(\Gamma)_{{\rm sep}}=\emptyset$ (which is a harmless assumption, by Remark~\ref{SPsep}), then ${\operatorname{Set}}^1 \Gamma \subset \mathcal{SP}_{\Gamma}$, and we have that $S$ has codimension 1 if and only if $S$ is a C1-set (cf. \ref{C1}). \end{remark} \begin{lemma}\label{supp-equiv} Let $\Gamma$ and $\Gamma'$ be two graphs. For any choice of $\Gamma^3$ and $\Gamma'^3$ we have: \begin{enumerate} \item[(i)] $\mathcal{SP}_{\Gamma}\cong \mathcal{SP}_{\Gamma^3}$ (as posets). \item[(ii)] $\mathcal{SP}_{\Gamma}\cong \mathcal{SP}_{\Gamma'}$ if and only if $ \Gamma^3 \equiv_{\rm cyc} \Gamma'^3$. \end{enumerate} \end{lemma} \begin{proof} It is well known that the poset of flats of a matroid $M$ depends on, and completely determines, any {\it simple} matroid $\widetilde{M}$ (see below) associated to $M$ (see \cite[Sec. 1.7]{Oxl}). Therefore, using Theorem \ref{matroid}, we will be done if we show that $\widetilde{M^*(\Gamma)}=M^*(\Gamma^3)$ for any choice of $\Gamma^3$ of $\Gamma$. Since the cographic matroid does not depend on the choice of the orientation (cf. Remark~\ref{mator}), we can fix an orientation on $\Gamma$ inducing a totally cyclic orientation on $\Gamma \smallsetminus E(\Gamma)_{{\rm sep}}$, and we let $\Gamma^3$ have the orientation induced by that of $\Gamma$. Recall (see loc. cit.) that a simple matroid $\widetilde{M^*(\Gamma)}$ is obtained from $M^*(\Gamma)$ by deleting the zero vectors and, for each parallel (i.e. proportional) class of vectors, deleting all but one of the vectors. 
We know that $e^*\in H_1(\Gamma,\mathbb{R})^*$ is zero if and only if $e\in E(\Gamma)_{\rm sep}$ (see \ref{e0sep}). On the other hand, Corollary \ref{cH}(\ref{cH2}) yields that $e_1^*$ and $e_2^*$ are proportional if and only if they belong to the same C1-set, if and only if, by Lemma~\ref{C1lm}(\ref{C14}), $\{e_1,e_2\}$ is a separating pair of edges. Therefore the edges deleted to pass from $M^*(\Gamma)$ to $\widetilde{M^*(\Gamma)}$ correspond exactly to the edges contracted to construct $\Gamma^3$ from $\Gamma$, and hence we get that $\widetilde{M^*(\Gamma)}\cong M^*(\Gamma^3)$. \end{proof} \subsection{The posets $\mathcal{OP}_{\Gamma}$ and ${\overline{\mathcal{OP}_{\Gamma}}}$} We defined totally cyclic orientations in Definition~\ref{tot}. Now we introduce a partial ordering among them. \begin{defi} \label{totdef} The poset $\mathcal{OP}_{\Gamma}$ of \emph{totally cyclic orientations} of $\Gamma$ is the set of pairs $(S, \phi_S)$ where $S\in \mathcal{SP}_{\Gamma}$ and $\phi_S$ is a totally cyclic orientation of $\Gamma\smallsetminus S$, endowed with the following partial order $$(S, \phi_S)\geq (T,\phi_T) \Leftrightarrow S\subset T \text{ and } \phi_T=(\phi_S)_{|E(\Gamma\smallsetminus T)}. $$ We call $S$ the {\it support} of the orientation $\phi_S$. \end{defi} We have a natural map $$\begin{aligned} \operatorname{Supp}:\mathcal{OP}_{\Gamma} & \rightarrow \mathcal{SP}_{\Gamma}\\ (S, \phi_S)& \mapsto S \end{aligned} $$ which is order-preserving by definition and surjective because of Lemma \ref{chso}(\ref{c2}). We say that a map $\pi:(P,\leq)\to (Q,\leq)$ between two posets $(P,\leq)$ and $(Q,\leq)$ is a {\it quotient} if and only if for every $x,y\in Q$ we have that $$x\leq y \Leftrightarrow \text{ there exist } \widetilde{x}\in \pi^{-1}(x) \text{ and } \widetilde{y}\in \pi^{-1}(y) \text{ such that } \widetilde{x}\leq \widetilde{y}. $$ In particular $\pi$ is monotone and surjective. 
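To fix ideas, we record an elementary example of the objects introduced so far; it is only an illustration and is not used in the sequel.

\begin{remark}
Let $\Gamma$ be the graph with two vertices $u,v$ joined by three edges $e_1,e_2,e_3$, so that $b_1(\Gamma)=2$ and $E(\Gamma)_{{\rm sep}}=\emptyset$. A subset $S\subset E(\Gamma)$ belongs to $\mathcal{SP}_{\Gamma}$ if and only if $\Gamma\smallsetminus S$ has no separating edge, hence $$ \mathcal{SP}_{\Gamma}=\{\emptyset,\ \{e_1\},\ \{e_2\},\ \{e_3\},\ E(\Gamma)\}, $$ a graded poset with minimum element $E(\Gamma)$ and normalized rank function $S\mapsto b_1(\Gamma\smallsetminus S)$ taking the values $0,1,2$. The fiber of $\operatorname{Supp}$ over $E(\Gamma)$ is a single point; the fiber over each $\{e_i\}$ consists of the two cyclic orientations of the $2$-cycle $\Gamma\smallsetminus \{e_i\}$; the fiber over $\emptyset$ consists of the six orientations of $\Gamma$ in which the three edges are neither all going out of $u$ nor all going into $u$. In particular $\operatorname{Supp}$ is surjective but not injective.
\end{remark}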
Observe also that if $\pi:(P,\leq)\to (Q,\leq)$ is a quotient, then $(P,\leq)$ is graded if and only if $(Q,\leq)$ is graded, and in this case we can choose two rank functions $\rho_P$ on $P$ and $\rho_Q$ on $Q$ such that $\rho_Q(\pi(x))=\rho_P(x)$. We introduce now the outdegree function. \begin{defi}\label{outdeg} The outdegree function $\underline{d}^+$ is the map $$\begin{aligned} \underline{d}^+:\mathcal{OP}_{\Gamma}& \longrightarrow \mathcal N^{V(\Gamma)}\\ (S, \phi_S) & \mapsto \{d^+(S, \phi_S)_v\}_{v\in V(\Gamma)}, \end{aligned}$$ where $d^+(S, \phi_S)_v$ is the number of edges of $\Gamma\smallsetminus S$ that are going out of the vertex $v$ according to the orientation $\phi_S$. \end{defi} Note that $\underline{d}^+$ is monotone with respect to the component-by-component partial order on $\mathcal N^{V(\Gamma)}$. Moreover $$ \sum_{v\in V(\Gamma)} d^+(S, \phi_S)_v=\#(E(\Gamma\smallsetminus S)). $$ This definition enables us to introduce an equivalence relation $\sim$ on $\mathcal{OP}_{\Gamma}$. \begin{defi}\label{equiv-or} We say that two elements $(S, \phi_S)$ and $(S', \phi_{S'})$ of $\mathcal{OP}_{\Gamma}$ are equivalent, and we write that $(S, \phi_S)\sim (S', \phi_{S'})$, if $S=S'$ and $\underline{d}^+(S, \phi_S)=\underline{d}^+(S', \phi_{S'})$. We denote by $[(S, \phi_S)]$ the equivalence class of $(S, \phi_S)$. The set of equivalence classes will be denoted ${\overline{\mathcal{OP}_{\Gamma}}}:=\mathcal{OP}_{\Gamma}/_\sim$. On ${\overline{\mathcal{OP}_{\Gamma}}}$ we define a poset structure by saying that $[(S, \phi_S)] \geq [(T,\phi_T)]$ if there exist $ (S', \phi_{S'})\sim (S, \phi_S)$ and $(T',\phi_{T'})\sim (T,\phi_T)$ such that $(S', \phi_{S'})\geq (T', \phi_{T'})$ in $\mathcal{OP}_{\Gamma}$. 
\end{defi} Note that ${\overline{\mathcal{OP}_{\Gamma}}}$ is a quotient of the poset $ \mathcal{OP}_{\Gamma}$ and that the natural map of posets $\operatorname{Supp}: \mathcal{OP}_{\Gamma}\to \mathcal{SP}_{\Gamma}$ factors as $$\xymatrix{ \mathcal{OP}_{\Gamma} \ar@{->>}[rr]^{\rm Supp} \ar@{->>}[dr]& & \mathcal{SP}_{\Gamma} \\ & {\overline{\mathcal{OP}_{\Gamma}}} \ar@{->>}[ru]& }$$ The next two lemmas show that $\mathcal{OP}_{\Gamma}$ and ${\overline{\mathcal{OP}_{\Gamma}}}$ are invariant under cyclic equivalence and 3-edge connectivization. \begin{lemma}\label{or-cyclic} The posets $\mathcal{OP}_{\Gamma}$ and ${\overline{\mathcal{OP}_{\Gamma}}}$ depend only on $[\Gamma]_{\rm cyc}$.\end{lemma} \begin{proof} It is enough to show that the posets $\mathcal{OP}_{\Gamma}$ and ${\overline{\mathcal{OP}_{\Gamma}}}$ do not change under the two moves of Theorem \ref{cycequ-moves}. Consider first a move of type (1), that is the gluing of two graphs $\Gamma_1$ and $\Gamma_2$ at two vertices $v_1\in V(\Gamma_1)$ and $v_2\in V(\Gamma_2)$ (see figure \ref{cont-sep}). Call $\Gamma$ the resulting graph and $v\in V(\Gamma)$ the resulting vertex. It is clear that $(\mathcal{SP}_{\Gamma},\leq) \cong (\mathcal{SP}_{\Gamma_1\coprod \Gamma_2},\leq)$. Given an element $S\in \mathcal{SP}_{\Gamma}$, we denote by $(S_1, S_2)$ the corresponding element of $\mathcal{SP}_{\Gamma_1\coprod \Gamma_2}$. It is easy to check that any totally cyclic orientation $\phi_S$ of $\Gamma\smallsetminus S$ induces totally cyclic orientations $\phi_{S_1}$ and $\phi_{S_2}$ of $\Gamma_1\smallsetminus S_1$ and $\Gamma_2\smallsetminus S_2$ and conversely. Moreover the outdegree $\underline{d}^+(S,\phi_S)$ determines, and is determined by, the two outdegrees $\underline{d}^+(S_1, \phi_{S_1})$ and $\underline{d}^+(S_2, \phi_{S_2})$, hence we get the desired conclusion. Consider now a move of type (2). 
Let $\Gamma$ be obtained by gluing the two graphs $\Gamma_1$ and $\Gamma_2$ according to the rule $u_1\leftrightarrow u_2$ and $v_1\leftrightarrow v_2$, and let $\overline{\Gamma}$ be obtained by gluing $\Gamma_1$ and $\Gamma_2$ according to the rule $u_1\leftrightarrow v_2$ and $v_1\leftrightarrow u_2$ (see figure \ref{twist}). Note that since $E(\Gamma)=E(\Gamma_1)\cup E(\Gamma_2)$, any element $S\in \mathcal{SP}_{\Gamma}$ determines two subsets $S_1\subset E(\Gamma_1)$ and $S_2\subset E(\Gamma_2)$. These two subsets $S_1$ and $S_2$ determine also a subset $\overline{S}\subset E(\overline{\Gamma})$, which is easily seen to belong to $\mathcal{SP}_{\overline{\Gamma}}$. The association $S\mapsto \overline{S}$ determines an isomorphism $(\mathcal{SP}_{\Gamma},\leq) \cong (\mathcal{SP}_{\overline{\Gamma}},\leq)$. We now construct, for any $S\in \mathcal{SP}_{\Gamma}$, a bijection between the set of all totally cyclic orientations (resp. totally cyclic orientations up to equivalence) on $\Gamma\smallsetminus S$ and the set of totally cyclic orientations (resp. totally cyclic orientations up to equivalence) on $\overline{\Gamma}\smallsetminus {\overline S}$. Any orientation $\phi_S$ on $\Gamma\smallsetminus S$ determines two orientations $\phi_{S_1}$ and $\phi_{S_2}$ on $\Gamma_1\smallsetminus S_1$ and $\Gamma_2\smallsetminus S_2$, respectively. We define an orientation $\phi_{\overline{S}}$ of $\overline{\Gamma}\smallsetminus \overline{S}$ by putting together the orientation $\phi_{S_1}$ and the inverse of the orientation $\phi_{S_2}$, that is the orientation $\phi_{S_2}^{-1}$ obtained by reversing the direction of all the edges. Using Lemma \ref{chso}, it is easy to check that if $\phi_S$ is a totally cyclic orientation of $\Gamma\smallsetminus S$ then $\phi_{\overline{S}}$ is a totally cyclic orientation of $\overline{\Gamma}\smallsetminus \overline{S}$. 
Moreover it is straightforward to check that the outdegree function $\underline{d}^+(S, \phi_S)$ determines and is completely determined by $\underline{d}^+(\overline{S},\phi_{\overline{S}})$. Clearly the association $\phi_S\mapsto \phi_{\overline{S}}$ is a bijection since we can reconstruct $\phi_S$ starting from $\phi_{\overline{S}}$ by reversing the orientation on $\Gamma_2$. Moreover it is easy to check that the constructed bijections $\mathcal{OP}_{\Gamma}\cong \mathcal{OP}_{\overline{\Gamma}}$ and ${\overline{\mathcal{OP}_{\Gamma}}} \cong {\overline{\mathcal{OP}_{\overline{\Gamma}}}} $ are compatible with the poset structure, and thus we are done. \end{proof} \begin{lemma}\label{or-conn} For any choice of $\Gamma^3$ we have natural isomorphisms of posets: $ \mathcal{OP}_{\Gamma} \cong \mathcal{OP}_{\Gamma^3} $ and ${\overline{\mathcal{OP}_{\Gamma}}} \cong {\overline{\mathcal{OP}_{\Gamma^3}}}$. \end{lemma} \begin{proof} It is enough to show that the posets $\mathcal{OP}_{\Gamma}$ and ${\overline{\mathcal{OP}_{\Gamma}}}$ do not change under the two moves of Definition \ref{3-conn}. Recall that for every $S\in \mathcal{SP}_{\Gamma}$ we have $E(\Gamma)_{{\rm sep}}\subset S$. Therefore $E(\Gamma)_{{\rm sep}}$ does not affect the totally cyclic orientations on $\Gamma\smallsetminus S$, nor does it affect the outdegree function. This proves that $\mathcal{OP}_{\Gamma}$ does not change when separating edges of $\Gamma$ get contracted. Consider now a move of type (B), that is the contraction of an edge $e_1$ belonging to a separating pair $(e_1, e_2)$. We refer to the notations of figure \ref{con-pair}. We know that $ \mathcal{SP}_{\Gamma} \cong \mathcal{SP}_{\overline{\Gamma}}$, by Lemma \ref{supp-equiv}. Given an element $S\in \mathcal{SP}_{\Gamma}$, we denote by $\overline{S}$ the corresponding element in $\mathcal{SP}_{\overline{\Gamma}}$. We now construct, for any $S\in \mathcal{SP}_{\Gamma}$, a bijection between the set of all totally cyclic orientations (resp. 
totally cyclic orientations up to equivalence) on $\Gamma\smallsetminus S$ and the set of totally cyclic orientations (resp. totally cyclic orientations up to equivalence) on $\overline{\Gamma}\smallsetminus {\overline S}$. If $\overline{e}\in \overline{S}$ (which happens exactly when $e_1, e_2\in S$), then $\overline{\Gamma}\smallsetminus \overline{S}$ is cyclically equivalent to $\Gamma\smallsetminus S$ and therefore we conclude by the previous Lemma. If $\overline{e}\not\in \overline{S}$ (which happens exactly when $e_1$ and $e_2$ do not belong to $S$), we lift any totally cyclic orientation $\phi_{\overline{S}}$ of $\overline{\Gamma}\smallsetminus \overline{S}$ to an orientation $\phi_S$ of $\Gamma\smallsetminus S$ by orienting any edge in $E(\Gamma\smallsetminus (S\cup \{e_1\}))$ as the corresponding edge in $E(\overline{\Gamma}\smallsetminus \overline{S})$, and by orienting $e_1$ so that the cycle $\Gamma(\{e_1,e_2\})$ is cyclically oriented. Lemma \ref{chso} implies that $\phi_S$ is a totally cyclic orientation of $\Gamma\smallsetminus S$ and that any totally cyclic orientation $\phi_S$ must arise from a totally cyclic orientation $\phi_{\overline{S}}$ via this construction. Moreover, it is easy to check that the outdegrees $d^+(S, \phi_S)$ and $d^+(\overline{S},\phi_{\overline{S}})$ are completely determined one from another, and this concludes the proof. \end{proof} \begin{nota}{\emph{A conjectural geometric description of $\mathcal{OP}_{\Gamma}$ and $\overline{\mathcal{OP}_{\Gamma}}$}} We propose a conjectural geometric description of the two posets $\mathcal{OP}_{\Gamma}$ and $\overline{\mathcal{OP}_{\Gamma}}$. Recall the following definition (see for example \cite[Page 174]{BdlHN}). \begin{defi}\label{Vordef} The Voronoi polyhedron of the graph $\Gamma$ is the compact convex polytope defined by $${\rm Vor}_{\Gamma}:=\{x\in H_1(\Gamma,\mathbb{R})\: :\: (x,x)\leq (x-\lambda, x-\lambda) \text{ for all } \lambda\in H_1(\Gamma,\mathbb{Z})\}. 
$$ \end{defi} We denote by ${\rm Faces}({\rm Vor}_{\Gamma})$ the poset of faces of the Voronoi polyhedron ${\rm Vor}_{\Gamma}$, with the order given by the reverse of the natural inclusion between the faces. It is a graded poset with minimum equal to the interior of ${\rm Vor}_{\Gamma}$ and normalized rank function equal to the codimension of the faces. From the definition, it follows that ${\rm Vor}_{\Gamma}$ is a fundamental domain for the action of $H_1(\Gamma,\mathbb{Z})$ on $H_1(\Gamma,\mathbb{R})$ by translations. In particular $H_1(\Gamma,\mathbb{Z})$ acts by translation on the faces of ${\rm Vor}_{\Gamma}$. We denote by $\overline{{\rm Faces}({\rm Vor}_{\Gamma})}$ the quotient poset of ${\rm Faces}({\rm Vor}_{\Gamma})$ with respect to the action of $H_1(\Gamma,\mathbb{Z})$. \begin{conj}\label{geo-conj} For a graph $\Gamma$, we have that \begin{enumerate}[(i)] \item $\mathcal{OP}_{\Gamma}\cong {\rm Faces}({\rm Vor}_{\Gamma})$. \item $\overline{\mathcal{OP}_{\Gamma}}\cong \overline{{\rm Faces}({\rm Vor}_{\Gamma})}$. \end{enumerate} \end{conj} The above conjecture $(i)$ generalizes the bijection (proved in \cite[Prop. 5.2]{OS} and \cite[Prop. 6]{BdlHN}) between the codimension-one faces of ${\rm Vor}_{\Gamma}$ and the oriented cycles of $\Gamma$ (which correspond to the elements $(S,\phi_S)\in \mathcal{OP}_{\Gamma}$ such that $b_1(\Gamma\smallsetminus S)=1$). Therefore part $(i)$ proposes an answer to the interesting problem posed in \cite[Page 174]{BdlHN}: ``More ambitiously, one would like to understand the combinatorics of the Voronoi polyhedron in terms of oriented circuits of the graph''. \end{nota} \subsection{Conclusions} \begin{lemma}\label{supp-map} The support map $\operatorname{Supp}:\mathcal{OP}_{\Gamma} \longrightarrow \mathcal{SP}_{\Gamma}$ is a quotient of posets. 
Moreover, given $S, T\in \mathcal{SP}_{\Gamma}$ such that $S$ covers $T$, and a totally cyclic orientation $\phi_T$ of $\Gamma\smallsetminus T$, there are at most two (possibly equal) extensions of $\phi_T$ to a totally cyclic orientation $\phi_S$ of $\Gamma\smallsetminus S$. \end{lemma} \begin{proof} We already observed that $\operatorname{Supp}$ is surjective and order preserving. For the remaining part we use the fact that $\mathcal{SP}_{\Gamma}$ is graded by the function $b_1(\Gamma \smallsetminus S)$ (see Corollary \ref{rank}). By Lemmas \ref{supp-equiv} and \ref{or-conn} we can assume that $\Gamma$ is 3-edge connected. In particular, we have $E(\Gamma)_{{\rm sep}} = \emptyset$. It is easy to see that it suffices to assume $S=\emptyset$. The hypothesis that $\emptyset$ covers $T$ is equivalent to the fact that $b_1(\Gamma)=b_1(\Gamma\smallsetminus T)+1$ or, equivalently, that $b_1(\Gamma(T))=1$. Hence $\Gamma(T)$ is a cycle (as $E(\Gamma)_{{\rm sep}} = \emptyset$). Using the characterization \ref{chso} (in particular part (\ref{vw})) it is easy to check that the only way to extend the orientation $\phi_T$ of $\Gamma\smallsetminus T$ to a totally cyclic orientation on all of $\Gamma$ is by choosing for the edges of $T$ one of the two cyclic orientations of the cycle $\Gamma(T)$. \end{proof} Summing up what we have proved in this section, we get the following \begin{thm}\label{final-thm} Let $\Gamma$ and $\Gamma'$ be two graphs. The following facts are equivalent: \begin{enumerate}[(i)] \item \label{(i)} $ [\Gamma^3]_{\rm cyc} =[\Gamma'^3]_{\rm cyc}$. \item \label{(ii)} ${\rm Del}(\Gamma)\cong {\rm Del}(\Gamma')$. \item \label{(iii)} $\mathcal{SP}_{\Gamma}\cong \mathcal{SP}_{\Gamma'}$ as posets. \item \label{(iv)}$\mathcal{OP}_{\Gamma}\cong \mathcal{OP}_{\Gamma'}$ as posets. \item \label{(v)} ${\overline{\mathcal{OP}_{\Gamma}}} \cong{\overline{\mathcal{OP}_{\Gamma'}}}$ as posets. 
\end{enumerate} \end{thm} \begin{proof} The equivalence (\ref{(i)})$\Leftrightarrow$(\ref{(ii)}) was proved in Proposition \ref{Del-equ}(\ref{Del3}), while the equivalence (\ref{(i)})$\Leftrightarrow$(\ref{(iii)}) follows from Lemma~\ref{supp-equiv}. The implications (\ref{(i)})$\Rightarrow$(\ref{(iv)}) and (\ref{(i)})$\Rightarrow$(\ref{(v)}) follow from Lemmas \ref{or-cyclic} and \ref{or-conn}. Finally the implications (\ref{(iv)})$\Rightarrow$(\ref{(iii)}) and (\ref{(v)})$\Rightarrow$(\ref{(iii)}) follow from the fact that $\mathcal{SP}_{\Gamma}$ is a quotient poset of ${\overline{\mathcal{OP}_{\Gamma}}}$ and $\mathcal{OP}_{\Gamma}$ (see Lemma \ref{supp-map} and the discussion after Definition \ref{equiv-or}). \end{proof}
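To keep the posets of Theorem \ref{final-thm} and the content of Conjecture \ref{geo-conj} in mind, we record the simplest example; the verification below is elementary and independent of the rest of the paper.

\begin{remark}
Let $\Gamma$ be the $2$-cycle consisting of two vertices $u,v$ joined by two edges $e_1,e_2$, with $l(e_1)=l(e_2)=1$. Then $\mathcal{SP}_{\Gamma}=\{\emptyset, E(\Gamma)\}$ and $\mathcal{OP}_{\Gamma}$ has three elements: the minimum, with support $E(\Gamma)$, and the two cyclic orientations of $\Gamma$, with support $\emptyset$. The two cyclic orientations have the same outdegree function (equal to $1$ at each vertex), hence ${\overline{\mathcal{OP}_{\Gamma}}}$ has two elements. On the other hand, $H_1(\Gamma,\mathbb{R})\cong \mathbb{R}$ is generated by the cycle $c=e_1-e_2$, with $(c,c)=2$, so that ${\rm Vor}_{\Gamma}$ is the segment $[-c/2, c/2]$, whose poset of faces consists of the interior and the two endpoints $\pm c/2$; the latter are identified by the translation by $c$. Hence ${\rm Faces}({\rm Vor}_{\Gamma})\cong \mathcal{OP}_{\Gamma}$ and $\overline{{\rm Faces}({\rm Vor}_{\Gamma})}\cong {\overline{\mathcal{OP}_{\Gamma}}}$, as predicted by Conjecture \ref{geo-conj}.
\end{remark}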
\section{Introduction} On a complex manifold equipped with the Bergman kernel and metric, the Bergman representative map, originally named ``the representative domain'' by Stefan Bergman himself, is an important offshoot of the Bergman kernel form (cf. \cite{GKK2011}, Chapter 4). It is a special holomorphic map which stands in significant contrast with the exponential map of the Riemannian structure given by the real part of the Bergman metric; the Riemannian exponential map is almost never holomorphic. On the other hand, the representative map gives rise to a holomorphic K\"{a}hler normal coordinate system with respect to the Bergman metric. One of its best known features is that all holomorphic Bergman isometries become linear mappings in these representative coordinates. In spite of the difficulty that this map is not well-defined everywhere, this feature has been proven to be useful in many important works (see for instance, \cite{Lu1966}, \cite{bell1980}, \cite{Webster1979}, \cite{greene1985}, et al.). However, it was striking to us that no systematic study of this concept has yet been carried out. The goal of this paper is therefore to provide a first step towards a systematic treatment of the Bergman representative map. In particular, we present a construction of the torsion-free flat holomorphic affine connection on the holomorphic tangent bundle of an open dense subdomain of the given complex manifold, whose affine exponential map is the inverse to the representative map (Theorem \ref{connection}). This yields a differential geometric interpretation of the Bergman representative map. 
It is worth mentioning that our connection was discovered, at least partially, by several other authors in the articles preceding this paper, even though the information was scattered across papers such as \cite{bochner1947} (much earlier than the others; in fact, Bochner constructed ``normal'' coordinates only, which can develop into the connection), \cite{calabi1953isometric}, and \cite{bcov1994}. It is also studied independently in \cite{Demailly1982} and \cite{kapranov1999} in relation to the holomorphic part of the K\"{a}hler metric connection (a symplectic geometric interpretation can be found in \cite{kontsevich1995} and \cite{ruan98}). More notably, as the connection for the case of ``bounded domains'', it was studied in \cite{Webster1979} for a version of an extension theorem for biholomorphic mappings. We hope that this paper presents these concepts from a unified viewpoint. This paper is organized as follows: First, we briefly review fundamentals of Bergman geometry including the construction of the concept of the representative map. Then we present Bochner's normal coordinate system for real analytic K\"{a}hler manifolds and the affine connection. We would like to call it {\it the Bochner connection}. Then, we restrict ourselves to complex manifolds with the Bergman metric, and study the Bochner connection. Our study culminates in a notable main statement (Theorem \ref{main}): a generalization of the theorem of Lu Qi-Keng \cite{Lu1966}, which says that a bounded domain in $\mathbb{C}^n$ whose Bergman metric is complete and of a constant holomorphic sectional curvature is biholomorphic to the unit ball. We were able to generalize this to the case of bounded domains with a \textit{pole of the Bochner connection}, such as a circular domain or a homogeneous domain. 
\section{Fundamentals of Bergman geometry} \subsection{The Bergman kernel and metric for a bounded domain in $\mathbb{C}^n$} Let $\Omega$ be a bounded domain in $\mathbb{C}^n$ and $K(z,\ov{w})$ the Bergman kernel of $\Omega$. Since $K(z,\ov{z})>0$, the Bergman metric $$ g_{\Omega}(z)=\sum\limits_{j,k=1}^n g_{j\ov{k}}(z) dz_j\otimes d\ov{z_k} \text{\ \ \ \ with\ \ } g_{j\ov{k}}(z)=g_{j\ov{k}}(z,\ov{z}):=\frac{\partial^2\log K(z,\ov{z})}{\partial z_j\partial \ov{z_k}} $$ is well-defined. In fact, the following result was proved by Bergman himself \cite{bergman1970kernel}: \begin{thm}[Bergman] The Bergman metric $g_{\Omega}$ is positive-definite at every $z \in\Omega$. \end{thm} \begin{rmk} Note that $g_{\Omega}$ is a K\"{a}hler metric. The transformation formula for the Bergman kernel function (under biholomorphisms) implies that every biholomorphism between bounded domains is an isometry with respect to the Bergman metric. \end{rmk} \subsection{The Bergman representative map}\label{br} Let $p$ be a point of $\Omega$. Since $K(p,\ov{p})>0$, there is a neighborhood of $p$ such that $K(z,\ov{w})\neq 0$ for all $z,w$ in that neighborhood. Denote by ${g}^{\ov{k}j}(p)$ the $(k,j)$-th entry of the inverse matrix of $(g_{j\ov{k}}(p))$. \begin{defn}\label{rep} The {\it Bergman representative map} at $p$ is defined by $$ \hbox{rep}_p(z)=(\zeta_1(z),\ldots,\zeta_n(z)), $$ where: $$ \zeta_j(z):=\sum\limits_{k=1}^n{g}^{\ov{k}j}(p)\Big\{\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K(z,\ov{w})-\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K(w,\ov{w})\Big\}. $$ \end{defn} Since $\frac{\partial \zeta_k}{\partial z_l}|_{z=p}=\delta_{lk}$, this map defines a holomorphic local coordinate system at $p$. Another special feature is in the following theorem by Bergman himself. \begin{thm}[Bergman]\label{clin} If $f:\Omega\rightarrow\tilde{\Omega}$ is a biholomorphic mapping of bounded domains, then $\hbox{\rm rep}_{f(p)}\circ f\circ \hbox{\rm rep}_p^{-1}$ is $\mathbb{C}$-linear. 
\end{thm} The original proof of this by Bergman was via a direct computation using the transformation formula. On the other hand, a differential geometric proof using the Bochner connection will be presented in Section \ref{bb} (see Theorem \ref{connection} as well as Remark \ref{remc}). Since the Bergman kernel and metric can be defined for complex manifolds \cite{kobayashi1959}, this geometric explanation applies to the case of complex manifolds. \subsection{The Bergman kernel form on a complex manifold} Let $M$ be an $n$-dimensional complex manifold and $A^2(M)$ the space of holomorphic $n$-forms $f$ on $M$ satisfying $$ \Big|\int_Mf\wedge\ov{f}\Big|<\infty. $$ Let $\{\phi_0,\phi_1,\phi_2,\ldots\}$ be a complete orthonormal basis for the Hilbert space $A^2(M)$ and $\ov{M}$ the complex manifold conjugate to $M$. Define the holomorphic $2n$-form on $M\times\ov{M}$ by $$ K(z,\ov{w})=\sum_{j=0}^{\infty}\phi_j(z)\wedge\ov{\phi_j(w)}. $$ This construction is independent of the choice of orthonormal basis. Using the diagonal embedding $\iota:M\hookrightarrow M\times\ov{M}$, defined by $\iota(z)=(z,\ov{z})$, and the natural identification of $M$ with $\iota(M)$, $K(z,\ov{z})$ can be considered as a $2n$-form on $M$. This is called the {\it Bergman kernel form} of $M$. Consider the case that the Bergman kernel form is non-zero at every point of $M$. In a local coordinate system $(U,(z_1,\ldots,z_n))$, the Bergman kernel form can be written as $$ K(z,\ov{z})=K^{\ast}_U(z,\ov{z})dz_1\wedge\cdots\wedge dz_n\wedge d\ov{z_1}\wedge\cdots\wedge d\ov{z_n}, $$ where $K^{\ast}_U(z,\ov{z})$ is a well-defined function on $U$. Set $$ ds^2_M:=\sum\limits_{j,k=1}^n g_{j\ov{k}}(z)dz_j\otimes d\ov{z_k}=\sum\limits_{j,k=1}^n \frac{\partial^2\log K^{\ast}_U(z,\ov{z})}{\partial z_j\partial\ov{z_k}}dz_j\otimes d\ov{z_k}. $$ This is independent of the choice of local coordinate system. 
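It is instructive to see Definition \ref{rep} at work in the simplest model case; the following standard computation is included only as an illustration and is not used later.

\begin{rmk}
Let $\Omega=\Delta\subset\mathbb{C}$ be the unit disc, with Bergman kernel $K(z,\ov{w})=\frac{1}{\pi(1-z\ov{w})^2}$, so that $\log K(z,\ov{w})=-\log\pi-2\log(1-z\ov{w})$. Then $$ \frac{\partial}{\partial \ov{w}}\Big|_{w=p}\log K(z,\ov{w})=\frac{2z}{1-z\ov{p}}, \qquad g_{1\ov{1}}(p)=\frac{2}{(1-|p|^2)^2}, $$ and Definition \ref{rep} gives $$ \hbox{\rm rep}_p(z)=\frac{(1-|p|^2)^2}{2}\Big(\frac{2z}{1-z\ov{p}}-\frac{2p}{1-|p|^2}\Big)=(1-|p|^2)\,\frac{z-p}{1-\ov{p}z}. $$ Up to the constant factor $(1-|p|^2)$, this is the M\"{o}bius automorphism of $\Delta$ sending $p$ to $0$; in particular, every automorphism of $\Delta$ becomes a linear map in these coordinates, in accordance with Theorem \ref{clin}.
\end{rmk}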
When the matrix $G(z):=(g_{j\ov{k}}(z))$ is positive-definite for each $z\in M$, $ds^2_M$ is called the {\it Bergman metric} of $M$. \subsection{Bergman representative coordinates} From now on, suppose that $M$ is a complex manifold which possesses the Bergman metric. (In fact, many complete non-compact K\"{a}hler manifolds with negative curvature admit the Bergman metric; see \cite{GW1979}, Theorem H.) In a local coordinate system $(U\times \ov{V},(z_1,\ldots,z_n,\ov{w_1},\ldots,\ov{w_n}))$ for $M\times\ov{M}$, $$ K(z,\ov{w})=K^{\ast}_{U\times\ov{V}}(z,\ov{w})dz_1\wedge\cdots\wedge dz_n\wedge d\ov{w_1}\wedge\cdots\wedge d\ov{w_n}, $$ where $K^{\ast}_{U\times\ov{V}}(z,\ov{w})$ is a well-defined function on $U\times \ov{V}$. Given a point $\ov{p}\in\ov{V}$, define the following holomorphic coordinate system centered at $p$ (cf. \cite{davidov1977},\cite{GKK2011}, and \cite{dinew2011}). \begin{defn} The {\it Bergman representative coordinate system} at $p$ is defined by $$ \hbox{rep}_p(z)=(\zeta_1(z),\ldots,\zeta_n(z)), $$ where: $$ \zeta_j(z):=\sum\limits_{k=1}^n{g}^{\ov{k}j}(p)\Big\{\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K^{\ast}_{U\times\ov{V}}(z,\ov{w})-\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K^{\ast}_{V\times\ov{V}}(w,\ov{w})\Big\}. $$ \end{defn} \begin{rmk} The above construction is independent of the choice of a coordinate system $(U,(z_1,\ldots,z_n))$ for $M$. But it depends on the choice of local coordinate system $(V,(w_1,\ldots,w_n))$. Note that $\hbox{rep}_p$ extends to a global function, well-defined on the whole of $M$ except for the analytic variety $Z^p_0:=\{z\in M:K(z,\ov{p})=0\}$. \end{rmk} \section{Bochner's normal coordinates and connection} For the real analytic K\"{a}hler manifolds, Bochner constructed the K\"{a}hler normal coordinate system, a version of the representative coordinate system built from the K\"{a}hler potential \cite{bochner1947}. 
This normal coordinate system is strongly related to the exponential map of the K\"{a}hler metric \cite{Demailly1982}. We feel that this relation can be better explained via the language of vector bundles and connections \cite{kapranov1999}. Therefore, we reorganize this information, scattered in the literature. \subsection{Bochner's normal coordinates} Suppose that $M$ is a K\"{a}hler manifold with the real analytic K\"{a}hler metric $g$. In \cite{bochner1947}, a K\"{a}hler normal coordinate system is defined as follows: \begin{prp}[Bochner's normal coordinates] Given $p\in M$, there exist holomorphic coordinates $(\zeta_1,\ldots,\zeta_n)$, unique up to unitary linear transformations satisfying \begin{enumerate} \item [(i)] $\zeta(p)=0$,\smallskip \item [(ii)] $g_{j\ov{k}}(p)=\delta_{jk}$,\smallskip \item [(iii)] $dg_{j\ov{k}}(p)=0$,\smallskip \item [(iv)] $\frac{\partial^Ig_{j\ov{k}}}{\partial \zeta_1^{i_1} \cdots\partial \zeta_n^{i_n}}(p)=0$, for all $I\geq 1$ and $i_1+\cdots+i_n=I$. \end{enumerate} \end{prp} In \cite{bcov1994}, Bochner's coordinate system was rediscovered in the context of mathematical physics. There, the Bochner coordinates were called the {\it canonical coordinates}. Their result is \begin{prp}[Bershadsky, Cecotti, Ooguri and Vafa \cite{bcov1994}] Bochner's normal coordinates $(\zeta_1,\ldots,\zeta_n)$ can be expressed in terms of the K\"{a}hler potential $\psi(z,\ov{z})$: $$ \zeta_j(z)=\sum\limits_{k=1}^n\sqrt{g}^{\ov{k}j}(p)\Big\{\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\psi(z,\ov{w})-\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\psi(w,\ov{w})\Big\}, $$ where ${\sqrt{g}}^{\ov{k}j} (p)$ is as follows: Since $G(p):=(g_{j\ov{k}}(p))$ is a positive-definite Hermitian matrix, there exists a matrix $A$ such that $G(p)=A \ov{A^t}$. Denote by ${\sqrt{g}}^{\ov{k}j}(p)$ the $(k,j)$-th entry of the inverse matrix of $A$. 
\end{prp} \begin{cor} Bochner's normal coordinate system for a manifold with the Bergman metric is the same as the Bergman representative coordinate system of the K\"{a}hler potential $\log K(z,\ov{z})$ up to the normalization factor $\sqrt{g}^{\ov{k}j}(p)$. \end{cor} We explain how to separate the holomorphic part from the Riemannian exponential map and show that it coincides with the inverse map of the Bochner normal coordinate system. Our exposition follows those of \cite{Demailly1982}, \cite{Demailly1994}, and \cite{kapranov1999}. \subsection{The holomorphic exponential map} Let $M$ be a real-analytic K\"{a}hler manifold. The construction of the holomorphic exponential map from the Riemannian exponential map $\text{exp}_p:T_pM\rightarrow M$ consists of two steps: {\small\bf (1) complexification}, {\small\bf (2) restriction.}\medskip\\ {\small\bf Step 1. Complexification (Polarization).} Note that the real-analytic manifold $M$ of real dimension $n$ can be embedded to become a totally real submanifold of the complex manifold $\mathbb{C}M$ of complex dimension $n$. \begin{thm}[Whitney-Bruhat \cite{Whitney1959}] Every real analytic manifold $M$ can be embedded to become a totally real submanifold of a complex manifold. This embedding is unique in the sense that, if $\iota_1:M\hookrightarrow\mathbb{C}M_1$ and $\iota_2:M\hookrightarrow\mathbb{C}M_2$ are such embeddings, then there exist neighborhoods $U_1$ and $U_2$ of $M$ in $\mathbb{C}M_1$ and $\mathbb{C}M_2$ respectively, and a biholomorphism $f:U_1\rightarrow U_2$ such that $\iota_2=f\circ\iota_1$. \end{thm} Take the complexification $T_pM\hookrightarrow T_p^{\mathbb{C}}M$ and the diagonal embedding $\iota:M \hookrightarrow M\times\ov{M}$. Then, apply the following lemma to the exponential map $\text{exp}_p:T_pM\rightarrow M$. \begin{lem} Let $M$ and $N$ be totally real submanifolds of complex manifolds $\mathbb{C}M$ and $\mathbb{C}N$, and $f:M\rightarrow N$ a real-analytic diffeomorphism. 
Then there are neighborhoods $U$ and $V$ of $M$ and $N$, and a unique holomorphic map $f^{\mathbb{C}}:U\rightarrow V$ extending $f$. \end{lem} Denote by $\text{exp}_p^{\mathbb{C}}$ the unique holomorphic extension of $\text{exp}_p$.\medskip\\ {\small\bf Step 2. Restriction.} Use the decomposition $T_p^{\mathbb{C}}M\cong T'_pM\oplus T_p''M$, where $T'_pM$ is the holomorphic tangent space and $T''_pM$ the anti-holomorphic tangent space. Then restrict the complexified map $\text{exp}_p^{\mathbb{C}}$ to $T'_pM$. \begin{defn} The restriction map $\text{exp}_p^{\mathbb{C}}\big|_{T'_pM}(\zeta):=\text{exp}_p^{\mathbb{C}}(\zeta,0)$ is called the {\it holomorphic exponential map}. \end{defn} We remark that the above definition is the same as the following definition, which appeared first in \cite{Demailly1982}. \begin{defn} Take the power series expansion of the exponential map of the K\"{a}hler metric $\text{exp}_p:T'_pM\oplus T_p''M\rightarrow M$ and the decomposition $$ \text{exp}_p(\zeta,\ov{\zeta})=f(\zeta) + g(\zeta,\ov{\zeta}), $$ on some neighborhood of $0$, where $f$ is holomorphic in $\zeta$ and $g$ is the sum of all monomials which are not holomorphic in $\zeta$. Then {\it the holomorphic part} of the exponential map at $p$ is defined to be $$ \text{exph}_p(\zeta):=f(\zeta). $$ \end{defn} \subsection{The Bochner connection}\label{bc} We present the construction of the holomorphic affine connection whose affine exponential map is the {\it holomorphic exponential map} $\text{exph}_p$. We also show that $\text{exph}_p$ is the same as the inverse to the Bochner normal coordinate system, using the affine geodesic equations of the connection. \begin{thm} [Kapranov \cite{kapranov1999}] There exists a holomorphic affine connection $\nabla^{\mathbb{C}}$ on $T'(M\times\ov{M})$, defined over a neighborhood of $\iota(M)$, whose affine exponential map is $\text{\rm exp}_p^{\mathbb{C}}$. 
The restriction of $\nabla^{\mathbb{C}}$ to $T'_pM$ is also a holomorphic affine connection, defined over a neighborhood of $p$ in $M$. The affine exponential map of $\nabla^\mathbb{C}|_{T'_pM}$ is $\text{\rm exph}_p$. \end{thm} \begin{proof} Let $\nabla$ be the K\"{a}hler connection, defined by the Christoffel symbols $\Gamma^j_{kl}(z,\ov{z})=\frac{\partial g_{k\ov{m}}(z,\ov{z})}{\partial z_l}g^{\ov{m}j}(z,\ov{z})$. Denote by $\nabla^{\mathbb{C}}$ the analytic continuation (complexification) of $\nabla$. Then $\nabla^{\mathbb{C}}$ is an affine connection, defined by the coefficients of the connection 1-form: $$ \Gamma^j_{kl}(z,\ov{w})=\frac{\partial g_{k\ov{m}}(z,\ov{w})}{\partial z_l}g^{\ov{m}j}(z,\ov{w}), $$ where $(z,\ov{w})$ are holomorphic coordinates for $M\times\ov{M}$. Moreover, its affine exponential map is the same as $\text{exp}_p^{\mathbb{C}}$, since the complexification of $\text{exp}_p$ is unique. To prove the second statement, take the decomposition $$ T'_{(p,\ov{p})}(M\times\ov{M})=T'_pM\oplus T'_{\ov{p}}\ov{M}, $$ and restrict $\nabla^\mathbb{C}$ to $T'_pM$. Then this is a holomorphic affine connection on $T'_pM$, defined only in some neighborhood of $p$ in $M$. The affine exponential map of this connection is the holomorphic exponential map $\text{exph}_p$. \end{proof} From now on, we denote $\nabla^\mathbb{C}|_{T'_pM}$ by $\nabla^p$, and call it the {\it Bochner connection} at $p$. The following lemma shows the affine geodesic equations for the holomorphic exponential map $\text{exph}_p$. \begin{lem} The geodesics of the Bochner connection $\nabla^p$ emanating from $p$ in the initial direction $\zeta\in T'_pM$ satisfy the following system of second-order ODEs: \begin{equation}\label{eqn} \left\{ \begin{array}{l} \frac{d^2z_{j}(t)}{dt^2}+\Gamma^j_{kl}(z(t),\ov{p})\frac{dz_k(t)}{dt}\frac{dz_l(t)}{dt}=0,\medskip\medskip\\ \ \ \ \ \ z(0)=p,\ \ \ \frac{dz_{j}}{dt}(0)=\zeta_j. \end{array}\right. 
\end{equation} \end{lem} \begin{proof} The curve $\text{exp}_p^{\mathbb{C}}(\zeta t,\ov{\xi} t)$, constructed by the affine exponential map of $\nabla^\mathbb{C}$, satisfies \begin{equation}\label{eqn1} \left\{ \begin{array}{l} \frac{d^2z_j(t)}{dt^2}+\Gamma^j_{kl}(z(t),\ov{w}(t))\frac{dz_k(t)}{dt}\frac{dz_l(t)}{dt}=0,\medskip\medskip\\ \ \ \ \ \ z(0)=p,\ \ \ \frac{dz_{j}}{dt}(0)=\zeta_j, \end{array}\right. \end{equation} and \begin{equation}\label{eqn2} \left\{ \begin{array}{l} \frac{d^2\ov{w_j}(t)}{dt^2}+\Gamma^{\ov{j}}_{\ov{k}\ov{l}}(z(t),\ov{w}(t))\frac{d\ov{w_k}(t)}{dt}\frac{d\ov{w_l}(t)}{dt}=0,\medskip\medskip\\ \ \ \ \ \ \ov{w}(0)=\ov{p},\ \ \ \frac{d\ov{w_j}}{dt}(0)=\ov{\xi_j}, \end{array}\right. \end{equation} where $(\zeta,\ov{\xi})\in T'_pM\oplus T''_pM=\mathbb{C}T_pM$. It suffices to let $\xi\equiv0$, since the solution of (\ref{eqn2}) becomes the constant map $(w_1,\ldots,w_n)\equiv(p_1,\ldots,p_n)$. \end{proof} Using the above lemma, we prove the following proposition, which appeared first in \cite{kapranov1999}. \begin{prp}\label{geo} The inverse to the holomorphic exponential map at $p$ of the real analytic K\"{a}hler metric is the Bochner normal coordinate system at $p$, up to unitary linear transformations. \end{prp} \begin{proof} Let $\varphi$ be the inverse to the Bochner normal coordinate system and $\tilde{\gamma}(t)$ the curve in $M$ given by $\tilde{\gamma}(t)=\varphi(v t)$, where $v\in\mathbb{C}^n\cong T'_pM$. It is enough to show that $\tilde{\gamma}(t)=(z_1(t),\ldots,z_n(t))$ satisfies (\ref{eqn}). By the definition of the normal coordinates, we obtain $\tilde{\gamma}(0)=\varphi(0)=p$ and \begin{equation}\label{atz} \frac{\partial \zeta_k}{\partial z_l}=g^{\ov{j}k}(p)g_{l\ov{j}}(z,\ov{p}),\ \ \ \frac{\partial z_k}{\partial \zeta_r}=g_{r\ov{\lambda}}(p)g^{\ov{\lambda}k}(z,\ov{p}). 
\end{equation} Since $\frac{\partial \zeta_k}{\partial z_l}\big|_{z=p}=\delta_{lk}$, $\tilde{\gamma}'(0)=(\frac{dz_1}{dt}(0),\ldots,\frac{dz_n}{dt}(0))=(\frac{d\zeta_1}{dt}(0),\ldots,\frac{d\zeta_n}{dt}(0))=v$. Then the holomorphicity of the Bochner normal coordinates implies that \begin{eqnarray*} \frac{d^2z_{j}(t)}{dt^2} & + & \Gamma^j_{kl}(\tilde{\gamma}(t),\ov{p})\frac{dz_k(t)}{dt}\frac{dz_l(t)}{dt}\\ & = & \frac{\partial^2z_j}{\partial \zeta_r \partial \zeta_s}\frac{d\zeta_r}{dt}\frac{d\zeta_s}{dt}+\Gamma^j_{kl}(\tilde{\gamma}(t),\ov{p})\frac{\partial z_k}{\partial \zeta_r}\frac{d\zeta_r}{dt}\frac{\partial z_l}{\partial \zeta_s}\frac{d\zeta_s}{dt} \\ & = & \Big\{\frac{\partial^2z_j}{\partial \zeta_r \partial \zeta_s}+\Gamma^j_{kl}(\tilde{\gamma}(t),\ov{p})\frac{\partial z_k}{\partial \zeta_r}\frac{\partial z_l}{\partial \zeta_s}\Big\}v_rv_s . \end{eqnarray*} Thus it suffices to show that the following analytic differential equations hold: $$ \frac{\partial^2z_j}{\partial \zeta_r \partial \zeta_s}+\Gamma^j_{kl}(\tilde{\gamma}(t),\ov{p})\frac{\partial z_k}{\partial \zeta_r}\frac{\partial z_l}{\partial \zeta_s}=0. $$ The matrix equation $d(A\cdot A^{-1})=0$ and the equation (\ref{atz}) yield \begin{align*} \frac{\partial^2z_j}{\partial \zeta_r\partial \zeta_s}&=g_{r\ov{\lambda}}(p)\frac{\partial g^{\ov{\lambda}j}(z,\ov{p})}{\partial z_l}\frac{\partial z_l}{\partial \zeta_s}\\ &=-g_{r\ov{\lambda}}(p)g^{\ov{\lambda}k}(z,\ov{p})\frac{\partial g_{k\ov{m}}(z,\ov{p})}{\partial z_l}g^{\ov{m}j}(z,\ov{p})\frac{\partial z_l}{\partial \zeta_s}\\ &=-\frac{\partial z_k}{\partial \zeta_r}\frac{\partial g_{k\ov{m}}(z,\ov{p})}{\partial z_l}g^{\ov{m}j}(z,\ov{p})\frac{\partial z_l}{\partial \zeta_s}. \end{align*} Therefore, we arrive at $$ \frac{\partial^2z_j}{\partial \zeta_r \partial \zeta_s}+\Gamma^j_{kl}(\tilde{\gamma}(t),\ov{p})\frac{\partial z_k}{\partial \zeta_r}\frac{\partial z_l}{\partial \zeta_s}=0. 
$$ \end{proof} \begin{rmk} For a bounded domain with the Bergman metric, it is known that the above analytic equations hold for the Bergman representative map \cite{Lu1984}. This is, of course, strongly analogous to the analysis of the flow of vector fields in the context of Riemannian geometry. \end{rmk} \section{The Bochner connection on a manifold with the Bergman metric} \label{bb} Let $M$ be a complex manifold with the Bergman metric and $p$ a point in $M$. Since the Bergman metric is a real-analytic K\"{a}hler metric, the Bochner connection can be constructed in an open neighborhood of $p$ as in Section \ref{bc}. On the other hand, we show that the Bochner connection actually extends to the whole manifold except possibly for an analytic variety. \subsection{The extended Bochner connection} Suppose that $M$ is a complex manifold which possesses the Bergman metric. Recall that in a local coordinate system $(U\times \ov{V},(z_1,\ldots,z_n,\ov{w_1},\ldots,\ov{w_n}))$ for $M\times\ov{M}$, the Bergman kernel form is $$ K(z,\ov{w})=K^{\ast}_{U\times\ov{V}}(z,\ov{w})dz_1\wedge\cdots\wedge dz_n\wedge d\ov{w_1}\wedge\cdots\wedge d\ov{w_n}, $$ where $K^{\ast}_{U\times\ov{V}}(z,\ov{w})$ is a well-defined function on $U\times\ov{V}$. Define the tensor on $M\times\ov{M}$ by $$ G(z,\ov{w}):=\sum\limits_{j,k=1}^n g_{j\ov{k}}(z,\ov{w})dz_j\otimes d\ov{w_k}=\sum\limits_{j,k=1}^n \frac{\partial^2\log K^{\ast}_{U\times\ov{V}}(z,\ov{w})}{\partial z_j\partial\ov{w_k}}dz_j\otimes d\ov{w_k}. $$ Let $(\widetilde{U}\times \ov{\widetilde{V}},(\widetilde{z}_1,\ldots,\widetilde{z}_n,\ov{\widetilde{w}_1},\ldots,\ov{\widetilde{w}_n}))$ be another coordinate system. 
Then, in $(U\times\ov{V})\cap(\widetilde{U}\times\ov{\widetilde{V}})$, the following transformation formula \begin{equation}\label{trk} K^{\ast}_{U\times\ov{V}}(z,\ov{w})=K^{\ast}_{\widetilde{U}\times\ov{\widetilde{V}}}(\widetilde{z},\ov{\widetilde{w}}) \det{J_U^{\widetilde{U}}(z)}\ov{\det{J_V^{\widetilde{V}}(w)}} \end{equation} holds, where $J_U^{\widetilde{U}}(z)=\big(\frac{\partial \widetilde{z}_k}{\partial z_j}\big)_{n\times n}$ and $J_V^{\widetilde{V}}(w)=\big(\frac{\partial \widetilde{w}_k}{\partial w_j}\big)_{n\times n}$. In terms of matrices, \begin{equation}\label{trg} G_{U\times\ov{V}}(z,\ov{w})=J_U^{\widetilde{U}}(z)\cdot\widetilde{G}_{\widetilde{U}\times \ov{\widetilde{V}}}(\widetilde{z},\ov{\widetilde{w}})\cdot\ov{J_V^{\widetilde{V}}(w)}^t \end{equation} where $G_{U\times \ov{V}}(z,\ov{w})=\Big(\frac{\partial^2\log K^{\ast}_{U\times \ov{V}}(z,\ov{w})}{\partial z_j\partial\ov{w_k}}\Big)_{n\times n}$ and $\widetilde{G}_{\widetilde{U}\times \ov{\widetilde{V}}}(\widetilde{z},\ov{\widetilde{w}})=\Big(\frac{\partial^2\log K^{\ast}_{\widetilde{U}\times\ov{\widetilde{V}}}(\widetilde{z},\ov{\widetilde{w}})}{\partial \widetilde{z}_j\partial\ov{\widetilde{w}_k}}\Big)_{n\times n}$.\\ Given a point $\ov{p}\in\ov{M}$, define the analytic varieties $$ Z^p_0:=\{z\in M: K(z,\ov{p})=0\} \text{\ \ and\ \ } Z^p_1:=\{z\in M-Z^p_0: \det(G(z,\ov{p}))=0\}. $$ \begin{lem} If $f:M\rightarrow \widetilde{M}$ is a biholomorphism with $q=f(p)$, then it satisfies\smallskip \begin{enumerate} \item [(1)] $f(Z^p_0)=\widetilde{Z}^q_0$,\smallskip \item [(2)] $f(Z^p_1)=\widetilde{Z}^q_1$,\smallskip \item [(3)] $f(M^p)=\widetilde{M}^q$, \end{enumerate} where $M^p:=M-(Z^p_0\cup Z^p_1)$ and $\widetilde{M}^q:=\widetilde{M}-(\widetilde{Z}^q_0\cup\widetilde{Z}^q_1)$. \end{lem} \begin{proof} The transformation formulae (\ref{trk}) and (\ref{trg}) prove that these sets are well-defined and invariant under biholomorphisms. \end{proof} Let $T'M^p$ be the holomorphic tangent bundle over $M^p$. 
Then, \begin{thm}\label{connection} There exists a holomorphic affine connection $\nabla^p$ on $T'M^p$ satisfying: \begin{enumerate} \item [(1)] $\nabla^p$ is locally flat, i.e. the torsion and curvature of $\nabla^p$ are zero.\medskip \item [(2)] {\rm(\ref{eqn})} are the affine geodesic equations for $\nabla^p$.\medskip \item [(3)] $f_{\ast}(\nabla^p_XY)=\nabla^q_{\widetilde{X}}{\widetilde{Y}}$ for all $X,Y\in T'M^p$ where $\widetilde{X}=f_{\ast}(X), \widetilde{Y}=f_{\ast}(Y)$ and $f$ is the same as in the preceding lemma. \end{enumerate} \end{thm} \begin{proof} The proof is essentially the same as that of the case of bounded domains (cf. \cite{Webster1979}). Define the connection 1-forms as follows: Note that $G:=G_{U\times\ov{V}}(z,\ov{p})$ is an invertible holomorphic $(n\times n)$-matrix on $U\cap M^p$ so that $G^{-1}$ is well-defined on $U\cap M^p$. Define the $(n\times n)$-matrix $\omega$ of holomorphic 1-forms by $\omega:=\partial G\cdot G^{-1}$, locally defined on $U\cap M^p$. In other words, $$ \omega_i^j(z)=\Gamma_{ik}^j(z,\ov{p})dz_k=\frac{\partial g_{i\ov{m}}(z,\ov{p})}{\partial z_k}g^{\ov{m}j}(z,\ov{p})dz_k. $$ Since $\partial G=\omega\cdot G$, the transformation formula (\ref{trg}) yields the \textit{transformation rule for connection 1-forms}: \begin{equation}\label{trc} \omega\cdot J=\partial J+J\cdot\widetilde{\omega}. \end{equation} To show (1), observe that $\nabla^p$ is torsion-free, because $\frac{\partial}{\partial z_k}g_{i\ov{j}}=\frac{\partial}{\partial z_i}g_{k\ov{j}}$. Moreover, its curvature form $\Omega:=d\omega-\omega\wedge\omega$ is also zero, because $G:=(g_{j\ov{k}}(z,\ov{p}))$ is holomorphic. More precisely, \begin{align*} d(\partial G\cdot G^{-1})-(\partial G\cdot G^{-1})\wedge(\partial G\cdot G^{-1})&=\partial(\partial G\cdot G^{-1})+\partial G\wedge\partial G^{-1}\cdot G\cdot G^{-1}\\ &=-\partial G\wedge\partial G^{-1}+\partial G\wedge\partial G^{-1}=0. 
\end{align*} Now, (2) follows immediately from the construction, and (3) follows by (\ref{trc}). \end{proof} \begin{rmk}\label{remc} The last statement in Theorem \ref{connection} implies the $\mathbb{C}$-linearity of the Bergman representative coordinates as follows: since geodesics are straight lines in this coordinate system, $\hbox{\rm rep}_{f(p)}\circ f\circ \hbox{\rm rep}_p^{-1}$ maps straight lines to straight lines. Thus it is $\mathbb{R}$-linear. Since the representative map is holomorphic, this is $\mathbb{C}$-linear. This is the geometric proof of Theorem \ref{clin}, promised in Section \ref{br}. \end{rmk} It is possible to find the formula of the inverse to the affine exponential map of $\nabla^p$ not only at $p$ but also at an arbitrary point $q\in M^p$. The proof is the same as that of Proposition \ref{geo}. \begin{prp} Denote by $\text{\rm exph}_q$ the affine exponential map of $\nabla^p$ at $q$. Let $(\zeta_1,\ldots,\zeta_n)$ be the coordinate system for the holomorphic tangent space at $q$, $T'_qM^p$. Then, in the local coordinate neighborhood $(U,(z_1,\ldots,z_n))$ containing $q$, $$ \text{\rm exph}_q^{-1}(z)=(\zeta_1(z),\ldots,\zeta_n(z)), $$ where: $$ \zeta_j(z)=\sqrt{g}^{\ov{k}j}(q,\ov{p})\Big\{\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K^{\ast}_{U\times\ov{V}}(z,\ov{w})-\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K^{\ast}_{U\times\ov{V}}(q,\ov{w})\Big\}. $$ \end{prp} \begin{rmk} The above proposition implies that ${\hbox{\rm exph}_q}^{-1}$ is an affine transform of ${\hbox{\rm exph}_p}^{-1}$. Therefore $M^p$ can be covered by copies (by affine maps) of the representative coordinates. This can be explained by the concept of affine structures in \cite{matsushima1968}, which will be introduced in the following subsection. \end{rmk} \begin{rmk} For a real-analytic K\"{a}hler manifold, it is known that such an affine structure exists on a local neighborhood of a given point [24]. 
\end{rmk} \subsection{Affine structure of $M^p$} The proof of Proposition \ref{geo} also implies the following \begin{prp} Let $U$ be a local neighborhood of $p$ and $V$ a local neighborhood of $0$ such that ${\hbox{\rm exph}_p}^{-1}:U\rightarrow V$ is biholomorphic. Take any straight line $l$ in $V$ (not necessarily passing through $0$). Then ${\hbox{\rm exph}_p(l)}$ is a geodesic of $\nabla^p$. \end{prp} This proposition follows immediately once affine structures are understood, as follows: \begin{defn} Let $X$ be a complex manifold of dimension $n$ and $\mathcal{M}=\{U_i,\phi_i\}_{i\in I}$ the maximal atlas. A subset $\mathcal{A}=\{U_j,\phi_j\}_{j\in J},J\subset I,$ of $\mathcal{M}$ is called an {\it affine atlas} of $X$ if all transition maps are complex affine transformations of $\mathbb{C}^n$. We say that each maximal affine atlas defines a complex {\it affine structure} of $X$. \end{defn} \begin{thm} [Gunning \cite{Gunning1967}, Matsushima \cite{matsushima1968}] There is a one-to-one correspondence between the set of all complex affine structures on a complex manifold $X$ and the set of all locally flat holomorphic affine connections on $X$. \end{thm} \begin{rmk} For any $x,y\in M^p$, $\text{exph}_y\circ\text{exph}_x^{-1}$ is an affine transformation of $\mathbb{C}^n$. Thus $M^p$ has a complex affine structure and the Bochner connection $\nabla^p$ is the corresponding locally flat holomorphic affine connection. \end{rmk} \subsection{Geodesics of $\nabla^p$} The behavior of geodesics of $\nabla^p$ played an important role in the proof of the following theorem, which generalizes Fefferman's extension theorem. \begin{thm}[Webster \cite{Webster1979}] Let $f:\Omega\rightarrow\widetilde{\Omega}$ be a biholomorphism between bounded domains with smooth boundaries. Suppose that their Bergman kernels are smooth up to the boundaries. Then $f$ extends smoothly to a dense open subset of $\partial\Omega$. 
\end{thm} \begin{proof} [Sketch of the proof] Since the kernel is smooth up to the boundary, $\nabla^p$ extends to a dense open subset of the boundary in the sense that it is defined over the boundary except for limit points of the variety $Z^p_0\cup Z^p_1$. Thus the Bochner normal coordinate system, i.e. the Bergman representative coordinate system, extends over the boundary so that the corresponding geodesics extend through the boundary points. Then the result follows. \end{proof} Notice that the incompleteness of $\nabla^p$ has played an essential role in this proof. On the other hand, one might expect to find a suitable K\"{a}hler metric compatible with $\nabla^p$. But this is impossible because the exponential map of a K\"{a}hler metric is holomorphic if and only if the metric is flat (the Euclidean metric). However, using the image of geodesics under $\text{rep}_p=\text{exph}_p^{-1}$, it is possible to define a distance between two points in a connected manifold $M^p$. \begin{defn} [Intrinsic distance] Let $M$ be a connected manifold with the Bergman metric and $\mathcal{A}=\{U_i,\phi_i\}_{i\in I}$ the affine structure of $M^p$, given by the Bochner normal coordinate system. If $x,y\in U_i$ for some $U_i$, then define $\delta^p(x,y)$ to be the Euclidean norm of the vector $$ \Big(\ldots,\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K(x,\ov{w})-\frac{\partial}{\partial \ov{w_k}}\Big|_{w=p}\log K(y,\ov{w}),\ldots\Big). $$ In general, if $x,y$ are arbitrary points in $M^p$, then define the {\it intrinsic distance} by $$ \text{d}^p(x,y):=\inf\sum\limits_{j=1}^N\delta^p(p_{j-1},p_j), $$ where the infimum is taken over all possible partitions with $x=p_0,\ldots,p_N=y$. This is well-defined since there always exists a broken geodesic between two points in the connected affine manifold $(M^p,\nabla^p)$. \end{defn} \begin{rmk} For the symmetry $\text{d}^p(x,y)=\text{d}^p(y,x)$, we do not use the normalization factor of the Bochner normal coordinate system. 
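For instance, a direct computation with the classical kernel of the unit disk $\mathbb{D}$, $K(z,\ov{w})=\frac{1}{\pi(1-z\ov{w})^2}$, gives $\frac{\partial}{\partial \ov{w}}\big|_{w=p}\log K(x,\ov{w})=\frac{2x}{1-x\ov{p}}$, so that $$ \delta^p(x,y)=\Big|\frac{2x}{1-x\ov{p}}-\frac{2y}{1-y\ov{p}}\Big|, $$ which is manifestly symmetric in $x$ and $y$.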
Although $\text{d}^p$ is not a biholomorphic invariant, its finiteness between two points is a biholomorphic invariant. \end{rmk} The following theorem shows the relation between the intrinsic distance $\text{d}^p$ and the analytic variety $Z_0^p$. \begin{thm} If $q\in Z_0^p$, then no geodesic of finite intrinsic distance connects to $q$. \end{thm} \begin{proof} Suppose that there exists a geodesic connecting to $q$ with finite intrinsic distance. Then the Bochner normal coordinate system is well-defined at $q$. Note that $\frac{\partial}{\partial\ov{w_k}}K(q,\ov{w})/K(q,\ov{w})$ is an anti-holomorphic function in $w$ for each $k$. Fix an index $k$; then, in the $w_k$-section, this is a one-variable function. Since $K(q,\ov{p})=0$, it has a simple pole at $p$, so the value must diverge. This implies that $\delta^p$ diverges, which contradicts the finiteness of the intrinsic distance. \end{proof} \section{On the variety $Z^p_0\cup Z^p_1$ and the Skwarczynski annulus} \subsection{On the variety $Z^p_0$} Let $\Omega$ be a bounded domain in $\mathbb{C}^n$ and $K(z,\ov{w})$ the Bergman kernel of $\Omega$. Fix a point $p\in\Omega$. Then the Bochner connection $\nabla^p$ can be constructed over $\Omega^p=\Omega-(Z^p_0\cup Z^p_1)$. Whether $Z^p_0$ is empty is related to the well-known Lu Qi-Keng conjecture; a bounded domain $\Omega$ is called a {\it Lu Qi-Keng domain} if $Z^p_0$ is empty for each point $p\in\Omega$. \begin{rmk} It was anticipated in the 1960s that every bounded domain should be a Lu Qi-Keng domain; this was called the Lu Qi-Keng conjecture. However, many counterexamples have been discovered (\cite{Skwarczynski1969}, \cite{boas1986,boas1996}, and others), while, in contrast, many domains are Lu Qi-Keng. \end{rmk} {\bf Example} (Skwarczynski's annulus) Let $A:=\{z\in\mathbb{C}:0<r<|z|<1\}$. 
Then the Bergman kernel of $A$ is $$ K(z,\ov{w})=\frac{\wp(\log z\ov{w})+\frac{\eta_1}{\omega_1}}{\pi z\ov{w}}, $$ where $\wp$ is the Weierstrass elliptic function with half periods $\omega_1=\log(1/r)$, $\omega_2=\pi i$ and $\eta_1$ is the increment of the Weierstrass zeta function $\zeta$ with respect to $\omega_1$.\smallskip {\small \bf Zeros of $K(z,\ov{p})$:} Define $h(\lambda):=\wp(\log\lambda)+\frac{\eta_1}{\omega_1}$ on the set $\widetilde{A}:=\{\lambda\in\mathbb{C}:r^2<|\lambda|<1\}$. Then $K(z,\ov{w})=\frac{h(\lambda)}{\pi\lambda}$, where $\lambda=z\ov{w}$. In \cite{Skwarczynski1969}, Skwarczynski proved that \begin{itemize} \item $h(\lambda)$ is real, $\forall\lambda\in\mathbb{R}$.\smallskip \item For $r<e^{-2}$, there exists a point $\lambda\in\widetilde{A}$ such that $h(\lambda)=0$. \end{itemize} Later, the above result was improved as follows (cf. \cite{blocki2010}, Theorem 3.4): \begin{itemize} \item $h(-1)=h(-r^2)<0$ and $h(-r)>0$.\smallskip \item For $r<1$, there exist only two solutions $\lambda_1, \lambda_2$ of the equation $h(\lambda)=0$ in $\widetilde{A}=\{\lambda\in\mathbb{C}:r^2<|\lambda|<1\}$, where $\lambda_2\in(-1,-r)$ and $\lambda_1\in(-r,-r^2)$. \end{itemize} Fix a point $p\in A$. The symmetry of the annulus allows us to assume that $p\in(r,1)$ on the real line. Let $\lambda_1^p$ and $\lambda_2^p$ be the solutions of the equation $h(z\ov{p})=0$ satisfying $\lambda_1=\lambda_1^p\ov{p}$ and $\lambda_2=\lambda_2^p\ov{p}$. Since $\lambda_2^p\in(-1/p,-r/p)$ and $\lambda_1^p\in(-r/p,-r^2/p)$, the number of elements of the zero set $\{z\in A:K(z,\ov{p})=0\}$ depends on the location of $p$. For example, if $p$ is close enough to $1$ (or $r$), then there exists only one solution of the equation $K(z,\ov{p})=0$ in the annulus $A$, located in $(-1,-r)$ (or $(-r,-r^2)$). {\small \bf Geodesics of the Bochner connection $\nabla^p$:} Recall that the exponential map of the Bochner connection $\nabla^p$ is the inverse map of $\hbox{\rm rep}_p$. 
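For comparison, the same computation carried out for the unit disk $\mathbb{D}$, with the classical kernel $K(z,\ov{w})=\frac{1}{\pi(1-z\ov{w})^2}$, gives $$ \hbox{\rm rep}_p(z)=(1-|p|^2)\,\frac{z-p}{1-z\ov{p}}, $$ a constant multiple of the M\"{o}bius automorphism $z\mapsto\frac{z-p}{1-z\ov{p}}$ of $\mathbb{D}$; in this case $\hbox{\rm rep}_p$ is one-to-one on $\mathbb{D}$ and the geodesics of $\nabla^p$ are simply the $\hbox{\rm rep}_p$-preimages of straight lines. The annulus behaves quite differently.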
A simple computation shows that $$ \hbox{\rm rep}_p(z)=C_1\cdot\frac{\wp'(\log z\ov{p})}{\wp(\log z\ov{p})+\frac{\eta_1}{\omega_1}}+C_2, $$ where $C_1$ and $C_2$ are constants, and $\wp'$ is the first derivative of the Weierstrass elliptic function $\wp$. Then $\hbox{\rm rep}_p$ is also an elliptic function that shares the same periods as $\wp$ and possesses three simple poles. Since a non-constant elliptic function attains every value of $\mathbb{C}$, the pre-image of each straight line consists of three curves (counting multiplicity) in the annulus $\{z\in\mathbb{C}:r^2<|z\ov{p}|<1\}$. Note that the geodesics of $\nabla^p$ are the images of straight lines by the holomorphic exponential map $\hbox{\rm exph}_p=\hbox{\rm rep}_p^{-1}$. Therefore, it is enough to study the pre-images of straight lines by the elliptic function $f(\lambda):=\frac{\wp'(\lambda)}{\wp(\lambda)+c}$ with the lattice $\Lambda=\{\mathbb{Z}(2\omega_1)+\mathbb{Z}(2\omega_2)\}$. Since $\omega_1\in\mathbb{R}$ and $\omega_2\in i\mathbb{R}$, the functions $\wp$ and $\wp'$ have a rectangular lattice, so that $\wp(z)=\ov{\wp(\ov{z})}, \wp'(z)=\ov{\wp'(\ov{z})}$. This implies that for all $t\in\mathbb{R}$, \begin{itemize} \item $\wp(t2\omega_j)=\ov{\wp(t\ov{2\omega_j})}=\ov{\wp(t2\omega_j)}$ and $\wp(\omega_i+t2\omega_j)=\ov{\wp(\omega_i+t2\omega_j)}$,\smallskip \item $\wp'(t2\omega_j)=\pm\ov{\wp'(t2\omega_j)}$ and $\wp'(\omega_i+t2\omega_j)=\pm\ov{\wp'(\omega_i+t2\omega_j)}$. \end{itemize} Since $c=\frac{\eta_1}{\omega_1}\in\mathbb{R}$, we see that \begin{itemize} \item $f(\mathbb{R})\subset\mathbb{R}$ and $f(i\mathbb{R})\subset i\mathbb{R}$.\smallskip \item $f(\mathbb{R}+\omega_2)\subset\mathbb{R}$ and $f(\omega_1+i\mathbb{R})\subset i\mathbb{R}$. 
\end{itemize} Therefore, $f^{-1}(\mathbb{C}-(\mathbb{R}\cup i\mathbb{R}))$ consists of 4 open sub-rectangles which are divided by $\omega_1$ and $\omega_2$ in the fundamental region $\{z\in\mathbb{C}:0\leq Re(z)\leq 2\omega_1,0\leq Im(z)\leq Im(2\omega_2)\}$. This provides approximate but useful information on the location of geodesics. \subsection{The relation between $Z^p_0$ and $Z^p_1$} In light of the Cheng-Yau conjecture \cite{yau1982} (solutions are announced in \cite{fw1997} and \cite{huangxiao2016}), we present the following \begin{prp} Let $\Omega$ be a strictly pseudoconvex domain with smooth boundary. If the Bergman metric of $\Omega$ is K\"{a}hler-Einstein, then $Z^p_0=Z^p_1$ for every $p\in\Omega$. \end{prp} \begin{proof} It turns out that $\frac{K(z,\ov{z})}{\det[g_{j\ov{k}}]}$ is a positive constant if the Bergman metric is K\"{a}hler-Einstein (cf. \cite{fw1997}, Proposition 1.1). Therefore, the polarization gives the result. \end{proof} On the other hand, note that $$ \hbox{\rm det}[G(z,\ov{p})]=\frac{\hbox{\rm det}[K(z,\ov{p})\frac{\partial^2}{\partial z_i \partial \ov{w_j}}\big|_{w=p} K(z,\ov{w})-\frac{\partial}{\partial z_i}K(z,\ov{p})\frac{\partial}{\partial\ov{w_j}}\big|_{w=p}K(z,\ov{w})]}{K(z,\ov{p})^{2n}}. $$ Set $$ \widehat{Z}^p_1:=\Big\{z\in\Omega:\hbox{\rm det}\Big[K(z,\ov{p})\frac{\partial^2}{\partial z_i \partial \ov{w_j}}\Big|_{w=p} K(z,\ov{w})-\frac{\partial}{\partial z_i}K(z,\ov{p})\frac{\partial}{\partial\ov{w_j}}\Big|_{w=p}K(z,\ov{w})\Big]=0\Big\}. $$ Then we present: \begin{rmk} If $n>1$, then $Z^p_0\subset \widehat{Z}^p_1$, and the statement is false if $n=1$ (cf. \cite{dinew2011}, Lemma 2.1). In particular, ${Z^p_0} \nsubseteq \widehat{Z}^p_1$ for the annulus in the complex plane. Therefore, it is natural to ask whether a domain of higher dimension satisfying $Z^p_0\subsetneq \widehat{Z}^p_1$ exists. 
Such a domain actually exists; when $r$ is small enough, the product domain $A_r\times D$ has a point $p$ such that $Z^p_0\subsetneq \widehat{Z}^p_1$, where $A_r:=\{z\in\mathbb{C}:0<r<|z|<1\}$ is the annulus and $D$ is the unit disk in $\mathbb{C}$. The proof is as follows: Let $K((z_1,z_2),(\ov{w_1},\ov{w_2}))=K_{A_r}(z_1,\ov{w_1})K_D(z_2,\ov{w_2})$ be the Bergman kernel of $A_r\times D$. Set $F_{\Omega}(z,\ov{w}):=\hbox{\rm det}[K_{\Omega}(z,\ov{w})\frac{\partial^2}{\partial z_i \partial \ov{w_j}} K_{\Omega}(z,\ov{w})-\frac{\partial}{\partial z_i}K_{\Omega}(z,\ov{w})\frac{\partial}{\partial\ov{w_j}}K_{\Omega}(z,\ov{w})]$. Then $$ F_{A_r\times D}((z_1,z_2),(\ov{w_1},\ov{w_2}))=K_{A_r}(z_1,\ov{w_1})^2K_D(z_2,\ov{w_2})^2F_{A_r}(z_1,\ov{w_1})F_D(z_2,\ov{w_2}). $$ Since $K_D(z_2,\ov{w_2})^2F_D(z_2,\ov{w_2})\neq0$ for all $(z_2,\ov{w_2})\in D\times\ov{D}$, it suffices to show that there exists a point $(z,\ov{p})\in A_r\times\ov{A_r}$ satisfying $K_{A_r}(z,\ov{p})\neq 0$ and $$ K_{A_r}(z,\ov{p})^2F_{A_r}(z,\ov{p})=K_{A_r}(z,\ov{p})^4\partial_z\ov{\partial_w}|_{w=p}\log K_{A_r}(z,\ov{w})=0. $$ It is known that if $r$ is sufficiently close to $0$ and $p$ is on the real axis, such a point $(z,\ov{p})$ exists, where $z$ is near the imaginary axis (see \cite{dinew2011}, the proof of Theorem 1.5). Moreover, this example can be modified to the case of irreducible strictly pseudoconvex domains as follows: Consider a strictly pseudoconvex exhaustion $\Omega_j$ for $A_r\times D$. Note that they are irreducible domains (cf. \cite{huckleberry1977}). On the other hand, the Bergman kernel of $\Omega_j$ and its derivatives uniformly converge on compacta to those of $A_r\times D$ (cf. \cite{ramadanov1967}). By Hurwitz's theorem, $\Omega_j$ satisfies ${Z^p_0} \subsetneq \widehat{Z}^p_1$ when $j$ is large enough. \end{rmk} \section{A generalization of the Lu theorem} We present an application of the Bochner connection. 
Let $\Omega$ be a bounded domain in $\mathbb{C}^n$ and $M$ a complex manifold with the positive-definite Bergman metric. Denote their Bergman metrics by $\beta_{\Omega}$ and $\beta_M$, respectively. Call a point $p\in\Omega$ a {\it pole of the Bochner connection} $\nabla^p$ whenever $\text{rep}_p:\Omega^p\rightarrow\mathbb{C}^n$ is one-to-one. \begin{thm} \label{main} Suppose that $\Omega$ has a pole $p$ of $\nabla^p$. If there is a surjective holomorphic map $f:\Omega\rightarrow M$ satisfying $f^{\ast}\beta_{M}=\beta_{\Omega}$, then $f$ is a biholomorphism. \end{thm} This theorem is a generalization of the following well-known result. \begin{thm}[Lu Qi-Keng \cite{Lu1966}] \label{lu} If $\Omega$ is a bounded domain in $\mathbb{C}^n$ whose Bergman metric is complete and has constant holomorphic sectional curvature, then $\Omega$ is biholomorphic to the unit ball. \end{thm} \begin{proof} [Proof of $\ulcorner Theorem\ \ref{main} \Rightarrow Theorem\ \ref{lu}\lrcorner$] Although the proof below is included in the proof of Theorem 4.2.2 in \cite{GKK2011}, we recall it for convenience. Let $c$ be the constant holomorphic sectional curvature of $\Omega$. If $c>0$, then $\Omega$ would be a complete Riemannian manifold with all sectional curvatures $\geq c/4>0$. Thus Myers' theorem in Riemannian geometry implies that $\Omega$ is compact, a contradiction. If $c=0$, then the covering space is $\mathbb{C}^n$. Therefore the covering map is constant by Liouville's theorem, which is impossible. Consequently, $c<0$. In that case, it is known that the universal covering space of $\Omega$ is biholomorphic to the unit ball $\mathbb{B}^n$ and the covering map $f:\mathbb{B}^n\rightarrow\Omega$ is a Bergman isometry. Therefore $f$ has to be one-to-one by Theorem \ref{main}, and hence the conclusion of Theorem \ref{lu} follows. \end{proof} \begin{rmk} Notice that Theorem \ref{main} does not assume the completeness of the Bergman metric $\beta_{\Omega}$. 
Moreover, the bounded domain $\Omega$ need not possess constant holomorphic sectional curvature. Besides the unit ball, the following domains satisfy the hypothesis of Theorem \ref{main}. \begin{itemize} \item Every complete circular domain; the center is a pole. \item Every bounded homogeneous domain; every point is a pole (cf. \cite{Xu1983}). \end{itemize} More generally, every bounded domain which possesses a point $p$ such that the matrix $G(z,\ov{p})$ is independent of $z$ satisfies the hypothesis (the point $p$ is a pole). Such a domain is called a {\it representative domain} according to \cite{Lu1984}. \end{rmk} With a slight modification of the proof of Theorem 4.2.2 in \cite{GKK2011}, we present \begin{proof} [Proof of Theorem \ref{main}] We only need to show that $f$ is one-to-one. Since $f^{\ast}\beta_{M}=\beta_{\Omega}$ implies that $df$ is non-singular, $f$ is locally invertible. Let $V$ be a neighborhood of $p$ and $U$ a neighborhood of $q:=f(p)$ such that $f|_{V}:V\rightarrow U$ is a biholomorphism. Denote by $g_0$ the inverse of $f|_V$. On the other hand, $\nabla_{\Omega}=f^{\ast}\nabla_{M}$ since $f^{\ast}\beta_{M}=\beta_{\Omega}$, where $\nabla$ denotes the Bergman metric connection. The uniqueness of the polarization and the holomorphicity of $f$ yield that $\nabla_{\Omega}^{p}=f^{\ast}\nabla_{M}^{q}$. This means that $f$ maps geodesics of $\nabla^p$ to geodesics of $\nabla^q$, and one sees that $\text{rep}_q \circ f|_V \circ \text{rep}_p^{-1}$ is $\mathbb{C}$-linear as in Remark \ref{remc}. Thus $g_0=f|_V^{-1}=\text{rep}_p^{-1}\circ A\circ\text{rep}_{q}$, where $A:=\text{rep}_p\circ f|_V^{-1}\circ\text{rep}_q^{-1}$ is an invertible $\mathbb{C}$-linear map. Note that $\text{rep}_q\circ f=A^{-1}\circ\text{rep}_p$ on $\Omega-(Z^p\cup f^{-1}(Z^q))$, where $Z^p:=Z^p_0\cup Z^p_1$ and $Z^q:=Z^q_0\cup Z^q_1$. Then the restriction map $\text{rep}_p^{-1}|_{A\circ\text{rep}_q(M-(f(Z^p)\cup Z^q))}$ is a well-defined holomorphic map.
Since the linear map $A$ is everywhere defined and $\text{rep}_q$ extends to a holomorphic mapping of $M-Z^q$, so does $g_0$. Denote by $g$ the extension of $g_0$. Let $X:=f^{-1}(Z^q)$. Then $g\circ f:\Omega-X\rightarrow\mathbb{C}^n$ is holomorphic and $g\circ f(z)=z$ for every $z\in\Omega-X$. Now, for every $\zeta\in M-Z^q$, choose $x\in\Omega$ such that $f(x)=\zeta$. Since $g(\zeta)=g(f(x))=x$, we have $g(M-Z^q)\subset\Omega$. Note that $g$ is a bounded holomorphic map on the connected manifold $M-Z^q$. By the Riemann extension theorem, $g$ extends to a holomorphic mapping of $M$ into $\mathbb{C}^n$. This shows that $g$ is a left inverse of $f$, and hence $f$ is one-to-one. \end{proof} \textit{Acknowledgements}: The author would like to express his deep gratitude to Professor Kang-Tae Kim for valuable guidance and encouragement, and to Professor J.P. Demailly for pointing out the relevance of \cite{Demailly1982,Demailly1994}. This work is part of the author's Ph.D. dissertation at Pohang University of Science and Technology. \bibliographystyle{amsplain}
\section{Introduction} The idea of doing jet tomography in ultrarelativistic heavy-ion (A-A) collisions, i.e. to utilize hard processes taking place alongside the creation of a soft bulk medium to probe both the geometry and the degrees of freedom of the medium, was proposed many years ago \cite{radiative1, radiative2, radiative3, radiative4, radiative5,radiative6}. At the Brookhaven Relativistic Heavy Ion Collider (RHIC), the observables considered to probe this physics were initially the nuclear suppression factor of single inclusive hadrons $R_{AA}$ \cite{PHENIX_first_RAA,PHENIX_RAA,PHENIX_RAA_phi} and the suppression factor $I_{AA}$ of hard back-to-back dihadron correlations \cite{STAR_dihadron_first,STAR_dihadron,STAR_dihadron_DzT}. Recent high statistics runs at RHIC as well as the significantly larger kinematic reach of heavy-ion experiments at the CERN Large Hadron Collider (LHC) have led to a large variety of new high $P_T$ observables, in particular observables involving jet reconstruction using several different jet definitions, among them dijet imbalance measurements \cite{ATLAS,CMS}, jet-hadron (jet-h) correlations \cite{STAR-jet-h}, h-jet correlations \cite{h-jet}, jet fragmentation functions \cite{jet-FF} and jet shapes \cite{jet-shape}, as well as observables utilizing rare electroweak triggers such as $\gamma$-h correlations \cite{gamma-h-STAR,gamma-h-PHENIX} or $\gamma$-jet correlations \cite{CMS_gamma_jet}. To add to the complexity, jet definitions vary from calorimetric jets in which no unfolding of background fluctuations is done, as used e.g. in \cite{ATLAS}, to combined track/tower jets with $P_T$ cuts imposed on constituents and a hard tower trigger condition imposed, as used e.g. in \cite{STAR-jet-h}. In this situation, it is fairly difficult to assemble a picture of what information the various observables actually carry, to what degree they are mutually consistent and what features of models they constrain.
The aim of this paper is to improve on this situation by providing a clear conceptual framework in which similarities and differences between the various observables become transparent. The key observation is that the vast majority of observables (with the exception of nuclear modification factors) are measurements of a conditional probability given a trigger condition. The fundamental reason for this is that both hard and electroweak processes are rare, i.e. if there were no selection of the subclass of events containing hard processes, the background of soft bulk medium physics would dilute all signatures of hard probes to the point where they would no longer be observable. However, conditional probabilities are well known to be frequently non-intuitive, and the natural starting point for analyzing them is Bayes' formula, which will be utilized in the following. \section{Observables and conditional probabilities} \subsection{General considerations} In perturbative Quantum Chromodynamics (pQCD), the rate of hard scattering processes can be computed with reasonable accuracy once the momentum transfer in the scattering process exceeds a few GeV. The uncertainty principle allows one to estimate the timescale for the hard reaction as $\tau \sim E/Q^2$ where $E$ is the energy scale of the final state partons and $Q \sim O(E)$ the virtuality scale. Inserting typical numbers, one finds that hard processes occur before a soft medium can be formed, which is the reason that the pQCD computation of hard processes can safely be assumed to factorize from any medium physics. This property makes high $P_T$ observables a meaningful tomographic probe. The highly virtual back-to-back partons subsequently undergo a final state shower evolution in which the virtuality scale decreases from its initial high value to a non-perturbative scale via branching into additional partons.
This process in vacuum is well described by MC formulations such as the PYSHOW algorithm of PYTHIA \cite{PYSHOW}. Once at the non-perturbative scale, the parton shower hadronizes and becomes a collimated spray of hadrons. Jet clustering algorithms such as anti-$k_T$ or SIS-cone, as provided e.g. by the FastJet package \cite{FastJet}, aim to 'undo' the QCD shower evolution and turn the spray of hadrons again into a 'jet', i.e. an object which is a reasonable proxy for the original parton, largely free of the complications of shower evolution and hadronization and sensitive to hard physics only. Measurements of hard probes in the context of heavy ion collisions aim at answering the question of how the medium modifies this evolution, i.e. in what way the properties of the shower are different if it evolves inside a medium. If a jet contains $n$ hadrons, since the position space information cannot be resolved, the complete theoretically measurable information about the jet is contained in the momentum space density $\rho_n(P_1, P_2,\dots P_n)$ and in the knowledge of hadron identities. However, currently the focus is on measurements of the single particle distribution $\rho_1(P_1) = \int dP_2\dots dP_n \rho_n(P_1,P_2,\dots P_n)$, usually represented as parallel and perpendicular momentum spectra of particles with respect to the jet axis. In the future, measurements may also include intra-jet correlations. These would be given e.g. by the two particle correlation $C_2(P_1, P_2)$ and the three particle correlation $C_3(P_1,P_2,P_3)$, or expressed in terms of subjet fractions.
This information may be represented in different forms; for instance the integrated jet shape \begin{equation} \Psi_{int}(r,R) = \frac{\sum_i E_i \theta(r-R_i)}{\sum_i E_i \theta(R-R_i)} \end{equation} (the integrated flux of energy as a function of the angle $r$ with the jet axis of a jet of radius $R$, normalized to the total jet energy) is computable from the angular distribution of hadrons at given energy $dN/d\phi dE$ (as for instance obtained from a correlation measurement) as \begin{equation} \Psi_{int}(r,R) = \frac{\int_0^r d\phi dE\, E \frac{dN}{d\phi dE}}{\int_0^R d\phi dE\, E \frac{dN}{d\phi dE}}. \end{equation} It is thus theoretically sufficient to measure one representation of the single particle distribution; different representations contain redundant information. However, in practice a jet shape is always conditional on having found a jet in an event, whereas the angular distribution of hadrons obtained from a triggered correlation measurement is conditional on a different trigger condition, and hence the two representations will in practice not contain precisely the same information. Moreover, no real measurement can resolve the true particle composition of every jet. If we use the notation that $P(A|B)$ stands for the probability of event $A$ occurring given another event $B$, the computation of the probability of observing shower properties $S$ (for instance the probability of measuring a shower hadron between 2 and 3 GeV) given a set of trigger conditions $T$ (for example given that a jet is clustered with an energy between 100 and 150 GeV) is written as $P(S|T,M)$, where $M$ stands for the particular model in which the calculation is carried out.
Bayes' formula then allows one to compute this as \begin{equation} \label{E-1} P(S|T,M) = \frac{P(T|S,M) P(S|M)}{P(T|M)} \end{equation} In words, the probability for observing shower properties $S$ given a trigger $T$ is the product of the probability to fulfill the trigger condition in a shower with property $S$ times the probability to generate a shower $S$, divided by the probability to generate a trigger \emph{independently} of whether property $S$ is realized or not. Since a rate is obtained by multiplying a probability with a repetition frequency, the whole language trivially generalizes to event rates or particle spectra. What is measured is usually the left hand side of the equation, sometimes also the denominator of the right hand side (which corresponds to the rate at which the trigger condition is fulfilled). Eq.~(\ref{E-1}) then states that in a large class of measurements, the medium modification as computable in a model, $P(S|M)$, is not observed directly, but rather is distorted through a \emph{bias factor} $\frac{P(T|S,M)}{P(T|M)}$ which is characterized by the trigger condition $T$. This bias can vary a lot; for instance the requirement to find a 100 GeV calorimetric jet leads to a very different bias than the requirement to find a 20 GeV charged hadron. However, as these examples indicate, the formalism applies as well to jet finding followed by an analysis of the fragmentation pattern of the clustered jet \cite{jet-FF} (in which case the jet finding constitutes the trigger condition and the observable is the momentum spectrum of the shower parallel to the jet axis) as to $I_{AA}$ in triggered h-h correlations (in which the requirement to find a hard hadron constitutes the trigger condition and the ratio of parallel momentum spectra of correlated hadrons in medium over vacuum is the observable). This suggests a clear strategy to make the information content of measurements apparent and comparable: Measure the observable (e.g.
the single particle distribution of jet constituents) in the same representation in all measurements and view the different trigger conditions as a variation of the bias factor. Tomographic information is then contained in the way the observable responds to a change of the bias factor. \subsection{Theoretical formulation of in-medium showers} As discussed in detail in \cite{Constraining}, modelling the medium modification of a shower involves a procedure to compute the medium-modified fragmentation function (MMFF). The MMFF can be written in the rather general form $D_{i \rightarrow h} (z, E, Q_0^2 | T_1(\zeta), T_2(\zeta), \dots T_n(\zeta))$, where it describes the distribution of hadrons $h$ given a parton $i$ with initial energy $E$ and initial virtuality $Q_0^2$, where the hadron energy is $E_h = z E$ and the parton has traversed a medium along the path $\zeta$, with $T_i(\zeta)$ the medium transport coefficients relevant for the process. Since the MMFF should approach the usual vacuum fragmentation function when the transport coefficients vanish, the properties of a vacuum shower are largely determined by just three parameters --- the shower-initiating parton type $i$, its initial energy $E$ and virtuality $Q_0$. In contrast, the determination of medium modifications in principle requires $n$ different functions $T_i(\zeta)$. However, it turns out that in practice three are most relevant: $\hat{q}$ (the medium-induced perpendicular momentum squared per unit pathlength, effectively corresponding to a medium-induced virtuality), $\hat{e}$ (the mean momentum transfer parallel to the parton direction into the medium per unit pathlength, effectively corresponding to parton energy loss) and $\hat{e}_2$ (the variance of the energy loss) \cite{AbhijitReview}.
Moreover, in many models it turns out that the full functional dependence of the transport coefficients is not needed; rather, the line integral along the parton path $\zeta(\tau)$, $M_1 = \int d\zeta T_i(\zeta)$, and the line integral along the path weighted by the pathlength $\zeta$, i.e. $M_2 = \int d\zeta\, \zeta T_i(\zeta)$, are to good accuracy sufficient \cite{ASWScaling,YaJEM2}. This implies that the medium modification of a shower can be characterized reasonably well by the set $M_1(\hat{q}), M_2(\hat{q}), M_1(\hat{e}), M_2(\hat{e}), M_1(\hat{e}_2), M_2(\hat{e}_2)$, which now contains all tomographic information. Thus, ideally one would like to compare $D_{i \rightarrow h} (z, E, Q_0^2 | M_1(\hat{q}), \dots)$ with a measurement to deduce the tomographic information on the properties of the medium. Photon-triggered correlations come closest in practice to this ideal as they can provide stringent constraints on $E$, but they leave $Q_0^2$ and the location of the initial vertex, and hence the set $M_i$, unconstrained. A number of models for the computation of the MMFF have been proposed. Historically, the computation has often been based on the leading parton energy loss approximation, in which the virtuality evolution of the shower is not treated explicitly and the focus is only on induced radiation from the leading parton \cite{QuenchingWeights,AMY-1,AMY-2,radiative5,WHDG, radiative6}. Since this approximation is not well suited for the interpretation in terms of conditional probabilities, we will not consider it here. Alternatively, Monte-Carlo (MC) codes for in-medium shower evolution \cite{YaJEM2,YaJEM1,JEWEL,Q-PYTHIA,MARTINI}, parton cascades \cite{VNI} as well as analytical approaches \cite{HT-DGLAP,Vitev} exist.
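As a rough numerical illustration of how the moments $M_1$ and $M_2$ compress the path information, the following sketch evaluates both line integrals for a hypothetical Bjorken-like profile $\hat{q}(\zeta)\propto 1/\zeta$; the profile shape, the formation time and the normalization are illustrative assumptions, not YaJEM parameters:

```python
import numpy as np

# Hypothetical transport-coefficient profile along the parton path zeta (fm):
# Bjorken-like 1/zeta falloff after a formation time tau0, zero before.
def qhat(zeta, tau0=0.6, qhat0=5.0):
    return np.where(zeta > tau0, qhat0 * tau0 / zeta, 0.0)

def path_moments(T, L, n=200_000):
    """M1 = int_0^L dz T(z) and M2 = int_0^L dz z T(z), midpoint rule."""
    dz = L / n
    z = (np.arange(n) + 0.5) * dz
    w = T(z)
    return w.sum() * dz, (z * w).sum() * dz

m1, m2 = path_moments(qhat, L=5.0)  # assumed 5 fm in-medium pathlength
print(m1, m2)  # analytically 3 ln(5/0.6) ~ 6.36 GeV^2 and 3 (5 - 0.6) = 13.2 GeV^2 fm
```

Two profiles with the same $(M_1, M_2)$ would, to the accuracy of the scaling law, produce the same medium modification, which is exactly why the moments rather than the full $T_i(\zeta)$ carry the tomographic information.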
\subsection{Initial state and final state biases} In order to directly test a model of jet quenching, it would be desirable if an observable could be constructed in such a way that the vacuum shower model parameters $(i,E,Q_0)$ or the medium parameter moments take fixed values. In this case, the theoretical model would only ever need to consider events which fulfill the trigger condition by construction, rendering the bias factor $\frac{P(T|S,M)}{P(T|M)}$ identically unity, which simplifies the computation tremendously. This is the reason why schematic investigations and toy models follow this strategy. In other words, if one could prepare a situation in which a quark with specified energy propagates through a given length of medium with given density, jet tomography through comparison of experiment and theory would be easy to do. Unfortunately, experimental measurements hardly ever condition on initial state properties of the shower, in which case the bias factor is different from unity and a model to compute the MMFF is insufficient to compare with data. Instead, experimental trigger conditions usually key on some property of the observed final state after shower evolution and hadronization. Consider the term $P(T|M)$, which can be written as \begin{equation} \label{E-2} P(T|M) = \sum_{S'} P(T|S',M) P(S'|M) \end{equation} using the fact that probabilities normalize to unity. Eq.~(\ref{E-2}) states that in order to compute the rate at which the trigger condition is fulfilled, we need to compute not only the shower $S$ exhibiting the particular property we are interested in but in fact all possible shower configurations and medium modifications $S'$ which are allowed by the physics of the collision, and do an appropriate sum over them. It is this need to compute all possible initial configurations and check them for the trigger condition in the final state which makes a proper computation vastly more complicated than a toy model estimate.
In practical terms, this means that in order to compute observables which can be compared with experiment, an in-medium shower model needs to be embedded into a framework simulating the hard process and the evolution of the surrounding medium (for a detailed discussion see \cite{Constraining}). The final state trigger condition then maps (in a model- and embedding-dependent way) into distributions in the space of initial shower parameters. \subsection{Monte-Carlo treatment of jet quenching} Let us for illustration consider a MC description of jet quenching. Biases are taken into account by generating events across the full available space of initial parameters with the correct weight assigned to the individual contributions, then searching which of these events fulfill the trigger condition in the final state and analyzing only this subset of events to obtain the observable. Pictorially this is shown in Fig.~\ref{F-MC}. Ultimately interesting for the observable are only the two shaded regions, i.e. the class of events which fulfills the trigger T and the class of events which fulfills T and shows property S. \begin{figure}[htb] \epsfig{file=mc.eps, width=8cm} \caption{\label{F-MC}A schematic illustration of initial parameter space sampling for a conditional probability observable.} \end{figure} However, the computational problem is that the full range of events generated by sampling all the available initial parameter space is usually so huge that a naive application of the above strategy is bound to be so slow that it is useless in practice. The resulting challenge is to introduce an intermediate layer, i.e. to understand the bias structure in such a way that only initial parameter ranges are sampled which have a reasonable chance to lead to a trigger in the final state.
Pictorially, this corresponds to drawing the dashed line as closely as possible to the intersecting circles without actually cutting parameter space out of a circle (which would introduce an unphysical sampling bias). In this way, computations become feasible. This illustrates that good knowledge of the mapping of final state conditions to initial state parameters in terms of biases is not only conceptually important, but also has consequences of immediate practical value. \section{Types of biases} Following the discussion in \cite{Dihadron2}, we can classify the various biases induced by a trigger condition on the final state of a hard event as follows: First, there are biases on the structure of the hard pQCD event itself which act even in vacuum. These have to do with the relation between hadronic (or jet) and parton kinematics dependent on parton type. Once a medium is present, the correlation of the strength of the medium modification with the density of the medium and the time spent in the medium leads to additional biases on the reaction geometry. Since all these biases act on the hard event itself rather than on the final state shower, they affect both trigger side and away side simultaneously. This can be contrasted with shower biases, which affect the structure of the shower evolution itself and do not bias the kinematics or position of the hard event, and are thus relevant only for the trigger side. In this section, we review qualitatively the effects of the most relevant biases, which we study later with case studies in a full modelling framework. In order to illustrate the isolated effects of the various biases, the examples shown outside the full case studies are theoretical situations in which the initial state of the shower is given, whereas the later experimentally relevant case studies show results given an observed final state. \subsection{Biases in vacuum showers} Neither a hadron nor a jet typically contains all the initial parton energy $E$.
In the case of a hadron, this is because of the production of subleading hadrons in the shower as well as of hadron species which are not registered by the detector. In the case of a jet, the reason is typically the production of hadrons at large angles with the jet axis, which corresponds to energy flow outside the jet radius $R$; for charged jets, for instance, neutral hadron production in the shower also constitutes an energy component not part of the jet. For both jet and hadron, the relation of observed energy to parton energy can be written in the form $E_{obs} = z_{had/jet} E$. Typically, the chief difference between jet and hadron observation is that a jet tends to recover a higher fraction of the parton energy than a single hard hadron, i.e. $\langle z_{jet} \rangle > \langle z_{had} \rangle$, where the average is done over many showers with a fixed parton energy $E$. This is illustrated in Fig.~\ref{F-Pz}, where $P(z)$, the probability to observe the fraction $z$ of the original energy of a 20 GeV quark in the final state, is shown for three different objects: 1) the leading hadron if it is $\pi^+, \pi^-, \pi^0, K^+, K^-, p$ or $\overline{p}$, 2) a STAR jet definition \cite{STAR-jet-h} where all particles which are $\pi^+, \pi^-, \pi^0, K^+, K^-, p$, $\overline{p}$ or $\gamma$ and have $P_T > 2$ GeV are clustered using the anti-$k_T$ algorithm with a radius of $R=0.4$, and 3) an ideal jet definition where all particles, regardless of PID or $P_T$, are clustered with anti-$k_T$ using $R=0.4$.
\begin{figure}[htb] \epsfig{file=P_z_f.eps, width=8cm} \caption{\label{F-Pz} (Color online) The probability density $P(z)$ to observe a trigger object with fraction $z = E_{obs}/E$ given an initial parton energy $E$ and an observed trigger energy $E_{obs}$ for various possible trigger objects, shown for the example of a fragmenting 20 GeV quark.} \end{figure} It is evident that the leading hadron in this kinematical regime typically carries only about 15\% of the original parton energy, whereas at the other end of the spectrum clustering into an ideal jet typically recovers 95\% of the energy. Jet definitions matching realistic experimental conditions fall between the two cases. A {\itshape kinematic bias} then arises because in an experimental context $P(z)$ is typically not probed for fixed parton energy, but rather folded with the steeply falling primary parton production spectrum, which can be computed in pQCD and typically falls approximately like a power $1/p_T^n$ with $n=7$--$8$ at RHIC kinematics and $n=4$--$5$ at the LHC. A trigger energy requirement then demands a fixed $E_{obs} = zE$ where both $z$ and $E$ are allowed to vary event by event. For the ideal jet described above, where $P(z) \approx \delta(z-1)$, the bias is negligible and $E_{obs}$ approximately corresponds to the parton energy. For a hadron trigger however, both $E$ and $z$ prefer to be individually small, yet their product is forced to a certain value. As a result, $E_{obs}$ maps to a characteristic range in $E$ which depends on $n$ and the details of $P(z)$, i.e. the distribution of parton energies contributing to a trigger is no longer the primary pQCD spectrum but becomes biased. In \cite{TriggerBias} this is referred to as ``trigger bias'', however in the following we will use this term in a more general sense, referring to any bias introduced by a trigger condition in either vacuum or medium.
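The kinematic bias can be made quantitative with a toy MC along these lines; the spectrum index, the trigger window and the two beta-distributed $P(z)$ shapes below are illustrative assumptions standing in for a hadron-like and a near-ideal-jet-like trigger object, not fits to the actual $P(z)$ curves of Fig.~\ref{F-Pz}:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_spec, E_min = 3_000_000, 7.0, 10.0

# Primary parton spectrum dN/dE ~ E^(-n_spec) above E_min (toy RHIC-like values),
# sampled by inverting the cumulative distribution.
E = E_min * rng.random(N) ** (-1.0 / (n_spec - 1.0))

# Toy P(z) shapes: hadron-like trigger (<z> ~ 0.2, broad) versus a
# near-ideal jet definition (<z> ~ 0.95, narrow).
z_hadron = rng.beta(1.0, 4.0, N)
z_jet = rng.beta(40.0, 2.0, N)

def mean_parton_energy(z, window=(18.0, 22.0)):
    """Mean parton energy E feeding a trigger with E_obs = z E in the window."""
    E_obs = z * E
    trig = (E_obs > window[0]) & (E_obs < window[1])
    return E[trig].mean()

# A ~20 GeV hadron trigger is fed by much harder partons than a ~20 GeV jet:
print(mean_parton_energy(z_hadron), mean_parton_energy(z_jet))
```

In this toy setup the hadron-triggered sample comes from partons of roughly 30--35 GeV while the near-ideal jet trigger selects partons close to 21 GeV, which is the qualitative content of the kinematic bias discussed above.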
Another part of the kinematic bias is related to the fact that due to higher order pQCD effects and nuclear initial state effects a hard parton pair is never exactly back to back. These effects can be approximated by introducing a randomly oriented vector ${\bf k_t}$ with a Gaussian distribution in magnitude which is added to the pair momenta. A trigger condition then biases this {\itshape a priori} randomly oriented vector to be pointing towards the trigger direction \cite{Dihadron2}. The {\itshape parton type bias} then has to do with the fact that the functional form of $P(z)$ depends on the shower-initiating parton type: On average, quarks fragment into harder and more collimated showers than gluons. As a result, any trigger condition corresponding to an observed energy is more likely to be fulfilled by a quark than by a gluon. Thus, on the trigger side the fraction of quark jets is generically enhanced as compared to an unbiased pQCD spectrum. How the bias acts on the away side depends on the kinematical situation. In a regime where the subprocess $qg \rightarrow qg$ is dominant, enhancing the near side quark fraction biases the away side towards gluon jets \cite{Dihadron2} which is relevant for instance for the 5-20 GeV momentum regime at RHIC. The biases in vacuum are summarized in Tab.~\ref{T-BiasesV}. \begin{table}[htb] \begin{tabular}{|l|l|} \hline bias & cause\\ \hline \hline kinematic & the relationship between parton and trigger\\ & energy results from both spectrum and\\ & fragmentation process\\ parton type & gluon jets are softer and less likely to\\ & fulfill a trigger condition\\ \hline \end{tabular} \caption{\label{T-BiasesV}The various biases in vacuum} \end{table} \subsection{Biases in the medium} Medium modifications to the shower structure generically tend to equilibrate the shower, i.e. they drive the kinematical properties of shower partons closer to those of medium partons. 
This implies that medium-modified showers are softer and broader, i.e. more weight in $P(z)$ shifts to lower $z$. As a result, the kinematical bias is changed in a medium: the same $E_{obs}$ maps on average to a higher $E$ in a medium than in vacuum. The strength of the medium modification is (up to coherence effects which are important in detail) driven by the number of interactions with the medium, which is a function of the medium density, the coupling strength of shower partons to the medium and the time/length the shower spends in the medium. Of these, the coupling strength is relevant for a modification of the parton type bias by the medium: Since gluons interact a factor of 9/4 more strongly with color charges, the medium modification of gluon jets is correspondingly stronger than that of quark jets. Note that the factor 9/4 does not accurately describe the difference between quark and gluon parton showers, as for instance a gluon may split into a $q\overline{q}$ pair which after decoherence interacts as independent quark color charges. However, the dominant radiation pattern in a shower, both for quarks and gluons, is the emission of soft gluons, which preserves the identity of the leading parton, and thus gluon jets in practice have a stronger interaction with the medium than quark jets, although the real difference is somewhat smaller than 9/4. For this reason, triggered objects are even more biased towards quark jets than is already the case in vacuum. This effect is sometimes referred to as \emph{gluon filtering}. The combined effect of medium density and pathlength of a parton through the medium leads to a \emph{geometrical bias} on the position in the transverse plane of the vertex leading to the triggered event. Vertices leading to triggered events have a tendency to be close to the medium surface, with the trigger parton travelling outward.
This implies that the same effect biases the away side parton to have a longer than average pathlength in the medium. \subsection{Shower biases} While all biases discussed so far affect properties of the hard event itself, and thus refer equally to near and away side, there are also biases which affect the trigger parton side only. These are referred to here as \emph{shower biases}. For instance, requiring that a single hard hadron is produced in a shower restricts the phase space for associated hadron production via the conservation of energy and momentum. Generically, shower biases make observables more robust against medium modifications, as a shower bias implies that there are properties of the shower which are protected by the trigger condition against medium modifications. A list of the medium-induced biases discussed in this work is given in Tab.~\ref{T-Biases}. \begin{table}[htb] \begin{tabular}{|l|l|} \hline bias & cause\\ \hline \hline kinematic & medium-induced radiation changes relation \\ & between parton and trigger energy\\ \hline parton type & medium interaction preferentially\\ & suppresses gluon jets\\ \hline geometry & short in-medium pathlengths are more\\ & likely to fulfill trigger condition\\ \hline shower & strongly broadened and softened showers\\ & are unlikely to lead to a trigger\\ \hline \end{tabular} \caption{\label{T-Biases}The various medium-induced biases} \end{table} \section{Model description} In order to illustrate the qualitative remarks made above quantitatively, we will in the following show results obtained with the in-medium shower evolution code YaJEM \cite{YaJEM2,YaJEM1} in its latest version YaJEM-DE \cite{YaJEM-DE}, which gives a fair account of a large number of observables both at RHIC and at the LHC \cite{Constraining}.
YaJEM is a tool to obtain the MMFF given initial parton energy and a path through the medium, hence for a complete model description of a hard process in a medium also the medium evolution and the pQCD process have to be taken into account. \subsection{The perturbative hard process} Any simulation of hard events inside a heavy-ion collision which is not a theoretical quantity with a fixed initial state but refers to an experimentally observed final state must start with the computation of the probability to obtain certain parton momenta and types from the hard process itself. In LO pQCD, the production of two hard partons $k,l$ is described by \begin{equation} \label{E-2Parton} \frac{d\sigma^{AB\rightarrow kl +X}}{dp_T^2 dy_1 dy_2} \negthickspace = \sum_{ij} x_1 f_{i/A}(x_1, Q^2) x_2 f_{j/B} (x_2,Q^2) \frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}} \end{equation} where $A$ and $B$ stand for the colliding objects (protons or nuclei) and $y_{1(2)}$ is the rapidity of parton $k(l)$. The distribution function of a parton type $i$ in $A$ at a momentum fraction $x_1$ and a factorization scale $Q \sim p_T$ is $f_{i/A}(x_1, Q^2)$. The distribution functions are different for free protons \cite{CTEQ1,CTEQ2} and nucleons in nuclei \cite{NPDF,EKS98,EPS09}. The fractional momenta of the colliding partons $i$, $j$ are given by $ x_{1,2} = \frac{p_T}{\sqrt{s}} \left(\exp[\pm y_1] + \exp[\pm y_2] \right)$. Expressions for the pQCD subprocesses $\frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{t},\hat{u})$ as a function of the parton Mandelstam variables $\hat{s}, \hat{t}$ and $\hat{u}$ can be found e.g. in \cite{pQCD-Xsec}. 
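The kinematics entering Eq.~(\ref{E-2Parton}) are straightforward to sketch; the numerical values below are illustrative:

```python
import math

def momentum_fractions(pT, y1, y2, sqrt_s):
    """Momentum fractions x1, x2 of the incoming partons in a LO 2->2 process
    with final partons at transverse momentum pT (GeV) and rapidities y1, y2,
    following x_{1,2} = pT/sqrt(s) * (exp[+-y1] + exp[+-y2])."""
    x1 = pT / sqrt_s * (math.exp(y1) + math.exp(y2))
    x2 = pT / sqrt_s * (math.exp(-y1) + math.exp(-y2))
    return x1, x2

# A midrapidity pair at RHIC-like sqrt(s) = 200 GeV: x1 = x2 = 2 pT / sqrt(s).
x1, x2 = momentum_fractions(pT=10.0, y1=0.0, y2=0.0, sqrt_s=200.0)  # -> (0.1, 0.1)

# The partonic invariant mass obeys s_hat = x1 x2 s, i.e. (2 pT)^2 at y1 = y2 = 0.
s_hat = x1 * x2 * 200.0**2  # -> 400.0 GeV^2
```

Moving the pair away from midrapidity makes the event asymmetric, probing one parton at large $x$ and the other at small $x$, which is why forward measurements constrain different regions of the parton distribution functions.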
To account for various effects, including higher order pQCD radiation, transverse motion of partons in the nucleon (nuclear) wave function and effectively also the fact that hadronization is not a collinear process, the distribution is commonly folded with an intrinsic transverse momentum $k_T$ with a Gaussian distribution, thus creating a momentum imbalance between the two partons as ${\bf p_{T_1}} + {\bf p_{T_2}} = {\bf k_T}$. In a MC description of the process, Eq.~(\ref{E-2Parton}) is sampled to generate the parton type and momentum of the back-to-back pair. Subsequently the intrinsic ${\bf k_T}$ imbalance is sampled and added to the parton pair momentum. In correlation studies, one of the partons is randomly picked as a trigger candidate. \subsection{Medium-modified fragmentation} Hard vertices are assumed to be distributed with a binary overlap profile as appropriate for LO pQCD parton production, i.e. the {\itshape a priori} probability density for finding a vertex in the transverse $(x,y)$ plane is given by \begin{equation} \label{E-Profile} P(x_0,y_0) = \frac{T_{A}({\bf r_0 + b/2}) T_A(\bf r_0 - b/2)}{T_{AA}({\bf b})}, \end{equation} where the thickness function is given in terms of Woods-Saxon distributions of the nuclear density $\rho_{A}({\bf r},z)$ as $T_{A}({\bf r})=\int dz \rho_{A}({\bf r},z)$ and $T_{AA}({\bf b})$ is the standard nuclear overlap function $T_{AA}({\bf b}) = \int d^2 {\bf s}\, T_A({\bf s}) T_A({\bf s}-{\bf b})$ for impact parameter ${\bf b}$. In the MC procedure, we place the parton pair at a vertex $(x_0,y_0)$ sampled from this distribution, with a random orientation $\phi$ with respect to the reaction plane. We rotate the event for the purpose of extracting vertex distributions such that the vector of the trigger candidate parton defines the $-x$ direction.
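The vertex sampling from Eq.~(\ref{E-Profile}) can be sketched with a simple rejection method. The snippet below is an illustration only; the Woods-Saxon parameters for a Au nucleus and the numerical settings are assumed values, not those of the actual simulation:

```python
import math
import random

# Illustrative Woods-Saxon parameters for a Au nucleus (assumed values, fm)
R_A, a_WS = 6.38, 0.535

def rho(r):
    """Unnormalized Woods-Saxon density profile."""
    return 1.0 / (1.0 + math.exp((r - R_A) / a_WS))

def thickness(x, y, zmax=15.0, n=200):
    """Nuclear thickness T_A(x, y) = int dz rho, by the midpoint rule."""
    dz = 2.0 * zmax / n
    return sum(rho(math.sqrt(x * x + y * y + z * z)) * dz
               for z in (-zmax + (i + 0.5) * dz for i in range(n)))

def sample_vertex(b=0.0, box=8.0):
    """Rejection-sample a production vertex (x0, y0) from the
    binary-collision weight T_A(r + b/2) T_A(r - b/2), b along x."""
    # Approximate maximum of the weight at the overlap midpoint,
    # padded with a small safety factor.
    w_max = 1.05 * thickness(b / 2.0, 0.0) ** 2
    while True:
        x, y = random.uniform(-box, box), random.uniform(-box, box)
        w = thickness(x + b / 2.0, y) * thickness(x - b / 2.0, y)
        if random.uniform(0.0, w_max) < w:
            return x, y
```

Note that the normalization by $T_{AA}({\bf b})$ is not needed for the rejection step, since only relative weights enter.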
In studies of the explicit dependence of observables on the angle of the parton with the various $v_n$ event planes we would use the relevant event plane angle instead and would consider only parton propagation with a set angle to the event plane. The event is now embedded into a hydrodynamical description of the medium (\cite{hydro2d} for the RHIC case and the extrapolation of this scenario to larger $\sqrt{s}$ in the LHC case \cite{RAA-LHC}) which allows one to extract e.g. the energy density $\epsilon(\zeta)$ at any point of the propagating parton path $\zeta$. In the absence of a medium, YaJEM is identical to the PYSHOW algorithm \cite{PYSHOW} which evolves partons as a series of $a\rightarrow bc$ branchings in the energy fraction $z = E_b/E_a$ and the virtuality $t = \ln(Q_a^2/\Lambda_{QCD}^2)$ with $\Lambda_{QCD} = O(300)$ MeV. In YaJEM, it is assumed that the virtuality $Q_a^2$ and energy $E_a$ of any intermediate shower parton $a$ is modified by the medium via two transport coefficients, $\hat{q}$ and $\hat{e}$, as \begin{equation} \label{E-Qgain} \Delta Q_a^2 = \int_{\tau_a^0}^{\tau_a^0 + \tau_a} d\zeta \hat{q}(\zeta) \end{equation} and \begin{equation} \label{E-Drag} \Delta E_a = \int_{\tau_a^0}^{\tau_a^0 + \tau_a} d\zeta \hat{e}(\zeta). \end{equation} Evaluating these equations requires a mapping of the shower evolution of PYSHOW in momentum space to the hydrodynamical evolution in position space and a model of the transport coefficients as a function of thermodynamical properties of the medium. The temporal structure of the shower evolution can be parametrically recovered by uncertainty arguments. The mean lifetime of a virtual parton $b$ coming from a parent $a$ is hence given as \begin{equation} \label{E-Lifetime} \langle \tau_b \rangle = \frac{E_b}{Q_b^2} - \frac{E_b}{Q_a^2}.
\end{equation} In the MC simulation of the shower, the actual lifetime is determined from this mean value according to the probability distribution \begin{equation} \label{E-RLifetime} P(\tau_b) = \exp\left[- \frac{\tau_b}{\langle \tau_b \rangle} \right]. \end{equation} For the relation between transport coefficients and hydrodynamical parameters, \begin{equation} \label{E-qhat} \hat{q}[\hat{e}](\zeta) = K_Q[K_E] \cdot 2 \cdot [\epsilon(\zeta)]^{3/4} (\cosh \rho(\zeta) - \sinh \rho(\zeta) \cos\psi) \end{equation} is assumed where $\rho$ is the transverse flow rapidity of the medium, $\psi$ the angle between parton direction and medium flow direction and $K_Q$ and $K_E$ are two free parameters parametrizing the strength of the coupling of medium and shower partons. In this expression, $\epsilon^{3/4}$ represents a quantity with the dimensions of $\hat{q}$ and in an ideal gas parametrically corresponds to the medium density, whereas the latter factor accounts for the Lorentz contraction (and hence effective density increase) of the volume passed by the hard parton. Following the procedure in \cite{YaJEM-DE}, these are adjusted to $K_Q = 0.8 K$ and $K_E = 0.1 K$ (corresponding to about a 10\% elastic energy loss contribution leading to direct energy transfer into the medium) and $K$ is fit to the nuclear suppression factor $R_{AA}$ in 0-10\% central Au-Au collisions at RHIC. Note that in this work, any results presented are intended as illustration of qualitative effects and the order of magnitude of various biases. Hence no attempt is made to obtain a good fit to any other data set by either fitting $K$ to a more extended set of data or by exploiting the freedom to choose a different fluid dynamical description of the medium. For this reason comparison with data (where it is available) is left for future work. Changing the kinematics of evolving shower partons according to Eqs.~(\ref{E-Qgain},\ref{E-Drag}) in YaJEM results in a medium-modified parton shower. 
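The interplay of Eqs.~(\ref{E-Qgain}), (\ref{E-Lifetime}) and (\ref{E-RLifetime}) amounts to simple bookkeeping along the parton path. The following sketch (illustrative Python with a constant-density toy medium standing in for the hydrodynamical evolution; the factor 0.8 is the $K_Q$ value quoted above) shows the structure of the computation:

```python
import math
import random

def qhat(eps, K_Q=0.8, rho_flow=0.0, psi=0.0):
    """Transport coefficient of Eq. (E-qhat) for energy density eps;
    flow rapidity and flow angle are set to zero in this toy."""
    return K_Q * 2.0 * eps ** 0.75 * (math.cosh(rho_flow)
                                      - math.sinh(rho_flow) * math.cos(psi))

def delta_Q2(eps_of_zeta, tau0, tau, n=1000):
    """Virtuality gain of a parton living from tau0 to tau0 + tau,
    Eq. (E-Qgain), integrated by the midpoint rule."""
    d = tau / n
    return sum(qhat(eps_of_zeta(tau0 + (i + 0.5) * d)) * d for i in range(n))

def sample_lifetime(E_b, Q_b2, Q_a2):
    """Lifetime of daughter b from parent a, Eqs. (E-Lifetime) and
    (E-RLifetime): exponential around the uncertainty-principle mean."""
    mean_tau = E_b / Q_b2 - E_b / Q_a2
    return random.expovariate(1.0 / mean_tau)

# Constant-density toy medium: Delta Q^2 = K_Q * 2 * eps^(3/4) * tau
dq2 = delta_Q2(lambda zeta: 10.0, tau0=0.0, tau=2.0)
```

In the full simulation, $\epsilon(\zeta)$ is of course read from the hydrodynamical evolution along the actual parton path rather than assumed constant.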
The resulting distribution of partons is then passed to the Lund model \cite{Lund} to compute the non-perturbative hadronization. \subsection{Analysis} The resulting output in terms of medium-modified hadron showers is analyzed whenever the trigger condition is fulfilled. Since the event record at this point contains the full information on hadron PID and momenta, in principle any set of cuts can be evaluated (in practice the statistics may become too low). In case the trigger is a hadron, the test for the trigger condition is trivial. In the case of a jet trigger, the resulting event record is clustered with the anti-$k_T$ algorithm of the FastJet package \cite{FastJet}. At this point particles computed from a bulk hydrodynamical event could be inserted and clustered together with the hard event to study the influence of background fluctuations. This however is computationally very expensive and not done here --- throughout this work, it is assumed that any background fluctuations are sufficiently trivial to be removed. Depending on the trigger conditions, this may not be a good assumption in practice (see e.g. \cite{BgFluct,LeticiaBG} for studies of the influence of the soft background on observables). In case a jet trigger is required in combination with conditions on PID or constituent momenta, all particles not fulfilling the conditions are removed before the clustering is done. \subsection{Relevance of the results} Despite the fact that the following case studies are performed with a specific parton-medium interaction model, YaJEM-DE, and for a specific choice of medium evolution, the qualitative conclusions drawn about the role of biases in hard observables require a significantly less stringent set of assumptions.
The medium-induced biases will appear acting in the same direction as illustrated by YaJEM-DE for any model which has the following characteristics: 1) the medium on average softens fragmentation in a shower, 2) the medium on average broadens the perpendicular distribution of hadrons in a shower, 3) the effects of softening and broadening increase monotonically with medium density and in-medium pathlength, and 4) gluons couple more strongly to the medium than quarks. Most current models of parton-medium interactions exhibit these traits, so the following qualitative statements can be expected to hold fairly generically. However, quantitatively the relative strength of different biases depends on specific model assumptions. \section{Shower biases} A trigger condition may refer to one or both showers generated in a back-to-back hard event. If the shower on the trigger side is studied, in general the trigger condition biases the shower evolution itself. The shower bias is however absent when the trigger condition is evaluated for one parton (the 'near side') whereas the measurement of jet properties is done for the other parton (the 'away side'). Typical examples for measurements in which shower biases occur are the near side associated hadron distribution for hadron triggered events, or the distribution of hadrons inside reconstructed jets. In order to study the effect of imposing a trigger condition on the shower evolution in isolation, we keep the parameters which are subject to other biases (parton type, initial energy and the strength of the medium modification) fixed for the following section: the test case is always chosen to be a 20 GeV quark, either in vacuum or propagating through a medium such that the line-integrated medium-induced virtuality is $\Delta Q^2 = 5$ GeV$^2$. \subsection{Hard track conditions} Let us now consider showers in which a trigger condition forces a single hard hadron to have the momentum $P_h$.
Energy-momentum conservation inside the shower requires recovering (in the medium case approximately) the original shower-initiating parton energy as $E= \sum_i P_i$ (where hadron masses have been neglected), i.e. in the limit where $P_h/E$ is sufficiently large, a significant bias for the remaining distribution is found. This is illustrated in Fig.~\ref{F-FF-track} where the parallel momentum distribution inside a vacuum shower is plotted for various values of the imposed hard track condition. \begin{figure}[htb] \epsfig{file=FF-trackbias_f.eps, width=8cm} \caption{\label{F-FF-track}(Color online) Conditional distribution of hadrons at energy $E$ in a shower originating from a 20 GeV quark, given a trigger hadron with the indicated energy in the same shower.} \end{figure} In essence, a hard track condition leads (by construction) to an enhancement of the hadron yield in the momentum region above the track requirement, and by momentum conservation to a depletion of the yield below. This pattern becomes more pronounced if the trigger hadron takes a sizeable fraction of the total jet energy. \begin{figure}[htb] \epsfig{file=track_requirements_f.eps, width=8cm} \caption{\label{F-IAA-track} (Color online) Medium over vacuum ratio of conditional hadron energy distributions in a shower originating from a 20 GeV quark, given a trigger hadron with the indicated energy in the same shower.} \end{figure} Fig.~\ref{F-IAA-track} illustrates how the medium modification of the shower responds to a hard track trigger condition, using the ratio of the parallel momentum distributions in medium and in vacuum. In the unbiased case, a depletion of the yield at high $P_T$ ('jet quenching') is balanced by a significant yield increase at low $P_T$. A hard track condition tends to remove the depletion at high $P_T$ above the required track momentum.
This is a very natural outcome --- while hard tracks are unlikely in the unbiased case and made even less likely by the effect of the medium, a single hard track is always guaranteed once the trigger condition is imposed, and hence it cannot be quenched by the medium. In the presence of such a trigger condition, the medium effect may reduce the rate of triggered events, but it may no longer lead to a quenched high $P_T$ shower pattern. This is a very generic finding --- imposing a trigger bias always tends to reduce the medium modifications of the shower pattern because the trigger condition generates protected structures in the jet. \subsection{Jet energy conditions} Another commonly found bias on the shower is the requirement that a jet with at least a certain energy is found. The precise nature of the bias depends on the algorithm used to cluster hadrons into jets and its parameter settings (often an angular radius parameter $R$), as well as on the energy threshold. Qualitatively, it is clear that requiring a substantial flow of the total jet energy into a small cone radius selects showers in which only few branchings take place and as a consequence the parallel spectrum is harder than average while the perpendicular shape is more collimated than average. This is shown in Fig.~\ref{F-FF-jet} where the parallel momentum distribution inside a vacuum shower originating from a 20 GeV quark is plotted for various jet energy cuts after the shower has been clustered with anti-$k_T$ for the indicated radius parameter.
\begin{figure}[htb] \epsfig{file=FF_jetbias_f.eps, width=8cm} \caption{\label{F-FF-jet}(Color online) Conditional distribution of hadrons at energy $E$ in a shower originating from a 20 GeV quark, given that the shower after clustering with radius $R$ results in a jet energy above $E_{jet}$.} \end{figure} As expected, requiring a very collimated jet by demanding at least 75\% of the jet energy within a cone of radius $R=0.2$ leads to a sizeable hardening of the hadron spectrum in the jet, with the bias successively decreasing for larger radii or smaller $E_{jet}$. However, unlike in the case of a hard track requirement, there are no pronounced discontinuities in the distribution created by a jet energy condition. \begin{figure}[htb] \epsfig{file=Ejet_requirements_f.eps, width=8cm} \caption{\label{F-IAA-jet}(Color online) Medium over vacuum ratio of conditional hadron energy distributions in a shower originating from a 20 GeV quark, given that the shower after clustering with radius $R$ results in a jet energy above $E_{jet}$.} \end{figure} Fig.~\ref{F-IAA-jet} illustrates how the medium modification of the shower structure is affected by imposing a jet energy condition. As in the case of a hard track condition, generically a bias on the shower tends to remove the modification, as an increasingly significant part of the shower becomes protected against any modification by the trigger condition. \subsection{Jet mass conditions} A final state jet property which is currently not used as a trigger condition in measurements is the mass of a jet. This can be related to the virtuality of the shower-initiating parton which in turn determines the phase space available for vacuum branchings. Thus, highly virtual partons undergo a much richer branching history before the medium is encountered than hard partons with low initial virtuality.
In this way, tagging high jet masses selects events in which configurations of multiple partons undergo medium modification, whereas tagging low mass jets tends to prefer configurations which are dominated by a single leading parton. The strength of medium modifications observed in a shower is thus expected to scale with the jet mass, which can be exploited to get a more differential picture of medium-modified showers \cite{jetMass}. \section{Case study: away side $I_{AA}$ } Let us in the following consider a more realistic situation in which the trigger condition refers to a pure final state condition and hence other types of biases (in this section with the exception of the shower bias) occur, in particular kinematic, parton type and geometry bias. The test case discussed in this section is a measurement of $I_{AA}$ of the conditional yield of hadrons as a function of $P_T$ on the away side (which removes the shower bias), binned as a function of $z_T$ where $z_T = P_T/E_{obs}$. Conditional away side yields were first obtained by the STAR collaboration \cite{STAR_dihadron,STAR_dihadron_DzT,STAR_dihadron_PRL} and the strong quenching of the away side correlation peak was almost immediately seen as a spectacular confirmation of the expectations of monojet events in a medium. Similar measurements have now also been performed by the ALICE collaboration at LHC kinematics \cite{ALICE_dihadron} which found somewhat reduced suppression as compared to the RHIC case. While theoretically challenging to compute, dihadron correlations have been a valuable tool to probe for instance the pathlength dependence of energy loss \cite{AdS,ElasticPhenomenology} and to track the fate of subleading hadrons \cite{YaJEM-DE}. In the case study, $E_{obs}$ is always the trigger energy given the trigger condition.
We consider four different trigger conditions: 1) a $\gamma$ ($\gamma$-h), 2) a single hadron (h-h), 3) a jet as defined by STAR \cite{STAR-jet-h}, including only $\pi^+, \pi^-, \pi^0, K^+, K^-, p, \overline{p}$ or $\gamma$ above 2 GeV clustered with a radius of $R=0.4$ (jet-h) and 4) an ideal jet with all particles clustered into $R=0.4$ (ijet-h). The trigger energy range is in all cases $12-15$ GeV. This selection contains strong kinematical bias (h-h, jet-h) as well as weak kinematical bias (ijet-h, $\gamma$-h), strong parton type bias ($\gamma$-h, h-h) as well as weak parton type bias (jet-h, ijet-h) and strong geometry bias (h-h, jet-h) as well as weak geometry bias ($\gamma$-h, ijet-h). There is some freedom in the choice of the away side observable, and in principle one could have chosen for instance $P_T$ rather than $z_T$. Each of these choices emphasizes different physics: At low $P_T$, the jet structure is determined largely by the appearance of the medium-induced radiation. As argued in \cite{jet-h}, the enhancement region is essentially set by medium physics and thus is seen at constant $P_T$, not constant $z_T$, in which case plotting the correlation in $P_T$ emphasizes the relevant physics. In contrast, in the high $P_T$ region where $z_T>0.5$, energy-momentum conservation is a major influence, and the constraints by energy-momentum conservation scale on average approximately with $z_T$ (i.e. as a constant fraction of the trigger energy) rather than $P_T$ (i.e. independent of the trigger energy). The choice made here thus emphasizes the high $P_T$ physics at the expense of obscuring the physics of the enhancement due to medium-induced radiation.
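The distinction between binning in $z_T$ and in $P_T$ can be made concrete with a small numerical illustration (the function below is purely for illustration): at fixed $z_T$, the 12-15 GeV trigger window maps onto a band of associate momenta.

```python
def zT_to_pT_band(zT, E_trig_min=12.0, E_trig_max=15.0):
    """Associate-hadron momentum range probed at fixed z_T = P_T / E_obs
    for a given trigger energy window."""
    return zT * E_trig_min, zT * E_trig_max

# z_T = 0.5 corresponds to associate momenta between 6 and 7.5 GeV here
lo, hi = zT_to_pT_band(0.5)
```

A cut at constant $P_T$ instead would probe the same associate momenta for all trigger energies, which is the more natural choice for the medium-induced enhancement region.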
Note again that the following results are case studies for the sake of illustration rather than model predictions, since they do not correct for effects like background fluctuations in jet finding and ignore the systematic uncertainty inherent in the choice of the hydrodynamical background, which is known to be important in comparison with real data \cite{HydroSys}. \subsection{The situation at RHIC} We consider first the situation for 0-10\% central Au-Au collisions at RHIC. The distribution of trigger vertices as obtained in the model calculation illustrating the amount of geometrical bias is shown in Fig.~\ref{F-geo-RHIC}, the distribution of away side parton momenta given a trigger in the 12-15 GeV energy range is shown in Fig.~\ref{F-kinbias-RHIC}. \begin{figure*} \epsfig{file=vdist_h-h_12-15.eps, width=5.9cm}\epsfig{file=vdist_jet-h_12-15.eps, width=5.9cm}\epsfig{file=vdist_ijet-h_12-15.eps, width=5.9cm} \caption{\label{F-geo-RHIC}Conditional distribution of production vertices in the transverse plane, given a trigger with observed energy $E_{obs}$ between 12 and 15 GeV in 0-10\% central 200 AGeV Au-Au collisions for hadron triggers (left), a jet definition used by STAR (middle) and an idealized jet definition (right). In all cases, the trigger object momentum vector defines the $-x$ direction.} \end{figure*} \begin{figure}[htb] \epsfig{file=kinbias_comp_f.eps, width=8cm} \caption{\label{F-kinbias-RHIC}(Color online) Conditional momentum distribution of the away side parton given a triggered object in the range of $E_{obs}$ between 12 and 15 GeV for various possibilities for the trigger. Shown for reference is the situation for p-p collisions (lines) as well as the situation in 0-10\% central 200 AGeV AuAu collisions (symbols).} \end{figure} These results confirm in a quantitative way what has been stated earlier: Both h-h and jet-h correlations have a relatively strong geometry bias to trigger on events in which the vertex is close to the surface. 
This is not so for ijet-h correlations (and since the $\gamma$ does not undergo any final state interaction, the $\gamma$-h correlation has no geometrical bias at all). At the same time, a $\gamma$-trigger is, up to intrinsic $k_T$ smearing, a relatively faithful representation of the parton kinematics. An ideal jet maps to a somewhat larger region in parton $p_T$, whereas jet-h and h-h probe the widest range in parton kinematics. \begin{table}[htb] \begin{tabular}{|l|cccc|} \hline trigger & $f_{glue}^{vac}$ near& $f_{glue}^{vac}$ away& $f_{glue}^{med}$ near& $f_{glue}^{med}$ away\\ \hline $\gamma$-h & N/A & 0.03 & N/A & 0.03\\ h-h & 0.04 & 0.69 & 0.04 & 0.69\\ jet-h & 0.12 & 0.68 & 0.08 & 0.69\\ ijet-h & 0.44 & 0.55 & 0.33 & 0.61\\ \hline \end{tabular} \caption{\label{T-gluefrac-RHIC}Conditional fraction of gluon jets on near and away side given a trigger object in the range of $E_{obs}$ between 12 and 15 GeV both in vacuum and in 0-10\% central 200 AGeV Au-Au collisions.} \end{table} The parton type bias is summarized in Tab.~\ref{T-gluefrac-RHIC} where the fraction of gluon jets $f_{glue}$ is shown on near and away side in both vacuum and medium. While the away side for the $\gamma$-h trigger is almost a pure quark jet sample, all other trigger conditions lead to a sizeable gluon jet fraction of $\sim 60$\%. Let us briefly review how these biases affect $I_{AA}$: A strong kinematical bias increases $I_{AA}$ since the available parton energy on the away side increases, giving a larger phase space for particle production. Parton type bias towards gluon jets decreases $I_{AA}$ since gluon jets show softer fragmentation, a comparison of the numbers suggests however that in the particular kinematic window studied here the differences between vacuum and medium are rather small and gluon filtering is not an issue. Finally, a strong geometrical bias decreases $I_{AA}$ since the average in-medium pathlength (and hence the strength of the medium modifications) grows. 
However, there is no easy {\itshape a priori} argument which would indicate which bias determines the end result. \begin{figure}[htb] \epsfig{file=IAA_biases_f.eps, width=8cm} \caption{\label{F-IAA-RHIC}(Color online) Away side hadron yield modification as a function of $z_T = E_h/E_{obs}$ for various trigger objects in 0-10\% 200 AGeV Au-Au collisions.} \end{figure} The actual outcome of the model in terms of away side $I_{AA}$ is shown in Fig.~\ref{F-IAA-RHIC}. Given the fairly sizeable differences in geometry and kinematics probed, the default expectation would be that the resulting $I_{AA}$ exhibits differences to the same degree. However, the actual outcome looks at first glance rather similar. Qualitatively all curves show suppression at high $z_T$ whereas there is enhancement at low $z_T$ (which reflects the generic physics of a MMFF as determined by comparison with a large body of data \cite{Constraining} --- energy lost from hard shower modes is recovered in the enhanced production of subleading hadrons). Quantitatively, there are only small differences between $\gamma$-h and ijet-h (which have a markedly different away side population of quark jets). Jet-h is not separable from h-h, in spite of the fact that the underlying kinematics is somewhat different. There is however a splitting in the high $z_T$ value of $I_{AA}$ between $\gamma$-h and ijet-h on the one hand and h-h and jet-h on the other hand which reflects the different geometry bias and/or kinematical bias. Note however that the split is not very large and in practice might be difficult to resolve within the systematic uncertainties associated with the choice of a hydrodynamical evolution model for the bulk matter. There are two possible scenarios which can generate the observed similarity between $\gamma$-h and ijet-h: Either a generic effect makes the outcome of the computation insensitive to the details of the bias, or there is an accidental cancellation of biases acting in different directions.
\begin{figure}[htb] \epsfig{file=IAA_biases_gamma-h_f.eps, width=8cm} \caption{\label{F-IAA-gamma-h-RHIC}(Color online) Away side hadron yield modification as a function of $z_T = E_h/E_{obs}$ for a $\gamma$ trigger in 0-10\% 200 AGeV Au-Au collisions, assuming the actual pQCD scattering and a scenario in which only the channel $q\overline{q}\rightarrow g\gamma$ is active.} \end{figure} The result shown in Fig.~\ref{F-IAA-gamma-h-RHIC} argues that the latter scenario is true --- if the parton type bias is changed to the (unphysical) case that only gluons recoil from a $\gamma$ trigger, the stronger interaction of the gluon with the medium is expected to lead to additional softening of the away side yield --- which is exactly what is observed. Thus, the observation that $\gamma$-h and ijet-h results fall almost on top of each other is not due to some generic mechanism, but results from a non-trivial cancellation of biases. This in turn argues that if the relative strength of the biases can be changed experimentally, the cancellation can no longer be expected to occur. One possibility to do so is to consider the LHC kinematic range at a significantly higher $\sqrt{s} = 2.76$ ATeV where we will see differences between $\gamma$-h and ijet-h results (cf. Fig.~\ref{F-IAA-LHC}). \subsection{The situation at LHC} When going from $\sqrt{s} = 200$ AGeV to $\sqrt{s}$ of 2.76 ATeV with trigger momentum ranges kept fixed, the following trends are expected in the biases: First, the hard collisions probe the nuclear initial state at lower $x \sim 2 E_{obs}/\sqrt{s}$, and consequently there is a transition to a significantly more gluon-dominated regime, as gluons increasingly constitute the largest share of the low $x$ parton distribution. This has an effect on the parton type bias.
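The shift to lower $x$ can be estimated directly from $x \sim 2 E_{obs}/\sqrt{s}$ (a rough midrapidity estimate, for illustration only):

```python
def typical_x(E_obs, sqrt_s):
    """Rough midrapidity estimate of the momentum fraction probed,
    x ~ 2 E_obs / sqrt(s)."""
    return 2.0 * E_obs / sqrt_s

# Center of the 12-15 GeV trigger window at RHIC vs. LHC energies
x_rhic = typical_x(13.5, 200.0)    # ~0.135: valence-quark region
x_lhc = typical_x(13.5, 2760.0)    # ~0.01: gluon-dominated region
```

The order-of-magnitude drop in $x$ for the same trigger window underlies the change in parton type bias discussed above.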
In addition, the momentum spectrum of produced partons gets much harder, which implies a weakening of the kinematic bias since the 'penalty' for using a very energetic parton to produce a high $P_T$ hadron decreases. As a result, the correlation between parton momentum and jet or leading hadron momentum generically weakens. Finally, there is also a more copious production of bulk matter, both medium temperature and density are increasing with $\sqrt{s}$, which implies a strengthened geometrical bias. However, since the available kinematic range grows $\sim \sqrt{s}$ whereas the medium density grows as a weak power of $\sqrt{s}$ (for instance $\sim \sqrt{s}^{0.574}$ in the EKRT model \cite{EKRT}), there is some reason to expect a net weakening of the geometrical bias. Again, note that the following results are for illustration and not predictions, as they use a direct extrapolation from RHIC to LHC energies \cite{RAA-LHC} with no attempt to tune model parameters to LHC data or to explore the systematic uncertainty given by the choice of the hydrodynamical background model. \begin{figure*} \epsfig{file=vdist_h-h_12-15_LHC.eps, width=5.9cm}\epsfig{file=vdist_jet-h_12-15_LHC.eps, width=5.9cm}\epsfig{file=vdist_ijet-h_12-15_LHC.eps, width=5.9cm} \caption{\label{F-geo-LHC}Conditional distribution of production vertices in the transverse plane, given a trigger with observed energy $E_{obs}$ between 12 and 15 GeV in 0-10\% central 2.76 ATeV Pb-Pb collisions for hadron triggers (left), a jet definition used by STAR (middle) and an idealized jet definition (right). In all cases, the trigger object momentum vector defines the $-x$ direction.} \end{figure*} An explicit computation of the geometrical bias shown in Fig.~\ref{F-geo-LHC} confirms this expectation --- despite the higher temperature and density of the LHC medium, the resulting bias on geometry is found to be considerably less due to the harder parton spectrum.
This can also be seen from Fig.~\ref{F-kinbias-LHC} where the conditional distribution of away side parton momenta given a trigger is shown. \begin{figure}[htb] \epsfig{file=kinbias_comp_LHC_f.eps, width=8cm} \caption{\label{F-kinbias-LHC}(Color online) Conditional momentum distribution of the away side parton given a triggered object in the range of $E_{obs}$ between 12 and 15 GeV for various possibilities for the trigger. Shown for reference is the situation for p-p collisions (lines) as well as the situation in 0-10\% central 2.76 ATeV Pb-Pb collisions (symbols). Note the change in the scale of the $x$-axis in comparison with Fig.~\ref{F-kinbias-RHIC}.} \end{figure} It is evident that the same range in trigger $P_T$ maps to a much wider range in possible parton kinematics at the LHC than at RHIC. The underlying reason is again the reduced penalty for starting with a high parton energy, which in turn is due to the harder primary parton spectrum. The changes in parton type bias are summarized in Table~\ref{T-gluefrac-LHC}. \begin{table}[htb] \begin{tabular}{|l|cccc|} \hline trigger & $f_{glue}^{vac}$ near& $f_{glue}^{vac}$ away& $f_{glue}^{med}$ near& $f_{glue}^{med}$ away\\ \hline $\gamma$-h & N/A & 0.04 & N/A & 0.04\\ h-h & 0.33 & 0.79 & 0.32 & 0.78\\ jet-h & 0.47 & 0.79 & 0.38 & 0.80\\ ijet-h & 0.77 & 0.78 & 0.69 & 0.78\\ \hline \end{tabular} \caption{\label{T-gluefrac-LHC}Conditional fraction of gluon jets on near and away side given a trigger object in the range of $E_{obs}$ between 12 and 15 GeV both in vacuum and in 0-10\% central 2.76 ATeV Pb-Pb collisions.} \end{table} As expected, it can be seen that in particular the near side gluon fraction at LHC is much increased over RHIC values, but also that the away side gluon fraction is more independent of the near side gluon fraction. This dependence at RHIC arose from the dominance of the $gq \rightarrow qg$ reaction, which no longer dominates at LHC kinematics.
\begin{figure}[htb] \epsfig{file=IAA_biases_LHC_f.eps, width=8cm} \caption{\label{F-IAA-LHC}(Color online) Away side hadron yield modification as a function of $z_T = E_h/E_{obs}$ for various trigger objects in 0-10\% 2.76 ATeV Pb-Pb collisions.} \end{figure} The final model results in terms of away side $I_{AA}$ are shown in Fig.~\ref{F-IAA-LHC}. These results show that the change in $\sqrt{s}$ from RHIC to LHC, resulting in different kinematical, geometrical and parton type biases, is in principle strong enough to leave significant traces in observable quantities. Any similarity between RHIC and LHC results should therefore not be seen as caused by the same generic (and hence trivial) dynamics, but rather as carrying meaningful information in terms of the relative strength of biases and their cancellations. \section{Case study --- the parallel momentum distribution of jets} Clustering hadrons into jets has been introduced in the study of hard QCD processes in p-p collisions with the aim of providing an easy comparison between pQCD calculations on the partonic level and the experimentally observed hadronic final state. The basic idea is that clustering largely removes the effect of any soft physics like hadronization or additional soft gluon emission which cannot alter the flux of energy and momentum in the shower significantly, and thus a fairly direct comparison of experimental observables is made possible. It is doubtful whether this still constitutes an advantage in the study of medium-modified showers, since medium modification is predominantly driven by the medium temperature scale $T\sim 300-500$ MeV (which is soft); clustering into jets thus tends to suppress the very effect one sets out to study. It can be shown that this renders dijet imbalance observations fairly insensitive to even gross features of the parton-medium interaction \cite{myA_J}.
One way to overcome this problem is to analyze the spectrum of particles in the observed jets and hence get a more differential picture. In the language developed above, this corresponds to a situation where in addition to kinematic, parton type and geometry bias also the shower bias is relevant. The test case considered here is an analysis of the parallel momentum distribution of jets in 2.76 ATeV Pb-Pb collisions clustered from hadrons above 1 GeV with anti-$k_T$ using $R=0.3$ with the jet energy $E_{jet}$ required to fall into the range of 100 - 110 GeV (note that this is similar to the fragmentation function analysis by the CMS collaboration \cite{CMS-FF}). From Fig.~\ref{F-geo-LHC} we may infer that the geometry bias in this situation is weak, and from Fig.~\ref{F-kinbias-LHC} we can see that we may expect partons from the trigger energy threshold $E_{jet}$ to about 1.5-2 times the trigger energy to contribute to the yield of jets in the trigger energy range. The complication due to the shower bias can be estimated from Fig.~\ref{F-IAA-jet}: For parton energies close to $E_{jet}$ (i.e. parton energies around 110 GeV) there is reason to expect a strong bias towards a shower structure which is not medium-modified; for parton energies sufficiently above $E_{jet}$ this bias gradually lessens, with the relative weight of these situations being determined by the combination of kinematical and parton type bias. (Note that Fig.~\ref{F-IAA-jet} is obtained for 20 GeV quarks; however, due to the approximately self-similar nature of jets caused by the lack of a scale in the QCD splitting kernels, corrections evolve only logarithmically in jet energy.) \begin{figure}[htb] \epsfig{file=kinbias_jet_LHC_f.eps, width=8cm} \caption{\label{F-kinbias-jet-LHC}Conditional momentum distribution of the near side parton given a triggered jet in the range of $E_{obs}$ between 100 and 110 GeV.
Shown for reference is the situation for p-p collisions (line) as well as the situation in 0-10\% central 2.76 ATeV Pb-Pb collisions (symbols). } \end{figure} The kinematic bias as obtained in the model calculation is illustrated in Fig.~\ref{F-kinbias-jet-LHC}. As expected, parton energies are probed in a range from about 100 to 150 GeV, with a slight shift towards higher energies in the medium case. At the same time, the fraction of gluon jets contributing to the yield in the trigger range decreases from $f_{glue}^{vac} = 0.44$ to $f_{glue}^{med} = 0.36$. A large fraction of jets is hence required to carry 2/3 of the parton energy inside a cone of $R=0.3$, which according to Fig.~\ref{F-IAA-jet} argues for an appreciable shower bias. \begin{figure}[htb] \epsfig{file=JetIAA_f.eps, width=8cm} \caption{\label{F-IAA-jetFF}(Color online) Near side hadron yield medium modification in a $R=0.3$ anti-$k_T$ jet as a function of $P_T$, shown as full result and obtained by neglecting the shower bias.} \end{figure} The final result of the model calculation is shown in Fig.~\ref{F-IAA-jetFF} and compared with a computation in which the shower bias effect has been deliberately removed (i.e. the fragmentation is computed for a population of showers distributed according to the kinematic, geometry and parton type biases obtained by evaluating the trigger condition in the full simulation, but without checking whether a given shower actually clusters to an $E_{jet}$ in the trigger energy range --- note also that without such rejection, the available statistics is much higher). The results show dramatic differences between taking the shower bias into account or not. In all cases, there is an enhancement of the yield below $P_T \sim 3$ GeV (which is not properly resolved by the binning). This is followed by a statistically significant region of depletion in the full calculation which ends at around 30-40 GeV, where the full result becomes compatible with unity before statistics runs out.
Qualitatively, this agrees with CMS measurements \cite{CMS-FF}. In contrast, the result without shower bias continues to show increasing depletion of the yield up to the highest hadron $P_T$. The reason for the peculiar pattern of enhancement, depletion and unity observed in the full calculation is a good illustration of the interplay between different biases. At small $P_T$, the contribution of gluon jets is still appreciable, and so the full calculation shows the same enhancement and depletion as the unbiased calculation, albeit driven towards unity by the shower bias. However, at high $P_T$ the yield is almost exclusively due to quark jets since the fragmentation of gluon jets is generically softer. Thus, at some point the enhanced fraction of quark jets in the medium due to gluon filtering leads to a parton type bias towards $I_{AA} > 1$ which happens to approximately compensate the softening of the spectrum due to the medium modification in this kinematic range. The net result is $I_{AA} \approx 1$ in the high $P_T$ region. \section{Complicated biases} In several experimentally relevant situations, even more complicated bias structures appear. One example is 2+1 triggered correlations, in which the trigger condition corresponds to observing a coincidence of hard hadrons on both the near and the away side. A different example is triggered or seeded jets, in which clustering of the event into jets is only done if a high $P_T$ track has been seen in the event. Let us study these cases in somewhat more detail. \subsection{2+1 triggered correlations} \begin{figure*} \epsfig{file=vdist_h-h_12-15.eps, width=5.9cm}\epsfig{file=vdist_di_12-15_4-8.eps, width=5.9cm}\epsfig{file=vdist_di_12-15_8-10.eps, width=5.9cm} \caption{\label{F-geo-dihadron}Conditional distribution of production vertices in the transverse plane, given a dihadron trigger with observed energy T1 between 12 and 15 GeV in 0-10\% central 200 AGeV Au-Au collisions and T2 set to the indicated values.
Shown (left) is also the situation without the T2 requirement (see Fig.~\ref{F-geo-RHIC}). In all cases, the T1 momentum vector defines the $-x$ direction.} \end{figure*} While in hadron (or jet) triggered correlations the away side parton propagation is constrained in azimuth to be approximately back to back with the trigger parton, the rapidity of the away side parton is only weakly constrained given the observed rapidity of the near side parton, and only at sufficiently high $P_T$ does kinematics force them to a similar rapidity (see e.g. \cite{MachRap}). The original motivation for introducing 2+1 triggered correlations, in which a hard hadron on both the near (T1) and the away side (T2) serves as trigger condition, was to explicitly constrain the rapidity of the away side parton, and hence to have a better lever arm to study correlations caused by energy deposition into the medium. It was, however, realized fairly quickly that such a trigger condition biases the event towards minimal medium modification of the shower, which tends to make the observation of energy redistribution difficult to impossible \cite{DihadronTrigger}. Since hard hadron production is rare to begin with, hard fragmentation in coincidence is an even rarer phenomenon, and this implies a strong bias in the event structure. We may hence expect a strong kinematical bias with on average significantly higher parton energies probed than in the hadron triggered case, a parton type bias leaning towards quark jet coincidences and a symmetric (tangential) geometry bias which minimizes the in-medium pathlength for both near and away side parton, combined with a shower bias on each side given by the trigger requirement. While the original motivation for measuring 2+1 coincidences has a doubtful prospect of being used in practice, 2+1 triggered correlations have the appealing feature that changing the momentum range for $T_2$ allows one to change the underlying bias structure in a profound way with minimal effort.
The downside is that since hard dihadron coincidences are rare, finite statistics limits their usefulness. In the following case study, we set T1 to the window of 12-15 GeV in order to compare with previous results for hadron-triggered correlations and study two ranges for T2, 4-8 GeV and 8-10 GeV. We consider the case of 0-10\% central Au-Au collisions at 200 AGeV only. Fig.~\ref{F-geo-dihadron} shows the geometrical bias obtained from the model calculation. While in the hadron-triggered case there is a surface bias, coming from the requirement of having a short in-medium path for the trigger hadron, with increased momentum required for T2 this gradually changes into a tangential bias for which both the near and away side parton in-medium pathlengths are minimized. This means that paths through the center of the medium are progressively suppressed, and for close to equal momenta of T1 and T2, chiefly the periphery of the medium is probed. \begin{figure}[htb] \epsfig{file=kinbias_dihadron_f.eps, width=8cm} \caption{\label{F-kinbias-dihadron}(Color online) Conditional momentum distribution of the near side parton given a dihadron trigger with T1 between 12 and 15 GeV and T2 in the indicated range. Shown for reference is the situation for p-p collisions (line) as well as the situation in 0-10\% central 200 AGeV Au-Au collisions (symbols). } \end{figure} As expected, the T2 condition also has implications for the parton kinematics. This is demonstrated in Fig.~\ref{F-kinbias-dihadron}. For the highest T2 range, the mean parton momentum probed by the triggered correlation is moved about 10 GeV higher than for the single hadron trigger. The implication of this is naturally a substantial suppression of the trigger rate.
\begin{table}[htb] \begin{tabular}{|l|cccc|} \hline trigger & $f_{glue}^{vac}$ near& $f_{glue}^{vac}$ away& $f_{glue}^{med}$ near& $f_{glue}^{med}$ away\\ \hline h-h & 0.04 & 0.69 & 0.04 & 0.69\\ T2 = 4-8 GeV & 0.071 & 0.49 & 0.07 & 0.38\\ T2 = 8-10 GeV & 0.10 & 0.29 & 0.05 & 0.20\\ \hline \end{tabular} \caption{\label{T-gluefrac-dihadron}Conditional fraction of gluon jets on near and away side given a trigger object in the range of T1 between 12 and 15 GeV and T2 in the indicated range both in vacuum and in 0-10\% central 200 AGeV Au-Au collisions.} \end{table} The evolution of the gluon jet fraction with T2 is shown in Tab.~\ref{T-gluefrac-dihadron}. It is apparent that in vacuum the dominance of quark jets on the near side, with correlated gluon jets on the away side, is progressively broken. This is a natural consequence of the kinematic shift. In the medium, there is a strong gluon filtering effect apparent on the away side, leading to the dominance of correlated quark jets on both near and away side. \begin{figure}[htb] \epsfig{file=IAA_dihadrons_f.eps, width=8cm} \caption{\label{F-IAA-dihadrons}(Color online) Away side associated hadron yield medium modification given a single hadron trigger and a dihadron trigger with T1 = 12-15 GeV and T2 in the indicated range in 0-10\% central 200 AGeV Au-Au collisions.} \end{figure} The resulting away side $I_{AA}(z_T)$ for the different ranges of T2 is shown in Fig.~\ref{F-IAA-dihadrons}. Perhaps not surprisingly, a significant enhancement of the yield above vacuum is found. This is a result of the strong kinematic bias, which shifts parton energy upward and allows more phase space for subleading hadron production, the tangential bias, which reduces the medium modification of the away side shower as compared with the single hadron triggered case, and the parton type bias, which drives the away side towards harder quark jets.
The immediate consequence of these biases is a strong reduction in the rate at which triggers are produced (which is reflected here in the larger statistical errors for the dihadron triggered results, as significantly more events need to be created for this observable than for hadron triggered events). \subsection{Jet finding in triggered events} From an experimental point of view, it is often undesirable to run jet finding algorithms on a set of minimum bias events in heavy-ion collisions, as the vast majority of these events will not contain a hard process and hence significant numerical effort is used to cluster events which are not relevant for the study of hard probes. In such a situation, a triggered event sample, where events are only processed further if they contain a hard track or tower (which can be determined early on), can be used. The STAR jet analysis \cite{STAR-jet-h}, for instance, exemplifies this strategy, whereas jets at ATLAS or CMS do not require such an extra trigger. However, triggering on events in this way introduces a shower bias. Assuming that in addition it is required that the hard track/tower is part of the leading jet, there is a combined bias from both a jet energy and a track energy condition. Since the kinematical and geometry biases are rather different for jets than for leading hadrons, an interesting question is then whether the objects triggered in this way behave more like jets or like hadrons. Obviously this depends on the ratio of clustered jet energy to required hadron energy --- if the hadron is in such a momentum regime that a typical jet contains one or more hadrons at this scale, the bias can be expected to be small (for instance, requiring a 5 GeV hadron in a 100 GeV jet is not expected to bias the jet sample in a significant way, as the vast majority of 100 GeV jets produces hadrons in the 5 GeV range).
On the other hand, once the hadron carries a significant fraction of the total jet energy, jets containing hadrons at such a scale become rare and the additional bias will be substantial. \begin{figure*}[htb] \epsfig{file=FF_jet_track_R0.2_f.eps, width=8cm}\epsfig{file=FF_jet_track_R0.4_f.eps, width=8cm} \caption{\label{F-jet-track}(Color online) Conditional distribution of hadrons at energy $E$ in a vacuum shower originating from a 20 GeV quark, given that the shower clustered using anti-$k_T$ to an energy $E_{jet}>15$ GeV and a trigger hadron with the indicated energy in the same shower for $R=0.2$ (left) and $R=0.4$ (right).} \end{figure*} In Fig.~\ref{F-jet-track} sample calculations with a combined shower bias are shown. The rule of thumb emerging from this and similar studies appears to be that the additional bias becomes relevant once the hadron energy reaches about half of the jet energy, with a fairly weak dependence on the radius used to cluster the jet. Here it has been tacitly assumed that kinematics is such that jet finding typically recovers about 75\% of the parton energy, which according to Fig.~\ref{F-Pz} is not a bad assumption. \section{Conclusions} \subsection{Biases are everywhere} As the results of this work show, biases occur for almost any high $P_T$ observable that is in any sense related to a conditional probability --- be it that an explicit trigger condition is evaluated for the event or be it an implicit condition that a jet needs to be clustered before it can be analyzed. This means that understanding and discussing biases is an integral part of any theoretical analysis of hard probes. The main structure of the biases involved is usually already apparent in the vacuum, and the medium modification to the bias structure of the problem can in many cases be regarded as a correction.
The strength of the medium-induced bias is always apparent from the modification (in most cases suppression) of the trigger rate, which in turn is directly measured in disappearance observables such as nuclear modification factors $R_{AA}$ for various trigger objects. However, the strength of the medium-induced bias does not provide an {\itshape a priori} indication of the modification of conditional yield observables --- some biases (for instance the kinematic bias) may lead to {\itshape increasing} conditional yields despite a suppression of the trigger rate, whereas other biases such as the geometry bias work towards a suppression of conditional yields. \subsection{Biases are important} As for instance Fig.~\ref{F-IAA-jetFF} indicates, taking biases into account properly can change the result of a computation quantitatively and even qualitatively. Thus, a theoretical model calculation in which the fate of unbiased parton showers in a medium is obtained cannot be expected to compare with data based on the notion that the bias of finding a jet is somehow small or would not influence the results in a relevant way. In the particular case of the parallel intra-jet hadron distribution, which experimentally appears unchanged in a medium over a large momentum range \cite{CMS-FF}, a naive comparison with theory without shower bias would find a large discrepancy with the data, and hence lead to the conclusion that a previously not considered mechanism which makes shower evolution in the medium similar to that in vacuum needs to be introduced. However, taking the shower bias into account properly, the need for any additional mechanism goes away. Biases are at least equally important for triggered correlation measurements --- for these, however, this is usually expected, although the relative strength of different biases can lead to counter-intuitive results when one e.g.
expects suppression of a yield based on the geometry bias whereas in the actual situation the kinematical bias dominates, leading to a net enhancement. \subsection{Use of biases} Biases can appear as a nuisance in cases where they suppress the physics one is interested in studying; perhaps the most striking illustration is the shower bias for a hadron or jet trigger (see Figs.~\ref{F-IAA-track} and \ref{F-IAA-jet}), where strong medium modifications which are {\itshape a priori} present in the shower are suppressed by the bias, with the effect that $I_{AA}$ is driven towards unity. Such nuisance biases should be avoided if possible --- in the context of shower biases, this can be done at the simple expense of separating trigger side and observable side, i.e. studying away side jets with a trigger hadron on the near side (as suggested e.g. in \cite{Peter_Jets}). However, in many cases biases can be utilized by the design of a measurement to control the relevant parameters of the hard process and to specifically probe the dependence of a medium modification on a single control parameter. As an example, consider a comparison of jet-h and $\gamma$-h correlations at RHIC kinematics. According to Fig.~\ref{F-geo-RHIC}, the geometrical bias of a sufficiently inclusive jet definition is very weak, i.e. in this case $\gamma$-h and jet-h correlations probe almost the same geometry. According to Fig.~\ref{F-kinbias-RHIC}, they also have a fairly similar kinematical bias, and a small shift in trigger energy range can make the underlying parton energy on average the same. The main difference between the two situations is then given by Tab.~\ref{T-gluefrac-RHIC}, from which one can read off that the $\gamma$-h correlation produces a high fraction of quark jets on the away side whereas the jet-h correlation is dominated by away side gluon jets. Thus, in a measurement the bias can be designed to specifically probe the different evolution of quark and gluon jets in the medium.
In a similar way, the geometry bias can be systematically varied by changing the constituent cut used for the clustering (cf. Fig.~\ref{F-geo-RHIC}). If the variation of parton kinematics associated with the change is compensated for by a change in the trigger energy range, the measurement can be made to probe various regions of the medium selectively. Of course, such designed observables are inevitably to some degree model-dependent. However, in most cases the dominant structure of e.g. the kinematical or parton type bias is given by vacuum QCD, on top of which the medium-induced bias is a correction. This means that parameters like the necessary shift in trigger energy range can be determined approximately by well-known vacuum physics and do not have to rely on a particular model of parton-medium interaction in a significant way. \section{Summary} The ancient Chinese strategist Sun Tzu writes in his ``Art of War'': {\itshape It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle.} Similarly, one might summarize the results of this work as: {\itshape If you understand parton-medium interaction and the involved biases, everything will become clear. If you understand parton-medium interaction but not the biases, some observables will make sense, others will appear as puzzles; but if you have neither an understanding of biases nor a good model of parton-medium interaction, you cannot know the implication of any hard probe.} \begin{acknowledgments} I'd like to thank H.~Caines, J.~Putschke, P.~Jacobs, A.~Ohlson, M.~Connors, O.~Evdokimov, B.~Cole, G.~Roland, P.~Steinberg, M.~van Leeuwen and K.~Loizides for interesting discussions which all in some way led to this paper.
This work is supported by the Academy researcher program of the Academy of Finland, Project No. 130472. \end{acknowledgments}
\section{Generalised Archimedean Copula Models for Currency Exchange Rate Baskets} As noted in the previous section, we develop flexible mixture Archimedean copula models for the currency baskets. Such models have the advantage that they produce asymmetric dependence relationships in the upper and lower tails of the multivariate model. In addition, we can perform a type of model selection by incorporating into the estimation the mixture weights associated with the dependence hypothesis embodied by each mixture component. We consider three models: two Archimedean mixture models and one outer power transformed Clayton copula. The mixture models considered are the Clayton-Gumbel mixture and the Clayton-Frank-Gumbel mixture, where the Frank component allows for periods of no tail dependence within the basket as well as negative dependence. We fit these copula models to each of the long and short baskets separately. \begin{definition}[Mixture Copula] A mixture copula is a linear weighted combination of copulae of the form: \begin{equation} C_M(\mathbf{u}; \mathbf{\theta}) = \sum_{i=1}^N w_i C_i ({\bf u};{\bf \theta_i}) \end{equation} where $0 \leq w_i \leq 1 \;\; \forall i \in \{1, \ldots, N\}$ and $\sum_{i=1}^N w_i = 1$. \end{definition} Thus we can combine a copula with lower tail dependence, a copula with positive or negative dependence and a copula with upper tail dependence to produce a more flexible copula capable of modelling the multivariate log returns of forward exchange rates of a basket of currencies.
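As a bivariate illustration of the definition above (the function names and parameter values are hypothetical, chosen only for this sketch; the estimation in this paper works in higher dimensions), the following combines one-parameter Clayton, Frank and Gumbel copulae into a weighted mixture:

```python
import math

def clayton(u, v, theta):
    # Archimedean copula with lower tail dependence (theta > 0)
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def gumbel(u, v, theta):
    # Archimedean copula with upper tail dependence (theta >= 1)
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def frank(u, v, theta):
    # No tail dependence; theta < 0 gives negative dependence
    num = math.expm1(-theta * u) * math.expm1(-theta * v)
    return -math.log(1.0 + num / math.expm1(-theta)) / theta

def cfg_mixture(u, v, weights, thetas):
    """C_M(u, v) = sum_i w_i C_i(u, v; theta_i) with Clayton, Frank
    and Gumbel components, as in the Mixture Copula definition."""
    assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0 for w in weights)
    return sum(w * c(u, v, t)
               for w, c, t in zip(weights, (clayton, frank, gumbel), thetas))
```

Since each component satisfies the uniform-margin condition $C(u,1)=u$, so does the mixture; tilting the weights towards the Clayton or Gumbel component shifts mass towards lower or upper tail dependence respectively.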
\begin{definition}[Archimedean Generator] An Archimedean generator is a continuous, decreasing function $\psi:[0, \infty) \rightarrow [0, 1]$ which satisfies the following conditions: \begin{enumerate} \item $\psi(0) = 1$, \item $\psi(\infty) = \lim_{t \rightarrow \infty} \psi(t) = 0$, \item $\psi$ is strictly decreasing on $[0, \inf\{t: \psi(t) = 0\}]$ \end{enumerate} \label{archm_gen} \end{definition} \begin{definition}[Archimedean Copula] A d-dimensional copula C is called Archimedean if it can be represented as: \begin{equation} C({\bf u}) = \psi \{\psi^{-1}(u_1) + \cdots + \psi^{-1}(u_d)\} = \psi \{t({\bf u}) \} \;\;\;\; \forall {\bf u} \in [0, 1]^d \label{eq:archimedean_copula} \end{equation} where $\psi$ is an Archimedean generator and $\psi^{-1}:[0, 1] \rightarrow [0, \infty)$ is the inverse generator with $\psi^{-1}(0) = \inf\{t: \psi(t) = 0\}$. \end{definition} One can then obtain formulas for computing the copula densities, of relevance to parameter estimation under a maximum likelihood approach. We can see from Equation~(\ref{eq:density}) that we are required to compute high dimensional derivatives of a composite function. In order to achieve this we utilise a specific multivariate chain rule result widely known as Fa\`{a} di Bruno's Formula; see \cite{faa1857note} and discussions in, for example, \cite{constantine1996multivariate} and \cite{roman1980formula}. \begin{definition}[Archimedean Copula Density] \cite{mcneil2009} prove that an Archimedean copula C admits a density c if and only if $\psi^{(d-1)}$ exists and is absolutely continuous on $(0, \infty)$.
When this condition is satisfied, the copula density c is given by \begin{equation} c({\bf u})\;\; = \;\; \frac{ \partial^d C(u_1, \ldots , u_d)}{\partial u_1 \ldots \partial u_d} \;\; = \;\; \psi^{(d)} \{t({\bf u})\} \prod_{j=1}^d (\psi^{-1})'(u_j) \;\; , \;\;\;\; {\bf u} \in (0, 1)^d \label{eq:density} \end{equation} \end{definition} \section{Exchange Rate Multivariate Data Description and Currency Portfolio Construction} In our study we fit copula models to the high interest rate basket and the low interest rate basket, updated for each day in the period 02/01/1989 to 29/01/2014, using log return forward exchange rates at one month maturities in a sliding window analysis covering, on each trading day, both the previous 6 months and the previous year. Our empirical analysis consists of daily exchange rate data for a set of 34 currency exchange rates relative to the USD, as in \cite{Menkhoff2012}. The currencies analysed included: Australia (AUD), Brazil (BRL), Canada (CAD), Croatia (HRK), Cyprus (CYP), Czech Republic (CZK), Egypt (EGP), Euro area (EUR), Greece (GRD), Hungary (HUF), Iceland (ISK), India (INR), Indonesia (IDR), Israel (ILS), Japan (JPY), Malaysia (MYR), Mexico (MXN), New Zealand (NZD), Norway (NOK), Philippines (PHP), Poland (PLN), Russia (RUB), Singapore (SGD), Slovakia (SKK), Slovenia (SIT), South Africa (ZAR), South Korea (KRW), Sweden (SEK), Switzerland (CHF), Taiwan (TWD), Thailand (THB), Turkey (TRY), Ukraine (UAH) and the United Kingdom (GBP). We have considered daily settlement prices for each currency exchange rate as well as the daily settlement price for the associated 1 month forward contract. We utilise the same dataset (albeit starting in 1989 rather than 1983 and running up until January 2014) as studied in \cite{Lustig2011} and \cite{Menkhoff2012} in order to replicate their portfolio returns without tail dependence risk adjustments. Due to differing market closing days, e.g.
national holidays, there was missing data for a couple of currencies and for a small number of days. For missing prices, the previous day's closing prices were retained. As was demonstrated in Equation (\ref{UIP}), the differential of interest rates between two countries can be estimated through the ratio of the forward contract price and the spot price; see \cite{Juhl2006}, who show that this holds empirically on a daily basis. Accordingly, instead of considering the differential of risk free rates between the reference and the foreign countries, we build our respective baskets of currencies with respect to the ratio of the forward and the spot prices for each currency. On a daily basis we compute this ratio for each of the $n$ currencies (available in the dataset on that day) and then build five baskets. The first basket gathers the $n/5$ currencies with the highest positive differential of interest rate with the US dollar. These currencies thus represent the ``investment'' currencies, through which we invest the money to benefit from the currency carry trade. The last basket gathers the $n/5$ currencies with the highest negative differential (or at least the lowest differential) of interest rate. These currencies thus represent the ``financing'' currencies, through which we borrow the money to build the currency carry trade. Given this classification, we then investigate the joint distribution of each group of currencies to understand the impact of the currency carry trade, embodied by the differential of interest rates, on currency returns. In our analysis we concentrate on the high interest rate basket (investment currencies) and the low interest rate basket (funding currencies), since typically when implementing a carry trade strategy one would go short the low interest rate basket and go long the high interest rate basket.
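A minimal sketch of this daily ranking step (hypothetical function name and toy ticker values, not market data; the sign convention follows Equation (\ref{UIP}), under which a low forward/spot ratio corresponds to a high foreign interest rate):

```python
def build_baskets(fwd_over_spot, n_baskets=5):
    """Rank currencies by their forward/spot ratio and return the
    investment basket (highest interest rate differential vs. the USD)
    and the funding basket (lowest differential), each of size
    n // n_baskets.  Under Eq. (UIP), F/S = exp((r - r*)(T - t)),
    so a LOW ratio marks a HIGH foreign interest rate r*."""
    ranked = sorted(fwd_over_spot, key=fwd_over_spot.get)
    size = max(1, len(ranked) // n_baskets)
    investment = ranked[:size]    # lowest F/S: highest foreign rates
    funding = ranked[-size:]      # highest F/S: lowest foreign rates
    return investment, funding

# Toy forward/spot ratios (illustrative only):
ratios = {"TRY": 0.980, "BRL": 0.985, "AUD": 0.995, "NZD": 0.996,
          "GBP": 0.999, "CAD": 1.000, "SEK": 1.001, "EUR": 1.002,
          "CHF": 1.003, "JPY": 1.004}
invest, fund = build_baskets(ratios)
```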
\section{Conclusion} \label{conclusion} In this paper, we have shown that the positive and negative multivariate tail risk exposures present in currency carry trade baskets are additional factors needing careful consideration when one constructs a carry portfolio. Ignoring these exposures leads to a perceived risk return profile that is not reflective of the true nature of such a strategy. In terms of marginal model selection, it was shown that one is indifferent between the log Generalised Gamma model and the frequently used GARCH(1,1) model. However, in combination with the three different Archimedean copula models considered in this paper, the log Generalised Gamma marginals provided a better overall model fit. \section{Currency Basket Model Estimations via Inference Functions for the Margins} \label{section:likelihood} The inference functions for margins (IFM) technique introduced in \cite{Joe1996} provides a computationally faster method for estimating parameters than full maximum likelihood (i.e. simultaneously maximising over all model parameters) and in many cases produces a more stable likelihood estimation procedure. This two stage estimation procedure was studied with regard to its asymptotic relative efficiency compared with maximum likelihood estimation in \cite{Joe2005} and in \cite{Hafner2010}. It can be shown that the IFM estimator is consistent under weak regularity conditions. In modelling parametrically the marginal features of the log return forward exchange rates, we wanted flexibility to capture a broad range of skew-kurtosis relationships as well as potential for sub-exponential heavy tailed features. In addition, we wished to keep the models to a selection which is efficient for inference and easily interpretable.
We consider a flexible three parameter model for the marginal distributions given by the Log-Generalized-Gamma distribution (l.g.g.d.), see details in \cite{lawless1980inference}, where $Y$ has a l.g.g.d. if $Y = \log (X)$ such that $X$ has a g.g.d. The density of $Y$ is given by \begin{equation} \label{EqnLGGD} f_{Y}(y; k,u,b) = \frac{1}{b \Gamma(k)}\exp\left[k\left(\frac{y - u}{b} \right) - \exp\left(\frac{y-u}{b}\right) \right], \end{equation} with $u = \log (\alpha)$, $b = \beta^{-1}$, and the support of the l.g.g.d. is $y \in \mathbb{R}$. This flexible three parameter model admits the LogNormal model as a limiting case (as $k \rightarrow \infty$). In addition, the g.g.d. also includes the exponential model $(\beta=k=1)$, the Weibull distribution $(k=1)$ and the Gamma distribution $(\beta=1)$. As an alternative to the l.g.g.d. model we also consider a time series approach to modelling the marginals, given by the GARCH($p$,$q$) model, as described in \cite{bollerslev1986generalized} and \cite{brechmann2012risk}, and characterised by the error variance: \begin{equation} \sigma_t^2 = \alpha_0 + \sum\limits_{i=1}^q \alpha_i \epsilon_{t-i}^2 + \sum\limits_{i=1}^p \beta_i \sigma_{t-i}^2 \;\; . \end{equation} \subsection{Stage 1: Fitting the Marginal Distributions via MLE} The estimation of the three model parameters in the l.g.g.d. can be challenging due to the fact that a wide range of model parameters, especially for $k$, can produce similar resulting density shapes (see discussions in \cite{lawless1980inference}). To overcome this complication and to make the estimation efficient, we utilise profile likelihood methods over a grid of values for $k$, performing profile-likelihood-based MLE estimation over the other two parameters $b$ and $u$ for each value of $k$.
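Equation (\ref{EqnLGGD}) can be transcribed directly; the following sketch (toy parameter values, hypothetical helper name) also checks numerically that the density integrates to one:

```python
import math

def lggd_pdf(y, k, u, b):
    """Log-Generalized-Gamma density of Eq. (EqnLGGD):
    f(y) = exp[k*(y-u)/b - exp((y-u)/b)] / (b * Gamma(k))."""
    z = (y - u) / b
    return math.exp(k * z - math.exp(z)) / (b * math.gamma(k))

# Riemann-sum check (toy parameters k=2, u=0, b=1) that the density
# integrates to one over an interval wide enough to capture both tails.
h = 30.0 / 20000
mass = h * sum(lggd_pdf(-15.0 + i * h, 2.0, 0.0, 1.0) for i in range(20001))
```

Setting $k=1$, $u=0$, $b=1$ recovers the exponential special case of the g.g.d. noted above, for which $f_Y(0) = e^{-1}$.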
The differentiation of the profile likelihood for a given value of $k$ produces the system of two equations: \begin{equation} \exp(\tilde\mu) = \left[ \frac{1}{n}\sum_{i=1}^n \exp\left(\frac{y_i}{\tilde\sigma\sqrt{k}}\right) \right]^{\tilde\sigma \sqrt{k}} \hspace{5mm} ; \hspace{5mm} \frac{\sum_{i=1}^n y_i \exp\left(\frac{y_i}{\tilde\sigma\sqrt{k}}\right)}{\sum_{i=1}^n \exp\left(\frac{y_i}{\tilde\sigma\sqrt{k}}\right)} - \overline{y} - \frac{\tilde\sigma}{\sqrt{k}} = 0 \; , \label{lggd_mle} \end{equation} where $n$ is the number of observations, $y_i = \log x_i$, $\tilde\sigma = b/\sqrt{k}$ and $\tilde\mu = u + b \, \log k$. The second equation is solved directly via a simple root search to give an estimate of $\tilde{\sigma}$, and then substitution into the first equation results in an estimate of $\tilde{\mu}$. Note that for each value of $k$ in the grid we obtain the pair of parameter estimates $\tilde\mu$ and $\tilde\sigma$, which can then be plugged back into the profile likelihood to make it purely a function of $k$; the estimator for $k$ is then selected as the grid value with the maximum likelihood score. As a comparison we also fit the GARCH(1,1) model with the MATLAB MFEtoolbox using the default settings. \subsection{Stage 2: Fitting the Mixture Copula via MLE} In order to fit the copula model, the parameters are estimated using maximum likelihood on the data after conditioning on the selected marginal distribution models and their corresponding estimated parameters obtained in Stage 1. These models are utilised to transform the data using the CDF with the l.g.g.d. MLE parameters ($\hat k$, $\hat u$ and $\hat b$) or using the conditional variances to obtain standardised residuals for the GARCH model.
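The Stage 1 root search of Equation (\ref{lggd_mle}) can be sketched as follows for a single grid value of $k$ (simulated data and plain bisection are used purely for illustration; exponents are shifted by $\max_i y_i$ for numerical stability, which leaves both equations unchanged):

```python
import math
import random

def profile_mle(y, k, lo=0.05, hi=10.0, tol=1e-9):
    """For fixed k, solve the second equation of Eq. (lggd_mle) for
    sigma-tilde by bisection, then back out mu-tilde from the first."""
    n = len(y)
    ybar = sum(y) / n
    sk = math.sqrt(k)
    m = max(y)  # shift exponents; the weighted mean is shift-invariant

    def g(sigma):
        c = sigma * sk
        e = [math.exp((yi - m) / c) for yi in y]
        return sum(yi * ei for yi, ei in zip(y, e)) / sum(e) - ybar - sigma / sk

    left, right = lo, hi
    assert g(left) > 0 > g(right), "root must be bracketed by [lo, hi]"
    while right - left > tol:
        mid = 0.5 * (left + right)
        if g(mid) > 0:
            left = mid
        else:
            right = mid
    sigma = 0.5 * (left + right)
    c = sigma * sk
    # First equation in log form: mu = c * log((1/n) * sum exp(y_i / c))
    mu = m + c * math.log(sum(math.exp((yi - m) / c) for yi in y) / n)
    return mu, sigma

# Simulated data: if W ~ Gamma(k, 1) then u + b*log(W) has the l.g.g.d.
# of Eq. (EqnLGGD); true values are mu = u + b*log(k), sigma = b/sqrt(k).
random.seed(7)
k, u, b = 2.0, 0.5, 1.0
y = [u + b * math.log(random.gammavariate(k, 1.0)) for _ in range(5000)]
mu_hat, sigma_hat = profile_mle(y, k)
```

Repeating this over the grid of $k$ values and scoring each $(\tilde\mu, \tilde\sigma)$ pair in the profile likelihood then yields the estimator for $k$, as described above.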
Therefore, in this second stage of MLE estimation we aim to estimate either the one parameter mixture of CFG components with parameters ${\bf\underline\theta} = (\rho_{clayton}, \rho_{frank}, \rho_{gumbel}, \lambda_{clayton}, \lambda_{frank}, \lambda_{gumbel})$, the one parameter mixture of CG components with parameters ${\bf\underline\theta} = (\rho_{clayton}, \rho_{gumbel}, \lambda_{clayton}, \lambda_{gumbel})$ or the two parameter outer power transformed Clayton with parameters ${\bf\underline\theta} = (\rho_{clayton}, \beta_{clayton})$. The log likelihood expression for the mixture copula models is given generically by: \begin{equation} l({\bf\underline\theta}) = \sum_{i=1}^n \log c(F_1(X_{i1};\hat\mu_1, \hat\sigma_1), \dots, F_d(X_{id};\hat\mu_d, \hat\sigma_d)) \; + \; \sum_{i=1}^n \sum_{j=1}^d \log f_j(X_{ij};\hat\mu_j, \hat\sigma_j). \label{loglik} \end{equation} This optimization is achieved via an iterative gradient descent algorithm, which was found to be quite robust for the likelihood surfaces these models produce on the real data. Alternative estimation procedures such as expectation-maximisation were not found to be required. \FloatBarrier \section{Currency Carry Trade and Uncovered Interest Rate Parity} One of the most robust puzzles in finance still to be satisfactorily explained is the uncovered interest rate parity puzzle and the associated excess average returns of currency carry trade strategies. Such trading strategies are popular approaches which involve constructing portfolios by selling low interest rate currencies in order to buy higher interest rate currencies, thus profiting from the interest rate differentials. The presence of such profit opportunities, pointed out by \cite{Hansen1980,Fama1984,backus2001affine} and more recently by \cite{Lustig2007,Brunnermeier2008,burnside2011peso,christiansen2011time,Lustig2011,Menkhoff2012}, violates the fundamental relationship of uncovered interest rate parity (UIP).
The UIP refers to the parity condition in which exposure to foreign exchange risk, with unanticipated changes in exchange rates, is uninhibited and therefore if one assumes rational risk-neutral investors, then changes in the exchange rates should offset the potential to profit from the interest rate differentials between high interest rate (investment) currencies and low interest rate (funding) currencies. We can more formally write this relation by assuming that the forward price, $F_{t}^{T}$, is a martingale under the risk neutral probability $\mathbb{Q}$ (\cite{musiela2011martingale}): \begin{align} E_{\mathbb{Q}}\Bigg[\frac{S_{T}}{S_{t}}\Bigg|\mathcal{F}_{t}\Bigg]=\frac{F_{t}^{T}}{S_{t}}=e^{(r_{t}-r_{t}^{\star})(T-t)}. \label{UIP} \end{align} The UIP Equation~(\ref{UIP}) thus states that under the risk neutral probability the expected variation of the exchange rate $S_{t}$ should equal the differential between the interest rate of the two associated countries, denoted by respectively $r_{t}$ and $r_{t}^{\star}$. The currency carry trade strategy investigated in this paper aims at exploiting violations of the UIP relation by investing a certain amount in a basket of high interest rate currencies (the long basket) while funding it through a basket of low interest rate currencies (the short basket). When the UIP holds, then given foreign exchange market equilibrium, no profit should arise on average from this strategy; however such opportunities are routinely observed and exploited by large volume trading strategies. In this paper we build on the existing literature by studying a stochastic feature of the joint tail behaviours of the currencies within each of the long and the short baskets, which form the carry trade. 
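The parity relation in Equation~(\ref{UIP}) can be inverted to read off the interest rate differential implied by a forward quote, $r_t - r_t^{\star} = \mbox{log}(F_t^T/S_t)/(T-t)$. A small illustrative helper (the function names and quoted numbers are ours, not from the paper):

```python
import math

def implied_rate_differential(forward, spot, tau):
    """Annualised r - r* implied by inverting F/S = exp((r - r*) * tau)."""
    return math.log(forward / spot) / tau

def forward_price(spot, diff, tau):
    """Forward price implied by the spot and an annualised interest rate differential."""
    return spot * math.exp(diff * tau)

# A one-month forward (tau = 1/12) quoted 1% above spot implies an annualised
# differential of 12 log(1.01), roughly 12%:
diff = implied_rate_differential(1.01, 1.00, 1.0 / 12.0)
```

Under UIP this differential would be offset on average by the expected exchange rate move; the carry trade profits precisely when it is not.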
We aim to explore to what extent one can attribute the excess average returns to compensation for exposure to tail risk, for example either dramatic depreciations in the value of the high interest rate currencies or dramatic appreciations in the value of the low interest rate currencies in times of high market volatility. We postulate that such analyses should also benefit from consideration not only of the marginal behaviours of the processes under study, in this case the exchange rates of currencies in a portfolio, but also a rigorous analysis of the joint dependence features of such relationships. We investigate such joint relationships in light of the UIP condition. To achieve this, we study the probability of joint extreme movements in the funding and investment currency baskets and interpret these extremal tail probabilities as relative risk exposures of adverse and beneficial joint currency movements which would affect the portfolio returns. This allows us to obtain a relative contribution to the exposure of the portfolio profit decomposed in terms of the downside and upside risks contributed by such tail dependence features in each currency basket. We argue that the analysis of the carry trade is better informed by jointly modelling the multivariate behaviour of the marginal processes of currency baskets accounting for potential multivariate extremes, whilst still incorporating heavy-tailed relationships studied in marginal processes. We fit mixture copula models to vectors of daily exchange rate log returns between 1989 and 2014 for both the investment and funding currency baskets making up the carry trade portfolio. The method and the dataset considered for the construction of the respective funding and investing currency baskets are thoroughly described in \cite{Ames2013}.
The currency compositions of the funding and investment baskets vary daily as a function of the interest rate differential process for each currency relative to the USD. Our analysis concludes that the appealingly high return profile of a carry portfolio compensates not only for the tail thickness of each individual component's probability distribution but also for the fact that extreme returns tend to occur simultaneously, leading to a portfolio particularly sensitive to the risk of what is known as drawdown. Furthermore, we also demonstrate that high interest rate currency baskets and low interest rate currency baskets can display periods during which the tail dependence gets inverted, indicating periods during which investors are constructing the aforementioned carry positions. \section{Interpreting Tail Dependence as Financial Risk Exposure in Carry Trade Portfolios} \label{joint_tail_risk_exposure} In order to fully understand the tail risks of joint exchange rate movements present when one invests in a carry trade strategy we can look at both the downside extremal tail exposure and the upside extremal tail exposure within the funding and investment baskets that comprise the carry portfolio. The downside tail exposure can be seen as the crash risk of the basket, i.e. the risk that one will suffer large joint losses from each of the currencies in the basket. These losses would be the result of joint appreciations of the currencies one is short in the low interest rate basket and/or joint depreciations of the currencies one is long in the high interest rate basket.
\begin{definition}[Downside Tail Risk Exposure in Carry Trade Portfolios] \\Consider the funding currency (short) basket with $n$ exchange rates relative to the base currency, on day $t$, with currency log-returns {\scriptsize{ $\smash{(X^{(1)}_t,X^{(2)}_t,\ldots,X^{(n)}_t)}$.}} Then the downside tail exposure risk for the carry trade will be defined as the conditional probability of adverse currency movements in the short basket, corresponding to its upper tail dependence, given by {\scriptsize{ \begin{equation}\label{EqnDown1} \lambda_u^{(i)} (u) := \mathbb{P}\text{r}\left(X^{(i)}_t > F_i^{-1}(u)|X^{(1)}_t > F_1^{-1}(u),\ldots,X^{(i-1)}_t>F_{i-1}^{-1}(u), X^{(i+1)}_t>F_{i+1}^{-1}(u),\ldots,X^{(n)}_t>F_n^{-1}(u)\right) \end{equation} }} for a currency of interest $i \in \left\{1,2,\ldots,n\right\}$. Conversely the downside tail exposure for the investment (long) basket with $n$ currencies will be defined as the conditional probability of adverse currency movement in the long basket, given by {\scriptsize{ \begin{equation}\label{EqnDown2} \lambda_l^{(i)}(u) := \mathbb{P}\text{r}\left(X^{(i)}_t < F_i^{-1}(u)|X^{(1)}_t < F_1^{-1}(u),\ldots,X^{(i-1)}_t < F_{i-1}^{-1}(u), X^{(i+1)}_t < F_{i+1}^{-1}(u),\ldots,X^{(n)}_t < F_n^{-1}(u)\right). \end{equation} }} In general then a basket's upside or downside risk exposure would be quantified by the probability of a loss (or gain) arising from an appreciation or depreciation jointly of magnitude $u$ and the dollar cost associated with a given loss/gain of this magnitude. The standard approach in economics would be to associate, say, a linear cost function in $u$ with such a probability of loss, giving the downside risk exposure in dollars according to $E(u) = C_u({F_{X_t^{(i)}}(u)}) \times \lambda_u(u)$, which will be a function of the level $u$. As $\lambda_u$ becomes independent of the marginals, i.e. as $u \rightarrow 0$ or $u \rightarrow 1$, $C$ also becomes independent of the marginals.
\end{definition} Conversely, we will also define the upside tail exposure that will contribute to profitable returns in the carry trade strategy when extreme movements occur in favour of the carry position held. These correspond to precisely the probabilities discussed above applied in the opposite tail direction: the upside risk exposure in the funding (short) basket is given by the joint lower tail form of Equation~(\ref{EqnDown2}) applied to the short basket, and the upside risk exposure in the investment (long) basket by the joint upper tail form of Equation~(\ref{EqnDown1}) applied to the long basket. That is, the upside tail exposure of the carry trade strategy is defined to be the risk that one will earn large joint profits from each of the currencies in the basket. These profits would be the result of joint depreciations of the currencies one is short in the low interest rate basket and/or joint appreciations of the currencies one is long in the high interest rate basket. \begin{rems} In a basket with $n$ currencies, $n \geq 2$, if one considers capturing the upside and downside financial risk exposures from a model based calculation of these extreme probabilities then if the parametric model is exchangeable, such as an Archimedean copula, then swapping currency $i$ in Equation (\ref{EqnDown1}) and Equation (\ref{EqnDown2}) with another currency from the basket, say $j$, will not alter the downside or upside risk exposures. If they are not exchangeable then one can consider upside and downside risks for each individual currency in the carry trade portfolio. \end{rems} We thus consider these tail upside and downside exposures of the carry trade strategy as features that can show that even though average profits may be made from the violation of UIP, it comes at significant tail exposure.
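Before turning to parametric models, the conditional probability in Equation~(\ref{EqnDown1}) has a direct empirical (sample) counterpart; the lower tail version of Equation~(\ref{EqnDown2}) follows by flipping the inequalities. A sketch with our own function name, not from the paper:

```python
import numpy as np

def empirical_upper_tail_exposure(X, i, u):
    """Empirical analogue of Equation (EqnDown1):
    P(X_i > F_i^{-1}(u) | X_j > F_j^{-1}(u) for all j != i) from a sample of returns.

    X is a (T, n) array of basket log returns, i the currency of interest, u the level.
    """
    q = np.quantile(X, u, axis=0)                       # marginal u-quantiles
    others = np.all(np.delete(X > q, i, axis=1), axis=1)
    if not others.any():
        return np.nan                                   # conditioning event never observed
    return float(np.mean(X[others, i] > q[i]))
```

For comonotonic returns the exposure is 1 at every level, while for independent returns it collapses towards $1-u$, which is why the conditional probability isolates the dependence contribution to basket risk.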
We can formalise the notion of the dependence behaviour in the extremes of the multivariate distribution through the concept of tail dependence, i.e. the limiting behaviour of Equations (\ref{EqnDown1}) and (\ref{EqnDown2}) as $u \uparrow 1$ and $u \downarrow 0$. The interpretation of such quantities is then directly relevant to assessing the chance of large adverse movements in multiple currencies which could potentially increase the risk associated with currency carry trade strategies significantly, compared to risk measures which only consider the marginal behaviour in each individual currency. Under certain statistical dependence models these extreme upside and downside tail exposures can be obtained analytically. We develop a flexible copula mixture example that has such properties below. \section{Generalised Archimedean Copula Models for Currency Exchange Rate Baskets} In order to study the joint tail dependence in the investment or funding basket we consider an overall tail dependence analysis which is parametric model based, obtained by using flexible mixtures of Archimedean copula components. Such a model approach is reasonable since typically the number of currencies in each of the long basket (investment currencies) and the short basket (funding currencies) is 4 or 5. In addition these models have the advantage that they produce asymmetric dependence relationships in the upper tails and the lower tails in the multivariate model. We consider three models: two Archimedean mixture models and one outer power transformed Clayton copula. The mixture models considered are the Clayton-Gumbel mixture and the Clayton-Frank-Gumbel mixture, where the Frank component allows for periods of no tail dependence within the basket as well as negative dependence. We fit these copula models to each of the long and short baskets separately.
\begin{definition}[Mixture Copula] A mixture copula is a linear weighted combination of copulae of the form: \begin{equation} C_M(\mathbf{u}; \mathbf{\theta}) = \sum_{i=1}^N \lambda_i C_i ({\bf u};{\bf \theta_i}), \end{equation} where $0 \leq \lambda_i \leq 1 \;\; \forall i \in \{1, ..., N\}$ and $\sum_{i=1}^N \lambda_i = 1$. \end{definition} \begin{definition}[Archimedean Copula] A $d$-dimensional copula $C$ is called Archimedean if it can be represented by the form: \begin{equation} C({\bf u}) = \psi \{\psi^{-1}(u_1) + \cdots + \psi^{-1}(u_d)\} = \psi \{t({\bf u}) \} \;\;\;\; \forall {\bf u} = \{u_1, \ldots, u_d\} \in [0, 1]^d , \label{eq:archimedean_copula} \end{equation} where $\psi$ is an Archimedean generator satisfying the conditions given in \cite{McNeil2009}. $\psi ^{-1}:[0, 1] \rightarrow [0, \infty)$ is the inverse generator with $\psi^{-1}(0) = \mbox{inf}\{t: \psi(t) = 0\}$. \end{definition} Estimation of the multivariate basket returns then proceeds in two stages: firstly the estimation of suitable heavy tailed marginal models for the currency exchange rates (relative to USD), followed by the estimation of the dependence structure of the multivariate model composed of multiple exchange rates in currency baskets for long and short positions. Once the parametric Archimedean mixture copula model has been fitted to a basket of currencies, it is possible to obtain the upper and lower tail dependence coefficients, via closed form expressions for the class of mixture copula models and outer-power transform models we consider. The tail dependence expressions for many common bivariate copulae can be found in \cite{Nelsen2006}. This concept was recently extended to the multivariate setting by \cite{de2012multivariate}.
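The mixture and Archimedean definitions above can be made concrete in the bivariate case. The following sketch writes the Clayton and Gumbel members through their generators and forms a two-component mixture density of the kind entering the copula term of Equation~(\ref{loglik}); the paper works with $d$-variate baskets and also a Frank component, which we omit here for brevity, and the function names are ours. The closed form densities are the standard ones (see \cite{Nelsen2006}).

```python
import numpy as np

# Bivariate members of the mixture, written through their generators.
def clayton_cdf(u, v, th):
    # psi(t) = (1 + t)^(-1/th), psi^{-1}(s) = s^(-th) - 1
    return (u**(-th) + v**(-th) - 1.0)**(-1.0 / th)

def clayton_pdf(u, v, th):
    s = u**(-th) + v**(-th) - 1.0
    return (1.0 + th) * (u * v)**(-th - 1.0) * s**(-2.0 - 1.0 / th)

def gumbel_cdf(u, v, th):
    # psi(t) = exp(-t^(1/th)), psi^{-1}(s) = (-log s)^th
    return np.exp(-((-np.log(u))**th + (-np.log(v))**th)**(1.0 / th))

def gumbel_pdf(u, v, th):
    x, y = -np.log(u), -np.log(v)
    s = x**th + y**th
    return (gumbel_cdf(u, v, th) * (x * y)**(th - 1.0) / (u * v)
            * s**(1.0 / th - 2.0) * (s**(1.0 / th) + th - 1.0))

def mixture_pdf(u, v, lam, th_c, th_g):
    """Density of a two-component Clayton-Gumbel mixture with weight lam on Clayton."""
    return lam * clayton_pdf(u, v, th_c) + (1.0 - lam) * gumbel_pdf(u, v, th_g)
```

Each density can be checked against a central finite-difference mixed partial of the corresponding CDF, which is a useful sanity test before running the Stage 2 optimisation.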
\begin{definition}[Generalized Archimedean Tail Dependence Coefficient] Let $X = (X_1,..., X_n)^T$ be an $n$-dimensional random vector with distribution \newline $C(F_1(X_1), \ldots, F_n(X_n))$, where $C$ is an Archimedean copula and $F_1, ..., F_n$ are the marginal distributions. The coefficients of upper and lower tail dependence are defined respectively as: \small{ \begin{equation} \begin{aligned} \lambda_u^{1,...,h|h+1,...,n} &= \lim_{u \rightarrow 1^-} P\left( X_1 > F_1^{-1}(u),...,X_h > F_h^{-1}(u) | X_{h+1} > F_{h+1}^{-1}(u), ..., X_n > F_n^{-1}(u) \right) \\ &= \lim_{t \rightarrow 0^+} \frac{\sum_{i=1}^n \left( \binom{n}{n-i} i (-1)^{i} \left[ \psi^{'} (it) \right] \right) }{\sum_{i=1}^{n-h} \left( \binom{n-h}{n-h-i} i (-1)^i \left[ \psi^{'} (it) \right] \right)} \;\;\;\; , \end{aligned} \label{eq:archmuppertd} \end{equation}} \begin{equation} \begin{aligned} \lambda_l^{1,...,h|h+1,...,n} &= \lim_{u \rightarrow 0^+} P \left( X_1 < F_1^{-1}(u),...,X_h < F_h^{-1}(u) | X_{h+1} < F_{h+1}^{-1}(u), ..., X_n < F_n^{-1}(u) \right) \\ &= \lim_{t \rightarrow \infty} \frac{n}{n-h} \frac{\psi^{'} (nt)}{\psi^{'} ((n-h)t)} \end{aligned} \label{eq:archmlowertd} \end{equation} \normalsize for the model dependence function `generator' $\psi(\cdot)$ and its inverse function. \end{definition} In \cite{de2012multivariate} the analogous form of the generalized multivariate upper and lower tail dependence coefficients for outer-power transformed Clayton copula models is provided. The derivation of Equations (\ref{eq:archmuppertd}) and (\ref{eq:archmlowertd}) for the outer power case follows from \cite{feller1971}, i.e. the composition of a completely monotone function with a non-negative function that has a completely monotone derivative is again completely monotone. The densities for the outer power Clayton copula can be found in \cite{Ames2013}.
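The limits in Equations (\ref{eq:archmuppertd}) and (\ref{eq:archmlowertd}) can be sanity checked numerically in the bivariate case ($n = 2$, $h = 1$), where they reduce to the classical closed forms $\lambda_l = 2^{-1/\theta}$ for Clayton and $\lambda_u = 2 - 2^{1/\theta}$ for Gumbel. The sketch below evaluates the limiting ratios at a large (resp. small) finite $t$ rather than symbolically; names are ours.

```python
import math

def clayton_psi_prime(t, th):
    # psi(t) = (1 + t)^(-1/th)  =>  psi'(t) = -(1/th)(1 + t)^(-1/th - 1)
    return -(1.0 / th) * (1.0 + t)**(-1.0 / th - 1.0)

def gumbel_psi_prime(t, th):
    # psi(t) = exp(-t^(1/th))  =>  psi'(t) = -(1/th) t^(1/th - 1) exp(-t^(1/th))
    return -(1.0 / th) * t**(1.0 / th - 1.0) * math.exp(-t**(1.0 / th))

def lambda_lower_biv(psi_prime, th, t=1e9):
    """Equation (archmlowertd) with n = 2, h = 1, evaluated at a large finite t."""
    return 2.0 * psi_prime(2.0 * t, th) / psi_prime(t, th)

def lambda_upper_biv(psi_prime, th, t=1e-9):
    """Equation (archmuppertd) with n = 2, h = 1: it reduces to 2 - 2 psi'(2t)/psi'(t)."""
    return 2.0 - 2.0 * psi_prime(2.0 * t, th) / psi_prime(t, th)
```

This also illustrates the asymmetry exploited by the mixture: Clayton contributes only lower tail dependence and Gumbel only upper tail dependence.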
In the above definitions of model based parametric upper and lower tail dependence one gets the estimates of joint extreme deviations in the whole currency basket. It will often be useful in practice to understand which pairs of currencies within a given currency basket contribute significantly to the downside or upside risks of the overall currency basket. In the class of Archimedean based mixtures we consider, the feature of exchangeability precludes decompositions of the total basket downside and upside risks into individual currency specific components. To be precise, we aim to decompose, say, the downside risk of the funding basket into contributions from each pair of currencies in the basket; this is achieved via a simple linear projection onto particular subsets of currencies in the portfolio that are of interest, which leads for example to the following expression: \begin{equation} \mathbb{E}\left[\left.\hat \lambda_u^{i|1,2,...,i-1,i+1,...,n} \right| \hat\lambda_u^{2|1}, \hat\lambda_u^{3|1}, \hat\lambda_u^{3|2},\ldots, \hat\lambda_u^{n|n-1}\right] = \alpha_0 + \sum_{i \neq j}^n \alpha_{ij}\hat\lambda_u^{i|j}, \end{equation} where $\hat\lambda_u^{i|1,2,...,i-1,i+1,...,n}$ is a random variable since it is based on parameters of the mixture copula model which are themselves functions of the data and therefore random variables. Such a simple linear projection will then allow one to interpret directly the marginal linear contributions to the upside or downside risk exposure of the basket obtained from the model, according to particular pairs of currencies in the basket by considering the coefficients $\alpha_{ij}$, i.e. the projection weights. To perform this analysis we need estimates of the pairwise tail dependence in the upside and downside risk exposures $\hat\lambda_u^{i|j}$ and $\hat\lambda_l^{i|j}$ for each pair of currencies $i,j\in \left\{1,2,\ldots,n\right\}$.
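In practice the linear projection above amounts to an ordinary least-squares regression of the basket coefficient on the pairwise coefficients across the sliding-window estimates. A minimal sketch with synthetic numbers and our own function name:

```python
import numpy as np

def project_basket_on_pairs(basket_td, pairwise_td):
    """OLS projection of basket tail dependence estimates onto pairwise estimates.

    basket_td   : (T,) series of basket coefficients  lambda_u^{i|1,...,n}
    pairwise_td : (T, p) matching series of pairwise coefficients lambda_u^{i|j}
    Returns (alpha_0, alpha) -- the intercept and the projection weights.
    """
    X = np.column_stack([np.ones(len(basket_td)), pairwise_td])
    coef, *_ = np.linalg.lstsq(X, basket_td, rcond=None)
    return coef[0], coef[1:]
```

The fitted weights play the role of the $\alpha_{ij}$ above: large weights flag currency pairs that drive the basket's tail risk.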
We obtain this through non-parametric (model-free) estimators, see \cite{Cruz2013}. \begin{definition}[Non-Parametric Pairwise Estimator of Upper Tail Dependence (Extreme Exposure)] \begin{equation} \hat \lambda_u = 2 - \mbox{min} \left[2 \hspace{2mm}, \hspace{2mm} \frac{\mbox{log} \, \hat C_n \left( \frac{n - k}{n}, \frac{n - k}{n} \right)}{\mbox{log} (\frac{n - k}{n})} \right] \hspace{3mm} k = 1, 2, \ldots, n-1, \label{eq:nptd} \end{equation} where $\hat C_n \left( u_1, u_2 \right) = \frac{1}{n} \sum\limits_{i=1}^n \mathbf{1} \left( \frac{R_{1i}}{n} \leq u_1 , \frac{R_{2i}}{n} \leq u_2 \right)$ and $R_{ji}$ is the rank of the variable in its marginal dimension that makes up the pseudo data. \end{definition} In order to form a robust estimator of the upper tail dependence, the median of the estimates obtained by setting $k$ to the $1^{st}, 2^{nd}, \ldots, 20^{th}$ percentile values of $n$ was used. Similarly, $k$ was set to the $80^{th}, 81^{st}, \ldots, 99^{th}$ percentiles for the lower tail dependence. Note also that, within the parametric mixture copula models introduced above, a type of model selection can be performed purely by incorporating into the estimation the mixture weights associated with each dependence hypothesis obtained from each mixture component.
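The estimator in Equation~(\ref{eq:nptd}), together with the median-over-percentiles smoothing just described, can be sketched as follows for the upper tail; the rank convention and function names are ours.

```python
import numpy as np

def empirical_copula_diag(x1, x2, q):
    """Diagonal of the empirical copula, C_n(q, q), built from the rank pseudo-data."""
    n = len(x1)
    r1 = np.argsort(np.argsort(x1)) + 1     # ranks 1..n
    r2 = np.argsort(np.argsort(x2)) + 1
    return np.mean((r1 / n <= q) & (r2 / n <= q))

def upper_tail_dependence(x1, x2):
    """Equation (eq:nptd), robustified by the median over k at the 1st..20th percentiles of n."""
    n = len(x1)
    estimates = []
    for pct in range(1, 21):
        k = max(1, n * pct // 100)
        q = (n - k) / n
        c = empirical_copula_diag(x1, x2, q)
        ratio = np.log(c) / np.log(q) if c > 0 else 2.0
        estimates.append(2.0 - min(2.0, ratio))
    return float(np.median(estimates))
```

For perfectly comonotonic margins the estimator returns exactly 1, while for independent margins the log-ratio tends to 2 and the estimate collapses towards 0.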
Thus we can combine a copula with lower tail dependence, a copula with positive or negative dependence and a copula with upper tail dependence to produce a more flexible copula capable of modelling the multivariate log returns of forward exchange rates of a basket of currencies. \begin{definition}[Archimedean Generator] An Archimedean generator is a continuous, decreasing function $\psi:[0, \infty) \rightarrow [0, 1]$ which satisfies the following conditions: \begin{enumerate} \item $\psi(0) = 1$; \item $\psi(\infty) = \lim_{t \rightarrow \infty} \psi(t) = 0$; \item $\psi$ is strictly decreasing on $[0, \inf\{t: \psi(t) = 0\}]$. \end{enumerate} \label{archm_gen} \end{definition} One can then obtain formulas for computing the copula densities, of relevance to parameter estimation under a maximum likelihood approach. We can see from Equation~(\ref{eq:density}) that we are required to compute high dimensional derivatives of a composite function.
In order to achieve this we utilise a specific multivariate chain rule result widely known as the Fa\`{a} di Bruno's Formula, see \cite{faa1857note} and discussions in, for example, \cite{constantine1996multivariate} and \cite{roman1980formula}. \begin{definition}[Archimedean Copula Density] \cite{McNeil2009} prove that an Archimedean copula C admits a density c if and only if $\psi^{(d-1)}$ exists and is absolutely continuous on $(0, \infty)$. When this condition is satisfied, the copula density c is given by \begin{equation} c({\bf u})\;\; = \;\; \frac{ \partial^d C(u_1, \ldots , u_d)}{\partial u_1 \ldots \partial u_d} \;\; = \;\; \psi^{(d)} \{t({\bf u})\} \prod_{j=1}^d (\psi^{-1})'(u_j) \;\; , \;\;\;\; {\bf u} \in (0, 1)^d \label{eq:density} \end{equation} \end{definition} \section{Exchange Rate Multivariate Data Description and Currency Portfolio Construction} In our study we fit copula models to the high interest rate basket and the low interest rate basket updated for each day in the period 02/01/1989 to 29/01/2014 using log return forward exchange rates at one month maturities for data covering both the previous 6 months and previous year as a sliding window analysis on each trading day in this period. Our empirical analysis consists of daily exchange rate data for a set of 34 currency exchange rates relative to the USD, as in \cite{Menkhoff2012}. The currencies analysed included: Australia (AUD), Brazil (BRL), Canada (CAD), Croatia (HRK), Cyprus (CYP), Czech Republic (CZK), Egypt (EGP), Euro area (EUR), Greece (GRD), Hungary (HUF), Iceland (ISK), India (INR), Indonesia (IDR), Israel (ILS), Japan (JPY), Malaysia (MYR), Mexico (MXN), New Zealand (NZD), Norway (NOK), Philippines (PHP), Poland (PLN), Russia (RUB), Singapore (SGD), Slovakia (SKK), Slovenia (SIT), South Africa (ZAR), South Korea (KRW), Sweden (SEK), Switzerland (CHF), Taiwan (TWD), Thailand (THB), Turkey (TRY), Ukraine (UAH) and the United Kingdom (GBP).
We have considered daily settlement prices for each currency exchange rate as well as the daily settlement price for the associated 1 month forward contract. We utilise the same dataset (albeit starting in 1989 rather than 1983 and running up until January 2014) as studied in \cite{Lustig2011} and \cite{Menkhoff2012} in order to replicate their portfolio returns without tail dependence risk adjustments. Due to differing market closing days, e.g. national holidays, there were missing data for a few currencies on a small number of days. For missing prices, the previous day's closing prices were retained. As was demonstrated in Equation (\ref{UIP}), the differential of interest rates between two countries can be estimated through the ratio of the forward contract price and the spot price, see \cite{Juhl2006} who show this holds empirically on a daily basis. Accordingly, instead of considering the differential of risk free rates between the reference and the foreign countries, we build our respective baskets of currencies with respect to the ratio of the forward and the spot prices for each currency. On a daily basis we compute this ratio for each of the $n$ currencies (available in the dataset on that day) and then build five baskets. The first basket gathers the $n/5$ currencies with the highest positive differential of interest rate with the US dollar. These currencies thus represent the ``investment'' currencies, through which we invest the money to benefit from the currency carry trade. The last basket gathers the $n/5$ currencies with the highest negative differential (or at least the lowest differential) of interest rate. These currencies thus represent the ``financing'' currencies, through which we borrow the money to build the currency carry trade.
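The daily sorting rule just described can be sketched as follows. The currency codes and differential values below are purely illustrative, and the sign convention assumes exchange rates quoted so that Equation~(\ref{UIP}) holds with $r_t$ the USD rate, so a forward discount ($F < S$) signals a higher foreign rate.

```python
import math

def implied_differential(fwd, spot, tau=1.0 / 12.0):
    """Foreign-minus-USD differential implied by Equation (UIP): r* - r = -log(F/S)/tau."""
    return -math.log(fwd / spot) / tau

def build_baskets(differentials, n_baskets=5):
    """Rank currencies by differential; top bucket = investment, bottom bucket = funding.

    differentials : dict mapping currency code -> implied interest differential vs USD.
    """
    ranked = sorted(differentials, key=differentials.get, reverse=True)
    size = max(1, len(ranked) // n_baskets)
    return ranked[:size], ranked[-size:]

# Illustrative snapshot of implied differentials for ten currencies:
diffs = {"BRL": 0.08, "TRY": 0.07, "AUD": 0.04, "NZD": 0.03, "GBP": 0.02,
         "EUR": 0.01, "SEK": 0.005, "CAD": 0.0, "CHF": -0.01, "JPY": -0.02}
investment, funding = build_baskets(diffs)
```

Repeating this sort on every trading day yields the time-varying basket compositions used throughout the analysis.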
Given this classification we investigate then the joint distribution of each group of currencies to understand the impact of the currency carry trade, embodied by the differential of interest rates, on currencies returns. In our analysis we concentrate on the high interest rate basket (investment currencies) and the low interest rate basket (funding currencies), since typically when implementing a carry trade strategy one would go short the low interest rate basket and go long the high interest rate basket. \section{Conclusion} \label{conclusion} In this paper, we have shown that the positive and negative multivariate tail risk exposures present in currency carry trade baskets are additional factors needing careful consideration when one constructs a carry portfolio. Ignoring these exposures leads to a perceived risk return profile that is not reflective of the true nature of such a strategy. In terms of marginal model selection, it was shown that one is indifferent between the log Generalised Gamma model and the frequently used GARCH(1,1) model. However, in combination with the three different Archimedean copula models considered in this paper the log Generalised Gamma marginals provided a better overall model fit. \section{Likelihood Based Estimation of the Mixture Copula Models} \section{Currency Basket Model Estimations via Inference Function For the Margins} \label{section:likelihood} The inference function for margins (IFM) technique introduced in \cite{Joe1996} provides a computationally faster method for estimating parameters than Full Maximum Likelihood, i.e. simultaneously maximising all model parameters and produces in many cases a more stable likelihood estimation procedure. This two stage estimation procedure was studied with regard to the asymptotic relative efficiency compared with maximum likelihood estimation in \cite{Joe2005} and in \cite{Hafner2010}. It can be shown that the IFM estimator is consistent under weak regularity conditions. 
In modelling parametrically the marginal features of the log return forward exchange rates, we wanted flexibility to capture a broad range of skew-kurtosis relationships as well as potential for sub-exponential heavy tailed features. In addition, we wished to keep the models to a selection which is efficient to perform inference and easily interpretable. We consider a flexible three parameter model for the marginal distributions given by the Log-Generalized-Gamma distribution (l.g.g.d.), see details in \cite{lawless1980inference}, where $Y$ has a l.g.g.d. if $Y = \mbox{log} (X)$ such that $X$ has a g.g.d. The density of $Y$ is given by \begin{equation} \label{EqnLGGD} f_{Y}(y; k,u,b) = \frac{1}{b \Gamma(k)}\exp\left[k\left(\frac{y - u}{b} \right) - \exp\left(\frac{y-u}{b}\right) \right], \end{equation} with $u = \mbox{log} \, (\alpha)$, $b = \beta^{-1}$ and the support of the l.g.g.d. distribution is $y \in \mathbb{R}$. This flexible three parameter model admits the LogNormal model as a limiting case (as $k \rightarrow \infty$). In addition the g.g.d. also includes the exponential model $(\beta=k=1)$, the Weibull distribution $(k=1)$ and the Gamma distribution $(\beta=1)$. As an alternative to the l.g.g.d. model we also consider a time series approach to modelling the marginals,given by the GARCH($p$,$q$) model, as described in \cite{bollerslev1986generalized} and \cite{brechmann2012risk}, and characterised by the error variance: \begin{equation} \sigma^2 = \alpha_0 + \sum\limits_{i=1}^q \alpha_i \epsilon_{t-i}^2 + \sum\limits_{i=1}^p \beta_i \sigma_{t-i}^2 \;\; . \end{equation} \subsection{Stage 1: Fitting the Marginal Distributions via MLE} The estimation for the three model parameters in the l.g.g.d. can be challenging due to the fact that a wide range of model parameters, especially for $k$, can produce similar resulting density shapes (see discussions in \cite{lawless1980inference}). 
To overcome this complication and to make the estimation efficient it is proposed to utilise a combination of profile likelihood methods over a grid of values for $k$ and perform profile likelihood based MLE estimation for each value of $k$, over the other two parameters $b$ and $u$. The differentiation of the profile likelihood for a given value of $k$ produces the system of two equations: \begin{equation} \exp(\tilde\mu) = \left[ \frac{1}{n}\sum_{i=1}^n \exp\left(\frac{y_i}{\tilde\sigma\sqrt{k}}\right) \right]^{\tilde\sigma \sqrt{k}}\\ \hspace{5mm} ; \hspace{5mm} \frac{\sum_{i=1}^n y_i \exp\left(\frac{y_i}{\tilde\sigma\sqrt{k}}\right)}{\sum_{i=1}^n \exp\left(\frac{y_i}{\tilde\sigma\sqrt{k}}\right)} - \overline{y} - \frac{\tilde\sigma}{\sqrt{k}} = 0 \; , \label{lggd_mle} \end{equation} where $n$ is the number of observations, $y_i = \mbox{log} \, x_i$, $\tilde\sigma = b/\sqrt{k}$ and $\tilde\mu = u + b \, \mbox{log} \,k$. The second equation is solved directly via a simple root search to give an estimation for $\tilde{\sigma}$ and then substitution into the first equation results in an estimate for $\tilde{\mu}$. Note, for each value of $k$ we select in the grid, we get the pair of parameter estimates $\tilde\mu$ and $\tilde\sigma$, which can then be plugged back into the profile likelihood to make it purely a function of $k$, with the estimator for $k$ then selected as the one with the maximum likelihood score. As a comparison we also fit the GARCH(1,1) model using the MATLAB MFEtoolbox using the default settings. \subsection{Stage 2: Fitting the Mixture Copula via MLE} In order to fit the copula model the parameters are estimated using maximum likelihood on the data after conditioning on the selected marginal distribution models and their corresponding estimated parameters obtained in Stage 1. These models are utilised to transform the data using the CDF function with the l.g.g.d. 
MLE parameters ($\hat k$, $\hat u$ and $\hat b$) or using the conditional variances to obtain standardised residuals for the GARCH model. Therefore, in this second stage of MLE estimation we aim to estimate either the one parameter mixture of CFG components with parameters ${\bf\underline\theta} = (\rho_{clayton}, \rho_{frank}, \rho_{gumbel}, \lambda_{clayton}, \lambda_{frank}, \lambda_{gumbel})$, the one parameter mixture of CG components with parameters ${\bf\underline\theta} = (\rho_{clayton}, \rho_{gumbel}, \lambda_{clayton}, \lambda_{gumbel})$ or the two parameter outer power transformed Clayton with parameters ${\bf\underline\theta} = (\rho_{clayton}, \beta_{clayton})$. The log likelihood expression for the mixture copula models, is given generically by: \begin{equation} l({\bf\underline\theta}) = \sum_{i=1}^n \mbox{log} \; c(F_1(X_{i1};\hat\mu_1, \hat\sigma_1), \dots, F_d(X_{id};\hat\mu_d, \hat\sigma_d)) \; + \; \sum_{i=1}^n \sum_{j=1}^d \mbox{log} \;f_j(X_{ij};\hat\mu_j, \hat\sigma_j). \label{loglik} \end{equation} This optimization is achieved via a gradient descent iterative algorithm which was found to be quite robust given the likelihood surfaces considered in these models with the real data. Alternative estimation procedures such as expectation-maximisation were not found to be required. \FloatBarrier \section{Currency Carry Trade and Uncovered Interest Rate Parity} One of the most robust puzzles in finance still to be satisfactorily explained is the uncovered interest rate parity puzzle and the associated excess average returns of currency carry trade strategies. Such trading strategies are popular approaches which involve constructing portfolios by selling low interest rate currencies in order to buy higher interest rate currencies, thus profiting from the interest rate differentials. 
The presence of such profit opportunities, pointed out by \cite{Hansen1980,Fama1984,backus2001affine} and more recently by \cite{Lustig2007,Brunnermeier2008,burnside2011peso,christiansen2011time,Lustig2011,Menkhoff2012}, violates the fundamental relationship of uncovered interest rate parity (UIP). The UIP refers to the parity condition in which exposure to foreign exchange risk, with unanticipated changes in exchange rates, is uninhibited and therefore if one assumes rational risk-neutral investors, then changes in the exchange rates should offset the potential to profit from the interest rate differentials between high interest rate (investment) currencies and low interest rate (funding) currencies. We can more formally write this relation by assuming that the forward price, $F_{t}^{T}$, is a martingale under the risk neutral probability $\mathbb{Q}$ (\cite{musiela2011martingale}): \begin{align} E_{\mathbb{Q}}\Bigg[\frac{S_{T}}{S_{t}}\Bigg|\mathcal{F}_{t}\Bigg]=\frac{F_{t}^{T}}{S_{t}}=e^{(r_{t}-r_{t}^{\star})(T-t)}. \label{UIP} \end{align} The UIP Equation~(\ref{UIP}) thus states that under the risk neutral probability the expected variation of the exchange rate $S_{t}$ should equal the differential between the interest rate of the two associated countries, denoted by respectively $r_{t}$ and $r_{t}^{\star}$. The currency carry trade strategy investigated in this paper aims at exploiting violations of the UIP relation by investing a certain amount in a basket of high interest rate currencies (the long basket) while funding it through a basket of low interest rate currencies (the short basket). When the UIP holds, then given foreign exchange market equilibrium, no profit should arise on average from this strategy; however such opportunities are routinely observed and exploited by large volume trading strategies. 
In this paper we build on the existing literature by studying a stochastic feature of the joint tail behaviours of the currencies within each of the long and the short baskets, which form the carry trade. We aim to explore to what extent one can attribute the excess average returns with regard to compensation for exposure to tail risk, for example either dramatic depreciations in the value of the high interest rate currencies or dramatic appreciations in the value of the low interest rate currencies in times of high market volatility. We postulate that such analyses should also benefit from consideration not only of the marginal behaviours of the processes under study, in this case the exchange rates of currencies in a portfolio, but also a rigorous analysis of the joint dependence features of such relationships. We investigate such joint relationships in light of the UIP condition. To achieve this, we study the probability of joint extreme movements in the funding and investment currency baskets and interpret these extremal tail probabilities as relative risk exposures of adverse and beneficial joint currency movements which would affect the portfolio returns. This allows us to obtain a relative contribution to the exposure of the portfolio profit decomposed in terms of the downside and upside risks that are contributed from such tail dependence features in each currency basket. We argue that the analysis of the carry trade is better informed by jointly modelling the multivariate behaviour of the marginal processes of currency baskets accounting for potential multivariate extremes, whilst still incorporating heavy-tailed relationships studied in marginal processes. We fit mixture copula models to vectors of daily exchange rate log returns between 1989 - 2014 for both the investment and funding currency baskets making up the carry trade portfolio. 
The method and the dataset considered for the construction of the respective funding and investing currencies baskets are thoroughly described in \cite{Ames2013}. The currency compositions of the funding and investment baskets vary daily as a function of the interest rate differential processes for each currency relative to the USD. Our analysis concludes that the appealing high return profile of a carry portfolio is compensation not only for the tail thickness of each individual component's probability distribution but also for the fact that extreme returns tend to occur simultaneously, leading to a portfolio particularly sensitive to the risk of what is known as drawdown. Furthermore, we also demonstrate that high interest rate currency baskets and low interest rate currency baskets can display periods during which the tail dependence gets inverted, indicating periods during which investors are constructing the aforementioned carry positions. \section{Interpreting Tail Dependence as Financial Risk Exposure in Carry Trade Portfolios} \label{joint_tail_risk_exposure} In order to fully understand the tail risks of joint exchange rate movements present when one invests in a carry trade strategy we can look at both the downside extremal tail exposure and the upside extremal tail exposure within the funding and investment baskets that comprise the carry portfolio. The downside tail exposure can be seen as the crash risk of the basket, i.e. the risk that one will suffer large joint losses from each of the currencies in the basket. These losses would be the result of joint appreciations of the currencies one is short in the low interest rate basket and/or joint depreciations of the currencies one is long in the high interest rate basket.
\begin{definition}[Downside Tail Risk Exposure in Carry Trade Portfolios] \\Consider the funding currency (short) basket with $n$ exchange rates relative to base currency, on day $t$, with currency log-returns {\scriptsize{ $\smash{(X^{(1)}_t,X^{(2)}_t,\ldots,X^{(n)}_t)}$.}} Then the downside tail exposure risk for the carry trade will be defined as the conditional probability of adverse currency movements in the short basket, corresponding to its upper tail dependence, given by {\scriptsize{ \begin{equation}\label{EqnDown1} \lambda_u^{(i)} (u) := \mathbb{P}\text{r}\left(X^{(i)}_t > F_i^{-1}(u)|X^{(1)}_t > F_1^{-1}(u),\ldots,X^{(i-1)}_t>F_{i-1}^{-1}(u), X^{(i+1)}_t>F_{i+1}^{-1}(u),\ldots,X^{(n)}_t>F_n^{-1}(u)\right) \end{equation} }} for a currency of interest $i \in \left\{1,2,\ldots,n\right\}$. Conversely the downside tail exposure for the investment (long) basket with $n$ currencies will be defined as the conditional probability of adverse currency movement in the long basket, given by {\scriptsize{ \begin{equation}\label{EqnDown2} \lambda_l^{(i)}(u) := \mathbb{P}\text{r}\left(X^{(i)}_t < F_i^{-1}(u)|X^{(1)}_t < F_1^{-1}(u),\ldots,X^{(i-1)}_t < F_{i-1}^{-1}(u), X^{(i+1)}_t < F_{i+1}^{-1}(u),\ldots,X^{(n)}_t < F_n^{-1}(u)\right). \end{equation} }} In general, a basket's upside or downside risk exposure would be quantified by the probability of a joint appreciation or depreciation of magnitude $u$, together with the dollar cost associated with a loss/gain of this magnitude. The standard approach in economics would be to associate, say, a linear cost function in $u$ with this probability of loss, giving the downside risk exposure in dollars as $E(u) = C_u({F_{X_t^{(i)}}(u)}) \times \lambda_u(u)$, which will be a function of the level $u$. As $\lambda_u$ becomes independent of the marginals, i.e. as $u \rightarrow 0$ or $u \rightarrow 1$, $C$ also becomes independent of the marginals.
\end{definition} Conversely, we also define the upside tail exposure that contributes to profitable returns in the carry trade strategy when extreme movements occur in favour of the carry position held. These correspond precisely to the probabilities discussed above applied in the opposite direction. That is, the upside risk exposure in the funding (short) basket is given by Equation~(\ref{EqnDown1}) and the upside risk exposure in the investment (long) basket is given by Equation~(\ref{EqnDown2}). The upside tail exposure of the carry trade strategy is thus defined as the risk that one will earn large joint profits from each of the currencies in the basket. These profits would be the result of joint depreciations of the currencies one is short in the low interest rate basket and/or joint appreciations of the currencies one is long in the high interest rate basket. \begin{rems} In a basket with $n$ currencies, $n \geq 2$, if one captures the upside and downside financial risk exposures through a model based calculation of these extreme probabilities, and the parametric model is exchangeable (such as an Archimedean copula), then swapping currency $i$ in Equation (\ref{EqnDown1}) and Equation (\ref{EqnDown2}) with another currency from the basket, say $j$, will not alter the downside or upside risk exposures. If the model is not exchangeable then one can consider upside and downside risks for each individual currency in the carry trade portfolio. \end{rems} We thus consider these upside and downside tail exposures of the carry trade strategy as features that can show that even though average profits may be made from the violation of UIP, they come with significant tail exposure.
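The conditional probabilities in Equations~(\ref{EqnDown1}) and (\ref{EqnDown2}) can be approximated empirically by counting joint quantile exceedances. A minimal sketch on synthetic correlated returns (the data-generating process and function name are illustrative assumptions, not the paper's data):

```python
import numpy as np

def empirical_tail_exposure(X, i, u, upper=True):
    """Empirical conditional exceedance probability:
    P(X_i beyond its u-quantile | all other columns beyond theirs)."""
    q = np.quantile(X, u, axis=0)
    exceed = (X > q) if upper else (X < q)
    others = np.delete(exceed, i, axis=1).all(axis=1)  # condition: all other currencies exceed
    if not others.any():
        return float("nan")
    return exceed[others, i].mean()

rng = np.random.default_rng(1)
common = rng.standard_normal((20000, 1))                 # shared volatility factor
X = 0.8 * common + rng.standard_normal((20000, 3))       # 3 correlated "log-return" series
lam_u_hat = empirical_tail_exposure(X, i=0, u=0.95, upper=True)   # upper tail exposure
lam_l_hat = empirical_tail_exposure(X, i=0, u=0.05, upper=False)  # lower tail exposure
```

Because the series share a common factor, both conditional exceedance probabilities come out well above the unconditional 5% level, which is exactly the effect the basket tail exposures are designed to capture.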
We can formalise the notion of dependence behaviour in the extremes of the multivariate distribution through the concept of tail dependence, i.e. the limiting behaviour of Equations (\ref{EqnDown1}) and (\ref{EqnDown2}) as $u \uparrow 1$ and $u \downarrow 0$ respectively. The interpretation of such quantities is then directly relevant to assessing the chance of large adverse movements in multiple currencies, which could increase the risk associated with currency carry trade strategies significantly compared to risk measures which only consider the marginal behaviour of each individual currency. Under certain statistical dependence models these extreme upside and downside tail exposures can be obtained analytically. We develop a flexible copula mixture example with such properties below. \section{Generalised Archimedean Copula Models for Currency Exchange Rate Baskets} In order to study the joint tail dependence in the investment or funding basket we consider an overall tail dependence analysis which is parametric model based, obtained by using flexible mixtures of Archimedean copula components. Such a model approach is reasonable since typically the number of currencies in each of the long basket (investment currencies) and the short basket (funding currencies) is 4 or 5. In addition these models have the advantage that they produce asymmetric dependence relationships in the upper tails and the lower tails of the multivariate model. We consider three models: two Archimedean mixture models and one outer power transformed Clayton copula. The mixture models considered are the Clayton-Gumbel mixture and the Clayton-Frank-Gumbel mixture, where the Frank component allows for periods of no tail dependence within the basket as well as negative dependence. We fit these copula models to each of the long and short baskets separately.
\begin{definition}[Mixture Copula] A mixture copula is a linear weighted combination of copulae of the form: \begin{equation} C_M(\mathbf{u}; \mathbf{\theta}) = \sum_{i=1}^N \lambda_i C_i ({\bf u};{\bf \theta_i}), \end{equation} where $0 \leq \lambda_i \leq 1 \;\; \forall i \in \{1, ..., N\}$ and $\sum_{i=1}^N \lambda_i = 1$. \end{definition} \begin{definition}[Archimedean Copula] A $d$-dimensional copula $C$ is called Archimedean if it can be represented by the form: \begin{equation} C({\bf u}) = \psi \{\psi^{-1}(u_1) + \cdots + \psi^{-1}(u_d)\} = \psi \{t({\bf u}) \} \;\;\;\; \forall {\bf u} = \{u_1, \ldots, u_d\} \in [0, 1]^d , \label{eq:archimedean_copula} \end{equation} where $\psi$ is an Archimedean generator satisfying the conditions given in \cite{McNeil2009}, and $\psi ^{-1}:[0, 1] \rightarrow [0, \infty)$ is the inverse generator with $\psi^{-1}(0) = \mbox{inf}\{t: \psi(t) = 0\}$. \end{definition} In the following section we consider two stages in estimating the multivariate basket returns: first, the estimation of suitable heavy tailed marginal models for the currency exchange rates (relative to USD); second, the estimation of the dependence structure of the multivariate model composed of multiple exchange rates in currency baskets for long and short positions. Once the parametric Archimedean mixture copula model has been fitted to a basket of currencies, it is possible to obtain the upper and lower tail dependence coefficients via closed form expressions for the class of mixture copula models and outer-power transform models we consider. The tail dependence expressions for many common bivariate copulae can be found in \cite{Nelsen2006}. This concept was recently extended to the multivariate setting by \cite{de2012multivariate}.
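The two definitions above combine directly: each Archimedean component is evaluated through its generator as in Equation~(\ref{eq:archimedean_copula}), and the mixture is a convex combination of the component CDFs. A minimal sketch for a trivariate Clayton-Gumbel mixture (parameter values and function names are illustrative):

```python
import numpy as np

def clayton_C(u, theta):
    """Clayton copula CDF: generator psi(t) = (1+t)^(-1/theta),
    inverse psi^{-1}(u) = u^(-theta) - 1."""
    u = np.asarray(u, dtype=float)
    return (np.sum(u ** (-theta) - 1.0) + 1.0) ** (-1.0 / theta)

def gumbel_C(u, theta):
    """Gumbel copula CDF: generator psi(t) = exp(-t^(1/theta)),
    inverse psi^{-1}(u) = (-log u)^theta."""
    u = np.asarray(u, dtype=float)
    return np.exp(-(np.sum((-np.log(u)) ** theta)) ** (1.0 / theta))

def cg_mixture_C(u, lam, theta_c, theta_g):
    """Two-component Clayton-Gumbel mixture copula CDF."""
    return lam * clayton_C(u, theta_c) + (1.0 - lam) * gumbel_C(u, theta_g)

C = cg_mixture_C([0.3, 0.5, 0.7], lam=0.4, theta_c=2.0, theta_g=1.5)
```

Sanity checks follow from the copula axioms: the mixture returns $u$ when all other arguments are 1, and is bounded above by $\min_i u_i$.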
\begin{definition}[Generalized Archimedean Tail Dependence Coefficient] Let $X = (X_1,..., X_n)^T$ be an $n$-dimensional random vector with distribution \newline $C(F_1(X_1), \ldots, F_n(X_n))$, where $C$ is an Archimedean copula and $F_1, ..., F_n$ are the marginal distributions. The coefficients of upper and lower tail dependence are defined respectively as: \small{ \begin{equation} \begin{aligned} \lambda_u^{1,...,h|h+1,...,n} &= \lim_{u \rightarrow 1-} P\left( X_1 > F_1^{-1}(u),...,X_h > F_h^{-1}(u) | X_{h+1} > F_{h+1}^{-1}(u), ..., X_n > F_n^{-1}(u) \right) \\ &= \lim_{t \rightarrow 0^+} \frac{\sum_{i=1}^n \left( \binom{n}{n-i} i (-1)^{i} \left[ \psi^{'} (it) \right] \right) }{\sum_{i=1}^{n-h} \left( \binom{n-h}{n-h-i} i (-1)^i \left[ \psi^{'} (it) \right] \right)} \;\;\;\; , \end{aligned} \label{eq:archmuppertd} \end{equation}} \begin{equation} \begin{aligned} \lambda_l^{1,...,h|h+1,...,n} &= \lim_{u \rightarrow 0+} P \left( X_1 < F_1^{-1}(u),...,X_h < F_h^{-1}(u) | X_{h+1} < F_{h+1}^{-1}(u), ..., X_n < F_n^{-1}(u) \right) \\ &= \lim_{t \rightarrow \infty} \frac{n}{n-h} \frac{\psi^{'} (nt)}{\psi^{'} ((n-h)t)} \end{aligned} \label{eq:archmlowertd} \end{equation} \normalsize for the model dependence function `generator' $\psi(\cdot)$ and its inverse function. \end{definition} In \cite{de2012multivariate} the analogous form of the generalized multivariate upper and lower tail dependence coefficients for outer-power transformed Clayton copula models is provided. The derivation of Equations (\ref{eq:archmuppertd}) and (\ref{eq:archmlowertd}) for the outer power case follows from \cite{feller1971}: the composition of a completely monotone function with a non-negative function that has a completely monotone derivative is again completely monotone. The densities for the outer power Clayton copula can be found in \cite{Ames2013}.
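For a concrete generator the limit in Equation~(\ref{eq:archmlowertd}) can be evaluated directly. For the Clayton generator $\psi(t) = (1+t)^{-1/\theta}$ the ratio $\psi'(nt)/\psi'((n-h)t)$ tends to $(n/(n-h))^{-1/\theta - 1}$, so the coefficient reduces to $(n/(n-h))^{-1/\theta}$, recovering the familiar bivariate value $2^{-1/\theta}$ when $n=2$, $h=1$. A small numerical check (function names are ours):

```python
def clayton_psi_prime(t, theta):
    """Derivative of the Clayton generator psi(t) = (1 + t)^(-1/theta)."""
    return -(1.0 / theta) * (1.0 + t) ** (-1.0 / theta - 1.0)

def clayton_lower_td(theta, n, h, t=1e8):
    """Numerical evaluation of the limit in Eq. (archmlowertd):
    lambda_l = lim_{t->inf} n/(n-h) * psi'(n t) / psi'((n-h) t),
    approximated at a large but finite t."""
    return (n / (n - h)) * clayton_psi_prime(n * t, theta) / clayton_psi_prime((n - h) * t, theta)

# Bivariate case (n=2, h=1) with theta=2 should recover 2^(-1/2).
lam = clayton_lower_td(theta=2.0, n=2, h=1)
```

The same routine evaluated at $n=3$, $h=1$ matches the closed form $(3/2)^{-1/\theta}$, which is how we sanity-check the multivariate coefficients before trusting the fitted values.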
The above definitions of model based parametric upper and lower tail dependence provide estimates of joint extreme deviations across the whole currency basket. It will often be useful in practice to understand which pairs of currencies within a given currency basket contribute significantly to the downside or upside risks of the overall currency basket. In the class of Archimedean based mixtures we consider, the feature of exchangeability precludes decompositions of the total basket downside and upside risks into individual currency specific components. To be precise, we aim to decompose, say, the downside risk of the funding basket into contributions from each pair of currencies in the basket; this is achieved via a simple linear projection onto particular subsets of currencies in the portfolio that are of interest, which leads for example to the following expression: \begin{equation} \mathbb{E}\left[\left.\hat \lambda_u^{i|1,2,...,i-1,i+1,...,n} \right| \hat\lambda_u^{2|1}, \hat\lambda_u^{3|1}, \hat\lambda_u^{3|2},\ldots, \hat\lambda_u^{n|n-1}\right] = \alpha_0 + \sum_{i \neq j}^n \alpha_{ij}\hat\lambda_u^{i|j}, \end{equation} where $\hat\lambda_u^{i|1,2,...,i-1,i+1,...,n}$ is a random variable since it is based on parameters of the mixture copula model which are themselves functions of the data and therefore random variables. Such a simple linear projection then allows one to interpret directly the marginal linear contributions to the upside or downside risk exposure of the basket obtained from the model, according to particular pairs of currencies in the basket, by considering the coefficients $\alpha_{ij}$, i.e. the projection weights. To perform this analysis we need estimates of the pairwise tail dependence in the upside and downside risk exposures, $\hat\lambda_u^{i|j}$ and $\hat\lambda_l^{i|j}$, for each pair of currencies $i,j\in \left\{1,2,\ldots,n\right\}$.
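The linear projection above amounts to an ordinary least squares regression of the basket coefficient on the pairwise coefficients. A minimal sketch on synthetic series for a 3-currency basket (the data, weights, and variable names are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily estimates over T days: pairwise tail dependence coefficients
# lambda_hat^{2|1}, lambda_hat^{3|1}, lambda_hat^{3|2} for a 3-currency basket.
T = 250
pairwise = rng.uniform(0.0, 0.6, size=(T, 3))
# Synthetic basket coefficient generated as a noisy linear combination.
basket = 0.1 + pairwise @ np.array([0.3, 0.2, 0.4]) + 0.02 * rng.standard_normal(T)

# Linear projection: regress the basket tail dependence on the pairwise estimates.
Z = np.column_stack([np.ones(T), pairwise])               # intercept alpha_0 plus pairs
alpha, *_ = np.linalg.lstsq(Z, basket, rcond=None)        # [alpha_0, alpha_21, alpha_31, alpha_32]
```

The fitted $\alpha_{ij}$ recover the generating weights here; on real data they are read as the marginal linear contribution of each currency pair to the basket's tail exposure.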
We obtain this through non-parametric (model-free) estimators, see \cite{Cruz2013}. \begin{definition}[Non-Parametric Pairwise Estimator of Upper Tail Dependence (Extreme Exposure)] \\ \begin{equation} \hat \lambda_u = 2 - \mbox{min} \left[2 \hspace{2mm}, \hspace{2mm} \frac{\mbox{log} \, \hat C_n \left( \frac{n - k}{n}, \frac{n - k}{n} \right)}{\mbox{log} (\frac{n - k}{n})} \right] \hspace{3mm} \hspace{3mm} k \in \{1,2, \ldots, n-1\}, \label{eq:nptd} \end{equation} where $\hat C_n \left( u_1, u_2 \right) = \frac{1}{n} \sum\limits_{i=1}^n \mathbf{1} \left( \frac{R_{1i}}{n} \leq u_1 , \frac{R_{2i}}{n} \leq u_2 \right)$ and $R_{ji}$ is the rank of observation $i$ in marginal dimension $j$, which forms the pseudo data. \end{definition} In order to form a robust estimator of the upper tail dependence, a median was taken of the estimates obtained by setting $k$ to the $1^{st}, 2^{nd}, \ldots, 20^{th}$ percentile values. Similarly, $k$ was set to the $80^{th}, 81^{st}, \ldots, 99^{th}$ percentiles for the lower tail dependence. \section{Results and Discussion} In order to model the marginal exchange rate log-returns we considered two approaches. Firstly, we fit Log Generalised Gamma models to each of the 34 currencies considered in the analysis, updating the fits for every trading day based on a 6 month sliding window. A time series approach was also considered to fit the marginals, as is popular in much of the recent copula literature, see for example \cite{brechmann2012risk}, using GARCH(1,1) models for the 6 month sliding data windows. In each case we are assuming approximate local stationarity over these short 6 month time frames. A summary of the marginal model selection can be seen in Table~\ref{AIC_margins}, which shows the average AIC scores for the 4 most frequent currencies in the high interest rate and the low interest rate baskets over the data period.
Whilst the AIC for the GARCH(1,1) model is consistently lower than the respective AIC for the Generalised Gamma, the standard errors are sufficiently large for there to be no clear favourite between the two models. \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{8pt} \begin{table} \centering \caption{Average AIC for the Generalized Gamma (GG) and the GARCH(1,1) for the four most frequent currencies in the high interest rate and the low interest rate baskets over the 2001 - 2014 data period split into 2 chunks, i.e. 6 years. Standard deviations are shown in parentheses. Similar performance was seen between 1989-2001.} \begin{tabular}{cccccc} \toprule & \textbf{} & \multicolumn{2}{c}{\textbf{01 - 07}} & \multicolumn{2}{c}{\textbf{07 - 14}} \\ \midrule & \textbf{Currency} & \textbf{GG} & \textbf{GARCH} & \textbf{GG} & \textbf{GARCH} \\ \multirow{4}[0]{*}{\begin{sideways}Investment\end{sideways}} & \textbf{TRY} & 356.9 (3.5) & \textbf{341.1 (21.7)} & 358.7 (3.0) & \textbf{349.1 (16.8)} \\ & \textbf{MXN} & 360.0 (1.2) & \textbf{357.04 (3.8)} & 358.6 (4.0) & \textbf{344.5 (28.1)} \\ & \textbf{ZAR} & 358.7 (3.0) & \textbf{353.5 (11.4)} & 358.0 (6.1) & \textbf{352.8 (12.2)} \\ & \textbf{BRL} & 359.0 (2.8) & \textbf{341.6 (19.4)} & 360.0 (2.1) & \textbf{341.6 (23.2)} \\ \midrule \multirow{4}[0]{*}{\begin{sideways}Funding\end{sideways}} & \textbf{JPY} & 361.2 (0.9) & \textbf{356.5 (7.2)} & 356.9 (6.8) & \textbf{355.0 (7.0)} \\ & \textbf{CHF} & 360.8 (1.4) & \textbf{359.1 (2.9)} & 358.6 (7.4) & \textbf{355.4 (8.8)} \\ & \textbf{SGD} & 360.0 (2.7) & \textbf{356.8 (5.7)} & 360.0 (2.6) & \textbf{353.7 (7.5)} \\ & \textbf{TWD} & 358.7 (6.2) & \textbf{347.0 (16.4)} & 359.1 (5.8) & \textbf{348.5 (13.2)} \\ \bottomrule \end{tabular} \label{AIC_margins} \end{table} However, when we consider the model selection of the copula in combination with the marginal model we observe lower AIC scores for copula models fitted on the pseudo data resulting from using Generalised Gamma
margins than using GARCH(1,1) margins. This is the case for all three copula models under consideration in the paper. Figure~\ref{CFG_GG_vs_GARCH_high_and_low} shows the AIC differences when using the Clayton-Frank-Gumbel copula in combination with the two choices of marginal for the high interest rate and the low interest rate basket respectively. Over the entire data period the mean difference between the AIC scores for the CFG model with Generalised Gamma vs GARCH(1,1) marginals for the high interest rate basket is 12.3 and for the low interest rate basket is 3.6 in favour of the Generalised Gamma. \begin{figure} \centering \includegraphics[width =\textwidth , height = 60mm]{figures/CFG_GG_vs_GARCH_high_and_low} \caption{Comparison of AIC for Clayton-Frank-Gumbel model fit on the pseudo data resulting from Generalised Gamma vs GARCH(1,1) margins. The high interest rate basket is shown in the upper panel and the low interest rate basket is shown in the lower panel.} \label{CFG_GG_vs_GARCH_high_and_low} \end{figure} Thus, it is clear that the Generalised Gamma model is the better model in our copula modelling context and so is used in the remainder of the analysis. We now consider the goodness-of-fit of the three copula models applied to the high interest rate basket and low interest rate basket pseudo data. We compared, via the AIC, the three component mixture CFG model, the two component mixture CG model and the two parameter OpC model. One could also use the Copula-Information-Criterion (CIC), see \cite{Gronneberg2010} for details. The results are presented for this comparison in Figure~\ref{CFG_GG_vs_CG_OpC_high_and_low}, which shows the differentials between AIC for CFG versus CG and CFG versus OpC for each of the high interest rate and the low interest rate currency baskets.
We can see it is not unreasonable to consider the CFG model for this analysis, since over the entire data period the mean difference between the AIC scores for the CFG and the CG models for the high interest rate basket is 1.33 and for the low interest rate basket is 1.62 in favour of the CFG. However, from Figure~\ref{CFG_GG_vs_CG_OpC_high_and_low} we can see that during the Credit crisis period the CFG model performs much better. The CFG copula model provides a much better fit when compared to the OpC model, as shown by the mean difference between the AIC scores of 9.58 for the high interest rate basket and 9.53 for the low interest rate basket. Similarly, the CFG model performs markedly better than the OpC model during the Credit crisis period. \begin{figure} \centering \includegraphics[width =\textwidth , height = 60mm]{figures/CFG_GG_vs_CG_OpC_high_and_low} \caption{Comparison of AIC for Clayton-Frank-Gumbel model with Clayton-Gumbel and Outer power Clayton models on high and low interest rate baskets with Generalised Gamma margins. The high interest rate basket is shown in the upper panel and the low interest rate basket is shown in the lower panel.} \label{CFG_GG_vs_CG_OpC_high_and_low} \end{figure} \subsection{Tail Dependence Results} Below we examine the time-varying parameters of the maximum likelihood fits of this mixture CFG copula model. Here, we focus on the strength of dependence present in the currency baskets, given the particular copula structures in the mixture, which we interpret as the tail upside/downside exposure of a carry trade over time. Figure~\ref{CFG_OPC_VIX_high_TD} shows the time-varying upper and lower tail dependence, i.e. the extreme upside and downside risk exposures for the carry trade basket, present in the high interest rate basket under the CFG copula fit and the OpC copula fit. Similarly, Figure~\ref{CFG_OPC_VIX_low_TD} shows this for the low interest rate basket.
\begin{remark}[Model Risk and its Influence on Upside and Downside Risk Exposure] In fitting the OpC model, we note that independent of the strength of true tail dependence in the multivariate distribution, the upper tail dependence coefficient $\lambda_u$ for this model increases rapidly with dimension. Therefore, when fitting the OpC model, once the basket size becomes greater than bivariate, i.e. from 1999 onwards, the upper tail dependence estimates become very large (even for outer-power parameter values very close to $\beta=1$). This lack of flexibility in the OpC model only becomes apparent in baskets of dimension greater than 2, but is also evident in the AIC scores in Figure~\ref{CFG_GG_vs_CG_OpC_high_and_low}. Here we see an interesting interplay between the model risk associated with the dependence structure being fitted and the resulting interpreted upside or downside financial risk exposures for the currency baskets. \end{remark} Focusing on the tail dependence estimates produced from the CFG copula fits we can see that there are indeed periods of heightened upper and lower tail dependence in the high interest rate and the low interest rate baskets. There is a noticeable increase in upper tail dependence in the high interest rate basket at times of global market volatility. Specifically, during late 2007, i.e. the global financial crisis, there is a sharp peak in upper tail dependence. Preceding this, there is an extended period of heightened lower tail dependence from 2004 to 2007, which could tie in with the building of the leveraged carry trade portfolio positions. This period of carry trade construction is also very noticeable in the low interest rate basket through the very high levels of upper tail dependence.
\begin{figure} \centering \includegraphics[width =\textwidth , height=80mm]{figures/CFG_OPC_VIX_high_TD2} \caption{Comparison of Volatility Index (VIX) with upper and lower tail dependence of the high interest rate basket in the CFG copula and OpC copula. US NBER recession periods are represented by the shaded grey zones. Some key crisis dates across the time period are labelled.} \label{CFG_OPC_VIX_high_TD} \end{figure} \begin{figure} \centering \includegraphics[width =\textwidth , height=80mm]{figures/CFG_OPC_VIX_low_TD2} \caption{Comparison of Volatility Index (VIX) with upper and lower tail dependence of the low interest rate basket in the CFG copula and OpC copula. US NBER recession periods are represented by the shaded grey zones. Some key crisis dates across the time period are labelled.} \label{CFG_OPC_VIX_low_TD} \end{figure} We compare in Figures~\ref{CFG_OPC_VIX_high_TD} and \ref{CFG_OPC_VIX_low_TD} the tail dependence plotted against the VIX volatility index for the high interest rate basket and the low interest rate basket respectively for the period under investigation. The VIX is a popular measure of the implied volatility of S\&P 500 index options, often referred to as the \emph{fear index}. As such it is one measure of the market's expectations of stock market volatility over the next 30 days. We can clearly see here that in the high interest rate basket there are upper tail dependence peaks at times when there is an elevated VIX index, particularly post-crisis. However, we would not expect the two to match exactly since the VIX is not a direct measure of global FX volatility. We can thus conclude that investors' risk aversion clearly plays an important role in the tail behaviour. This conclusion corroborates recent literature regarding the skewness and the kurtosis features characterizing the currency carry trade portfolios \cite{Farhi2008,Brunnermeier2008, Menkhoff2012}.
\subsection{Pairwise Decomposition of Basket Tail Dependence} In order to examine the contribution of each pair of currencies to the overall $n$-dimensional basket tail dependence we calculated the corresponding non-parametric pairwise tail dependencies for each pair of currencies. In Figure~\ref{crisis3_heatmap} we can see the average upper and lower non-parametric tail dependence for each pair of currencies during the Credit crisis, with the 3 currencies most frequently in the high interest rate and the low interest rate baskets labelled accordingly. The lower triangle represents the non-parametric pairwise lower tail dependence and the upper triangle represents the non-parametric pairwise upper tail dependence. If one were trying to optimise a currency portfolio with respect to the tail risk exposures, i.e. to minimise negative tail risk exposure and maximise positive tail risk exposure, then one would sell short currencies with high upper tail dependence and low lower tail dependence whilst buying currencies with low upper tail dependence and high lower tail dependence. \begin{figure} \centering \includegraphics[width =\textwidth, height=75mm]{figures/crisis_3_hmBONE} \caption{Heat map showing the strength of non-parametric tail dependence between each pair of currencies averaged over the Credit crisis period. Lower tail dependence is shown in the lower triangle and upper tail dependence is shown in the upper triangle. The 3 currencies most frequently in the high interest rate and the low interest rate baskets are labelled.} \label{crisis3_heatmap} \end{figure} Similarly, in Figure~\ref{last_12_months_heatmap} we see the pairwise non-parametric tail dependencies averaged over the last 12 months (01/02/2013 to 29/01/2014). Comparing this heat map to the heat map during the Credit crisis (Figure~\ref{crisis3_heatmap}) we notice that in general there are lower values of tail dependence amongst the currency pairs.
\begin{figure} \centering \includegraphics[width =\textwidth, height=75mm]{figures/last_12_months_hmBONE} \caption{Heat map showing the strength of non-parametric tail dependence between each pair of currencies averaged over the last 12 months (01/02/2013 to 29/01/2014). Lower tail dependence is shown in the lower triangle and upper tail dependence is shown in the upper triangle. The 3 currencies most frequently in the high interest rate and the low interest rate baskets are labelled.} \label{last_12_months_heatmap} \end{figure} We performed linear regression of the pairwise non-parametric tail dependence on the respective basket tail dependence for the days on which the 3 currencies all appeared in the basket (224 out of 250 for the low interest rate basket and 223 out of 250 for the high interest rate basket). The regression coefficients and $R^2$ values can be seen in Table~\ref{nptd_td_regression}. We can interpret this as the relative contribution of each of the 3 currency pairs to the overall basket tail dependence. We note that for the low interest rate lower tail dependence and for the high interest rate upper tail dependence there is a significant degree of cointegration between the currency pair covariates, and hence we might be able to use a single covariate due to the presence of a common stochastic trend. \setlength{\tabcolsep}{6pt} \begin{table} \centering \caption{Pairwise non-parametric tail dependence regressed on respective basket tail dependence (standard errors are shown in parentheses.
The 3 currencies most frequently in the respective baskets are used as independent variables.} \begin{tabular}{cccccc} \toprule \textbf{Low IR Basket } & \textbf{Constant} & \textbf{CHF JPY} & \textbf{CZK CHF} & \textbf{CZK JPY} & \textbf{$R^2$} \\ \midrule \textbf{Upper TD} & 0.22 (0.01) & 0.02 (0.03) & 0.18 (0.02) & 0.38 (0.05) & 0.57 \\ \textbf{Lower TD} & 0.71 (0.17) & -0.62 (0.25) & -0.38 (0.26) & 0.23 (0.32) & 0.28 \\ \bottomrule \textbf{} & & & & & \\ \toprule \textbf{High IR Basket } & \textbf{Constant} & \textbf{EGP INR} & \textbf{UAH EGP} & \textbf{UAH INR} & \textbf{$R^2$} \\ \midrule \textbf{Upper TD} & 0.07 (0.01) & -0.06 (0.33) & 0.59 (0.08) & 2.37 (0.42) & 0.4 \\ \textbf{Lower TD} & 0.1 (0.02) & 0.56 (0.05) & 0.44 (0.08) & -0.4 (0.07) & 0.44 \\ \bottomrule \end{tabular}% \label{nptd_td_regression}% \end{table}% \subsection{Understanding the Tail Exposure associated with the Carry Trade and its Role in the UIP Puzzle} As was discussed in Section~\ref{joint_tail_risk_exposure}, the tail exposures associated with a currency carry trade strategy can be broken down into the upside and downside tail exposures within each of the long and short carry trade baskets. The downside relative exposure adjusted returns are obtained by multiplying the monthly portfolio returns by one minus the upper and the lower tail dependence present respectively in the high interest rate basket and the low interest rate basket at the corresponding dates. The upside relative exposure adjusted returns are obtained by multiplying the monthly portfolio returns by one plus the lower and upper tail dependence present respectively in the high interest rate basket and the low interest rate basket at the corresponding dates. Note that we refer to these as relative exposure adjustments only for the tail exposures since we do not quantify a market price per unit of tail risk. 
However, this is still informative as it shows a decomposition of the relative exposures from the long and short baskets with regard to extreme events. As can be seen in Figure~\ref{risk_adj_Downside}, the relative adjustment to the absolute cumulative returns for each type of downside exposure is greatest for the low interest rate basket, except under the OpC model; this exception, however, is due to the very poor fit of the OpC model to baskets containing more than 2 currencies, which, as we have seen, transfers to the financial risk exposures. This is interesting because intuitively one would expect the high interest rate basket to be the largest source of tail exposure. However, one should be careful when interpreting this plot, since we are looking at the extremal tail exposure. The analysis may change if one considered the intermediate tail risk exposure, where the marginal effects become significant. Similarly, Figure~\ref{risk_adj_Upside} shows that the relative adjustment to the absolute cumulative returns for each type of upside exposure is greatest for the low interest rate basket. The same interpretation as for the downside relative exposure adjustments can be made here for the upside relative exposure adjustments. \begin{figure} \centering \includegraphics[width =\textwidth , height=75mm]{figures/downside_risk_adj_returns} \caption{Cumulative log returns of the carry trade portfolio (HML = High interest rate basket Minus Low interest rate basket). Downside exposure adjusted cumulative log returns using upper/lower tail dependence in the high/low interest rate basket for the CFG copula and the OpC copula are shown for comparison.} \label{risk_adj_Downside} \end{figure} \begin{figure} \centering \includegraphics[width =\textwidth ,height=75mm]{figures/upside_risk_adj_returns} \caption{Cumulative log returns of the carry trade portfolio (HML = High interest rate basket Minus Low interest rate basket).
Upside exposure adjusted cumulative log returns using lower/upper tail dependence in the high/low interest rate basket for the CFG copula and the OpC copula are shown for comparison.} \label{risk_adj_Upside} \end{figure} \section{Results and Discussion} In order to model the marginal exchange rate log-returns we considered two approaches. Firstly, we fit Log Generalised Gamma models to each of the 34 currencies considered in the analysis, updating the fits for every trading day based on a 6 month sliding window. A time series approach was also considered to fit the marginals, as is popular in much of the recent copula literature, see for example \cite{brechmann2012risk}, using GARCH(1,1) models for the 6 month sliding data windows. In each case we are assuming approximate local stationarity over these short 6 month time frames. A summary of the marginal model selection can be seen in Table~\ref{AIC_margins}, which shows the average AIC scores for the 4 most frequent currencies in the high interest rate and the low interest rate baskets over the data period. Whilst the AIC for the GARCH(1,1) model is consistently lower than the respective AIC for the Generalised Gamma, the standard errors are sufficiently large for there to be no clear favourite between the two models. \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{8pt} \begin{table \centering \caption{Average AIC for the Generalized Gamma (GG) and the GARCH(1,1) for the four most frequent currencies in the high interest rate and the low interest rate baskets over the 2001 - 2014 data period split into 2 chunks, i.e. 6 years. Standard deviations are shown in parentheses. 
Similar performance was seen between 1989-2001.} \begin{tabular}{cccccc} \toprule & \textbf{} & \multicolumn{2}{c}{\textbf{01 - 07}} & \multicolumn{2}{c}{\textbf{07 - 14}} \\ \midrule & \textbf{Currency} & \textbf{GG} & \textbf{GARCH} & \textbf{GG} & \textbf{GARCH} \\ \multirow{4}[0]{*}{\begin{sideways}Investment\end{sideways}} & \textbf{TRY} & 356.9 (3.5) & \textbf{341.1 (21.7)} & 358.7 (3.0) & \textbf{349.1 (16.8)} \\ & \textbf{MXN} & 360.0 (1.2) & \textbf{357.04 (3.8)} & 358.6 (4.0) & \textbf{344.5 (28.1)} \\ & \textbf{ZAR} & 358.7 (3.0) & \textbf{353.5 (11.4)} & 358.0 (6.1) & \textbf{352.8 (12.2)} \\ & \textbf{BRL} & 359.0 (2.8) & \textbf{341.6 (19.4)} & 360.0 (2.1) & \textbf{341.6 (23.2)} \\ \midrule \multirow{4}[0]{*}{\begin{sideways}Funding\end{sideways}} & \textbf{JPY} & 361.2 (0.9) & \textbf{356.5 (7.2)} & 356.9 (6.8) & \textbf{355.0 (7.0)} \\ & \textbf{CHF} & 360.8 (1.4) & \textbf{359.1 (2.9)} & 358.6 (7.4) & \textbf{355.4 (8.8)} \\ & \textbf{SGD} & 360.0 (2.7) & \textbf{356.8 (5.7)} & 360.0 (2.6) & \textbf{353.7 (7.5)} \\ & \textbf{TWD} & 358.7 (6.2) & \textbf{347.0 (16.4)} & 359.1 (5.8) & \textbf{348.5 (13.2)} \\ \bottomrule \end{tabular}% \label{AIC_margins}% \end{table}% However, when we consider the model selection of the copula in combination with the marginal model we observe lower AIC scores for copula models fitted on the pseudo data resulting from using Generalised Gamma margins than using GARCH(1,1) margins. This is the case for all three copula models under consideration in the paper. Figure~\ref{CFG_GG_vs_GARCH_high_and_low} shows the AIC differences when using the Clayton-Frank-Gumbel copula in combination with the two choices of marginal for the high interest rate and the low interest rate basket respectively. 
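The window-by-window model selection underlying Table~\ref{AIC_margins} reduces to a simple rule applied to each 6-month window; the sketch below is our own illustration of that rule (the log-likelihood inputs and function names are hypothetical, not the paper's actual fitting code).

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: AIC = 2k - 2 log L (lower is better).
    return 2.0 * n_params - 2.0 * log_likelihood

def aic_difference(loglik_gg, k_gg, loglik_garch, k_garch):
    """AIC(GG) - AIC(GARCH) for one 6-month window; a positive value
    favours the GARCH(1,1) marginal, a negative value the Generalised
    Gamma marginal."""
    return aic(loglik_gg, k_gg) - aic(loglik_garch, k_garch)

def mean_aic_difference(windows):
    """Average the per-window AIC differences over all sliding windows,
    as reported for the high and low interest rate baskets."""
    diffs = [aic_difference(*w) for w in windows]
    return sum(diffs) / len(diffs)
```

For instance, a window where the Generalised Gamma fit attains $\log L = -170$ with 3 parameters while the GARCH(1,1) fit attains $\log L = -165$ with 4 parameters gives an AIC difference of $346 - 338 = 8$ in favour of the GARCH model for that window.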
Over the entire data period the mean difference between the AIC scores for the CFG model with Generalised Gamma vs GARCH(1,1) marginals for the high interest rate basket is 12.3 and for the low interest rate basket is 3.6 in favour of the Generalised Gamma. \begin{figure} \centering \includegraphics[width =\textwidth , height = 60mm]{figures/CFG_GG_vs_GARCH_high_and_low} \caption{Comparison of AIC for the Clayton-Frank-Gumbel model fit on the pseudo data resulting from Generalised Gamma vs GARCH(1,1) margins. The high interest rate basket is shown in the upper panel and the low interest rate basket is shown in the lower panel.} \label{CFG_GG_vs_GARCH_high_and_low} \end{figure} Thus, it is clear that the Generalised Gamma model is the better model in our copula modelling context and so is used in the remainder of the analysis. We now consider the goodness-of-fit of the three copula models applied to the high interest rate basket and low interest rate basket pseudo data. We scored, via the AIC, the three-component mixture CFG model against the two-component mixture CG model and the two-parameter OpC model. One could also use the Copula-Information-Criterion (CIC), see \cite{Gronneberg2010} for details. The results of this comparison are presented in Figure~\ref{CFG_GG_vs_CG_OpC_high_and_low}, which shows the differentials between AIC for CFG versus CG and CFG versus OpC for each of the high interest rate and the low interest rate currency baskets. We can see it is not unreasonable to consider the CFG model for this analysis, since over the entire data period the mean difference between the AIC scores for the CFG and the CG models for the high interest rate basket is 1.33 and for the low interest rate basket is 1.62 in favour of the CFG. However, from Figure~\ref{CFG_GG_vs_CG_OpC_high_and_low} we can see that during the Credit crisis period the CFG model performs much better.
The CFG copula model provides a much better fit when compared to the OpC model, as shown by the mean difference between the AIC scores of 9.58 for the high interest rate basket and 9.53 for the low interest rate basket. Similarly, the CFG model performs markedly better than the OpC model during the Credit crisis period. \begin{figure} \centering \includegraphics[width =\textwidth , height = 60mm]{figures/CFG_GG_vs_CG_OpC_high_and_low} \caption{Comparison of AIC for the Clayton-Frank-Gumbel model with the Clayton-Gumbel and Outer power Clayton models on the high and low interest rate baskets with Generalised Gamma margins. The high interest rate basket is shown in the upper panel and the low interest rate basket is shown in the lower panel.} \label{CFG_GG_vs_CG_OpC_high_and_low} \end{figure} \subsection{Tail Dependence Results} Below we examine the time-varying parameters of the maximum likelihood fits of the mixture CFG copula model. Here, we focus on the strength of dependence present in the currency baskets, given the particular copula structures in the mixture, which we interpret as the tail upside/downside exposure of a carry trade over time. Figure~\ref{CFG_OPC_VIX_high_TD} shows the time-varying upper and lower tail dependence, i.e. the extreme upside and downside risk exposures for the carry trade basket, present in the high interest rate basket under the CFG copula fit and the OpC copula fit. Similarly, Figure~\ref{CFG_OPC_VIX_low_TD} shows this for the low interest rate basket. \begin{remark}[Model Risk and its Influence on Upside and Downside Risk Exposure] In fitting the OpC model, we note that, independently of the strength of the true tail dependence in the multivariate distribution, the upper tail dependence coefficient $\lambda_u$ for this model increases very rapidly with dimension. Therefore, when fitting the OpC model, if the basket size becomes greater than bivariate, i.e.
from 1999 onwards, the upper tail dependence estimates become very large (even for outer-power parameter values very close to $\beta=1$). This lack of flexibility in the OpC model only becomes apparent in baskets of dimension greater than 2, but is also evident in the AIC scores in Figure~\ref{CFG_GG_vs_CG_OpC_high_and_low}. Here we see an interesting interplay between the model risk associated with the dependence structure being fitted and the resulting interpreted upside or downside financial risk exposures for the currency baskets. \end{remark} Focusing on the tail dependence estimates produced by the CFG copula fits, we can see that there are indeed periods of heightened upper and lower tail dependence in the high interest rate and the low interest rate baskets. There is a noticeable increase in upper tail dependence in the high interest rate basket at times of global market volatility. Specifically, during late 2007, i.e. the global financial crisis, there is a sharp peak in upper tail dependence. Preceding this, there is an extended period of heightened lower tail dependence from 2004 to 2007, which could tie in with the building of the leveraged carry trade portfolio positions. This period of carry trade construction is also very noticeable in the low interest rate basket through the very high levels of upper tail dependence. \begin{figure} \centering \includegraphics[width =\textwidth , height=80mm]{figures/CFG_OPC_VIX_high_TD2} \caption{Comparison of Volatility Index (VIX) with upper and lower tail dependence of the high interest rate basket in the CFG copula and OpC copula. US NBER recession periods are represented by the shaded grey zones.
Some key crisis dates across the time period are labelled.} \label{CFG_OPC_VIX_high_TD} \end{figure} \begin{figure} \centering \includegraphics[width =\textwidth , height=80mm]{figures/CFG_OPC_VIX_low_TD2} \caption{Comparison of Volatility Index (VIX) with upper and lower tail dependence of the low interest rate basket in the CFG copula and OpC copula. US NBER recession periods are represented by the shaded grey zones. Some key crisis dates across the time period are labelled.} \label{CFG_OPC_VIX_low_TD} \end{figure} We compare in Figures~\ref{CFG_OPC_VIX_high_TD} and \ref{CFG_OPC_VIX_low_TD} the tail dependence plotted against the VIX volatility index for the high interest rate basket and the low interest rate basket respectively for the period under investigation. The VIX is a popular measure of the implied volatility of S\&P 500 index options, often referred to as the \emph{fear index}. As such it is one measure of the market's expectations of stock market volatility over the next 30 days. We can clearly see here that in the high interest rate basket there are upper tail dependence peaks at times when there is an elevated VIX index, particularly post-crisis. However, we would not expect the two to match exactly since the VIX is not a direct measure of global FX volatility. We can thus conclude that investors' risk aversion clearly plays an important role in the tail behaviour. This conclusion corroborates recent literature regarding the skewness and the kurtosis features characterizing the currency carry trade portfolios \cite{Farhi2008,Brunnermeier2008, Menkhoff2012}. \subsection{Pairwise Decomposition of Basket Tail Dependence} In order to examine the contribution of each pair of currencies to the overall $n$-dimensional basket tail dependence we calculated the corresponding non-parametric pairwise tail dependencies for each pair of currencies.
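The paper does not spell out the exact non-parametric estimator used, but a standard empirical exceedance-ratio version can be sketched as follows; the rank transform and the threshold $u$ here are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def pseudo_obs(x):
    # Rank-transform a sample to pseudo-observations in (0, 1).
    ranks = np.argsort(np.argsort(x)) + 1
    return ranks / (len(x) + 1.0)

def empirical_tail_dependence(x, y, u=0.95):
    """Crude empirical upper/lower tail dependence for one currency
    pair: the proportion of joint exceedances of the threshold u,
    rescaled by the marginal exceedance probability 1 - u."""
    ux, uy = pseudo_obs(np.asarray(x)), pseudo_obs(np.asarray(y))
    upper = np.mean((ux > u) & (uy > u)) / (1.0 - u)
    lower = np.mean((ux < 1.0 - u) & (uy < 1.0 - u)) / (1.0 - u)
    return upper, lower
```

For comonotone samples both estimates are close to 1, while for independent samples they concentrate around $1-u$; averaging such pairwise estimates over the Credit crisis window is the kind of computation that would populate a heat map such as Figure~\ref{crisis3_heatmap}.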
In Figure~\ref{crisis3_heatmap} we can see the average upper and lower non-parametric tail dependence for each pair of currencies during the Credit crisis, with the 3 currencies most frequently in the high interest rate and the low interest rate baskets labelled accordingly. The lower triangle represents the non-parametric pairwise lower tail dependence and the upper triangle represents the non-parametric pairwise upper tail dependence. If one were trying to optimise a currency portfolio with respect to the tail risk exposures, i.e. to minimise negative tail risk exposure and maximise positive tail risk exposure, then one would sell short currencies with high upper tail dependence and low lower tail dependence whilst buying currencies with low upper tail dependence and high lower tail dependence. \begin{figure} \centering \includegraphics[width =\textwidth, height=75mm]{figures/crisis_3_hmBONE} \caption{Heat map showing the strength of non-parametric tail dependence between each pair of currencies averaged over the Credit crisis period. Lower tail dependence is shown in the lower triangle and upper tail dependence is shown in the upper triangle. The 3 currencies most frequently in the high interest rate and the low interest rate baskets are labelled.} \label{crisis3_heatmap} \end{figure} Similarly, in Figure~\ref{last_12_months_heatmap} we see the pairwise non-parametric tail dependencies averaged over the last 12 months (01/02/2013 to 29/01/2014). Comparing this heat map to the heat map during the Credit crisis (Figure~\ref{crisis3_heatmap}) we notice that in general there are lower values of tail dependence amongst the currency pairs. \begin{figure} \centering \includegraphics[width =\textwidth, height=75mm]{figures/last_12_months_hmBONE} \caption{Heat map showing the strength of non-parametric tail dependence between each pair of currencies averaged over the last 12 months (01/02/2013 to 29/01/2014).
Lower tail dependence is shown in the lower triangle and upper tail dependence is shown in the upper triangle. The 3 currencies most frequently in the high interest rate and the low interest rate baskets are labelled.} \label{last_12_months_heatmap} \end{figure} We performed linear regression of the pairwise non-parametric tail dependence on the respective basket tail dependence for the days on which the 3 currencies all appeared in the basket (224 out of 250 for the low interest rate basket and 223 out of 250 for the high interest rate basket). The regression coefficients and $R^2$ values can be seen in Table~\ref{nptd_td_regression}. We can interpret this as the relative contribution of each of the 3 currency pairs to the overall basket tail dependence. We note that for the low interest rate lower tail dependence and for the high interest rate upper tail dependence there is a significant degree of cointegration between the currency pair covariates, and hence we might be able to use a single covariate due to the presence of a common stochastic trend. \setlength{\tabcolsep}{6pt} \begin{table} \centering \caption{Pairwise non-parametric tail dependence regressed on respective basket tail dependence (standard errors are shown in parentheses).
The 3 currencies most frequently in the respective baskets are used as independent variables.} \begin{tabular}{cccccc} \toprule \textbf{Low IR Basket } & \textbf{Constant} & \textbf{CHF JPY} & \textbf{CZK CHF} & \textbf{CZK JPY} & \textbf{$R^2$} \\ \midrule \textbf{Upper TD} & 0.22 (0.01) & 0.02 (0.03) & 0.18 (0.02) & 0.38 (0.05) & 0.57 \\ \textbf{Lower TD} & 0.71 (0.17) & -0.62 (0.25) & -0.38 (0.26) & 0.23 (0.32) & 0.28 \\ \bottomrule \textbf{} & & & & & \\ \toprule \textbf{High IR Basket } & \textbf{Constant} & \textbf{EGP INR} & \textbf{UAH EGP} & \textbf{UAH INR} & \textbf{$R^2$} \\ \midrule \textbf{Upper TD} & 0.07 (0.01) & -0.06 (0.33) & 0.59 (0.08) & 2.37 (0.42) & 0.4 \\ \textbf{Lower TD} & 0.1 (0.02) & 0.56 (0.05) & 0.44 (0.08) & -0.4 (0.07) & 0.44 \\ \bottomrule \end{tabular}% \label{nptd_td_regression}% \end{table}% \subsection{Understanding the Tail Exposure associated with the Carry Trade and its Role in the UIP Puzzle} As was discussed in Section~\ref{joint_tail_risk_exposure}, the tail exposures associated with a currency carry trade strategy can be broken down into the upside and downside tail exposures within each of the long and short carry trade baskets. The downside relative exposure adjusted returns are obtained by multiplying the monthly portfolio returns by one minus the upper and the lower tail dependence present respectively in the high interest rate basket and the low interest rate basket at the corresponding dates. The upside relative exposure adjusted returns are obtained by multiplying the monthly portfolio returns by one plus the lower and upper tail dependence present respectively in the high interest rate basket and the low interest rate basket at the corresponding dates. Note that we refer to these as relative exposure adjustments only for the tail exposures since we do not quantify a market price per unit of tail risk. 
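In code, the adjustment just described is an element-wise rescaling of the monthly return series by the corresponding tail dependence series; the sketch below uses our own variable names, with the tail dependence inputs assumed to come from the rolling copula fits.

```python
import numpy as np

def downside_adjusted(returns, upper_td_high, lower_td_low):
    """Downside relative exposure adjustments: monthly portfolio
    returns scaled by one minus the upper (high IR basket) and the
    lower (low IR basket) tail dependence at the matching dates."""
    r = np.asarray(returns, dtype=float)
    return (r * (1.0 - np.asarray(upper_td_high)),
            r * (1.0 - np.asarray(lower_td_low)))

def upside_adjusted(returns, lower_td_high, upper_td_low):
    """Upside relative exposure adjustments: returns scaled by one
    plus the relevant tail dependence series."""
    r = np.asarray(returns, dtype=float)
    return (r * (1.0 + np.asarray(lower_td_high)),
            r * (1.0 + np.asarray(upper_td_low)))
```

Cumulating the adjusted log-return series (e.g. with \texttt{np.cumsum}) then produces curves directly comparable to the unadjusted HML portfolio.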
However, this is still informative as it shows a decomposition of the relative exposures from the long and short baskets with regard to extreme events. As can be seen in Figure~\ref{risk_adj_Downside}, the relative adjustment to the absolute cumulative returns for each type of downside exposure is greatest for the low interest rate basket, except under the OpC model; this exception is due to the very poor fit of this model to baskets containing more than 2 currencies, which carries over to the financial risk exposures. This is interesting because intuitively one would expect the high interest rate basket to be the largest source of tail exposure. However, one should be careful when interpreting this plot, since we are looking at the extremal tail exposure. The analysis may change if one considered the intermediate tail risk exposure, where the marginal effects become significant. Similarly, Figure~\ref{risk_adj_Upside} shows that the relative adjustment to the absolute cumulative returns for each type of upside exposure is greatest for the low interest rate basket. The same interpretation as for the downside relative exposure adjustments can be made here for the upside relative exposure adjustments. \begin{figure} \centering \includegraphics[width =\textwidth , height=75mm]{figures/downside_risk_adj_returns} \caption{Cumulative log returns of the carry trade portfolio (HML = High interest rate basket Minus Low interest rate basket). Downside exposure adjusted cumulative log returns using upper/lower tail dependence in the high/low interest rate basket for the CFG copula and the OpC copula are shown for comparison.} \label{risk_adj_Downside} \end{figure} \begin{figure} \centering \includegraphics[width =\textwidth ,height=75mm]{figures/upside_risk_adj_returns} \caption{Cumulative log returns of the carry trade portfolio (HML = High interest rate basket Minus Low interest rate basket).
Upside exposure adjusted cumulative log returns using lower/upper tail dependence in the high/low interest rate basket for the CFG copula and the OpC copula are shown for comparison.} \label{risk_adj_Upside} \end{figure}
\section*{Introduction} The theory of Nichols algebras is relatively young, but it has interesting applications to other research fields such as Kac-Moody Lie superalgebras~\cite{inp-Andr14} and conformal field theory~\cite{Semi-2011,Semi-2012,Semi-2013}. Besides, it plays an important role in quantum groups~\cite{inp-Andr02,a-AndrGr99,inp-AndrSchn02, a-Schauen96}. The theory of Nichols algebras is motivated by Hopf algebra theory. In any area of mathematics the classification of all objects is very important. In Hopf algebra theory, the classification of all finite dimensional Hopf algebras is a tough question~\cite{inp-Andr02}. The structure of Nichols algebras appears naturally in the classification of pointed Hopf algebras in the following way. Given a Hopf algebra $H$, consider its coradical filtration $$H_0\subset H_1\subset \ldots $$ such that $H_0$ is a Hopf subalgebra of $H$ and the associated graded coalgebra $$\gr H=\bigoplus _iH_i/H_{i-1}.$$ Then $\gr H$ is a graded Hopf algebra, since the coradical $H_0$ of $H$ is a Hopf subalgebra. In addition, consider a projection $\pi: \gr H\to H_{0}$; let $R$ be the algebra of coinvariants of $\pi$. Then, by a result of Radford and Majid, $R$ is a braided Hopf algebra and $\gr H$ is the bosonization (or biproduct) of $R$ and $H_{0}$: $\gr H \simeq R\# H_{0}$. The principle of the ``Lifting method'' introduced in~\cite{a-AndrSchn98,inp-AndrSchn02} is first to study $R$, then to transfer the information to $\gr H$ via bosonization, and finally to lift to $H$. The braided Hopf algebra $R$ is generated by the vector space $V$ of $H_0$-coinvariants of $H_1/H_0$, namely the \textit{Nichols algebra} $\cB(V)$ generated by $V$ \cite{a-AndrSchn98}, named in commemoration of W.~Nichols, who started to study these objects as bialgebras of type one in~\cite{n-78}. Nichols algebras can be described in many different but equivalent ways, see for example~\cite{l-2010,maj05,Rosso98,w1987,w1989}.
The crucial step to classify pointed Hopf algebras is to determine when the Nichols algebra $\cB(V)$ is finite-dimensional. N.~Andruskiewitsch stated the following question. \begin{question} \textbf{(N.~Andruskiewitsch~\cite{inp-Andr02})}\label{quse:classification} \it ~ Given a braiding matrix $(q_{ij})_{1\leq i,j\leq \theta} $ whose entries are roots of $1$, when is $\cB(V)$ finite-dimensional, where $V$ is a vector space with basis $x_1,\dots ,x_{\theta}$ and braiding $c(x_i\otimes x_j)=q_{ij}(x_j\otimes x_i)$? If so, compute $\dim_\Bbbk \cB(V)$, and give a ``nice'' presentation by generators and relations. \end{question} \noindent Several authors obtained classification results for infinite and finite dimensional Nichols algebras of Cartan type, see~\cite{a-AndrSchn00,a-Heck06a,Rosso98}. I.~Heckenberger determined all finite dimensional Nichols algebras of diagonal type~\cite{a-Heck04d,a-Heck04bb,a-Heck09}, that is when $V$ is the direct sum of 1-dimensional \YD modules over $H_{0}$. The generators and relations of such Nichols algebras were also given in~\cite{Ang1,Ang2}. The methods developed in the study of the generalizations of Lie algebras are useful to analyze Nichols algebras~\cite{b-Bahturin92}. V.~Kharchenko proved that any Hopf algebra generated by skew-primitive and group-like elements has a restricted \PBW basis~\cite[Theorem~2]{a-Khar99}. Note that Kharchenko's results apply to Nichols algebras of diagonal type. Motivated by the close relation to Lie theory, I.~Heckenberger~\cite{a-Heck06a} defined the arithmetic root system and the Weyl groupoid for Nichols algebras $\cB(V)$ of diagonal type. Later, M.~Cuntz, I.~Heckenberger and H.~Yamane developed the combinatorial theory of these two structures~\cite{c-Heck09b,Y-Heck08a}. Then the theory of root systems and Weyl groupoids was carried out for more general Nichols algebras~\cite{a-AHS08,HS10,a-HeckSchn12a}. Further, all finite Weyl groupoids were classified in~\cite{c-Heck12a,c-Heck14a}.
Those constructions are very important theoretical tools for the classification of Nichols algebras $\cB(V)$. With the classification result, N.~Andruskiewitsch and H.-J.~Schneider~\cite{a-AndrSchn05} obtained a classification theorem about finite-dimensional pointed Hopf algebras under some technical assumptions. Based on such successful applications, the analysis of Nichols algebras over arbitrary fields is crucial and also has potential applications. Towards this direction, new examples of Nichols algebras in positive characteristic and a combinatorial formula to study the relations in Nichols algebras were found~\cite{clw}. Over fields of positive characteristic, rank 2 and rank 3 finite dimensional Nichols algebras of diagonal type were listed in~\cite{WH-14,W-17}. In this paper, we give the complete classification result for the rank 4 case. Besides, the notations and conventions in~\cite{WH-14,W-17} are followed and several results from these papers will be used. The paper is organized as follows. Section~\ref{se:Pre} is devoted to preliminaries. In Section~\ref{se:Rank4Cartan}, we explicitly characterize finite connected indecomposable Cartan graphs of rank 4. In order to do that, we introduce the good $A_4$ neighborhood and the good $B_4$ neighborhood, see Definitions~\ref{defA4} and~\ref{defB4}. In Theorem~\ref{Theo:goodnei}, we prove that every finite connected indecomposable Cartan graph of rank 4 contains a point which has at least one of the good neighborhoods. Theorem~\ref{Theo:goodnei} allows us to avoid complicated computations in the final proof of Theorem~\ref{theo:clasi}. Finally, in Section~\ref{se:clasi} we formulate the classification Theorem~\ref{theo:clasi} and present all the possible generalized Dynkin diagrams of rank 4 braided vector spaces of diagonal type with a finite root system over arbitrary fields in Table~\ref{tab.1}.
As a corollary of Theorem~\ref{theo:clasi}, all rank 4 finite-dimensional Nichols algebras of diagonal type in positive characteristic are given, see Corollary~\ref{coro-cla}. \section{Nichols algebras of diagonal type}\label{se:Pre} In this section, we recall \YD modules, braided vector spaces, and their relations. The main object of this paper is also presented. For further details on these topics we refer to \cite{inp-Andr14,inp-Andr02,a-AndrGr99}. \subsection{\YD modules} Let $\Bbbk$ be a field of characteristic $p> 0$. Let $\Bbbk^*=\Bbbk\setminus \{0\}$, $\ndN_0=\ndN\cup \{0\}$, $\theta\in \ndN_0$, and $I=\{1,\dots,\theta\}$. We start by recalling the main objects. \begin{defn} Let $V$ be a $\theta$-dimensional vector space over $\Bbbk$. The pair $(V, c)$ is called a \textit{braided vector space} if $c\in \Aut(V\otimes V)$ is a solution of the braid equation, that is \begin{equation*} (c \otimes \id)(\id \otimes c)(c \otimes \id) = (\id \otimes c)(c \otimes \id)(\id\otimes c). \end{equation*} A braided vector space $(V, c)$ is termed \textit{of diagonal type} if $V$ admits a basis $\{x_i | i\in I\}$ such that for all $i, j \in I$ one has \begin{equation*} c(x_i \otimes x_j) = q_{ij}x_j \otimes x_i \quad \textit{for some} \quad q_{ij} \in \Bbbk^*. \end{equation*} \end{defn} The matrix $(q_{ij})_{i,j\in I}$ is termed the \textit{braiding matrix} of $V$. We say that the braiding matrix $(q_{ij})_{i,j\in I}$ is \textit{indecomposable} if for any $i\not=j$ there exists a sequence $i_1 = i, i_2, \dots, i_t = j$ of elements of $I$ such that $q_{i_si_{s+1}}q_{i_{s+1}i_s}\not= 1$, where $1\leq s\leq t-1$. In this paper, we are mainly concerned with braided vector spaces of diagonal type with an indecomposable braiding matrix. \begin{defn} Let $H$ be a Hopf algebra.
A \textit{\YD module} $V$ over $H$ is a left $H$-module with left action $.$ : $H\otimes V \longrightarrow V$ and a left $H$-comodule with left coaction $\delta_L : V \longrightarrow H \otimes V$ satisfying the compatibility condition \begin{equation*} \delta_L(h.v) = h_{(1)}v_{(-1)} \kappa (h_{(3)}) \otimes h_{(2)}.v_{(0)}, \quad h \in H, \ v\in V. \end{equation*} We say that $V$ is \textit{of diagonal type} if $H=\Bbbk G$ and $V$ is a direct sum of one-dimensional \YD modules over the group algebra $\Bbbk G$, where $G$ is abelian. \end{defn} \indent We denote by $\ydH$ the category of \YD modules over $H$, where morphisms preserve both the action and the coaction of $H$. The category $\ydH$ is braided with braiding \begin{equation}\label{def.brading} c_{V,W}(v\otimes w)=v_{(-1)}.w \otimes v_{(0)} \end{equation} for all $V, W\in \ydH$, $v\in V$, and $w\in W$. In fact, the category $\ydH$ is a braided monoidal category, where the monoidal structure is given by the tensor product over $\Bbbk$. Then any \YD module $V\in \ydH$ over $H$ admits a braiding $c_{V,V}$ and hence $(V, c_{V,V})$ is a braided vector space. Conversely, a braided vector space can be realized as a \YD module over some Hopf algebra if and only if the braiding is rigid \cite[Section 2]{T00}. Notice that \YD module structures on $V$ over different Hopf algebras can give the same braiding, and that not all braidings of $V$ are induced by Equation~(\ref{def.brading}). If $H=\Bbbk G$ then we write $\ydD$ for the category of \YD modules over $\Bbbk G$ and say that $V\in \ydD$ is a \YD module over $G$. Notice that if $V\in \ydD$ is of diagonal type then $(V, c_{V,V})$ is a braided vector space of diagonal type. Conversely, any braided vector space of diagonal type can be realized as a \YD module of diagonal type. Indeed, assume that $(V, c)$ is a braided vector space of diagonal type with an indecomposable braiding matrix $(q_{ij})_{i, j\in I}$ with respect to a basis $\{x_i|i\in I\}$.
Let $G_0$ be an abelian group generated by elements $\{g_i|i\in I\}$. Define the left coaction and left action by \[ \delta_L(x_i)=g_i\ot x_i \in G_0\otimes V, \quad g_i\lact x_j=q_{ij}x_j\in V. \] Then $V=\oplus_{i\in I}\Bbbk x_i$ and each $\Bbbk x_i$ is a one-dimensional \YD module over $G_0$. Hence $V$ is a \YD module of diagonal type over $G_0$. \subsection{Nichols algebras of diagonal type} Let $(V, c)$ be a $\theta$-dimensional braided vector space of diagonal type. In this section, we recall the definition of the Nichols algebra $\cB(V)$ generated by $(V, c)$. In order to do that, we introduce one more notion in the category $\ydH$. \begin{defn} Let $H$ be a Hopf algebra. A \textit{braided Hopf algebra} in $\ydH$ is a $6$-tuple $\cB=(\cB, \mu, 1, \delta, \epsilon, \kappa_B)$, where $(\cB, \mu, 1)$ is an algebra in $\ydH$, $(\cB, \delta, \epsilon)$ is a coalgebra in $\ydH$, and $\kappa_B: \cB\rightarrow \cB$ is a morphism in $\ydH$ such that $\delta$, $\epsilon$ and $\kappa_B$ satisfy $ \kappa_B(b^{(1)})b^{(2)}=b^{(1)}\kappa_B(b^{(2)})=\epsilon(b) 1$, where we write $\delta(b)=b^{(1)}\otimes b^{(2)}$ for the coproduct of $\cB$ to avoid confusion. \end{defn} \begin{defn} The \textit{Nichols algebra} generated by $V\in \ydH$ is defined as the quotient \[ \cB(V)=T(V)/\cI(V)=(\oplus_{n=0}^{\infty} V^{\otimes n})/\cI(V) \] where $\cI(V)$ is the unique maximal coideal among all coideals of $T(V)$ which are contained in $\oplus_{n\geq 2}V^{\otimes n}$. The Nichols algebra $\cB(V)$ is said to be~\textit{of diagonal type} if $V$ is a \YD module of diagonal type. The dimension of $V$ is called the \textit{rank} of the Nichols algebra $\cB(V)$. \end{defn} Note that if $B$ is an algebra in $\ydH$ then $B\otimes B$ is an algebra in $\ydH$ with the product given by \begin{equation}\label{eq-alg} (a\otimes b)(c\otimes d)=a(b_{(-1)}\lact c)\otimes b_{(0)}d, \end{equation} for all $a, b, c, d\in B$, where $\lact$ denotes the left action of $H$ on $B$.
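For a braiding of diagonal type the braid equation holds automatically; we record the short check (included here for the reader's convenience): on basis vectors, for all $i,j,k\in I$,
\begin{align*}
(c\otimes\id)(\id\otimes c)(c\otimes\id)(x_i\otimes x_j\otimes x_k)&=q_{ij}q_{ik}q_{jk}\,x_k\otimes x_j\otimes x_i,\\
(\id\otimes c)(c\otimes\id)(\id\otimes c)(x_i\otimes x_j\otimes x_k)&=q_{jk}q_{ik}q_{ij}\,x_k\otimes x_j\otimes x_i,
\end{align*}
and the two scalars coincide since $\Bbbk$ is commutative.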
The \textit{tensor algebra} $T(V)$ admits a natural structure of a \YD module and an algebra structure in $\ydH$. It is then a braided Hopf algebra in $\ydH$ with coproduct $\delta(v)=1\otimes v+ v\otimes 1\in T(V)\otimes T(V)$ and counit $\epsilon(v)=0$ for all $v\in V$, such that $\delta$ and $\epsilon$ are algebra morphisms. The antipode of $T(V)$ exists, see \cite[Section 2.1]{inp-AndrSchn02}. Notice that the product defined by Equation~(\ref{eq-alg}) on $T(V)$ is the unique algebra structure such that $\delta(v)=1\otimes v+ v\otimes 1\in T(V)\otimes T(V)$ for all $v\in V$. The coproduct can be extended from $V$ to $T(V)$. For example, for all $v, w\in V$ one gets (we write the elements of $T(V)$ without the tensor product sign for brevity) \begin{align*} \begin{split} \delta(vw)=&\delta(v)\delta(w)\\ =&(1\otimes v+ v\otimes 1)(1\otimes w+ w\otimes 1)\\ =& 1\otimes vw+ v_{(-1)}.w\otimes v_{(0)}+v\otimes w+vw\otimes 1. \end{split} \end{align*} Let $(I_j)_{j\in \Lambda}$ be the family of all ideals of $T(V)$ contained in $\oplus_{n\geq 2}V^{\otimes n}$ which are also coideals, i.e. \[\delta(I_j)\subset I_j\otimes T(V)+ T(V)\otimes I_j.\] Then the ideal $\cI(V):=\sum_{j\in \Lambda}I_j$ is the largest element in $(I_j)_{j\in \Lambda}$. Hence $\cB(V)$ is a braided Hopf algebra in $\ydH$. As proved in \cite[Proposition 3.2.12]{a-AndrGr99}, the Nichols algebra $\cB(V)$ is the unique $\ndN_0$-graded braided Hopf algebra generated by $V$ in $\ydH$ with homogeneous components $\cB(V)(0)=\Bbbk$, $\cB(V)(1)=V$, and $P(\cB(V))=V$, where $P(\cB(V))$ is the space of primitive elements of $\cB(V)$. \subsection{Weyl groupoids} In this section, we recall the notions of semi-Cartan graphs, root systems and Weyl groupoids. We mainly follow the terminology from \cite{c-Heck09a} and \cite{Y-Heck08a}. See also~\cite{WH-14} and~\cite{W-17}.
\begin{defn} \it A \textit{generalized Cartan matrix} is a matrix $A =(a_{ij})_{i,j\in I}$ with integer entries such that \begin{itemize} \itemsep=0pt \item $a_{ii}=2$ and $a_{jk}\le 0$ for any $i,j,k\in I$ with $j\not=k$, \item if $a _{ij}=0$ for some $i,j\in I$, then $a_{ji}=0$. \end{itemize} \end{defn} \noindent A generalized Cartan matrix $A\in \ndZ ^{I \times I}$ is \textit{decomposable} if there exists a nonempty proper subset $I_1\subset I$ such that $a_{ij}=0$ for any $i \in I_1$ and $j \in I\setminus I_1$. We say that $A$ is \textit{indecomposable} if $A$ is not decomposable. \begin{defn} Let $\cX$ be a non-empty set and $A^X =(a_{ij}^X)_{i,j \in I}$ a generalized Cartan matrix for all $X\in \cX$. For any $i\in I$ let $r_i : \cX \to \cX$, $X \mapsto r(i,X)$, where $r: I\times \cX \to \cX$ is a map. The quadruple \[\cC = \cC (I, \cX, r, (A^X)_{X \in \cX}) \] is called a \textit{semi-Cartan graph} if $r_i^2 = \id_{\cX}$ for all $i \in I$, and $a^X_{ij} = a^{r_i(X)}_{ij}$ for all $X\in \cX$ and $i,j\in I$. We say that a semi-Cartan graph $\cC$ is~\textit{indecomposable} if $A^X$ is indecomposable for all $X\in \cX$. \end{defn} \noindent For the sake of simplicity, the elements of the set $\{r_i(X), i\in I\}$ are termed the \textit{neighbors} of $X$ for all $X\in \cX$. The cardinality of $I$ is termed the \textit{rank} of $\cC$ and the elements of $\cX$ are the \textit{points} of $\cC$. \begin{defn} The \textit{exchange graph} of $\cC$ is a labeled non-oriented graph with vertices set $\cX$ and edges set $I$, where two vertices $X, Y$ are connected by an edge $i$ if and only if $X\not=Y$ and $r_i(X)=Y$ (and $r_i(Y)=X$). We display one edge with several labels instead of several edges for simplification. A semi-Cartan graph $\cC$ is said to be \textit{connected} if its exchange graph is connected. \end{defn} \noindent For the remaining part of this section, we assume that $\cC = \cC (I, \cX, r, (A^X)_{X \in \cX})$ is a connected semi-Cartan graph. 
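To illustrate these notions with a small example of our own: the matrix
\[
A=\begin{pmatrix} 2 & -1\\ -2 & 2 \end{pmatrix}
\]
is an indecomposable generalized Cartan matrix (the Cartan matrix of type $B_2$), whereas
\[
\begin{pmatrix} 2 & 0\\ 0 & 2 \end{pmatrix}
\]
is decomposable, with $I_1=\{1\}$. A rank $2$ semi-Cartan graph attaches such a matrix $A^X$ to every point $X\in\cX$, subject to the compatibility $a^X_{ij}=a^{r_i(X)}_{ij}$ for all $i,j\in I$.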
Let $(\alpha_i)_{i\in I}$ be the standard basis of $\ndZ^I$. For all $X\in \cX$ let \begin{equation*} s_i^X\in \Aut(\ndZ^I), ~~ s_i^X \alpha _j=\alpha_j-a_{ij}^X \alpha_i \end{equation*} for all $j\in I$. Let $\cD(\cX, I)$ be the category such that Ob$\cD(\cX, I)=\cX$ and morphisms $\Hom(X,Y)=\{(Y,f,X)|f\in \End(\ndZ ^I)\}$ for $X, Y\in \cX$ with the composition $(Z,g,Y)\circ (Y,f,X)=(Z, gf, X)$ for all $ X, Y, Z\in \cX$, $f,g\in \End(\ndZ ^I)$. Let \textit{$\cW(\cC)$} be the smallest subcategory of $\cD(\cX, I)$ whose morphisms are generated by $(r_i(X), s_i^X, X)$, with $i\in I$, $X\in \cX$. From now on, we write $s_i^X$ instead of $(r_i(X), s_i^X, X)$, if no confusion is possible. Notice that all generators $s_i^X$ are reflections and hence are invertible. Then $\cW(\cC)$ is a groupoid. For any category $\cD$ and any object $X$ in $\cD$ let $\Hom(\cD, X)=\cup_{Y\in \cD}\Hom(Y,X)$. \begin{defn} For all $X\in \cX$, the set \begin{equation}\label{eq-realroots} \rersys{X}=\{\omega(\alpha_i) \in \ndZ^I|\,i\in I,\ \omega \in \Hom(\cW(\cC),X)\} \end{equation} is called the set of real roots of $\cC$ at $X$. The elements of $\rersys{X}_{\boldsymbol{+}}=\rersys{X}\cap \ndN_0^I$ are called positive roots and those of $\rersys{X}\cap -\ndN_0^I$ negative roots, denoted by $\rersys{X}_{\boldsymbol{-}}$. If the set $\rersys{X}$ is finite for all $X\in \cX$ then we say that $\cC$ is finite. \end{defn} \begin{defn} We say that $\cR=\cR(\cC,(\rsys ^{X})_{X\in \cX})$ is a \textit{root system of type $\cC$} if for all $X \in \cX $, the sets $\rsys ^X $ are subsets of $\ndZ ^I$ such that \begin{itemize} \itemsep=0pt \item $\rsys ^X=(\rsys^ X\cap \ndN _0^I)\cup -(\rsys ^X\cap \ndN _0^I)$. \item $\rsys ^X\cap \ndZ \al _i=\{\al _i,-\al _i\}$ for all $i\in I$. \item $s_i^X(\rsys ^X)= \rsys ^{r_i(X)}$ for all $i \in I$. \item $(r_i r_j)^{m_{ij}^X}(X) = X$ for any $i,j \in I$ with $i\not=j$, where $m_{ij}^X= |\rsys^X\cap (\ndN _0\al _i + \ndN _0 \al_j)|$ is finite.
\end{itemize} \end{defn} \noindent We say that $\cW(\cR):=\cW(\cC)$ is the groupoid of $\cR$. As in \cite[Definition 4.3]{c-Heck09b} we say that $\cR$ is \textit{reducible} if there exist non-empty disjoint subsets $I_1,I_2\subset I$ such that $I=I_1\cup I_2$, $a_{ij}^X=0$ for all $i\in I_1$, $j\in I_2$, $X\in \cX$, and \[ \rsys ^{X}=\Big(\rsys ^{X}\cap \sum _{i\in I_1}\ndZ \al _i\Big)\cup \Big(\rsys ^{X}\cap \sum _{j\in I_2}\ndZ \al _j\Big)\qquad \text{for all}\quad X\in \cX. \] In this case, we write $\cR =\cR |_{I_1}\oplus \cR |_{I_2}$. If $\cR \not=\cR |_{I_1}\oplus \cR |_{I_2}$ for all non-empty disjoint subsets $I_1,I_2\subset I$, then $\cR$ is termed \textit{irreducible}. \begin{defn} Let $\cR=\cR(\cC,(\rsys ^{X})_{X\in \cX})$ be a root system of type $\cC$. We say that $\cR$ is~\textit{finite} if $\rsys ^{X}$ is finite for all $X\in \cX$. \end{defn} Let $\cR=\cR(\cC,(\rsys ^{X})_{X\in \cX})$ be a root system of type $\cC$. We recall some properties of $\cR$ from \cite{c-Heck09b} and \cite{c-Heck12a}. \begin{lemma}\label{lem:jik} Let $X\in \cX$, $k \in \ndZ$, and $i, j\in I$ such that $i\not= j$. Then $\alpha _j + k\alpha_i\in \rersys{X}$ if and only if $0\leq k\leq -a_{ij}^X$. \end{lemma} Notice that the finiteness of $\cR$ does not immediately imply that $\cW(\cR)$ is finite, since the set $\cX$ might be infinite. Nevertheless, the following lemma shows that the two conditions are in fact equivalent. \begin{lemma}\label{lem:finite} Let $\cC = \cC (I, \cX, r, (A^X)_{X \in \cX})$ be a connected semi-Cartan graph and let $\cR=\cR(\cC,(\rsys ^{X})_{X\in \cX})$ be a root system of type $\cC$. Then the following are equivalent. \begin{itemize} \itemsep=0pt \item[$(1)$] $\cR$ is finite. \item[$(2)$] $\rsys^{X}$ is finite for some $X\in \cX$. \item[$(3)$] $\cC$ is finite. \item[$(4)$] $\cW(\cR)$ is finite. \end{itemize} \end{lemma} Recall that $\cC$ is a connected semi-Cartan graph. Then we get the following. \begin{prop}\label{prop.indecom} Let $\cR=\cR(\cC,(\rsys ^{X})_{X\in \cX})$ be a root system of type $\cC$. Then the following are equivalent.
\begin{itemize} \itemsep=0pt \item[$(1)$] There exists $X\in \cX$ such that $A^X$ is indecomposable. \item[$(2)$] The semi-Cartan graph $\cC$ is indecomposable. \end{itemize} If $\cR$ is finite then the semi-Cartan graph $\cC$ is indecomposable if and only if the root system $\cR$ is irreducible. \end{prop} \begin{defn}\label{def.Weylgroupoid} We say that $\cC$ is a \textit{Cartan graph} if the following hold: \begin{itemize} \itemsep=0pt \item For all $X\in \cX$, $\rersys{X}=\rersys{X}_{\boldsymbol{+}}\cup \rersys{X}_{\boldsymbol{-}}$. \item If $l_{mn}^Y:=|\rersys{Y}\cap (\ndN_0 \alpha_m+\ndN_0 \alpha_n)|$ is finite, then $(r_m r_n)^{l_{mn}^Y}(Y)=Y$, where $m, n\in I$, $Y\in \cX$. \end{itemize} In this case, $\cW(\cC)$ is called the \textit{Weyl groupoid} of $\cC$. \end{defn} Let $\cR^{re}:=\cR(\cC,(\rersys{X})_{X\in \cX})$. Then $\cC$ is a Cartan graph if and only if $\cR^{re}$ is a root system of type $\cC$. Indeed, we get that $s_i^X(\rersys{X})=\rersys{r_i(X)}$ by Equation (\ref{eq-realroots}). For all $X\in \cX$, we obtain that $\rersys{X}=\rersys{X}_{\boldsymbol{+}}\cup \rersys{X}_{\boldsymbol{-}}$, since $\omega s_i^{r_i(X)}(\al_i)=-\omega(\al_i)$ for any $\omega \in \Hom(\cW(\cC),X)$. The following proposition implies that if $\cR$ is a finite root system of type $\cC$ then $\cR=\cR^{re}$, namely, all roots are real and $\cR$ is uniquely determined by $\cC$. \begin{prop} \label{prop.allposroots} Let $\cR=\cR(\cC,(\rsys ^{X})_{X\in \cX})$ be a root system of type $\cC$. Let $X\in \cX$, $m\in \ndN _0$, and $i_1,\ldots ,i_m\in I$ such that \[\omega =\id_X s _{i_1} s_{i_2}\cdots s_{i_m}\in \Hom(\cW(\cC), X)\] and $\ell (\omega )=m$. Then the elements \[ \beta _n=\id_X s_{i_1} s_{i_2} \cdots s_{i_{n-1}}(\alpha_{i_n})\in \rsys^ X\cap \ndN _0^I \] are pairwise different, where $n\in \{1,2,\ldots ,m\}$ (and $\beta _1=\alpha _{i_1}$).
Here, \[\ell(\omega)=\min\{m\in \ndN_0|\,\omega=\id_X s_{i_1}s_{i_2}\cdots s_{i_m}, i_1, i_2, \ldots, i_m\in I\}\] is the length of $\omega\in \Hom(\cW(\cC), X)$. In particular, if $\cR$ is finite and $\omega \in \Hom (\cW (\cC ), X)$ is the longest element, then \[ \{\beta _n\,|\,1\le n\le \ell (\omega )=|\rsys ^{X}|/2\}=\rsys^ X\cap \ndN _0^I. \] \end{prop} \begin{rem} If $\cC$ is a finite Cartan graph then $$\cR^{re}=\cR^{re}(\cC,(\rersys{X})_{X\in \cX})$$ is the unique root system of type $\cC$ by Proposition~\ref{prop.allposroots}, that is, any root system of type $\cC$ is finite and uniquely determined by $\cC$. \end{rem} \subsection{Cartan graphs for Nichols algebras of diagonal type} In this section we attach a semi-Cartan graph to a tuple of finite-dimensional \YD modules under some finiteness conditions. Let $G$ be an abelian group. Let $\ffg$ be the set of $\theta$-tuples of finite-dimensional irreducible objects in $\ydD$ and $\fiso^G$ be the set of $\theta $-tuples of isomorphism classes of finite-dimensional irreducible objects in $\ydD$. For any $(M_1, \dots, M_{\theta})\in \ffg$, write $[M]:=([M_1], \dots, [M_{\theta}])\in \fiso^G$ for the corresponding tuple of isomorphism classes of $(M_1, \dots, M_{\theta})$. Assume that $V_M=\oplus_{i\in I}\Bbbk x_i\in \ydD$ is a \YD module of diagonal type over $G$, where $\{x_i|i\in I\}$ is a basis of $V_M$. Then there exists a matrix $(q_{ij})_{i,j\in I}$ such that $\delta(x_i)=g_i\otimes x_i$ and $g_i. x_j=q_{ij}x_j$ for all $i,j\in I$. We fix that $M=(\Bbbk x_1, \Bbbk x_2, \dots, \Bbbk x_{\theta})\in \ffg$ is a tuple of one-dimensional \YD modules over $G$ and $[M]\in \fiso^G$. We say that the matrix $(q_{ij})_{i,j\in I}$ is the \textit{braiding matrix of $M$}. Recall that the matrix is independent of the choice of the basis $\{x_i|i\in I\}$ up to a permutation of $I$. We say that $\cB(V_M)=\cB(\oplus_{i=1}^{\theta}\Bbbk x_i)$ is the Nichols algebra of the tuple $M$, denoted by $\cB(M)$.
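As an illustration (the choice of $G$ here is only an example), let $G=\ndZ^{\theta}$ with standard generators $g_1,\dots,g_{\theta}$ and let $(q_{ij})_{i,j\in I}$ be an arbitrary matrix with entries in $\Bbbk^*$. Setting $\delta(x_i)=g_i\otimes x_i$ and $g_i.x_j=q_{ij}x_j$ for all $i,j\in I$ defines a tuple $M=(\Bbbk x_1,\dots,\Bbbk x_{\theta})$ of one-dimensional \YD modules over $G$ with braiding matrix $(q_{ij})_{i,j\in I}$, and the braiding of $V_M$ is given by
\[
c(x_i\otimes x_j)=q_{ij}\,x_j\otimes x_i \qquad \text{for all } i,j\in I.
\]
In particular, every matrix $(q_{ij})_{i,j\in I}$ with entries in $\Bbbk^*$ arises as the braiding matrix of such a tuple.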
Recall that the adjoint representation $\ad$ of a Nichols algebra $\cB(V)$ from \cite{a-AndrSchn98} is the linear map $\ad_c: V\rightarrow \End(\cB(V))$ given by \[ \ad_{c} x(y)=\mu (\id-c)(x\otimes y)=xy-(x_{(-1)}\lact y)x_{(0)} \] for all $x\in V$, $y\in \cB(V)$, where $\mu$ is the multiplication map of $\cB(V)$ and $c$ is defined by Equation (\ref{def.brading}). In particular, the braided commutator $\ad_{c}$ of $\cB(M)$ takes the form \[\ad_{c}x_i(y)=x_i y-(g_i\lact y) x_i \quad \text{for all } i\in I,\ y\in \cB(M).\] In order to attach a semi-Cartan graph to $M$, we recall some finiteness conditions from \cite{inp-AndrSchn02} and \cite{HS10}. \begin{defn} Let $i\in I$. We say that $M$ is \textit{$i$-finite}, if for any $j\in I\setminus \{i\}$, $(\ad_{c} x_i)^m (x_j)=0$ for some $m\in \ndN$. \end{defn} \begin{lemma}\label{le:aijM} For any $i,j\in I$ with $i\not=j$, the following are equivalent. \begin{itemize} \item $(m+1)_{q_{ii}}(q_{ii}^mq_{ij}q_{ji}-1)=0$ and $(k+1)_{q_{ii}}(q_{ii}^kq_{ij}q_{ji}-1)\not=0$ for all $0\leq k<m$. \item $(\ad_{c}x_i)^{m+1}(x_j)=0$ and $(\ad_{c}x_i)^m(x_j)\not=0$ in $\cB (V)$. \end{itemize} Here $(n)_q:=1+q+\cdots+q^{n-1}$, which is $0$ if and only if $q^n=1$ for $q\not=1$ or $p|n$ for $q=1$. Notice that $(1)_q=1\not=0$ for any $q\in \Bbbk$. \end{lemma} Hence we get the following from Lemma~\ref{le:aijM}. \begin{lemma}\label{lem:aij} Let $i\in I$. Then $M=(\Bbbk x_j)_{j\in I}$ is $i$-finite if and only if for any $j\in I\setminus\{i\}$ there is a non-negative integer $m$ satisfying $(m+1)_{q_{ii}}(q_{ii}^mq_{ij}q_{ji}-1)=0$. \end{lemma} Let $i\in I$. Assume that $M$ is $i$-finite.
Let $(a_{ij}^{M})_{j\in I}\in \ndZ^I$ and $R_i(M)=({R_i(M)}_j)_{j\in I}$, where \begin{align*} a_{ij}^M=& \begin{cases} 2& \text{if $j=i$,}\\ -\mathrm{max}\{m\in \ndN_0 \,|\,(\ad_c x_i)^m(x_j)\not=0 \}& \text{if $j\not=i$.} \end{cases} \end{align*} \begin{equation}\label{eq-ri} {R_i(M)}_i= \Bbbk y_i,\qquad {R_i(M)}_j= \Bbbk(\ad_{c}x_i)^{-a_{ij}^M}(x_j), \end{equation} where $y_i\in (\Bbbk x_i)^*\setminus \{0\}$. If $M$ is not $i$-finite, then let $R_i(M)=M$. Then $R_i(M)$ is a $\theta$-tuple of one-dimensional \YD modules over $G$. Let \[ \ffg(M)=\{R_{i_1} \cdots R_{i_n}(M)\in \ffg|\, n\in \ndN_0, i_1,\dots, i_n\in I\} \] and \[ \fiso^G(M)=\{[R_{i_1} \cdots R_{i_n}(M)]\in \fiso^G |\,n\in \ndN_0, i_1,\dots, i_n\in I\}. \] \begin{defn}\label{defn-admitsallref} We say that $M$ \textit{admits all reflections} if $N$ is $i$-finite for all $N\in \ffg(M)$ and $i\in I$. \end{defn} Notice that the reflections depend only on the braiding matrix $(q_{ij})_{i,j\in I}$. We recall the notion of generalized Dynkin diagram for braided vector spaces of diagonal type~\cite{a-Heck04e}. \begin{defn} Let $V$ be a $\theta$-dimensional braided vector space of diagonal type with the braiding matrix $(q_{ij})_{i,j\in I}$. The \textit{generalized Dynkin diagram} of $V$ is a non-directed graph $\cD$ with the following properties: \begin{itemize} \itemsep=0pt \item there is a bijective map $\phi$ from $I$ to the vertices of $\cD$, \item for all $i\in I$ the vertex $\phi (i)$ is labeled by $q_{ii}$, \item for all $i,j\in I$ with $i\not=j$, the number $n_{ij}$ of edges between $\phi (i)$ and $\phi (j)$ is either $0$ or $1$. If $q_{ij}q_{ji}=1$ then $n_{ij}=0$, otherwise $n_{ij}=1$ and the edge is labeled by $q_{ij}q_{ji}.$ \end{itemize} \end{defn} We say that the \textit{generalized Dynkin diagram of $M$} is the generalized Dynkin diagram of the braided vector space $\oplus_{i\in I}M_i$. Notice that the generalized Dynkin diagram of $M$ is connected if the braiding matrix of $M$ is indecomposable.
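As a small worked example (with $q\in \Bbbk^*$ an arbitrary parameter), let $\theta=2$ and let $M$ have the braiding matrix
\[
(q_{ij})_{i,j\in I}=\begin{pmatrix} q^2 & q^{-1}\\ q^{-1} & q^2 \end{pmatrix}, \qquad q^2\not=1.
\]
Then $q_{12}q_{21}=q^{-2}$, hence $(1)_{q_{11}}(q_{12}q_{21}-1)=q^{-2}-1\not=0$ and $(2)_{q_{11}}(q_{11}q_{12}q_{21}-1)=0$. By Lemmas~\ref{le:aijM} and~\ref{lem:aij}, $M$ is $1$-finite with $a_{12}^M=-1$, and by symmetry $a_{21}^M=-1$. The generalized Dynkin diagram of $M$ consists of two vertices labeled $q^2$ joined by a single edge labeled $q^{-2}$.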
In more detail, one can obtain the labels of the generalized Dynkin diagram of $R_i(M)=(R_i(M)_j)_{j\in I}$ by the following lemma. \begin{lemma}\label{le:Dynkin} Let $i\in I$. Assume that $M$ is $i$-finite and let $a_{ij}:=a_{ij}^M$ for all $j\in I$. Let $(q'_{jk})_{j,k\in I}$ be the braiding matrix of $R_i(M)$ with respect to $(y_j)_{j\in I}$. Then \begin{gather*} q_{jj}'= \begin{cases} q_{ii} & \text{if $j=i$},\\ q_{jj} & \text{if $j\not=i$, $q_{ij}q_{ji}=q_{ii}^{a_{ij}}$},\\ q_{ii}q_{jj}{(q_{ij}q_{ji})^{-a_{ij}}} & \text{if $j\not=i$, $q_{ii}\in G'_{1-a_{ij}}$},\\ q_{jj}{(q_{ij}q_{ji})^{-a_{ij}}} & \text{if $j\not=i$, $q_{ii}=1$}, \end{cases} \end{gather*} \begin{gather*} q_{ij}'q_{ji}'= \begin{cases} q_{ij}q_{ji} & \text{if $j\not=i$, $q_{ij}q_{ji}=q_{ii}^{a_{ij}}$},\\ q_{ii}^2(q_{ij}q_{ji})^{-1} & \text{if $j\not=i$, $q_{ii}\in G'_{1-a_{ij}}$},\\ (q_{ij}q_{ji})^{-1} & \text{if $j\not=i$, $q_{ii}=1$}, \end{cases} \end{gather*} and \begin{gather*} q_{jk}'q_{kj}'= \begin{cases} q_{jk}q_{kj} & \text{if $q_{ir}q_{ri}=q_{ii}^{a_{ir}}$, $r\in \{j, k\}$},\\ q_{jk}q_{kj}(q_{ik}q_{ki}q_{ii}^{-1})^{-a_{ij}}& \text{if $q_{ij}q_{ji}=q_{ii}^{a_{ij}}$, $q_{ii}\in G'_{1-a_{ik}}$},\\ q_{jk}q_{kj}(q_{ij}q_{ji})^{-a_{ik}}(q_{ik}q_{ki})^{-a_{ij}} & \text{if $q_{ii}=1$,}\\ q_{jk}q_{kj}q_{ii}^{2}(q_{ij}q_{ji}q_{ik}q_{ki})^{-a_{ij}} & \text{if $q_{ii}\in G'_{1-a_{ik}}$, $q_{ii}\in G'_{1-a_{ij}}$}. \end{cases} \end{gather*} for $j, k\not=i$, $j\not=k$. Here, $G_n'$ denotes the set of primitive $n$-th roots of unity in $\Bbbk$, that is, $G'_n=\{q\in \Bbbk^*|\,\, q^n=1, q^k\not=1~\text{for all}~ 1\leq k < n\}$ for $n\in \ndN$. \end{lemma} If $M$ admits all reflections, then we are able to construct a semi-Cartan graph $\cC(M)$ of $M$ by the following theorem. \begin{theorem}\label{theo.regualrcar} Assume that $M$ admits all reflections.
For all $X\in \fiso^G(M)$ let \[[X]_{\theta}=\{Y\in \fiso^G(M)| \,\text{$Y$ and $X$ have the same generalized Dynkin diagram}\}.\] Let $\cY_{\theta}(M)=\{[X]_{\theta} |\, X\in \fiso^G(M)\}$ and $A^{[X]_{\theta}}=A^X$ for all $X\in \fiso^G(M)$. Let $t: I\times \cY_{\theta}(M)\rightarrow \cY_{\theta}(M)$, $(i, [X]_{\theta})\mapsto [R_i(X)]_{\theta}$. Then the tuple \[ \cC(M)=\cC(I, \cY_{\theta}(M), t, (A^Y)_{Y\in \cY_{\theta}(M)}) \] is a connected semi-Cartan graph. We say that $\cC(M)$ is the semi-Cartan graph attached to $M$. \end{theorem} \begin{proof} Let $X\in \fiso^G(M)$. Since $M$ admits all reflections, all entries of $A^X$ are finite. Moreover, if $a_{ij}^X=0$ then $a_{ji}^X=0$ by Lemma \ref{lem:aij}. Hence $A^X$ is a well-defined generalized Cartan matrix for all $X\in \fiso^G(M)$. For any $X, Y\in \fiso^G(M)$, if $X$ and $Y$ have the same generalized Dynkin diagram then $A^X=A^Y$ and hence $A^{[X]_{\theta}}=A^{[Y]_{\theta}}$. Then $A^{[X]_{\theta}}$ is well-defined for all $X\in \fiso^G(M)$. Hence $\{A^Y\}_{Y\in \cY_{\theta}(M)}$ is a family of generalized Cartan matrices. Besides, if $N$ is $i$-finite then $a_{ij}^N=a_{ij}^{R_i(N)}$ and $R^2_i(N)=N$ for all $N\in \fiso^G(M)$ by \cite[Theorem 3.12(2)]{a-AHS08}. Hence $t_i$ is a reflection map for all $i\in I$. Then $\cC(M)$ is a well-defined semi-Cartan graph. From the construction of the reflection $R_i$ by Equation (\ref{eq-ri}) we obtain that $\cC(M)$ is connected. \end{proof} Furthermore, one can attach a groupoid $\cW(M):=\cW(\cC(M))$ to $M$ if $M$ admits all reflections. Notice that the Nichols algebra $\cB(M)$ is $\ndN_0^{\theta}$-graded with $\deg M_i=\al_i $ for all $i\in I$.
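The grading can be made explicit on adjoint elements. For instance, for $i\not=j$ the element
\[
(\ad_c x_i)(x_j)=x_ix_j-(g_i\lact x_j)x_i=x_ix_j-q_{ij}x_jx_i
\]
is homogeneous of degree $\alpha_i+\alpha_j$, and more generally $(\ad_c x_i)^m(x_j)$ is homogeneous of degree $m\alpha_i+\alpha_j$ for all $m\in \ndN_0$.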
Following the terminology in~\cite{HS10}, we say that the Nichols algebra $\cB(M)$ is \textit{decomposable} if there exists a totally ordered index set $(L,\le)$ and a sequence $(W_l)_{l\in L}$ of finite-dimensional irreducible $\ndN _0^\theta $-graded objects in $\ydD $ such that \begin{equation}\label{eq-decom} \cB (M)\simeq \bigotimes _{l\in L}\cB (W_l). \end{equation} For each decomposition~(\ref{eq-decom}), we define the set of~\textit{positive roots} $\rsys^{[M]}_{+}\subset \ndZ^I$ and the set of~\textit{roots} $\rsys^{[M]}\subset \ndZ^I$ of $[M]$ by $$\rsys^{[M]}_{+}=\{\deg(W_l)|\, l\in L\}, \quad \rsys^{[M]}=\rsys^{[M]}_{+}\cup-\rsys^{[M]}_{+}.$$ By~\cite[Theorem~4.5]{HS10} we obtain that the set of roots $\rsys^{[M]}$ of $[M]$ does not depend on the choice of the decomposition. \begin{rem}\label{rem-decom} If $\dim M_i=1$ for all $i\in I$, then the Nichols algebra $\cB(M)$ is decomposable based on the theorem of V.~Kharchenko~\cite[Theorem~2]{a-Khar99}. Hence the set of roots of the Nichols algebra $\cB(M)$ can always be defined; it is denoted by $\rsys^{[M]}$. If the set of roots $\rsys^{[M]}$ is finite then we can check that $M$ admits all reflections by \cite[Corollary 6.12]{HS10}. \end{rem} If $M$ admits all reflections and $\rsys^{[M]}$ is finite, then we can define a finite root system $\cR(M):=\cR(\cC(M),(\rsys^{[N]})_{N\in \ffg(M)})$ of type $\cC(M)$. \begin{theorem} \label{thm:rootofR_M} Assume that $M$ admits all reflections. Then the following are equivalent. \begin{itemize} \itemsep=0pt \item[$(1)$] $\rsys^{[M]}$ is finite. \item[$(2)$] $\cC(M)$ is a finite Cartan graph. \item[$(3)$] $\cW(M)$ is finite. \item[$(4)$] $\cR(M):=\cR(\cC(M),(\rsys^{[N]})_{N\in \ffg(M)})$ is finite. \end{itemize} In all cases, $\cR(M)$ is the unique root system of type $\cC(M)$.
\end{theorem} \begin{proof} Since $M$ is a $\theta$-tuple of one-dimensional \YD modules, we obtain that the Nichols algebra $\cB(M)$ generated by $V_M$ is decomposable and hence $\rsys^{[M]}$ is defined. Then $\cR(M)=\cR(\cC(M),(\rsys^{[N]})_{N\in \ffg(M)})$ is a root system of type $\cC(M)$ by \cite[Theorem 6.11]{HS10}. Hence the claim is true by Lemma~\ref{lem:finite}. \end{proof} \section{Finite Cartan graphs of rank 4}\label{se:Rank4Cartan} We give the properties of finite connected indecomposable Cartan graphs of rank 4 in Theorem~\ref{Theo:goodnei}, which will be essential for our classification in the next section. Let $I=\{1,2,3,4\}$ and $\cC = \cC (I, \cX, r, (A^X)_{X \in \cX})$ be a semi-Cartan graph. Recall that a semi-Cartan graph $\cC$ is standard if $A^X = A^Y$ for all $X, Y \in Ob(\cW(\cC))$. To classify all finite Cartan graphs, the following fact~\cite[Corollary 5.4]{HS10} is necessary. It shows an important property of finite standard Cartan graphs. \begin{theorem}~\label{thm:CGfinite} Let $\cC$ be a standard Cartan graph with generalized Cartan matrix $A = A^N$ for all $N\in Ob(\cW(\cC))$. Let $\cR$ be a root system of type $\cC$. Then the groupoid $\cW(\cR)$ is finite if and only if $A$ is a Cartan matrix of finite type. \end{theorem} For general Cartan graphs, the points of $\cC$ could have many different neighborhoods. In this case, we define the following ``good neighborhoods'' in order to cover all finite connected indecomposable Cartan graphs, in such a way that at least one point of $\cC$ has one of the good neighborhoods.
To avoid confusion, let $A_4=\begin{pmatrix}2&-1&0&0\\ -1&2&-1&0\\ 0&-1&2&-1\\ 0&0&-1&2 \end{pmatrix}$, $B_4=\begin{pmatrix}2&-1&0&0\\ -1&2&-1&0\\ 0&-1&2&-1\\ 0&0&-2&2 \end{pmatrix}$,\\ $C_4=B_4'=\begin{pmatrix}2&-1&0&0\\ -1&2&-1&0\\ 0&-1&2&-2\\ 0&0&-1&2 \end{pmatrix}$, $D_4=\begin{pmatrix}2&-1&0&0\\ -1&2&-1&-1\\ 0&-1&2&0\\ 0&-1&0&2 \end{pmatrix}$, and $F_4=\begin{pmatrix}2&-1&0&0\\ -1&2&-2&0\\ 0&-1&2&-1\\ 0&0&-1&2 \end{pmatrix}$. \begin{defn}\label{defA4} We say that $X$ has a \textbf{good $A_4$ neighborhood} if there exists a permutation of $I$ and a pair of integers $(a,b)\in \ndN^2$ such that\\ $A^X=A^{r_1(X)}=A_4$, $A^{r_2(X)}=\begin{pmatrix}2&-1&0&0\\-1&2&-1&0\\0&-1&2&-a\\0&0&-1&2\end{pmatrix}$, $A^{r_3(X)}=\begin{pmatrix}2&-1&0&0\\-1&2&-1&-1\\0&-1&2&-1\\0&-1&-1&2\end{pmatrix}$, and $A^{r_4(X)}=\begin{pmatrix}2&-1&0&0\\-1&2&-1&0\\0&-b&2&-1\\0&0&-1&2\end{pmatrix}$, where $(a,b)$ satisfies one of the following. \begin{enumerate} \item[$(1)$] $(a,b)\in \{(2,1),(2,2)\}$. \item[$(2)$] $(a,b)=(1,2)$, $a_{24}^{r_1r_3(X)}=-1$. \item[$(3)$] $(a,b)=(1,1)$, $a_{14}^{r_2r_3(X)}=a_{41}^{r_2r_3(X)}\in \{0,-1\}$. \end{enumerate} \end{defn} \begin{defn}\label{defB4} We say that $X$ has a \textbf{good $B_4$ neighborhood} if there is a permutation of $I$ with respect to which\\ $A^X=A^{r_i(X)}=B_4$ for all $i\in I$ and $a^{r_3r_4(X)}_{24}=-1$. \end{defn} The following property of finite connected indecomposable Cartan graphs is obtained by computer calculations. \begin{theorem}\label{Theo:goodnei} Let $M:=(\Bbbk x_1,\dots, \Bbbk x_4)$ be a tuple of one-dimensional \YD modules over $\Bbbk G$. Assume that $M$ admits all reflections and let $\cC(M) = \cC (I, \cX, r, (A^X)_{X \in \cX})$ be the indecomposable semi-Cartan graph attached to $M$. If $\rsys^{[M]}$ is finite, then one of the following is true. \begin{enumerate} \item[$(1)$] The Cartan graph $\cC(M)$ is standard (of type $A_4$, $B_4$, $C_4$, $D_4$, $F_4$).
\item[$(2)$] Up to equivalence, there exists a point $Y\in \cX$ such that $Y$ has one of the good $A_4$, $B_4$ neighborhoods. \end{enumerate} \end{theorem} \begin{proof} If $\rsys^{[M]}$ is finite, then $\cC(M)$ is a finite Cartan graph and it has a unique finite root system $\cR(M)=\cR(\cC,(\rersys{X})_{X\in \cX})$ by Theorem \ref{thm:rootofR_M}, where $\rersys{X}$ is the set of real roots at $X$. Moreover, the root system $\cR(M)$ is irreducible by Proposition \ref{prop.indecom}. For any $X\in \cX$, let $\rersys{X}_{\boldsymbol{+}}$ be the set of positive roots at $X$. By~\cite[Theorem~4.1]{c-Heck14a} there exists a point $X\in \cX$ such that the set $\rersys{X}_{\boldsymbol{+}}$ appears, up to a permutation of $I$, in the list of~\cite[Appendix~B.2]{c-Heck14a}. There are precisely 11 such possible sets of real roots for the rank 4 case. We analyze each set of real roots in the list. For a point $Y$, assume that $\rersys{Y}_{\boldsymbol{+}}$ is in the list. Since the reflection $s_i^Y$ maps $\rersys{Y}_{\boldsymbol{+}}\setminus \{\alpha_i\}$ bijectively to $\rersys{r_i(Y)}_{\boldsymbol{+}}\setminus \{\alpha_i\}$ for any $i\in I$, the Cartan matrices of all neighbors of $Y$ can be obtained from $\rersys{Y}_{\boldsymbol{+}}$ by Lemma~\ref{lem:jik}. If the Cartan graph $\cC(M)$ is standard or $Y$ has a good $A_4$ or $B_4$ neighborhood, then the claim is true. Otherwise we repeat the previous step for the neighbors of $Y$. Since $\cX$ is finite, this algorithm terminates. The elementary calculations are performed with GAP and are omitted here. \end{proof} \section{Classification theorem for rank 4 case}\label{se:clasi} In this section, all rank 4 Nichols algebras of diagonal type with a finite set of roots are determined. We formulate the main result in Theorem~\ref{theo:clasi} and present the corresponding generalized Dynkin diagrams in Table~\ref{tab.1}.
\begin{theorem}\label{theo:clasi} Let $\Bbbk$ be a field of characteristic $p>0$. Let $I=\{1,2,3,4\}$. Let $(V,c)$ be a braided vector space of diagonal type over $\Bbbk$ with basis $\{x_k|k\in I\}$ satisfying \begin{equation*} c(x_i \otimes x_j) = q_{ij}x_j \otimes x_i \quad \text{for some} \quad q_{ij} \in \Bbbk^*. \end{equation*} Assume that the braiding matrix $(q_{ij})_{i,j\in I}$ is indecomposable. Let $M:=(\Bbbk x_i)_{i\in I}$. Then the Nichols algebra $\cB(V)$ generated by $(V,c)$ has a finite set of roots ${\roots}^{[M]}$ if and only if the generalized Dynkin diagram $\cD$ of $V$ appears in Table~\ref{tab.1}. In this case, the row of Table~\ref{tab.1} containing $\cD$ consists precisely of the generalized Dynkin diagrams of all the points of $\cC(M)$. The corresponding row of Table~\ref{tab.2} contains the exchange graph of $\cC(M)$. \end{theorem} By~\cite[Corollary~6]{a-Heck04e}, Theorem~\ref{theo:clasi} is enough to classify the finite-dimensional Nichols algebras of diagonal type. \begin{cor}\label{coro-cla} Assume that $p>0$. Then the Nichols algebra $\cB(V)$ is finite dimensional if and only if the generalized Dynkin diagram $\cD$ of $V$ appears in Table~\ref{tab.1} and the labels of the vertices of $\cD$ are roots of unity (including $1$). \end{cor} \begin{rem} Before the proof, we fix the following conventions to avoid confusion. \begin{itemize} \item[$(1)$] Label the vertices of the generalized Dynkin diagrams from left to right and then top to bottom by $1, \ldots, 4$ (for example: label $D_4$ as in Fig.\ \ref{fig_D4}). \begin{figure}[h!]
\begin{center} \setlength{\unitlength}{3947sp} \begin{picture}(3348,928)(500,-705) \thicklines \put(700,-300){$\cD:$} \put(1250,-211){\line( 1, 0){500}} \put(1860,-170){\line( 1, 1){400}} \put(1840,-270){\line( 1,-1){400}} \put(1726,-436){\makebox(0,0)[lb]{$2$}} \put(2480,-736){\makebox(0,0)[lb]{$4$}} \put(1126,-436){\makebox(0,0)[lb]{$1$}} \put(2476,164){\makebox(0,0)[lb]{$3$}} \put(1201,-211){\circle{100}} \put(1801,-211){\circle{100}} \put(2300,230){\circle{100}} \put(2300,-701){\circle{100}} \end{picture} \end{center} \caption{Generalized Dynkin diagram of type $D_4$} \label{fig_D4} \end{figure} \item[$(2)$] For a generalized Dynkin diagram $\cD$ and $i,j,k,l\in I$, write $\tau_{ijkl} \cD$ for the graph $\cD$ in which the vertices $1$, $2$, $3$, $4$ of $\cD$ are relabeled by $i$, $j$, $k$, $l$, respectively. \end{itemize} \end{rem} \begin{proof} We prove the theorem by the following two steps: \begin{enumerate} \item[(1)] We first prove the if part. Assume that the generalized Dynkin diagram $\cD$ appears in row $r$ of Table~\ref{tab.1}. From Lemmas~\ref{lem:aij} and~\ref{le:Dynkin} we determine that $M$ admits all reflections. The detailed calculations are omitted here. Hence the Cartan graph $\cC(M)$ can be defined by Theorem~\ref{theo.regualrcar}. Notice that $\cC(M)$ is the same as the Cartan graph obtained from the Dynkin diagrams listed in row $s$ of Table~3 in~\cite{a-Heck09}, where $s$ appears in the third column of row $r$ of Table~\ref{tab.2}.
Moreover, the arithmetic root systems of the above Cartan graphs are finite, see~\cite[Theorem~17]{a-Heck09}. Hence $\cW(\cC(M))$ is finite. Then $\cB(V)$ has a finite set of roots ${\roots}^{[M]}$ by Theorem~\ref{thm:rootofR_M}. \item[(2)] Next we prove that if $\cB(V)$ has a finite set of roots then the generalized Dynkin diagram of $V$ appears in Table~\ref{tab.1}. We assume that $\cB(V)$ has a finite set of roots ${\roots}^{[M]}$. Let $X=[M]^s_4$, $I=\{1,2,3,4\}$, and $A^X:=(a_{ij})_{i,j\in I}$ be the Cartan matrix of $X$. Let $(q_{i,j})_{i,j\in I}$ be the braiding matrix of $X$ and $(q_{i,j}^{r_i(X)})_{i,j\in I}$ be the braiding matrix of $r_i(X)$. To simplify the notation, we write $q_{ij}':=q_{ij}q_{ji}$ for $1\leq i,j\leq 4$. Since $\cB(V)$ has a finite set of roots ${\roots}^{[M]}$, we obtain that $\cC(M)$ is a finite Cartan graph by Theorem~\ref{thm:rootofR_M}. We are free to assume that either $\cC(M)$ is standard or there exists a point $X$ such that $X$ has a good $A_4$ or $B_4$ neighborhood by Theorem~\ref{Theo:goodnei}. Case (a). Assume that either $X$ has a good $A_4$ neighborhood or $\cC(M)$ is standard of type $A_4$. Let $(a,b):=(-a_{34}^{r_2(X)}, -a_{32}^{r_4(X)})$. From~Lemma~\ref{le:Dynkin} the condition $A^X=A_4$ implies that $(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j-1,j}'-1)=0$ for all $i\in \{1,2,3\}$ and $j\in \{2,3,4\}$. Hence we distinguish the following subcases: $a_1$, $a_2$, $\dots$, $a_{18}$. Subcase $a_1$. Consider that $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for all $i\in \{1,2,3\}$ and $j\in \{2,3,4\}$. Then $\cC(M)$ is standard and $\cD=\cD_{11}$. Subcase $a_2$. Consider that $q_{11}=-1$ and $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for all $i\in \{2,3\}$ and $j\in \{2,3,4\}$. To avoid repetition we assume that $q_{12}'\not=-1$. Otherwise we get the case in $a_1$. Then $\cC(M)$ is standard and $\cD=\cD_{61}$. Subcase $a_3$.
Consider that $q_{22}=-1$ and $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for all $i\in \{1,3\}$ and $j\in \{3,4\}$. Assume that the condition $\{q_{12}',q_{23}'\}=\{-1\}$ does not hold in this case to avoid repetition. Then from $a_{13}^{r_2(X)}=0$ and~\cite[Lemma 1.4]{W-17} we get that $q_{12}'q_{23}'=1$. Then $q_{12}'\not=-1$ and $\cC(M)$ is standard. Hence $\cD=\cD_{10,1}$. Subcase $a_4$. Consider that $q_{33}=-1$ and $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for all $i\in \{1,2\}$ and $j\in \{2,4\}$. Assume that the condition $\{q_{23}',q_{34}'\}=\{-1\}$ does not hold in this case to avoid repetition. From~\cite[Lemma 1.4]{W-17} we get $(a,b)=(1,1)$. Let $q:=q_{23}'$ and $r:=q_{34}'$. Then the condition $\{q,r\}=\{-1\}$ does not hold. If $\cC(M)$ is standard, then $qr=1$ and $q\not=-1$ by~\cite[Lemma 1.4]{W-17}. Hence $\cD=\tau_{4321}\cD_{10,1}$. If $X$ has a good $A_4$ neighborhood, then $a_{14}^{r_2r_3(X)}=a_{41}^{r_2r_3(X)}\in \{0,-1\}$ by Definition~\ref{defA4}(3). If $a_{14}^{r_2r_3(X)}=0$, then $q^2r=1$, $q\not=-1$, and hence $\cD=\cD_{13,1}$. If $a_{14}^{r_2r_3(X)}=a_{41}^{r_2r_3(X)}=-1$, then $q^2r\not=1$ and the reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{3}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$-1$}{$r$}{$r^{-1}$} \quad \Rightarrow r_3(X):~ \Drightofway{m}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$qr$}{$-1$}{$r^{-1}$}{$-1$} \end{align*} \begin{align*} \quad \Rightarrow r_2r_3(X):~ \tau_{3214} \ \ \Drightofway{}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$(qr)^{-1}$}{$-1$}{$q^2r$}{$qr$} \end{align*} imply that $qr=-1$ or $q^3r^2=1$ by~Lemma~\ref{le:Dynkin}. If $qr=-1$, then $q\not=-1$, $p\not=2$, and $\cD=\cD_{14,1}$. If $q^3r^2=1$, then $\cD=\cD_{94}$. Subcase $a_5$. Consider that $q_{44}=-1$, $q_{34}'\not=-1$, and $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for all $i\in \{1,2,3\}$ and $j\in \{2,3\}$. Then $\cC(M)$ is standard and $\cD=\tau_{4321}\cD_{61}$. Subcase $a_6$.
Consider that $q_{11}=q_{22}=-1$, $q_{12}'\not=-1$, and $q_{33}q_{34}'-1=q_{ii}q_{i-1,i}'-1=0$ for all $i\in \{3,4\}$. Then $A^{r_3(X)}=A_4$ and hence $\cC(M)$ is standard by the assumption. Then $q_{12}'q_{23}'=1$ by~\cite[Lemma 1.4]{W-17} and hence $\cD=\cD_{62}$. Subcase $a_7$. Consider that $q_{11}=q_{33}=-1$, $q_{12}'\not=-1$, and $q_{22}q_{23}'-1=q_{ii}q_{i-1,i}'-1=0$ for all $i\in \{2,4\}$. Then we get $(a,b)=(1,1)$. Let $q:=q_{23}'$ and $r:=q_{34}'$. Then $q\not=-1$. If $\cC(M)$ is standard, then $qr=1$ and $\cD=\cD_{10,3}$. If $X$ has a good $A_4$ neighborhood, then $a_{41}^{r_2r_3(X)}=a_{14}^{r_2r_3(X)}\in \{0,-1\}$ from Definition~\ref{defA4}(3). If $a_{41}^{r_2r_3(X)}=0$, then $q^2r=1$, $q\not=-1$, and hence $\cD=\cD_{12,3}$. If $a_{14}^{r_2r_3(X)}=a_{41}^{r_2r_3(X)}=-1$, then $q^2r\not=1$ and the reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{3}{$-1$}{$q$}{$q^{-1}$}{$q$}{$-1$}{$r$}{$r^{-1}$} \quad \Rightarrow \quad r_3(X):~ \Drightofway{m}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$qr$}{$-1$}{$r^{-1}$}{$-1$} \end{align*} \begin{align*} \quad \Rightarrow r_2r_3(X):~ \tau_{3214} \ \ \Drightofway{}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$(qr)^{-1}$}{$q$}{$q^2r$}{$qr$} \end{align*} show that $q^3r=1$, $qr=-1$, and $p\not=2$. Hence $\cD=\cD_{22,3}$. Subcase $a_8$. Consider that $q_{11}=q_{44}=-1$ and $q_{ii}q_{i,i+1}'-1=q_{ii}q_{i-1,i}'-1=0$ for all $i\in \{2,3\}$. To avoid repetition we assume that $q_{12}', q_{34}'\not=-1$. Then $\cC(M)$ is standard and $\cD=\cD_{10,5}$. Subcase $a_9$. Consider that $q_{22}=q_{33}=-1$ and $q_{11}q_{12}'-1=q_{44}q_{34}'-1=0$. Set $q:=q_{11}$, $r:=q_{23}'$, and $s:=q_{44}$. To avoid repetition we assume that the condition $\{q,r\}=\{-1\}$ does not hold. We obtain that $b=1$ by~Lemma~\ref{le:Dynkin}. If $\cC(M)$ is standard, then $q=r=s\not=-1$, and $\cD=\cD_{63}$. If $X$ has a good $A_4$ neighborhood, then $r=q\not=s$ and $(a,b)\in \{(1,1),(2,1)\}$ by Definition~\ref{defA4}.
Hence the reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} r_3(X):~ \Drightofway{t}{$r$}{$r^{-1}$}{$r$}{$r^{-1}$}{$rs^{-1}$}{$-1$}{$s$}{$-1$} \Rightarrow X:~ \Dchainfour{2}{$r$}{$r^{-1}$}{$-1$}{$r$}{$-1$}{$s^{-1}$}{$s$} \end{align*} \begin{align*} \quad \Rightarrow r_2(X):~ \Dchainfour{}{$-1$}{$r$}{$-1$}{$r^{-1}$}{$r$}{$s^{-1}$}{$s$} \end{align*} imply that $a=2$ by $r\notin \{-1,s\}$ and $s=r^2$ by $a_{24}^{r_3(X)}=-1$. Then $\cD=\cD_{83}$. Subcase $a_{10}$. Consider that $q_{22}=q_{44}=-1$, $q_{34}'\not=-1$, and $q_{ii}q_{i,i+1}'-1=q_{33}q_{23}'-1=0$ for all $i\in \{1,3\}$. To avoid repetition we assume that the condition $\{q_{12}',q_{23}'\}=\{-1\}$ does not hold. Then $\cC(M)$ is standard by the assumption and $A^{r_3(X)}=A_4$. Hence $q_{12}'q_{23}'=1$ by~\cite[Lemma 1.4]{W-17} and $\cD=\tau_{4321}\cD_{10,3}$. Subcase $a_{11}$. Consider that $q_{33}=q_{44}=-1$, $q_{34}'\not=-1$, and $q_{ii}q_{i,i+1}'-1=q_{22}q_{12}'-1=0$ for all $i\in \{1,2\}$. Let $q:=q_{11}$ and $r:=q_{34}'$. Then $r\not=-1$. If $\cC(M)$ is standard, then $q=r$ by~\cite[Lemma 1.4]{W-17} and hence $\cD=\tau_{4321}\cD_{62}$. If $X$ has a good $A_4$ neighborhood, then $a=1$, $r\not=q$, and hence $(a,b)\in \{(1,1),(1,2)\}$ by Definition~\ref{defA4}. The reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} r_4(X):~ \Dchainfour{4}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$r$}{$r^{-1}$}{$-1$} \quad \Rightarrow ~ X:~ \Dchainfour{3}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-1$}{$r$}{$-1$} \end{align*} \begin{align*} \quad \Rightarrow ~ r_3(X):~ \Drightofway{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$rq^{-1}$}{$-1$}{$r^{-1}$}{$r$} \end{align*} show that $b=2$ by the condition $r\notin \{-1, q\}$. Then we obtain that $r^2q^{-1}=1$ by $a_{42}^{r_3(X)}=-1$ and $(3)_r(r^2q^{-1}-1)=0$ by $b=2$. If $r^2q^{-1}=1$ and $r^3\not=1$, then $\cD=\cD_{92}$. If $r^2q^{-1}=1$ and $r^3=1$, then $p\not=3$, $r\in G_3'$, $qr=1$, and $\cD=\cD_{18,2}$. Subcase $a_{12}$.
Consider that $q_{11}=q_{22}=q_{33}=-1$, $q_{12}'\not=-1$, and $q_{44}q_{34}'-1=0$. Let $q:=q_{12}'$, $r:=q_{23}'$, and $s:=q_{34}'$. Assume that the condition $\{r,s\}=\{-1\}$ does not hold. Then $A^{r_4(X)}=A_4$ and $b=1$. If $\cC$ is standard, then $qr=rs=1$ and hence $\cD=\cD_{10,2}$. If $X$ has a good $A_4$ neighborhood, then $qr=1$, $rs\not=1$, and the reflections of $X$ are \setlength{\unitlength}{1mm} \begin{align*} r_3(X):~ \Drightofway{t}{$-1$}{$q$}{$q^{-1}$}{$q$}{$sq^{-1}$}{$-1$}{$s^{-1}$}{$-1$} \quad \Rightarrow X:~ \Dchainfour{2}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$-1$}{$s$}{$s^{-1}$} \end{align*} \begin{align*} \quad \Rightarrow r_2(X):~ \Dchainfour{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$q^{-1}$}{$s$}{$s^{-1}$} \end{align*} with $q\notin \{-1,s\}$. Then we obtain $a=2$ by~Lemma~\ref{le:Dynkin} and $s=q^2$ by $a_{24}^{r_{3}(X)}=-1$. Hence $\cD=\cD_{12,2}$. Subcase $a_{13}$. Consider that $q_{11}=q_{22}=q_{44}=-1$, $q_{12}'\not=-1$, $q_{34}'\not=-1$, and $q_{33}q_{23}'-1=q_{33}q_{34}'-1=0$. Then $\cC$ is standard by $A^{r_3(X)}=A_4$ and the assumption. Hence $q_{12}'q_{23}'=1$ and $\cD=\tau_{4321}\cD_{10,4}$. Subcase $a_{14}$. Consider that $q_{11}=q_{33}=q_{44}=-1$, $q_{12}'\not=-1$, $q_{34}'\not=-1$, and $q_{22}q_{12}'-1=q_{22}q_{23}'-1=0$. Let $q:=q_{12}'$ and $r:=q_{34}'$. Then $q,r\not=-1$. Then $a=1$ by~Lemma~\ref{le:Dynkin}. If $\cC$ is standard, then $qr=1$ and $\cD=\cD_{10,4}$. If $X$ has a good $A_4$ neighborhood, then $qr\not=1$ and the reflections of $X$ are \setlength{\unitlength}{1mm} \begin{align*} r_4(X):~ \Dchainfour{4}{$-1$}{$q$}{$q^{-1}$}{$q$}{$r$}{$r^{-1}$}{$-1$} \quad \Rightarrow X:~ \Dchainfour{3}{$-1$}{$q$}{$q^{-1}$}{$q$}{$-1$}{$r$}{$-1$} \end{align*} \setlength{\unitlength}{1mm} \begin{align}\label{diaa14} \quad \Rightarrow r_3(X):~ \Drightofway{l}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$qr$}{$-1$}{$r^{-1}$}{$r$} \quad \Rightarrow r_1r_3(X):~ \Drightofway{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$qr$}{$-1$}{$r^{-1}$}{$r$} \end{align} with $q, r\not=-1$.
Then $qr^2=1$ by $a_{42}^{r_3(X)}=-1$ and hence $b=2$ by~Lemma~\ref{le:Dynkin}. Hence $(a,b)=(1,2)$ and $a_{24}^{r_1r_3(X)}=-1$ by Definition~\ref{defA4}(2). Then $q^2r=1$ by the reflections~(\ref{diaa14}) and~Lemma~\ref{le:Dynkin}. Hence $r=q\in G_3'$, $p\not=3$, and $\cD=\cD_{20,4}$. Subcase $a_{15}$. Consider that $q_{22}=q_{33}=q_{44}=-1$, $q_{34}'\not=-1$, and $q_{11}q_{12}'-1=0$. Let $q:=q_{12}'$, $r:=q_{23}'$, and $s:=q_{34}'$. Assume that the condition $\{q, r\}=\{-1\}$ does not hold. If $\cC$ is standard, then $qr=rs=1$ and $\cD=\tau_{4321}\cD_{10,2}$. If $X$ has a good $A_4$ neighborhood, then $qr=1$, $rs\not=1$, $r\not=-1$, and the reflections of $X$ are \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{3}{$r$}{$r^{-1}$}{$-1$}{$r$}{$-1$}{$s$}{$-1$} \quad \Rightarrow \quad r_3(X):~ \Drightofway{}{$r$}{$r^{-1}$}{$r$}{$r^{-1}$}{$rs$}{$-1$}{$s^{-1}$}{$s$} \end{align*} with $r,s\not=-1$. We obtain the equations $rs^2=r^2s=1$ by~Lemma~\ref{le:Dynkin} on $a_{24}^{r_3(X)}=a_{42}^{r_3(X)}=-1$. Then $s=r\in G_3'$, $p\not=3$, and $a=b=2$. Hence $\cD=\cD_{21,6}$. Subcase $a_{16}$. Consider that $q_{11}=q_{22}=q_{33}=q_{44}=-1$, $q_{12}'\not=-1$, and $q_{34}'\not=-1$. Let $q:=q_{12}'$, $r:=q_{23}'$, and $s:=q_{34}'$. We assume that $q,s\not=-1$ to avoid repetition. If $\cC$ is standard, then $qr=rs=1$, $q\not=-1$, and hence $\cD=\cD_{10,6}$. If $X$ has a good $A_4$ neighborhood, then $qr=1$, $sr\not=1$, and the reflection of $X$ is \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{3}{$-1$}{$r^{-1}$}{$-1$}{$r$}{$-1$}{$s$}{$-1$} \quad \Rightarrow \quad r_3(X):~ \Drightofway{}{$-1$}{$r^{-1}$}{$r$}{$r^{-1}$}{$rs$}{$-1$}{$s^{-1}$}{$s$} \end{align*} with $r, s\not=-1$. Then we get $rs^2=r^2s=1$ by~Lemma~\ref{le:Dynkin} on $a_{24}^{r_3(X)}=a_{42}^{r_3(X)}=-1$. Then $s=r\in G_3'$, $p\not=3$, and hence $\cD=\cD_{20,3}$. Case (b). Assume that either $X$ has a good $B_4$ neighborhood or $\cC(M)$ is standard of type $B_4$.
Then by~Lemma~\ref{le:Dynkin} on $A^X=B_4$ we get the equations $$(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j-1,j}'-1)=(3)_{q_{44}}(q_{44}^2q_{34}'-1)=0,$$ for all $i\in \{1,2,3\}$ and $j\in \{2,3\}$. Then we distinguish the subcases $b_1$, $b_2$, $\dots$, $b_{14}$ by~Lemma~\ref{le:Dynkin}. Subcase $b_1$. Consider that $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=q_{44}^2q_{34}'-1=0$ for all $i\in \{1,2,3\}$ and $j\in \{2,3\}$. Then $\cC(M)$ is standard and $\cD=\cD_{21}$. Subcase $b_2$. Consider that $q_{11}=-1$ and $q_{ii}q_{i,i+1}'-1=q_{ii}q_{i-1,i}'-1=q_{44}^2q_{34}'-1=0$ for all $i\in \{2,3\}$. In this case we assume that $q_{12}'\not=-1$ to avoid repetition. Then $\cC(M)$ is standard and $\cD=\cD_{71}$. Subcase $b_3$. Consider that there exists $k\in \{2,3\}$ such that $q_{kk}=-1$ and $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=q_{44}^2q_{34}'-1=0$ for any $i\in \{1,2,3\}\setminus \{k\}$ and $j\in \{2,3\}\setminus \{k\}$. We assume that the condition $\{q_{k,k-1}', q_{k,k+1}'\}=\{-1\}$ does not hold in this case to avoid repetition. We get $a_{13}^{r_k(X)}=0$ by assumption. Then $q_{k,k-1}'q_{k,k+1}'=1$ by~\cite[Lemma 1.4]{W-17} and hence $q_{k,k-1}'\not=-1$. Then $\cC$ is standard. We obtain that if $k=2$ then $\cD=\cD_{11,1}$ and if $k=3$ then $\cD=\cD_{74}$. Subcase $b_4$. Consider that $(3)_{q_{44}}=0$ and $q_{ii}q_{i,i+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for all $i\in \{1,2,3\}$ and $j\in \{2,3\}$. We assume that $q_{44}^2q_{34}'-1\not=0$ to avoid repetition. Set $q:=q_{22}$ and $\zeta:=q_{44}$. The reflection of $X$ is \setlength{\unitlength}{1mm} \begin{align*} X: \Dchainfour{4}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta$} \quad \Rightarrow \quad r_4(X): \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta q^{-1}$}{$q\zeta^{-1}$}{$\zeta$} \end{align*} with $(3)_{\zeta}=0$ and $q\notin \{\zeta, \zeta^{-1}\}$. We get that $\zeta q^{-2}=1$ or $\zeta q^{-1}=-1$ by~Lemma~\ref{le:Dynkin} on $A^{r_4(X)}=B_4$.
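The two alternatives just obtained can be solved explicitly. The following is a routine verification sketch, under the assumption $p\not=2,3$, in which case $(3)_\zeta=0$ forces $\zeta\in G_3'$ (the case $p=3$ is treated separately):

```latex
% Sketch, assuming p \neq 2,3, so that \zeta \in G_3' and -1 \neq 1.
\begin{align*}
\zeta q^{-2}=1 \;\Rightarrow\; q^{2}=\zeta,\qquad q^{6}=\zeta^{3}=1 .
\end{align*}
% The solutions of q^2 = \zeta are q = \zeta^{-1} and q = -\zeta^{-1},
% since (\pm\zeta^{-1})^2 = \zeta^{-2} = \zeta; the assumption
% q \notin \{\zeta, \zeta^{-1}\} leaves q = -\zeta^{-1}.
% Similarly, \zeta q^{-1} = -1 gives q = -\zeta directly; p \neq 2 is
% needed in both alternatives so that -\zeta^{\pm 1} \neq \zeta^{\pm 1}.
```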
Hence $q=-\zeta^{-1}$, $p\not=2$ or $q=-\zeta$, $p\not=2$. If $q=-\zeta^{-1}$ and $p\not=2, 3$, then $\cC(M)$ is standard and $\cD=\cD_{15,1}$. If $q=-\zeta$ and $p\not=2,3$, then $X$ has a good $B_4$ neighborhood and $\cD=\cD_{17,1}$. If $p=3$, then $\zeta=1$ and we get $q=-1$ in both cases $q=-\zeta^{-1}$ and $q=-\zeta$. Then $\cC(M)$ is standard and $\cD=\cD_{15',1}$. Subcase $b_5$. Consider that there exists $k\in \{2,3\}$ such that $q_{kk}=q_{11}=-1$ and $q_{44}^2q_{34}'-1=q_{jj}q_{j,j+1}'-1=q_{jj}q_{j-1,j}'-1=0$ for $j\in \{2,3\}\setminus \{k\}$. We assume that $q_{12}'\not=-1$. Then $q_{k,k-1}'q_{k,k+1}'=1$ by $A^{r_k(X)}=B_4$. Hence $\cC$ is standard. If $k=2$ then $\cD=\cD_{72}$. If $k=3$ then $\cD=\cD_{11,3}$. Subcase $b_6$. Consider that $q_{11}=-1$, $(3)_{q_{44}}=0$, $q_{12}'\not=-1$, $q_{44}^2q_{34}'-1\not=0$, and $q_{ii}q_{i,i+1}'-1=q_{ii}q_{i-1,i}'-1=0$, for all $i\in \{2,3\}$. Let $q:=q_{22}$ and $\zeta:=q_{44}$. Then $(3)_{\zeta}=0$ and $q\notin \{-1,\zeta,\zeta^{-1}\}$. By~Lemma~\ref{le:Dynkin} on $A^{r_4(X)}=B_4$ the reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} r_1(X): \Dchainfour{1}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta$} \Rightarrow X: \Dchainfour{4}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta$} \end{align*} \begin{align*} \Rightarrow r_4(X): \Dchainfour{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta q^{-1}$}{$q\zeta^{-1}$}{$\zeta$} \end{align*} imply that $\zeta q^{-2}=1$ or $\zeta q^{-1}=-1$. Then $q=-\zeta^{-1}$, $p\not=2$ or $q=-\zeta$, $p\not=2$. If $q=-\zeta^{-1}$ and $p\not=2, 3$, then $\cC(M)$ is standard and $\cD=\cD_{16,1}$. If $q=-\zeta^{-1}$ and $p=3$, then $q=-1$ and $\cD=\cD_{15',1}$. If $q=-\zeta$, then one of the generalized Dynkin subdiagrams of $r_1(X)$ is~ \Dchainthree{}{$-1$}{$-\zeta^{-1}$}{$-\zeta$}{$-\zeta^{-1}$}{$\zeta$}~, which does not appear in~\cite[Tables 1-3]{W-17}. Hence we get a contradiction. Subcase $b_7$. Consider that $q_{22}=q_{33}=-1$ and $q_{44}^2q_{34}'-1=q_{11}q_{12}'-1=0$.
Set $q:=q_{12}'$, $r:=q_{23}'$, and $s:=q_{34}'$. We assume that the condition $\{q, r\}=\{-1\}$ does not hold and neither does the condition $\{r,s\}=\{-1\}$. Then by~\cite[Lemma 1.4]{W-17} for $A^{r_{2}(X)}=A^{r_{3}(X)}=B_4$ we get $qr=rs=1$. Hence $\cC$ is standard and $\cD=\cD_{73}$. Subcase $b_8$. Consider that $q_{22}=-1$, $(3)_{q_{44}}=0$, $q_{44}^2q_{34}'-1\not=0$, and $q_{ii}q_{i,i+1}'-1=q_{33}q_{23}'-1=0$ for all $i\in \{1,3\}$. Set $q:=q_{12}'$, $r:=q_{23}'$, and $\zeta:=q_{44}$. Assume that $\{q,r\}=\{-1\}$ does not hold. Then $(3)_{\zeta}=0$ and $r\notin \{\zeta,\zeta^{-1}\}$. Then by~\cite[Lemma 1.4]{W-17} on $A^{r_2(X)}=B_4$ we get $qr=1$ and $r\not=-1$. Then by~Lemma~\ref{le:Dynkin} the reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{4}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta$} \quad \Rightarrow \quad r_4(X):~ \Dchainfour{}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$\zeta q^{-1}$}{$q\zeta^{-1}$}{$\zeta$} \end{align*} imply that $\zeta q^{-2}=1$ or $\zeta q^{-1}=-1$ by $A^{r_4(X)}=B_4$. Hence $q=-\zeta^{-1}$, $p\not=2$ or $q=-\zeta$, $p\not=2$. If $q=-\zeta^{-1}$ and $p\not=2,3$, then $\cC(M)$ is standard and $\cD=\cD_{19,1}$. If $q=-\zeta^{-1}$ and $p=3$, then $q=-1$ and $\cD=\cD_{15',1}$. If $q=-\zeta$, then one of the generalized Dynkin subdiagrams of $X$ is \Dchainthree{}{$-1$}{$-\zeta^{-1}$}{$-\zeta$}{$-\zeta^{-1}$}{$\zeta$}~, which does not appear in~\cite[Tables 1-3]{W-17}. Hence we get a contradiction. Subcase $b_{9}$. Consider that $q_{33}=-1$, $(3)_{q_{44}}=0$, $q_{44}^2 q_{34}'-1\not=0$, and $q_{ii}q_{i,i+1}'-1=q_{22}q_{12}'-1=0$, for all $i\in \{1,2\}$. In this case we assume that the condition $\{q_{23}', q_{34}'\}=\{-1\}$ does not hold. Set $q:=q_{11}$, $r:=q_{34}'$, and $\zeta:=q_{44}$. We obtain that $q=r$ by $A^{r_3(X)}=B_4$. Then $(3)_{\zeta}=0$ and $q\notin \{-1, \zeta, \zeta^{-1}\}$.
Hence the reflection of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{4}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-1$}{$q$}{$\zeta$} \quad \Rightarrow \quad r_4(X):~ \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-\zeta q^2$}{$(\zeta q)^{-1}$}{$\zeta$} \end{align*} implies that $\zeta q^2=1$ or $-\zeta q^2 (\zeta q)^{-1}=-q=1$ by~Lemma~\ref{le:Dynkin} on $A^{r_4(X)}=B_4$. Then $q=-\zeta$, $p\not=2,3$. Hence $\cC$ is standard and $\cD=\cD_{16,4}$. Subcase $b_{10}$. Consider that $q_{11}=q_{22}=q_{33}=-1$, $q_{44}^2 q_{34}'-1=0$, and $q_{12}'\not=-1$. Assume that the condition $\{q_{23}',q_{34}'\}=\{-1\}$ does not hold in this case. Set $q:=q_{12}'$, $r:=q_{23}'$, and $s:=q_{44}$. Then we get that $qr=1$, $r=s^2\notin \{1,-1\}$ by $A^{r_2(X)}=A^{r_3(X)}=B_4$. Hence $\cD=\cD_{11,2}$. Subcase $b_{11}$. Consider that $q_{11}=q_{22}=-1$, $(3)_{q_{44}}=0$, $q_{12}'\not=-1$, $q_{44}^2 q_{34}'-1\not=0$, and $q_{33}q_{23}'-1=q_{33}q_{34}'-1=0$. Let $q:=q_{12}'$, $r:=q_{23}'$, and $\zeta:=q_{44}$. Then $(3)_{\zeta}=0$ and $q\notin \{-1,\zeta,\zeta^{-1}\}$. By $A^{r_2(X)}=B_4$ and $q\not=-1$, we get that $qr=1$. Hence by the assumption and~\cite[Lemma 1.4]{W-17} we obtain the reflections of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{4}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$\zeta$} \quad \Rightarrow r_4(X):~ \Dchainfour{3}{$-1$}{$q$}{$-1$}{$q^{-1}$} {$\zeta q^{-1}$}{$ q\zeta^{-1}$}{$\zeta$} \end{align*} \begin{align*} \quad \Rightarrow r_3r_4(X): \Drightofway{}{$-1$}{$q$}{$-\zeta q^{-2}$}{$(\zeta q)^{-1}$}{$\zeta^{-1}$} {$\zeta q^{-1}$}{$q\zeta^{-1}$}{$\zeta$} \end{align*} with $(3)_{\zeta}=0$ and $q\notin \{-1, \zeta, \zeta^{-1}\}$. Hence we get $-\zeta q^{-1}=1$ or $\zeta q^{-2} =1$ by~Lemma~\ref{le:Dynkin} on $A^{r_4(X)}=B_4$. Then $q=-\zeta$, $p\not=2,3$ or $q=-\zeta^{-1}$, $p\not=2,3$ by $q\notin \{-1, \zeta, \zeta^{-1}\}$. If $q=-\zeta^{-1}$ and $p\not=2,3$, then $\cC$ is standard and $\cD=\cD_{16,2}$.
If $q=-\zeta$ and $p\not=2,3$, then $a_{24}^{r_3r_4(X)}=-2$, which is a contradiction. Subcase $b_{12}$. Consider that $q_{11}=q_{33}=-1$, $(3)_{q_{44}}=0$, $q_{12}'\not=-1$, $q_{44}^2 q_{34}'-1\not=0$, and $q_{22}q_{12}'-1=q_{22}q_{23}'-1=0$. Let $q:=q_{23}'$, $r:=q_{34}'$, and $\zeta:=q_{44}$. Then $q_{12}'=q \not=-1$. Then $qr=1$ by $A^{r_3(X)}=B_4$. Hence by~\cite[Lemma 1.4]{W-17} the reflection of $X$ is \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{4}{$-1$}{$q$}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$\zeta$} \quad \Rightarrow \quad r_4(X):~ \Dchainfour{}{$-1$}{$q$}{$q^{-1}$}{$q$}{$-\zeta q^{-2}$}{$ q\zeta^{-1}$}{$\zeta$} \end{align*} with $(3)_{\zeta}=0$ and $q\notin\{-1, \zeta, \zeta^{-1}\}$. We obtain that $\zeta q^{-2}=1$ or $-\zeta q^{-2}\cdot q\zeta^{-1}=-q^{-1}=1$ by~Lemma~\ref{le:Dynkin} on $A^{r_4(X)}=B_4$. Then $q=-\zeta^{-1}$ and $p\not=2,3$ by $q\notin\{-1, \zeta, \zeta^{-1}\}$. Hence $\cC$ is standard and $\cD=\cD_{19,3}$. Subcase $b_{13}$. Consider that $q_{22}=q_{33}=-1$, $(3)_{q_{44}}=0$, $q_{44}^2 q_{34}'-1\not=0$, and $q_{11}q_{12}'-1=0$. In this case we assume that the condition $\{q_{12}',q_{23}'\}=\{-1\}$ does not hold and neither does the condition $\{q_{23}',q_{34}'\}=\{-1\}$. Set $q:=q_{11}$, $r:=q_{12}'$, $s:=q_{34}'$, and $\zeta:=q_{44}$. Then $q=r^{-1}=s^{-1}\not=-1$ and $p\not=2$ by~\cite[Lemma 1.4]{W-17} on $A^{r_2(X)}=A^{r_3(X)}=B_4$. Then we get that $(3)_{\zeta}=0$ and $q\notin\{\zeta, \zeta^{-1}, -1\}$. Hence by~\cite[Lemma 1.4]{W-17} the reflection of $X$ \setlength{\unitlength}{1mm} \begin{align*} X: \Dchainfour{4}{$q$}{$q^{-1}$}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$\zeta$} \quad \Rightarrow \quad r_4(X): \Dchainfour{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$-\zeta q^{-2}$}{$ q\zeta^{-1}$}{$\zeta$} \end{align*} implies that $\zeta q^{-2}=1$ or $-\zeta q^{-2}\cdot q\zeta^{-1}=-q^{-1}=1$ by $A^{r_4(X)}=B_4$. Then $q=-\zeta^{-1}$ and $p\not=2,3$ by $q\notin \{-1,\zeta^{-1}\}$. Hence $\cC$ is standard and $\cD=\cD_{16,3}$. Subcase $b_{14}$.
Consider that $q_{11}=q_{22}=q_{33}=-1$, $(3)_{q_{44}}=0$, $q_{12}'\not=-1$, and $q_{44}^2 q_{34}'-1\not=0$. Assume that the condition $\{q_{23}',q_{34}'\}=\{-1\}$ does not hold in this case. We get that $q_{12}'q_{23}'=q_{23}'q_{34}'=1$ by~\cite[Lemma 1.4]{W-17} on $A^{r_2(X)}=A^{r_3(X)}=B_4$. Set $q:=q_{12}'$ and $\zeta:=q_{44}$. Then $(3)_{\zeta}=0$ and $q\notin \{-1, \zeta, \zeta^{-1}\}$. By~\cite[Lemmas 1.3, 1.4]{W-17} and $A^{r_4(X)}=B_4$ the reflection of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~ \Dchainfour{4}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$-1$}{$q$}{$\zeta$} \quad \Rightarrow \quad r_4(X):~ \Dchainfour{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$-\zeta q^{2}$}{$ (q\zeta)^{-1}$}{$\zeta$} \end{align*} implies that $q=-\zeta$ and $p\not=2,3$. Hence $\cD=\cD_{19,2}$. Case (c). Assume that $\cC(M)$ is standard of type $C_4$. By $a_{32}=-1$ and $a_{34}=-2$ we get that $q_{33}q_{23}'-1=(3)_{q_{33}}(q_{33}^2 q_{34}'-1)=0$ by~Lemma~\ref{le:Dynkin}. Further, by the assumption we obtain the equations $(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j,j-1}'-1)=0$ for all $i\in \{1, 2\}$ and $j\in \{2, 4\}$. Here we distinguish two subcases: $c_1$ and $c_2$. Subcase $c_1$. Assume that $$q_{33}^2 q_{34}'-1=q_{33}q_{23}'-1=(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j,j-1}'-1)=0$$ for all $i\in \{1, 2\}$ and $j\in \{2, 4\}$. Let $q:=q_{33}$. Then $q_{34}'=q^{-2}\not=1$ and $q_{23}'=q^{-1}\not=-1$. Hence $q_{22}q_{23}'=1$. Otherwise, if $q_{22}q_{23}'\not=1$ then $q_{22}=-1$ and hence $a_{34}^{r_2(X)}=-1$, which is a contradiction. Then $q_{22}q_{12}'=1$ by $a_{21}=-1$. Hence $q_{11}q_{12}'=1$ and $q_{11}\not=-1$. Otherwise, if $q_{11}=-1$ then by~Lemma~\ref{le:Dynkin} and \cite[Lemma 1.4]{W-17} we obtain $a_{34}^{r_2 r_1(X)}=-1$, which is again a contradiction. If $q_{44}q_{34}'-1=0$, then $\cD=\cD_{31}$.
If $q_{44}=-1$ and $q_{34}'\not=-1$, then the reflection of $X$ \setlength{\unitlength}{1mm} \begin{align*} X:~\Dchainfour{4}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-2}$}{$-1$} \quad \Rightarrow \quad r_4(X):~ \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-q^{-1}$}{$q^{2}$}{$-1$} \end{align*} gives that $-q^{-1}\cdot q^{-1}=1$ by~Lemma~\ref{le:Dynkin} on $a_{32}^{r_4(X)}=-1$. Then $q_{34}'=q^{-2}=-1$, which is a contradiction. Subcase $c_2$. Assume that $$(3)_{q_{33}}=(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j,j-1}'-1)=0$$ for all $i\in \{1, 2\}$ and $j\in \{2, 4\}$. To avoid repetition we assume that the condition $q_{33}^2 q_{34}'-1\not=0$ holds. Set $\zeta:=q_{33}$. Then $q_{34}'\notin \{\zeta,\zeta^{-1}\}$. By $q_{33}q_{23}'-1=0$ we get $q_{23}'=\zeta^{-1}$. Since $A^{r_3(X)}=C_4$, we get that $q_{23}'q_{34}'=1$ and hence $q_{34}'=\zeta$, which is a contradiction. Case (d). Assume that $\cC(M)$ is standard of type $D_4$. By~Lemma~\ref{le:Dynkin} on $A^X=D_4$ we obtain $(2)_{q_{22}}(q_{22}q_{2i}'-1)=(2)_{q_{ii}}(q_{ii}q_{2i}'-1)=0$ for all $i\in \{1, 3, 4\}$. Then we distinguish three subcases: $d_1$, $d_2$ and $d_3$. Subcase $d_1$. Assume that $q_{22}q_{2i}'-1=q_{ii}q_{2i}'-1=0$ for all $i\in \{1, 3, 4\}$. Then $\cD=\cD_{51}$. Subcase $d_2$. Assume that $q_{22}q_{2,i}'-1=0$ for all $i\in \{1, 3, 4\}$ and there exists $l\in \{1,3,4\}$ such that $(2)_{q_{ll}}=(2)_{q_{jj}}(q_{jj}q_{2j}'-1)=0$ for any $j\in \{1, 3, 4\} \setminus \{l\}$. Further, we assume that $q_{2l}'\not=-1$ to avoid repetition. Set $q:=q_{22}$, $r:=q_{33}$ and $s:=q_{44}$. Then $q\not=-1$. By~\cite[Lemma 1.4]{W-17} the reflection of $X$ is \begin{align*} X: \tau_{l2jk}\Dthreefork{l}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q^{-1}$}{$r$}{$s$} \quad \Rightarrow \quad r_l(X): \tau_{l2jk} \Dthreefork{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q^{-1}$}{$r$}{$s$} \end{align*} with $j\not=k$ and $j, k\in \{1, 3, 4\}\backslash \{l\}$.
Since $\cC$ is standard, we get that $q^{-2}=1$ by~\cite[Lemma 1.4]{W-17} on $A^{r_2r_l(X)}=D_4$, which is a contradiction. Subcase $d_3$. Assume that $(2)_{q_{22}}=(2)_{q_{ii}}(q_{ii}q_{2i}'-1)=0$ for all $i\in \{1, 3, 4\}$. Set $q:=q_{21}'$, $r:=q_{23}'$ and $s:=q_{24}'$. Assume that the condition $\{q,r,s\}=\{-1\}$ does not hold. Since $A^{r_2(X)}=D_4$, we get $q=r=s=-1$ or $qr=rs=qs=1$ by~\cite[Lemma 1.4]{W-17}. Hence $q=r=s=-1$, which is a contradiction. Case (f). Assume that $\cC(M)$ is standard of type $F_4$. By $a_{21}=-1$ and $a_{23}=-2$ we obtain $q_{22}q_{12}'-1=(3)_{q_{22}}(q_{22}^2 q_{23}'-1)=0$ by~Lemma~\ref{le:Dynkin}. Further, by~Lemma~\ref{le:Dynkin} on $A^X=F_4$ we get $(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j,j-1}'-1)=0$ for all $i\in \{1, 3\}$ and $j\in \{2, 3, 4\}$. Then we distinguish two subcases $f_1$ and $f_2$. Subcase $f_1$. Assume that $$q_{22}^2 q_{23}'-1=q_{22}q_{12}'-1=(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j,j-1}'-1)=0$$ for all $i\in \{1, 3\}$ and $j\in \{2, 3, 4\}$. Let $q:=q_{22}$. Then $q_{23}'=q^{-2}\not=1$ and $q_{12}'=q^{-1}\not=-1$ by $q_{22}q_{12}'-1=0$. Hence $q_{11}q_{12}'=1$ by $(2)_{q_{11}}(q_{11}q_{12}'-1)=0$. Otherwise, if $q_{11}=-1$ then by~\cite[Lemmas 1.3 and 1.4]{W-17} we get $a_{23}^{r_1(X)}=-1$, which is a contradiction. If $q_{33}=-1$, then $q_{22}^{r_3(X)}=-q^{-1}$ and $(q_{12}q_{21})^{r_3(X)}=q_{12}'= q^{-1}$ by~\cite[Lemma 1.4]{W-17}. Hence $q^2=-1$ by $a_{21}^{r_3(X)}=-1$. Then $q_{34}'=q_{44}=-1$ and hence $\cD=\cD_{41}$, where $q^2=-1$. If $q_{33}\not=-1$ and $q_{44}=-1$, then $q_{33}q_{23}'-1=q_{33}q_{34}'-1=0$ and $q_{33}=q^2\not=-1$. Then $q_{22}^{r_3r_4(X)}=-q^{-1}$ and $(q_{12}q_{21})^{r_3r_4(X)}=q^{-1}$ by~\cite[Lemma 1.4]{W-17}. Then by $a_{21}^{r_3r_4(X)}=-1$ we obtain that $q^2=-1$, which is a contradiction. If $q_{33}\not=-1$ and $q_{44}\not=-1$, then $q_{44}=q^2$ and hence $\cD=\cD_{41}$. Subcase $f_2$.
Assume that $$(3)_{q_{22}}=q_{22}q_{12}'-1=(2)_{q_{ii}}(q_{ii}q_{i,i+1}'-1)=(2)_{q_{jj}}(q_{jj}q_{j,j-1}'-1)=0$$ for all $i\in \{1, 3\}$ and $j\in \{2, 3, 4\}$. Assume that the condition $q_{22}^2 q_{23}'-1\not=0$ holds to avoid repetition. Let $\zeta:=q_{22}$. Then $q_{23}'=\zeta$. Then $A^{r_2(X)}=C_4$, which is a contradiction. \end{enumerate} \end{proof} \newpage \setlength{\unitlength}{1mm} \begin{table} \centering \begin{tabular}{r|l|l|l|} row & gener. Dynkin diagrams & \text{fixed parameters} &\text{char} $\Bbbk$ \\ \hline \hline 1 & \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$} & $q\in k^\ast \setminus \{1\}$ & $p>0$\\ \hline 2 & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ \hline 3 & \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-2}$}{$q^2$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ \hline 4 & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q$}{$q^{-1}$}{$q$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ \hline 5 & \Dthreefork{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q^{-1}$}{$q$}{$q$} & $q\in k^\ast \setminus \{1\}$ & $p>0$\\ \hline 6 & \Dchainfour{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ & \Dchainfour{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$} & & \\ & \Dchainfour{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q$} & & \\ \hline 7 & \Dchainfour{}{$-1$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q$} & $q\in k^\ast$, $q^4\not=1$ & $p>0$\\ & \Dchainfour{}{$-1$}{$q^2$}{$-1$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q$} & & \\ & \Dchainfour{}{$q^2$}{$q^{-2}$}{$-1$}{$q^2$}{$-1$}{$q^{-2}$}{$q$} & & \\ & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$-1$}{$q^2$}{$-q^{-1}$}& & \\ \hline 8 & \Dchainfour{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-2}$}{$q^2$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ & \Dchainfour{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-2}$}{$q^2$} & & \\ & \Dchainfour{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$-1$}{$q^{-2}$}{$q^2$} & & \\ &
\Drightofway{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q^{-1}$}{$-1$}{$q^2$}{$-1$} & & \\ \hline 9 & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q$}{$q^{-1}$}{$-1$} & $q\in k^\ast $, $q^2,q^3\not=1$ & $p>0$\\ & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$-1$}{$q$}{$-1$} & & \\ & \Drightofway{}{$q^2$}{$q^{-2}$}{$-1$}{$q^2$}{$q^{-1}$}{$-1$}{$q^{-1}$}{$q$}\ \quad \quad \Drightofway{}{$q^2$}{$q^{-2}$}{$-1$}{$q^2$}{$q$}{$-1$}{$q^{-3}$}{$-1$} & & \\ & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$-1$}{$q^3$}{$q^{-3}$} & & \\ & \Dchainfour{}{$q^2$}{$q^{-2}$}{$q$}{$q^{-1}$}{$-1$}{$q^3$}{$q^{-3}$} & & \\ \hline \end{tabular} \end{table} \newpage \setlength{\unitlength}{1mm} \begin{table} \centering \begin{tabular}{r|l|l|l|} row & gener. Dynkin diagrams & fixed param. &\text{char} $\Bbbk$ \\ \hline \hline 10 & \Dchainfour{}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$ \\ & \Dchainfour{}{$-1$}{$q^{-1}$}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q$} & & \\ & \Dchainfour{}{$-1$}{$q$}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$q$} & & \\ & \Dchainfour{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-1$}{$q$}{$-1$} & & \\ & \Dchainfour{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-1$} & & \\ & \Dchainfour{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$-1$}{$q$}{$-1$} & & \\ \hline 11 & \Dchainfour{}{$q^{-2}$}{$q^2$}{$-1$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$q$} & $q\in k^\ast $, $q^4\not=1$ & $p>0$\\ & \Dchainfour{}{$-1$}{$q^{-2}$}{$-1$}{$q^2$}{$-1$}{$q^{-2}$}{$q$} & & \\ & \Dchainfour{}{$-1$}{$q^2$}{$q^{-2}$}{$q^2$}{$-1$}{$q^{-2}$}{$q$} & & \\ & \Dchainfour{}{$-1$}{$q^{-2}$}{$q^2$}{$q^{-2}$}{$-1$}{$q^2$}{$-q^{-1}$} & & \\ & \Dchainfour{}{$-1$}{$q^2$}{$-1$}{$q^{-2}$}{$-1$}{$q^2$}{$-q^{-1}$} & & \\ & \Dchainfour{}{$q^2$}{$q^{-2}$}{$-1$}{$q^2$}{$q^{-2}$}{$q^2$}{$-q^{-1}$} & & \\ \hline 12 & \Dchainfour{}{$q^{-1}$}{$q$}{$-1$}{$q^{-1}$}{$q$}{$q^{-2}$}{$q^2$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ & \Dchainfour{}{$-1$}{$q^{-1}$}{$-1$}{$q$}{$-1$}{$q^{-2}$}{$q^2$} & & \\ & 
\Dchainfour{}{$-1$}{$q$}{$q^{-1}$}{$q$}{$-1$}{$q^{-2}$}{$q^2$} & & \\ & \Drightofway{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q^{-1}$}{$-1$}{$q^2$}{$-1$}\ \quad \quad \Drightofway{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q^{-1}$}{$-1$}{$q^2$}{$-1$} & & \\ & \Dthreefork{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$q$}{$q^{-1}$}{$q^{-1}$} & & \\ \hline 13 & \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-1$}{$q^2$}{$q^{-2}$} & $q\in k^\ast $, $q^2\not=1$ & $p>0$\\ & \Drightofway{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$q$}{$-1$}{$q^{-2}$}{$-1$}\ \quad \quad \Dthreefork{}{$-1$}{$q$}{$-1$}{$q^{-1}$}{$q^{-1}$}{$q$}{$q$} & & \\ & \Dthreefork{}{$-1$}{$q^{-1}$}{$q$}{$q^{-1}$}{$q^{-1}$}{$q$}{$q$} & & \\ \hline \end{tabular} \end{table} \setlength{\unitlength}{1mm} \begin{table} \centering \begin{tabular}{r|l|l|l|} row & gener. Dynkin diagrams & fixed param. &\text{char} $\Bbbk$ \\ \hline \hline 14 & \Dchainfour{}{$q$}{$q^{-1}$}{$q$}{$q^{-1}$}{$-1$}{$-q$}{$-q^{-1}$} & $q\in k^\ast $, $q^2\not=1$ & $p>2$ \\ & \Drightofway{}{$q$}{$q^{-1}$}{$-1$}{$q$}{$-1$}{$-1$}{$-q^{-1}$}{$-1$}\ \quad \quad \Drightofway{}{$-q^{-1}$}{$-q$}{$-1$}{$-q^{-1}$}{$-1$}{$-1$}{$q$}{$-1$} & & \\ & \Dchainfour{}{$q$}{$q^{-1}$}{$-1$}{$-1$}{$-1$}{$-q$}{$-q^{-1}$} & & \\ & \Dchainfour{}{$-q^{-1}$}{$-q$}{$-q^{-1}$}{$-q$}{$-1$}{$q^{-1}$}{$q$} & & \\ \hline 15 & \Dchainfour{}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$\zeta $} & $\zeta \in G_3'$ & $p>3$\\ \hline $15'$ & \Dchainfour{}{$-1$}{$-1$}{$-1$}{$-1$}{$-1$}{$-1$}{$1$} & & $p=3$ \\ \hline 16 & \Dchainfour{}{$-1$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$\zeta $} & $\zeta \in G_3'$& $p>3$ \\ & \Dchainfour{}{$-1$}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$\zeta $} & & \\ & \Dchainfour{}{$-\zeta ^{-1}$}{$-\zeta $}{$-1$}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$\zeta $} & & \\ & \Dchainfour{}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-1$}{$-\zeta ^{-1}$}{$\zeta ^{-1}$} & & \\ \hline 17 & \Dchainfour{}{$-\zeta
$}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$\zeta $} & $\zeta \in G_3'$ & $p>3$\\ & \Dchainfour{}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-1$}{$-1$}{$\zeta $} & & \\ & \Drightofway{}{$-\zeta $}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$\zeta ^{-1}$}{$-1$}{$-1$}{$\zeta $} \quad \quad \Drightofway{}{$-\zeta $}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$\zeta $}{$-1$}{$-\zeta $}{$-1$} & & \\ & \Dchainfour{}{$-\zeta $}{$-\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$-1$}{$-\zeta ^{-1}$}{$-\zeta $} & & \\ & \Dchainfour{}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-1$}{$-\zeta ^{-1}$}{$-\zeta $} & & \\ \hline \end{tabular} \end{table} \begin{table} \centering \begin{tabular}{r|l|l|l|} row & gener. Dynkin diagrams & fixed param. &\text{char} $\Bbbk$ \\ \hline \hline 18 & \Dchainfour{}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$-1$} & $\zeta \in G_3'$ & $p\not=3$\\ & \Dchainfour{}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$\zeta $}{$-1$} & & \\ & \Drightofway{}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$}{$\zeta ^{-1}$}{$\zeta $} \quad \quad \Dthreefork{}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$-1$} & & \\ & \Dthreefork{}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$}{$-1$} \quad \quad \Dthreefork{}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$-1$}{$-1$} & & \\ \hline 19 & \Dchainfour{}{$-\zeta $}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$\zeta $} & $\zeta \in G_3'$ &$p>3$\\ & \Dchainfour{}{$-1$}{$-\zeta $}{$-1$}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$\zeta $} & & \\ & \Dchainfour{}{$-1$}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$\zeta $} & & \\ & \Dchainfour{}{$-1$}{$-\zeta $}{$-\zeta ^{-1}$}{$-\zeta $}{$-1$}{$-\zeta ^{-1}$}{$\zeta ^{-1}$} & & \\ & \Dchainfour{}{$-1$}{$-\zeta ^{-1}$}{$-1$}{$-\zeta $}{$-1$}{$-\zeta ^{-1}$}{$\zeta ^{-1}$} & & \\ & \Dchainfour{}{$-\zeta
^{-1}$}{$-\zeta $}{$-1$}{$-\zeta ^{-1}$}{$-\zeta $}{$-\zeta ^{-1}$}{$\zeta ^{-1}$} & & \\ \hline 20 & \Dchainfour{}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$-1$} & $\zeta \in G_3'$ &$p\not=3$ \\ & \Dchainfour{}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$-\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$\zeta ^{-1}$}{$-1$}{$\zeta $}{$-1$}{$\zeta $}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$-1$}{$\zeta $}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$\zeta ^{-1}$}{$-1$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$-1$} & &\\ & \Drightofway{}{$-1$}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$}{$\zeta ^{-1}$}{$\zeta $}\ \quad \quad \Drightofway{}{$-1$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$}{$\zeta ^{-1}$}{$\zeta $} & &\\ & \Dthreefork{}{$\zeta $}{$\zeta ^{-1}$}{$-1$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$-1$}\ \quad \quad \Dthreefork{}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$} & &\\ \hline \end{tabular} \end{table} \begin{table} \centering \begin{tabular}{r|l|l|l|} row & gener.
Dynkin diagrams & fixed param.&\text{char} $\Bbbk$ \\ \hline \hline 21 & \Dchainfour{}{$-1$}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$-1$} & $\zeta \in G_3'$ &$p\not=3$ \\ & \Dchainfour{}{$-1$}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$-\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$\zeta $}{$\zeta $}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$\zeta $}{$-1$}{$\zeta ^{-1}$}{$-\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$} & &\\ & \Dchainfour{}{$\zeta $}{$\zeta ^{-1}$}{$-1$}{$\zeta $}{$\zeta $}{$\zeta ^{-1}$}{$-1$} & & \\ & \Dchainfour{}{$\zeta $}{$\zeta ^{-1}$}{$-1$}{$\zeta $}{$-1$}{$\zeta $}{$-1$} & &\\ & \Drightofway{}{$\zeta $}{$\zeta ^{-1}$}{$\zeta $}{$\zeta ^{-1}$}{$\zeta ^{-1}$}{$-1$}{$\zeta ^{-1}$}{$\zeta $} & &\\ \hline 22 & \Dchainfour{}{$-\zeta $}{$\zeta $}{$-1$}{$-\zeta $}{$\zeta $}{$\zeta $}{$-\zeta $} & $\zeta \in G_4'$ &$p\not=2$ \\ & \Dchainfour{}{$-1$}{$-\zeta $}{$-1$}{$\zeta $}{$-1$}{$\zeta $}{$-\zeta $} & &\\ & \Dchainfour{}{$-1$}{$\zeta $}{$-\zeta $}{$\zeta $}{$-1$}{$\zeta $}{$-\zeta $} & &\\ & \Drightofway{}{$-1$}{$-\zeta $}{$\zeta $}{$-\zeta $}{$-1$}{$-1$}{$-\zeta $}{$-1$}\ \quad \quad \Drightofway{}{$-1$}{$\zeta $}{$-1$}{$-\zeta $}{$-1$}{$-1$}{$-\zeta $}{$-1$} & &\\ & \Dchainfour{}{$-1$}{$-\zeta $}{$\zeta $}{$-1$}{$-1$}{$\zeta $}{$-\zeta $} & &\\ & \Dchainfour{}{$-1$}{$\zeta $}{$-1$}{$-1$}{$-1$}{$\zeta $}{$-\zeta $} & &\\ & \Drightofway{}{$-\zeta $}{$\zeta $}{$-1$}{$-\zeta $}{$-1$}{$\zeta $}{$-\zeta $}{$-1$} & & \\ \hline \end{tabular} \caption{Generalized Dynkin diagrams in positive characteristic $p>0$} \label{tab.1} \end{table} \setlength{\unitlength}{1mm} \settowidth{\mpb}{$q_0\in k^\ast \setminus \{-1,1\}$,} \rule[-3\unitlength]{0pt}{8\unitlength} \begin{table} \centering \begin{tabular}{r|p{8.8cm}|l|l|} & \text{exchange graph} &\text{row}&\text{char} $\Bbbk$\\ \hline \hline 1 & \begin{picture}(2,2) \put(1,0){\scriptsize{$\cD_{11}$}} \end{picture} &$1$ & $p>0$\\ \hline 2
&\begin{picture}(2,4) \put(1,0){\scriptsize{$\cD_{21}$}} \end{picture} & $2$ &$p>0$ \\ \hline 3 &\begin{picture}(2,4) \put(1,0){\scriptsize{$\cD_{31}$}} \end{picture} &$3$ & $p>0$\\ \hline 4 &\begin{picture}(2,4) \put(1,0){\scriptsize{$\cD_{41}$}} \end{picture} &$4$ & $p>0$\\ \hline 5 &\begin{picture}(2,4) \put(1,0){\scriptsize{$\cD_{51}$}} \end{picture} &$5$ & $p>0$\\ \hline 6 &\begin{picture}(57,4.5) \put(1,0){\scriptsize{$\cD_{61}$}} \put(6,1){\line(1,0){7}} \put(9,2){\scriptsize{$1$}} \put(13,0){\scriptsize{$\cD_{62}$}} \put(19,1){\line(1,0){7}} \put(21,2){\scriptsize{$2$}} \put(26,0){\scriptsize{$\cD_{63}$}} \put(32,1){\line(1,0){6}} \put(34,2){\scriptsize{$3$}} \put(38,0){\scriptsize{$\tau_{4321} \cD_{62}$}} \put(50,1){\line(1,0){6}} \put(52,2){\scriptsize{$4$}} \put(56,0){\scriptsize{$\tau_{4321} \cD_{61}$}} \end{picture} & $6$ &$p>0$\\ \hline 7 &\begin{picture}(60,4.5) \put(1,0){\scriptsize{$\cD_{71}$}} \put(6,1){\line(1,0){7}} \put(9,2){\scriptsize{$1$}} \put(13,0){\scriptsize{$\cD_{72}$}} \put(19,1){\line(1,0){7}} \put(21,2){\scriptsize{$2$}} \put(26,0){\scriptsize{$\cD_{73}$}} \put(32,1){\line(1,0){7}} \put(35,2){\scriptsize{$3$}} \put(39,0){\scriptsize{$\cD_{74}$}} \end{picture} & $7$ &$p>0$ \\ \hline 8 & \begin{picture}(60,15) \put(1,10){\scriptsize{$\cD_{81}$}} \put(6,11){\line(1,0){7}} \put(9,12){\scriptsize{$1$}} \put(13,10){\scriptsize{$\cD_{82}$}} \put(18,11){\line(1,0){6}} \put(21,12){\scriptsize{$2$}} \put(25,10){\scriptsize{$\cD_{83}$}} \put(30,11){\line(1,0){6}} \put(32,12){\scriptsize{$3$}} \put(37,10){\scriptsize{$\cD_{84}$}} \put(42,11){\line(1,0){8}} \put(46,12){\scriptsize{$4$}} \put(51,10){\scriptsize{$\tau_{1243}\cD_{83}$}} \put(51,0){\scriptsize{$\tau_{1243}\cD_{82}$}} \put(53,3){\line(0,1){5}} \put(55,4.5){\scriptsize{$2$}} \put(30,0){\scriptsize{$\tau_{1243}\cD_{81}$}} \put(43,1){\line(1,0){7}} \put(46,2){\scriptsize{$1$}} \end{picture} & $8$ &$p>0$\\ \hline 9 &\begin{picture}(59,15) \put(1,10){\scriptsize{$\cD_{91}$}} 
\put(6,11){\line(1,0){7}} \put(9,12){\scriptsize{$4$}} \put(13,10){\scriptsize{$\cD_{92}$}} \put(18,11){\line(1,0){7}} \put(21,12){\scriptsize{$3$}} \put(26,10){\scriptsize{$\cD_{93}$}} \put(31,11){\line(1,0){7}} \put(34,12){\scriptsize{$2$}} \put(39,10){\scriptsize{$\tau_{3214}\cD_{96}$}} \put(51,11){\line(1,0){7}} \put(54,12){\scriptsize{$1$}} \put(59,10){\scriptsize{$\tau_{3214}\cD_{94}$}} \put(39,0){\scriptsize{$\tau_{3241} \cD_{95}$}} \put(42,3){\line(0,1){5}} \put(44,4.5){\scriptsize{$4$}} \end{picture} & $9$ &$p>0$\\ \hline 10 &\begin{picture}(59,35) \put(1,30){\scriptsize{$\cD_{10,1}$}} \put(8,31){\line(1,0){7}} \put(11,32){\scriptsize{$2$}} \put(15,30){\scriptsize{$\cD_{10,2}$}} \put(22.5,31){\line(1,0){7}} \put(25.5,32){\scriptsize{$3$}} \put(30,30){\scriptsize{$\cD_{10,4}$}} \put(37.5,31){\line(1,0){8}} \put(41.5,32){\scriptsize{$4$}} \put(47,30){\scriptsize{$\cD_{10,5}$}} \put(15,20){\scriptsize{$\cD_{10,3}$}} \put(22.5,21){\line(1,0){7}} \put(25.5,22){\scriptsize{$3$}} \put(30,20){\scriptsize{$\cD_{10,6}$}} \put(37.5,21){\line(1,0){7}} \put(40.5,22){\scriptsize{$4$}} \put(45.5,20){\scriptsize{$\tau_{4321}\cD_{10,4}$}} \put(18,23){\line(0,1){5}} \put(19.5,24.5){\scriptsize{$1$}} \put(32.5,23){\line(0,1){5}} \put(33.5,24.5){\scriptsize{$1$}} \put(50,23){\line(0,1){5}} \put(51,24.5){\scriptsize{$1$}} \put(23,10){\scriptsize{$\tau_{3214}\cD_{10,3}$}} \put(37.5,11){\line(1,0){7}} \put(40.5,12){\scriptsize{$4$}} \put(45.5,10){\scriptsize{$\tau_{4321} \cD_{10,2}$}} \put(32.5,13){\line(0,1){5}} \put(33.5,14.5){\scriptsize{$2$}} \put(50,13){\line(0,1){5}} \put(51,14.5){\scriptsize{$2$}} \put(45.5,0){\scriptsize{$\tau_{4321} \cD_{10,1}$}} \put(50,3){\line(0,1){5}} \put(51,4.5){\scriptsize{$3$}} \end{picture} & $10$ &$p>0$\\ \hline 11 &\begin{picture}(59,15) \put(1,10){\scriptsize{$\cD_{11,1}$}} \put(8,11){\line(1,0){7}} \put(11,12){\scriptsize{$2$}} \put(15,10){\scriptsize{$\cD_{11,2}$}} \put(22,11){\line(1,0){7}} \put(25,12){\scriptsize{$1$}} 
\put(29,10){\scriptsize{$\cD_{11,3}$}} \put(15,0){\scriptsize{$\cD_{11,4}$}} \put(17,3){\line(0,1){6}} \put(15,5){\scriptsize{$3$}} \put(29,0){\scriptsize{$\cD_{11,5}$}} \put(32,3){\line(0,1){6}} \put(30,5){\scriptsize{$3$}} \put(22,1){\line(1,0){7}} \put(25,2){\scriptsize{$1$}} \put(43,0){\scriptsize{$\cD_{11,6}$}} \put(36,1){\line(1,0){7}} \put(39,2){\scriptsize{$2$}} \end{picture} & $11$ &$p>0$\\ \hline 12 &\begin{picture}(50,25) \put(10,20){\scriptsize{$\cD_{12,1}$}} \put(17.5,21){\line(1,0){7}} \put(20.5,22){\scriptsize{$2$}} \put(25,20){\scriptsize{$\cD_{12,2}$}} \put(33,21){\line(1,0){12}} \put(38.5,22){\scriptsize{$1$}} \put(47.5,20){\scriptsize{$\cD_{12,3}$}} \put(28,13){\line(0,1){5}} \put(29,14.5){\scriptsize{$3$}} \put(50,13){\line(0,1){5}} \put(51,14.5){\scriptsize{$3$}} \put(25,10){\scriptsize{$\cD_{12,4}$}} \put(33,11){\line(1,0){12}} \put(38.5,12){\scriptsize{$1$}} \put(47.5,10){\scriptsize{$\cD_{12,6}$}} \put(1,0){\scriptsize{$\tau_{1243}\cD_{12,1}$}} \put(14.5,1){\line(1,0){7}} \put(17.5,2){\scriptsize{$2$}} \put(23,0){\scriptsize{$\tau_{1243}\cD_{12,2}$}} \put(37.5,1){\line(1,0){7}} \put(40.5,2){\scriptsize{$1$}} \put(45.5,0){\scriptsize{$\tau_{1243} \cD_{12,3}$}} \put(28,3){\line(0,1){5}} \put(29,4.5){\scriptsize{$4$}} \put(50,3){\line(0,1){5}} \put(51,4.5){\scriptsize{$4$}} \end{picture} & $12$ &$p>0$\\ \hline 13&\begin{picture}(59,15) \put(1,10){\scriptsize{$\cD_{13,1}$}} \put(8,11){\line(1,0){7}} \put(11,12){\scriptsize{$3$}} \put(15,10){\scriptsize{$\cD_{13,2}$}} \put(22,11){\line(1,0){7}} \put(25,12){\scriptsize{$2$}} \put(29,10){\scriptsize{$\cD_{13,4}$}} \put(36,11){\line(1,0){7}} \put(39,12){\scriptsize{$1$}} \put(43,10){\scriptsize{$\cD_{13,3}$}} \put(10,0){\scriptsize{$\tau_{1243}\cD_{11,1}$}} \put(17,3){\line(0,1){6}} \put(15,5){\scriptsize{$4$}} \end{picture} & $13$ &$p>0$\\ \hline 14 &\begin{picture}(75,15) \put(10,10){\scriptsize{$\cD_{14,1}$}} \put(17.5,11){\line(1,0){7}} \put(20.5,12){\scriptsize{$3$}} 
\put(25,10){\scriptsize{$\cD_{14,2}$}} \put(33,11){\line(1,0){11}} \put(38.5,12){\scriptsize{$4$}} \put(45.5,10){\scriptsize{$\tau_{1243}\cD_{14,3}$}} \put(59.5,11){\line(1,0){7}} \put(62.5,12){\scriptsize{$2$}} \put(67.5,10){\scriptsize{$\tau_{3412}\cD_{14,5}$}} \put(1,0){\scriptsize{$\tau_{3214}\cD_{14,1}$}} \put(14.5,1){\line(1,0){7}} \put(17.5,2){\scriptsize{$1$}} \put(23,0){\scriptsize{$\tau_{3214}\cD_{14,2}$}} \put(37.5,1){\line(1,0){7}} \put(40.5,2){\scriptsize{$4$}} \put(45.5,0){\scriptsize{$\tau_{3241} \cD_{14,3}$}} \put(59.5,1){\line(1,0){7}} \put(62.5,2){\scriptsize{$2$}} \put(67.5,0){\scriptsize{$\tau_{1432} \cD_{14,5}$}} \put(28,3){\line(0,1){5}} \put(29,4.5){\scriptsize{$2$}} \end{picture} & $14$ &$p\not=2$\\ \hline 15 &\begin{picture}(2,4) \put(1,0){\scriptsize{$\cD_{15,1}$}} \end{picture} & $15$ &$p\not=2,3$\\ \hline $15'$ &\begin{picture}(2,4) \put(1,0){\scriptsize{$\cD_{15',1}$}} \end{picture} & $15$ &$p=3$\\ \hline 16 & \begin{picture}(60,5) \put(1,0){\scriptsize{$\cD_{16,1}$}} \put(9,1){\line(1,0){7}} \put(11,2){\scriptsize{$1$}} \put(17,0){\scriptsize{$\cD_{16,2}$}} \put(25,1){\line(1,0){7}} \put(29,2){\scriptsize{$2$}} \put(34,0){\scriptsize{$\cD_{16,3}$}} \put(42,1){\line(1,0){7}} \put(44,2){\scriptsize{$3$}} \put(49,0){\scriptsize{$\cD_{16,4}$}} \end{picture} & $16$ &$p\not=2,3$\\ \hline \end{tabular} \end{table} \setlength{\unitlength}{1mm} \settowidth{\mpb}{$q_0\in k^\ast \setminus \{-1,1\}$,} \rule[-3\unitlength]{0pt}{8\unitlength} \begin{table} \centering \begin{tabular}{r|p{10.5cm}|l|l|} (continued) & \text{exchange graph} &\text{row}&\text{char} $\Bbbk$\\ \hline \hline 17 &\begin{picture}(50,25) \put(10,20){\scriptsize{$\cD_{17,2}$}} \put(17.5,21){\line(1,0){7}} \put(20.5,22){\scriptsize{$3$}} \put(25,20){\scriptsize{$\cD_{17,3}$}} \put(33,21){\line(1,0){11}} \put(38,22){\scriptsize{$2$}} \put(45.5,20){\scriptsize{$\tau_{3214}\cD_{17,6}$}} \put(60,21){\line(1,0){7}} \put(63,22){\scriptsize{$1$}} 
\put(68.5,20){\scriptsize{$\tau_{3214}\cD_{17,5}$}} \put(13,13){\line(0,1){5}} \put(15,14.5){\scriptsize{$4$}} \put(28,13){\line(0,1){5}} \put(29,14.5){\scriptsize{$4$}} \put(50,13){\line(0,1){5}} \put(51,14.5){\scriptsize{$4$}} \put(10,10){\scriptsize{$\cD_{17,1}$}} \put(23,10){\scriptsize{$\tau_{3421}\cD_{17,4}$}} \put(45.5,10){\scriptsize{$\tau_{3214}\cD_{17,4}$}} \put(68.5,10){\scriptsize{$\tau_{1432}\cD_{17,1}$}} \put(1,0){\scriptsize{$\tau_{3412}\cD_{17,5}$}} \put(15,1){\line(1,0){7}} \put(17.5,2){\scriptsize{$1$}} \put(23,0){\scriptsize{$\tau_{3412}\cD_{17,6}$}} \put(37.5,1){\line(1,0){7}} \put(40.5,2){\scriptsize{$4$}} \put(45.5,0){\scriptsize{$\tau_{1432} \cD_{17,3}$}} \put(60,1){\line(1,0){7}} \put(63,2){\scriptsize{$3$}} \put(68.5,0){\scriptsize{$\tau_{1432} \cD_{17,2}$}} \put(28,3){\line(0,1){5}} \put(29,4.5){\scriptsize{$2$}} \put(50,3){\line(0,1){5}} \put(51,4.5){\scriptsize{$2$}} \put(73.5,3){\line(0,1){5}} \put(75,4.5){\scriptsize{$2$}} \end{picture} & $17$ &$p\not=2,3$\\ \hline 18 &\begin{picture}(80,25) \put(10,20){\scriptsize{$\cD_{18,1}$}} \put(17.5,21){\line(1,0){7}} \put(20.5,22){\scriptsize{$4$}} \put(25,20){\scriptsize{$\cD_{18,2}$}} \put(33,21){\line(1,0){14}} \put(40,22){\scriptsize{$3$}} \put(47.5,20){\scriptsize{$\cD_{18,3}$}} \put(55,21){\line(1,0){7}} \put(58,22){\scriptsize{$2$}} \put(63.5,20){\scriptsize{$\tau_{3214}\cD_{18,5}$}} \put(73.5,13){\line(0,1){5}} \put(74.5,14.5){\scriptsize{$4$}} \put(58,13){\line(2,1){10}} \put(65,14){\scriptsize{$1$}} \put(50,10){\scriptsize{$\tau_{3214}\cD_{18,6}$}} \put(68.5,10){\scriptsize{$\tau_{3214}\cD_{18,4}$}} \put(1,0){\scriptsize{$\tau_{3241}\cD_{18,1}$}} \put(15.5,1){\line(1,0){7}} \put(18.5,1.5){\scriptsize{$1$}} \put(23,0){\scriptsize{$\tau_{3241}\cD_{18,2}$}} \put(37.5,1){\line(1,0){7}} \put(40.5,1.5){\scriptsize{$4$}} \put(45.5,0){\scriptsize{$\tau_{3241} \cD_{18,3}$}} \put(60,1){\line(1,0){7}} \put(63,1.5){\scriptsize{$2$}} \put(68.5,0){\scriptsize{$\tau_{3241} \cD_{18,5}$}} 
\put(68.5,3.5){\line(-2,1){11}} \put(65,5.5){\scriptsize{$4$}} \put(73.5,3){\line(0,1){5}} \put(75,4.5){\scriptsize{$1$}} \end{picture} & $18$ &$p\not=3$\\ \hline 19 &\begin{picture}(59,15) \put(1,10){\scriptsize{$\cD_{19,1}$}} \put(8,11){\line(1,0){7}} \put(11,12){\scriptsize{$2$}} \put(15,10){\scriptsize{$\cD_{19,2}$}} \put(22,11){\line(1,0){7}} \put(25,12){\scriptsize{$3$}} \put(29,10){\scriptsize{$\cD_{19,4}$}} \put(36,11){\line(1,0){7}} \put(39,12){\scriptsize{$2$}} \put(43,10){\scriptsize{$\cD_{19,6}$}} \put(15,0){\scriptsize{$\cD_{19,3}$}} \put(17,3){\line(0,1){6}} \put(15,5){\scriptsize{$1$}} \put(29,0){\scriptsize{$\cD_{19,5}$}} \put(32,3){\line(0,1){6}} \put(30,5){\scriptsize{$1$}} \put(22,1){\line(1,0){7}} \put(25,2){\scriptsize{$3$}} \end{picture} & $19$ &$p\not=2,3$\\ \hline 20 &\begin{picture}(59,35) \put(1,30){\scriptsize{$\cD_{20,1}$}} \put(8,31){\line(1,0){7}} \put(11,32){\scriptsize{$4$}} \put(15,30){\scriptsize{$\cD_{20,2}$}} \put(22.5,31){\line(1,0){7}} \put(25.5,32){\scriptsize{$2$}} \put(30,30){\scriptsize{$\cD_{20,5}$}} \put(37.5,31){\line(1,0){8}} \put(41.5,32){\scriptsize{$1$}} \put(47,30){\scriptsize{$\cD_{20,6}$}} \put(30,20){\scriptsize{$\cD_{20,3}$}} \put(38,21){\line(1,0){8}} \put(41.5,22){\scriptsize{$1$}} \put(47,20){\scriptsize{$\cD_{20,4}$}} \put(30,22){\line(-1,1){7}} \put(24,23){\scriptsize{$2$}} \put(32.5,23){\line(0,1){5}} \put(33.5,24.5){\scriptsize{$4$}} \put(50,23){\line(0,1){5}} \put(51,24.5){\scriptsize{$4$}} \put(30,10){\scriptsize{$\cD_{20,7}$}} \put(37.5,11){\line(1,0){8}} \put(40.5,12){\scriptsize{$1$}} \put(47,10){\scriptsize{$\cD_{20,9}$}} \put(32.5,13){\line(0,1){5}} \put(33.5,14.5){\scriptsize{$3$}} \put(50,13){\line(0,1){5}} \put(51,14.5){\scriptsize{$3$}} \put(28,0){\scriptsize{$\cD_{20,10}$}} \put(37.5,1){\line(1,0){8}} \put(40.5,2){\scriptsize{$4$}} \put(47,0){\scriptsize{$\cD_{20,8}$}} \put(50,3){\line(0,1){5}} \put(51,4.5){\scriptsize{$2$}} \end{picture} & $20$ &$p\not=3$\\ \hline 21 &\begin{picture}(59,15) 
\put(1,10){\scriptsize{$\cD_{21,1}$}} \put(8,11){\line(1,0){7}} \put(11,12){\scriptsize{$1$}} \put(15,10){\scriptsize{$\cD_{21,3}$}} \put(22,11){\line(1,0){7}} \put(25,12){\scriptsize{$2$}} \put(29,10){\scriptsize{$\cD_{21,6}$}} \put(36,11){\line(1,0){7}} \put(39,12){\scriptsize{$3$}} \put(43,10){\scriptsize{$\cD_{21,7}$}} \put(1,0){\scriptsize{$\cD_{21,2}$}} \put(8,1){\line(1,0){7}} \put(11,2){\scriptsize{$1$}} \put(3,3){\line(0,1){6}} \put(1,5){\scriptsize{$4$}} \put(15,0){\scriptsize{$\cD_{21,4}$}} \put(17,3){\line(0,1){6}} \put(15,5){\scriptsize{$4$}} \put(29,0){\scriptsize{$\cD_{21,5}$}} \put(32,3){\line(0,1){6}} \put(30,5){\scriptsize{$4$}} \put(22,1){\line(1,0){7}} \put(25,2){\scriptsize{$2$}} \end{picture} & $21$ &$p\not=3$\\ \hline 22 &\begin{picture}(92,45) \put(1,40){\scriptsize{$\cD_{22,1}$}} \put(8,41){\line(1,0){7}} \put(11,42){\scriptsize{$2$}} \put(15,40){\scriptsize{$\cD_{22,2}$}} \put(22.5,41){\line(1,0){7}} \put(25.5,42){\scriptsize{$3$}} \put(30,40){\scriptsize{$\cD_{22,4}$}} \put(37.5,41){\line(1,0){7}} \put(40.5,42){\scriptsize{$4$}} \put(45.5,40){\scriptsize{$\tau_{1243}\cD_{22,5}$}} \put(15,30){\scriptsize{$\cD_{22,3}$}} \put(22.5,31){\line(1,0){7}} \put(25.5,32){\scriptsize{$3$}} \put(30,30){\scriptsize{$\cD_{22,8}$}} \put(37.5,31){\line(1,0){7}} \put(40.5,32){\scriptsize{$4$}} \put(45.5,30){\scriptsize{$\tau_{1243}\cD_{22,6}$}} \put(18,33){\line(0,1){5}} \put(19.5,34.5){\scriptsize{$1$}} \put(32.5,33){\line(0,1){5}} \put(33.5,34.5){\scriptsize{$1$}} \put(50,33){\line(0,1){5}} \put(51,34.5){\scriptsize{$1$}} \put(32.5,23){\line(0,1){5}} \put(33.5,24.5){\scriptsize{$2$}} \put(50,23){\line(0,1){5}} \put(51,24.5){\scriptsize{$2$}} \put(23,20){\scriptsize{$\tau_{3214}\cD_{22,7}$}} \put(45.5,20){\scriptsize{$\tau_{3412} \cD_{22,7}$}} \put(32.5,13){\line(0,1){5}} \put(33.5,14.5){\scriptsize{$4$}} \put(50,13){\line(0,1){5}} \put(51,14.5){\scriptsize{$4$}} \put(23,10){\scriptsize{$\tau_{1423}\cD_{22,6}$}} \put(37.5,11){\line(1,0){7}} 
\put(40.5,11.5){\scriptsize{$2$}} \put(45.5,10){\scriptsize{$\tau_{1432} \cD_{22,8}$}} \put(60,11){\line(1,0){7}} \put(63,11.5){\scriptsize{$3$}} \put(68.5,10){\scriptsize{$\tau_{1432} \cD_{22,3}$}} \put(32.5,3){\line(0,1){5}} \put(33.5,4.5){\scriptsize{$1$}} \put(50,3){\line(0,1){5}} \put(51,4.5){\scriptsize{$1$}} \put(75,3){\line(0,1){5}} \put(76,4.5){\scriptsize{$1$}} \put(23,0){\scriptsize{$\tau_{1423}\cD_{22,5}$}} \put(37.5,1){\line(1,0){7}} \put(40.5,1.5){\scriptsize{$2$}} \put(45.5,0){\scriptsize{$\tau_{1432} \cD_{22,4}$}} \put(60,1){\line(1,0){7}} \put(63,1.5){\scriptsize{$3$}} \put(68.5,0){\scriptsize{$\tau_{1432} \cD_{22,2}$}} \put(83,1){\line(1,0){7}} \put(86,1.5){\scriptsize{$4$}} \put(91,0){\scriptsize{$\tau_{1432} \cD_{22,1}$}} \end{picture} & $22$ &$p\not=2$\\ \hline \end{tabular} \caption{The exchange graphs of $\cC(M)$ in Theorem~\ref{theo:clasi}.} \label{tab.2} \end{table} \newpage
\section{Introduction} \label{sec.intro} In this paper we continue our studies of actions of finite groups on algebraic varieties, up to equivariant birational equivalence. Our main tool is the new invariant introduced in \cite{BnG}: to an $n$-dimensional smooth projective variety $X$ with a regular action of a finite group $G$ we assigned a class $$ [X\righttoleftarrow G] \in \Burn_n(G), $$ taking values in the {\em equivariant Burnside group}, which is defined by certain symbols and relations; this class is an equivariant birational invariant. In \cite{HKTsmall}, \cite{KT-struct} we studied various structural properties of this new invariant and provided first applications. In \cite{KT-vector} we presented an algorithm to compute the classes of linear actions and gave examples of nonbirational such actions. Here, we turn to algebraic tori. Recall that an algebraic torus of dimension $n$ over a field $k$ is a linear algebraic group $T$ which is a $k$-form of $\mathbb G_m^n$. The absolute Galois group of $k$ acts on the geometric character group $M:=\mathfrak X^*(T_{\bar{k}})$ via a finite subgroup \[ G\subset \Aut(M)\cong \mathrm{GL}_n({\mathbb Z}). \] A torus $T$ over $k$ is uniquely determined up to isomorphism by its splitting field, Galois over $k$, with Galois group $G$, and this representation of $G$. Rationality properties of tori over nonclosed fields have been extensively studied; see, e.g., \cite{vosk}, \cite{EM}, \cite{Sansuc-CT}, \cite{hoshi}, \cite{lemire-tori}. The relevant cohomological obstruction results from the Galois module $\Pic(X_{\bar k})$, for a smooth projective compactification $X$ of $T$. For any subgroup $G'\subseteq G$ the group \begin{equation} \label{eqn:group} {\mathrm H}^1(G', \Pic(X_{\bar{k}})) \end{equation} is independent of the choice of $X$; its nontriviality is an obstruction to stable $k$-rationality of $T$.
This is the only obstruction in dimensions $\le 3$; moreover, every stably rational torus in dimension $\le 3$ is rational \cite{kun}. The Zariski problem for algebraic tori, i.e., the question of whether or not stably rational tori over $k$ are rational over $k$, is still open, in particular, for 4-dimensional tori identified in \cite[Prop. 4.15]{lemire-tori}, with $G$ a subgroup of $C_2\times {\mathfrak A}_5$ or $C_2\times {\mathfrak S}_4$. Here, over an algebraically closed field $k$ of characteristic zero, we study the $G$-equivariant version of this question, for a finite group $G$: \medskip \noindent {\bf Problem:} {\it Is a given $G$-action on an algebraic torus linearizable, i.e., $G$-equivariantly birational to a linear action on projective space? } \medskip There are many similarities but also subtle distinctions between these points of view, highlighted, e.g., in \cite{HT-intersect}. One of the similarities is that the cohomological obstruction \eqref{eqn:group} applies also as an obstruction to (stable) linearizability of the $G$-action, since for linear actions of $G$, the invariant \eqref{eqn:group} vanishes (see, e.g., \cite[Prop. 2.2]{BogPro} and references therein). On the other hand, all tori in dimension 2, over any field, are rational, while there is an action of $G:=C_2\times {\mathfrak S}_3$ on $\mathbb G_m^2$, which is not linearizable \cite{isk-s3}, but is stably linearizable \cite[Prop. 9.11]{lemire}; the corresponding class in $\Burn_2(G)$ is distinct from those of linear actions \cite[Section 7.6]{HKTsmall}. In the case of surfaces both points of view have received ample attention, going back to \cite{manin-ihes}, \cite{IskMin}, with further developments in \cite{DI}, \cite{BogPro}, \cite{prokhorovII}, and in many other papers. The main approach there is via the (equivariant) Minimal Model Program, i.e., classification of all birational models and (equivariant) birational transformations between those models.
Much less is known about linearizability of $G$-actions in higher dimensions, in particular for tori; see \cite{Ch-toric}. Our main results in this paper are: \begin{itemize} \item We give a recursive procedure (Theorem~\ref{thm.main}) to compute the class \begin{equation} \label{eqn:class} [X\righttoleftarrow G] \in \Burn_n(G). \end{equation} This uses the De Concini-Procesi formalism to construct a suitable equivariant birational model of the torus $T$. \item We present an example of such a computation (Proposition~\ref{prop:class-dp6}). \item We discuss the relation between the class \eqref{eqn:class} and existing (stable) $G$-birational invariants, such as group cohomology \eqref{eqn:group}: there exist actions that can be distinguished by one invariant but not the other (Proposition~\ref{prop:class-dp6} and Proposition~\ref{prop:kun}). \end{itemize} \ \noindent {\bf Acknowledgments:} The first author was partially supported by the Swiss National Science Foundation. The second author was partially supported by NSF grant 2000099. This paper is based upon work partially supported by the Swedish Research Council under grant no.~2016-06596 while the first author was in residence at the Institut Mittag-Leffler. \section{Toric varieties: generalities} \label{sec.generalities} We work over an algebraically closed field $k$ of characteristic zero. \subsection{Fans} \label{sect:lf} Let $T={\mathbb G}_m^n$ be an algebraic torus of dimension $n$, $$ M:=\mathfrak X^*(T), \quad \text{ respectively } \quad N:=\mathfrak X_*(T), $$ the lattice of its algebraic characters, respectively, co-characters. A smooth projective toric variety $$ X=X_{\Sigma} $$ of dimension $n$ is an equivariant compactification of $T$.
It is uniquely determined by the combinatorial structure of a fan $$ \Sigma=\{ \sigma\}, $$ a finite collection of strongly convex rational polyhedral cones $\sigma$ in $N_{{\mathbb R}}:=N\otimes_{{\mathbb Z}}{\mathbb R}$ (see, e.g., \cite{fulton} for basic definitions concerning toric varieties). We let $$ \Sigma(d), \quad d=0,\dots, n, $$ denote the collection of $d$-dimensional cones in $\Sigma$. The fan is subject to various conditions to ensure smoothness and projectivity of $X$; see, e.g., \cite{batyrev}: \begin{itemize} \item every cone $\sigma\in\Sigma$ is simplicial and is generated by a part of a basis of $N$, \item the union of cones is all of $N_{{\mathbb R}}$, and \item $\Sigma$ admits a piecewise linear convex support function. \end{itemize} Such a fan $\Sigma$ is called a smooth projective fan. \subsection{Subtori and their closures} \label{sect:sc} A primitive sublattice $N'\subseteq N$ gives rise to a {\em subtorus} $T'\subset T$, and an induced equivariant compactification $X'$ of $T'$. The corresponding fan $\Sigma'$ is the fan in $N'_{{\mathbb R}}$ induced by $\Sigma$, that is, given by intersecting the cones of $\Sigma$ with $N'_{{\mathbb R}}$. The interesting case for us is when $T'$ satisfies the following \medskip \noindent {\bf Property (E):} $\sigma\cap N'_{{\mathbb R}}$ is a face of $\sigma$, for all $\sigma\in \Sigma$ (see \cite{DGwonderful}). \medskip \noindent Equivalently, every $\sigma\in \Sigma$ has strongly convex image under the projection $$ N_{{\mathbb R}} \to (N/N')_{{\mathbb R}}. $$ By \cite[Thm.\ 3.1]{DGwonderful}, property $(\mathrm{E})$ for $T'$, with respect to $\Sigma$, implies that $X'$ is nonsingular, isomorphic to the toric variety $X_{\Sigma'}$. Then we have a closed immersion \begin{equation} \label{eqn.XSigmaprime} X_{\Sigma'}\to X, \end{equation} given on affine charts, for $\sigma\in \Sigma'$, by ring homomorphisms \[ k[\sigma^\vee \cap M]\to k[(\sigma^\vee \cap M)/(N'^\perp\cap M)].
\] Generally, by \cite[Thm.\ 4.1]{DGwonderful}, after a suitable finite subdivision of $\Sigma$, one can ensure property $(\mathrm{E})$ for a given subtorus, or any finite collection of subtori of $T$. \subsection{Quotient tori and orbit closures} \label{sect:qo} A primitive sublattice $N'\subseteq N$ also gives rise to a quotient torus $T/T'$. The case of interest to us is the quotient torus $T^\sigma$, associated with the sublattice $N_\sigma$ spanned by generators of a cone $\sigma\in \Sigma$; by the standing smoothness assumption on fans, $\sigma$ is generated by a part of a basis of $N$. Furthermore, $\sigma$ determines an orbit $D_{\sigma}^\circ$, the closed $T$-orbit in the corresponding affine chart $\Spec(k[\sigma^\vee\cap M])$ of $X$. We denote its closure in $X$ by $D_{\sigma}$. This is a smooth projective toric variety, whose fan is obtained as follows \cite[Thm. 3.2.6]{cox-book}: \begin{itemize} \item $\mathfrak X^*(T^\sigma) = {\sigma}^\perp \cap M$, which is dual to $N/N_{\sigma}$, \item for cones $\tau\supseteq \sigma$ let $$ \bar{\tau} := (\tau + {\mathbb R}\sigma)/{\mathbb R} \sigma \subseteq (N/N_{\sigma})_{{\mathbb R}} $$ be the induced cone in the quotient; these form a smooth projective fan $\Sigma^{\sigma}$. \item We have a closed immersion $X_{\Sigma^{\sigma}}\hookrightarrow X$ with image $D_{\sigma}$. On respective affine charts $\Spec(k[\sigma^\perp\cap \tau^\vee\cap M])$ and $\Spec(k[\tau^\vee\cap M])$, for $\tau\supseteq \sigma$, this is given by the surjective ring homomorphism \[ k[\tau^\vee\cap M]\to k[\sigma^\perp\cap \tau^\vee\cap M], \] with kernel generated by characters in $\tau^\vee$, not in $\sigma^\perp$.
In particular, we have a canonical isomorphism \[ T^{\sigma}=\Spec(k[\sigma^\perp\cap M])\cong D^\circ_{\sigma}. \] \item The projection $T\to T^{\sigma}$ determines a rational map $X\dashrightarrow X_{\Sigma^{\sigma}}$, which is defined as a morphism on the union $U^\sigma$ of affine charts of $X$ corresponding to cones $\tau\supseteq \sigma$. This morphism \begin{equation} \label{eqn.smoothmorphism} U^\sigma\to X_{\Sigma^\sigma}\cong D_\sigma \end{equation} is smooth, given by the injective ring homomorphisms \[ k[\sigma^\perp\cap \tau^\vee\cap M]\to k[\tau^\vee\cap M]. \] \end{itemize} \subsection{Transversality of intersections} \label{sect:ti} By our assumption we have \[ X\setminus T=\bigcup_{\rho\in \Sigma(1)}D_\rho, \] a simple normal crossing divisor in $X$. For $\sigma\in \Sigma(d)$, we have $D_\sigma$ of codimension $d$ in $X$, with transverse intersection \[ D_\sigma=\bigcap_{j=1}^{d} D_{\rho_j}, \] where $\rho_j\in \Sigma(1)$, $j=1$, $\dots$, $d$, are the rays spanning $\sigma$. \begin{lemm} \label{lem.closure} Let $T'\subset T$ be a subtorus satisfying property $(\mathrm{E})$ with respect to $\Sigma$, and let $X'$ be the closure of $T'$ in $X=X_\Sigma$. The morphism $T\to T/T'$ extends to a smooth $T$-equivariant morphism to $T/T'$ from a $T$-invariant neighborhood of $X'$ in $X$, with fiber $X'$ over $1\in T/T'$. Moreover, $X'$ has transverse intersection with the boundary $X\setminus T$. \end{lemm} \begin{proof} Let $N'\subseteq N$ be the corresponding primitive sublattice, with corresponding fan $\Sigma'$ in $N'_{{\mathbb R}}$. By property $(\mathrm{E})$, the cones of $\Sigma'$ are already in $\Sigma$. The variety $X'$ is contained in the union of affine charts of $X$ associated with cones of $\Sigma$ that belong to $\Sigma'$, by the algebraic description of the closed immersion \eqref{eqn.XSigmaprime}. This union is a $T$-invariant neighborhood of $X'$. Let $\sigma$ be a maximal cone of $\Sigma'$.
Now we have a $T$-equivariant morphism \[ \Spec(k[\sigma^\vee\cap M])\to \Spec(k[\sigma^\perp\cap M]), \] extending $T\to T/T'$, and these patch to give the desired morphism, which is smooth. We have \[ \Spec(k[\sigma^\vee\cap M])\cong {\mathbb A}^{n'}\times {\mathbb G}_m^{n-n'}, \] where $n'$ denotes the rank of $N'$. Now \[ X'\cap \Spec(k[\sigma^\vee\cap M])\cong {\mathbb A}^{n'}\times\{1\} \] meets the complement of ${\mathbb G}_m^n$ transversely. \end{proof} \begin{prop} \label{prop:trans} Let $T'\subset T$ be a subtorus satisfying property $(\mathrm{E})$ with respect to $\Sigma$, and let $X'$ be the closure of $T'$ in $X$. Let $\sigma\in \Sigma$ be such that the sublattice $N'\subset N$ corresponding to $T'$ satisfies $\sigma\subset N'_{\mathbb R}$. Then: \begin{itemize} \item The intersection of $X'$ with $D_\sigma^\circ\cong T^\sigma$ is the subtorus $T'^{\sigma}\subset T^{\sigma}$ associated with \[ N'/N_\sigma\subseteq N/N_\sigma. \] \item The subtorus $T'^{\sigma}\subset T^{\sigma}$ satisfies property $(\mathrm{E})$ with respect to $\Sigma^\sigma$, and the intersection of $X'$ with $D_\sigma\cong X_{\Sigma^\sigma}$ is $X_{\Sigma'^\sigma}$, where $\Sigma'$ is the fan in $N'_{\mathbb R}$, induced from $\Sigma$, and $\sigma\in \Sigma'$ determines the fan $\Sigma'^\sigma$ in $(N'/N_\sigma)_{\mathbb R}$. \item We have $X'\cap U^\sigma$ equal to the pre-image of $X_{\Sigma'^\sigma}$ under the smooth morphism \eqref{eqn.smoothmorphism}. \end{itemize} \end{prop} \begin{proof} In the affine chart $\Spec(k[\sigma^\vee\cap M])$ of $X$ associated with $\sigma$, we have closed subvarieties $D_\sigma^\circ$, defined by the characters in $\sigma^\vee\cap M$, not in $\sigma^\perp$, and the intersection with $X'$, given by equating characters whose difference lies in $N'^\perp\cap M$. 
So the coordinate ring of the intersection is obtained by setting to zero all characters not in $\sigma^\perp$ and equating characters whose difference lies in $N'^\perp\cap M$; this gives $k[(\sigma^\perp\cap M)/(N'^\perp\cap M)]$. The first statement is established, since the subtorus of $T^\sigma=\Spec(k[\sigma^\perp\cap M])$ associated with $N'/N_\sigma$ is $\Spec(k[(\sigma^\perp\cap M)/(N'^\perp\cap M)])$. For the second statement, we want to show, for $\tau\in\Sigma$ with $\tau\supset\sigma$, that $\bar\tau\cap (N'/N_\sigma)_{\mathbb R}$ is equal to $\bar\omega$ for some $\omega\in\Sigma$ with $\omega\supset\sigma$. We take $\omega:=\tau\cap N'_{\mathbb R}$; since $T'$ satisfies property $(\mathrm{E})$ we have $\omega\in \Sigma$. Since $\sigma\subset N'_{\mathbb R}$, we have $\omega\supset \sigma$. Now \[ (\tau+(N_\sigma)_{\mathbb R})\cap N'_{\mathbb R}=\tau\cap N'_{\mathbb R}+(N_\sigma)_{\mathbb R}=\omega+(N_\sigma)_{\mathbb R}. \] So $\bar\tau\cap (N'/N_\sigma)_{\mathbb R}=\bar\omega$, as desired. We treat the remainder of the second statement, and the third, by an analysis on coordinate charts. Let $\tau\in \Sigma'$, with $\tau\supset\sigma$. In the affine chart $\Spec(k[\tau^\vee\cap M])$, we have $D_\sigma$ defined by characters in $\tau^\vee\cap M$, not in $\sigma^\perp$, and the intersection with $X'$, given by equating characters whose difference lies in $N'^\perp\cap M$. This gives a description of the coordinate ring of the intersection as \[ k[(\sigma^\perp\cap \tau^\vee\cap M)/(N'^\perp\cap M)]. \] The same coordinate ring arises when we apply the description of the subtorus closure to $T'^\sigma\subset T^\sigma$. By considering the same coordinate rings, and the injective ring homomorphisms corresponding to \eqref{eqn.smoothmorphism} for $\tau\in \Sigma$ and $\tau\in \Sigma'$, we obtain a commutative diagram of affine schemes which we see easily to be cartesian, and thereby obtain the third statement. 
\end{proof} \subsection{Equivariant combinatorics} \label{sect:ec} Let $G$ be a finite group. A regular (right) $G$-action on a toric variety $X=X_{\Sigma}$ determines a representation \begin{equation} \label{eqn:rep} G\to {\mathrm{GL}}_n({\mathbb Z})=\Aut(M). \end{equation} We are mainly interested in cases when this is {\em injective}. The homomorphism \eqref{eqn:rep} determines a right action on the cocharacter lattice $N$. The induced action on $N_{\mathbb R}$ leaves the fan $\Sigma$ invariant, i.e., $\sigma\cdot g\in \Sigma$, for all $\sigma\in \Sigma$ and $g\in G$. Since we are interested in equivariant birational types, we can start with a faithful representation \eqref{eqn:rep}. By \cite{CTHS}, there exists a smooth projective fan $\Sigma$ that is invariant under the $G$-action. Correspondingly, $G$ acts on $X_\Sigma$, with associated representation \eqref{eqn:rep}. Suppose we are given a finite collection of subtori of $T$. By combining \cite[Thm.\ 4.1]{DGwonderful} and \cite{CTHS}, we achieve, after a suitable subdivision of $\Sigma$, property $(\mathrm{E})$ for all of the subtori, still requiring $G$ to act regularly on the toric variety. Notice that any further subdivision of $\Sigma$ preserves property $(\mathrm{E})$ for the given collection of subtori. After a further $G$-equivariant subdivision of $\Sigma$, we may suppose that the boundary $X\setminus T$ may be written as a union $$ X\setminus T=\bigcup_{i\in \mathcal I} \, D_i, \qquad \mathcal I:=\{1,\dots,\ell\}, $$ where $D_i$ is a nonsingular $G$-invariant divisor, for all $i$. This is achieved by iterated star subdivision along cones, where the collection of generators is contained in an orbit of $\Sigma(1)$ and is maximal with this property.
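For example, let $G=C_2$ act on $N={\mathbb Z}^2$ by exchanging $e_1$ and $e_2$, and let $\Sigma$ be the fan of ${\mathbb P}^1\times {\mathbb P}^1$, with rays spanned by $\pm e_1$, $\pm e_2$ and the four quadrants as maximal cones; this fan is $G$-invariant. The generators $e_1$, $e_2$ form an orbit of $\Sigma(1)$ contained in the cone $\langle e_1,e_2\rangle\in \Sigma$, and star subdivision along this cone (and along $\langle -e_1,-e_2\rangle$) inserts the $G$-fixed rays through $e_1+e_2$ and $-e_1-e_2$. In the subdivided fan, $e_1$ and $e_2$ span no common cone, so the boundary is the union of the nonsingular $G$-invariant divisors $D_{e_1}\cup D_{e_2}$, $D_{-e_1}\cup D_{-e_2}$, $D_{e_1+e_2}$, and $D_{-e_1-e_2}$.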
\section{Stabilizer stratification and subdivisions} \subsection{Background} \label{sect:background} We recall basic terminology concerning {\em toric arrangements}, studied in, e.g., \cite{DP-toric}, \cite{DGP}, \cite{DG1}, \cite{DGwonderful}, \cite{DG3}, \cite{Callegaro}, \cite{berg}. The main motivation for the introduction of the combinatorial formalism below was the computation of the cohomology ring of the {\em complement} of the arrangement; the relevant notions are: \begin{itemize} \item Morgan differential algebra \cite{morgan}, \item Orlik-Solomon algebra \cite{orlik-solomon}. \end{itemize} We start with a torus $T$. The ingredients are: \begin{itemize} \item {\em Layers}, i.e., cosets of subtori of $T$. \item {\em Toric arrangements}, i.e., finite collections of layers. \item {\em Saturations} of toric arrangements, obtained by adding all connected components of intersections of layers. \end{itemize} The main result of \cite{DP-toric} and \cite{DG3} is that the {\em rational homotopy type} of the complement of a toric arrangement only depends on discrete data associated with the toric arrangement. Projective {\em wonderful models} for toric arrangements are constructed in \cite{DGwonderful}, building on the wonderful models of subspace arrangements \cite{DP} and conical arrangements \cite{macphersonprocesi}. These are defined as the closures of complements of toric arrangements in products of \begin{itemize} \item a nonsingular projective toric variety $X=X_\Sigma$, such that the subtori associated with the given layers satisfy property $(\mathrm{E})$ with respect to $\Sigma$, and \item the blow-ups of $X$ along the closures of the layers. \end{itemize} \subsection{Arrangements of diagonalizable groups} \label{sect:arr-diag} The arrangements relevant for our applications are on the one hand less general than those in \S \ref{sect:background}, in that layers that arise are translates of subtori by \emph{torsion} elements of $T$.
On the other hand, our inductive scheme requires arrangements in more general \emph{diagonalizable algebraic groups} than just tori. Let $\Delta$ be a diagonalizable algebraic group over $k$. The identity component $T\subset \Delta$ fits into an exact sequence \begin{equation} \label{eqn.DeltamodT} 1\to T\to \Delta \to \Delta/T\to 1, \end{equation} which is (noncanonically) split. The subgroup $T$ is an algebraic torus, and the quotient $\Delta/T$ is a finite abelian group. A (right) action of a finite group $G$ on $\Delta$, by automorphisms, is determined uniquely by the data of a group homomorphism \[ \mu\colon G\to \Aut(M), \quad M:=\mathfrak X^*(\Delta). \] The induced action of $G$ on $T$ is determined by the induced homomorphism \[ \nu\colon G\to \Aut(M/M_{\mathrm{tors}}). \] There is also an induced action of $G$ on $\Delta/T$. A $k$-point $\delta\in \Delta$ is given by a homomorphism $M\to k^\times$, which we also denote by $\delta$, and the corresponding class $\bar\delta\in \Delta/T$ is given by the restriction of $\delta$ to $M_{\mathrm{tors}}$; so, \[ \ker(\bar\delta)=\ker(\delta)\cap M_{\mathrm{tors}}. \] There is an induced homomorphism \[ \nu_{\bar\delta}\colon G_{\bar\delta}\to \Aut(M/\ker(\bar\delta)), \] where $G_{\bar\delta}=\Stab(\bar\delta)\subseteq G$ denotes the stabilizer of $\bar\delta\in \Delta/T$. We also remark that the $G$-action on $\Delta/T$ always fixes the identity; consequently, when $\Delta$ has a nontrivial group of connected components, the action of $G$ cannot be transitive on components of $\Delta$. That $G$ need not act transitively on components represents a departure from the convention in \cite[\S2]{KT-vector}. For instance, then, different orbits of components can have different isomorphism types of generic stabilizer groups. Nevertheless, we might agree to call the $G$-action on $\Delta$ \emph{generically free} when it satisfies the following equivalent conditions.
\begin{lemm} \label{lem.genericallyfree} There exists a $G$-invariant dense open subscheme of $\Delta$, on which $G$ acts freely, if and only if $G$ acts generically freely on the identity component $T$ of $\Delta$. \end{lemm} \begin{proof} A dense open subscheme of $\Delta$, on which $G$ acts freely, has nontrivial intersection with $T$ and exhibits the $G$-action on $T$ as generically free. For the reverse implication, we let $r$ be a positive integer, such that the group of connected components of $\Delta$ is $r$-torsion. Then the $r$th power endomorphism of $\Delta$ is $G$-equivariant and has image $T$. The pre-image of a nonempty invariant open subscheme of $T$, on which $G$ acts freely, is an invariant dense open subscheme of $\Delta$, on which $G$ acts freely. \end{proof} \subsection{Lattice structure} \label{sect:latt} In the following discussion we do not assume that $G$ acts generically freely on $\Delta$. We call a subgroup $\Gamma\subseteq G$ \emph{distinguished} if $\Gamma$ is the largest subgroup, acting trivially on some algebraic subgroup $\Theta\subset \Delta$. We call an algebraic subgroup $\Theta\subset \Delta$ \emph{distinguished} if $\Theta$ is maximal, on which $\Gamma$ acts trivially, for some subgroup $\Gamma\subseteq G$. The operations, associating to $\Theta$ the distinguished $\Gamma$, and to $\Gamma$, the distinguished $\Theta$, restrict to inverse order-reversing bijections between distinguished subgroups of $G$ and distinguished algebraic subgroups of $\Delta$. The set of distinguished subgroups of $G$ has the structure of a lattice, with $\Gamma_1\wedge \Gamma_2=\Gamma_1\cap \Gamma_2$. To describe $\Gamma_1\vee \Gamma_2$ we let $\Theta_i$ denote the algebraic subgroup of $\Delta$ associated with $\Gamma_i$ for $i=1$, $2$. Then $\Gamma_1\vee \Gamma_2$ is the subgroup, containing both $\Gamma_1$ and $\Gamma_2$, with associated algebraic subgroup $\Theta_1\cap \Theta_2$.
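A minimal worked instance may help fix the correspondence; the following sketch assumes the swap action of a group of order two on a two-dimensional torus (so $\Delta=T$), as in a later example in this section.

```latex
% Illustration (assumed minimal example): G of order two swapping the factors of T.
Let $G=\{1,g\}$ act on $T={\mathbb G}_m^2$ by $(t_1,t_2)\cdot g=(t_2,t_1)$.
The largest subgroup of $G$ acting trivially on all of $T$ is the trivial
subgroup, while the largest subgroup acting trivially on the diagonal
$\Delta_{{\mathbb G}_m}=\{(t,t)\}$ is $G$ itself.
Conversely, the maximal algebraic subgroup of $T$ on which $G$ acts trivially
is $\Delta_{{\mathbb G}_m}$, and the maximal algebraic subgroup on which the
trivial subgroup acts trivially is all of $T$.
So the distinguished subgroups are $\mathrm{triv}$ and $G$, in order-reversing
bijection with the distinguished algebraic subgroups $T$ and
$\Delta_{{\mathbb G}_m}$; the lattice operations are determined here by
$\mathrm{triv}\vee G=G$ and $\mathrm{triv}\wedge G=\mathrm{triv}$.
```

This is consistent with the computation of ${\mathcal L}'$ carried out for the same action in Example \ref{exa.Z2onBlpP2} below.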
Let $\Gamma$ be a distinguished subgroup, with associated algebraic subgroup $\Theta\subset \Delta$. Then $\Gamma$ is the intersection of the generic stabilizer groups of the components of $\Theta$. We might have $\Gamma$ as generic stabilizer of some component of $\Theta$; indeed, this arises when we use the stabilizer of a point of $\Delta$ to define $\Theta$, whereby we see: \emph{every stabilizer group is distinguished}. As the next example shows, a distinguished subgroup is not necessarily the stabilizer group of any point of $\Delta$. \begin{exam} \label{exam.notstabilizer} Consider $G=C_2\times {\mathfrak S}_3$, with $G\to {\mathrm{GL}}_2({\mathbb Z})$ sending the generator of $C_2$, respectively generators of ${\mathfrak S}_3$, to \[ \begin{pmatrix} -1&0\\0&-1\end{pmatrix},\quad \begin{pmatrix} 0&-1\\1&-1\end{pmatrix},\quad \begin{pmatrix} 0&1\\1&0\end{pmatrix}. \] (This is essentially the unique faithful $2$-dimensional representation of $G$ over ${\mathbb Z}$ \cite{voskresenskii2dim}.) The center $C_2$ is distinguished, associated with $(\mu_2)^2$. However, every point of $(\mu_2)^2$ has stabilizer of order $4$ or $12$. \end{exam} We modify the discussion above, by considering \emph{subtori} $T'\subset T$ and their associated distinguished subgroups $\Gamma'\subseteq G$. The maximal algebraic subgroup $\Theta'\subset \Delta$, on which $\Gamma'$ acts trivially, contains $T'$ and has the property that every connected component has the same generic stabilizer. The above bijection restricts to one between distinguished subgroups $\Gamma'\subseteq G$ associated with subtori of $T$ and distinguished algebraic subgroups of $\Delta$ whose components all have the same generic stabilizer; we write $T_{\Gamma'}$ for the identity component of the associated algebraic subgroup of $\Delta$.
There is a lattice structure, with $\Gamma'_1\wedge \Gamma'_2=\Gamma'_1\cap \Gamma'_2$, but now $\Gamma'_1\vee \Gamma'_2$ is the subgroup associated with the identity component of $T_{\Gamma'_1}\cap T_{\Gamma'_2}$. We write \[ {\mathcal L}'={\mathcal L}'(T) \] for the lattice of distinguished subgroups of $G$, associated with subtori of $T$. The lattice ${\mathcal L}'$ has maximal element $G$, and minimal element $\ker(\nu)$. To each $\Gamma'\in {\mathcal L}'$, there is the subtorus $T_{\Gamma'}\subset T$, associated with a primitive sublattice $N_{\Gamma'}\subset N$; concretely, $N_{\Gamma'}$ is the sublattice of $N$ on which $\Gamma'$ acts trivially. We introduce \[ {\mathcal T}_{{\mathcal L}'}:=\{T_{\Gamma'}\,|\,\Gamma'\in {\mathcal L}'\},\qquad {\mathcal G}_{{\mathcal L}'}:={\mathcal T}_{{\mathcal L}'}\setminus \{T\}. \] \begin{lemm} \label{lem.subtori} Every distinguished algebraic subgroup $\Theta\subset \Delta$ has connected component of the identity in ${\mathcal T}_{{\mathcal L}'}$. \end{lemm} \begin{proof} We let $\Gamma\subseteq G$ denote the distinguished subgroup associated with $\Theta$, and $T'$, the identity component of $\Theta$. The generic stabilizer $\Gamma'$ of $T'$ contains $\Gamma$, and we have $\Gamma'\in {\mathcal L}'$. We have $T'$ contained in $T_{\Gamma'}$, the identity component of the associated algebraic subgroup $\Theta'$. Since $T_{\Gamma'}\subset \Theta'\subset \Theta$, we have $\dim T_{\Gamma'}\le \dim T'$. It follows that $T'=T_{\Gamma'}$. \end{proof} Notice that, by Lemma \ref{lem.subtori}, the locus in $\Delta$ with nontrivial stabilizers is contained in a finite union of translates of tori $T_{\Gamma'}$ for $\Gamma'\in {\mathcal L}'$, where the translates are by torsion elements of $\Delta$. (For any stabilizer group, by Lemma \ref{lem.subtori} the associated distinguished algebraic subgroup will be such a finite union.) As well, $T_{\Gamma'}$ is invariant under the action of $g\in G$ if and only if $g\in N_G(\Gamma')$.
(The generic stabilizer gets conjugated by $g$.) \subsection{Equivariant compactifications of diagonalizable groups} \label{sect:ecdg} Let $\Delta$ be a diagonalizable algebraic group over $k$, with an action of a finite group $G$. Since the character groups $\mathfrak X^*(T)$ and $\mathfrak X^*(\Delta)$ have a common dual $N$, a smooth projective fan $\Sigma$ in $N_{\mathbb R}$ determines, besides the equivariant compactification \[ T\subset X=X_\Sigma, \] also an equivariant compactification \[ \Delta\subset \mathbb X=\mathbb X_\Sigma. \] The smooth projective scheme $\mathbb X$ has components indexed by $\Delta/T$: \[ \mathbb X=\bigsqcup_{\bar{\delta} \in \Delta/T} \, X\bar\delta. \] Let $r$ be a positive integer, such that $\Delta/T$ is $r$-torsion. A primitive sublattice $N'\subseteq N$ now gives rise to: \begin{itemize} \item The subtorus $T'\subset T$, its $r$-torsion translate \[ T'_{[r]}:=T'\Delta[r]\subset \Delta, \] and the induced surjective homomorphism \[ \vartheta_{[r]}\colon T'_{[r]}/T'\to \Delta/T. \] \item The corresponding equivariant compactification \[ X'_{[r]}:=X'\Delta[r]\subset \mathbb X_\Sigma, \] with components indexed by $T'_{[r]}/T'$. \item The quotient diagonalizable algebraic group \[ \Delta/T', \] which as in Lemma \ref{lem.closure} is the target of a smooth $\Delta$-equivariant morphism from a $\Delta$-invariant neighborhood of $X'_{[r]}$ in $\mathbb X_\Sigma$, with $X'_{[r]}$ as fiber over $(\Delta/T')[r]$. \item In case $N'=N_\sigma$, notation $\Delta^\sigma$ for the quotient diagonalizable algebraic group, with $\Delta^\sigma$ identified with a corresponding $\Delta$-orbit in $\mathbb X_\Sigma$ as in \S \ref{sect:qo}, and closure of $\Delta^\sigma$ in $\mathbb X_\Sigma$ identified with $\mathbb X_{\Sigma^\sigma}$. \end{itemize} \subsection{Constructing the model} \label{sect:cm} Let ${\mathcal L}'={\mathcal L}'(T)$ be the lattice of distinguished subgroups of $G$, associated with subtori of $T$, as above. 
As we have seen in Section \ref{sec.generalities}, there exists a smooth projective fan $\Sigma$ that meets the following criteria: \begin{itemize} \item Property $(\mathrm{E})$ holds for all subtori in ${\mathcal T}_{{\mathcal L}'}$, with respect to $\Sigma$. \item $\Sigma$ is $G$-invariant. \item No pair of rays of $\Sigma$, in a single $G$-orbit, spans a cone of $\Sigma$. \end{itemize} For $\sigma\in \Sigma$, the stabilizer $\Stab(\sigma)$ acts on the toric variety $D_\sigma\cong X_{\Sigma^\sigma}$, an equivariant compactification of the quotient torus $T^\sigma$. \begin{lemm} \label{lemm:preim} We have $\Stab(\sigma)\in {\mathcal L}'$. The lattice ${\mathcal L}'(T^\sigma)$, associated with the $\Stab(\sigma)$-action on $T^\sigma$, is equal to the sublattice of ${\mathcal L}'$, of elements bounded above by $\Stab(\sigma)$. For $\Gamma'\in {\mathcal L}'(T^\sigma)$, the pre-image of $(T^\sigma)_{\Gamma'}$ under the projection morphism $$ \mathrm{pr}^\sigma\colon T\to T^\sigma $$ is equal to $T_{\Gamma'}$. \end{lemm} \begin{proof} By the third criterion on $\Sigma$, the action of $\Stab(\sigma)$ on $N_\sigma$ is trivial. Consequently, if $\Gamma'\in {\mathcal L}'(T^\sigma)$, so $\Gamma'$ is the generic stabilizer of $(T^\sigma)_{\Gamma'}$, then $\Gamma'$ acts trivially on $(\mathrm{pr}^\sigma)^{-1}((T^\sigma)_{\Gamma'})$. Using Proposition \ref{prop:trans}, we see that $\Gamma'$ is the generic stabilizer of $(\mathrm{pr}^\sigma)^{-1}((T^\sigma)_{\Gamma'})$, and $T_{\Gamma'}=(\mathrm{pr}^\sigma)^{-1}((T^\sigma)_{\Gamma'})$. In the other direction, if $\Gamma'\in {\mathcal L}'$, $\Gamma'\subseteq \Stab(\sigma)$, then $T_{\Gamma'}$ is the pre-image under $\mathrm{pr}^\sigma$ of a subtorus of $T^\sigma$, whose generic stabilizer is $\Gamma'$. 
\end{proof} Each $\Delta$-orbit of $\mathbb X_\Sigma$ gives an instance of \S \ref{sect:arr-diag}, with $\Stab(\sigma)$ acting on $\Delta^\sigma$, and the locus with nontrivial stabilizer contained in some translates of the subtori in ${\mathcal T}_{{\mathcal L}'(T^\sigma)}$. As noted above, the following are valid: \begin{itemize} \item For a suitable positive integer $r$, the translates of the subtori in ${\mathcal T}_{{\mathcal L}'(T^\sigma)}$ are by $r$-torsion elements of $\Delta^\sigma$. \item The subtori in ${\mathcal T}_{{\mathcal L}'(T^\sigma)}$ have pre-images in $T$, belonging to ${\mathcal T}_{{\mathcal L}'}$. \item Property $(\mathrm{E})$ holds for the subtori in ${\mathcal T}_{{\mathcal L}'(T^\sigma)}$, with respect to $\Sigma^\sigma$. \end{itemize} Since $\Sigma$ has finitely many cones, a single positive integer $r$ may be chosen, so that translation is by $r$-torsion elements of $\Delta^\sigma$, in the first item above, for all $\sigma\in \Sigma$. We suppose, as well, that $\Delta/T$ is $r$-torsion. Consequently, the intersection of any pair of subtori in ${\mathcal T}_{{\mathcal L}'}$ is a diagonalizable algebraic group, whose group of connected components is $r$-torsion. Indeed, if $\Gamma'$, $\Gamma''\in {\mathcal L}'$, with respective associated subtori $T':=T_{\Gamma'}$ and $T'':=T_{\Gamma''}$, then $T''':=T_{\Gamma'\vee\Gamma''}$ is the identity component of $T'\cap T''$. Any non-identity component of $T'\cap T''$ has generic stabilizer contained in $\Gamma'\vee\Gamma''$ and associated distinguished algebraic subgroup $\Theta\subset T$, that satisfies \[ T'''\subset \Theta\subset T'''\Delta[r]. \] So $(T'\cap T'')/T'''$ is $r$-torsion. Inside $\mathbb X=\mathbb X_\Sigma$ there is the union of $r$-torsion translates of the closures of the subtori in ${\mathcal G}_{{\mathcal L}'}$. 
The complement, the ``open part'' \[ \mathbb X^{\circ}\subset \mathbb X, \] has the stabilizer distinguishing property, for the $G$-action with respect to the toric boundary. The projective model \[ \mathbb X_{\Sigma,{\mathcal L}',[r]} \] is obtained as in \cite{DGwonderful}, by applying the De Concini-Procesi iterated blowup procedure, as developed in \cite{li-wonderful}, to the $r$-torsion translates of the closures in $X_{\Sigma}$ of the subtori in ${\mathcal G}_{{\mathcal L}'}$. So, $\mathbb X_{\Sigma,{\mathcal L}',[r]}$ is the closure of $\mathbb X^{\circ}$, in the product of $\mathbb X$ with all blow-ups \[ B\ell_{X'_{[r]}} \mathbb X \] for $T'\in {\mathcal G}_{{\mathcal L}'}$, where $X'_{[r]}\subset \mathbb X$ denotes the corresponding $r$-torsion translate compactification (\S \ref{sect:ecdg}). There is a projection morphism \begin{equation} \label{eqn.projection} \pi\colon \mathbb X_{\Sigma,{\mathcal L}',[r]}\to \mathbb X, \end{equation} which is an isomorphism over $\mathbb X^{\circ}$. As is the case for $\mathbb X$, the projective model has connected components indexed by $\Delta/T$: \[ \mathbb X_{\Sigma,{\mathcal L}',[r]}=\bigsqcup_{\bar\delta\in \Delta/T} X_{\Sigma,{\mathcal L}',[r]}\bar\delta. \] In particular, when $\Delta$ is a torus already, we have $\mathbb X_{\Sigma,{\mathcal L}',[r]}=X_{\Sigma,{\mathcal L}',[r]}$. The complement of $\mathbb X^{\circ}$ in $\mathbb X_{\Sigma,{\mathcal L}',[r]}$ is a normal crossing divisor \[ \mathbb D=\bigcup_{\ker(\nu)\ne\Gamma'\in {\mathcal L}'}\mathbb D_{\Gamma'}. \] For $\Lambda\subset {\mathcal L}'\setminus\{\ker(\nu)\}$, there is the stratum \[ \mathbb D_{\Lambda}:=\bigcap_{\Gamma'\in \Lambda}\mathbb D_{\Gamma'}, \] with $\mathbb D_\emptyset:=\mathbb X_{\Sigma,{\mathcal L}',[r]}$ by convention. 
We have $\mathbb D_\Lambda\ne\emptyset$ if and only if $\Lambda$ is a chain in ${\mathcal L}'$, i.e., letting $t$ denote the cardinality of $\Lambda$ we have \begin{equation} \label{eqn.chain} \Lambda=\{\Gamma^1,\dots,\Gamma^t\}\subset {\mathcal L}'\setminus\{\ker(\nu)\},\qquad \Gamma^1\supset\dots\supset \Gamma^t. \end{equation} Suppose $\Lambda$ is a nonempty chain. Then with \begin{equation} \label{eqn.chainfirstT} T':=T_{\Gamma^1}, \end{equation} the connected components of $\mathbb D_\Lambda$ are indexed by $T'_{[r]}/T'$: \begin{equation} \label{eqn.DLambdacomponents} \mathbb D_\Lambda=\bigsqcup_{\bar\tau\in T'_{[r]}/T'} D_\Lambda \bar\tau. \end{equation} \begin{exam} \label{exa.Z2onBlpP2} Let $G={\mathbb Z}/2{\mathbb Z}$ act on $T={\mathbb G}_m^2$, swapping the two factors. Let $\Sigma$ be the complete fan in $N_{\mathbb R}$, for $N={\mathbb Z}^2={\mathbb Z}\langle e_1,e_2\rangle$, with rays generated by $e_1$, $e_1+e_2$, $e_2$, $-e_1-e_2$; then $X_\Sigma$ is the blow-up of ${\mathbb P}^2$ at a point, the origin in ${\mathbb A}^2\subset{\mathbb P}^2$. We have ${\mathcal L}'=\{\mathrm{triv},G\}$, with respective subtori ${\mathbb G}_m^2$ and $\Delta_{{\mathbb G}_m}$ (the diagonal). The stabilizer locus in $T$ is precisely $\Delta_{{\mathbb G}_m}$. However, we need to take $r$ divisible by $2$, since $\sigma={\mathbb R}_{\ge 0}\cdot (e_1+e_2)$ (and also $\sigma={\mathbb R}_{\ge 0}\cdot (-e_1-e_2)$) leads to $\Stab(\sigma)=G$ acting nontrivially on a one-dimensional torus, with fixed locus the $2$-torsion subgroup. We only blow up divisors, so \[ X_{\Sigma,{\mathcal L}',[2]}\cong X_\Sigma, \] and $X^\circ$ is the complement of the proper transforms of two lines in ${\mathbb P}^2$, intersecting at the origin in ${\mathbb A}^2\subset {\mathbb P}^2$, with slopes $\pm 1$. \end{exam} \subsection{Properties of the model} \label{sect:propmod} \begin{prop} \label{prop.standardmodel} Suppose that $G$ acts generically freely on $\Delta$.
Then, for $\Sigma$ and $r$ as above, the projective variety $$ \mathbb X_{\Sigma,{\mathcal L}',[r]} $$ is in standard form with respect to the union of the strict transform of the toric boundary and the exceptional divisors of the De Concini-Procesi iterated blowup procedure. \end{prop} To be in \emph{standard form} with respect to a simple normal crossing divisor means that the divisor has smooth $G$-orbits of components and that $G$ acts freely on the complement \cite{reichsteinyoussinessential}. \begin{proof} The exceptional divisors of the De Concini-Procesi blowup form a simple normal crossing divisor. The centers of blowup $X'_{[r]}$ have transverse intersection with the toric boundary by Lemma \ref{lem.closure}. By Proposition \ref{prop:trans}, the exceptional divisors, together with the proper transform of the toric boundary, form a simple normal crossing divisor. We conclude by Lemma \ref{lem.subtori}. \end{proof} By analogy with the treatment in \cite{FK3}, we describe a point $p\in \mathbb X_{\Sigma,{\mathcal L}',[r]}$ as a pair \[ p=(x,V_1\subset W_1\subset\dots\subset V_t\subset W_t) \] consisting of a point $x\in \mathbb X$, say in the $\Delta$-orbit identified with $\Delta^\sigma$ (\S \ref{sect:ecdg}), and a flag of subspaces of $N_k:=N\otimes k$ such that \begin{itemize} \item $V_i=N_{\Gamma^i}\otimes k$ for some $\Gamma^i\in {\mathcal L}'$, for all $i$. \item We have $\dim(W_i)=\dim(V_i)+1$ for all $i$. \item The maximal $\Gamma'\in {\mathcal L}'(T^\sigma)$ with $x\in (T^\sigma)_{\Gamma'}\Delta^\sigma[r]$ is $\Gamma^1$ when $t\ge 1$, and is $\ker(\nu)$ otherwise. \item The maximal $\Gamma'\in {\mathcal L}'$ with $W_i\subset N_{\Gamma'}\otimes k$, is $\Gamma^{i+1}$, for $i<t$. \item If $t\ge 1$, the maximal $\Gamma'\in {\mathcal L}'$ with $W_t\subset N_{\Gamma'}\otimes k$, is $\ker(\nu)$. \end{itemize} The point $p$ lies in the stratum $\mathbb D_\Lambda$ indexed by the chain \[ \Gamma^1\supset\dots\supset \Gamma^t, \] and not in any deeper stratum.
The stabilizer of $p$ is \[ \{g\in G\,|\,x\cdot g=x,\text{ and }V_i\cdot g=V_i \text{ and }W_i\cdot g=W_i\text{ for all }i\}. \] We take $T'$ as in \eqref{eqn.chainfirstT}, with corresponding sublattice $N':=N_{\Gamma^1}$. So, the tangent space to $T/T'$ at the identity is naturally identified with $(N/N')_k$. The normalizer $N_G(\Gamma^1)$ acts on the quotient $\Delta/T'$, and the smooth morphism from a $\Delta$-invariant neighborhood of $X'_{[r]}$ to $\Delta/T'$ with fiber $X'_{[r]}$ over $(\Delta/T')[r]$, mentioned in \S \ref{sect:ecdg}, is $N_G(\Gamma^1)$-equivariant. Denoting such a $\Delta$-invariant neighborhood by $Q$, we have the composite \begin{equation} \label{eqn.compositefromV} Q\to \Delta/T'\stackrel{r\cdot}\to \Delta/T', \end{equation} with image $T/T'$, and pre-image $X'_{[r]}$ of the identity element. \begin{lemm} \label{lem.divisors} Let the notation be as above. \begin{itemize} \item[(i)] The projection morphism $\pi$, introduced in \eqref{eqn.projection}, factors through $B\ell_{X'_{[r]}} \mathbb X$. \item[(ii)] There exist an $N_G(\Gamma^1)$-invariant neighborhood $W\subset T/T'$ of the identity and an \'etale $N_G(\Gamma^1)$-equivariant morphism from $W$ to the vector space $(N/N')_k$, with fiber over $0$ consisting just of the identity element of $T/T'$, where the corresponding map of tangent spaces gives the natural identification with $(N/N')_k$. \item[(iii)] If we let $U$ denote the pre-image of $W$ under the composite morphism \eqref{eqn.compositefromV}, then by following the composite \eqref{eqn.compositefromV} with the morphism of $\mathrm{(ii)}$, we get a smooth $N_G(\Gamma^1)$-equivariant morphism \[ U\to (N/N')_k, \] with pre-image of $0$ equal to $X'_{[r]}$. \item[(iv)] For the induced equivariant morphism $B\ell_{X'_{[r]}}U\to {\mathbb P}((N/N')_k)$, the class of the exceptional divisor in $\Pic^{N_G(\Gamma^1)}(B\ell_{X'_{[r]}}U)$ is the pullback of ${\mathcal O}_{{\mathbb P}((N/N')_k)}(-1)$.
The pullback of ${\mathcal O}_{{\mathbb P}((N/N')_k)}(-1)$ to $\pi^{-1}(U)$ is the class of the divisor \[ \bigcup_{\substack{\Gamma'\in {\mathcal L}'\\ \Gamma^1\subseteq \Gamma'}} \mathbb D_{\Gamma'}, \] with all components of multiplicity $1$. \end{itemize} \end{lemm} \begin{proof} With $\mathbb X_{\Sigma,{\mathcal L}',[r]}$ as the closure of $ \mathbb X^{\circ}$ in a product of blow-ups, projection to $B\ell_{X'_{[r]}} \mathbb X$ yields the factorization in (i). We get (ii) from an equivariant version of a construction of Moci \cite{moci}. The coordinate ring $k[(N'^\perp\cap M)/M_{\mathrm{tors}}]$ of $T/T'$ has a maximal ideal $\mathfrak{m}$, corresponding to the identity element of $T/T'$. A splitting of the surjective $N_G(\Gamma^1)$-equivariant homomorphism \[ \mathfrak{m}\to \mathfrak{m}/\mathfrak{m}^2 \] determines an $N_G(\Gamma^1)$-equivariant morphism $T/T'\to (N/N')_k$, sending the identity to $0$. The corresponding map of tangent spaces gives the natural identification of the tangent space to $T/T'$ at the identity with $(N/N')_k$. Then for suitable $W$ we have (ii); an immediate consequence is (iii). Since blowing up commutes with smooth base change, we get the first assertion in (iv) from the standard description of $B\ell_{\{0\}}(N/N')_k$ as the closure of the complement of $0$ in $(N/N')_k\times {\mathbb P}((N/N')_k)$. For the remaining assertion, we use the treatment of iterated blow-ups in \cite{li-wonderful}, which lets us express $\pi$ as an iterated blow-up of $\mathbb X$ in the following manner. The first step is the blow-up \[ \pi_1\colon B\ell_{X'_{[r]}}\mathbb X\to \mathbb X. \] In subsequent steps we blow up the (proper transforms of the) pre-images under $\pi_1$ of $r$-torsion translate compactifications $X''_{[r]}\subsetneq X'_{[r]}$ in any order of weakly increasing dimensions. Finally we blow up the proper transforms of the remaining $r$-torsion translate compactifications, in any order of weakly increasing dimensions.
Examination of the behavior of the exceptional divisor of $\pi_1$ under these blow-ups gives what we need. \end{proof} \section{Equivariant Burnside group} \label{sect:ebg} In this section we recall the equivariant Burnside group, introduced in \cite{BnG}, and state the formula that we will use for our computation in the equivariant Burnside group. The \emph{equivariant Burnside group} \[ \Burn_n(G)=\Burn_{n,k}(G) \] is an abelian group, generated by symbols \[ (H,Y\mathrel{\reflectbox{$\righttoleftarrow$}} K,\beta), \] where \begin{itemize} \item $H\subseteq G$ is an abelian subgroup, \item $K$ is a field, finitely generated over $k$, with faithful action over $k$ of a subgroup $Y\subseteq Z:=Z_G(H)/H$, where $Z_G(H)$ denotes the centralizer of $H$ in $G$, \item $\beta$ is a sequence of length $r:=n-\mathrm{trdeg}_{K/k}$ of nontrivial characters of $H$, that generates $H^\vee$. \end{itemize} The symbols are subject to relations, labeled $$ {\bf{(O)}}, {\bf{(C)}}, {\bf{(B1)}}, \text{ and } {\bf{(B2)}} $$ (which stand for ordering, conjugation, and blowup relations), e.g., the equivalence of symbols that differ by a re-ordering of the sequence of characters. Symbols are also permitted in which $K$ is a Galois algebra for some $Y\subseteq Z$ over a field that is finitely generated over $k$; we identify $(H,Y\mathrel{\reflectbox{$\righttoleftarrow$}} K,\beta)$ with $(H,Z\mathrel{\reflectbox{$\righttoleftarrow$}} \Ind_Y^Z(K),\beta)$. See \cite{KT-vector} for a complete description of relations. In a symbol, $\beta$ is determined uniquely, up to order, by the similarity type of a faithful $(n-\mathrm{trdeg}_{K/k})$-dimensional representation of $H$ over $k$, or any field containing $k$. Furthermore, we declare a symbol to be trivial in case the trivial character occurs, i.e., if the given representation has nontrivial space of invariants. We record a frequently used consequence of defining relations \cite[Prop.
4.7]{BnG}: If $\beta=(b_1,\ldots, b_r)$ is such that for some $$ I\subseteq [1,\ldots, r], \quad |I|\ge 2, $$ one has $$ \sum_{i\in I} b_i = 0,$$ then $$ (H,Y\mathrel{\reflectbox{$\righttoleftarrow$}} K,\beta) = 0\in \Burn_n(G). $$ As an immediate application, we obtain: \begin{prop} \label{prop:s-vanishing} Consider the symbol $$ (H, Y\mathrel{\reflectbox{$\righttoleftarrow$}} K, \beta)\in \Burn_n(G), \quad \beta=(b_1,\ldots, b_r), \quad 1\le r\le n. $$ Let $$ \ell:=\min_{1\le j\le r}(\mathrm{ord}(b_j)) $$ be the smallest order of a character appearing in $\beta$. We have $$ (H, Y \mathrel{\reflectbox{$\righttoleftarrow$}} K(t_1,\ldots, t_{\ell-1}), \beta)=0 \in\Burn_{n+\ell-1}(G). $$ Here $Y$ acts trivially on the variables $t_1,\ldots, t_{\ell-1}$. \end{prop} \begin{proof} We may assume that the minimum is attained at $b_1$. Consider the symbol $$ (H, Y \mathrel{\reflectbox{$\righttoleftarrow$}} K, (\underbrace{b_1, \ldots, b_{1}}_{\ell \text{ times }},b_2,\ldots, b_r)) \in \Burn_{n+\ell-1}(G). $$ This symbol vanishes, since the first $\ell$ copies of $b_1$ sum to $\ell\, b_1=0$. Applying relation {\bf{(B2)}} iteratively, we obtain the claim. \end{proof} A nonsingular projective variety $X$ with generically free $G$-action determines, as $G$-equivariant birational invariant, a class \[ [X\righttoleftarrow G]\in \Burn_n(G). \] This is particularly easy to describe, when $X$ is in standard form with respect to a $G$-invariant simple normal crossing divisor, or more generally satisfies Assumption 2 of \cite{BnG}. Then \begin{align*} [&X\righttoleftarrow G]=\\ &\,\, \sum_{\substack{x_0\in X/G\\ x_0=[x],\,x\in X}} \big(\text{generic stabilizer of $\overline{\{x\}}$}, \Gal(k(x)/k(x_0)), ({\mathcal I}_{\overline{\{x\}}}/{\mathcal I}_{\overline{\{x\}}}^2)_x\big). \end{align*} Points of the quotient $X/G$ are in bijective correspondence with $G$-orbits of points of $X$; the sum is over $x_0\in X/G$, and $x$, in the sum, denotes an orbit representative. Importantly, not only $k$-points, but all points are taken in the sum.
The residue field $k(x)$ is a Galois extension of $k(x_0)$. The ideal sheaf ${\mathcal I}_{\overline{\{x\}}}$ of the closure $\overline{\{x\}}$ defines the coherent sheaf ${\mathcal I}_{\overline{\{x\}}}/{\mathcal I}_{\overline{\{x\}}}^2$ on $\overline{\{x\}}$, whose stalk at $x$ gives a faithful representation of the generic stabilizer of $\overline{\{x\}}$. In order to get a representation with trivial space of invariants, $x$ has to be a maximal point with this generic stabilizer, so only finitely many points $x_0\in X/G$ yield nontrivial symbols. \begin{rema} \label{rema:vanishing} An immediate corollary of Proposition~\ref{prop:s-vanishing} is that the symbol invariant is stably trivial: For any $G$-variety $X$ of dimension $n$, there exists a $d\in {\mathbb N}$ such that the class $$ [X\times {\mathbb P}^{d}\righttoleftarrow G] -(\mathrm{triv}, G\mathrel{\reflectbox{$\righttoleftarrow$}} k(X)(t_1,\ldots, t_{d}), ()) =0 \in \Burn_{n+d}(G); $$ here $G$ acts trivially on ${\mathbb P}^d$, respectively, on the variables $t_1,\ldots, t_d$. This is in stark contrast with invariants originating in unramified cohomology. \end{rema} When $X$ is replaced by a $G$-invariant open subvariety $U$, the same formula leads to a class in $\Burn_n(G)$, that is denoted by \[ [U\righttoleftarrow G]^{\mathrm{naive}}. \] (A class $[U\righttoleftarrow G]$ is also defined in \cite{BnG}, but plays no role in this paper.) Let \[ D=D_1\cup\dots\cup D_\ell \] be a simple normal crossing divisor on $X$, where each $D_i$ is $G$-invariant and nonsingular. Then, with ${\mathcal I}:=\{1,\dots,\ell\}$, we have \[ [X\righttoleftarrow G]=[U\righttoleftarrow G]^{\mathrm{naive}}+ \sum_{\emptyset\ne I\subseteq {\mathcal I}} \sum_{j\in {\mathcal J}_I} \mathrm{ind}_{G_{I,j}}^G\big(\psi_I([\underline{D}^\circ_{I,j}\righttoleftarrow G_{I,j}]^{\mathrm{naive}}_{(\mathcal{N}_{D_i/X})_{i\in I}})\big).
\] Here, \begin{itemize} \item ${\mathcal J}_I$ indexes $G$-orbits of components of $D_I:=\bigcap_{i\in I}D_i$, with notation $D_{I,j}$ for the $G$-orbit of components corresponding to $j\in {\mathcal J}_I$, and $\underline{D}_{I,j}$ for a chosen component of $D_{I,j}$. \item $G_{I,j}$ is the maximal subgroup of $G$, for which $\underline{D}_{I,j}$ is invariant. \item We denote by $D^\circ_I\subset D_I$, by $D^\circ_{I,j}\subset D_{I,j}$, and by $\underline{D}^\circ_{I,j}\subset \underline{D}_{I,j}$, the complement of all $D_j$, $j\notin I$. \item The class $$ [\underline{D}^\circ_{I,j}\righttoleftarrow G_{I,j}]^{\mathrm{naive}}_{(\mathcal{N}_{D_i/X})_{i\in I}} \in \Burn_{n,I}(G_{I,j}), $$ in the \emph{indexed equivariant Burnside group} \cite[\S 4]{KT-struct}, \cite[\S 4]{KT-vector}, is defined like a naive class in $\Burn_n(G_{I,j})$, but with additional data of characters associated with each of the indicated line bundles \cite[\S 5]{KT-vector} (in this case, the restrictions of the normal bundles $\mathcal{N}_{D_i/X}$). \item The homomorphism $$ \psi_I\colon \Burn_{n,I}(G_{I,j})\to \Burn_n(G_{I,j}) $$ appends the characters associated with the line bundles to the characters $\beta$ \cite[Rem.\ 4.1]{KT-struct}. \item The induction homomorphism $$ \mathrm{ind}_{G_{I,j}}^G : \Burn_n(G_{I,j}) \to \Burn_n(G) $$ is defined in \cite[Defn.\ 3.1]{KT-vector}. \end{itemize} This formula holds by \cite[Prop.\ 4.8]{KT-vector}; see also \cite[Exa.\ 5.12]{KT-vector}. The additional characters, attached to symbols in the indexed equivariant Burnside group, may be manipulated, e.g., by applying any element of $\Aut({\mathbb Z}^I)$, where ${\mathbb Z}^I$ denotes $\bigoplus_{i\in I}{\mathbb Z}$.
For instance, when $J\subseteq I$ and we apply the element \[ \tau_{I,J}\in \Aut({\mathbb Z}^I), \qquad \tau_{I,J}(e_j):=\begin{cases} {\displaystyle\sum_{\substack{i\in I,\,i\le j\\ i>j'\,\forall\, j'\in J,\, j'<j}}e_i},&\text{if $j\in J$}, \\ \ \ \ \ \ \ e_j,&\text{if $j\notin J$}, \end{cases} \] of \cite[Exa.\ 4.1]{KT-vector}, to a symbol $(H\subseteq H',Y\mathrel{\reflectbox{$\righttoleftarrow$}} K,\beta,\gamma)\in \Burn_{n,I}(G)$ with the additional characters \[ \gamma=(c_i)_{i\in I}, \] we get the symbol $\tau_{I,J}(H\subseteq H',Y\mathrel{\reflectbox{$\righttoleftarrow$}} K,\beta,\gamma):=(H\subseteq H',Y\mathrel{\reflectbox{$\righttoleftarrow$}} K,\beta,\tilde\gamma)$, \[ \tilde\gamma=(\tilde c_i)_{i\in I}, \] where, for $j\in J$, $\tilde c_j$ is the sum of $c_i$ over $i\le j$ with $i>j'$ for all $j'\in J$ with $j'<j$, and $\tilde c_j=c_j$ otherwise. As a generalization of the map $\psi_I$ there is the map \[ \psi_{I,J}\colon \Burn_{n,I}(G)\to \Burn_{n,J}(G), \] which appends just the characters indexed by elements of $I\setminus J$ to the characters $\beta$; see \cite[Defn.\ 4.2]{KT-vector}. \section{Computing the class in the Burnside group} \label{sect:class} We wish to compute a class in an equivariant Burnside group, associated with a given $G$-action on $\Delta$. We have the exact sequence of algebraic groups \eqref{eqn.DeltamodT} and the notation introduced in \S \ref{sect:arr-diag}. We fix $\bar\delta\in \Delta/T$, the class of some $\delta\in \Delta$. Besides the stabilizer $G_{\bar\delta}$ of $\bar\delta$, there is \[ G_\delta:=\ker(\nu_{\bar\delta}), \] which is equal to the stabilizer of $\delta\in \Delta$, provided that $\delta$ is a suitably general lift of $\bar\delta\in \Delta/T$. The group $G_\delta$ is the generic stabilizer of the induced action of $G_{\bar\delta}$ on the component $T\bar\delta$ of $\Delta$. We recall that $\bar\delta\in \Delta/T$ indexes a component $X\bar\delta$ of $\mathbb X$.
This has a class \begin{equation} \label{eqn.classdeltacomponent} [X\bar\delta \righttoleftarrow G_{\bar\delta}/G_\delta]\in \Burn_n(G_{\bar\delta}/G_\delta). \end{equation} Our goal is to compute the class of $\mathbb X\righttoleftarrow G$, which we understand to mean the collection of classes \eqref{eqn.classdeltacomponent} for all $\bar\delta\in \Delta/T$. We recall the lattice ${\mathcal L}'={\mathcal L}'(T)$ from \S \ref{sect:latt} and make a choice of fan $\Sigma$ and positive integer $r$ as in \S \ref{sect:cm}; we work on the model $\mathbb X_{\Sigma,{\mathcal L}',[r]}$. Exactly as in Proposition \ref{prop.standardmodel}, for the action of $G_{\bar\delta}/G_\delta$ we have $X_{\Sigma,{\mathcal L}',[r]}\bar\delta$ in standard form, with respect to the union of the strict transform of the toric boundary and the exceptional divisors of the De Concini-Procesi iterated blowup procedure. We introduce $G_{\bar\delta}$-invariant divisors. Let \[ \Gamma'_1,\dots,\Gamma'_{\ell_{\bar\delta}}\in {\mathcal L}' \] be a choice of conjugacy class representatives of elements of ${\mathcal L}'\setminus \{\ker(\nu)\}$. We set \[ \mathbb D_i:=\bigcup_{\Gamma'\text{ conjugate to }\Gamma'_i} \mathbb D_{\Gamma'}. \] Then $\mathbb D=\mathbb D_1\cup\dots\cup \mathbb D_{\ell_{\bar\delta}}$ is a simple normal crossing divisor on $\mathbb X_{\Sigma,{\mathcal L}',[r]}$, with each $\mathbb D_i$ invariant under $G_{\bar\delta}$. Now suppose $$ I\subseteq {\mathcal I}_{\bar\delta}:=\{1,\dots,\ell_{\bar\delta}\}, $$ with $\mathbb D_I$ nonempty. We take ${\mathcal J}_I$ to be the set of conjugacy classes of chains in ${\mathcal L}'$ having exactly one element from each of the conjugacy classes indexed by the elements of $I$. Then $j\in {\mathcal J}_I$ indexes an orbit $\mathbb D_{I,j}$ of $\mathbb D_\Lambda$ for a representative chain $\Lambda$ of the conjugacy class of $j$.
The maximal subgroup under which $\mathbb D_\Lambda$ is invariant is \[ N_{G_{\bar\delta}}(\Lambda):=N_{G_{\bar\delta}}(\Gamma^1)\cap\dots\cap N_{G_{\bar\delta}}(\Gamma^t), \] the stabilizer of $\Lambda$ under the conjugation action of ${G_{\bar\delta}}$. With the subspaces $V_i=N_{\Gamma^i}\otimes k$ of \[ V:=N\otimes k \] for $1\le i\le t$, appearing in the description of a point of $\mathbb D_\Lambda$ not in any deeper stratum, we have from Lemma \ref{lem.divisors}, for $1\le i\le t$, an $N_{G_{\bar\delta}}(\Lambda)$-equivariant morphism $\mathbb D_\Lambda\to {\mathbb P}(V/V_i)$. This is surjective when $i=t$ and has image ${\mathbb P}(V_{i+1}/V_i)$ for $i<t$. For $\bar\delta\in \Delta/T$ the stabilizer $N_{G_{\bar\delta}}(\Lambda)$ acts on the fiber $\vartheta_{[r]}^{-1}(\bar\delta)$ over $\bar\delta$ in $T'_{[r]}/T'$, where $T'$ is as in \eqref{eqn.chainfirstT}. We let $\mathcal{K}_j$ denote the set of $N_{G_{\bar\delta}}(\Lambda)$-orbits for this action. \begin{prop} \label{prop.mainformula} Let $G$ act on $\Delta$, and let $\bar\delta\in \Delta/T$. Let us write \[ [X\bar\delta \righttoleftarrow G_{\bar\delta}/G_\delta]=A_{\bar{\delta}} + B_{\bar{\delta}} \] in $$ \Burn_n(G_{\bar\delta}/G_\delta), $$ where $A_{\bar{\delta}}$ records the contribution from $\mathbb X^{\circ}\cap X\bar\delta$, and $B_{\bar{\delta}}$, the contribution from strata obtained from exceptional divisors in the De Concini-Procesi model for $X$. 
Then $$ A_{\bar{\delta}}= \sum_{[\sigma]\in \Sigma/G_{\bar\delta}} (\Stab(\sigma)_\delta, \Stab(\sigma)/\Stab(\sigma)_\delta \mathrel{\reflectbox{$\righttoleftarrow$}} k(T^\sigma \bar\delta),\rho_{\sigma}), $$ where \begin{itemize} \item the sum is over $G_{\bar\delta}$-orbits of $\Sigma$; for each orbit an orbit representative $\sigma\in \Sigma$ is chosen, \item the action of $\Stab(\sigma)$ on $\mathbb X^{\circ}\cap T^\sigma\bar\delta$ has constant stabilizer $\Stab(\sigma)_\delta$, \item the representation \[ \rho_{\sigma}\colon \Stab(\sigma)_\delta\to {\mathrm{GL}}(\mathfrak J/\mathfrak J^2) \] is defined, using the ideal \[ \mathfrak J:=(x)_{x\in (\sigma^\vee\cap M)\setminus (\sigma^\perp\cap M)}(k[\sigma^\vee\cap M]/(x-\delta(x))_{x\in \sigma^\perp\cap M}k[\sigma^\vee\cap M]), \] \end{itemize} and \begin{align*} &B_{\bar{\delta}}=\\ &\ \sum_{\emptyset\ne I\subseteq {\mathcal I}_{\bar\delta}} \sum_{[\Lambda]\in {\mathcal J}_I} \sum_{[\bar\tau]\in \mathcal{K}_j} \mathrm{ind}_{ N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_{\delta}}^{G_{\bar\delta}/G_\delta} \big( \psi_{\{ 1,\dots, t\}} \big( [D_\Lambda^\circ\bar\tau\righttoleftarrow N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_{\delta} ]^{\mathrm{naive}}_{({\mathcal O}(-1))} \big) \big). \end{align*} \end{prop} By analogy with \cite[Conv.\ 8.1]{KT-vector}, here $({\mathcal O}(-1))$ denotes the following collection of line bundles, indexed by $\{1,\dots,t\}$: \begin{equation} \label{eqn.convention} {\mathcal O}_{{\mathbb P}(V_2/V_1)}(-1),{\mathcal O}_{{\mathbb P}(V_2/V_1)}(1)\otimes {\mathcal O}_{{\mathbb P}(V_3/V_2)}(-1),\dots. \end{equation} \begin{proof} The contribution from $\mathbb X^{\circ}\cap X\bar\delta$ is obtained directly from the formula for the naive class in $\Burn_n(G_{\bar\delta}/G_\delta)$, from Section \ref{sect:ebg}. 
The term $B_{\bar\delta}$ is taken from the formula in Section \ref{sect:ebg}, where the additional sum over orbit representatives of $\mathcal{K}_j$ accounts for the components in \eqref{eqn.DLambdacomponents}. \end{proof} Let us fix a chain \eqref{eqn.chain}, which indexes a stratum $\mathbb D_\Lambda\subset \mathbb X_{\Sigma,{\mathcal L}',[r]}$. Let $\bar\delta\in \Delta/T$, and with $T'$ as in \eqref{eqn.chainfirstT}, let $\bar\tau\in \vartheta^{-1}_{[r]}(\bar\delta)$. The group $N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}$ acts on \begin{equation} \label{eqn.whatNGLambdaactson} X'\bar\tau, \quad {\mathbb P}(V_2/V_1), \quad \dots, \quad {\mathbb P}(V_t/V_{t-1}), \quad {\mathbb P}(V/V_t), \end{equation} and we have an $N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}$-equivariant birational morphism from $D_\Lambda\bar\tau$ to the product of the varieties in \eqref{eqn.whatNGLambdaactson}. Therefore: \begin{lemm} \label{lem.DLambda} For $\bar\tau\in T'_{[r]}/T'$, mapping to $\bar\delta\in \Delta/T$, we have \begin{align*} [&D_\Lambda \bar\tau\righttoleftarrow N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta]_{({\mathcal O}(-1))}=\\ &\qquad[X'\bar\tau\times {\mathbb P}(V_2/V_1)\times \dots\times {\mathbb P}(V/V_t)\righttoleftarrow N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta]_{({\mathcal O}(-1))}. \end{align*} in $\Burn_{n,\{1,\dots,t\}}(N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta)$. \end{lemm} For $\Lambda\ne\emptyset$, the expression on the right-hand side in Lemma \ref{lem.DLambda} may be taken as known, in the recursive determination of the classes \eqref{eqn.classdeltacomponent}. 
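To make the reindexing automorphisms concrete before they enter the recursion, the following sketch (Python; plain integers stand in for the additional characters $c_i$, which is a simplification for illustration only) transcribes the displayed formula for $\tau_{I,J}$ from \cite[Exa.\ 4.1]{KT-vector}:

```python
# Sketch of the automorphism tau_{I,J} of Z^I acting on a tuple of
# additional characters gamma = (c_i)_{i in I}; integers stand in for
# the characters, purely for illustration.

def tau(I, J, gamma):
    """For j in J, tilde c_j is the sum of c_i over i <= j with i > j'
    for every j' in J with j' < j; otherwise tilde c_j = c_j."""
    out = {}
    for j in I:
        if j in J:
            lower = [jp for jp in J if jp < j]
            out[j] = sum(gamma[i] for i in I
                         if i <= j and all(i > jp for jp in lower))
        else:
            out[j] = gamma[j]
    return out

gamma = {1: 10, 2: 20, 3: 30}
assert tau([1, 2, 3], {2}, gamma) == {1: 10, 2: 30, 3: 30}
assert tau([1, 2, 3], {1, 3}, gamma) == {1: 10, 2: 20, 3: 50}
assert tau([1, 2, 3], set(), gamma) == gamma  # J empty: the identity
```

With $I=\{1,2,3\}$ and $J=\{2\}$, the character at index $2$ becomes $c_1+c_2$, while the others are unchanged.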
\begin{theo} \label{thm.main} The class \[ [X\bar\delta\righttoleftarrow G_{\bar\delta}/G_\delta]=A_{\bar\delta}+B_{\bar\delta} \] in $\Burn_n(G_{\bar\delta}/G_\delta)$ may be computed by applying the formula for $A_{\bar\delta}$ from Proposition \ref{prop.mainformula} directly, and by computing the classes $$ [D^\circ_\Lambda\bar\tau\righttoleftarrow N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta]_{({\mathcal O}(-1))}^{\mathrm{naive}}\in \Burn_{n,\{1,\dots,t\}}(N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta) $$ appearing in the formula for $B_{\bar\delta}$ from Proposition \ref{prop.mainformula} in a recursive fashion, starting with large $t=|I|$, using the formula \begin{align*} [&D^\circ_\Lambda\bar\tau\righttoleftarrow N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta]_{({\mathcal O}(-1))}^{\mathrm{naive}}=[D_\Lambda\bar\tau\righttoleftarrow N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta]_{({\mathcal O}(-1))} \\ &-\sum_{[\Lambda']} \mathrm{ind}_{N_{G_{\bar\delta}}(\Lambda')_{\bar\tau}/G_\delta}^{N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta}\big(\psi_{I',J} \big(\tau_{I',J}[D^\circ_{\Lambda'}\bar\tau\righttoleftarrow N_{G_{\bar\delta}}(\Lambda')_{\bar\tau}/G_\delta]_{({\mathcal O}(-1))}^{\mathrm{naive}}\big)\big) \\ &-\sum_{[\Lambda'']}\sum_{[\bar\tau'']} \mathrm{ind}_{N_{G_{\bar\delta}}(\Lambda'')_{\bar\tau''}/G_\delta}^{N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta}\big(\psi_{I'',J} \big(\tau_{I'',J}[D^\circ_{\Lambda''}\bar\tau''\righttoleftarrow N_{G_{\bar\delta}}(\Lambda'')_{\bar\tau''}/G_\delta]_{({\mathcal O}(-1))}^{\mathrm{naive}}\big)\big). \end{align*} The first sum is over $N_G(\Lambda)_{\bar\tau}$-conjugacy classes of chains \[ \Lambda':\ \ \Gamma^1=\Gamma'^1\supset \Gamma'^2\supset\dots\supset \Gamma'^{t'} \] strictly containing $\Lambda$ with the same largest member $\Gamma^1=\Gamma'^1$; we put $I':=\{ 1,\ldots, t'\}$. 
The second sums are over $N_G(\Lambda)_{\bar\tau}$-conjugacy classes of chains \[ \Lambda'':\ \ \Gamma''^1\supset \Gamma''^2\supset\dots\supset \Gamma''^{t''} \] containing $\Lambda$, with $\Gamma''^1\supsetneq \Gamma^1$, and $N_G(\Lambda'')_{\bar\tau}$-orbit representatives $\bar\tau''$ of the fiber of \[ T''_{[r]}/T''\to T'_{[r]}/T' \] over $\bar\tau$, where $T''$ denotes $T_{\Gamma''^1}$; we put $I'':=\{ 1, \ldots, t''\}$. In each sum, $J$ records the indices of the members of $\Lambda$ and is identified in an order-preserving fashion with $\{1,\dots,t\}$ to land in $\Burn_{n,\{1,\dots,t\}}(N_{G_{\bar\delta}}(\Lambda)_{\bar\tau}/G_\delta)$. \end{theo} \begin{proof} This follows from the formula in \S \ref{sect:ebg}, and the evident analogous formula for indexed equivariant Burnside groups. By Lemma \ref{lem.divisors}, application of $\tau_{I',J}$ and $\tau_{I'',J}$ corrects the divisor characters in the first, respectively second sums in the formula. \end{proof} \section{Dimension 2} \label{sect:2} There is only one nontrivial action on $\mathbb G_m$, namely $t\mapsto t^{-1}$, and it is linearizable. In dimension 2, it is known \cite[II.4.9, Exa.\ 7]{vosk} that every action on ${\mathbb G}_m^2$ factors through a subgroup of \[ \mathfrak D_4\text{ or }C_2\times {\mathfrak S}_3\subset {\mathrm{GL}}_2({\mathbb Z}). \] Subgroups of $\mathfrak D_4$ give rise to a regular action on ${\mathbb P}^1\times {\mathbb P}^1$. Projecting from the identity of ${\mathbb G}_m^2\subset {\mathbb P}^1\times {\mathbb P}^1$ gives an induced regular action on ${\mathbb P}^2$, and this is linear. So, we focus on \[ G:=C_2\times {\mathfrak S}_3, \] whose action we realize by inverse and permutation on the coordinates of \[ T\subset {\mathbb G}_m^3,\qquad T:=\{(t_1,t_2,t_3)\,|\,t_1t_2t_3=1\}. 
\] This extends to a regular $G$-action on a del Pezzo surface $X$ of degree $6$; the action is obtained by regularizing the ${\mathfrak S}_3$ permutation action on the standard coordinates of ${\mathbb P}^2$ together with the Cremona involution. As mentioned in the introduction: \begin{itemize} \item This action is not linearizable, by \cite{isk-s3}; there, the proof relied on the (equivariant) Minimal Model Program, specifically, on the classification of Sarkisov links. In \cite{HKTsmall} we provided an alternative proof, using partial information about the class of the action in the equivariant Burnside group. \item There are no cohomological obstructions to stable linearizability, $$ {\mathrm H}^1(G', \Pic(X)) =0, $$ for all subgroups $G'\subseteq G$. \item For every subgroup $G'\subsetneq G$, the $G'$-action on $T$ is linearizable \cite[Section 9]{lemire}. \item The $G$-action is stably linearizable \cite[Prop. 9.11]{lemire}. \end{itemize} We apply the procedure from Section~\ref{sect:class} to compute the class of the $G$-action on an equivariant compactification of $T$ in the Burnside group. 
The result is: \begin{prop} \label{prop:class-dp6} The class in $\Burn_2(G)$ of the $G$-action on $X$ is: \begin{align*} [X\righttoleftarrow G]=(\mathrm{triv}&,G\mathrel{\reflectbox{$\righttoleftarrow$}} k(X),())\\ +&({\mathfrak S}_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1)) \\ +&(\text{diagonal in $C_2\times {\mathfrak S}_2$},C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1))\\ +&{\color{red}(C_2,{\mathfrak S}_3\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1))} \\ +&(C_2,{\mathfrak S}_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1))\\ +2&(C_2\times {\mathfrak S}_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1,e_2))\\ +2&(C_2\times {\mathfrak S}_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1+e_2,e_2))\\ +&(C_2\times C_3,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,1)))\\ +& (C_3,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(1,1)). \end{align*} \end{prop} For comparison, we display the class of the linear action $$ [{\mathbb P}(1\oplus V_{\chi})\righttoleftarrow G]\in \Burn_2(G), $$ computed in \cite[Exa.~5.3]{KT-struct}; here $V_{\chi}$ is the standard 2-dimensional representation of ${\mathfrak S}_3$, twisted by a nontrivial character of the central $C_2$. 
This, in essence the only possible linear action, yields the class \begin{align*} & (\mathrm{triv},G\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}(1\oplus V_{\chi})),())\\ +&({\mathfrak S}_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1))\\ +2&{\color{red} (C_2,{\mathfrak S}_3\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1))} \\ +2& (C_2\times {\mathfrak S}_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1,e_2))\\ +2&(C_2\times{\mathfrak S}_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1+e_2,e_2))\\ +&(C_2\times C_3,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,1)))\\ +&(C_2\times C_3,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,2))). \end{align*} The term \begin{equation} \label{eqn:symb} (C_2,{\mathfrak S}_3\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1)) \end{equation} is {\em incompressible} (see Defn. 3.3 and Prop.~3.6 in \cite{KT-vector}). It appears with different coefficients in the two expressions above. By \cite[Prop.~3.4]{KT-vector}, the actions are not equivariantly birational. However, there are other terms in these formulas that distinguish the two actions, e.g., the terms with $C_2\times C_3$-stabilizer. Recall that $X\times {\mathbb P}^r$ and ${\mathbb P}(1\oplus V_{\chi})\times {\mathbb P}^r$, with trivial $G$-action on ${\mathbb P}^r$, are equivariantly birational for $r\ge 2$, by \cite[Prop. 9.11]{lemire}. The case of $r=1$ is unknown. Computing $\Burn_3(G)$ we find nothing to distinguish the classes $$ [X\times {\mathbb P}^1\righttoleftarrow G], [{\mathbb P}(1\oplus V_{\chi})\times {\mathbb P}^1\righttoleftarrow G] \in \Burn_3(G). $$ Indeed, all contributions from nontrivial stabilizers vanish in $\Burn_3(G)$. For terms with $C_2$- and $C_3$-stabilizers this follows immediately from Proposition~\ref{prop:s-vanishing}, while for terms with $C_2\times C_3$-stabilizer this requires further analysis. 
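The comparison of the two classes can be made explicit by tallying symbols as formal multisets. In the sketch below (Python; the symbols are opaque string labels transcribed from the two displayed expressions), the incompressible symbol \eqref{eqn:symb} appears with coefficient $1$ in $[X\righttoleftarrow G]$ but coefficient $2$ in the linear class, and further terms differ as well:

```python
from collections import Counter

# Symbols (opaque labels) transcribed from Proposition prop:class-dp6:
dp6 = Counter({
    "(triv, G, k(X), ())": 1,
    "(S2, C2, k(P1), (1))": 1,
    "(diag in C2xS2, C2, k(P1), (1))": 1,
    "(C2, S3, k(P1), (1))": 1,            # the incompressible symbol
    "(C2, S2, k(P1), (1))": 1,
    "(C2xS2, triv, k, (e1,e2))": 2,
    "(C2xS2, triv, k, (e1+e2,e2))": 2,
    "(C2xC3, triv, k, ((0,1),(1,1)))": 1,
    "(C3, triv, k, (1,1))": 1,
})
# ... and from the class of the linear action:
lin = Counter({
    "(triv, G, k(P(1+V)), ())": 1,
    "(S2, C2, k(P1), (1))": 1,
    "(C2, S3, k(P1), (1))": 2,            # coefficient 2 here
    "(C2xS2, triv, k, (e1,e2))": 2,
    "(C2xS2, triv, k, (e1+e2,e2))": 2,
    "(C2xC3, triv, k, ((0,1),(1,1)))": 1,
    "(C2xC3, triv, k, ((0,1),(1,2)))": 1,
})
incompressible = "(C2, S3, k(P1), (1))"
assert dp6[incompressible] == 1 and lin[incompressible] == 2
# Further distinguishing terms (Counter subtraction keeps positive counts):
assert "(C3, triv, k, (1,1))" in (dp6 - lin)
assert "(C2xC3, triv, k, ((0,1),(1,2)))" in (lin - dp6)
```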
\medskip The rest of this section consists of a proof of Proposition~\ref{prop:class-dp6}. It is an application of the algorithm from Section \ref{sect:class}, which we carry out in detail. We have: \begin{itemize} \item $T=\Delta$, \item $\bar{\delta} = 1$, \item $G_{\bar{\delta}} = G$, $G_{\delta} = \mathrm{triv}$, \end{itemize} and the formula from Proposition \ref{prop.mainformula} simplifies to \[ [X\righttoleftarrow G]=A+B. \] We use the coordinates $t_1$ and $t_2$ to identify $T$ with ${\mathbb G}_m^2$, and recover the action of Example \ref{exam.notstabilizer}. There is a corresponding basis $e_1$, $e_2$ of $N\cong {\mathbb Z}^2$. \medskip \noindent {\em Step 1.} We compute ${\mathcal L}'={\mathcal L}'(T)$, the lattice of distinguished subgroups of $G$, associated with subtori of $T$: \[ \begin{array}{c|c} \Gamma'&T_{\Gamma'} \\ \hline G & \{(1,1,1)\} \\ \langle (0,(1,2))\rangle & \{(t,t,t^{-2})\} \\ \langle (0,(1,3))\rangle & \{(t,t^{-2},t)\} \\ \langle (0,(2,3))\rangle & \{(t^{-2},t,t)\} \\ \langle (1,(1,2))\rangle & \{(t,t^{-1},1)\} \\ \langle (1,(1,3))\rangle & \{(t,1,t^{-1})\} \\ \langle (1,(2,3))\rangle & \{(1,t,t^{-1})\} \\ \mathrm{triv} & T \end{array} \] \medskip \noindent {\em Step 2.} We construct a smooth projective $G$-invariant fan, with respect to which property $(\mathrm{E})$ holds for every $T_{\Gamma'}$; this has rays generated by \[ (1,0), (1,1), (0,1), (-1,0), (-1,-1), (0,-1). \] \medskip \noindent {\em Step 3.} We subdivide to obtain a fan $\Sigma$ satisfying the additional property that no pair of rays in a single $G$-orbit spans a cone of $\Sigma$; the ray generators are \begin{align*} &(1,0), (2,1), (1,1), (1,2), (0,1), (-1,1), (-1,0), (-2,-1), \\ &\qquad\qquad\qquad\qquad\qquad\qquad (-1,-1), (-1,-2), (0,-1), (1,-1). 
\end{align*} \medskip \noindent {\em Step 4.} We find a positive integer $r$ such that the stabilizer locus in $T$ is in the union of the $r$-torsion translates of subtori in ${\mathcal G}_{{\mathcal L}'}$, and the same holds for the $\Stab(\sigma)$-action on $T^\sigma$, for all $\sigma\in \Sigma$. In $T$, the stabilizer locus consists of the one-dimensional subtori above, together with \[ (1,1,1), (-1,-1,1),(-1,1,-1),(1,-1,-1), (\zeta,\zeta,\zeta),(\zeta^2,\zeta^2,\zeta^2). \] For $\sigma\in \Sigma(1)$ we have $\Stab(\sigma)\cong {\mathbb Z}/2{\mathbb Z}$ acting on $T^\sigma\cong {\mathbb G}_m$, fixing $\pm 1$. So we take \[ r=6. \] \medskip \noindent {\em Step 5.} We carry out the De Concini-Procesi blow-up procedure, which in this case amounts to blowing up the $6$-torsion of $T$ in $X=X_\Sigma$ to obtain \[ X_{\Sigma,{\mathcal L}',[6]}\cong B\ell_{\text{$36$ points}}X. \] \medskip \noindent {\em Step 6.} We compute $A$ directly as \[ A=(\mathrm{triv},G\mathrel{\reflectbox{$\righttoleftarrow$}} k(X),()). \] This is the contribution from the zero cone in the formula for $A$ in Proposition \ref{prop.mainformula}. The two orbits of $1$-dimensional cones $\sigma$ lead to an action of $\Stab(\sigma)\cong {\mathbb Z}/2{\mathbb Z}$ on $T^\sigma\cong {\mathbb G}_m$ with trivial generic stabilizer, hence no contribution to the equivariant Burnside group. Likewise, there is no contribution from the $2$-dimensional cones, which form a single orbit with trivial stabilizer. \medskip \noindent {\em Step 7.} We compute $B$ by the procedure of Theorem \ref{thm.main}. We have $\ell=3$: $$ \mathbb D = \mathbb D_1\cup \mathbb D_2\cup \mathbb D_3 \subset X_{\Sigma, {\mathcal L}', [6]}, $$ with respective conjugacy class representatives of ${\mathcal L}'\setminus\{\mathrm{triv}\}$ from the table in Step 1: \[ \Gamma'_1=G,\qquad \Gamma'_2=\langle(0,(1,2))\rangle,\qquad \Gamma'_3=\langle(1,(1,2))\rangle. 
\] Together, $\Gamma'_2$ and $\Gamma'_3$ generate $C_2\times {\mathfrak S}_2\cong \mathfrak K_4$. With $\mathcal I=\{ 1, 2, 3\}$ we have nonempty $D_I$ corresponding to the following subsets $I\subseteq \mathcal I$: $$ \{1\}, \quad \{2\},\quad \{3\},\quad \{ 1, 2\}, \quad \{ 1, 3\}. $$ For each $I$, there is exactly one conjugacy class of chains, with representative \[ \{\Gamma'_i\,|\,i\in I\}. \] We list representative chains $\Lambda$, with corresponding $N_G(\Lambda)$ and $N_G(\Lambda)$-orbits of $T'_{[6]}/T'$: \[ \begin{array}{c|c|c|c|c} \Lambda&N_G(\Lambda)&T'&T'_{[6]}/T'&\text{$N_G(\Lambda)$-orbits} \\ \hline G & G & \{1\} & T[6] & \text{see Table \ref{orbits}} \\ \langle (0,(1,2))\rangle & \mathfrak K_4 & \{(t,t,t^{-2})\} & \text{$\mu_6$ (via $t_1^{-1}t_2$)} & \text{see below} \\ \langle (1,(1,2))\rangle & \mathfrak K_4 & \{(t,t^{-1},1)\} & \text{$\mu_6$ (via $t_1t_2$)} & \text{see below} \\ G \supset \langle (0,(1,2))\rangle & \mathfrak K_4 & \{1\} & T[6] & \text{see Table \ref{orbits}} \\ G \supset \langle (1,(1,2))\rangle & \mathfrak K_4 & \{1\} & T[6] & \text{see Table \ref{orbits}} \end{array} \] When $T'$ has dimension $1$, we identify $T'_{[6]}/T'$ with $\mu_6$ by the indicated coordinate function. The action by $N_G(\Lambda)=\mathfrak K_4$ has orbits \[ \{1\},\quad \{-1\},\quad \{\zeta,\zeta^2\},\quad \{-\zeta,-\zeta^2\}. \] When $\Lambda=\{\langle(0,(1,2))\rangle\}$, the elements in orbits of size $2$ have stabilizer $\langle(1,(1,2))\rangle$. When $\Lambda=\{\langle(1,(1,2))\rangle\}$, the elements in orbits of size $2$ have stabilizer $\langle(0,(1,2))\rangle$. 
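The orbit data of Table \ref{orbits} can be double-checked mechanically. Identifying $T[6]$ with $({\mathbb Z}/6)^2$ via the coordinates $t_1,t_2$, the group $\mathfrak K_4$ acts by swapping and negating the coordinates. The following sketch (Python; a verification aid only, not part of the argument) recovers the orbit statistics: two fixed points, five orbits of size two, and six free orbits of size four.

```python
from itertools import product

# The Klein group K4 = {1, (0,(1,2)), (1,(1,2)), (1,id)} acting on
# T[6], identified with (Z/6)^2 via (t_1, t_2); recall t_3 = (t_1 t_2)^{-1}.
def swap(p):      # (0,(1,2)): swap t_1 and t_2
    return (p[1], p[0])

def invert(p):    # (1, id): invert all coordinates
    return (-p[0] % 6, -p[1] % 6)

def orbit(p):
    # Orbit under the group generated by swap and invert;
    # (1,(1,2)) is their composition, so two generators suffice.
    seen, frontier = {p}, [p]
    while frontier:
        q = frontier.pop()
        for r in (swap(q), invert(q)):
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return frozenset(seen)

orbits = {orbit(p) for p in product(range(6), repeat=2)}
sizes = sorted(len(o) for o in orbits)
# Matches Table orbits: 2 fixed points, 5 orbits of size 2, 6 of size 4.
assert sizes == [1] * 2 + [2] * 5 + [4] * 6
```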
\begin{table} \[ \begin{array}{l|l} \text{stabilizer}&\text{orbit [$G$-stabilizer]} \\ \hline \mathfrak K_4&(1,1,1)\,[G]\\ &(-1,-1,1)\,[\mathfrak K_4]\\ \hline \langle(0,(1,2))\rangle&(\zeta,\zeta,\zeta)\,(\zeta^2,\zeta^2,\zeta^2)\,[{\mathfrak S}_3] \\ &(-\zeta,-\zeta,\zeta)\,(-\zeta^2,-\zeta^2,\zeta^2)\, [\langle(0,(1,2))\rangle] \\ \langle(1,(1,2))\rangle &(\zeta,\zeta^2,1)\,(\zeta^2,\zeta,1)\,[\langle(1,(1,2))\rangle] \\ &(-\zeta,-\zeta^2,1)\,(-\zeta^2,-\zeta,1)\,[\langle(1,(1,2))\rangle] \\ \langle(1,\mathrm{id})\rangle & (1,-1,-1)\,(-1,1,-1) \\ \hline \mathrm{triv} & (\zeta,-\zeta,-\zeta)\,(\zeta^2,-\zeta^2,-\zeta^2)\,(-\zeta,\zeta,-\zeta)\,(-\zeta^2,\zeta^2,-\zeta^2) \\ & (1,\zeta,\zeta^2)\,(1,\zeta^2,\zeta)\,(\zeta,1,\zeta^2)\,(\zeta^2,1,\zeta) \\ &(1,-\zeta,-\zeta^2)\,(1,-\zeta^2,-\zeta)\,(-\zeta,1,-\zeta^2)\,(-\zeta^2,1,-\zeta) \\ &(\zeta,-\zeta^2,-1)\,(\zeta^2,-\zeta,-1)\,(-\zeta^2,\zeta,-1)\,(-\zeta,\zeta^2,-1)\,[\mathrm{triv}] \\ &(-1,\zeta,-\zeta^2)\,(-1,\zeta^2,-\zeta)\,(\zeta,-1,-\zeta^2)\,(\zeta^2,-1,-\zeta) \\ &(-1,-\zeta,\zeta^2)\,(-1,-\zeta^2,\zeta)\,(-\zeta,-1,\zeta^2)\,(-\zeta^2,-1,\zeta) \end{array} \] \caption{Orbits of $T[6]$ under $\mathfrak K_4\subset G$. Orbits under $G$ are unions of $\mathfrak K_4$-orbits; for each a representative is identified, with $G$-stabilizer displayed in brackets $[\,]$.} \label{orbits} \end{table} As indicated in Theorem \ref{thm.main}, we start the computation of $B$ by looking at contributions with $t=2$; for these, we have $\mathbb D^\circ_\Lambda=\mathbb D_\Lambda$. \begin{itemize} \item $\Lambda=\{G\supset \langle(0,(1,2))\rangle\}$: We have $N_G(\Lambda)=\mathfrak K_4$, with \[ V_1=0\qquad\text{and}\qquad V_2=k\cdot(1,1). 
\] Following Lemma \ref{lem.DLambda}, we have \[ [D_\Lambda\bar\tau\righttoleftarrow (\mathfrak K_4)_{\bar\tau}]_{({\mathcal O}(-1))}= [\{1\}\bar\tau\times {\mathbb P}(V_2/V_1)\times {\mathbb P}(V/V_2)\righttoleftarrow (\mathfrak K_4)_{\bar\tau}]_{({\mathcal O}(-1))}, \] as a point, with pair of characters $e_1$ determined by $V_2/V_1$, and $e_1+e_2$ determined by $V/V_2$. So, \eqref{eqn.convention} gives $e_1$ and $e_2$ as characters of $({\mathcal O}(-1))$. Applying $\psi_{\{1,2\}}$ to get an element of $\Burn_2((\mathfrak K_4)_{\bar\tau})$, we only get something nontrivial when $(\mathfrak K_4)_{\bar\tau}={\mathfrak K_4}$. There are two contributions from Table \ref{orbits}; when we apply induction to $\Burn_2(G)$ we obtain \begin{align*} \psi_{\{1,2\}}\big(&[D_\Lambda\bar\tau\righttoleftarrow (\mathfrak K_4)_{\bar\tau}]_{({\mathcal O}(-1))}\big)=\\ &\begin{cases} (C_2\times{\mathfrak S}_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1,e_2)), & \text{if $(\mathfrak K_4)_{\bar\tau}={\mathfrak K_4}$},\\ 0, & \text{otherwise}.\end{cases} \end{align*} \item $\Lambda=\{G\supset \langle(1,(1,2))\rangle\}$: The computation is similar, with $$ V_2=k\cdot(1,-1), $$ and we obtain \begin{align*} \psi_{\{1,2\}}\big(&[D_\Lambda\bar\tau\righttoleftarrow (\mathfrak K_4)_{\bar\tau}]_{({\mathcal O}(-1))}\big)=\\ &\begin{cases} (C_2\times{\mathfrak S}_2,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1+e_2,e_2)), & \text{if $(\mathfrak K_4)_{\bar\tau}={\mathfrak K_4}$},\\ 0, & \text{otherwise}.\end{cases} \end{align*} \end{itemize} We proceed to cases with $t=1$. \begin{itemize} \item $\Lambda=\{G\}$: We have $V_1=0$. 
So, \begin{align*} \psi_{\{1\}}\big(&[D^\circ_\Lambda\bar\tau\righttoleftarrow G_{\bar\tau}]_{({\mathcal O}(-1))}\big) \\ = &\begin{cases} \psi_{\{1\}}\big([D_\Lambda\bar\tau\righttoleftarrow G_{\bar\tau}]_{({\mathcal O}(-1))}\big)-{\mathrm C}_0-{\mathrm C}_1, &\text{if $G_{\bar\tau}\supseteq \mathfrak K_4$},\\ \psi_{\{1\}}\big([D_\Lambda\bar\tau\righttoleftarrow G_{\bar\tau}]_{({\mathcal O}(-1))}\big),&\text{if $|G_{\bar\tau}|\in\{1,2,6\}$}, \end{cases} \end{align*} where for $i\in \{0,1\}$, \[ {\mathrm C}_i:=\mathrm{ind}_{\mathfrak K_4}^{G_{\bar\tau}}\big(\psi_{\{1,2\}}\big([D_{\{G\supset \langle(i,(1,2))\rangle \}}\bar\tau\righttoleftarrow \mathfrak K_4]_{({\mathcal O}(-1))}\big)\big); \] by Lemma \ref{lem.DLambda}, we have \[ [D_\Lambda\bar\tau\righttoleftarrow G_{\bar\tau}]_{({\mathcal O}(-1))}= [\{1\}\bar\tau\times {\mathbb P}(V)\righttoleftarrow G_{\bar\tau}]_{({\mathcal O}(-1))}. \] The nontrivial contributions come from three values of $\bar\tau$. When $\bar\tau=(1,1,1)$, we get \begin{align*} \psi_{\{1\}}\big(&[D^\circ_\Lambda\bar\tau\righttoleftarrow G_{\bar\tau}]_{({\mathcal O}(-1))}\big)= (C_2,{\mathfrak S}_3\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1))\\ &\qquad+(C_2\times C_3,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,((0,1),(1,1))). \end{align*} The cases $\bar\tau=(-1,-1,1)$ and $\bar\tau=(\zeta,\zeta,\zeta)$ give \[ (C_2,{\mathfrak S}_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1)),\quad\text{respectively}\quad(C_3,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(1,1)). \] \item $\Lambda=\{\langle (0,(1,2))\rangle \}$: The only nontrivial contribution is from $\bar\tau=1$. 
Then, \begin{align} \begin{split} \label{eqn.withcoefficienttwo} \psi_1\big([&D_\Lambda\bar\tau\righttoleftarrow \mathfrak K_4]_{({\mathcal O}(-1))}\big) =({\mathfrak S}_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1)) \\ &\qquad\qquad+2(\mathfrak K_4,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(e_1,e_1+e_2))\in \Burn_2(\mathfrak K_4). \end{split} \end{align} We get $\psi_1\big([D^\circ_\Lambda\bar\tau\righttoleftarrow \mathfrak K_4]^{\mathrm{naive}}_{({\mathcal O}(-1))}\big)$, according to Theorem \ref{thm.main}, by subtracting contributions from \[ \Lambda''=\{G,\langle 0,(1,2)\rangle\},\quad \bar\tau''\in\{(1,1,1),(-1,-1,1),(\zeta,\zeta,\zeta),(-\zeta,-\zeta,\zeta)\}. \] When $\bar\tau''\in\{(1,1,1),(-1,-1,1)\}$, we apply $\tau_{\{1,2\},\{2\}}$ to the indexed equivariant Burnside group element \[ (\mathfrak K_4\subseteq \mathfrak K_4,\mathrm{triv}\mathrel{\reflectbox{$\righttoleftarrow$}} k,(),(e_1,e_2)) \in \Burn_{2,\{1,2\}}(\mathfrak K_4) \] to yield, in each case, weights $e_1$ and $e_1+e_2$, thereby cancelling the term in \eqref{eqn.withcoefficienttwo} with coefficient $2$. When $\bar\tau''\in\{(\zeta,\zeta,\zeta),(-\zeta,-\zeta,\zeta)\}$ the contribution is trivial in $\Burn_2(\mathfrak K_4)$. So \[ \psi_1\big([D^\circ_\Lambda\bar\tau\righttoleftarrow \mathfrak K_4]^{\mathrm{naive}}_{({\mathcal O}(-1))}\big) =({\mathfrak S}_2,C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1)). \] \item $\Lambda=\{\langle (1,(1,2))\rangle \}$: The computation is similar. There is only a nontrivial computation for $\bar\tau=1$, and we obtain \[ \psi_1\big([D^\circ_\Lambda\bar\tau\righttoleftarrow \mathfrak K_4]^{\mathrm{naive}}_{({\mathcal O}(-1))}\big) =(\text{diagonal in $C_2\times {\mathfrak S}_2$},C_2\mathrel{\reflectbox{$\righttoleftarrow$}} k({\mathbb P}^1),(1)). \] \end{itemize} Combining the contributions, we obtain the formula in the statement of Proposition~\ref{prop:class-dp6}. 
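The combination step can be tallied mechanically. In the sketch below (Python; the symbols are opaque string labels transcribed from the computations above, with induction to $\Burn_2(G)$ left implicit), the total consists of nine distinct symbols with coefficients summing to $11$, matching Proposition~\ref{prop:class-dp6}.

```python
from collections import Counter

# Tally of the contributions from Steps 6 and 7 (opaque string labels).
total = Counter()
total["(triv, G, k(X), ())"] += 1              # Step 6: zero cone
total["(C2xS2, triv, k, (e1,e2))"] += 2        # t = 2, two contributions
total["(C2xS2, triv, k, (e1+e2,e2))"] += 2     # t = 2, two contributions
total["(C2, S3, k(P1), (1))"] += 1             # t = 1, Lambda = {G}, (1,1,1)
total["(C2xC3, triv, k, ((0,1),(1,1)))"] += 1  # same bar-tau
total["(C2, S2, k(P1), (1))"] += 1             # bar-tau = (-1,-1,1)
total["(C3, triv, k, (1,1))"] += 1             # bar-tau = (zeta,zeta,zeta)
total["(S2, C2, k(P1), (1))"] += 1             # Lambda = {<(0,(1,2))>}
total["(diag in C2xS2, C2, k(P1), (1))"] += 1  # Lambda = {<(1,(1,2))>}

assert len(total) == 9 and sum(total.values()) == 11
```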
\section{Dimension 3} \label{sect:dim3} In this section, we analyze 3-dimensional tori, following \cite{kun}. This is the smallest dimension where cohomology can obstruct rationality and linearizability. We have two motivating problems: \begin{itemize} \item Find nonlinearizable actions with vanishing ${\mathrm H}^1(G',\Pic(X))$, for all $G'\subseteq G$; this is the analogue of \cite{isk-s3}. \item Investigate the relation between $$ [X\righttoleftarrow G]\quad \text{ and } \quad {\mathrm H}^1(G',\Pic(X)), $$ where $X$ is a smooth $G$-equivariant projective compactification of $T$ and $G'\subseteq G$. \end{itemize} Any action on a torus $T=\mathbb G_m^3$ factors through a subgroup of $$ C_2\times {\mathfrak S}_3\times C_2\quad \text{ or } \quad C_2\times {\mathfrak S}_4, $$ and the second group admits 3 different actions, labeled C, S, and P in \cite{kun}. The first group is realized on a product of a del Pezzo surface of degree 6 with ${\mathbb P}^1$, with the natural action of $G':=C_2\times {\mathfrak S}_3$ on the DP6 (described in Section~\ref{sect:2}) and $C_2$ on ${\mathbb P}^1$. As mentioned in Section \ref{sect:2} and \cite[Rem. 9.13]{lemire}, it is already an open problem whether the $G'$-action is linearizable. The other actions are realized as follows \cite[Section 2]{kun}: \begin{itemize} \item[(C)] on ${\mathbb P}^1\times {\mathbb P}^1\times{\mathbb P}^1$, \item[(S)] on the blowup of ${\mathbb P}^3$ in the four coordinate points and the six lines through these points; here ${\mathfrak S}_4$ permutes the coordinates in ${\mathbb P}^3$ and $C_2$ is the Cremona involution; \item[(P)] on the (singular) hypersurface $$ \{ x_1x_2x_3x_4=y_1y_2y_3y_4 \} \subset ({\mathbb P}^1)^4, $$ where ${\mathfrak S}_4$ acts by permuting the factors and $C_2$ switches $x_i$ and $y_i$, for all $i$. \end{itemize} The $G=C_2\times {\mathfrak S}_4$-action in type (C) is linearizable. The following proposition covers the types (S) and (P); see \cite[Fig. 4]{kun}. 
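Before stating it, we note that for type (P) the invariance of the hypersurface is elementary: permutations of the factors fix the defining polynomial, while the involution changes its sign, so the zero set is preserved. A quick numerical check (Python sketch; purely illustrative):

```python
import random

def F(x, y):
    """Defining polynomial x1 x2 x3 x4 - y1 y2 y3 y4 of the type (P)
    hypersurface, evaluated on affine sample points."""
    px = py = 1
    for xi, yi in zip(x, y):
        px *= xi
        py *= yi
    return px - py

random.seed(0)
for _ in range(100):
    x = [random.randint(-5, 5) for _ in range(4)]
    y = [random.randint(-5, 5) for _ in range(4)]
    # S4 permuting the four factors leaves F unchanged.
    perm = random.sample(range(4), 4)
    assert F([x[i] for i in perm], [y[i] for i in perm]) == F(x, y)
    # C2 swapping x_i and y_i for all i changes the sign of F,
    # so the zero set {F = 0} is preserved.
    assert F(y, x) == -F(x, y)
```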
\begin{prop} \label{prop:kun} Assume that $T$ admits an action of the Klein group $G:={\mathbb Z}/2\oplus {\mathbb Z}/2$ such that $G\subset{\mathrm{GL}}(M)$ is generated by $$ \begin{pmatrix} -1 & 0 & 0 \\ 0 & 0& -1 \\ 0& -1& 0 \end{pmatrix}, \quad \begin{pmatrix} -1 & -1 & -1 \\ 0 & 0& 1 \\ 0& 1& 0 \end{pmatrix}. $$ Let $X$ be a smooth projective $G$-equivariant compactification of $T$. Then \begin{enumerate} \item ${\mathrm H}^1(G, \Pic(X)) = {\mathbb Z}/2$, and the action is not stably linearizable, \item $[X\righttoleftarrow G]=(\mathrm{triv},G\mathrel{\reflectbox{$\righttoleftarrow$}} k(X),())$ in $\Burn_3(G)$. \end{enumerate} \end{prop} \begin{proof} The first statement is one of the key results in \cite[Section 4]{kun}. The second follows by an argument as in the proof of Proposition~\ref{prop:s-vanishing}: any symbol with a nontrivial stabilizer that arises vanishes in $\Burn_3(G)$. \end{proof} \bibliographystyle{plain}
\section{Introduction} The mining software repositories community often needs to utilize methods originating from the Natural Language Processing (NLP) community. Sentiment Analysis is an NLP task that has received a 50-fold increase in the number of papers between 2005 and 2015~\cite{MANTYLA201816}. In software engineering, it has recently gained attention for detecting developers' emotions~\cite{islam2016towards,Jongeling2017,mantyla2017bootstrapping,Gachechiladze2017Anger} as well as opinions about software products used in the field~\cite{martin2017survey,guzman2015retrieving,maalej2015bug}. Topic modeling is another NLP task that discovers topics occurring in a set of documents. For example, in software engineering it has been used to improve test case selection for manual testing~\cite{hemmati2017prioritizing} and to detect error-prone software components~\cite{chen2017topic}. What is common in these NLP tasks is that they may produce incorrect results unless pre-processing is able to distinguish natural language from other textual elements that are common in the software engineering context, such as source code, system configuration, and stack traces. In the software engineering literature, we can find prior work on this topic. Bacchelli et al.~\cite{bacchelli2012content} presented a system for classifying the content of software development emails. Their approach uses several parsing and machine learning techniques and achieves an F-score of 0.945 for classifying whether a line is natural language (NL) or not, further classifying non-NL content as junk, patch, or stack trace. Their tool seemed ideal for our problem; however, we were unable either to get it running or to extract their benchmark data from the database dump to a CSV file, due to its complex database design. Overall, their solution offered very good performance, but its design seemed excessive. Thus, we searched for other solutions. 
In earlier work, Bacchelli et al.~\cite{bacchelli2010extracting} used a simpler approach, based on regular expressions alone, and achieved F-scores as high as 0.89 in separating source code from email. As our task is to separate natural language from various forms of SE communication, this only partially matched our needs. Yet, we found this approach appealing due to its simplicity and relatively good performance. Before the work by Bacchelli et al., Bettenburg et al.~\cite{bettenburg2008extracting} showed how regular expressions and island parsing can be used in a single project to extract patches, stack traces, source code, and enumerations from bug reports with an accuracy between 97.0\% and 100.0\%. However, they do not consider the extraction of natural language, so their task is somewhat different from ours. Furthermore, Bettenburg et al. did not report how easily transferable their approach is between projects, leaving us unsure whether one could develop a similarly accurate regular expression for diverse data sources. When looking at the natural language processing literature, we realized that detecting natural language in computing outputs could be viewed as a special case of a language detection task, e.g., whether a piece of text is written in English or French. From the language detection literature, e.g.~\cite{jauhiainen2017evaluation}, we learned that different language features can be used, and it seemed that very good performance is achieved by extracting character n-grams of three to five characters. \section{Methodology} Using ideas originating from software engineering and language detection, we aim to create a lightweight classifier tool that takes a line of text as input and predicts whether it is \underline{N}atural \underline{L}anguage \underline{o}r \underline{N}ot (NLoN). The tool is currently available as an open-source R package on GitHub~\cite{NLoN_Rpackage}. 
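Character n-gram features of the kind used in language detection are straightforward to extract; the following sketch (Python; the NLoN package itself is implemented in R, so this is only an illustration of the idea) shows how tri-gram profiles already separate a natural-language line from a code-like line:

```python
from collections import Counter

def char_ngrams(line, n=3):
    """Overlapping character n-grams of a line, as a frequency table."""
    line = line.lower()
    return Counter(line[i:i + n] for i in range(len(line) - n + 1))

# A natural-language line and a code-like line yield very different
# tri-gram profiles, which a learner can exploit.
nl = char_ngrams("please review the attached patch")
code = char_ngrams("if (err != NULL) { return -1; }")
assert "the" in nl and "the" not in code
```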
We want the cost of moving our tool between projects to be manageable in terms of effort and expertise. To control effort, we decided that it should require no more than 2,000 lines of text for training. According to our experience, manually labeling a single line takes about 3 seconds, meaning that 2,000 lines can be labeled in 1 hour and 40 minutes. However, counting in some breaks and the time needed to think about borderline cases, we think it is realistic to budget 4 hours for annotating project-specific data that can be incorporated in adaptations of the tool. In order to minimize expertise and effort, we decided to use machine learning rather than regular expressions. Although regular expressions and parsers can offer excellent accuracy~\cite{bettenburg2008extracting}, they may require significant expertise when adapting them from one project to another. The adaptation of regular expressions also requires investigation of the project under study. We think that effort is better spent in manual data labeling, whose results are then fed to a machine learner that decides whether a particular regular expression is a good predictor or not. We utilized three data sources with 2,000 samples each. First, we have Comments made on the Mozilla issue tracker for the projects where most of the professional development happens, i.e., Firefox, Core, and Firefox OS. In total, the repository has over 6 million issue comments. Second, we have Chat entries retrieved from the public Slack archive of Kubernetes. In particular, we downloaded approximately 16K random entries from the \texttt{\#kubernetes-dev} channel, where we expected to find representative examples of chat entries mixing natural language with code. Third, we have Email messages mined from one of the mailing list archives of the Apache Lucene project.
Similarly, we downloaded the entire content (25K messages) of the \texttt{lucene-dev} mailing list, where we expected to find more emails containing natural language content interleaved with code snippets. The first and second authors performed independent labeling. We noticed that oversight errors in human labeling occurred in 1 to 2\% of the labels for both labelers. After these errors were fixed, the labelers agreed on 97 to 98\% of the lines. To keep our presentation of results tidy, we only use the labels of the second rater (= second author). The first rater (= first author) was responsible for the NLoN-package implementation; thus, it is possible that his ratings are influenced by the feature engineering done for the machine learning model. Thus, the use of the second rater, who was not involved in the tool implementation, offers unbiased text labels. Still, we note that there is no meaningful difference in our classifier performance between raters. For machine learning, we implemented two approaches: feature engineering (FEng) and character tri-grams (C3gram). Feature engineering is inspired by the success of regular expressions in past work~\cite{bettenburg2008extracting,bacchelli2010extracting}. Yet, we do not interpret regular expressions as absolute rules where matching a certain condition would classify the input as NL or non-NL. Rather, we extract them as language features and feed the results to our machine learning algorithm; e.g., if the line ends with ``\{'', the feature is fed as 1, and as 0 otherwise. Additionally, feature engineering uses statistics of each line, such as the ratio of non-letter characters and the ratio of capital letters. All feature engineering predictors are shown in Table~\ref{tab:feng}. Ten of our eleven features were created when working with our first data set, but to our surprise these features also performed very well with the two other data sets. In the end, we only found one extra feature that improved performance.
However, we think that there is room for improvement in future work. Character tri-grams were suggested by the language detection literature, e.g.,~\cite{jauhiainen2017evaluation}. We were afraid that, due to our small sample size (2,000 lines) and the limited amount of content each line holds, tri-grams would not perform very well. Language detection approaches offer good performance starting from 25 characters, and top performance is reached at 60 characters~\cite{jauhiainen2017evaluation}, requirements that are not met by many of our input lines. Also, the number of samples in language detection studies can go up to millions~\cite{jauhiainen2017evaluation}. Glmnet implements a generalized linear model with a penalty. It was chosen as our machine learning tool due to its fast performance, robustness, and ability to handle a large and sparse input space with multiple correlating inputs~\cite{friedman2010regularization}. Due to these features, penalized regressions are regarded as a recommended strategy for natural language processing tasks~\cite{gentzkow2017text}. Glmnet performs both variable selection and regularization, which prevent over-fitting by limiting the model complexity with a penalty term lambda. This ensures that we can test language features with high dimensionality, e.g., character tri-grams, without having to worry about feature selection or over-fitting. The ability to do feature selection as part of prediction has made Glmnet gain interest in the defect prediction community as well~\cite{osman2017automatic}. We use Glmnet for performing binomial logistic lasso regression and optimize its 10-fold cross-validation performance with respect to the area under the ROC curve (AUC). We report the performance at \emph{lambda.min}, which gives the maximum mean AUC in cross-validation. We repeated the cross-validation 5 times and use the median performance in our results to counter the effects of non-balanced data partitioning.
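For readers outside R, roughly the same recipe (binomial lasso logistic regression with 10-fold cross-validation scored by AUC) can be sketched with scikit-learn, where \texttt{LogisticRegressionCV} stands in for glmnet's \texttt{cv.glmnet}. The data below are synthetic placeholders, not the labeled lines used in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the line-by-feature matrix (2,000 lines, 20 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
# Only a few features carry signal, mimicking sparse tri-gram predictors.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)

# L1 penalty ~ lasso; cv=10 and scoring="roc_auc" mirror the 10-fold,
# AUC-optimized cross-validation described in the text.
clf = LogisticRegressionCV(
    Cs=10, cv=10, penalty="l1", solver="liblinear", scoring="roc_auc"
).fit(X, y)

n_selected = int(np.sum(clf.coef_ != 0))  # lasso zeroes out unused features
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
```

As with \emph{lambda.min} in glmnet, the cross-validated penalty controls how many features keep non-zero coefficients.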
\section{Results} \subsection{Within-source prediction} Table~\ref{tab:performance} shows the results of machine learning using Glmnet with 10-fold cross-validation. In all cases, we can see that both AUC and F-scores are above 0.9, and in many cases above 0.95. Combining feature engineering and character tri-grams always offers superior performance, but it also contains the highest number of variables. The F-scores are shown to make backward comparison to previous papers easier. As reported by Bacchelli et al.~\cite{bacchelli2012content}, their work resulted in F-scores up to 0.945; our F-scores are between 0.959 and 0.970. Past work reported no execution times; our very brief execution time tests show that our tool can classify 1,000, 10,000, and 100,000 lines in 0.3s, 2.6s, and 27s, respectively, on a personal computer using a single core and logical unit. For the remainder of this paper, we will only report and discuss AUC measures. Unlike the F-measure, AUC is threshold-independent, i.e., it does not depend on selecting a \emph{cutoff} value that the model uses to decide whether an instance is to be classified as either positive or negative. Threshold-dependent measures can lead to different and conflicting conclusions~\cite{Rahman2012threshold}. Besides, the three experimental datasets described earlier are imbalanced: the instances of the class NL outnumber those of the other class, non-NL (see Table~\ref{tab:performance}). This problem, referred to as class imbalance~\cite{He2009imbalance}, can significantly compromise both the performance of the learning algorithms and the reliability of the assessment metrics. However, because it is threshold-independent, AUC is better at providing a reliable measure of the classification performance in the presence of imbalanced datasets~\cite{Huang2005imbalance}.
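The threshold-dependence argument can be illustrated directly: with fixed scores, F1 changes as the cutoff moves, while AUC does not involve a cutoff at all. The labels and scores below are made up for illustration:

```python
from sklearn.metrics import f1_score, roc_auc_score

# Made-up classifier scores for an imbalanced toy sample (8 NL vs 4 non-NL lines).
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.8, 0.7, 0.6, 0.55, 0.5, 0.45, 0.6, 0.4, 0.3, 0.2]

# AUC is computed from the ranking of the scores alone -- no cutoff involved.
auc = roc_auc_score(y_true, scores)

# F1 requires choosing a cutoff, and its value changes with that choice.
f1_at = {t: f1_score(y_true, [int(s >= t) for s in scores])
         for t in (0.3, 0.5, 0.7)}
```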
\begin{table}[] \centering \caption{Classification performance} \label{tab:performance} \begin{tabular}{lllll} & & Comments & Chat & Email \\ \hline NL lines & & 64.8 \% & 83.4 \% & 63.6 \% \\ \hline \multirow{3}{*}{AUC} & FEng & 0.984 & 0.971 & 0.969 \\ & C3gram & 0.973 & 0.951 & 0.962 \\ & Both & 0.987 & 0.976 & 0.981 \\ \hline \multirow{3}{*}{F1} & FEng & 0.957 & 0.957 & 0.938 \\ & C3gram & 0.918 & 0.935 & 0.918 \\ & Both & 0.959 & 0.970 & 0.959 \end{tabular} \end{table} \subsection{Feature Engineering (FEng)} Table~\ref{tab:norm} shows the normalized feature engineering coefficients at the optimal penalty (\emph{lambda.min}), while Table~\ref{tab:feng} explains each coefficient. A positive sign of a coefficient means that it increases the probability of a line being natural language, while a negative sign decreases it. We can notice that most of the coefficients have the same sign in all three data sets, meaning they predict towards the same end result. The coefficients are normalized, meaning that their size indicates their importance. An empty cell indicates that the predictor is not selected for the model. For example, we can see that the number of stop-words, i.e., very common English words such as ``it'' and ``the'', strongly predicts natural language, while the ratio of special characters predicts the opposite. For stop words, we used a list included in the MySQL database, but we removed source-code-specific words, e.g., ``for'' and ``while''. We included the number of stop words twice, with different ways of tokenizing character streams into words, as we found that, depending on the data, a different tokenization was required. The coefficient values show that our decision was correct, as both stop-word predictors are meaningful for all three data sets.
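To make the predictors of Table~\ref{tab:feng} concrete, toy versions of a few of them can be computed per line as follows. This is an illustrative Python sketch (the actual NLoN package is written in R), and the stop-word set is a tiny placeholder rather than the MySQL-derived list used in the paper:

```python
import re

# Tiny placeholder stop-word list (the paper uses MySQL's list minus
# code-like words such as "for" and "while").
STOP_WORDS = {"the", "it", "is", "a", "and", "to", "of", "in", "that", "this"}

def line_features(line):
    """Toy analogues of a few Table-of-predictors entries (names illustrative)."""
    n = max(len(line), 1)
    stripped = line.rstrip()
    last = stripped[-1] if stripped else ""
    words = re.findall(r"[a-z']+", line.lower())
    return {
        "r.caps":      sum(c.isupper() for c in line) / n,
        "r.specials":  sum(not (c.isalnum() or c.isspace()) for c in line) / n,
        "n.sw":        sum(w in STOP_WORDS for w in words),
        "last.c.code": int(last in "{};)"),
        "last.c.nl":   int(last in ".?!"),
    }
```

The learner then weighs these signals against each other instead of treating any single match as a hard rule.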
\begin{table}[] \centering \caption{Normalized coefficients for model FEng} \label{tab:norm} \begin{tabular}{lrrr} & Comments & Chat & Email \\ \hline r.caps & -0.40 & -0.11 & 0.14 \\ r.specials & -2.04 & -0.41 & -1.13 \\ r.numbers & -0.90 & -0.21 & . \\ l.words & -1.59 & -0.85 & -2.54 \\ n.sw & 1.80 & 2.31 & 0.91 \\ n.sw2 & 0.59 & 0.29 & 0.30 \\ last.c.code & -0.15 & 0.04 & -0.63 \\ c1-3.letters & 0.29 & 0.06 & 0.33 \\ last.c.nl & 1.42 & 0.90 & 0.50 \\ n.emoticons & 0.64 & 0.05 & . \\ first.c.at & . & 1.58 & . \end{tabular} \end{table} \begin{table}[] \centering \caption{Feature engineering predictors} \label{tab:feng} \begin{tabular}{ll} Abbreviation & Explanation \\ \hline r.caps & Ratio of capital letters \\ r.specials & Ratio of chars not alphanumeric or whitespace \\ r.numbers & Ratio of numbers \\ l.words & Length of words \\ n.sw & Number of stop-words split with white space \\ n.sw2 & Number of stop-words split with tokenize\_words \\ last.c.code & Is last character typical in code, e.g. \{ or ; \\ n.c1-3.letters & Number of letters in first three characters \\ last.c.nl & Is last character typical in NL, e.g. ? or . \\ n.emoticons & Number of emoticons \\ first.c.at & Is first character of line @-sign \end{tabular} \end{table} \subsection{Character tri-grams (C3gram)} For character tri-grams, we did some pre-processing. We changed all numbers to zeros, as we figured that recognizing numbers would be important but the exact numbers would not matter for our task. We also converted all characters to lower case, as we noticed no performance difference when keeping the casing. We do realize that the ratio of capital letters is a predictor in feature engineering; still, as keeping capitals offered no performance improvement, we removed them. Perhaps with larger training data casing would be meaningful.
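The tri-gram pre-processing (lower-casing, mapping every digit to zero) and the extraction of overlapping character 3-grams can be sketched as follows (illustrative Python; function names are ours, and the package itself is in R):

```python
import re

def normalize(line):
    """Pre-processing used before tri-gram extraction: lower-case the line
    and map every digit to 0, so exact numbers do not matter."""
    return re.sub(r"[0-9]", "0", line.lower())

def char_trigrams(line):
    """Overlapping character 3-grams of the normalized line."""
    s = normalize(line)
    return [s[i:i + 3] for i in range(len(s) - 2)]
```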
Table~\ref{tab:3gram} shows the number of selected predictors and trigrams at the minimum lambda, which gives the maximum AUC, but also at the 1se lambda, which gives the most regularized (penalized) model such that the AUC is within one standard deviation of the maximum. Utilizing not the best model but one that is one standard deviation away from it (lambda 1se) is a heuristic often used in machine learning when many predictors are present; it slightly sacrifices model accuracy to select a simpler model whose accuracy is similar to the best model~\cite{friedman2001elements,krstajic2014cross}. We have many character trigrams in our input data, between 8740 and 11169, but in all cases less than 10\% of those are selected as predictors for the best model. The number of selected tri-grams varies between 304 and 554. When we go for the simpler model, whose performance is within one standard deviation from the best model, we find that for the Chat messages from Kubernetes only 1.4\% of the trigrams (129/9101) are used in prediction. In the case of the Comments from Mozilla and the Emails from Lucene, the shares are 1.2\% (133/11169) and 2.4\% (210/8740), respectively. The reduction in the number of predictors of the simpler model is even more evident in the model combining both tri-grams and feature engineering (see Table~\ref{tab:both}). The best model, using both, has a number of predictors between 138 and 406, while the simpler model, giving nearly identical performance, uses between 27 and 68 predictors.
\begin{table}[] \centering \caption{Number of predictors and performance at two different lambda values for model C3gram} \label{tab:3gram} \begin{tabular}{lrrr} & Comments & Chat & Email \\ \hline C3grams & 11169 & 9101 & 8740 \\\hline AUC (lambda min) & 0.973 & 0.951 & 0.962 \\ selected 3-grams & 464 & 304 & 554 \\\hline AUC (lambda 1se) & 0.966 & 0.949 & 0.959 \\ selected 3-grams & 133 & 129 & 210 \end{tabular} \end{table} \begin{table}[] \centering \caption{Number of predictors and performance at two different lambda values for model Both} \label{tab:both} \begin{tabular}{lrrr} & Comments & Chat & Email \\ \hline Features (C3grams+FEng) & 11180 & 9112 & 8751 \\\hline AUC (lambda min) & 0.987 & 0.976 & 0.981 \\ selected features & 406 & 138 & 383 \\\hline AUC (lambda 1se) & 0.986 & 0.973 & 0.980 \\ selected features & 48 & 27 & 68 \end{tabular} \end{table} \subsection{Cross-source prediction} We were also interested in how our tool would perform in a cross-source prediction task. Table~\ref{tab:cross} shows these results. The first column (i) shows the results of the 10-fold cross-validation using all six thousand samples of the three sources. We can see that, in comparison to using just source-specific data (see Tables~\ref{tab:performance} and \ref{tab:cross}), the performance of character tri-grams slightly improves, as it is higher than the midpoint of the range of within-source prediction (0.968 vs. the range 0.951-0.973), while the performance of feature engineering slightly decreases (0.970 vs. the range 0.969-0.984). This matches our expectation that character tri-grams do better with larger data sets, as they are sparse in comparison to feature engineering numbers that can be computed from every line. When using both feature engineering and character tri-grams with all data, we achieve a performance of 0.982, while using only source-specific data gives results varying between 0.976 and 0.987.
We conclude that using all the data neither improves nor weakens the performance. For cross-source prediction (see the last three columns in Table~\ref{tab:cross}), we can see that using the Kubernetes Chat messages and the Lucene Email messages to predict Mozilla issue Comments (ii) works out surprisingly well, with AUC up to 0.980, which is almost as good as using Mozilla's own data (AUC 0.987, see Table~\ref{tab:performance}). On the other hand, using Mozilla issue Comments and Kubernetes Chat messages to predict Lucene Emails (iv) offers much weaker performance, with the best AUC at 0.913, which is considerably weaker than using the Lucene mailing list's own data (0.981). Our cross-prediction results show that, directly using our data, one can get very good performance when filtering out non-natural-language text in a software engineering context. Nevertheless, we recommend labeling a data set for each source, as the effort is quite low (estimated at only four hours) and the performance is very likely to be better. \begin{table}[] \centering \caption{Cross-source prediction results (AUC)} \label{tab:cross} \begin{tabular}{lrrrr} & (i) & (ii) & (iii) & (iv) \\ & All (CV) & Comments & Chat & Email \\ \hline F-engineering & 0.970 & 0.975 & 0.964 & 0.911 \\ Char 3-grams & 0.968 & 0.946 & 0.914 & 0.880 \\ Both & 0.982 & 0.980 & 0.957 & 0.913 \end{tabular} \end{table} \subsection{Limitations and future work} Our approach is relatively simple and offers very good performance on three different source types from three different projects. However, the results from three sources cannot be used to claim that our solution would work in all other software engineering contexts. In addition, we only tried one machine learning method (i.e., Glmnet) with default settings, and it is possible that other algorithms could offer better results. Therefore, we welcome others to try to improve our solution, which is available online alongside our data.
It is typical to have a mix of natural language and code in a single line of text. When labeling, we always considered such mixed lines as natural language. Based on reviewer feedback, we think it might be worthwhile to have a third class for the mixed lines. Such mixed lines would require further processing, and they would need to be fed to another tool implementing parsing to separate the contents. Overall, one could challenge our choice of line granularity, which was selected as we aimed for simplicity rather than perfection. Furthermore, we note that we did not assess how NLoN performs with respect to different programming languages. Finally, the tool is only tested in an English language context and uses an English stop-word list as part of the detection. The minimal requirement for using this tool in another language context would be to replace the English stop-word list with one for the corresponding language. Languages with numerous conjugated forms would probably also need lemmatization and pre-processing before our tool could be used. \section{Conclusions} In this paper, we have presented a solution to separate natural language from other text inputs that are common in software engineering, such as stack traces or source code. From the software engineering domain, we derived the idea of using regular expressions to separate input into different types. However, we do not treat regular expression matches as absolute rules but rather as information that is fed to machine learning. We also extract other language features, such as the ratio of capital letters and the number of the most common English words, i.e., stop words. Finally, from the language detection literature, we borrowed the idea of extracting character tri-grams as further information to feed our machine learning model.
Our best model achieves an area under the ROC curve between 0.976 and 0.987 on three different source types (bug tracker issue comments, chat messages, and email) originating from three different projects (Mozilla Firefox, Kubernetes, and Apache Lucene). When we originally came up with the problem that natural language should be separated from non-natural language when performing NLP tasks such as sentiment analysis or topic modeling, we were sure that a solution to this problem would be openly available. We found only one solution~\cite{bacchelli2012content} that addresses the same problem and is openly available. Unfortunately, (i) we could not run it; (ii) even if we could have, the complexity of the solution seemed too high given the problem. Therefore, we implemented a lightweight solution of a few hundred lines of R code and data files, instead of database dumps. Our solution is available as an open-source R package on GitHub~\cite{NLoN_Rpackage}.
\section*{Supplementary Material} See supplementary material for the details of the experimental data. \acknowledgments The authors wish to thank J. R. Kim and Y. Ishida for fruitful discussions. This work is supported by the Institute for Basic Science in Korea (Grant No. IBS-R009-G2). PPMS measurements are supported by the National Center for Inter-University Research Facilities (NCIRF) at Seoul National University in Korea. The work at Yonsei University is supported by the National Research Foundation of Korea (NRF) Grants (NRF-2017R1A5A1014862 (SRC program: vdWMRC center) and NRF-2019R1A2C2002601). \section*{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction}\label{sec:intro} Diffusion processes represent the evolution of many real phenomena, such as epidemic diseases~\cite{Saeedian2017}, gossip spreading~\cite{banerjee2014gossip}, prey-predator species~\cite{bomze1995lotka}, pollution~\cite{gonccalves2013analytical}, and fluid flow~\cite{cussler2009diffusion}. Although there are many mathematical approaches in this context, simple standard frameworks are insufficient to model some anomalous diffusion processes. The real world contains everlasting competition between the intelligent components of phenomena, which interact in various conditions. Hence, there is still a great demand for advancing complex system modeling to interpret such behaviors. In this paper, we investigate the competition between a normal and an anomalous diffusion process in the SI model. The first diffusion enjoys a higher growth rate, while the other is an anomalous diffusion including memory. We suggest a master equation which traces the dynamics of this contest and predicts its future behavior. It contains a tunable memory factor that determines how strongly the memory is stimulated in the anomalous diffusion. \begin{figure*}[ht!]
\centering \begin{tikzpicture} \node at (-.6,2) {(\textbf{a})}; \node at (3.4,2) {(\textbf{b})}; \node at (7.4,2) {(\textbf{c})}; \draw [line width=.5mm,color=gray!70,rounded corners=0.2cm] (-.3,-1) rectangle (3,2) [xshift=4cm] (-.3,-1) rectangle (3,2) [xshift=4cm] (-.3,-1) rectangle (3,2); \node at (.3,-1.2 ) {\footnotesize{time stamp}}; \filldraw [red!50](2.6,-1.2) rectangle (2.8,-.7); \filldraw [black!50](2.4,-1.2) rectangle (2.6,-.7); \filldraw [red!50](6.6,-1.2) rectangle (6.8,.2); \filldraw [black!50](6.4,-1.2) rectangle (6.6,.2); \filldraw [red!50](10.6,-1.2) rectangle (10.8,1); \filldraw [black!50](10.4,-1.2) rectangle (10.6,-.7); \draw [->,line width=.3mm] (1,-1.2) -- (11,-1.2); \filldraw [black,nearly transparent](1.2,0.5) circle (.3cm); \filldraw [red,nearly transparent](1.9,1.3) circle (.2cm); \filldraw [red,nearly transparent](.5,1.3) circle (.2cm); \filldraw [red,nearly transparent](.5,-.3) circle (.15cm); \filldraw [black,nearly transparent](5.2,0.5) circle (.9cm); \filldraw [red,nearly transparent](5.9,1.3) circle (.5cm); \filldraw [red,nearly transparent](4.5,1.3) circle (.6cm); \filldraw [red,nearly transparent](4.5,-.3) circle (.4cm); \filldraw [red,nearly transparent](4.8,-.7) circle (.2cm); \filldraw [red,nearly transparent](5.8,-.4) circle (.1cm); \filldraw [black,nearly transparent](9.2,0.5) circle (.3cm); \filldraw [red,nearly transparent](9.9,1.3) circle (.6cm); \filldraw [red,nearly transparent](8.5,1.25) circle (.7cm); \filldraw [red,nearly transparent](8.6,-.2) circle (.58cm); \filldraw [red,nearly transparent](8.8,-.6) circle (.36cm); \filldraw [red,nearly transparent](9.8,-.4) circle (.43cm); \filldraw [red,nearly transparent](10,.7) circle (.2cm); \filldraw [red,nearly transparent](8,.5) circle (.14cm); \filldraw [red,nearly transparent](9.5,0.2) circle (.1cm); \filldraw [red,nearly transparent](10.1,0.2) circle (.3cm); \filldraw [black,nearly transparent](-.7,-1.8) circle (.3cm); \node at (2.5,-1.8) {\footnotesize{Anomalous diffusion 
with lower growth rate}}; \filldraw [red,nearly transparent](5.85,-1.8) circle (.3cm); \node at (8.9,-1.8) {\footnotesize{Normal diffusion with higher growth rate}}; \end{tikzpicture} \caption{\textbf{Schematic competition of normal and anomalous diffusion.} \textbf{(a)} At the first stage, the proportion of the anomalous diffusion is small and equal to the proportion of the other side of the competition--the normal diffusion process with the higher growth rate. \textbf{(b)} The conflict starts when shared diffusion areas emerge, and the growth of one diffusion decays the proportion of the other diffusion process. \textbf{(c)} The larger part of the competition establishes ever-growing behavior, so that the anomalous diffusion is doomed to vanish.} \label{fig:scheme} \end{figure*} The trivial outcome of our proposed model is illustrated in Fig.~\ref{fig:scheme}. The normal diffusion with the higher growth rate will occupy the larger region of the system and maintain its growth. The counter-side of the rivalry, the one with the lower growth rate, is vulnerable to vanishing. However, by taking into account memory effects~\cite{Ebadi2016,Saeedian2017,pone} in the anomalous diffusion, it is possible to extend the time interval over which it maintains a minimum proportion. It is worthwhile to shed light on possible applications of our proposed model in industry, beyond purely theoretical aspects, namely competitive financial interactions~\cite{Iranzo2016,Iori2015}, social marketing events~\cite{Thackeray2008,Ashley2014}, sales promotions, which may be applied in a saturated market~\cite{Kaushik2019}, and the novel phenomenon of \textit{crowd-funding} and financing state-of-the-art technologies~\cite{Kaplan2013}. Moreover, the proposed idea is not limited to economics but extends to other fields of study involving analogous models.
In the following, section \ref{sec:def} introduces the master equation with integer order and analyzes its dynamic behavior. In section \ref{sec:fractional}, the differential equation associated with the lower growth rate is endowed with memory by applying the \textit{Caputo} approach~\cite{podlubny1999fractional} to produce the anomalous diffusion. To optimize the memory effects, a strategy is suggested in section \ref{sec:strategy}, and its quality is checked in section \ref{sec:heatmap} for applications in business. In section \ref{sec:sensitivity}, conclusions are drawn and future directions outlined. \section{Modeling the competition} \label{sec:def} Let us denote the normal and anomalous diffusion at time $t$ by $I_1(t)$ and $I_2(t)$, respectively. We consider $S(t) \geq 0$ as a potential shared source at time $t$. We define the constant coefficient $\gamma$, the \textit{relative growth rate}, as the growth rate of the anomalous diffusion with respect to that of the normal diffusion on the other side of the competition. Since the size of the whole system is assumed constant, the amounts of the two sides, $I_1(t)$ and $I_2(t)$, and the potential capacity $S(t)$ are not independent, so we consider the normalized form satisfying: \begin{equation} 1 = S(t)+I_1(t)+I_2(t)-(I_1(t)\cap{I_2(t)}). \end{equation} Each part of the source may be distributed to both diffusions over time. Thus, the growth of $I_1$ and/or $I_2$ leads to the reduction of $S$. Hence, we define the dynamic behavior of the potential capacity $S(t)$ with the following master equation, \begin{equation}\label{eq:1} \frac{dS}{dt} = -( I_1 + \gamma I_2 )S. \end{equation} The conversion rate of \(S\) to the two diffusions depends on the growth rate coefficients and the potential capacity. On the other hand, the growth of $I_1$ should decay $I_2$, and vice versa.
Therefore, one can formulate the dynamics of each diffusion as: \begin{equation}\label{eq:2} \frac{dI_1}{dt} = (1 - \gamma)I_1 I_2 + I_1 S, \end{equation} \begin{equation}\label{eq:3} \frac{dI_2}{dt} = (\gamma- 1)I_1 I_2 + \gamma I_2 S. \end{equation} By assuming $0<\gamma<1$, the growth rate of diffusion \(I_1\) is higher than that of diffusion \(I_2\). Under the condition $\gamma=1$, the two dynamical equations turn into two equal coupled differential equations. In this case, with the same initial values of $I_1$ and $I_2$, the two competitors will grow symmetrically until each occupies half of the system. \begin{figure*}[ht!] \centering \includegraphics[scale=0.5]{Memory05-VS-memoryless} \caption{ The evolution of $S(t)$, $I_1(t)$ and $I_2(t)$ with the relative growth rate $\gamma=0.995$ and initial values $S(0) = 0.8$, $I_1(0) = I_2(0) = 0.1$. \textbf{(a)} The numerical solution of a Markov process based on Eq.\ref{eq:1}, Eq.\ref{eq:2} and Eq.\ref{eq:3}. \textbf{(b)} The numerical solution of a non-Markov process based on Eq.\ref{eq:11}, Eq.\ref{eq:21} and Eq.\ref{eq:4} with $\alpha = 0.5$. } \label{fig:1} \end{figure*} In Fig.~\ref{fig:1}, the dynamics of growth and decay of the two diffusions with the same initial value $I_1(0)= I_2(0)=0.1$ and relative growth rate $\gamma=0.995$ show the emerging pattern of the competition to earn a larger shared area. $I_2(t)$ reaches a maximum value at a critical time $t_c$ where $I_1(t_c)+I_2(t_c) \simeq 1$ and $S(t_c)\simeq 0$. In the memory-less case, Fig.~\ref{fig:1}(a), the competition between the two sides begins at $t_c$. At this time, side 1 begins growing faster than side 2 and obtains a bigger region of the system, whereas the weaker side, \(I_2\), follows a decreasing trend. Hence, a small difference between the growth rate coefficients of the competitors causes two diverse destinies: the more powerful side of the competition will monopolize the system.
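As a side check, the right-hand sides of Eqs.~\ref{eq:1}--\ref{eq:3} sum to zero, so $S+I_1+I_2$ is conserved, and a plain forward-Euler integration reproduces this behavior. The sketch below is illustrative Python (the step size and horizon are our arbitrary choices, not taken from the paper):

```python
def simulate(gamma=0.995, s=0.8, i1=0.1, i2=0.1, dt=0.05, t_end=2000.0):
    """Forward-Euler integration of the memory-less competition model.

    ds/dt  = -(i1 + gamma*i2)*s
    di1/dt = (1 - gamma)*i1*i2 + i1*s
    di2/dt = (gamma - 1)*i1*i2 + gamma*i2*s
    The three rates sum to zero, so s + i1 + i2 stays constant.
    """
    for _ in range(int(t_end / dt)):
        ds = -(i1 + gamma * i2) * s
        di1 = (1 - gamma) * i1 * i2 + i1 * s
        di2 = (gamma - 1) * i1 * i2 + gamma * i2 * s
        s, i1, i2 = s + dt * ds, i1 + dt * di1, i2 + dt * di2
    return s, i1, i2

s_end, i1_end, i2_end = simulate()
```

With $\gamma=0.995$ the source is exhausted first ($S\to 0$); afterwards the tiny growth-rate advantage lets $I_1$ slowly monopolize the system.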
This shows that the relative growth rate plays a significant role in the success and failure of competitors, so that relatively smaller ones have no chance to survive the competition with bigger rivals. All the above discussion is based on the defined set of dynamical equations \ref{eq:1} to \ref{eq:3}. The proposed system can be validated against a well-known biological model with a similar concept; in fact, equations \ref{eq:2} and \ref{eq:3} are analogous to the Lotka-Volterra competition model~\cite{bomze1995lotka}. Furthermore, in Sec.~\ref{sec:heatmap}, we will discuss the future states of the temporal contest as the relative growth rate $\gamma$ changes from 0 through 1. The main question is: under which conditions can the weaker competitor survive longer? In the following sections, we will propose a strategy based on memory effects to prolong the survival of the weaker competitor. \section{Memory effects} \label{sec:fractional} A reaction-diffusion system which includes intelligent elements is affected by memory. However, the proposed model \ref{eq:1}-\ref{eq:3}, described by integer-order derivatives, cannot perfectly describe processes with memory (non-Markovian processes)~\cite{Saeedian2017, podlubny1999fractional}, due to the fact that such derivatives are determined only by a very small neighborhood around each point of time. To overcome this shortcoming, we incorporate the concept of \textit{fractional calculus} into the system as a kernel of the differential operator--that is, we substitute a fractional-order derivative. Indeed, it has been shown that fractional derivatives can appropriately represent the effects of power-law memory~\cite{kilbas2006theory,pone,Saeedian2017}. Hence, we consider memory effects only for the evolution of the weaker competitor, $I_2$. As a result, intellectual behaviors that aim to slow down the decay of the diffusion can be formulated by applying the memory effects.
Mathematically, an integral equation with a time-dependent kernel $\kappa(t-t')$~\cite{Saeedian2017,Hassanibesheli2017} enables us to take the effects of previous time steps into account: \begin{equation}\label{eq:41} \frac{dI_2}{dt} = \int_{t_0}^{t} \kappa(t-t') H dt', \end{equation} where \begin{equation}\label{eq:42} H=(\gamma-1) I_1(t') I_2(t')+\gamma I_2(t')S(t'), \end{equation} and we set the kernel as: \begin{equation}\label{eq:43} \kappa(t-t')=\frac{(t-t')^{\alpha-2}}{\Gamma(\alpha-1)}, \end{equation} where $0<\alpha\leqslant 1$ and $\Gamma$ denotes the Gamma function. Different types of fractional differential operators have been suggested by Riemann, Liouville, Gr\"{u}nwald, Letnikov, Sonine, Marchaud, Weyl, Riesz, Caputo, Fabrizio, Atangana, and other scientists~\cite{samko1993fractional, podlubny1999fractional, kilbas2006theory, fabrizio1, atangana2016chaos}. In this paper, we consider the Caputo fractional time derivative of order $\alpha$, which can describe physical meanings of real-world phenomena~\cite{podlubny1999fractional}: \begin{equation}\label{caputo} {}_{{t_0}}^cD_t^\alpha y(t) = \frac{1}{\Gamma (1-\alpha)}\int_{{t_0}}^t \frac{y'(\tau)\,d\tau}{(t - \tau)^{\alpha}}. \end{equation} A lower degree $\alpha$ of the fractional derivative indicates a ``stronger'' (longer-lasting) memory effect for the weaker competitor, $I_2$. Hence, the dynamical equation of $I_2$ will follow a fractional differential equation, while the two other dynamical equations~\ref{eq:1} and~\ref{eq:2} remain unchanged: \begin{align}\label{eq:11} &\frac{dS}{dt} = - (I_1 + \gamma I_2 )S,\\ \label{eq:21} &\frac{dI_1}{dt} = (1 - \gamma)I_1 I_2 + I_1 S,\\ \label{eq:4} & {}_{t_0}^cD_t^\alpha I_2(t) = (\gamma - 1)I_1 I_2 + \gamma I_2 S. \end{align} \begin{figure*}[ht!]
\centering \pgfmathsetlength{\imagewidth}{\linewidth}% \pgfmathsetlength{\imagescale}{\imagewidth}% \begin{tikzpicture}[x=\imagescale,y=-\imagescale] \node[anchor=north west] at (0,0) {\includegraphics[scale=0.4]{2222}}; \draw [<->][thick](9.5cm,-4.5cm) -- (12cm,-4.5cm); \node at (11cm,-4.3cm){\textbf{$\Delta\tau$}}; \node [text=blue] at (3.99cm,-1.675cm) {\textbf{\textit{\footnotesize{selective}}}}; \node [text=blue] at (5.3cm,-1.7cm) {\textbf{\textit{\footnotesize{recalling-}}}}; \node [text=red] at (6.6cm,-1.7cm) {\textbf{\textit{\footnotesize{forgetting}}}}; \node [text=red] at (7.9cm,-1.7cm) {\textbf{\textit{\footnotesize{strategy}}}}; \end{tikzpicture} \caption{A comparison of the evolution of the anomalous diffusion $I_2(t)$ with the relative growth rate $\gamma=0.995$ and initial value $I_2(0) = 0.01$ for three cases: without memory, with memory, and with memory and strategy. The non-fractional value of $\alpha=1$ guarantees the absence of memory effects in the growth process of $I_2$ (solid black line). The blue dashed line indicates the growth of $I_2(t)$ with the memory factor $\alpha = 0.5$. The red dash-dotted line corresponds to the growth of \(I_2(t)\) with a new memory which is started at the peak of the memory process with $\alpha = 0.5$. The interval $\Delta\tau$ denotes the added lifetime for a predefined minimum proportion after launching the strategy. } \label{fig:2} \end{figure*} For simplicity, we assume that the memory of Eq.\eqref{eq:4} is constant through time. Thus, by considering $\alpha=0.5$ and taking the effect of memory into account, the emerging competitors start developing at an almost similar rate from an equal potential source converting to the two sides, as illustrated in Fig.\ref{fig:1}(b). Interestingly, the influential memory affects the contest before $t_c$, when the whole source is completely divided into two competitors. 
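As a minimal numerical sketch of the mixed-order system of Eqs.~\eqref{eq:11}--\eqref{eq:4} (the text does not specify the integration scheme used for the figures, so this is only one plausible discretization), $S$ and $I_1$ can be advanced with an explicit Euler step while the Caputo derivative of $I_2$ is discretized with Grunwald--Letnikov weights acting on $I_2(t)-I_2(0)$; the parameter values below are illustrative:

```python
def gl_weights(alpha, n):
    # Binomial weights of the Grunwald-Letnikov scheme: w_j = (-1)^j C(alpha, j)
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def simulate(gamma, alpha, S0=0.98, I10=0.01, I20=0.01, h=0.01, steps=2000):
    """Integrate dS/dt and dI1/dt with explicit Euler, and the Caputo
    equation for I2 with an explicit Grunwald-Letnikov discretization."""
    w = gl_weights(alpha, steps)
    S, I1, I2 = [S0], [I10], [I20]
    ha = h ** alpha
    for n in range(1, steps + 1):
        s, i1, i2 = S[-1], I1[-1], I2[-1]
        S.append(s + h * (-(i1 + gamma * i2) * s))
        I1.append(i1 + h * ((1.0 - gamma) * i1 * i2 + i1 * s))
        f = (gamma - 1.0) * i1 * i2 + gamma * i2 * s
        # Memory term: the full history of I2 enters through the GL weights.
        hist = sum(w[j] * (I2[n - j] - I20) for j in range(1, n + 1))
        I2.append(I20 + ha * f - hist)
    return S, I1, I2
```

For $\alpha=1$ the weights reduce to $w_0=1$, $w_1=-1$, $w_{j\ge2}=0$, so the scheme collapses to the memoryless Euler method and the total $S+I_1+I_2$ is conserved step by step.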
It reduces the negative slope of the curve, slows down the loss rate of the weaker side, and hinders the growth of the more powerful side. Nevertheless, it is not possible to alter the final destiny of the weaker competitor. Therefore, after a comparatively longer time, the weaker side inevitably loses its whole system share, and the more powerful side of the competition gains the entire capacity. \section{Strategy} \label{sec:strategy} \textit{Besides remembering, forgetting is a priceless gift of human beings.}\\ We optimize the diffusion behavior by renewing the memory at a particular moment. This strategy may lift the growth curve to the highest level among the curves corresponding to different memory stages. Initiating the memory from different spots on the functional history timeline of the diffusion and drawing the corresponding curves enables us to compare the growth patterns depending on the memory start point. Such a selective strategy is an approach to remarkably extend the survival time of the anomalous diffusion. Fig.\ref{fig:2} illustrates a comparison of the behavior of the system with memory and strategy (red dash-dotted line), with memory only (blue dashed line), and without memory (black solid line), which lead to different growth curves. The black curve shows the evolution of $I_2(t)$ with the relative growth rate $\gamma=0.995$, the initial value $I_2(0) = 0.01$, and $\alpha = 1$. The integer value $\alpha=1$ implies the absence of memory effects in the growth process of $I_2$ and hence does not allow a long-standing survival time. The blue curve indicates the growth of $I_2(t)$ with the same relative growth rate and initial value, when the memory is set to $\alpha = 0.5$ for the operator $_0^cD_{10^{3}}^\alpha$. In this case, the process with memory initially stays below the memoryless one; however, it achieves a local success after the peak (the advent of the conflict). 
The red curve corresponds to the anomalous diffusion with a new memory starting from the peak of the process with memory. As a result, to extend the survival time of the diffusion with a lower growth rate, the anomalous diffusion should continue until the peak point while recalling the past states; then, the process restarts by forgetting past experiences, and a new anomalous diffusion continues the process, considering memory effects from the last peak. To do so, we can determine the fractional differential operator by piecewise functions, $_0^cD_{{t^*}}^\alpha$ and $_{{t^*}}^cD_{{{10}^3}}^\alpha$, where $t^*$ denotes the peak point. We call this approach the ``selective recalling-forgetting strategy", which may correspond to some well-known intelligent reactions in the context of business or other domains. Furthermore, besides the maximum value of $I_2$, examining this strategy at two other moments is interesting for advanced complex models: 1.~at the inflection point of the curve $S$, when the evolution behaviors are changing; 2.~at the intersection of $I_1$ and $I_2$, when the source is saturated and both sides of the contest hold an equal value. \section{A Proof of concept} \label{sec:heatmap} To interpret an application of the main idea, let us consider a business case study focusing on the competition of two newly founded companies. Hence, we introduce a simple dynamical model to compare the behavior of a multi-agent competing market containing two sides: our individual firm, \(I_2\), on one side, and the whole market except the so-called individual firm, \(I_1\), on the other side (see Fig.~\ref{fig:scheme}, and Eqs.~\ref{eq:11}-\ref{eq:4}). By considering the whole system as a \textit{market share}, our results build a bridge connecting a \textit{rivalry of possessing market share} and \textit{fractional calculus}.\\ Therefore, we have analogously discussed:\\ \indent I. the temporal properties of this multi-agent contest;\\ \indent II. 
the memory effects of one diffusion on the evolution of the whole system;\\ \indent III. by changing the strategy, the extent to which the anomalous diffusion can sustain itself in the temporal contest so as to possess at least a minimum \textit{ad hoc} market share for a longer time.\\ A further discussion concerns the phase spaces of $\alpha$, $\Delta\tau$, and $\gamma$. The notation $\alpha$ is a tunable memory factor that determines how strongly the memory is stimulated in the weaker firm's customers. Also, $\Delta\tau$ denotes the added lifetime after launching the strategy. $0<\gamma<1$ refers to the relative growth rate of the market share of our individual firm with respect to the relative growth rate of the market share of the other side of the competition (the whole market except our individual firm). We have revealed \(t_c\) in Fig.~\ref{fig:1} as a \textit{critical time}, at which the whole potential market is occupied by the competitors; beyond this point, achieving more market share for one firm implies giving up market share by the other firm in the contest. Accordingly, a zero-sum game~\cite{Krishnamoorthy2010,Hu2010} will emerge. As we have theoretically shown, the counter-side market with a higher growth rate will occupy the whole market and maintain its growing market share, influenced by advertisements, financial investments~\cite{Krishnamoorthy2010,Hu2010}, hub-connections and united competitors~\cite{Iranzo2016}, and so forth. On the other side of the rivalry, our individual firm with a lower growth rate is vulnerable to the extinction of its market share. Further, by taking into account the memory effects in the weaker firm, it can extend the time interval, $\Delta\tau$, of the minimum market share (Fig. \ref{fig:2}). 
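To make the role of memory renewal concrete, the following toy sketch applies the two piecewise operators $_0^cD_{t^*}^\alpha$ and $_{t^*}^cD_{T}^\alpha$ to a simple scalar relaxation $D^\alpha y = -y$, not to the full competition model; restarting the memory at $t^*$ simply means discarding the stored history and treating $y(t^*)$ as a fresh initial condition:

```python
def gl_weights(alpha, n):
    # Binomial weights of the Grunwald-Letnikov scheme: w_j = (-1)^j C(alpha, j)
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def caputo_relax(alpha, y0, h, steps):
    """Explicit scheme for D^alpha y = -y with a fresh memory at the start."""
    w = gl_weights(alpha, steps)
    y = [y0]
    ha = h ** alpha
    for n in range(1, steps + 1):
        hist = sum(w[j] * (y[n - j] - y0) for j in range(1, n + 1))
        y.append(y0 + ha * (-y[n - 1]) - hist)
    return y

# Continuous memory over [0, T] versus a memory restart at t* = T/2:
alpha, h, steps = 0.5, 0.01, 400
full = caputo_relax(alpha, 1.0, h, steps)              # _0^c D_T^alpha
first = caputo_relax(alpha, 1.0, h, steps // 2)        # _0^c D_{t*}^alpha
restarted = first + caputo_relax(alpha, first[-1], h, steps // 2)[1:]
```

In this pure-decay toy, forgetting the accumulated power-law tail re-enters the fast early-time regime and accelerates the decay; in the competition model the same renewal, applied at the growth peak of $I_2$, is what prolongs its survival.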
To compare the total number of achieved customers of the weaker company, $I_2$, for three different cases--that is, the model without memory ($NMI_2$), with memory ($MI_2$), and with memory and strategy ($SMI_2$)--we suggest considering the cumulative market share through time. Hence, we denote the cumulative function by ``$\int$". \begin{figure}[ht!] \centering \includegraphics[scale=0.38]{33} \caption{ A comparison of cumulative market shares of $I_2$ for three different cases; with memory and strategy, only with memory, and without memory, when $\gamma=0.995$. }% \label{fig:3}% \end{figure} \begin{figure}[ht!] \centering \pgfmathsetlength{\imagewidth}{\linewidth}% \pgfmathsetlength{\imagescale}{\imagewidth}% \begin{tikzpicture}[x=\imagescale,y=-\imagescale] \node at (0,0) {\includegraphics[scale=0.45]{52}}; \node [rotate=90] at (-4cm,0cm){\large{\textbf{$\gamma$}}}; \end{tikzpicture} \caption{Proportions of cumulative market shares of $I_2$, for the system including memory and strategy to the system with memory, in a range of relative growth rates $0<\gamma<1$ through the time-stamp 1000. }% \label{fig:5}% \end{figure} \begin{figure}[ht!] \centering \pgfmathsetlength{\imagewidth}{\linewidth}% \pgfmathsetlength{\imagescale}{\imagewidth}% \begin{tikzpicture}[x=\imagescale,y=-\imagescale] \node at (0,0) {\includegraphics[scale=0.35]{detaT}}; \node at (0cm,-2.7cm){\large{\textbf{$\gamma$}}}; \node [rotate=90] at (-3.5cm,0cm){\large{\textbf{$\Delta\tau$}}}; \end{tikzpicture} \caption{Predicting the effect of triggering the new strategy on lengthening the additional survival time, $\Delta\tau$ (see Fig.~\ref{fig:2}), of the weaker side (our individual firm) for different rates of competition, $\gamma$.}% \label{fig:detaT}% \end{figure} Fig.~\ref{fig:3} shows that the evolution process involving the strategy (red dashed line) performs better than the two other cases; likewise, the memory influences the system (blue solid line) after around $t=500$. 
It confirms that, for such a $\gamma$ close to 1, it is recommended to run the strategy, because the impact of using strategy and memory together is greater than the effect of memory alone. Consequently, when the competition between the two firms is too tight (e.g., for $\gamma=0.995$), it is plausible to introduce a selective recalling-forgetting strategy. Besides, to clarify the efficiency of the proposed model for various relative growth rates, we provide a heatmap of the proportions of cumulative market share for different competition ranges, $0<\gamma<1$, versus time (Fig.~\ref{fig:5}). The notation ${C} = \frac{{\int {SM{I_2}} }}{{\int {M{I_2}} }}$ indicates the proportion of the cumulative market share of $I_2$ including strategy and memory over the cumulative market share of $I_2$ with memory only. Based on Fig.\ref{fig:5}, for the ranges $0.6<\gamma<0.7$ and $\gamma\simeq1$, using a selective recalling-forgetting strategy is highly recommended for survival. Considering a predefined minimum market share, Fig.~\ref{fig:detaT} demonstrates the effect of triggering the new strategy on lengthening the additional survival time ($\Delta\tau$) of the weaker side (our individual firm). When it comes to a lower relative growth ratio ($\gamma \to 0$), the managers may be reluctant to run the strategy, because when $\gamma \to 0$, the additional survival time becomes too small ($\Delta\tau \to 0$). However, for larger values of $\gamma$, managers can carry out a \textit{trade-off} analysis~\cite{Ardalankia2019} to evaluate the probable profitability. \section{Discussion} \label{sec:sensitivity} Real-world diffusion problems have always involved a competition between various diffusion processes. These competitions occur in varied circumstances; one competitor may have a higher growth rate (or higher diffusion velocity), while the other one excels in alternative factors. 
Hence, we have developed a deterministic model of such unequal competitions and studied its dynamic behavior. Here, a competition model has been proposed with two distinctive processes--without memory effects (normal diffusion), described by integer-order differentials, and with memory effects (anomalous diffusion), described by non-integer-order differentials. We have revealed the impact of memory effects on the competition dynamics and presented a novel strategy based on renewing the memory effects imposed on the anomalous diffusion. In the memoryless case, both processes reach a maximum value when the conflict begins. After this time, the diffusion processes diverge exponentially, so that the more powerful side, even for a relative growth rate $\gamma\simeq1$, dominates the whole system. Thus, the weaker, anomalous diffusion has no chance to survive in competition with the bigger rival. However, there are factors in real intelligent interactions that moderate such extreme divergence dynamics, and we have represented this fact by memory effects. The proposed model has illustrated that the presence of memory leads to more sustainable dynamics, whereas the lack of memory leads to more energetic dynamics. In this regard, when the process is decaying (or growing), the memory effects have a conservative action on the dynamics. By taking such a mechanism into account, we have prolonged the survival time of the anomalous diffusion. One application of this strategy makes sense in business: we maximize the efficiency of an individual weaker venture (relative to the whole market) by recalling the past until the peak point is achieved and then forgetting the past experiences, after which the process continues with a new memory starting from the last peak. 
Here, we have suggested that the relative growth rate coefficients can play the role of trade-off effects between the value and cost of individual customers~\cite{Ardalankia2019}, and it is plausible that the memory~\cite{pone,Saeedian2017,Ebadi2016} represents the characteristics of the value-cost trade-off and enables the customers to satisfy their utility~\cite{Grauwin2009}. At the heart of this approach, we emphasize that exploring a new strategy, as well as other striking actions, takes time to propagate in society, and this time-lag must be considered~\cite{Banerjee2013}. Considering scarce resources, two growing economic sectors in a selfish interaction~\cite{Grauwin2009} engage in a competition for gaining the maximum possible market share and customers. Throughout a real-world network of competing agents, in spite of cumulative growth~\cite{Newman2001,Newman2003,Barabsi1999,Barabaacutesi1999}, there may exist some \textit{frictions} and drivers which affect the growth~\cite{pone}. Following this train of thought, there exist internal and external dynamics that create the cost of growth. Accordingly, states of failure to possess a certain market share, of an ever-growing market share, or even of a trade-off between further growth and failure will emerge in the temporal behavior. Considering the memory of systems as a decaying factor against sudden alterations~\cite{pone,Hassanibesheli2017}, together with probable strategies~\cite{Iranzo2016} as a temporal game-changer, in this study we have applied the memory created by an individual firm--in the \textit{status quo}--in the customers' viewpoint, or the launching of new strategies in the firms, as an advantage to compete against the whole market. To demonstrate the competitors' behavior, some scholars have considered restricted areas exposed to overcrowding~\cite{Forgerini2014}. In this context, the systems increasingly grow over time~\cite{pone}. 
As soon as the accessible region shrinks, newer agents may settle in the territory of others, or their territories are squeezed. Due to the lack of resources--the density of the spatial area around agents--the involved agents are eliminated. This phenomenon is amplified when the space of the contest is reduced. Indeed, after a critical time, the systems are vulnerable to effects acting against growth, such as lack of space in a rivalry and squeezed territories~\cite{Forgerini2014}, the cost of promotion, or agent extinction~\cite{Cohen2000}. We have utilized the same memory, that is, the same fractional derivative order, for both starting points--the initial time and the peak. Nonetheless, for further interpretation, it would be interesting to expand the meaning of the growth rates and the concept of memory (or the fractional derivative order) of the proposed model in different contexts. For more realistic modeling, we can exploit the selective recalling-forgetting strategy with a variable fractional order $\alpha(t)$ for positions other than the peak point.
\section{Background and Related Work} \paragraph{Sparse retrieval} Before the emergence of dense retrievers, traditional sparse retrievers such as TF-IDF or BM25 were the de facto method in open-domain question-answering systems~\cite{chen2017reading,yang2019end}. These sparse models measure similarity using weighted term-matching between questions and passages and do not train on a particular data distribution. It is well-known that sparse models are great at lexical matching, but fail to capture synonyms and paraphrases. \paragraph{Dense retrieval} On the contrary, dense models~\cite{lee2019latent,karpukhin2020dense,guu2020realm} measure similarity using learned representations from supervised QA datasets, leveraging pre-trained language models like BERT. In this paper, we use the popular dense passage retriever (DPR) model~\cite{karpukhin2020dense} as our main evaluation,\footnote{The detailed experimental settings are in Appendix~\ref{app:exp-details}.} and we also report the evaluation of REALM~\cite{guu2020realm} in Appendix~\ref{app:full_results}. DPR models the retrieval problem using two encoders, namely the question and the passage encoders, initialized using BERT. DPR uses a contrastive objective during training, with in-batch negatives and hard negatives mined from BM25. During inference, a pre-defined large set of passages (e.g., 21-million passages in English Wikipedia) are encoded and pre-indexed---for any test question, the top passages with the highest similarity scores are returned. Recently, other advances have been made in improving dense retrieval, including incorporating better hard negatives~\cite{xiong2021approximate,qu2021rocketqa}, or fine-grained phrase retrieval~\cite{lee2021learning}. We leave them for future investigation. \paragraph{Generalization problem} Despite the impressive in-domain performance of dense retrievers, their capability of generalizing to unseen questions still remains relatively under-explored. 
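As a minimal sketch of the DPR training objective described above (in-batch negatives only; the BM25-mined hard negatives would simply add extra passage rows to the score matrix), the contrastive loss over a batch of question and passage embeddings can be written as:

```python
import math

def inbatch_contrastive_loss(Q, P):
    """DPR-style objective with in-batch negatives: question i's positive
    passage is P[i]; every other passage in the batch acts as a negative.
    Returns the mean negative log-likelihood of the positive passages."""
    n = len(Q)
    loss = 0.0
    for i in range(n):
        # Inner-product similarity between question i and every passage.
        scores = [sum(q * p for q, p in zip(Q[i], P[j])) for j in range(n)]
        m = max(scores)  # log-sum-exp stabilization
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        loss += log_z - scores[i]
    return loss / n
```

In the real model, `Q` and `P` are produced by the two BERT-initialized encoders; at inference time only the passage embeddings are pre-indexed and top passages are returned by maximum inner product.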
Recently, \newcite{lewis2020question} discover that there is a large overlap between training and testing sets on popular QA benchmarks, concluding that current models tend to memorize training questions and perform significantly worse on non-overlapping questions. AmbER~\cite{chen2021evaluating} test sets are designed to study the entity disambiguation capacities of passage retrievers and entity linkers. They find models perform much worse on rare entities compared to common entities. Similar to this work, our results show dense retrieval models generalize poorly, especially on rare entities. We further conduct a series of analyses to dissect the problem and investigate potential approaches for learning robust dense retrieval models. Finally, another concurrent work~\cite{thakur2021beir} introduces the BEIR benchmark for zero-shot evaluation of retrieval models and shows that dense retrieval models underperform BM25 on most of their datasets. \section{Conclusion} In this study, we show that DPR significantly underperforms BM25 on {EntityQuestions}, a dataset of simple questions based on facts mined from Wikidata. We derive key insights about why DPR performs so poorly on this dataset. We learn that DPR remembers robust representations for common entities, but struggles to differentiate rarer entities without training on the question pattern. We suggest future work in incorporating entity memory into dense retrievers to help differentiate rare entities. Several recent works demonstrate retrievers can easily learn dense representations for a large number of Wikipedia entities~\cite{wu-etal-2020-scalable,li-etal-2020-efficient}, or directly generate entity names in an autoregressive manner~\cite{de2021autoregressive}. DPR could also leverage entity-aware embedding models like EaE~\cite{fevry-etal-2020-entities} or LUKE~\cite{yamada2020luke} to better recall long-tail entities. 
\section*{Ethical Considerations} Our proposed dataset, {EntityQuestions}, is constructed by sampling (subject, relation, object) triples from Wikidata, which is dedicated to the public domain under the Creative Commons CC0 License. In general, machine learning has the ability to amplify biases presented implicitly and explicitly in the training data. Models that we reference in our study are based on BERT, which has been shown to learn and exacerbate stereotypes during training~(e.g., \citealt{kurita-etal-2019-measuring}, \citealt{tan-2019-assessing}, \citealt{nadeem2020stereoset}). We further train these models on Wikidata triples, which again has the potential to amplify harmful and toxic biases. In the space of open-domain question answering, deployed systems leveraging biased pre-trained models like BERT will likely be less accurate or biased when asked questions related to stereotyped and marginalized groups. We acknowledge this fact and caution those who build on our work to consider and study this implication before deploying systems in the real world. \section*{Acknowledgements} We thank the members of the Princeton NLP group for helpful discussion and valuable feedback. This research is supported by gift awards from Apple and Amazon. \section{EntityQuestions} In this section, we build a new benchmark {EntityQuestions}, a set of simple, entity-centric questions and compare dense and sparse retrievers. \paragraph{Dataset collection} We select 24 common relations from Wikidata~\cite{wikidata2014vrandecic} and convert fact (\textit{subject}, \textit{relation}, \textit{object}) triples into natural language questions using manually defined templates (Appendix~\ref{app:full_results}). To ensure the converted natural language questions are answerable from Wikipedia, we sample triples from the T-REx dataset~\cite{trex2018elsahar}, where triples are aligned with a sentence as evidence in Wikipedia. 
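The triple-to-question conversion can be sketched as follows; the mapping below uses three of the question templates quoted later in the paper, keyed by their Wikidata property IDs, and is illustrative rather than the full template list of the appendix:

```python
# Templates keyed by Wikidata relation ID (a small illustrative subset).
TEMPLATES = {
    "P19": "Where was {} born?",                  # place-of-birth
    "P159": "Where is the headquarters of {}?",   # headquarter
    "P170": "Who was {} created by?",             # creator
}

def triple_to_qa(subj, rel, obj):
    """Convert a (subject, relation, object) fact into a QA pair:
    the subject fills the template slot and the object is the answer."""
    question = TEMPLATES[rel].format(subj)
    return {"question": question, "answer": obj}
```

Sampling triples from T-REx guarantees that each generated question is answerable from an aligned Wikipedia sentence.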
We select relations according to the following criteria: (1) there are enough triples ($>$2k) in T-REx; (2) it is easy enough to formulate clear questions for the relation; (3) we do not select relations with only a few answer candidates (e.g., \textit{gender}), which may cause too many false negatives when we evaluate the retriever; (4) we include both person-related relations (e.g., \textit{place-of-birth}) and non-person relations (e.g., \textit{headquarter}). For each relation, we randomly sample up to 1,000 facts to form the evaluation set. We report the macro-averaged accuracy over all relations of {EntityQuestions}. \paragraph{Results} We evaluate DPR and BM25 on the {EntityQuestions} dataset and report results in Table~\ref{tab:intro} (see full results and examples in Appendix~\ref{app:full_results}). DPR trained on NQ significantly underperforms BM25 on almost all sets of questions. For example, on the question ``Where was {\text{[E]}} born?'', BM25 outperforms DPR by $49.8\%$ absolute using top-20 retrieval accuracy.\footnote{For our entire analysis, we consider top-20 retrieval accuracy for brevity. However, trends still hold for top-1, top-5, and top-100 retrieval accuracy.} Although training DPR on multiple datasets can improve the performance (i.e., from $49.7\%$ to $56.7\%$ on average), it still clearly pales in comparison to BM25. We note the gaps are especially large on questions about person entities. In order to test the generality of our findings, we also evaluate the retrieval performance of REALM~\cite{guu2020realm} on {EntityQuestions}. Compared to DPR, REALM adopts a pre-training task called salient span masking (SSM), along with an inverse cloze task from \newcite{lee2019latent}. 
We include the evaluation results in Appendix~\ref{app:full_results}.\footnote{We cannot directly compare the retrieval accuracy of REALM to DPR, as the REALM index uses 288 BPE token blocks while DPR uses 100-word passages.} We find that REALM still scores much lower than BM25 over all relations ($19.6\%$ on average). This suggests that incorporating pre-training tasks such as SSM still does not solve the generalization problem on these simple entity-centric questions. \section{Introduction} Recent dense passage retrievers outperform traditional sparse retrieval methods like TF-IDF and BM25~\cite{robertson2009probabilistic} by a large margin on popular question answering datasets~(\citealt{lee2019latent}, \citealt{guu2020realm}, \citealt{karpukhin2020dense}, \citealt{xiong2021approximate}). These dense models are trained using supervised datasets, and the dense passage retriever (DPR) model~\cite{karpukhin2020dense} demonstrates that training on only 1,000 supervised examples on top of BERT~\cite{devlin2019bert} already outperforms BM25, making it very appealing in practical use. In this work, we argue that dense retrieval models are not yet robust enough to replace sparse methods, and investigate some of the key shortcomings dense retrievers still face. We first construct {EntityQuestions}, an evaluation benchmark of simple, entity-centric questions like ``Where was Arve Furset born?'', and show dense retrieval methods generalize very poorly. As shown in Table~\ref{tab:intro}, a DPR model trained on either a single dataset Natural Questions (NQ)~\cite{kwiatkowski2019natural} or a combination of common QA datasets drastically underperforms the sparse BM25 baseline (49.7\% vs 71.2\% on average), with the gap on some question patterns reaching $60\%$ absolute! \input{tables/intro} Based on these results, we perform a deep dive into why a single dense model performs so poorly on these simple questions. 
We decouple the two distinct aspects of these questions: the entities and the question pattern, and identify what about these questions gives dense models such a hard time. We discover the dense model is only able to successfully answer questions based on common entities, quickly degrading on rarer entities. We also observe that dense models can generalize to unseen entities only when the question pattern is explicitly observed during training. We end with two investigations of practical solutions towards addressing this crucial problem. First, we consider data augmentation and analyze the trade-off between single- and multi-task fine-tuning. Second, we consider a fixed passage index and fine-tune specialized question encoders, leading to memory-efficient transfer to new questions. We find that data augmentation, while able to close gaps on a single domain, is unable to consistently improve performance on unseen domains. We also find that building a robust passage encoder is crucial in order to successfully adapt to new domains. We view this study as one important step towards building universal dense retrieval models. \section{Dissecting the Problem: Entities vs. Question Patterns} In this section, we investigate why dense retrievers do not perform well on these questions. Specifically, we want to understand whether the poor generalization should be attributed to (a) novel entities, or (b) unseen question patterns. 
To do this, we study DPR trained on the NQ dataset and evaluate on three representative question templates: \ti{{place-of-birth}}, \ti{{headquarter}}, and \ti{{creator}}.\footnote{The question templates for these relations are: {{place-of-birth}}: ``Where was {\text{[E]}} born?''; {{headquarter}}: ``Where is the headquarters of {\text{[E]}}?''; {{creator}}: ``Who was {\text{[E]}} created by?''.} \subsection{Dense retrievers exhibit popularity bias} We first determine how the entity {\text{[E]}} in the question affects DPR's ability to retrieve relevant passages. To do this, we consider all triples in Wikidata that are associated with a particular relation, and order them based on frequency of the subject entity in Wikipedia. In our analysis, we use the Wikipedia hyperlink count as a proxy for an entity's frequency. Next, we group the triples into $8$ buckets such that each bucket has approximately the same cumulative frequency. Using these buckets, we consider two new evaluation sets for each relation. The first (denoted ``rand ent'') randomly samples at most 1,000 triples from each bucket. The second (denoted ``train ent'') selects all triples within each bucket that have subject entities observed in questions within the NQ training set, as identified by ELQ~\cite{li-etal-2020-efficient}. We evaluate DPR and BM25 on these evaluation sets and plot the top-20 accuracy in Figure~\ref{fig:freq-analysis}. DPR performs well on the most common entities but quickly degrades on rarer entities, while BM25 is less sensitive to entity frequency. It is also notable that DPR performs generally better on entities seen during NQ training than on randomly selected entities. This suggests that DPR representations are much better at representing the most common entities as well as entities observed during training. 
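The frequency-based grouping described above can be sketched as a greedy pass over triples sorted by subject-entity hyperlink count (the exact grouping procedure used for Figure~\ref{fig:freq-analysis} may differ in detail):

```python
def bucket_by_cumulative_frequency(triples, freq, n_buckets=8):
    """Sort triples by subject-entity frequency (descending) and cut a new
    bucket whenever roughly 1/n_buckets of the total frequency mass has
    accumulated, so every bucket has a similar cumulative frequency."""
    ordered = sorted(triples, key=lambda t: freq[t[0]], reverse=True)
    total = float(sum(freq[t[0]] for t in ordered))
    target = total / n_buckets
    buckets, current, mass = [], [], 0.0
    for t in ordered:
        current.append(t)
        mass += freq[t[0]]
        if mass >= target and len(buckets) < n_buckets - 1:
            buckets.append(current)
            current, mass = [], 0.0
    buckets.append(current)
    return buckets
```

With a long-tailed frequency distribution, the first buckets hold a few very common entities while the last buckets hold many rare ones, which is what makes the popularity bias visible.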
\input{tables/finetune_analysis} \subsection{Observing questions helps generalization} \label{sec:entity_generalization} We next investigate whether DPR generalizes to unseen entities when trained on the question pattern. For each relation considered, we build a training set with at most $8,000$ triples. We ensure no \ti{tokens} from training triples overlap with tokens from triples in the corresponding test set. In addition to using the question template used during evaluation to generate training questions, we also build a training set based on a syntactically different but semantically equal question template.\footnote{ {{place-of-birth}}: ``What is the birthplace of {\text{[E]}}?''; {{headquarter}}: ``Where is {\text{[E]}} headquartered?''; {{creator}}: ``Who is the creator of {\text{[E]}}?''. } We fine-tune DPR models on the training set for each relation and test on the evaluation set of {EntityQuestions} for the particular relation and report results in Table~\ref{tab:p19_finetune_analysis}. \begin{figure*}[t] \centering \hspace{-3em} \begin{minipage}[b]{.43\textwidth} \centering \includegraphics[width=1.1\columnwidth]{figures/visualization/DPR-pt-cvr.pdf} \vspace{-3em} \caption*{~~~~~~~(a)} \end{minipage}\qquad \begin{minipage}[b]{.43\textwidth} \centering \includegraphics[width=1.1\columnwidth]{figures/visualization/DPR-ft-P19-cvr.pdf} \vspace{-3em} \caption*{~~~~~~~(b)} \end{minipage} \caption{ Visualization of positive passage embeddings returned by DPR before and after fine-tuning on the \ti{place-of-birth} questions. (a): Positive passage embeddings returned by DPR trained on NQ; (b) Positive passage embeddings returned by DPR after fine-tuning. } \label{fig:tsne_plots} \end{figure*} Clearly, observing the question pattern during training allows DPR to generalize well on unseen entities. On all three relations, DPR can match or even outperform BM25 in terms of retrieval accuracy. 
Training on the equivalent question pattern achieves comparable performance to the exact pattern, showing dense models do not rely on specific phrasing of the question. We also attempt fine-tuning the question encoder and passage encoder separately. As shown in Table~\ref{tab:p19_finetune_analysis}, surprisingly, there is a significant discrepancy between only training the passage encoder (OnlyP) and only training the question encoder (OnlyQ): for example, on {{place-of-birth}}, DPR achieves $72.8\%$ accuracy with the fine-tuned passage encoder, while it achieves $45.4\%$ if only the question encoder is fine-tuned. This suggests that passage representations might be the culprit for model generalization. To understand what passage representations have learned from fine-tuning, we visualize the DPR passage space before and after fine-tuning using t-SNE~\cite{van2008visualizing}. We plot the representations of positive passages sampled from NQ and \ti{{place-of-birth}} in Figure~\ref{fig:tsne_plots}. Before fine-tuning, positive passages for \ti{{place-of-birth}} questions are clustered together. Discriminating passages in this clustered space is more difficult using an inner product, which explains why only fine-tuning the question encoder yields minimal gains. After fine-tuning, the passages are distributed more sparsely, making differentiation much easier. \section{Towards Robust Dense Retrieval} \label{sec:robust_dense} Equipped with a clear understanding of the issues, we explore some simple techniques aimed at fixing the generalization problem. \paragraph{Data augmentation} We first explore whether fine-tuning on questions from a \ti{single} {EntityQuestions} relation can help generalize on the full set of {EntityQuestions} as well as other QA datasets such as NQ. 
We construct a training set of questions for a single relation and consider two training regimes: one where we fine-tune on relation questions alone, and a second where we fine-tune on both relation questions and NQ in a multi-task fashion. We perform this analysis for three relations and report top-20 retrieval accuracy in Table~\ref{tab:data_augmentation_entityqa}. \input{tables/data_augmentation_entityqa} We find that fine-tuning only on a single relation improves {EntityQuestions} meaningfully, but degrades performance on NQ and still largely falls behind BM25 on average. When fine-tuning on both relation questions and NQ together, most of the performance on NQ is retained, but the gains on {EntityQuestions} are much more muted. Clearly, fine-tuning on one type of entity-centric question does not necessarily fix the generalization problem for other relations. This trade-off between accuracy on the original distribution and improvement on the new questions presents an interesting tension for universal dense encoders to grapple with. \paragraph{Specialized question encoders} While it is challenging to have one retrieval model for all unseen question distributions, we consider an alternative approach of having a single passage index and adapting specialized question encoders. Since the passage index is fixed across different question patterns and cannot be adapted using fine-tuning, having a robust passage encoder is crucial. We compare two DPR passage encoders: one based on NQ and the other on the PAQ dataset~\cite{lewis2021paq}.\footnote{The PAQ dataset sampling scheme is described in Appendix~\ref{app:exp-details}.} We expect the passage encoder trained on PAQ to be more robust because (a) the 10M passages sampled in PAQ are arguably more varied than those in NQ, and (b) all the plausible answer spans are identified using automatic tools. We fine-tune a question encoder for each relation in {EntityQuestions}, keeping the passage encoder fixed.
As shown in Table~\ref{tab:passage_encoders},\footnote{Per-relation accuracy can be found in Appendix~\ref{app:per-relation}.} fine-tuning the encoder trained on PAQ improves performance over fine-tuning the encoder trained on NQ. This suggests the DPR-PAQ encoder is more robust, nearly closing the gap with BM25 using a single passage index. We believe constructing a robust passage index is an encouraging avenue for future work towards a more general retriever. \input{tables/passage_encoders} \section{Full Results on {EntityQuestions}} \label{app:full_results} \paragraph{DPR vs. BM25} The evaluation results are shown in Table~\ref{tab:full_results}. BM25 significantly outperforms DPR models trained either on the single dataset NQ or on a combination of common QA datasets. \paragraph{REALM vs. BM25} We also evaluate the retrieval performance of REALM~\cite{guu2020realm} on {EntityQuestions}. Specifically, we use REALM to retrieve 20 passages and check if the gold answer is a sub-string of the retrieved passages. We also evaluate BM25 on the same 288-token blocks that are used in the REALM model. As shown in Table~\ref{tab:realm_results}, REALM still significantly underperforms BM25 on {EntityQuestions}, even with the extra pre-training tasks. \paragraph{Examples of DPR retrieved passages} Table~\ref{tab:dpr-examples} shows examples of DPR retrieved results on three representative questions. DPR makes clear mistakes, such as confusing entities with similar names or missing the presence of an entity, causing it to retrieve irrelevant passages on these simple, entity-centric questions. \section{Experimental Details} \label{app:exp-details} \paragraph{Experimental settings of DPR} In our experiments, we use either pre-trained DPR models released by the authors, or DPR models re-trained by ourselves (Table~\ref{tab:passage_encoders}). All our experiments are carried out on $4 \times$ 11Gb Nvidia RTX 2080Ti GPUs.
For all our fine-tuning experiments, we fine-tune for 10 epochs, with a learning rate $2 \times 10^{-5}$ and a batch size of 24. When we retrain DPR from scratch, we train for 20 epochs with a batch size of 24 (the original DPR models were trained on 8$\times$ 32Gb GPUs with a batch size of 128 and we have to reduce the batch size due to the limited computational resources) and a learning rate of $2 \times 10^{-5}$. \paragraph{Experimental settings of BM25} In our experiments, we use the Pyserini~\cite{lin2021pyserini} implementation of unigram BM25 with default parameters. We build an index using the same Wikipedia passage splits provided in the official DPR release. \paragraph{PAQ dataset sampling} \citet{lewis2021paq} introduce Probably Asked Questions (PAQ), a large question repository constructed using a question generation model on Wikipedia passages. We group all of the questions asked about a particular passage and filter out any passages that have less than 3 generated questions. We then sample 100K such passages and sample one question asked about each. We split this dataset into 70K/15K/15K for train/dev/test splits, although we do not evaluate on this dataset. Following \citet{karpukhin2020dense}, we use BM25 to mine hard negative examples. \section{Per-relation Accuracy with Different Passage Encoders} \label{app:per-relation} We fine-tune DPR with the passage encoder fixed on either NQ or PAQ. Table~\ref{tab:full__passage_encoders} compares the per-relation accuracy of DPR with fixed passage encoder fine-tuned on NQ and PAQ. As is shown, the passage encoder trained on PAQ is much more robust than the passage encoder trained on NQ. For many non-person relations, using a PAQ-based passage encoder can outperform BM25. \input{tables/full__passage_encoders}
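The unigram BM25 scoring behind our lexical baseline can be sketched in pure Python. In practice we rely on Pyserini's Lucene implementation; the parameters $k_1=0.9$, $b=0.4$ below are the Anserini-style defaults we assume, and the tokenization here is deliberately simplified:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=0.9, b=0.4):
    """Unigram BM25 over a list of tokenized documents.
    Returns one score per document for the given query terms."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each term
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1.0 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1.0 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1.0) / norm
        scores.append(s)
    return scores
```

Documents containing rarer query terms, or the same terms in shorter documents, receive higher scores, which is the behavior the length-normalization term controls.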
\section{Introduction}\label{s:Introduction} The Haumea (2003 EL$_{61}$) collisional family was discovered by \citet{Brown2007} who noted that Haumea and five other Kuiper belt objects (KBOs) shared a spectral feature that is indicative of nearly pure water ice on the surfaces of the bodies. These six KBOs, along with four additional family members identified by \citet{Schaller2008}, \citet{Snodgrass2010}, and \citet{Ragozzine2007}, can all be dynamically linked to Haumea, and there do not appear to be any dynamically unrelated KBOs that share this spectral feature. Aside from being spectrally linked to these other KBOs, Haumea itself shows signs of its collisional past. Despite having a nearly pure water ice surface, Haumea's density is $\sim2.6$ g cm$^{-3}$ \citep{Rabinowitz2006}, which is higher than expected for typical assumed ice/rock ratios in the Kuiper belt \citep{Brown2008}; one way to achieve this higher density is to have a catastrophic collision between a differentiated proto-Haumea and another KBO in which proto-Haumea loses a substantial fraction of its water ice mantle \citep{Brown2007}. This scenario is supported by the presence of at least two water ice satellites \citep{Barkume2006,Ragozzine2009}. Haumea also has an elongated shape and a very short spin period of $\sim4$ hours that is unlikely to be primordial \citep{Rabinowitz2006,Lacerda2007}. \citet{Ragozzine2007} examined the dynamical connections between the identified Haumea family members. These connections are made by first estimating the orbit of the center of mass of the colliding bodies, and then estimating the ejection velocities of each family member relative to the collision's center of mass. The ejection velocity is given by \begin{equation}\label{eq:dv} \Delta \vec{v}= \vec{v} - \vec{v}_{cm} \end{equation} where $\vec{v}_{cm}$ is the estimated collision's center-of-mass velocity. 
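In practice, evaluating equation~\ref{eq:dv} requires converting osculating orbital elements to heliocentric velocity vectors. A minimal sketch (the constants are approximate, and the element order and function names are illustrative):

```python
import math

MU_SUN = 4.0 * math.pi ** 2      # GM_sun in AU^3 yr^-2 (Kepler's third law units)
MS_PER_AU_YR = 4.740e3           # 1 AU/yr expressed in m/s (approximate)

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e sin(E) for E by Newton iteration."""
    E = M
    for _ in range(60):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def velocity_vector(a, e, inc, Omega, omega, M):
    """Heliocentric velocity (m/s) for elements a (AU) and angles (radians)."""
    E = solve_kepler(M, e)
    n = math.sqrt(MU_SUN / a ** 3)            # mean motion, rad/yr
    fac = a * n / (1.0 - e * math.cos(E))
    # velocity components in the perifocal (orbit-plane) frame
    vx = -fac * math.sin(E)
    vy = fac * math.sqrt(1.0 - e * e) * math.cos(E)
    cO, sO = math.cos(Omega), math.sin(Omega)
    co, so = math.cos(omega), math.sin(omega)
    ci, si = math.cos(inc), math.sin(inc)
    # rotate perifocal -> ecliptic frame: Rz(Omega) Rx(inc) Rz(omega)
    x = vx * (cO * co - sO * so * ci) - vy * (cO * so + sO * co * ci)
    y = vx * (sO * co + cO * so * ci) - vy * (sO * so - cO * co * ci)
    z = vx * (so * si) + vy * (co * si)
    return (x * MS_PER_AU_YR, y * MS_PER_AU_YR, z * MS_PER_AU_YR)

def delta_v(el, el_cm):
    """|v - v_cm| for two element tuples (a, e, inc, Omega, omega, M)."""
    return math.dist(velocity_vector(*el), velocity_vector(*el_cm))
```

For a circular orbit at 1 AU this recovers the familiar heliocentric speed of roughly 30 km/s, a useful sanity check on the unit conversions.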
Because Haumea is by far the largest remnant from the collision, its orbit immediately after the collision should have nearly coincided with the center-of-mass orbit. However, Haumea is currently located at the boundary of the 12:7 mean motion resonance (MMR) with Neptune; over long timescales, the chaotic zone of this resonance causes a random walk of the proper elements such that Haumea's current orbit may be significantly distant from its post-collision orbit. \citet{Ragozzine2007} estimate the center-of-mass collision orbit by minimizing the sum of the relative speeds of all family members, assuming that Haumea's semimajor axis and its Tisserand parameter with respect to Neptune are both conserved during its chaotic evolution; they then use Haumea's present distance from the collision's center-of-mass orbit, together with a calculation of its chaotic diffusion rate, to estimate the age of the collisional family to be $3.5\pm2$ Gy. Given the exceedingly low collision probabilities for objects large enough to form the Haumea family in the current Kuiper belt, the family is likely to be old. However, the family probably cannot have formed in the primordial, much more massive Kuiper belt, because whatever caused the mass of the Kuiper belt to be depleted (by an estimated 2 or 3 orders of magnitude) would have also destroyed the dynamical coherence of the family~\citep{Levison2008}. The high inclination ($\sim27^{\circ}$) of the family also argues against a primordial origin, because such large inclinations are probably products of the excitation and mass depletion of the Kuiper belt. Thus, it appears that the Haumea family-forming collision occurred near the end of the primordial, high-mass phase of the Kuiper belt. Several of the largest KBOs show evidence of their collisional past (see review by \citet{Brown2008}), but the Haumea family is the only collisional family that has been identified in the Kuiper belt. 
The dynamical connections between the members of the family allow us to place some constraints on the type of collision that formed the family and also constrain the age of the family as being old, but probably not primordial. These characteristics make the Haumea family an excellent probe of the collisional environment in the Kuiper belt following the excitation and mass depletion event; understanding the type of collision that created the family (especially the relative sizes and speeds of the impactor and target) would provide valuable insight into the size and orbital distribution of the Kuiper belt at the time of the collision (see discussions of this in \citet{Marcus2011} and \citet{Levison2008}). Proposed models for the formation of the Haumea family have attempted to reproduce the family's relatively small velocity dispersion ($\sim150$~ms$^{-1}$) and to explain the compositional and orbital characteristics of the family. However, the orbits of the family members have been sculpted by several gigayears of dynamical evolution. In this paper we use numerical simulations to determine how this orbital evolution affects the dynamical coherence of the family. In Section~\ref{s:sims}, we determine the loss rates for the family, which depend on the initial velocity dispersion from the collision, and we determine how the velocity dispersion of the surviving family members is altered over time; from these simulations, we also obtain a hard lower limit for the age of the family. In Section~\ref{s:formation_models}, we apply these results to the family-formation models of \citet{Leinhardt2010} (a graze-and-merge type collision between two similarly sized, differentiated KBOs) and \citet{Schlichting2009} (the collisional disruption of a satellite orbiting Haumea), and we compare the predictions from these two formation models to the current observations of the family. Section~\ref{s:conclusions} provides a summary of our results and conclusions. 
\section{Orbital evolution of the Haumea family}\label{s:sims} Even though the identified Haumea family members (see Table~\ref{t:known_family}) have a fairly low velocity dispersion ($\Delta v \sim 150$~ms$^{-1}$), their proper orbital elements span a relatively large range in semimajor axis, $a$, and eccentricity, $e$, (a range that is typical of classical KBOs), and they have atypically large inclinations, $i$, of $\sim27^{\circ}$. Using the data for their best-fit orbits\footnote{orbit information was taken from the AstDyS website (http://hamilton.dm.unipi.it/astdys)}, we did a 10 Myr numerical simulation to obtain the average values of $a,e$ and $i$ for each family member over that time span, and we calculated the corresponding values of $\Delta v$ (equation~\ref{eq:dv}) relative to the center-of-mass collision orbit determined by~\citet{Ragozzine2007}; these are listed in Table~\ref{t:known_family} for the family members identified by \citet{Brown2007}, \citet{Schaller2008}, \citet{Ragozzine2007}, and \citet{Snodgrass2010}. Below, we examine the orbital distribution of the known family members to refine the center-of-mass orbit in light of the additional identified family members since \citet{Ragozzine2007}. We use the results of long-term numerical simulations to estimate how much the family's orbits have evolved since its formation, and we obtain a hard lower limit on the age of the family. 
\subsection{Collision center-of-mass orbit and a lower limit on the family's age}\label{ss:cmorbit} We use the average values of $a, e,$ and $i$ for the nine identified family members (Table~\ref{t:known_family}) to re-calculate the center-of-mass collision orbit using the method described by \citet{Ragozzine2007}: we minimize the sum of $\Delta v$ for the nine family members while fixing the semimajor axis of the center-of-mass orbit at that of Haumea's current orbit, allowing its eccentricity and inclination to vary such that Haumea's current Tisserand parameter with respect to Neptune ($T_N = 2.83$) is maintained, and allowing the mean anomaly, $M$, and the argument of pericenter, $\omega$, to vary freely; the longitude of ascending node, $\Omega$, is ignorable as it does not affect the distribution of $\Delta v$. Figure~\ref{f:cm_orbit} shows the results of this calculation for a range of eccentricity and inclination combinations of the collision center-of-mass orbit. The lower limit of the shaded region in the figure is the value of the family's average $\Delta v$ found by selecting values of the mean anomaly and argument of pericenter that minimize $\Delta v$; the shaded area shows the range in $\Delta v$ obtained by allowing $\omega$ to vary, but still selecting the value of $M$ that minimizes $\Delta v$ for each value of $\omega$. Parameters along the lower boundary of the shaded regions represent collisions occurring very near to the ecliptic plane, while parameters along the upper boundary represent collisions at the extreme, off-ecliptic points in the orbit ($\sim15-20$ AU above the ecliptic plane). The difference in average $\Delta v$ for the different values of $\omega$ is a factor of $\sim 2$, as noted by \citet{Ragozzine2007}; this increase in average $\Delta v$ for off-ecliptic collision points is due to the fact that producing the observed family's spread in inclination requires a larger $\Delta v$ at these locations. 
Because collisions near the ecliptic are much more probable than off-ecliptic collisions, we choose the center-of-mass orbit that minimizes the lower portion of the filled curve in Figure~\ref{f:cm_orbit}. The result is $(a,e,i,\omega,M) = (43.1$ AU$, 0.124, 28.2^{\circ}, 270^{\circ}, 76^{\circ})$. This is very similar to the collision center-of-mass orbit determined by \citet{Ragozzine2007}: $(a,e,i,\omega,M) = (43.1$ AU$, 0.118, 28.2^{\circ}, 270.8^{\circ}, 75.7^{\circ})$, indicating that the newer family members do not significantly affect the estimate of the collision center-of-mass orbit. The small difference in the eccentricity does not much affect the values of $\Delta v$ for the family members (both values of $\Delta v$ are listed in Table~\ref{t:known_family}) because the calculated $\Delta v$ is a fairly flat function of eccentricity within $\pm\sim10\%$ of its minimum. In the above calculations, as in~\citet{Ragozzine2007}, we assumed a constant semimajor axis and conservation of the Tisserand parameter during the chaotic evolution of Haumea's orbit. To test the validity of this assumption, we performed numerical simulations of resonant diffusion within the 12:7 MMR, and we find that Haumea's Tisserand parameter can vary by $\pm0.5\%$. This is a small variation, but it does affect the allowable combinations of $e$ and $i$ for the best-fit center-of-mass orbit. We performed the minimization of the sum of $\Delta v$ for the identified family members while allowing $e$ and $i$ to vary independently, and we find a slightly revised best-fit center-of-mass orbit: $(a,e,i,\omega,M) = (43.1$ AU$, 0.124, 27.3^{\circ}, 276^{\circ}, 70^{\circ})$. This orbit has a Tisserand parameter $T_N = 2.84$, which is within the range of $T_N$ found in our numerical simulations. 
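The Tisserand parameter used as a constraint in these fits is a one-line computation (sketch; the constant adopted for Neptune's semimajor axis is an assumed value):

```python
import math

A_NEPTUNE = 30.07  # Neptune's semimajor axis in AU (assumed constant)

def tisserand_neptune(a, e, inc_deg):
    """Tisserand parameter with respect to Neptune,
    T_N = a_N/a + 2 sqrt((a/a_N)(1 - e^2)) cos(i),
    for heliocentric a (AU), eccentricity e, and inclination (degrees)."""
    i = math.radians(inc_deg)
    return A_NEPTUNE / a + 2.0 * math.sqrt((a / A_NEPTUNE) * (1.0 - e * e)) * math.cos(i)
```

A circular, coplanar orbit with $a = a_N$ gives exactly $T_N = 3$, which makes a convenient check of the implementation.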
If we additionally relax the constraints to allow the semimajor axis of the orbit to vary by $\pm0.15$ AU (the approximate range of variation in the 12:7 MMR), we find very similar results: $(a,e,i,\omega,M) = (43.1\pm0.15$ AU$, 0.121, 27.3^{\circ}, 278^{\circ}, 68^{\circ})$. These alternate minimum $\Delta v$ center-of-mass orbit fits give us an estimate of the uncertainties in the orbital parameters: \begin{equation*} (a,e,i,\omega,M)_{cm} = (43.1 AU, 0.115-0.132, 27-28.3^{\circ}, 270-278^{\circ}, 68-76^{\circ}). \end{equation*} These orbits all represent collisions near the ecliptic plane, and the \citet{Ragozzine2007} estimate falls within the uncertainties. We can use the collision center-of-mass orbit to set a lower limit on the age of the Haumea family by determining the minimum time necessary for such an orbit to diffuse to Haumea's current eccentricity ($e=0.19$) in the 12:7 MMR with Neptune. We generated 800 test particles with initial conditions within the uncertainties of the collision center-of-mass orbit found above, randomized their initial mean anomaly, and integrated these for 1 Gyr. We find from this simulation that $\sim3\%$ and $\sim6\%$ of the test particles reach Haumea's eccentricity by 500 Myr and 1 Gyr, respectively; this is a slightly lower efficiency than \citet{Ragozzine2007} found from similar simulations ($10\%$ had diffused by 1 Gyr), but the two results are consistent given that the number of test particles in their simulation was only 78. These results allow us to conclude with $\sim95\%$ confidence that the Haumea family is older than 1 Gyr. The shortest time in which any test particle in our simulation diffused to Haumea's eccentricity was $\sim100$ Myr (Fig.~\ref{f:haumea}); this indicates that 100 Myr is a strong lower limit on the age of the family. Another way to estimate the lower limit on the family's age is to examine the precession of the orbital planes of the identified family members.
The current values of the longitudes of ascending node, $\Omega$, of the known family members are indistinguishable from a random distribution. Immediately after the family-forming collision, the family members share a common line of nodes on the collision center-of-mass orbit plane, but have different values of the other orbital elements. After the collision, the differences in semimajor axes amongst the family members cause their orbit planes to precess at slightly different rates. The nodal precession rates range from $64^{\circ}$ Myr$^{-1}$ to $81^{\circ}$ Myr$^{-1}$ for the family members strongly affected by MMRs with Neptune and $69^{\circ}$ Myr$^{-1}$ to $72^{\circ}$ Myr$^{-1}$ for the non-resonant family members (as determined from our numerical simulations of their best-fit orbits). Considering the differences in these rates, we expect the nodes to be randomized on a 20 Myr timescale for the resonant family members and a 100 Myr timescale for the non-resonant family members. Both this estimate and the resonant diffusion timescale of Haumea's eccentricity within the 12:7 MMR indicate that 100 Myr is a hard lower limit on the age of the family. \subsection{Long-term orbital evolution} The Haumea family members' range in semimajor axis includes regions affected by various MMRs with Neptune. This means that since the time of the collision (at least 100 Myr ago, but most likely several Gyr ago), the orbital distribution of the family has been modified by dynamical evolution, and some family members have probably been removed by orbital instabilities. Comparisons of formation models to the current set of observed family members must account for this orbital modification and decay of the total population.
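The node-randomization timescales quoted above follow directly from the spread in nodal precession rates; as an order-of-magnitude sketch, randomization is treated as the time for the fastest and slowest orbit planes to drift $360^{\circ}$ apart:

```python
def node_randomization_time(rate_min, rate_max):
    """Approximate time (Myr) for ascending nodes to randomize, given
    the minimum and maximum nodal precession rates in deg/Myr:
    the time for the extreme orbit planes to separate by 360 deg."""
    return 360.0 / (rate_max - rate_min)

# Rates from our simulations of the best-fit orbits:
t_res = node_randomization_time(64.0, 81.0)     # resonant members: ~20 Myr
t_nonres = node_randomization_time(69.0, 72.0)  # non-resonant: ~10^2 Myr
```

These are order-of-magnitude estimates; the true randomization time also depends on how the rates are distributed between the extremes.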
To determine the effects of long-term orbital evolution on the family, we performed a suite of eight numerical simulations, each with a cloud of 800 test particles representing family members generated with an isotropic distribution of initial ejection velocity vectors about the collision center-of-mass orbit determined by \citet{Ragozzine2007}: $(a,e,i,\omega,M) = (43.1$ AU$, 0.118, 28.2^{\circ}, 270.8^{\circ}, 75.7^{\circ})$. (As discussed in Section~\ref{ss:cmorbit}, there is some uncertainty in this collision center-of-mass orbit, but the simulation results do not strongly depend on the exact values chosen, as all of the allowed range results in test particles spread over very similar ranges in $a$, $e$, and $i$.) In each of the eight simulations, we adopted a different value of the magnitude, $\Delta v$, of the initial ejection velocity of the cloud of test particles: $\Delta v = 50,100,150,\ldots, 400$ ms$^{-1}$. We then integrated these test particle clouds forward in time for $4$ Gyr under the influence of the Sun and the four outer planets (in their current configuration), using the symplectic orbital integration method of \citet{Wisdom1991}. Any test particles that approached within a Hill sphere of Neptune were considered lost, because such orbits are unstable on very short timescales. Figure~\ref{f:snapshots} plots the $a-e$ and $a-i$ distributions for two of our simulations; both the initial distributions and a snapshot at 3.5 Gyr are shown. In these plots, the test particles are color coded according to their stability (a particle is considered to be unstable if it approaches within a Hill sphere of Neptune at any point in the simulation); most of the test particles that are unstable are located either near the inner edge of the family (where their initial perihelion distances are nearly Neptune crossing) or near the labeled MMRs with Neptune.
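The isotropic initial ejection velocities for the test-particle clouds can be generated as below (a sketch; the function name and seeding are illustrative). Sampling $\cos\theta$ uniformly, rather than $\theta$, is what makes the directions uniform on the sphere:

```python
import math
import random

def isotropic_kick(dv, rng=None):
    """One ejection-velocity vector (m/s) of magnitude dv,
    with direction drawn isotropically on the unit sphere."""
    rng = rng or random.Random()
    cos_t = rng.uniform(-1.0, 1.0)          # uniform in cos(theta)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = rng.uniform(0.0, 2.0 * math.pi)   # uniform azimuth
    return (dv * sin_t * math.cos(phi),
            dv * sin_t * math.sin(phi),
            dv * cos_t)
```

Each kick is added to the center-of-mass orbital velocity to obtain a test particle's initial heliocentric velocity.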
Previous studies have found that this region of the Kuiper belt is strongly affected by even fairly high order MMRs \citep{Chiang2003,Lykawka2007,Volk2011}, so it is not surprising that the family is strongly affected as well. The effect of the resonances on the unstable test particles is to increase their eccentricity until they become Neptune crossing and then have close encounters with Neptune. A few percent of the test particles survive in resonance until the end of the simulation; this result is consistent with the resonant fraction found by \citet{Lykawka2012} in a similar study of the evolution of the Haumea family. These stable resonant test particles are mostly found in the 3:2 and 7:4 MMRs; in these cases, the long-lived test particles are additionally stabilized by the Kozai resonance \citep{Kozai1962}. The Kozai resonance causes a test particle's argument of perihelion to librate about $90^{\circ}$, which ensures that perihelion occurs well away from the ecliptic plane, protecting the test particle from close encounters with Neptune. The fractions of test particles that survive as family members to $1.5$ and $3.5$ Gyr are listed in Table~\ref{t:survival_rates} for each value of $\Delta v$; for initial $\Delta v$ of $50-200$~ms$^{-1}$ (typical of the observed family), $20-25\%$ of the family is lost by $3.5$ Gyr, which is consistent with the \citet{Lykawka2012} results. In addition to the erosion of the population, we also find that the dynamical clustering in proper elements grows weaker (and the apparent $\Delta v$ of the test particles increases) over the course of the simulations. We determined the $\Delta v$ for each test particle by calculating its proper $a$, $e$, and $i$ (taking the average over the last 50 Myr of the simulation) and then allowing the values of the orbital angles to vary until we find the smallest difference between the test particle's orbital velocity and the orbital velocity of the collision's center-of-mass orbit.
We find that the chaotic diffusion in orbital elements induces a spread of $50-100$~ms$^{-1}$ among the test particles and shifts the average $\Delta v$ slightly above its initial value. Figure~\ref{f:deltav} shows the distributions of $\Delta v$ at 3.5 Gyr for three simulations in which the initial $\Delta v$ had values of $50$~ms$^{-1}$, $150$~ms$^{-1}$, and $350$~ms$^{-1}$. We conclude that, while some individual test particles that are strongly affected by MMRs with Neptune experience larger changes in $\Delta v$, for the non-resonant Haumea family members, it is likely that the value of $\Delta v$ they acquired at the time of the collision was within $\pm50-100$~ms$^{-1}$ of their current $\Delta v$. \section{Implications for family formation}\label{s:formation_models} The numerical simulations described above show how long-term orbital evolution will affect the Haumea family's dynamical clustering. In this section, we examine the implications of those results for the proposed family formation models of \citet{Leinhardt2010} and \citet{Schlichting2009}. These models make specific predictions for the mass and velocity distribution of the family members immediately following the formation of the family. We use our simulations of the family's orbital evolution to evolve the models' predicted mass and velocity distributions to the current epoch so we can compare the models' predictions to the currently observed family. \subsection{Overview of proposed formation models}\label{ss:proposed_models} \citet{Brown2007} estimated that the Haumea collisional family was created in a catastrophic collision between a proto-Haumea of radius $R\sim830$ km and another KBO with a radius of $\sim500$ km. However, the low velocity dispersion amongst the observed family members is problematic for such a model.
In a catastrophic collision between two large KBOs, the velocity dispersion of the family members should be close to the escape velocity of the largest remnant, $\Delta v \sim v_{\rm{esc,Haumea}} \sim 900$~ms$^{-1}$ (see discussion in \citet{Leinhardt2010} and \citet{Schlichting2009}); in contrast, the observed family has $\Delta v = 50$--$300$~ms$^{-1}$. To explain the low velocity dispersion of the observed family, \citet{Schlichting2009} propose that the family originates from the catastrophic disruption of a satellite orbiting Haumea, rather than the disruption of a proto-Haumea. This actually requires two different collisions: a collision between a proto-Haumea and another large KBO that creates a large, icy satellite and gives Haumea its unusual shape and fast spin, then a subsequent collision between the satellite and another KBO. The latter would create a collisional family with values of $\Delta v$ close to the escape speed of the satellite rather than of Haumea. Assuming a primarily water ice composition, these authors estimate that the disrupted satellite would have had a radius $R\sim 260$ km to account for the mass of Haumea's remaining satellites and the rest of the collisional family; the expected $\Delta v$ would be $\sim 200$~ms$^{-1}$ for a satellite of this size. For a collisional family formed in this way, the authors estimate that the total mass of the family at formation would be no more than $\sim5\%$ of the mass of Haumea ($M_H \approx 4.2 \times 10^{21}$ kg). \citet{Leinhardt2010} propose an alternative formation mechanism for the family: a graze and merge collision event between two similarly sized, radius $R\sim850$ km, differentiated KBOs. In this scenario, the two impacting bodies merge after the collision, resulting in a very fast rotating object. 
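The escape-speed scalings quoted above are straightforward to verify. In the sketch below, the masses and radii are assumed values consistent with the text (Haumea's radius from $M_H \approx 4.2 \times 10^{21}$ kg at $\rho \approx 2.6$ g cm$^{-3}$; the satellite assumed to be water ice at $\rho \approx 1$ g cm$^{-3}$):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_speed(mass_kg, radius_m):
    """Surface escape speed, v_esc = sqrt(2 G M / R), in m/s."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

# Haumea: M_H ~ 4.2e21 kg, radius ~720 km for rho ~ 2.6 g/cm^3
v_haumea = escape_speed(4.2e21, 7.2e5)   # close to the ~900 m/s quoted

# Disrupted satellite: R ~ 260 km of water ice (rho ~ 1 g/cm^3)
m_sat = (4.0 / 3.0) * math.pi * (2.6e5) ** 3 * 1000.0
v_sat = escape_speed(m_sat, 2.6e5)       # close to the ~200 m/s quoted
```

The two escape speeds differ by roughly a factor of four, which is the heart of the satellite-breakup model's appeal for explaining the family's low velocity dispersion.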
The family members and satellites are a result of mass shedding due to the high spin rate of the merged body (Haumea) rather than being direct impact ejecta; this accounts for the low velocity dispersion of the family members. The authors ran several collision simulations and found that the resulting family would have a mass of $\sim4-7\%$ $M_H$, most of which would be found within $\sim300$~ms$^{-1}$ of the collision orbit. \subsection{Dynamical evolution of the family}\label{ss:dynamical_evolution} To determine how the dynamical evolution of the family will affect the mass and $\Delta v$ estimates from these formation scenarios, we generate synthetic families for the two models. We detail the assumptions for each step below, but the overall procedure is as follows: we first sample the size distribution predicted for the collisional model to generate a list of family members and their sizes. We then assign these members an initial $\Delta v$ according to the distribution of velocities from the model. This creates a snapshot of the family immediately after formation. To account for 3.5 Gyr of orbital evolution, we randomly assign an orbital history from one of our eight simulations (Section~\ref{s:sims}) to each of our synthetic family members (assigning it from the appropriate simulation based on initial $\Delta v$). Having assigned each family member an orbital history, we then take a snapshot of the surviving family members' orbital element and mass distributions at 3.5 Gyr. From this we can calculate the expected mass vs. $\Delta v$ distribution of the family for each model. For the graze-and-merge collisional model, \citet{Leinhardt2010} provide plots of the cumulative number of collisional fragments as a function of fragment mass and the cumulative mass of fragments as a function of $\Delta v$ from each of their high-resolution collision simulations (their Figure 3). 
We use the given total mass of collisional fragments to convert the normalized number of fragments presented in their Figure 3a to an absolute number of fragments. To convert the cumulative mass distribution to a cumulative size distribution, we assume that all the fragments have a uniform density of 1.15 g cm$^{-3}$, which is consistent with the family being composed of 80\% water ice by mass (their Table 2). We construct a set of synthetic family members from this size distribution and assign each fragment an initial $\Delta v$ based on the mass vs. velocity distribution (their Figure 3b); each bin in $\Delta v$ is filled with randomly selected family members until the specified mass in that bin is reached. Based on its initial value of $\Delta v$, each family member is then randomly assigned a 3.5 Gyr orbital history from our numerical simulations; in this way some family members are removed from the population, and the others diffuse in orbital elements and apparent $\Delta v$. \citet{Leinhardt2010} detail four different successful graze-and-merge collision simulations with slightly varying initial conditions (their simulations 1 through 4). For each of these family forming simulations, we generate 1500 synthetic, evolved families using the procedure above. \citet{Schlichting2009} did not perform collision simulations for the satellite breakup model, so we have to make some assumptions about the size and velocity distribution of the resulting family. We adopt a differential size distribution for the family \begin{equation}\label{eq:sd} N(R)dR \propto R^{1-\beta}dR \end{equation} where $R$ is the radius of the collision fragments; we adopt values of $\beta$ in the range 4.5-5.5, consistent with typical catastrophic collision simulations \citep{Leinhardt2012,Marcus2011}. 
Assuming a total family mass of $0.04-0.05 M_H$, we generate a synthetic family from this size distribution, with fragments in the size range $50$ km $ < R < 150$ km (the same size range as the graze-and-merge simulations, for ease of comparison). We use the same density for the family members as for the graze-and-merge formation scenario to convert radius to mass. For the ejection velocity of the fragments, we adopt a normal distribution with mean $200$~ms$^{-1}$ and standard deviation $50$~ms$^{-1}$. We generate 1500 synthetic families using these assumptions and dynamically evolve the family members for 3.5 Gyr in the same way as described for the graze-and-merge model. Figures~\ref{f:gm_mdv} and~\ref{f:sat_mdv} show our results for the \citet{Leinhardt2010} and \citet{Schlichting2009} formation models, respectively: we plot the average cumulative mass of the synthetic families as a function of their apparent $\Delta v$ at $t=3.5$ Gyr; the $1\sigma$ uncertainties are also indicated. Our calculations find that in the graze-and-merge model, the current evolved Haumea family should have a total mass of $0.045\pm0.01$ $M_H$, of which $\sim0.02$ $M_H$ should be found at $\Delta v < 150$~ms$^{-1}$. The currently observed family (including Haumea's satellites) is estimated to have a mass of $\sim0.017$ $M_H$ \citep{Cook2011}, which accounts for $\sim85\%$ of the mass that is expected to be found within $150$~ms$^{-1}$ of the collision center for the \citet{Leinhardt2010} formation model. The model predicts that there should be twice as much mass ($0.035\pm0.01$ $M_H$) in family members at larger velocities. The satellite breakup model predicts a surviving family of $\sim0.035$ $M_H$, mostly in the $\Delta v = 100$--$300$~ms$^{-1}$ range, indicating that the known family members account for $\sim50\%$ of the expected mass of the family. 
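Drawing fragments for the satellite-breakup synthetic families amounts to inverse-CDF sampling of the power-law size distribution (equation~\ref{eq:sd}) plus normally distributed ejection speeds; a sketch, with parameter values as stated above and function names that are our own:

```python
import random

def sample_radii(n, beta, r_min, r_max, rng=None):
    """Inverse-CDF sampling of the differential size distribution
    N(R) dR proportional to R^(1-beta) dR on [r_min, r_max] (beta != 2)."""
    rng = rng or random.Random(1)
    g = 2.0 - beta                      # exponent of the integrated distribution
    lo, hi = r_min ** g, r_max ** g
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / g) for _ in range(n)]

def sample_ejection_speeds(n, mean=200.0, sigma=50.0, rng=None):
    """Normally distributed ejection speeds (m/s), truncated at zero."""
    rng = rng or random.Random(2)
    return [max(0.0, rng.gauss(mean, sigma)) for _ in range(n)]
```

For steep slopes ($\beta \sim 5$) most of the sampled fragments sit near the lower size cutoff, as expected for catastrophic-disruption size distributions.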
\subsection{Comparison with observations}\label{ss:observations} The known Haumea family members are not an observationally complete population; to compare the evolved synthetic families from Section~\ref{ss:dynamical_evolution} to the observed family, we must estimate how many of our synthetic family members would be within the observable apparent magnitude and ecliptic latitude range of observational surveys that have detected the Haumea family members. For each of our synthetic families (obtained in Section~\ref{ss:dynamical_evolution}), we take a snapshot of the instantaneous orbital elements of the family members at $t=3.5$ Gyr from their assigned orbital histories. This snapshot allows us to calculate the heliocentric distance and ecliptic latitude for each synthetic family member. To calculate the apparent magnitude, we use the heliocentric distance and the assigned size (as described in Section~\ref{ss:dynamical_evolution}), but we also need to make some assumptions about the albedos of the family members. Haumea's albedo is 0.8 \citep{Lacerda2007}, and \citet{Elliot2010} have measured an albedo of 0.88 for the next brightest family member (2003 TX$_{300}$), but albedos have not been measured for the other family members. Based on the light curves obtained for five of the known family members and the light curves of other icy solar system bodies with known albedos, \citet{Rabinowitz2008} argue that the Haumea family members' albedos are likely in the range $0.3-1.4$. For our synthetic family, we adopt an albedo distribution with an average of 0.8 (Haumea's albedo) and a uniform spread of $\pm0.2$. (We discuss below how this assumption might affect the results of our comparison.) 
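The conversion from an assigned size, albedo, and heliocentric distance to an apparent magnitude can be sketched with the standard small-body relation $D(\mathrm{km}) = 1329\,p^{-1/2}\,10^{-H/5}$; the zero-phase opposition geometry (geocentric distance $\Delta \simeq r - 1$ AU) is an assumption made for illustration, and the phase correction is negligible for distant KBOs.

```python
import numpy as np

def absolute_magnitude(radius_km, albedo):
    """H from the standard relation D(km) = 1329 / sqrt(p) * 10^(-H/5)."""
    diameter = 2.0 * radius_km
    return 5.0 * np.log10(1329.0 / (diameter * np.sqrt(albedo)))

def apparent_magnitude(radius_km, albedo, r_helio_au):
    """Apparent magnitude at opposition, with geocentric distance ~ r - 1 AU
    and the phase correction neglected."""
    H = absolute_magnitude(radius_km, albedo)
    return H + 5.0 * np.log10(r_helio_au * (r_helio_au - 1.0))

def draw_albedos(n, mean=0.8, spread=0.2, rng=None):
    """Uniform albedo distribution 0.8 ± 0.2, as assumed in the text."""
    rng = rng or np.random.default_rng()
    return rng.uniform(mean - spread, mean + spread, size=n)
```

As a sanity check, a Haumea-sized body ($R \approx 715$ km, $p = 0.8$) placed at 51 AU comes out near $m \approx 17$, close to Haumea's observed brightness.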
Given this albedo assumption, we calculate the apparent magnitudes and ecliptic latitudes for each synthetic family, and we use the resulting distributions to calculate the number of objects as a function of $\Delta v$ for each synthetic family that would be detected by an observational survey. For the observational comparison, we use the results of the Palomar distant solar system survey conducted by \citet{Schwamb2010}. This was a wide-field survey of $\sim12000$~deg$^2$ down to a limiting magnitude of $m_r\simeq21.3$, detecting 52 KBOs and Centaurs (27 previously known objects, and 25 new ones), including 4 of the previously identified Haumea family members. The presence of so many known KBOs in their survey fields allowed \citet{Schwamb2010} to estimate that their detection efficiency was $\sim65\%$ down to $m_r\simeq21.3$ for the known population (see their Figure 3). They also provide a plot of the survey's fractional sky coverage as a function of ecliptic latitude (see their Figure 4); they covered approximately $50\%$ of the sky $\pm30^{\circ}$ from the ecliptic. From this information, we can estimate the detection probability for an object in our synthetic families based on its apparent magnitude and ecliptic latitude. We use this detection probability to determine how many of our synthetic collisional family members could have been detected in this survey. An important note here is that the \citet{Schwamb2010} survey was not capable of spectrally identifying family members. They can only say that there are four previously identified Haumea family members within their detections (listed in their Table 2), but it is possible that additional, unidentified Haumea family members are present within their survey detections. To examine this possibility, we calculated $\Delta v$ for each of their listed detections and found two additional objects within $500$~ms$^{-1}$ of the collision center-of-mass orbit.
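The survey-selection step described above can be sketched as a toy detection model: a flat $\sim$65\% efficiency down to $m_r\simeq21.3$ and $\sim$50\% sky coverage within $\pm30^{\circ}$ of the ecliptic, both read off the survey description. The sharp cutoffs are a simplification of the published efficiency and coverage curves.

```python
def detection_probability(m_r, ecl_lat_deg,
                          m_lim=21.3, efficiency=0.65,
                          lat_max=30.0, coverage=0.5):
    """Probability that one synthetic family member is detected:
    step-function efficiency in magnitude times fractional sky coverage."""
    if m_r > m_lim or abs(ecl_lat_deg) > lat_max:
        return 0.0
    return efficiency * coverage

def expected_detections(members):
    """Sum of per-member detection probabilities; `members` is a list of
    (apparent magnitude, ecliptic latitude in degrees) pairs."""
    return sum(detection_probability(m, b) for m, b in members)
```

Summing these per-member probabilities over a synthetic family gives the expected number of survey detections as a function of $\Delta v$.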
One of these objects, 2004 SB$_{60}$ ($\Delta v \approx 350$~ms$^{-1}$), was observed by \citet{Schaller2008} and found not to have the water ice spectral feature characteristic of all the other identified Haumea family members (its surface spectrum is consistent with no water ice being present). The other object, 2008 AP$_{129}$, has a $\Delta v \approx 140$~ms$^{-1}$, suggestive of a dynamical association with the Haumea family, but the object shows only a moderate amount of water ice absorption in its surface spectrum \citep{Brown2012}, with a water ice fraction substantially lower than measured for the known family members. If we allow for the possibility that 2008 AP$_{129}$ is a water ice poor family member, this means that it is possible that the \citet{Schwamb2010} survey detected as many as five Haumea family members, but this number is not likely to be larger. Figures~\ref{f:ndv_gm} and~\ref{f:ndv_sat} show the number of synthetic family members as a function of $\Delta v$ for each of the two formation scenarios that would have been detected by the \citet{Schwamb2010} survey; also shown for reference are the Haumea family members actually detected in that survey. For the satellite breakup model (Figure~\ref{f:ndv_sat}), the actual number of family members detected falls within the $1 \sigma$ range of total detections predicted by the model, but the observed values of $\Delta v$ are lower than the predicted values. Almost none of the synthetic families match the observations by producing 4 or 5 detections all with $\Delta v < 150$~ms$^{-1}$. The values of $\Delta v$ for some of the observed family members could be larger than the values in Table~\ref{t:known_family} though, because of the uncertainty in the orbital angles of the collision orbit. 
To constrain the collision orbit and calculate the minimum $\Delta v$ for the known family members, we assumed that the collision took place near the ecliptic plane (where collision probabilities are highest); the family members' ejection velocities from the collision could be larger than this minimum value, although \citet{Ragozzine2007} argue that the correction factor is likely to be $\sim2$ or lower unless the ejection of fragments was highly anisotropic (see also our discussion of this in Section~\ref{ss:cmorbit}). If we allow for a factor of two correction to the $\Delta v$ estimates for the real family members, we can extend the $\Delta v$ of the survey's detected family members out to $\sim250$~ms$^{-1}$. Using this increased allowable range of $\Delta v$, $18\%$ of the synthetic families satisfactorily reproduce the observations, making the satellite breakup model statistically consistent with the observations. The graze-and-merge model predicts that, on average, the survey should have detected 8 family members. This is larger than the 4 or 5 real detections, but the actual detections fall within the $1\sigma$ uncertainties of the synthetic families. Just as in the satellite breakup model, the real detections fall at significantly lower $\Delta v$ than predicted by the graze-and-merge model. Very few of the synthetic families result in all the detectable family members falling below $150$~ms$^{-1}$, and all of these cases result in too few detections to be consistent with the observations. If we increase the allowed maximum $\Delta v$ to $250$~ms$^{-1}$ and require the synthetic families to match the number of real detections (4 or 5), we find that only $4\%$ of the synthetic families reproduce the observations, indicating that the observations are not a typical outcome of the graze-and-merge model. 
We did, however, make a number of assumptions when generating our synthetic families, so we examine how these might be altered to bring the model into closer agreement with the observations. One assumption we made in the creation of the families in Section~\ref{ss:dynamical_evolution} was that there was no relationship between a fragment's mass and its $\Delta v$ from the collision center. Our only constraint was that the binned mass vs. $\Delta v$ matched the outcome of the \citet{Leinhardt2010} simulations. If there is a correlation between fragment mass and $\Delta v$ such that the higher $\Delta v$ family members are, on average, smaller than the lower $\Delta v$ members, this might account for the lack of observed family members at large $\Delta v$. To test this we impose a relationship between fragment mass, $m$, and $\Delta v$ such that, averaged over all the family members, \begin{equation}\label{eq:mdv} \Delta v(m) \propto m^{-k} \end{equation} while the total mass of all fragments within a given range of $\Delta v$ is still constrained by the \citet{Leinhardt2010} simulation results. This power law relationship has been seen in some laboratory impact experiments, with the exponent typically being $k < 1/6$ \citep{Nakamura1991,Holsapple2002,Giblin2004}. Taking $k=1/6$ and generating a new set of synthetic families for the graze-and-merge collision, the percentage of synthetic families with 4 or 5 detections all with $\Delta v < 250$~ms$^{-1}$ increases to $8\%$. The percentage of synthetic families with 4 or 5 detections and $\Delta v < 150$~ms$^{-1}$ is still negligible; even if we increase the value of $k$ to $1/4$ (which is not a likely value), we still see $<1\%$ of the synthetic families resulting in 4 or 5 detections below $150$~ms$^{-1}$. Another easily altered assumption is that of the albedos for the family members. 
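Before turning to the albedos, note that the imposed mass--velocity scaling $\Delta v(m) \propto m^{-k}$ can be illustrated directly. The normalization constants $v_0$ and $m_0$ below are hypothetical choices; in the text the overall scale is instead fixed by the binned mass-versus-$\Delta v$ constraint from the collision simulations.

```python
def delta_v_of_mass(m, k=1.0 / 6.0, v0=150.0, m0=1.0):
    """Mean ejection speed for a fragment of mass m, Delta_v(m) ∝ m^-k.
    v0 (m/s) and m0 set a hypothetical normalization; in practice the scale
    is fixed by matching the binned mass-vs-velocity of the simulations."""
    return v0 * (m / m0) ** (-k)
```

With $k=1/6$, a fragment eight times more massive is ejected about $8^{-1/6} = 2^{-1/2} \approx 0.71$ times as fast, so the correlation only mildly depletes the large-$\Delta v$ tail of the mass distribution.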
We assumed albedos in the range $0.6-1.0$, but if we assume systematically lower albedos, in the range $0.4-0.6$ (still consistent with their icy composition), we decrease the average predicted number of detections for the graze-and-merge model from $\sim8$ to $\sim4$, also decreasing the number of predicted detections at large $\Delta v$. With this albedo assumption, nearly half of the synthetic families result in no detections at $\Delta v > 250$~ms$^{-1}$, and the percentage of families with 4 or 5 detections and $\Delta v < 250$~ms$^{-1}$ is $11\%$. Lowering the albedos further does not increase the percentage of families that agree with the observations. If we assume the above albedo range in combination with the $k=1/6$ mass-$\Delta v$ relationship, the percentage of matching families is $\sim12\%$. Given that at least one family member besides Haumea has been shown to have a very bright albedo \citep{Elliot2010}, assuming systematically lower albedos for most of the family members might not be the most likely solution to the discrepancy between the observations and the models. However, it cannot be ruled out until the albedos of some of the smaller family members have been measured. Depending on the assumptions about albedos and mass-velocity relationships, and allowing for the uncertainty in the known family's $\Delta v$, we find that $4-12\%$ of the synthetic families generated by the graze-and-merge collision model reproduce the Haumea family, as observed by the \citet{Schwamb2010} survey. The actual observed family is at systematically lower $\Delta v$ than the model predicts, but given the relatively small set of observed family members, the model is still consistent with the observations.
\section{Discussion and Conclusions}\label{s:conclusions} After accounting for 3.5 Gyr of dynamical evolution and for the observational incompleteness of the known family, both of the proposed Haumea family formation models we have examined here \citep{Leinhardt2010,Schlichting2009} are consistent with the total number of observed family members. There is, however, a significant difference between the observed values of $\Delta v$ and those predicted in the formation models; almost none ($\ll1\%$) of the synthetic families we generated for either formation scenario account for the observations of family members that all fall within $150$~ms$^{-1}$ of the collision center. The only way we find to make the velocity distributions for the formation scenarios consistent with the observations is to allow that the actual ejection velocities of some of the known family members are a factor of $\sim2$ larger than the calculated minimum values. This adjustment of the observed $\Delta v$ falls within the uncertainties in the calculation of the collision center-of-mass orbit (see our discussion in Section~\ref{ss:cmorbit}). But even with this adjustment, only $10-20\%$ of the synthetic families reproduce the observed family. This is a reasonable level of agreement given the uncertainties in the collision models and the small number of observed family members, but it is still interesting that so few family members have been identified at large $\Delta v$. In Section~\ref{ss:observations} we explored how the assumptions we made about the albedos of the family members and possible correlations between fragment mass and ejection velocity could change the predictions of the graze-and-merge collision model. We find that it is difficult to alter these assumptions enough to obtain a better than 10--15\% agreement with the observations.
For either formation scenario, we find that there should be an additional $\sim0.01-0.03$ $M_H$ of family members that have yet to be identified, i.e., at least as many as those already detected. If, as we discover these additional family members, the distribution of $\Delta v$ remains too heavily weighted toward the low end, the formation models will have to be reconsidered. Recent spectroscopic and photometric surveys of the Kuiper belt designed to detect water ice have not identified any large $\Delta v$, ice-rich Haumea family members. \citet{Fraser2012} performed a photometric study of 120 objects from the major dynamical classes of the Kuiper belt using the Hubble Space Telescope (HST) and did not find any additional Haumea family members. \citet{Benecchi2011} also performed HST photometry of a large sample of Kuiper belt objects and failed to identify any new ice-rich family members. Ground-based studies searching for water ice in the Kuiper belt have also failed to detect higher $\Delta v$ family members; \citet{Brown2012} detected no additional family members, and \citet{Trujillo2011} found only one new member, located near the dynamical core of the family. The extent of these surveys suggests that if there were ice-rich Haumea family members spread at large $\Delta v$ throughout the Kuiper belt, we likely should have already identified some of them. This is consistent with our comparisons of the proposed collisional models to the observed Haumea family; Figure~\ref{f:simulated_detections} shows the distribution of simulated detections for a subset of our dynamically evolved graze-and-merge collisional families (see Section~\ref{ss:observations}) compared to the known Haumea family.
The simulated detections span a larger parameter range of the Kuiper belt (due to the presence of higher $\Delta v$ family members) than the actual detections; \citet{Lykawka2012} also found that simulated Haumea families with $\Delta v$ consistent with existing collisional formation models would occupy a larger portion of the Kuiper belt than currently observed. In contrast to the expected $\Delta v$ distributions from the collisional models, we find that, accounting for the time evolution of the $a$--$e$--$i$ and $\Delta v$ distributions in our simulations, all of the known family members are consistent with initial $\Delta v < 100$ ms$^{-1}$. One possible way to modify the collision models was suggested by \citet{Cook2011}, who argue that perhaps the proto-Haumea was only partially differentiated. This would allow for some of the collisional fragments to be rocky rather than primarily water ice, which would mean that they might not show the water ice spectral feature that has thus far been used as the only secure way to identify a family member; perhaps the dynamically nearby KBO 2008 AP$_{129}$, which has a $\Delta v$ of only 140~ms$^{-1}$ but shows less water ice absorption than the accepted family members (see Section~\ref{ss:observations}), represents a new class of rockier family members. A rockier composition could also make the fragments darker and therefore more difficult to detect at all. It has been similarly suggested that surface inhomogeneities on the proto-Haumea could result in collisional fragments with different compositional characteristics from those of the known family members \citep{Schaller2008}. It is unclear why, in either of these situations, there would be a preference for the less icy fragments to be dispersed at large $\Delta v$, but if the composition of the target and impactor are substantially different from those assumed in the collision simulations, that could affect the entire $\Delta v$ distribution.
Another possibility is that the formation models have not adequately accounted for collisional evolution amongst the ejected fragments themselves, and that this could alter the family's size and/or velocity distribution in a significant way. Collision simulations like the ones in \citet{Leinhardt2010} are computationally very expensive and they follow the family's evolution for only a few thousand spin periods of the primary (a few hundred days in total), so the model's final size and velocity distributions are not fully evolved. The presence or absence of higher $\Delta v$ family members with future additions to the set of observed Haumea family members will determine if any of these modified scenarios should be considered, or if there is another collisional model that could better explain the family. For either the \citet{Leinhardt2010} or the \citet{Schlichting2009} models to be consistent with observations, we should find several higher $\Delta v$ family members among any new identifications. In summary, our study of the long-term dynamical evolution of the Haumea family leads to the following conclusions. \begin{enumerate} \item The Haumea family is at least 100 Myr old. This estimate is based on the timescale to randomize the nodal longitudes of the orbital planes of the family members, as well as the timescale for chaotic evolution of Haumea's eccentricity in the 12:7 MMR with Neptune. From the chaotic diffusion of Haumea's eccentricity, we can conclude with $95\%$ confidence that the family is older than 1 Gyr. \item For initial ejection velocities, $\Delta v$, in the range $50-400$~ms$^{-1}$, $20-45$\% of original Haumea family members are lost due to close encounters with Neptune over 3.5 Gyr. Most of this loss occurs at the inner edge of the family (interior to $\sim 41$ AU) and near the locations of MMRs with Neptune. A few percent of the surviving Haumea family members are expected to be found in MMRs with Neptune.
The 3:2 and 7:4 MMRs are the most likely of the resonances to contain surviving members. \item Within the population of surviving and potentially recognizable family members, chaotic diffusion in orbital elements over 3.5 Gyr introduces a $50-100$~ms$^{-1}$ spread in the apparent velocities of the family relative to the collision center-of-mass orbit, with the average $\Delta v$ increasing slightly over time. \item Applying long-term dynamical evolution to the graze-and-merge collision model of \citet{Leinhardt2010}, we find that the currently observed family represents $>85\%$ of the expected family mass within $150$~ms$^{-1}$ of the collision center, but an additional $0.035\pm0.01$ $M_H$ (about twice the mass of the known family) remains to be identified at larger $\Delta v$. Accounting for observational incompleteness, the \citet{Leinhardt2010} model is consistent with the observations at the $\sim10\%$ confidence level. \item For the satellite breakup model of \citet{Schlichting2009}, we find that the currently observed family accounts for $\sim 50\%$ of the expected mass of the family. Most of the remaining mass should be found at $\Delta v > 150$~ms$^{-1}$. Accounting for observational incompleteness, the satellite breakup model is consistent with the observations at the $\sim20\%$ confidence level. \item Both formation models predict more family members at large $\Delta v$ than are currently observed (even allowing for a factor of $\sim2$ higher values of $\Delta v$ for the known family members due to the uncertainty in estimates of the collision center-of-mass orbit). If additional Haumea family members are identified and continue to have low $\Delta v$ ($\lesssim 200$~ms$^{-1}$), new formation models (or modifications to the existing models) will have to be considered. \end{enumerate} \acknowledgments This research was supported by grant no.~NNX08AQ65G from NASA's Outer Planets Research program. We thank D.~Ragozzine for a helpful review.
\section{Introduction} In Section~\ref{sec: graphs}, we describe the graph-theoretic framework for the investigation of the algebraic information contained in the topology of scalar Feynman diagrams. Perturbative quantum field theories possess an inherent algebraic structure, which underlies the combinatorics of recursion governing renormalisation theory, and are thus deeply connected to the theory of graphs. \looseness=1 In Section~\ref{sec: geometry}, we broadly review preliminary notions in algebraic geometry and algebraic topology. An algebraic variety over $\mathbb{Q}$ gives rise to two distinct rational structures via algebraic de Rham cohomology and Betti cohomology, which are compatible with each other only after complexification. The coexistence of these two cohomologies and their peculiar compatibility are linked to a specific class of complex numbers, known as periods. The cohomology of an~alge\-braic variety is equipped with two filtrations, and the mixed Hodge structure arising from their interaction constitutes the bridge between the theory of periods and the theory of motives. In Section~\ref{sec: periods}, we introduce the set of periods, lying between $\bar{\mathbb{Q}}$ and $\mathbb{C}$, among which are the numbers that come from evaluating parametric Feynman integrals, and we briefly review their remarkable properties. Suitable cohomological structures are exploited to derive non-trivial information about these numbers. In Section~\ref{sec: motives}, we describe how Feynman integrals are promoted to periods of motives. Technical issues arising from the presence of singularities are tackled by blow up. We~adopt the category-theoretic Tannakian formalism where motivic periods, and motivic Feynman integrals in particular, reveal their most intriguing properties. 
We~present an overview of the current progress of research towards the general understanding of the structure of scattering amplitudes via the theory of motivic periods, giving particular attention to recent results in massless scalar~$\phi^4$ quantum field theory. \section{Scalar Feynman graphs} \label{sec: graphs} \subsection{Perturbative quantum field theory} A quantum field theory encodes in its Lagrangian every admissible interaction among particles, but it does it in a way that makes decoding this information a difficult task. The probability amplitude associated to the interaction process between given initial and final states, called its \textit{Feynman amplitude}, is determined by the set of kinematic and interaction terms in the Lag\-ran\-gian. However, individual Lagrangian terms correspond to propagators and interaction vertices which can be linked together in infinitely many distinct ways to connect the same pair of initial and final states. Each of these admissible realisations of the same interaction process has to be accounted for in an infinite sum of contributions to the probability amplitude. We~associate to each of these possibilities a graphical representation, called its \textit{Feynman diagram}, whose individual contribution to the probability amplitude is explicitly written in the form of a \textit{Feynman integral} by applying the formal correspondence between Lagrangian terms and graphical components, which is established by convention through the set of Feynman rules of~the theory. It~is only the sum of the contributing Feynman integrals to a given process that has a physical meaning and not the individual integrals, which are themselves interrelated by~the gauge symmetry of the Lagrangian. In perturbative quantum field theory, the sum of individual Feynman integrals is a \textit{perturbative expansion} in some small parameter of the theory, typically a suitable coupling constant. 
Thus, the Feynman amplitude can be expanded in a formal power series, which has been shown to be divergent\footnote{Serone et al~\cite{SSV17} characterised the conditions under which some class of asymptotic perturbative series are Borel resummable, leading to exact results without introducing non-perturbative effects in the form of trans-series.} by Dyson~\cite{Dys52}. The divergence does not, however, undermine the accuracy of~predictions that can be made with the theory. Indeed, although a Feynman amplitude recei\-ves contributions to any order in perturbation theory, practical calculations are made by truncating the sum at a certain order and directly evaluating only the remaining finitely many terms. Moreover, the explicit calculation of a Feynman amplitude only includes those diagrams which are one-particle irreducible, or 1PI, that is, diagrams which cannot be divided in two by~cutting through a single propagator. See Fig.~\ref{fig:1PI}. The contribution from a non-1PI diagram at some given order can be expressed as a combination of lower-order 1PI contributions, which have already been accounted for in the formal series. \begin{figure}[htb!] \centering \subfloat[One-particle irreducible]{\includegraphics[scale=.5]{1PI_leftN.png}} \quad \subfloat[One-particle reducible]{\includegraphics[scale=.5]{1PI_rightN.png}} \caption{Examples of 1PI and non-1PI diagrams.} \label{fig:1PI} \end{figure} \looseness=1 The leading order terms in the perturbative expansion of a Feynman amplitude are called tree-level contributions. Higher order diagrams are obtained from tree-level diagrams by adding internal loops. Each independent loop in a diagram is associated to an unconstrained momentum and integrals over unconstrained loop momenta are the origin of \textit{singularities} in Feynman integrals. We~distinguish two classes of singularities. 
The \textit{ultraviolet} (UV) divergences arise in the limit of infinite loop momentum, a regime far beyond the energy scales to which we currently have experimental access, where we expect new physical phenomena to become relevant and corresponding new terms to enter the Lagrangian. Sensitivity to the high loop momentum region is treated by means of \textit{renormalisation theory}. For a renormalizable theory, a~suitable adjustment of the Lagrangian parameters allows one to systematically re-express the predictions of the theory in terms of renormalized physical couplings, so that they decouple from UV physics. Thus, the theory gives a finite and well-defined relation between physical observables. The \textit{infrared} (IR) divergences only arise in theories with massless particles as they originate in the limit of infinitesimal loop momentum. They cannot be removed by renormalisation and introduce numerous subtleties in the evaluation of Feynman integrals, which we do not deal with in the present text. For a detailed and comprehensive presentation of perturbative quantum field theory we refer to Zee~\cite{Zee03} and Srednicki~\cite{Sre10}. \looseness=1 Evaluating Feynman integrals over loop momenta has been of fundamental concern since the early days of perturbative quantum field theory. Since the first insights into the problem of UV divergences in a quantum field theory presented by Dyson~\cite{Dys49,Dys52}, Salam~\cite{Sal51_2,Sal51_1} and Weinberg~\cite{Wei60} in the 1950s and 60s, our understanding has vastly improved. In 2004, Smirnov~\cite{Smi06} summarised more than fifty years of advancements in the field, providing an overview of the most powerful, successful and well-established methods available at the time for evaluating Feynman integrals in a systematic way, showing how the problem of evaluation had become more and more critical. What could be easily evaluated had already been evaluated years ago.
Nowadays, new approaches, based on the symmetry properties of the loop integrands\footnote{We refer to Elvang and Huang~\cite{EH13} for a review of the subject, including unitarity methods, BCFW recursion relations, and the methods of leading singularities and maximal cuts.} and the complementary perspective of differential equations,\footnote{Henn~\cite{Henn15} gives an overview of the method of differential equations, using tools such as Chen iterated integrals, multiple polylogarithms, and the Drinfeld associator.} are available and vastly studied. Despite progress, the mathematical understanding and the computation of Feynman integrals are still far from being complete. Overlapping divergences can be treated iteratively, thus revealing in the first place the recursive nature of renormalisation theory. However, this combinatorics of subdivergences is only the first hint to a more fundamental algebraic structure inherent in all renormalizable quantum field theories and deeply connected to the theory of graphs.\footnote{A first discussion about the appearance of transcendental numbers in Feynman integrals and its relation to the topology of Feynman graphs is presented by Kreimer~\cite{Kre97} in the framework of knot theory and link diagrams. A~recent review on the theory of numbers and single-valued functions on the complex plane which arise in quantum field theory is presented by Schnetz~\cite{Sch16} in the modern context of the theory of motivic periods.} \subsection{Feynman parametrisation} \looseness=1 We consider a scalar quantum field theory in an even number $D$ of space-time dimensions with Euclidean metric\footnote{It is common practice to compute amplitudes in Euclidean space. Moving to Minkowski space involves performing an extension by analytic continuation known as Wick rotation. See for example~\cite{Smi06} and~\cite{Sre10}.} and allow different propagators to have different masses. 
A~Feynman diagram is a \textit{connected directed graph} where each edge represents a propagator and is assigned a~momentum and a mass and each vertex stands for a tree-level interaction. Exter\-nal half-edges, also known as external legs, represent incoming or outgoing particles, while internal edges are the internal propagators of the diagram. We~define the \textit{loop number} to be the number of~inde\-pen\-dent cycles of the graph. We~adopt the convention for which all external legs have arrows pointing inwards, and consequently distinguish incoming and outgoing particles depending on~the momentum being positive or negative, respectively. Momentum is positive when it points in~the same direction of the arrow of the corresponding directed edge, and it is negative otherwise. We~fix momenta on external lines and for each internal loop we choose an arbitrary orientation of the edges which is consistent with momentum conservation at each vertex of the graph and globally. Momentum conservation leaves one unconstrained free momentum variable for each loop. Thus, the loop number is equal to the number of independent loop momentum vectors. We~only consider graphs that are one-particle irreducible. {\samepage Let $G$ be such a Feynman graph with $m$ external legs, $n$ internal edges, and $l$ independent loops. Its Feynman integral, up to numerical prefactors, is \begin{gather} \label{eq: I_G_old} I_G= \big(\mu^2\big)^{n-lD/2} \int \prod_{r=1}^l \frac{{\rm d}^Dk_r}{{\rm i} \pi^{D/2}} \; \prod_{j=1}^n \frac{1}{-q_j^2+m_j^2-{\rm i} \varepsilon}, \end{gather} where $\varepsilon$ is a small positive parameter,\footnote{The $-{\rm i} \varepsilon$ term is required by the choice of Feynman pole prescription for the computation of the propagators and it allows one to perform a Wick rotation to Minkowski space. In what follows, however, the $-{\rm i} \varepsilon$ term does not play a role.
We~set $\varepsilon = 0$ for simplicity of notation.} $\mu$ is a scale introduced to make the expression dimensionless,\footnote{In what follows, the scale $\mu$ remains factored out. We~set $\mu^2 = 1$ for simplicity of notation.} $k_1,\dots,k_l$ are the independent loop momenta, $m_1,\dots,m_n$ are the masses of the internal lines, and $q_1,\dots,q_n$ are the momenta flowing through them, which can be expressed as \begin{gather} q_j=\sum_{i=1}^l \lambda_{ji} k_i + \sum_{i=1}^m \sigma_{ji} p_i, \end{gather} where $p_1,\dots,p_m$ are the external momenta and $\lambda_{ji}$, $\sigma_{ji} \in \{ -1,0,1 \}$ are constants depending on~the particular graph structure. } Feynman~\cite{Fey49} introduced the well-known manipulation consisting of defining a set of parameters $x_1,\dots,x_n$, called \textit{Feynman parameters}, one for each internal edge of the graph, and applying the formula \begin{gather} \prod_{j=1}^n \frac{1}{P_j} = \Gamma(n) \int_{\{x_j \ge 0\}} {\rm d}^nx \,\delta\bigg(1- \sum_{j=1}^n x_j \bigg) \frac{1}{\left(\sum_{j=1}^n x_j P_j \right)^n} \end{gather} with the choice $P_j=-q_j^2+m_j^2$ for $j=1,\dots,n$. Here, $\Gamma$ is the Euler gamma function and $\delta$ is the Dirac delta distribution. Indeed, we can write \begin{gather} \sum_{j=1}^n x_j \big({-}q_j^2+m_j^2\big) = - \sum_{r=1}^l \sum_{s=1}^l k_r \cdot (M_{rs} k_s) + \sum_{r=1}^l 2 k_r \cdot Q_r + J, \end{gather} where $M$ is an $l \times l$-matrix with scalar entries, $Q$ is an $l$-vector with momentum vectors as entries and $J$ is a scalar. $M$, $Q$ and $J$ can be suitably expressed in terms of the graph parameters $\{x_j, q_j, m_j\}_{j=1}^n$. 
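For $n=2$ the Feynman-parameter identity reduces to $1/(P_1 P_2) = \int_0^1 {\rm d}x\, [x P_1 + (1-x) P_2]^{-2}$, which is easy to verify numerically; the midpoint quadrature below is only an illustration, valid for positive denominators.

```python
import numpy as np

def feynman_two_denominators(P1, P2, n_pts=200_000):
    """Midpoint-rule evaluation of the n = 2 Feynman-parameter integral
    ∫_0^1 dx [x P1 + (1-x) P2]^(-2), which reproduces 1/(P1 P2)."""
    x = (np.arange(n_pts) + 0.5) / n_pts        # midpoints of a uniform grid
    return np.mean((x * P1 + (1.0 - x) * P2) ** -2)

# e.g. P1 = 2, P2 = 5: the integral equals 1/(2*5) = 0.1
```

The analytic antiderivative $1/[3(5-3x)]$ for $P_1=2$, $P_2=5$ confirms the value $1/6 - 1/15 = 0.1$.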
Applying Feynman parametrisation to~\eqref{eq: I_G_old}, the $l$-dimensional integral over the loop momenta becomes an $(n-1)$-dimensional integral over the Feynman parameters \begin{gather} \label{eq: I_G_new} I_G = \Gamma\bigg(n-\frac{lD}{2}\bigg) \int_{\{ x_j \ge 0 \}} {\rm d}^nx \, \delta\bigg(1- \sum_{j=1}^n x_j \bigg) \frac{\mathcal{U}^{n-(l+1)D/2}}{\mathcal{F}^{n-lD/2}}, \end{gather} which is characterised by the polynomials $\mathcal{U}= \det (M)$ and $\mathcal{F}= \det (M) \big(J+ Q M^{-1} Q \big)$, called \textit{first} and \textit{second Symanzik polynomials} of the Feynman graph, respectively. Notice that the dimension~$D$ of space-time, entering the exponents in the integrand of~\eqref{eq: I_G_new}, acts as a regulator. We~use dimensional regularisation\footnote{The dimensional regularisation procedure was first introduced by 't~Hooft and Veltman~\cite{tHooftVeltman}.} with $D=4-2\epsilon$ and $\epsilon$ a small parameter. A~detailed description of~Feynman parametrisation can be found in Srednicki~\cite{Sre10}. \begin{Example} Consider a one-loop diagram with $m=n$ external legs, such as the one shown in~Fig.~\ref{fig: nbox}. \begin{figure}[htb!]
\centering \includegraphics[scale=.5]{nboxN.png} \put(-182,130){\makebox(0,0)[lb]{\small $m_n$}} \put(-95,166){\makebox(0,0)[lb]{\small $m_6$}} \put(-55,130){\makebox(0,0)[lb]{\small $m_5$}} \put(-55,85){\makebox(0,0)[lb]{\small $m_4$}} \put(-95,45){\makebox(0,0)[lb]{\small $m_3$}} \put(-141,45){\makebox(0,0)[lb]{\small $m_2$}} \put(-182,85){\makebox(0,0)[lb]{\small $m_1$}} \put(-155,125){\makebox(0,0)[lb]{\small $q_n$}} \put(-98,146){\makebox(0,0)[lb]{\small $q_6$}} \put(-77,125){\makebox(0,0)[lb]{\small $q_5$}} \put(-77,90){\makebox(0,0)[lb]{\small $q_4$}} \put(-98,66){\makebox(0,0)[lb]{\small $q_3$}} \put(-136,68){\makebox(0,0)[lb]{\small $q_2$}} \put(-155,90){\makebox(0,0)[lb]{\small $q_1$}} \put(-198,120){\makebox(0,0)[lb]{\small $p_n$}} \put(-156,170){\makebox(0,0)[lb]{\small $p_{n-1}$}} \put(-75,170){\makebox(0,0)[lb]{\small $p_5$}} \put(-35,120){\makebox(0,0)[lb]{\small $p_4$}} \put(-44,55){\makebox(0,0)[lb]{\small $p_3$}} \put(-132,20){\makebox(0,0)[lb]{\small $p_2$}} \put(-184,55){\makebox(0,0)[lb]{\small $p_1$}} \caption{Example of a one-loop Feynman diagram with $m=n$ external legs.} \label{fig: nbox} \end{figure} \noindent Its Symanzik polynomials are \begin{gather}\label{eq: 1Sym} \mathcal{U}_{\text{1-loop}} = \sum_{j=1}^n x_j, \\ \mathcal{F}_{\text{1-loop}} = \mathcal{U}_{\text{1-loop}} \sum_{j=1}^n m_j^2 x_j + \sum_{\substack{i,j=1 \\ i < j}}^n (q_i-q_j)^2 x_i x_j, \end{gather} where the internal momenta are given by $q_1=k$, $q_i=k+p_1+\dots+p_{i-1}$ for $1 < i \leq n$. Here,~$k$~is the unique loop momentum of the graph, and $p_1+\dots+p_m=0$ by global momentum conservation. \end{Example} \subsection{Graph polynomials} Re-expression of Feynman integrals in parametric form shows that the correspondence bet\-ween scalar Feynman diagrams and Feynman integrals can be reformulated in different terms. 
The~infor\-mation contained in a Feynman graph is distributed among several components, which can be identified as the underlying graph structure, the directionality of the edges and the various edge labels. If~we separate a Feynman graph into these layers and momentarily neglect the extra information apart from the graph structure, we observe that its integral is insensitive to changes of the graph which leave its topology unaltered. Focusing on the underlying graph topology, the Symanzik polynomials can be suitably re-interpreted and they are commonly called \textit{graph polynomials} in this context. Let $G$ be a finite graph without isolated vertices. $G$ is specified by the pair $(V_G, E_G)$, where $V_G$ is the collection of vertices and $E_G$ is the collection of edges. We~choose an arbitrary orientation of its edges and define the map \begin{align} \mathbb{Z}^{E_G} &\longrightarrow \mathbb{Z}^{V_G}, \\ e &\longmapsto t(e)-s(e), \end{align} where $e \in E_G$ is an edge and $s(e),t(e) \in V_G$ are its \textit{source} and \textit{target} endpoints with respect to the edge orientation. Let~us extend this map to the exact sequence \begin{gather} 0 \rightarrow H_1(G, \mathbb{Z}) \rightarrow \mathbb{Z}^{E_G} \rightarrow \mathbb{Z}^{V_G} \rightarrow H_0(G, \mathbb{Z}) \rightarrow 0, \end{gather} where $H_0(G, \mathbb{Z})$ and $H_1(G, \mathbb{Z})$ are the zeroth and first homology groups of the graph. As~a~con\-sequence, the graph loop number $l_G$ is related to the number of edges $n_G$, the number of verti\-ces~$v_G$, and the number of connected components $c_G$ by\footnote{The loop number is equivalently defined as the rank of the first homology group of the graph, while the number of connected components corresponds to the rank of the zeroth homology group of the graph.} \begin{gather} l_G = \text{rank}(H_1(G, \mathbb{Z})) = |E_G| - |V_G| + \text{rank}(H_0(G, \mathbb{Z})) = n_G - v_G + c_G.
\end{gather} Assume $G$ is a graph of Feynman type, that is, finite, connected and one-particle irreducible. Let~the \textit{valence} of a vertex be the number of edges attached to it. Since they do not contribute to~the braid pattern of Feynman graphs, vertices of valence one, corresponding to the source endpoints of external legs, and vertices of valence two, corresponding to mass insertions, play no role here. To such a graph $G$ we wish to assign an integral $I_G$ which corresponds to the one previously defined in~\eqref{eq: I_G_new} when the neglected extra information is re-inserted. We~start by~associating a variable $x_e$ to every internal edge $e \in E_G$ of the graph. These variables are known as \textit{Schwinger parameters} and they are the graph-theoretic analogues of Feynman parameters. Let~$\mathcal{T}_1$ be the set of \textit{spanning trees}\footnote{A graph of zero loop number with $k$ connected components is called a $k$-forest. When $k=1$, the forest is called a tree. Given an arbitrary connected graph $G$, a spanning $k$-forest of $G$ is a subgraph $T \subseteq G$ such that $V_T=V_G$ and $T$ is a $k$-forest. A~spanning $k$-forest of $G$ is usually denoted by the collection of its trees. A~connected graph always has at least one spanning tree.} of $G$. The \textit{first graph polynomial} of $G$ is defined as \begin{gather} \label{eq: Psi_G} \Psi_G = \sum_{\substack{T \in \mathcal{T}_1}} \prod_{e \notin E_T} x_e. \end{gather} It is a homogeneous polynomial of degree $l_G$ in the Schwinger parameters. Note that each monomial of $\Psi_G$ has coefficient one, and $\Psi_G$ is linear in each Schwinger parameter. \begin{Example} The first graph polynomial of the Feynman graph shown in Fig.~\ref{fig: loops} is $\Psi_G = x_1 \cdots x_n \big( \frac{1}{x_1}+\dots+ \frac{1}{x_n}\big)$. \begin{figure}[htb!]
\centering \includegraphics[scale=.37]{loopsN.png} \put(-82,86){\makebox(0,0)[lb]{\small $x_1$}} \put(-82,68){\makebox(0,0)[lb]{\small $x_2$}} \put(-82,46){\makebox(0,0)[lb]{\small $x_3$}} \put(-82,5){\makebox(0,0)[lb]{\small $x_n$}} \caption{Example of a scalar Feynman graph with $n$ internal propagators.} \label{fig: loops} \end{figure} \end{Example} By construction, the first Symanzik polynomial $\mathcal{U}$ of a Feynman graph $G$ does not depend on the momenta and masses involved in the diagram, but depends only on the graph topology. Indeed, it explicitly identifies with the first graph polynomial $\Psi_G$ of the corresponding pure graph structure. The same is not true for the second Symanzik polynomial $\mathcal{F}$, which is a~function of~external momenta and internal masses. However, we can re-express $\mathcal{F}$ in a way that clearly separates the contribution to $\mathcal{F}$ coming from the graph topology from its other dependences. To this end, the momentum and mass edge labels must re-enter our discussion. Let~$\mathcal{T}_2$ be the set of \textit{spanning $2$-forests} of $G$ and $P_{T_i}$ be the set of external momenta of $G$ attached to its tree $T_i$. The \textit{second graph polynomial} of $G$ is defined as \begin{gather} \Xi_G (\{p_j, m_e\}) = \bigg( \sum_{e \in E_G} m_e^2 x_e \bigg) \Psi_G \; - \sum_{(T_1,T_2) \in \mathcal{T}_2} \bigg( \prod_{e \notin E_{T_1} \cup E_{T_2}} x_e \bigg) \Bigg(\sum_{\substack{p_j \in P_{T_1} \\ p_k \in P_{T_2}}} p_j \cdot p_k \Bigg). \end{gather} It is a homogeneous polynomial of degree $l_G+1$ in the Schwinger parameters. Note that, if all internal masses are zero, then $\Xi_G$ is linear in each Schwinger parameter. It~follows from their definitions that the second Symanzik polynomial and the second graph polynomial of a~Feynman graph are, indeed, the same.
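The combinatorial definitions above can be checked mechanically on small graphs. The following Python sketch enumerates spanning trees and spanning $2$-forests by brute force over edge subsets; the $4$-cycle, with our own (hypothetical) vertex and edge indexing, is the graph underlying the box diagram considered below, and the enumeration reproduces its $\Psi$-monomials and the pairs $x_i x_j$ carrying the momentum-dependent part of $\Xi$:

```python
# Brute-force enumeration of spanning trees and spanning 2-forests of the
# 4-cycle ("box") graph.  Edge i joins vertices i and (i+1) mod 4 and carries
# the Schwinger parameter x_{i+1}; this labelling is our own convention.
from itertools import combinations

def components(n_vertices, edges):
    """Number of connected components of the edge subset; None if it has a cycle."""
    parent = list(range(n_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    comps = n_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return None  # adding this edge closes a cycle
        parent[ru] = rv
        comps -= 1
    return comps

def spanning_k_forests(n_vertices, edges, k):
    """All edge subsets that form spanning k-forests (acyclic, k components)."""
    forests = []
    for r in range(len(edges) + 1):
        for subset in combinations(range(len(edges)), r):
            if components(n_vertices, [edges[i] for i in subset]) == k:
                forests.append(set(subset))
    return forests

box_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
trees = spanning_k_forests(4, box_edges, 1)
# One monomial per spanning tree: product of the parameters of deleted edges.
psi_monomials = sorted(tuple(sorted(set(range(4)) - t)) for t in trees)
print(psi_monomials)  # [(0,), (1,), (2,), (3,)] -> x1 + x2 + x3 + x4

forests2 = spanning_k_forests(4, box_edges, 2)
xi_monomials = sorted(tuple(sorted(set(range(4)) - f)) for f in forests2)
print(xi_monomials)   # the six pairs -> the x_i x_j momentum monomials
```

The four singletons and six pairs printed here match the $\mathcal{U}$- and (massless) $\mathcal{F}$-monomials of the box diagram exhibited in the example that follows.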
Moreover, having fixed the momenta of external particles and the masses of internal propagators, we are left with the explicit dependence of $\mathcal{F}$ on the graph structure given in terms of spanning $2$-forests. \begin{figure}[htb!] \centering \subfloat[Full Feynman diagram]{\includegraphics[scale=.35]{box_leftN.png} \put(-70,111){\makebox(0,0)[lb]{\small $q_3$}} \put(-120,62){\makebox(0,0)[lb]{\small $q_2$}} \put(-22,62){\makebox(0,0)[lb]{\small $q_4$}} \put(-70,14){\makebox(0,0)[lb]{\small $q_1$}} \put(-71,88){\makebox(0,0)[lb]{\small $m_3$}} \put(-96,62){\makebox(0,0)[lb]{\small $m_2$}} \put(-50,62){\makebox(0,0)[lb]{\small $m_4$}} \put(-71,36){\makebox(0,0)[lb]{\small $m_1$}} \put(-14,98){\makebox(0,0)[lb]{\small $p_3$}} \put(-128,98){\makebox(0,0)[lb]{\small $p_2$}} \put(-14,26){\makebox(0,0)[lb]{\small $p_4$}} \put(-128,26){\makebox(0,0)[lb]{\small $p_1$}}} \qquad\qquad \subfloat[Underlying graph structure]{\includegraphics[scale=.35]{box_rightN.png} \put(-70,102){\makebox(0,0)[lb]{\small $x_3$}} \put(-110,62){\makebox(0,0)[lb]{\small $x_2$}} \put(-28,62){\makebox(0,0)[lb]{\small $x_4$}} \put(-70,22){\makebox(0,0)[lb]{\small $x_1$}}} \caption{Box diagram with four legs.} \label{fig: box} \end{figure} \begin{Example} \label{ex: box} To explicitly see how the individual terms in the graph polynomials arise from the knot structure of the diagram, we look closer at the one-loop Feynman graph with $m=4$ external legs, also called \textit{box diagram},\footnote{This gives a next-to-leading order contribution to the two-to-two particle scattering process. Srednicki~\cite{Sre10} gives a detailed discussion of two particles elastic scattering at one-loop using standard methods in perturbative quantum field theory.} which is shown in Fig.~\ref{fig: box}. 
Its Symanzik polynomials are \begin{gather}\label{eq: box} \mathcal{U}_{\text{box}} = x_1+x_2+x_3+x_4, \\ \mathcal{F}_{\text{box}} = \big[(x_1+x_2+x_3+x_4)\big(m_1^2x_1+m_2^2x_2+m_3^2x_3+m_4^2x_4\big)+ x_1x_2 p_1^2+ x_2x_3 p_2^2 + x_3x_4 p_3^2 \\ \hphantom{\mathcal{F}_{\text{box}} =} {} + x_4x_1 p_4^2+ x_1x_3 (p_1+p_2)^2 + x_2x_4 (p_2+p_3)^2\big]. \end{gather} Neglecting mass terms, the remaining monomials correspond to the spanning forests shown in~Figs.~\ref{fig: trees} and~\ref{fig: forests}. \begin{figure}[htb!] \centering \subfloat[$+x_1$]{\includegraphics[width=0.19\textwidth]{tree1N.png} \put(-48,65){\makebox(0,0)[lb]{\small $x_3$}} \put(-75,39){\makebox(0,0)[lb]{\small $x_2$}} \put(-20,39){\makebox(0,0)[lb]{\small $x_4$}} } \qquad \subfloat[$+x_2$]{\includegraphics[width=0.19\textwidth]{tree2N.png} \put(-48,65){\makebox(0,0)[lb]{\small $x_3$}} \put(-20,39){\makebox(0,0)[lb]{\small $x_4$}} \put(-48,12){\makebox(0,0)[lb]{\small $x_1$}}} \qquad \subfloat[$+x_3$]{\includegraphics[width=0.19\textwidth]{tree3N.png} \put(-75,39){\makebox(0,0)[lb]{\small $x_2$}} \put(-20,39){\makebox(0,0)[lb]{\small $x_4$}} \put(-48,12){\makebox(0,0)[lb]{\small $x_1$}}} \qquad \subfloat[$+x_4$]{\includegraphics[width=0.19\textwidth]{tree4N.png} \put(-48,65){\makebox(0,0)[lb]{\small $x_3$}} \put(-75,39){\makebox(0,0)[lb]{\small $x_2$}} \put(-48,12){\makebox(0,0)[lb]{\small $x_1$}}} \caption{Spanning trees in the box diagram with four legs and corresponding terms in $\mathcal{U}_{\text{box}}$.} \label{fig: trees} \end{figure} \begin{figure}[htb!] 
\centering \subfloat[$+x_1 x_2 p_1^2$]{\includegraphics[width=0.19\textwidth]{forest1N.png} \put(-20,39){\makebox(0,0)[lb]{\small $x_4$}} \put(-48,66){\makebox(0,0)[lb]{\small $x_3$}}} \qquad \subfloat[$+x_2 x_3 p_2^2$]{\includegraphics[width=0.19\textwidth]{forest2N.png} \put(-20,39){\makebox(0,0)[lb]{\small $x_4$}} \put(-48,13){\makebox(0,0)[lb]{\small $x_1$}}} \qquad \subfloat[$+x_3 x_4 p_3^2$]{\includegraphics[width=0.19\textwidth]{forest3N.png} \put(-76,39){\makebox(0,0)[lb]{\small $x_2$}} \put(-48,13){\makebox(0,0)[lb]{\small $x_1$}}} \qquad \subfloat[$+x_4 x_1 p_4^2$]{\includegraphics[width=0.19\textwidth]{forest4N.png} \put(-48,66){\makebox(0,0)[lb]{\small $x_3$}} \put(-76,39){\makebox(0,0)[lb]{\small $x_2$}}} \\ \subfloat[$+x_1 x_3 (p_1+p_2)^2$]{\includegraphics[width=0.19\textwidth]{forest5N.png} \put(-20,39){\makebox(0,0)[lb]{\small $x_4$}} \put(-76,39){\makebox(0,0)[lb]{\small $x_2$}}} \qquad \subfloat[$+x_2 x_4 (p_2+p_3)^2$]{\includegraphics[width=0.19\textwidth]{forest6N.png} \put(-48,66){\makebox(0,0)[lb]{\small $x_3$}} \put(-48,13){\makebox(0,0)[lb]{\small $x_1$}}} \caption{Spanning $2$-forests in the box diagram with four legs and corresponding terms in~$\mathcal{F}_{\text{box}}$.} \label{fig: forests} \end{figure} \end{Example} Thus, the Symanzik or graph polynomials capture the algebraic information contained in the topology of a Feynman diagram and they prove to be the first tool to be used in the tentative investigation of renormalisation theory via the algebraic manipulation of concatenated one-loop integrals. For a more detailed overview of the properties of Feynman graph polynomials we refer to Bogner and Weinzierl~\cite{BW10}. \subsection{Primitive log-divergent graphs} \label{sec: primitive} The parametric Feynman integral in~\eqref{eq: I_G_new} can be written in a slightly different notation, which turns out to be particularly useful henceforth. 
Neglecting prefactors and assuming $D=4$, it is equivalent to the \textit{projective integral} \begin{gather}\label{eq: I_G} I_G(\{p_j,m_e\}) = \int_{\sigma} \frac{\Omega}{\Psi_G^2} \bigg(\frac{\Psi_G}{\Xi_G(\{p_j,m_e\})} \bigg)^{n_G-2l_G}, \end{gather} where $\sigma$ is the real projective simplex given by \begin{gather} \sigma = \big\{ [x_1:\dots:x_{n_G}] \in \mathbb{P}^{n_G-1}(\mathbb{R}) \,|\, x_e \ge 0,\, e=1,\dots,n_G \big\} \end{gather} and $\Omega$ is the top-degree differential form on $\mathbb{P}^{n_G-1}$ expressed in local coordinates as \begin{gather} \Omega = \sum_{e=1}^{n_G}(-1)^e x_e \, {\rm d}x_1 \wedge \dots \wedge \widehat{{\rm d}x_e} \wedge \dots \wedge {\rm d}x_{n_G}. \end{gather} One can check that the integrand is homogeneous of degree zero, so that the integral in projective space is well-defined and equivalent, under the affine constraint $x_{n_G}=1$, to the previous parametric integral in affine space. Integral~\eqref{eq: I_G} is in general divergent, as singularities may arise if the zero sets of the graph polynomials $\Psi_G$ and $\Xi_G$ intersect the domain of integration. Graphs satisfying the condition $n_G=2l_G$ are called \textit{logarithmically divergent} and constitute a particularly interesting class of graphs. In fact, their Feynman integral simplifies to \begin{gather}\label{eq: I_G_last} I_G= \int_{\sigma} \frac{\Omega}{\Psi_G^2}, \end{gather} where the dependence on the second Symanzik polynomial, and consequently on momenta and masses, has vanished. 
Since its integral is sensitive only to the graph topology, such a Feynman graph describes a so-called \textit{single-scale process}.\footnote{Among other contexts, the feature of no-scaling also occurs in the evaluation of Feynman diagrams concerning the anomalous magnetic moment of the electron, as presented by Laporta and Remiddi~\cite{LR96}.} For a logarithmically divergent graph $G$, we define the \textit{graph hypersurface} as the zero set of its first Symanzik polynomial \begin{gather} \label{eq: X_G} X_G = \big\{[x_1:\dots:x_{n_G}] \in \mathbb{P}^{n_G-1} \,|\, \Psi_G(x_1,\dots,x_{n_G})=0 \big\} \end{gather} which describes the singularities of its Feynman integral $I_G$. The following theorem on the convergence of logarithmically divergent graphs was proven by Bloch, Esnault and Kreimer~\cite{BEK06}. \begin{Theorem}\label{th: log} Let $G$ be logarithmically divergent. The integral $I_G$ converges if and only if every proper subgraph $\varnothing \ne \gamma \subset G$ satisfies the condition $n_{\gamma} > 2 l_{\gamma}$. \end{Theorem} A logarithmically divergent graph $G$ such that $I_G$ is convergent is called \textit{primitive log-divergent}, or simply \textit{primitive}. We~give particular attention to the class of primitive log-divergent graphs in scalar massless $\phi^4$ quantum field theory. They are called \textit{$\phi^4$-graphs}, and have vertices with valence at most four. Feynman amplitudes in $\phi^4$ theory have been computed to much higher loop orders than in most other quantum field theories thanks to the work of Broadhurst and Kreimer~\cite{BK95,BK97}, and Schnetz~\cite{Sch10}. Some of the simplest $\phi^4$-graphs are shown in Fig.~\ref{fig: phi4} along with the values of the associated Feynman integrals. Here, $\zeta$ is the Riemann zeta function, and $P_{3,5}=-\frac{216}{5} \zeta(3,5) - 81 \zeta(5)\zeta(3) + \frac{522}{5} \zeta(8)$. \begin{figure}[htb!]
\centering \subfloat[][\emph{$I_G=6 \zeta(3)$}] {\includegraphics[width=0.18\textwidth]{3N.png}} \qquad \subfloat[][\emph{$I_G=20 \zeta(5)$}] {\includegraphics[width=0.18\textwidth]{4N.png}} \qquad \subfloat[][\emph{$I_G=36 \zeta(3)^2$}] {\includegraphics[width=0.18\textwidth]{5N.png}} \qquad \subfloat[][\emph{$I_G=32 P_{3,5}$}] {\includegraphics[width=0.18\textwidth]{6N.png}} \caption{Examples of $\phi^4$-graphs with 3, 4, 5 and 6 loops.} \label{fig: phi4} \end{figure} \subsection{Multiple zeta values} \label{sec: mzvs} The Riemann zeta function is defined on the half-plane of complex numbers $s \in \mathbb{C}$ with $\mathop{\rm Re}(s) > 1$ by the absolutely convergent series \begin{gather} \label{eq: zeta} \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} \end{gather} and extended to a meromorphic function on the whole complex plane with a single pole at $s = 1$. The first tentative attempts to find polynomial relations among zeta values by multiplying terms of the form~\eqref{eq: zeta} have led to a generalisation of the notion of Riemann zeta value. Multiple zeta values, or MZVs, are the real numbers \begin{gather} \label{eq: zsum} \zeta(s_1,\dots,s_l) = \sum_{n_1 > n_2 > \dots > n_l \ge 1} \frac{1}{n_1^{s_1} \cdots n_l^{s_l}} \end{gather} associated to tuples of integers $\mathbf{s}=(s_1,\dots,s_l)$, called \textit{multi-indices}. To guarantee the convergence of the infinite series, only multi-indices such that $s_i \ge 1$ for $i=1,\dots,l$ and $s_1 \ge 2$ are considered. They are called \textit{admissible} multi-indices. The integers $\mathop{\rm wt}(\mathbf{s})=s_1+\dots+s_l$ and $l$ are called \textit{weight} and \textit{length} of the multi-index $\mathbf{s}$, respectively. 
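The nested sums~\eqref{eq: zsum} can be approximated numerically by direct truncation. The following Python sketch builds the sum by dynamic programming from the innermost index outwards; the cutoff $N$ is an arbitrary choice, and since the truncation error is roughly of order $(\log N)/N$ when $s_1=2$, this is only a rough numerical check. It can be used, for instance, to test Euler's classical identity $\zeta(2,1)=\zeta(3)$:

```python
# Truncated nested-sum evaluation of multiple zeta values from the series
# definition.  The cutoff N is an arbitrary choice made here for illustration.

def mzv(s, N=100000):
    """Approximate zeta(s_1,...,s_l) = sum over n_1 > ... > n_l >= 1 of
    1/(n_1^{s_1} ... n_l^{s_l}), truncating the outer index at n_1 <= N."""
    old = [0.0] * (N + 1)
    old[0] = 1.0  # seed: the empty inner sum contributes 1
    for s_i in reversed(s):  # process exponents from the innermost index out
        new = [0.0] * (N + 1)
        prefix = 0.0  # running value of sum_{m < n} old[m]
        for n in range(N + 1):
            if n > 0:
                new[n] = prefix / n ** s_i
            prefix += old[n]
        old = new
    return sum(old)

print(mzv([2]))               # close to pi^2/6 = 1.6449...
print(mzv([2, 1]), mzv([3]))  # Euler: zeta(2,1) = zeta(3); agree to ~1e-3
```

Higher-length examples such as $\zeta(3,5)$, entering the constant $P_{3,5}$ above, can be approximated the same way, though the slow convergence makes this unsuitable for serious numerics.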
Following the early observations that products of two zeta values are $\mathbb{Q}$-linear combinations of zeta and double zeta values, and that products of more than two zeta values are analogously expressed in terms of multiple zeta values of higher length, linear relations among MZVs have been the object of an increasingly extensive investigation by many mathematicians, including Brown, Cartier, Deligne, Drinfeld, \'{E}calle, Goncharov, Hain, Hoffman, Kontsevich, Terasoma, Zagier, Broadhurst and Kreimer. Indeed, the $\mathbb{Q}$-linear relations among multiple zeta values directly provide insights into the widely sought-after algebraic relations among Riemann zeta values. The $\mathbb{Q}$-vector space spanned by multiple zeta values forms an algebra under the so-called \textit{stuffle product}. Analytic methods, such as partial fraction expansions, provide only a few of the known relations among MZVs. Many more are obtained, although conjecturally, by performing extensive numerical experiments, as described by Bl\"{u}mlein et al.~\cite{BBV10}. However, enormous progress followed the analytic discovery of a crucial feature of multiple zeta values: besides their representation as infinite series, they admit an alternative representation as iterated integrals over simplices whose dimension equals the weight. Let~$\Delta^p= \{(t_1,\dots,t_p) \in \mathbb{R}^p \,|\, 1 \ge t_1 \ge t_2 \ge \dots \ge t_p \ge 0 \}$ and define the measures on the open interval $(0,1)$ \begin{gather} \omega_0(t)=\frac{{\rm d}t}{t} , \qquad \omega_1(t) = \frac{{\rm d}t}{1-t}. \end{gather} If $\mathbf{s}$ is an admissible multi-index, write $r_i=s_1+\dots+s_i$ for each $i=1,\dots,l$ and set $r_0=0$. Define the measure $\omega_{\mathbf{s}}$ on the interior of the simplex $\Delta^{\mathop{\rm wt}(\mathbf{s})}$ by \begin{gather} \omega_{\mathbf{s}} = \prod_{i=1}^l \underbrace{\omega_0(t_{r_{i-1}+1}) \cdots \omega_0(t_{r_{i}-1})}_{s_i - 1 \text{ times }}\omega_1(t_{r_i}).
\end{gather} The following insight is due to Kontsevich. \begin{Proposition} Let $\mathbf{s}=(s_1,\dots,s_l)$ be an admissible multi-index. The multiple zeta value~$\zeta(\mathbf{s})$ can be obtained by the convergent improper integral \begin{gather} \label{eq: zint} \zeta(\mathbf{s}) = \zeta(s_1,\dots,s_l) = \int_{\Delta^{\mathop{\rm wt}(\mathbf{s})}} \omega_{\mathbf{s}}. \end{gather} \end{Proposition} \looseness=1 This different way of writing multiple zeta values yields a new algebra structure associated with the so-called \textit{shuffle product}. Many other linear relations among MZVs are obtained systematically in this alternative framework. Most interestingly, however, relations are also derived by comparing the two representations given by~\eqref{eq: zsum} and~\eqref{eq: zint}. The coexistence of the stuffle and shuffle algebra structures on the $\mathbb{Q}$-vector space of MZVs has proved to be the most productive source of information about these numbers. For a more detailed discussion of the classical theory of multiple zeta values we refer to Fres\'an and Burgos Gil~\cite{FG}. Making concrete use of the many $\mathbb{Q}$-linear relations that MZVs are known to satisfy is not an easy task, particularly at high weights. For example, there is no known algorithm that reduces any given MZV into a chosen $\mathbb{Q}$-basis. A~boost in our understanding originated from the exact-numerical decomposition algorithm by Brown~\cite{Brown2012}. Developed from the non-classical perspective of the theory of motives, it conjecturally provides a general strategy to handle MZVs, and more generally polylogarithmic numbers, by converting them into the so-called \textit{f-alphabet}.\footnote{The elements of the shuffle algebra on the $\mathbb{Q}$-vector space of MZVs are interpreted as words in letters of certain weights. Precisely, there is one letter for each odd weight greater than 1.
Words in these letters span finite-dimensional subspaces of definite weight. Notice, however, that the conversion into the $f$-alphabet depends on the choice of algebra basis.} We observe the remarkable fact that $\mathbb{Q}$-linear combinations of multiple zeta values are ubiquitous in the evaluation of Feynman amplitudes in perturbative quantum field theories. It~was conjectured by Broadhurst and Kreimer~\cite{BK95} and then proved by Brown and Schnetz~\cite{BS12} that Feynman integrals of the infinite family of \textit{zig-zag graphs} in $\phi^4$ theory (see Fig.~\ref{fig: zigzag}) are certain known rational multiples of the odd values of the Riemann zeta function. \begin{figure}[htb!] \centering \subfloat[][\emph{$l=5$}] {\includegraphics[scale=0.3]{zigzag_5N.png}} \qquad\qquad \subfloat[][\emph{$l=6$}] {\includegraphics[scale=0.3]{zigzag_6N.png}} \caption{Examples of zig-zag graphs with 5 and 6 loops.} \label{fig: zigzag} \end{figure} \begin{Theorem} Let $Z_l$ be the zig-zag graph with $l$ loops. Its Feynman integral is \begin{gather} I_{Z_l} = 4 \frac{(2l-2)!}{l! (l-1)!} \left( 1 - \frac{1 - (-1)^l}{2^{2l-3}} \right) \zeta(2l-3). \end{gather} \end{Theorem} Another example is given by the anomalous magnetic moment of the electron in quantum electrodynamics. The tree level Feynman diagram representing a slow-moving electron emitting a photon is depicted in Fig.~\ref{fig: g} along with its one-loop correction. The two-loop correction comes from the contributions of seven distinct two-loop diagrams. The total two-loop Feynman amplitude has been evaluated by Petermann~\cite{Pet57}, giving $\frac{197}{144} + \frac{1}{2} \zeta(2) - 3 \zeta(2) \log(2)+ \frac{3}{4} \zeta(3)$, which involves the logarithm of 2 and again values of the Riemann zeta function. \begin{figure}[htb!] 
\centering \subfloat[Tree-level contribution]{\includegraphics[scale=.55]{el_leftN.png}} \qquad\qquad \subfloat[One-loop contribution]{\includegraphics[scale=.55]{el_rightN.png}}% \caption{Feynman diagrams up to one loop contributing to the anomalous magnetic moment of the electron.} \label{fig: g} \end{figure} Many more examples are given by Broadhurst~\cite{Bro13}. Due to a vast amount of evidence, it was believed for a long time that all primitive amplitudes of the form~\eqref{eq: I_G_last} in massless $\phi^4$ theory should be $\mathbb{Q}$-linear combinations of MZVs. Only recently was this conjectural statement proved false in the motivic setup\footnote{Outside of the motivic framework, the statement relies on transcendentality conjectures.} by Brown and Schnetz~\cite{BS2012}. Explicit examples of $\phi^4$-amplitudes at high loop orders not expressible in terms of multiple zeta values have been found by Panzer and Schnetz~\cite{PS17}. In the same work, the explicit computation of all $\phi^4$-amplitudes with loop order up to $7$ suggests that not all MZVs appear among them. For example, no $\phi^4$-graph is known to evaluate to $\zeta(2)$ or $\zeta(2)^2$. Remarkably, the integral representation of MZVs partially clarifies the presence of these numbers in perturbative calculations in quantum field theory. Indeed, both expressions~\eqref{eq: I_G_last} and~\eqref{eq: zint} are suitably interpreted as \textit{periods} of algebraic varieties. \section{Cohomology theory of algebraic varieties} \label{sec: geometry} \subsection{Singular homology} We follow the expositions by Weibel~\cite{Wei94} and Hartshorne~\cite{Har77}. Let~$M$ be a topological space. For each integer $k \ge 0$, the standard $k$-simplex is \begin{gather} \Delta^k_{st} = \bigg\{ (t_0,\dots,t_k) \in \mathbb{R}^{k+1} \,\bigg|\, \sum_{i=0}^k t_i = 1, \; t_i \ge 0, \; i=0,\dots,k \bigg\}.
\end{gather} For each $i=0,\dots,k$, the \textit{face map} $\delta_i^k \colon \Delta^{k-1}_{st} \rightarrow \Delta^k_{st}$ is defined by \begin{gather} \delta^k_i (t_0,\dots,t_{k-1}) = (t_0,\dots,t_{i-1},0,t_i,\dots,t_{k-1}). \end{gather} A \textit{singular $k$-simplex} in $M$ is a continuous\footnote{If $M$ is a differentiable manifold, we can assume the singular chains to be piecewise smooth, or smooth, without altering the homology groups.} map $\sigma \colon \Delta^k_{st} \rightarrow M$. For each $k \ge 0$, let \begin{gather} C_k(M) = \bigoplus_{\sigma} \mathbb{Z} \sigma \end{gather} be the free abelian group generated by singular $k$-simplices. Its elements, called \textit{singular $k$-chains}, are finite $\mathbb{Z}$-linear combinations of the continuous maps $\sigma\colon \Delta^k_{st} \rightarrow M$. For each $k \ge 1$, the \textit{boundary map} $\partial_k \colon C_k(M) \rightarrow C_{k-1}(M)$ is defined by \begin{gather} \partial_k (\sigma) = \sum_{i=0}^k (-1)^i \big(\sigma \circ \delta_i^k\big), \end{gather} where the alternating signs in the sum guarantee that the boundary maps satisfy the condition $\partial_{k-1} \circ \partial_k = 0$. The pair $(C_{\bullet}(M), \partial_{\bullet})$ is called a \textit{homological chain complex} and is graphically represented as \begin{gather} \dots \xlongrightarrow{\partial_{k+1}} C_k(M) \xlongrightarrow{\partial_{k}} C_{k-1}(M) \xlongrightarrow{\partial_{k-1}} \dots \xlongrightarrow{\partial_{2}} C_1(M) \xlongrightarrow{\partial_{1}} C_0(M). \end{gather} \begin{Definition} \label{def: ho} The \textit{singular homology} of the topological space $M$ is the homology of the complex $(C_{\bullet}(M), \partial_{\bullet})$, that is \begin{gather} H_k^{\text{s}}(M, \mathbb{Z}) = \begin{cases} C_0(M)/\Im(\partial_1), & k=0,\\ \mathop{\rm Ker}(\partial_k)/\Im(\partial_{k+1}), & k \ge 1.
\end{cases} \end{gather} In degree $k$, chains in the kernel of the boundary map $\partial_k$ are called \textit{$($closed$)$ cycles} and chains in the image of the boundary map $\partial_{k+1}$ are called \textit{$($exact$)$ boundaries}. \end{Definition} \begin{Example}\label{ex: ex1} Let $M= \mathbb{C}^*$ be the punctured complex plane. The singular chains \begin{gather} \gamma_0\colon\quad \Delta^0_{st} \rightarrow \mathbb{C}^*, \qquad 1 \mapsto 1, \\ \gamma_1 \colon\quad \Delta^1_{st} \rightarrow \mathbb{C}^*, \qquad (t, 1-t) \mapsto {\rm e}^{2 \pi {\rm i} t} \end{gather} generate the singular homology groups $H_0^{\text{s}}(\mathbb{C}^*,\mathbb{Z})$ and $H_1^{\text{s}}(\mathbb{C}^*,\mathbb{Z})$, respectively. These are both free groups of rank one. All the other homology groups vanish. \end{Example} For each $k \ge 0$, the free abelian group of \textit{singular $k$-cochains} is defined by \begin{gather} C^k(M) = \mathop{\rm Hom}(C_k(M), \mathbb{Z}). \end{gather} Analogously, applying vector duality, we introduce the \textit{coboundary maps} $d^k\!\colon C^k(M) \!\rightarrow\! C^{k+1}(M)$, which satisfy the condition $d^{k+1} \circ d^k = 0$. The corresponding \textit{cohomological chain complex} $(C^{\bullet}(M), d^{\bullet})$ is graphically represented as \begin{gather} \cdots \xlongleftarrow{d^{k+1}} C^{k+1}(M) \xlongleftarrow{d^{k}} C^{k}(M) \xlongleftarrow{d^{k-1}} \cdots \xlongleftarrow{d^{1}} C^1(M) \xlongleftarrow{d^{0}} C^0(M). \end{gather} \begin{Definition} \label{def: coho} The \textit{singular cohomology} of the topological space $M$ is the cohomology of the complex $(C^{\bullet}(M), d^{\bullet})$, that is \begin{gather} H^k_{\text{s}}(M, \mathbb{Z}) = \begin{cases} \mathop{\rm Ker}\big(d^0\big), & k=0,\\ \mathop{\rm Ker}\big(d^k\big)/\Im(d^{k-1}), & k \ge 1.
\end{cases} \end{gather} \end{Definition} Definitions~\ref{def: ho} and~\ref{def: coho} of singular homology and cohomology of topological spaces, given here with respect to $\mathbb{Z}$, extend naturally to other coefficient rings. For our purposes, we assume the ring of coefficients to be $\mathbb{Q}$. This allows us to identify singular cohomology groups with the vector duals of the corresponding singular homology groups\footnote{This isomorphism is true for real or complex coefficients as well, but it does not hold for integer coefficients.} \begin{gather} H^k_{\text{s}}(M, \mathbb{Q}) \simeq \mathop{\rm Hom}(H_k^{\text{s}}(M,\mathbb{Q}),\mathbb{Q}), \end{gather} that is, classes of a cohomology group can be interpreted as classes of linear functionals on the corresponding homology group. The singular cohomology of the topological space underlying a~complex algebraic variety is of particular interest. \begin{Definition} \label{def: betti} Let $X$ be an algebraic variety over a subfield $\mathbb{K}$ of $\mathbb{C}$. Its set of complex points $X(\mathbb{C})$ canonically carries the complex analytic topology, and the corresponding topological space\footnote{Equipped with the canonical structure sheaf, $X^{\rm an}$ is a complex analytic space, called the \textit{analytification} of $X$. The relationship between algebraic spaces over the complex numbers and complex analytic spaces is described by a series of results, known as GAGA-type theorems. These developments followed the work by Serre~\cite{GAGA} on the existence and faithfulness of the analytification of a complex algebraic variety.} is written as $X^{\rm an}$. The \textit{Betti cohomology} of $X$ is the singular cohomology of the underlying topo\-lo\-gi\-cal space $X^{\rm an}$, that is \begin{gather} H^k_{{\rm B}}(X,\mathbb{Q})=H^k_{\text{s}}\big(X^{\rm an},\mathbb{Q}\big) \end{gather} for $k \ge 0$. 
\end{Definition} \begin{Example} \label{ex: ex2} Let $\mathbb{G}_m = \mathop{\rm Spec} \mathbb{Q}[x,1/x]$ be the multiplicative group. $\mathbb{G}_m$ is an algebraic variety over $\mathbb{Q}$ and its underlying topological space of complex points is $\mathbb{G}_m^{\rm an} = \mathbb{C}^*$. For each $k \ge 0$, the $k$-th Betti cohomology group of $\mathbb{G}_m$ is $H_{\rm B}^k(\mathbb{G}_m,\mathbb{Q}) = H_{\text{s}}^k(\mathbb{C}^*, \mathbb{Q})$. \end{Example} \subsubsection{Some properties of homology} We briefly recall some properties of singular homology and cohomology. \begin{itemize}\itemsep=0pt \item[($a$)] \textit{Homotopy invariance}. If~$M_1$ and $M_2$ are homotopically equivalent topological spaces, then $H_k^{\text{s}}(M_1,\mathbb{Q}) \simeq H_k^{\text{s}}(M_2, \mathbb{Q})$ for each $k \ge 0$. An analogous statement holds for singular cohomology. \item[($b$)] \textit{Mayer--Vietoris sequences}. For any two open subspaces $U,V \subseteq M$ of a given topological space $M$, such that $M = U \cup V$, there is a long exact sequence of the following form: \begin{equation} \begin{tikzcd} \cdots \arrow[r] & H_k^{\text{s}}(U \cap V,\mathbb{Q}) \arrow[r] \arrow[d, phantom, ""{coordinate, name=Z}] & H_k^{\text{s}}(U,\mathbb{Q}) \oplus H_k^{\text{s}}(V,\mathbb{Q}) \arrow[dll, "", rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}] \\ H_{k}^{\text{s}}(M,\mathbb{Q}) \arrow[r] & H_{k-1}^{\text{s}}(U \cap V,\mathbb{Q}) \arrow[r] & \cdots. \end{tikzcd} \end{equation} An analogous statement holds for singular cohomology. \item[($c$)] \textit{K\"unneth formula}. For any two topological spaces $M_1$, $M_2$, for each $k \ge 0$, there is a natural isomorphism \begin{gather} H_k^{\text{s}}(M_1 \times M_2, \mathbb{Q}) \simeq \bigoplus_{i+j=k} H_i^{\text{s}}(M_1, \mathbb{Q}) \otimes H_j^{\text{s}}(M_2, \mathbb{Q}). \end{gather} An analogous statement holds for singular cohomology. \item[($d$)] \textit{Push-forward}.
Let~$f\colon M_1 \rightarrow M_2$ be a continuous map between two topological spa\-ces~$M_1$,~$M_2$. Then, $f$ induces a morphism of chain complexes \begin{gather} f_*\colon\ C_{\bullet}(M_1) \rightarrow C_{\bullet}(M_2) \end{gather} called \textit{push-forward}, sending $\sigma_1 \in C_k(M_1)$ to $\sigma_2 = f \circ \sigma_1 \in C_k(M_2)$. Equivalently, the following diagram: \begin{equation} \begin{tikzcd} \Delta_{st}^k \arrow[r, "\sigma_1"] \arrow[rd, "\sigma_2"'] & M_1 \arrow[d, "f"] \\ & M_2 \end{tikzcd} \end{equation} commutes. Hence, $f$ induces also a group homomorphism between the corresponding singular homology groups \begin{gather} f_* \colon\ H_k^{\text{s}}(M_1, \mathbb{Q}) \rightarrow H_k^{\text{s}}(M_2, \mathbb{Q}) \end{gather} for each $k \ge 0$. \item[($e$)] \textit{Pull-back.} Let $f\colon M_1 \rightarrow M_2$ be a continuous map between two topological spa\-ces~$M_1$,~$M_2$. Then, $f$ induces a morphism of cochain complexes \begin{gather} f^*\colon\ C^{\bullet}(M_2) \rightarrow C^{\bullet}(M_1) \end{gather} called \textit{pull-back}, sending $\omega_2 \in C^k(M_2)$ to $\omega_1 = \omega_2 \circ f_* \in C^k(M_1)$. Equivalently, the following diagram: \begin{equation} \begin{tikzcd} C_k(M_1) \arrow[r, "\omega_1"] \arrow[d, "f_*"'] & \mathbb{Q} \\ C_k(M_2) \arrow[ur, "\omega_2"'] & \end{tikzcd} \end{equation} commutes. Hence, $f$ induces also a group homomorphism between the corresponding singular cohomology groups \begin{gather} f^*\colon\ H^k_{\text{s}}(M_2, \mathbb{Q}) \rightarrow H^k_{\text{s}}(M_1, \mathbb{Q}) \end{gather} for each $k \ge 0$. \end{itemize} \subsubsection{Relative singular homology} Let $M$ be a topological space and $\iota\colon N \hookrightarrow M$ the canonical inclusion of a topological subspace $N \subseteq M$. 
Denote by $(C_{\bullet}(N), \partial^N_{\bullet})$ and $(C_{\bullet}(M), \partial^M_{\bullet})$ their homological chain complexes, and by $\iota_*\colon C_{\bullet}(N) \rightarrow C_{\bullet}(M)$ the corresponding injective morphism obtained via push-forward. For each $k \ge 1$, we define the total chain complex $C_{\bullet}(M,N)$ to be the \textit{mapping cone}\footnote{Note that, for any morphism of chain complexes $f_*\colon C_{\bullet}(M_1) \rightarrow C_{\bullet}(M_2)$, the mapping cone $C_k(M_2,M_1) = C_{k-1}(M_1) \oplus C_k(M_2)$ can be defined. However, injectivity of the morphism $\iota_*$ implies that the cone $C_{\bullet}(M,N)$ is quasi-isomorphic to the quotient $C_{\bullet}(M) / C_{\bullet}(N)$.} of the morphism~$\iota_*$, that is \begin{gather} C_k(M,N) = C_{k-1}(N) \oplus C_k(M), \end{gather} and the differential $\partial_k\colon C_k(M,N) \rightarrow C_{k-1}(M,N)$ to act as \begin{gather} \partial_k (\sigma_N,\sigma_M) = \big({-}\partial_{k-1}^N (\sigma_N), - \iota_*(\sigma_N) + \partial_k^M (\sigma_M)\big), \end{gather} where $(\sigma_N,\sigma_M) \in C_k(M,N)$. \begin{Definition} The \textit{relative homology} of the pair of topological spaces $(M,N)$ is the homology of the total chain complex $(C_{\bullet}(M,N), \partial_{\bullet})$. For $k \ge 1$, we denote the relative singular homology groups as $H_k^{\text{s}}(M,N,\mathbb{Q})$. 
\end{Definition} Relative homology fits into the following long exact sequence: \begin{equation} \label{eq: long} \begin{tikzcd} \cdots \arrow[r] & H_k^{\text{s}}(M,\mathbb{Q}) \arrow[r] \arrow[d, phantom, ""{coordinate, name=Z}] & H_k^{\text{s}}(M,N,\mathbb{Q}) \arrow[dll, "", rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}] \\ H_{k-1}^{\text{s}}(N,\mathbb{Q}) \arrow[r] & H_{k-1}^{\text{s}}(M,\mathbb{Q}) \arrow[r] & H_{k-1}^{\text{s}}(M,N,\mathbb{Q}) \arrow[r] & \cdots, \end{tikzcd} \end{equation} where the maps $H_{k-1}^{\text{s}}(N,\mathbb{Q}) \rightarrow H_{k-1}^{\text{s}}(M,\mathbb{Q})$ are the push-forwards $\iota_*$ induced by the inclusion $\iota\colon N \hookrightarrow M$, and the connecting morphisms $H_k^{\text{s}}(M,N,\mathbb{Q}) \rightarrow H_{k-1}^{\text{s}}(N,\mathbb{Q})$ send the class of a relative cycle to the class of its boundary. Consider an element of the relative homology group $H_k^{\text{s}}(M,N,\mathbb{Q})$. This is represented by a pair $(\sigma_N, \sigma_M)$ of singular chains $\sigma_N \in C_{k-1}(N)$ and $\sigma_M \in C_k(M)$ satisfying \begin{gather} \partial_{k-1}^N \sigma_N = 0 , \qquad \partial_k^M \sigma_M = \iota_* \sigma_N. \end{gather} Note that, since $\iota_*$ is injective, the latter condition implies the former. Thus, relative homology classes are represented by chains in $M$ whose boundary is contained in $N$. Relative cohomology groups $H^k_{\text{s}}(M,N, \mathbb{Q})$ are defined similarly. \begin{Example} \label{ex: ex0} Let $M = \mathbb{C}^*$ be the punctured complex plane and $N = \{ p,q \} \subset M$ be the subspace consisting of two points $p,q \in \mathbb{C}^*$ with $p \ne q$. Let~$\gamma_1\colon \Delta^1_{st} \rightarrow M$ be any continuous map\footnote{When it does not pass through the origin, the oriented segment starting at $p$ and ending at $q$ is an example of such a map.} such that $\gamma_1(0, 1) = p$ and $\gamma_1(1, 0) = q$, and whose image does not encircle the origin. Then \begin{gather} \partial_1^M \gamma_1 = p-q \in C_0(N).
\end{gather} Consequently, $\gamma_1$ defines a relative cycle. It~follows from the long exact sequence~\eqref{eq: long} that the only non-trivial relative homology group is $H_1^{\text{s}}(M,N,\mathbb{Q})$. A~basis of this group is given by the chain $\gamma_1$ and the chain $\gamma_2$, introduced in Example~\ref{ex: ex1}, consisting of a counterclockwise circle encircling the origin. Such a basis is graphically represented in Fig.~\ref{fig: base}. \begin{figure}[htb!] \centering \includegraphics[scale=1.2]{baseN.png} \put(-145,55){\makebox(0,0)[lb]{\small $\gamma_2$}} \put(-180,24){\makebox(0,0)[lb]{\small $O$}} \put(-108,25){\makebox(0,0)[lb]{\small $p$}} \put(-60,25){\makebox(0,0)[lb]{\small $\gamma_1$}} \put(-9,25){\makebox(0,0)[lb]{\small $q$}} \caption{Basis of $H_1^{\text{s}}(\mathbb{C}^*,\{ p,q \},\mathbb{Q})$.} \label{fig: base} \end{figure} \end{Example} \subsection{De Rham cohomology} \label{sec: smooth} We start by reviewing a classical construction in differential geometry. Let~$M$ be a differentiable manifold of dimension $n$. A~\textit{differential $p$-form} on $M$ is written in local coordinates as \begin{gather} \sum_{1 \le i_1 < \dots < i_p \le n} f_{i_1,\dots,i_p} \; {\rm d}x_{i_1} \wedge \dots \wedge {\rm d}x_{i_p}, \end{gather} where $f_{i_1,\dots,i_p}$ are $\mathcal{C}^{\infty}$-functions. Let~$\Omega^p(M)$ denote the $\mathbb{R}$-vector space of differential $p$-forms on~$M$ and define the space of differential forms on $M$ as \begin{gather} \Omega(M) = \bigoplus_{p=0}^n \Omega^p(M). \end{gather} The \textit{exterior derivative} ${\rm d}\colon \Omega(M) \rightarrow \Omega(M)$ is the unique $\mathbb{R}$-linear map which sends $p$-forms into $(p+1)$-forms and satisfies the following axioms: \begin{itemize}\itemsep=0pt \item[(1)] Let $f$ be a smooth function. Then, ${\rm d}f = \sum_{i=1}^n \frac{\partial f}{\partial x_i} {\rm d}x_i$ is the ordinary differential of $f$. \item[(2)] ${\rm d} \circ {\rm d}=0$.
\item[(3)] Let $\alpha$ be a $p$-form on $M$ and $\beta$ any differential form in $\Omega(M)$. Denote by $\alpha \wedge \beta$ their exterior product. Then, ${\rm d}(\alpha \wedge \beta) = {\rm d}\alpha \wedge \beta + (-1)^p \alpha \wedge {\rm d}\beta$. \end{itemize} The associated cochain complex is \begin{gather} 0 \rightarrow \Omega^0(M) \xrightarrow{\rm d} \Omega^1(M) \xrightarrow{\rm d} \cdots \xrightarrow{\rm d} \Omega^n(M) \rightarrow 0 \end{gather} and its cohomology, denoted $H^{\bullet}_{\rm dR}(M, \mathbb{R})$, is called the \textit{smooth de Rham cohomology} of $M$. A~differential $p$-form $\omega$ is \textit{closed} if ${\rm d}\omega=0$ and it is \textit{exact} if there exists a differential $(p-1)$-form $\eta$ such that $\omega = {\rm d}\eta$. A~classical theorem\footnote{De Rham's theorem was first presented in his PhD thesis, published in 1931, when cohomology groups had not been introduced yet. He did not state the theorem in the way it is described today, but gave an equivalent version involving Betti numbers and integration of closed differential forms over cycles.} by de Rham~\cite{Der31} asserts that the singular cohomology $H^{\bullet}_{\text{s}}(M, \mathbb{R})$ can be computed using differential forms.\footnote{We refer to Bott and Tu~\cite{BT82} for a comprehensive investigation of differential forms in algebraic topology.} \begin{Theorem} \label{th: derham} Let $M$ be a differentiable manifold of dimension $n$. For $0 \le k \le n$, the map \begin{align} H^k_{\mathrm{dR}}(M,\mathbb{R}) &\longrightarrow H^k_{\mathrm{s}}(M,\mathbb{R}) \simeq \mathop{\rm Hom}\big(H_k^{\text{s}}(M,\mathbb{R}), \mathbb{R}\big), \\ {}[\omega] &\longmapsto \int \omega, \end{align} which sends the class of a differential form $\omega$ to the integration functional \begin{align} \int \omega\colon\ H_k^{\text{s}}(M,\mathbb{R}) &\longrightarrow \mathbb{R}, \\ {}[\gamma] &\longmapsto \displaystyle\int_{\gamma} \omega, \end{align} is an isomorphism. 
\end{Theorem} \subsubsection{Algebraic de Rham cohomology} \label{sec: algdR} A notion of de Rham cohomology for general algebraic varieties over fields of characteristic zero has been introduced by Grothendieck~\cite{Gro66}. Let~$\mathbb{K}$ be a subfield of $\mathbb{C}$ and let $X$ be an algebraic variety over $\mathbb{K}$. \begin{Definition} Consider an open affine subset $U \subseteq X$ in the Zariski topology. The ring of~regular functions on $U$, denoted by $\mathcal{O}(U)$, is a finitely-generated $\mathbb{K}$-algebra, namely a~quotient of a polynomial ring over $\mathbb{K}$. We~say that $X$ is \textit{smooth} or \textit{nonsingular} of dimension~$n$ if, for every closed point $x \in X$, the direct limit $\varinjlim_{U \ni x} \mathcal{O}(U)$, taken over all Zariski open affine subsets $U \subseteq X$ containing $x$, ordered by reverse inclusion, is a regular local ring of dimension $n$. \end{Definition} Let $X$ be smooth of dimension $n$ and affine. We~can write $X = \mathop{\rm Spec} R$, where $R = \mathcal{O}(X)$ is the ring of regular functions on $X$. A~\textit{$\mathbb{K}$-linear algebraic $p$-form} on $X$ is a differential $p$-form on $X$ with coefficients in $R$. In a local coordinate chart, it is given by an expression of the form \begin{gather} \label{eq: local} \sum_{1 \le i_1 < \dots < i_p \le n} f_{i_1,\dots,i_p} \; {\rm d}x_{i_1} \wedge \dots \wedge {\rm d}x_{i_p}, \end{gather} where $f_{i_1,\dots,i_p}$ are regular functions on $X$, i.e., elements of $R$. We~denote by $\Omega^p(X)$ the $\mathbb{K}$-vector space of~algebraic $p$-forms on $X$ and we define the space of algebraic forms on $X$ as \begin{gather} \Omega(X) = \bigoplus_{p=0}^n \Omega^p(X). \end{gather} A derivation ${\rm d}\colon \Omega(X) \rightarrow \Omega(X)$, satisfying properties that are analogous to the ones described in~Section~\ref{sec: smooth} for the exterior derivative, can be defined.
It~canonically yields a cochain complex \begin{gather} 0 \rightarrow R \simeq \Omega^0(X) \xrightarrow{\rm d} \Omega^1(X) \xrightarrow{\rm d} \cdots \xrightarrow{ \rm d} \Omega^n(X) \rightarrow 0 \end{gather} called the \textit{algebraic de Rham complex} of $X$. The associated cohomology, denoted $H^{\bullet}_{\rm dR}(X, \mathbb{K})$, is called the \textit{algebraic de Rham cohomology} of $X$. \begin{Remark} If $X$ is smooth of dimension $n$, but not necessarily affine, at each closed point $x \in X$, we can choose some Zariski open affine neighbourhood $U$ of $x$ and some regular functions $x_1, \dots, x_n \in \mathcal{O}(U)$ in such a way to define a system of \textit{local parameters}\footnote{If we do not assume $X$ to be smooth, then we can find local coordinates in an affine open neighbourhood $U$ of a closed point $x \in X$ if and only if the rank of the Jacobian matrix at $x$ is equal to the dimension of $U$.} at $x$. Viewed as a subvariety of the affine $\mathbb{K}$-space $\mathbb{A}^n$, $U$ inherits its local coordinate structure. Intuitively, by choosing a covering of $X$ composed of Zariski open affine subsets, the algebraic variety is charted with affine spaces. Observe that the morphism $U \rightarrow \mathbb{A}^n$ defined by the local coordinates $x_1, \dots, x_n$ is always an \'etale map,\footnote{\'Etale maps can be interpreted as the algebraic analogue of local isomorphisms in the complex analytic topology. However, open sets in the Zariski topology are not small enough for \'etale maps to be local isomorphisms.} but not generally an embedding.\footnote{For complex neighbourhoods, local coordinates define local isomorphisms. 
Indeed, smooth algebraic varieties over $\mathbb{C}$ can be locally embedded as submanifolds of the complex affine space.} Conceptually, the $\mathbb{K}$-linear algebraic forms of degree $p$ on $X$ are obtained by suitably gluing\footnote{The assignment of algebraic forms to smooth affine varieties via local coordinates is well-behaved under gluing, and hence it globalises.} the algebraic $p$-forms defined locally, as in~\eqref{eq: local}, on each subset $U$ of an affine open covering of $X$. The notion of algebraic de Rham cohomology thus generalises, in \v{C}ech style, to arbitrary smooth algebraic $\mathbb{K}$-varieties. Such an intuition does not, however, capture the full picture. The algebraic substitute for smooth differential forms is rigorously defined through the notions of K\"ahler differential and exterior power, while the rigorous construction of the algebraic de Rham cohomology of any smooth algebraic $\mathbb{K}$-variety requires the use of sheaf cohomology and hypercohomology. We~do not present these concepts here, since an intuitive understanding is sufficient for our purposes, but we refer to Kashiwara and Schapira~\cite{KS06}, and Hartshorne~\cite{Har77}. Moreover, we mention that several constructions are available to adapt the definition of algebraic de Rham cohomology to the case of singular varieties, giving well-behaved theories. Details are reported by Huber and M\"uller-Stach~\cite{HM17}. \end{Remark} \begin{Example} \label{ex: ex4} Consider $X= \mathbb{G}_m= \mathop{\rm Spec} \mathbb{Q}[x, 1/x]$. The only non-vanishing spaces of $\mathbb{Q}$-linear algebraic forms are \begin{gather} \Omega^0(\mathbb{G}_m) = \mathbb{Q}[x, 1/x], \\ \Omega^1(\mathbb{G}_m) = \mathbb{Q}[x, 1/x] \cdot {\rm d}x.
\end{gather} Consequently, the two groups \begin{gather} H^0_{\rm dR}(\mathbb{G}_m, \mathbb{Q}) = \mathbb{Q} , \\ H^1_{\rm dR}(\mathbb{G}_m, \mathbb{Q}) = \frac{\mathbb{Q}[x, 1/x] \cdot {\rm d}x}{{\rm d} \mathbb{Q}[x, 1/x]} = \mathbb{Q}\bigg[ \frac{{\rm d}x}{x} \bigg] \end{gather} are the only non-trivial algebraic de Rham cohomology groups of $X$. \end{Example} \subsubsection{Relative de Rham cohomology} The definition of algebraic de Rham cohomology extends to the relative setting. Let~$\mathbb{K}$ be a~subfield of $\mathbb{C}$ and let $X$ be a smooth algebraic variety over $\mathbb{K}$ of dimension $n$. Recall the following definition. \begin{Definition} A codimension-1 closed subvariety $D \subset X$ is called a \textit{divisor with normal crossings} if, for every point $x \in D$, there is an open affine neighbourhood $U \subseteq X$ of $x$ and some local coordinates $x_1, \dots, x_n$ on $U$ such that: \begin{itemize}\itemsep=0pt \item[(1)] The morphism $U \rightarrow \mathbb{A}^n$ defined by $x_1, \dots, x_n$ is \'etale. \item[(2)] The restriction $D_{|U}$ is locally described by an equation of the form $x_1 \cdot x_2 \cdots x_r = 0$ for some $1 \le r \le n$. \end{itemize} Moreover, $D$ is called a \textit{divisor with simple normal crossings}\footnote{$D$ looks locally like a collection of coordinate hyperplanes.} if, in addition, its irreducible components are smooth. \end{Definition} For simplicity,\footnote{We illustrate here the construction of relative algebraic de Rham cohomology in a particularly simple framework. The construction can, however, be adapted for the general case of a closed subvariety of a smooth algebraic $\mathbb{K}$-variety. For a general discussion, we refer to Huber and M\"uller-Stach~\cite{HM17}.} let $X$ be affine and $D \subset X$ a divisor with simple normal crossings. Denote by $D_i$, for $i=1,\dots,r$, the smooth irreducible components of $D$.
For $I \subseteq \{1,\dots,r\}$, we set \begin{gather} D_I = \bigcap_{i \in I} D_i , \qquad D^p = \begin{cases} X, & p=0, \\ \coprod_{|I|=p} D_I, & p \ge 1. \end{cases} \end{gather} The associated double cochain complex of $\mathbb{K}$-vector spaces $K^{p,q}=\Omega^q(D^p)$ is graphically represented as \begin{equation} \label{eq: double} \begin{tikzcd} \cdots & \cdots & \cdots & \\ \Omega^2(X) \arrow[u, "{\rm d}"] \arrow[r] & \bigoplus_i \Omega^2(D_i) \arrow[u, "-{\rm d}"] \arrow[r] & \bigoplus_{i<j} \Omega^2(D_i \cap D_j) \arrow[u, "{\rm d}"] \arrow[r] & \cdots \\ \Omega^1(X) \arrow[u, "{\rm d}"] \arrow[r] & \bigoplus_i \Omega^1(D_i) \arrow[u, "-{\rm d}"] \arrow[r] & \bigoplus_{i<j} \Omega^1(D_i \cap D_j) \arrow[u, "{\rm d}"] \arrow[r] & \cdots \\ \Omega^0(X) \arrow[u, "{\rm d}"] \arrow[r] & \underbrace{\bigoplus_i \Omega^0(D_i)}_{|I|=1} \arrow[u, "-{\rm d}"] \arrow[r] & \underbrace{\bigoplus_{i<j} \Omega^0(D_i \cap D_j)}_{|I|=2} \arrow[u, "{\rm d}"] \arrow[r] & \cdots, \end{tikzcd} \end{equation} where the vertical differential ${\rm d}_{\rm ver}\colon K^{p,q} \rightarrow K^{p,q+1}$ is given by \begin{gather} {\rm d}_{\rm ver} = (-1)^p {\rm d} \end{gather} and the horizontal differential ${\rm d}_{\rm hor}\colon K^{p,q} \rightarrow K^{p+1,q}$ is given by \begin{gather} {\rm d}_{\rm hor} = \bigoplus_{\substack{|I|=p\\ |J|=p+1\\ I \subset J}}(-1)^l {\rm d}_{IJ}, \end{gather} where $J = \{ j_0, \dots, j_p\}$ with $j_0 < \dots < j_p$, $I = \big\{j_0, \dots, \widehat{j_l}, \dots, j_p \big\}$, and ${\rm d}_{IJ}\colon \Omega^q(D_I) \rightarrow \Omega^q(D_J)$ is the restriction map.\footnote{We observe that the two-row sequence in~\eqref{eq: double} is exact. For the vertical lines $K^{p,q} \rightarrow K^{p,q+1}$, it follows from the property ${\rm d} \circ {\rm d} = 0$ of the differential ${\rm d}$.
For the horizontal lines $K^{p,q} \rightarrow K^{p+1,q}$, it follows from the fact that the differential ${\rm d}_{\rm hor}\colon K^{p,q} \rightarrow K^{p+1,q}$ is surjective for even values of $p$ and trivial for odd values of $p$, as a~consequence of the surjectivity of the restriction maps ${\rm d}_{IJ}$.} Note that the sign factor $(-1)^p$ in the definition of ${\rm d}_{\rm ver}$ implies that the vertical and horizontal differentials anticommute. Moreover, since $D_I$ has dimension equal to $n-|I|$, the double complex is trivial for $p+q > n$. We~denote by $(\Omega^{\bullet}(X,D), \delta)$ the total cochain complex associated to $K^{p,q}$, that is \begin{gather} \Omega^{\bullet}(X,D) = \bigoplus_{p+q= \bullet} K^{p,q}, \qquad \delta = {\rm d}_{\rm ver} + {\rm d}_{\rm hor}. \end{gather} For each $k \ge 0$, the space $\Omega^k(X,D)$ corresponds to the direct sum of the spaces on the $k$-th diagonal of the double cochain complex $K^{p,q}$ represented in~\eqref{eq: double}. The total complex is indeed explicitly written as \begin{gather} \Omega^0(X,D) \simeq \Omega^0(X) \xlongrightarrow{\delta^0} \Omega^1(X,D) \simeq \Omega^1(X) \oplus \bigoplus_i \Omega^0(D_i) \xlongrightarrow{\delta^1} \cdots. \end{gather} The relative algebraic de Rham cohomology $H_{\rm dR}^{\bullet}(X,D,\mathbb{K})$ is the cohomology of the total cochain complex $\Omega^{\bullet}(X,D)$, that is \begin{gather} H^k_{\rm dR}(X,D,\mathbb{K}) = \begin{cases} \mathop{\rm Ker}\big(\delta^0\big), & k=0, \\ \mathop{\rm Ker}\big(\delta^k\big)/\Im\big(\delta^{k-1}\big), & k \ge 1. \end{cases} \end{gather} The following proposition is a consequence of the surjectivity of the restriction maps ${\rm d}_{IJ}$. \begin{Proposition} \label{prop: topdeg} Let $X$ be a smooth affine variety over $\mathbb{K}$ of dimension $n$ and $D \subset X$ a divisor with simple normal crossings. Each class in the top-degree cohomology group $H_{\rm dR}^n(X,D,\mathbb{K})$ has a representative in $\Omega^n(X)$. 
\end{Proposition} \begin{Example} \label{ex: exx} Let $X=\mathbb{G}_m=\mathop{\rm Spec} \mathbb{Q}[x,1/x]$ and $D=\{ 1, z \}$ with $z \in \mathbb{Q}$, $z \ne 0, 1$. The cor\-res\-ponding double algebraic de Rham complex is \begin{equation} \begin{tikzcd} 0 & & \\ \displaystyle\mathbb{Q}\bigg[ x, \frac{1}{x} \bigg] {\rm d}x \arrow[u, "{\rm d}"] \arrow[r] & 0 & \\ \displaystyle\mathbb{Q}\bigg[ x, \frac{1}{x} \bigg] \arrow[u, "{\rm d}"] \arrow[r] & \mathbb{Q} \oplus \mathbb{Q} \arrow[u, "-{\rm d}"] \arrow[r] & 0, \end{tikzcd} \end{equation} where the only non-trivial horizontal differential is the evaluation map \begin{align} \mathbb{Q}\bigg[ x, \frac{1}{x} \bigg] & \longrightarrow \mathbb{Q} \oplus \mathbb{Q}, \\ f & \longmapsto (f(1), f(z)). \end{align} The corresponding total complex is \begin{align} \mathbb{Q}\bigg[x,\frac{1}{x}\bigg] & \xlongrightarrow{\delta^0} \mathbb{Q}\bigg[x, \frac{1}{x}\bigg] {\rm d}x \oplus \mathbb{Q} \oplus \mathbb{Q}, \\ f(x) & \longmapsto (f'(x){\rm d}x, f(1), f(z)), \end{align} where the only non-trivial differential $\delta^0$ is written explicitly. The relative algebraic de Rham cohomology groups are \begin{gather} H^0_{\rm dR}(X,D,\mathbb{Q}) = \mathop{\rm Ker}\big(\delta^0\big) = 0 , \\ H^1_{\rm dR}(X,D,\mathbb{Q}) = \mathop{\rm coKer}\big(\delta^0\big) = \frac{\mathbb{Q}\left[ x, \frac{1}{x} \right] {\rm d}x \oplus \mathbb{Q} \oplus \mathbb{Q}}{\Im(\delta^0)} \end{gather} and a basis of $H^1_{\rm dR}(X,D,\mathbb{Q})$ is given by the classes $\big[ \big( \frac{{\rm d}x}{x},0,0 \big) \big] = \big[\frac{{\rm d}x}{x} \big]$ and $ \big[ \big( \frac{{\rm d}x}{z-1},0,0 \big) \big] = \big[\frac{{\rm d}x}{z-1}\big]$. \end{Example} \subsection{Comparison isomorphism} The following fundamental theorem is due to Grothendieck~\cite{Gro66}. \begin{Theorem} \label{th: comp} Let $\mathbb{K}$ be a subfield of $\mathbb{C}$ and let $X$ be a smooth algebraic variety over $\mathbb{K}$.
There is a canonical isomorphism \begin{gather}\label{eq: comp_is} \mathop{\rm comp}\colon\ H^{\bullet}_{\rm dR} (X, \mathbb{K}) \otimes_{\mathbb{K}} \mathbb{C} \xlongrightarrow{\sim} H^{\bullet}_{{\rm B}}(X,\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C} \end{gather} known as \textit{comparison isomorphism}. Moreover, if $Y \subset X$ is a closed subvariety, we have \begin{gather} \mathop{\rm comp}\colon\ H^{\bullet}_{\rm dR} (X, Y, \mathbb{K}) \otimes_{\mathbb{K}} \mathbb{C} \xlongrightarrow{\sim} H^{\bullet}_{{\rm B}}(X, Y,\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C}. \end{gather} \end{Theorem} As mentioned in Definition~\ref{def: betti}, any algebraic $\mathbb{K}$-variety $X$ canonically yields a complex analytic space $X^{\rm an}$ associated with the space of complex points $X(\mathbb{C})$. If~$X$ is smooth, then~$X^{\rm an}$ is a complex manifold. The classical theory of de Rham cohomology, discussed in Section~\ref{sec: smooth} for differentiable manifolds, extends to complex geometry as well. \begin{Definition} Let $M$ be a complex manifold of dimension $n$. For $p,q \ge 0$, a differential form of \textit{holomorphic degree $p$} and \textit{antiholomorphic degree $q$} on $M$, also called a \textit{differential $(p,q)$-form}, is written in local analytic coordinates as \begin{gather} \label{eq: diffC} \sum_{I, J} f_{IJ}\, {\rm d}z_{i_1} \wedge \dots \wedge {\rm d}z_{i_p} \wedge {\rm d}\bar{z}_{j_1} \wedge \dots \wedge {\rm d}\bar{z}_{j_q}, \end{gather} where the sum runs over the index subsets $I=\{ i_1,\dots,i_p \}, J = \{ j_1,\dots,j_q \} \subseteq \{1,\dots,n\}$ and $f_{IJ}$ are $\mathcal{C}^{\infty}$-functions. Let~$\Omega^{p,q}(M)$ denote the $\mathbb{C}$-vector space of differential $(p,q)$-forms on~$M$. The standard de Rham differential splits as ${\rm d} = \partial + \bar{\partial}$, where $\partial\colon \Omega^{p,q} \rightarrow \Omega^{p+1,q}$ and $\bar{\partial}\colon \Omega^{p,q} \rightarrow \Omega^{p,q+1}$.
The \textit{Dolbeault cohomology} of $M$, denoted by $H^{\bullet, \bullet}_{{\rm D}}(M, \mathbb{C})$, is the cohomology of the double cochain complex $(\Omega^{\bullet, \bullet}(M), \bar{\partial})$, called the \textit{Dolbeault complex} of $M$. \end{Definition} \begin{Definition} A \textit{holomorphic $t$-form} on $M$ is a differential $(t,0)$-form on $M$ that is locally expressed as in~\eqref{eq: diffC}, with $q=0$ and with the coefficient functions $f_{IJ}$ holomorphic. The \textit{holomorphic de Rham cohomology}\footnote{A singular version of the holomorphic de Rham complex, called logarithmic de Rham complex, is obtained by considering meromorphic forms which are holomorphic in the bulk, but admit logarithmic poles towards the compactification boundaries of $X$.} of $M$, denoted by $H^{\bullet}_{\rm dR}(M, \mathbb{C})$, is the cohomology of the cochain complex associated with the graded $\mathbb{C}$-vector space of holomorphic forms on $M$ and the holomorphic component $\partial$ of the de Rham differential.\footnote{For a formally rigorous definition of holomorphic de Rham cohomology, using the tools of sheaf cohomology and hypercohomology, see Voisin~\cite{Voi02}.} \end{Definition} The following proposition is the complex analogue of Theorem~\ref{th: derham} by de Rham. \begin{Proposition} \label{prop: derhamC} Let $M$ be a complex manifold of dimension $n$. For $0 \le k \le n$, there is an isomorphism \begin{gather} \label{eq: derhamC1} H^k_{\rm dR}(M,\mathbb{C}) \xlongrightarrow{\sim} H^k_{\text{s}}(M, \mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C}. \end{gather} In particular, if $M=X^{\rm an}$, where $X$ is a smooth algebraic $\mathbb{K}$-variety of dimension $n$, we have \begin{gather} \label{eq: derhamC2} H^k_{\rm dR}(X^{\rm an},\mathbb{C}) \xlongrightarrow{\sim} H^k_{{\rm B}}(X,\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C} \end{gather} for $0 \le k \le n$.
\end{Proposition} \begin{Remark} The holomorphic de Rham complex of $X^{\rm an}$ is equivalently obtained by analytification of the algebraic de Rham complex of $X$, and an equivalent notion of holomorphic de Rham cohomology of $X^{\rm an}$ follows canonically. Thus, the procedure of analytification of an algebraic variety and the properties of holomorphic de Rham cohomology provide the conceptual link between algebraic de Rham cohomology and Betti cohomology, which underlies Grothendieck's comparison isomorphism. \end{Remark} {\sloppy\begin{Remark} An important observation follows from combining Grothendieck's and de Rham's theorems. Given a smooth algebraic variety $X$ over a subfield $\mathbb{K}$ of $\mathbb{C}$, the holomorphic de Rham cohomology of the underlying topological space $X^{\rm an}$, equivalent to its singular cohomology, is isomorphic to the algebraic de Rham cohomology of $X$ after complexification, that~is \begin{gather} \label{eq: combo} H^k_{\rm dR}(X^{\rm an}, \mathbb{C}) \simeq H^k_{\rm dR}(X, \mathbb{K}) \otimes_{\mathbb{K}} \mathbb{C} \end{gather} for $k \ge 0$. The holomorphic de Rham cohomology can therefore be computed considering algebraic forms only. In this way, a purely algebraic definition of cohomology is obtained. \end{Remark} } \subsection{Pure Hodge structures} \label{sec: pure} As a consequence of Theorem~\ref{th: comp}, the Betti cohomology of an algebraic variety is endowed with a richer structure than the singular cohomology of a generic topological space. Recall the following definition. \begin{Definition} Let $H$ be a finite-dimensional $\mathbb{Q}$-vector space and let $H_{\mathbb{C}}= H \otimes_{\mathbb{Q}} \mathbb{C}$. Assume that $H_{\mathbb{C}}$ possesses a bigrading \begin{gather} H_{\mathbb{C}} = \bigoplus_{p+q=k} H^{p,q} \end{gather} for some integer $k$, satisfying the property $\overline{H^{p,q}} = H^{q,p}$, called \textit{Hodge symmetry}. 
$H$ is called a~\textit{pure Hodge structure of weight $k$} and the given direct sum decomposition of its complexification~$H_{\mathbb{C}}$ is called a \textit{Hodge decomposition}. \end{Definition} \begin{Remark} An equivalent definition of pure Hodge structure of weight $k$ is obtained by~observing that the data encoded in the Hodge decomposition is equivalent to a finite decreasing filtration $F^{\bullet}$ of $H_{\mathbb{C}}$, called \textit{Hodge filtration}, such that \begin{gather} F^pH_{\mathbb{C}} \oplus \overline{F^{k-p+1}H_{\mathbb{C}}} = H_{\mathbb{C}} \end{gather} for all integers $p$. The two equivalent descriptions are related by \begin{gather} H^{p,q} = F^pH_{\mathbb{C}} \cap \overline{F^qH_{\mathbb{C}}} , \qquad F^pH_{\mathbb{C}} = \bigoplus_{i \ge p} H^{i, k-i} \end{gather} for $p$, $q$ integers such that $p+q = k$. \end{Remark} Let $M$ be a compact K\"ahler\footnote{A K\"ahler manifold is a manifold with a complex structure, a Riemannian structure, and a symplectic structure which are mutually compatible.} manifold. For $p,q \ge 0$, its Dolbeault cohomology classes in~bidegree $(p,q)$ uniquely correspond\footnote{This result, true for any compact hermitian complex manifold, is known as Hodge isomorphism.} to the \textit{harmonic $(p,q)$-forms} on $M$, and there are canonical maps \begin{gather} H^{p, q}_{{\rm D}}(M, \mathbb{C}) \rightarrow H^{p+q}_{\rm dR}(M,\mathbb{C}) \simeq H^{p+q}_{\text{s}}(M,\mathbb{C}). \end{gather} The following theorem by Hodge~\cite{Hod41} marks the beginning of what is currently known as \textit{Hodge theory}. \begin{Theorem} \label{th: dec} Let $M$ be a compact K\"ahler manifold. The following direct sum decomposition \begin{gather} \label{eq: hodge} H^k_{\rm dR}(M,\mathbb{C}) = \bigoplus_{p+q=k} H^{p,q}_{{\rm D}}(M, \mathbb{C}) \end{gather} holds for $k \ge 0$. \end{Theorem} \begin{Remark} Note that the complex conjugate of $H^{p,q}(M)$ is $H^{q,p}(M)$. 
Following equation~\eqref{eq: derhamC1}, the ordinary cohomology group $H^k_{\text{s}}(M,\mathbb{Q})$ is a pure Hodge structure of weight $k$, and Hodge's theorem gives a decomposition\footnote{The Hodge decomposition of a compact K\"ahler manifold is independent of the choice of K\"ahler metric, although there is no analogous decomposition for arbitrary compact complex manifolds.} of its complexification as a direct sum of $\mathbb{C}$-vector spaces. \end{Remark} Let $X$ be a smooth projective variety defined over a subfield $\mathbb{K}$ of $\mathbb{C}$. $X^{\rm an}$ is a compact K\"ahler manifold, and thus the Hodge decomposition and filtration are defined on $H^k_{\rm dR}(X^{\rm an}, \mathbb{C})$ for $k \ge 0$. Following equation~\eqref{eq: derhamC2}, the Betti cohomology groups of $X$ are then pure Hodge structures of~weights equal to their degrees. Moreover, it can be proven\footnote{This result follows non-trivially from the interpretation of the Hodge filtration in terms of hypercohomology with coefficients in the complex of holomorphic forms, and the GAGA theorem. We~refer to Voisin~\cite{Voi02}.} that the Hodge filtration $F^{\bullet}$, which makes the Betti cohomology group $H^k_{{\rm B}}(X,\mathbb{Q})$ into a pure Hodge structure of~weight $k$, is induced, via the comparison isomorphism, by a filtration defined directly on the algebraic de Rham cohomology groups of $X$ over $\mathbb{K}$. Precisely, there is an~integer $n$ such that $F^{\bullet}$ is a finite decreasing filtration on $H_{\mathbb{K}} = H^k_{\rm dR} (X, \mathbb{K})$ satisfying \begin{gather} F^pH_{\mathbb{K}} \oplus \overline{F^{n-p+1}H_{\mathbb{K}}} = H_{\mathbb{K}} \end{gather} for all integers $p$. To keep track of all these structures, we define the formal assignment $X \mapsto H^{\bullet}(X)$, where $H^k(X)$ is the triple of data given by \begin{gather} H^k(X) = \big(\big(H^k_{\rm dR}(X, \mathbb{K}), F^{\bullet}\big), H^k_{{\rm B}}(X, \mathbb{Q}), {\rm comp}\big) \end{gather} for $k \ge 0$.
We~call $H^k(X)$ a \textit{(pure) de Rham and Betti system of realisations}, or shortly a \textit{(pure) $H$-system}, of weight $n$ over $\mathbb{K}$. Observe that the weight of $H^k(X)$ is defined by the action of the Hodge filtration on the algebraic de Rham cohomology, and it does not generally equal the degree $k$. \begin{Definition} Let $X, X'$ be smooth projective $\mathbb{K}$-varieties. Write $H = H^k(X)$ and $H'= H^{k'}(X')$, where $k,k' \ge 0$. A~\textit{morphism of pure $H$-systems} $f \colon H \rightarrow H'$ is a pair $f = (f_{\rm dR}, f_{{\rm B}})$ consisting of a $\mathbb{K}$-linear map $f_{\rm dR}\colon H_{\rm dR} \rightarrow H_{\rm dR}'$ and a $\mathbb{Q}$-linear map $f_{{\rm B}}\colon H_{{\rm B}} \rightarrow H_{{\rm B}}'$ such that: \begin{itemize}\itemsep=0pt \item[(1)] $f_{\rm dR}$ is filtered\footnote{Let $\mathbb{K}$ be a subfield of $\mathbb{C}$ and $(V,F)$, $(V',F)$ be filtered $\mathbb{K}$-vector spaces. A~morphism $f\colon V \rightarrow V'$ is called \textit{filtered} if $f(F^pV) \subseteq F^pV'$ for each $p \ge 0$.} with respect to the Hodge filtration, that is \begin{gather} f_{\rm dR}(F^{\bullet}H_{\rm dR}) \subseteq F^{\bullet}H_{\rm dR}'. \end{gather} \item[(2)] The following diagram commutes: \begin{equation} \begin{tikzcd}[column sep=large] H_{\rm dR} \otimes_{\mathbb{K}} \mathbb{C} \arrow[r, "\mathop{\rm comp}"] \arrow[d, "f_{\rm dR} \otimes_{\mathbb{K}} \text{Id}_{\mathbb{C}}"'] & H_{{\rm B}} \otimes_{\mathbb{Q}} \mathbb{C} \arrow[d, "f_{{\rm B}} \otimes_{\mathbb{Q}} \text{Id}_{\mathbb{C}}"]\\ H_{\rm dR}' \otimes_{\mathbb{K}} \mathbb{C} \arrow[r, "\mathop{\rm comp}'"'] & H_{{\rm B}}' \otimes_{\mathbb{Q}} \mathbb{C}. \end{tikzcd} \end{equation} \end{itemize} \end{Definition} Observe that, if $H$ and $H'$ have different weights, then every morphism between them is zero. The following variant of Theorem~\ref{th: dec} implies that pure $H$-systems are functorial for morphisms of smooth projective varieties. 
\begin{Theorem} \label{th: smoothHf} Let $X$, $X'$ be smooth projective varieties over $\mathbb{K}$. For any morphism $f\colon X \rightarrow X'$, the induced map on cohomology $f^*\colon H^{\bullet}(X') \rightarrow H^{\bullet}(X)$ is a morphism of pure $H$-systems. \end{Theorem} \begin{Example} \label{ex: hodgetate} For each $n \in \mathbb{Z}$, we define \begin{gather} \mathbb{Q}(n) = ((\mathbb{K}, F^{\bullet}), \mathbb{Q}, \mathop{\rm comp}), \end{gather} where the filtration yields $\mathbb{K} = F^{-n} \mathbb{K} \supseteq F^{-n+1} \mathbb{K} = 0$ and the isomorphism $\mathop{\rm comp}\colon \mathbb{C} \rightarrow \mathbb{C}$ is given by multiplication by $(2 \pi {\rm i} )^{-n}$. $\mathbb{Q}(n)$ is a one-dimensional pure $H$-system of weight $-2n$ over $\mathbb{K}$ and is called a \textit{Tate--Hodge system}. As an example, $\mathbb{Q}(-1)$ is isomorphic to $H^1(\mathbb{G}_m)= \big(\big(H^1_{\rm dR}(\mathbb{G}_m), F^{\bullet}\big), H^1_{{\rm B}}(\mathbb{G}_m), {\rm comp}\big)$, where $F^{\bullet}$ is the trivial filtration concentrated in degree $1$. Observe that $\mathbb{Q}(-1)$ is a pure $H$-system of weight~$2$, although $H^1(\mathbb{G}_m)$ has degree~$1$. \end{Example} \subsection{Mixed Hodge structures} \label{sec: mixed} The Betti cohomology in degree $k$ of a smooth projective $\mathbb{K}$-variety $X$ carries canonically a~pure Hodge structure of weight $k$. However, this is no longer true when $X$ fails to be smooth or~projective. The generalisation of the notion of pure Hodge structure to the case of quasi-projective varieties is due to Deligne~\cite{Del71_1,Del71_2,Del74}, who proved that the Betti cohomology of a~quasi-projective variety over a~subfield $\mathbb{K}$ of $\mathbb{C}$ is an \textit{iterated extension} of pure Hodge structures. Recall the following definition. \begin{Definition} \label{def: mixH} Let $H$ be a finite-dimensional $\mathbb{Q}$-vector space and let $H_{\mathbb{C}}= H \otimes_{\mathbb{Q}} \mathbb{C}$. 
Assume that $H$ possesses a finite increasing filtration $W_{\bullet}$, called \textit{weight filtration}, and that $H_{\mathbb{C}}$ possesses a finite decreasing filtration $F^{\bullet}$, called \textit{Hodge filtration}, such that, for all integers $m$, the $m$-th graded quotient of $H$ with respect to $W_{\bullet}$ \begin{gather} {\rm Gr}^W_m H = W_m / W_{m-1} \end{gather} together with the filtration induced by $F^{\bullet}$ on its complexification \begin{gather} F^{\bullet} {\rm Gr}^W_m H = ( F^{\bullet} \cap W_m \otimes \mathbb{C} + W_{m-1} \otimes \mathbb{C}) / W_{m-1} \otimes \mathbb{C} \end{gather} is a pure Hodge structure of weight $m$. $H$ is called a \textit{mixed Hodge structure}. \end{Definition} \begin{Remark} Let $H$ be a mixed Hodge structure. For all integers $m$, there is a short exact sequence \begin{gather} 0 \rightarrow W_{m-1} \rightarrow W_m \rightarrow {\rm Gr}^W_m H \rightarrow 0. \end{gather} Take $m = h$ to be the highest weight of $H$, defined by $W_h = H$. The short exact sequence above gives $H$ as an extension of the pure Hodge structure ${\rm Gr}^W_h H$ by $W_{h-1}$. Analogously, taking $m = h-1$, $W_{h-1}$ is an extension of ${\rm Gr}^W_{h-1} H$ by $W_{h-2}$, which in turn is an extension of ${\rm Gr}^W_{h-2} H$ by $W_{h-3}$, and so on. In this way, mixed Hodge structures are explicitly realised as iterated extensions of pure ones. \end{Remark} \begin{Theorem} Let $X$ be a quasi-projective variety over a subfield $\mathbb{K}$ of $\mathbb{C}$. \begin{itemize}\itemsep=0pt \item[$(1)$] For $k \ge 0$, its Betti cohomology group $H^k_{{\rm B}}(X,\mathbb{Q})$ is a mixed Hodge structure with respect to a weight filtration $W_{\bullet}$ and a Hodge filtration $F^{\bullet}$ which satisfy \begin{gather} W_{-1} = 0 \subseteq W_0 \subseteq W_1 \subseteq \dots \subseteq W_{2k} = H^k_{{\rm B}}(X,\mathbb{Q}) , \\ F^0 = H^k_{{\rm B}}(X,\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C} \supseteq F^1 \supseteq\dots \supseteq F^k \supseteq F^{k+1} = 0. 
\end{gather} {\sloppy If $X$ is smooth, then ${\rm Gr}^W_m H^k_{{\rm B}}(X,\mathbb{Q}) = 0$ for all $m < k$. If~$X$ is projective, then ${\rm Gr}^W_m H^k_{{\rm B}}(X,\mathbb{Q})= 0$ for all $m > k$. \item[$(2)$] The Hodge filtration $F^{\bullet}$ acts on the algebraic de Rham cohomology groups of $X$ over $\mathbb{K}$, and the weight filtration $W^{{\rm B}}_{\bullet} = W_{\bullet}$ induces a corresponding weight filtration $W_{\bullet}^{\rm dR}$ on the algebraic de Rham cohomology groups of $X$ over $\mathbb{K}$. } \item[$(3)$] The comparison isomorphism is filtered with respect to the weight filtration \begin{gather} \mathop{\rm comp}( W^{\rm dR}_{\bullet} \otimes_{\mathbb{K}} \mathbb{C}) = W^{{\rm B}}_{\bullet} \otimes_{\mathbb{Q}} \mathbb{C}. \end{gather} \end{itemize} \end{Theorem} Again, to keep track of the several structures that we have introduced, we define a formal assignment $X \mapsto H^{\bullet}(X)$, where $H^k(X)$ is the triple of data given by \begin{gather} H^k(X) = \big(\big(H^k_{\rm dR}(X, \mathbb{K}), F^{\bullet}, W_{\bullet}^{\rm dR}\big), \big(H^k_{{\rm B}}(X, \mathbb{Q}), W^{{\rm B}}_{\bullet}\big), \mathop{\rm comp}\!\big) \end{gather} for $k \ge 0$. We~call $H^k(X)$ a \textit{(mixed) de Rham and Betti system of realisations}, or shortly a~\textit{(mixed) $H$-system}, over $\mathbb{K}$. Observe that, for each integer $m$, the triple of data \begin{gather} {\rm Gr}^W_m H =\big(\big({\rm Gr}^W_m H_{\rm dR}, F^{\bullet}\big), {\rm Gr}^W_m H_{{\rm B}}, \mathop{\rm comp}\!\big), \end{gather} where $H = H^k(X)$, is a pure $H$-system of weight $m$. \begin{Definition} Let $X$, $X'$ be quasi-projective $\mathbb{K}$-varieties. Write $H = H^k(X)$ and $H'= H^{k'}(X')$, where $k,k' \ge 0$. 
A~\textit{morphism of mixed $H$-systems} $f\colon H \rightarrow H'$ is a pair $f = (f_{\rm dR}, f_{{\rm B}})$ consisting of a $\mathbb{K}$-linear map $f_{\rm dR}\colon H_{\rm dR} \rightarrow H_{\rm dR}'$ and a $\mathbb{Q}$-linear map $f_{{\rm B}}\colon H_{{\rm B}} \rightarrow H_{{\rm B}}'$ such that: \begin{itemize}\itemsep=0pt \item[$(1)$] $f_{{\rm B}}$ is filtered with respect to the weight filtration, that is \begin{gather} f_{{\rm B}}(W_{\bullet}^{{\rm B}} H_{{\rm B}}) \subseteq W_{\bullet}^{{\rm B}} H_{{\rm B}}'. \end{gather} \item[$(2)$] $f_{\rm dR}$ is filtered with respect to the weight and Hodge filtrations, that is \begin{gather} f_{\rm dR}\big(W_{\bullet}^{\rm dR} H_{\rm dR}\big) \subseteq W_{\bullet}^{\rm dR} H_{\rm dR}' , \\ f_{\rm dR}\big(F^{\bullet}H_{\rm dR}\big) \subseteq F^{\bullet}H_{\rm dR}'. \end{gather} \item[$(3)$] $f_{{\rm B}}$ and $f_{\rm dR}$ are compatible with the comparison isomorphism, that is \begin{gather} (f_{{\rm B}} \otimes_{\mathbb{Q}} \text{Id}_{\mathbb{C}}) \circ \mathop{\rm comp} = \mathop{\rm comp}\nolimits' \circ (f_{\rm dR} \otimes_{\mathbb{K}} \text{Id}_{\mathbb{C}}). \end{gather} \end{itemize} \end{Definition} The following analogue of Theorem~\ref{th: smoothHf} holds. \begin{Theorem} Let $X$, $X'$ be quasi-projective varieties over $\mathbb{K}$. For any morphism $f\colon X \rightarrow X'$, the induced map on cohomology $f^*\colon H^{\bullet}(X') \rightarrow H^{\bullet}(X)$ is a morphism of mixed $H$-systems. \end{Theorem} We denote by $\mathbf{MHSy}(\mathbb{Q})$ the category\footnote{Further aspects of de Rham and Betti systems of realisations are discussed by Brown~\cite{Bro17_1}.} of mixed $H$-systems over $\mathbb{Q}$. Deligne~\cite{Del71_2} proved that $\mathbf{MHSy}(\mathbb{Q})$ is an abelian category.
Moreover, it is naturally endowed with two forgetful functors \begin{gather} \omega_{{\rm B}}\colon\ \mathbf{MHSy}(\mathbb{Q}) \rightarrow \text{Vec}_{\mathbb{Q}}, \qquad \omega_{\rm dR}\colon\ \mathbf{MHSy}(\mathbb{Q}) \rightarrow \text{Vec}_{\mathbb{Q}} \end{gather} called \textit{Betti} and \textit{de Rham functors}, sending the mixed system of realisations $H \in \mathbf{MHSy}(\mathbb{Q})$ into the $\mathbb{Q}$-vector spaces $H_{{\rm B}}$ and $H_{\rm dR}$, respectively. \section{Periods of motives} \label{sec: periods} \subsection{Periods} \label{sec: num} The following elementary definition was introduced by Kontsevich and Zagier~\cite{KZ01}. \theoremstyle{definition} \begin{Definition} \label{def: period} A \textit{period} is a complex number whose real and imaginary parts are values of absolutely convergent integrals of the form \begin{gather} \label{eq: per_int} \int_{\sigma} f(x_1,\dots,x_n)\, {\rm d}x_1 \cdots {\rm d}x_n, \end{gather} where the integrand $f$ is a rational function with rational coefficients and the domain of integration $\sigma \subseteq \mathbb{R}^n$ is defined by finite unions and intersections of domains of the form $\{ g(x_1,\dots,x_n)$ $\ge 0 \}$ with $g$ a rational function with rational coefficients. \end{Definition} If rational functions and coefficients are replaced in Definition~\ref{def: period} by algebraic functions and coefficients, the same set of numbers is obtained. Indeed, algebraic functions in the integrand can be substituted by rational functions by enlarging the set of variables. Note that, because the integral of any real-valued function is equivalent to the volume subtended by its graph, any period admits a representation as the volume of a domain defined by polynomial inequalities with rational coefficients. Thus, the integrand can always be assumed to be the constant function~1. However, this extremely simplified framework does not prove to be particularly useful. 
Quite the opposite, in what follows, we mostly work with an even more general description of periods than the one given in Definition~\ref{def: period}. We~denote by $\mathcal{P}$ the set of periods. Since $\bar{\mathbb{Q}} \subset \mathcal{P} \subset \mathbb{C}$, periods are generically transcendental numbers; nonetheless, they contain only a finite amount of information, which is captured by the integrand and the domain of integration of an integral representation as in~\eqref{eq: per_int}. Indeed, just like $\bar{\mathbb{Q}}$, $\mathcal{P}$ is countable. Many famous numbers belong to the class of periods. Here are some examples: \begin{itemize}\itemsep=0pt \item[$(a)$] Algebraic numbers are periods, e.g., \begin{gather} \sqrt{2} = \int_{2x^2 \le 1} {\rm d}x. \end{gather} \item[$(b)$] Logarithms of algebraic numbers are periods, e.g., \begin{gather} \label{eq: log} \log 2 = \int_{1}^{2} \frac{{\rm d}x}{x}. \end{gather} \item[$(c)$] The transcendental number $\pi$ is a period, as given by \begin{gather} \pi = \int_{-1}^1 \frac{{\rm d}x}{\sqrt{1-x^2}} = \int\displaylimits_{-\infty}^{+\infty} \frac{{\rm d}x}{1+x^2} = \iint\displaylimits_{x^2 + y^2 \le 1} {\rm d}x {\rm d}y \end{gather} and alternatively by \begin{gather} \label{eq: pi} 2 \pi {\rm i} = \oint_{\gamma_2} \frac{{\rm d}z}{z}, \end{gather} where $\gamma_2$ is a closed path encircling the origin in the complex plane.
\item[$(d)$] Values of the beta function at positive rational arguments are periods, as given by \begin{gather} B(u,v) = \int_{0}^1 t^{u-1} (1-t)^{v-1} {\rm d}t, \qquad \mathop{\rm Re}(u),\, \mathop{\rm Re}(v) > 0, \end{gather} and values of the gamma function at positive rational arguments satisfy\footnote{The statement follows from the relation between the gamma and the beta functions $\Gamma(a_1) \cdots \Gamma(a_n) = \Gamma(a_1+\dots+ a_n) \prod_{i=1}^{n-1} B(a_1+\dots+ a_{i}, a_{i+1})$ with $\mathop{\rm Re}(a_k)>0$, $k=1, \dots, n$.} \begin{gather} \Gamma \bigg( \frac{p}{q} \bigg)^q \in \mathcal{P}, \qquad p, q \in \mathbb{N}. \end{gather} \item[$(e)$] The elliptic integral \begin{gather} 2 \int_{-b}^{b} \sqrt{1 + \frac{a^2 x^2}{b^4 - b^2 x^2}}\, {\rm d}x \end{gather} representing the perimeter of an ellipse with radii $a$ and $b$, is a period. Note that it is not an algebraic function of $\pi$ for $a \neq b$, $a, b \in \mathbb{Q}_{> 0}$. \item[$(f)$] Values of the Riemann zeta function at integer arguments $s \ge 2$ are periods, e.g., \begin{gather} \zeta (3) = 1 + \frac{1}{2^3} + \frac{1}{3^3} + \dots = \sum_{n=1}^{\infty} \frac{1}{n^3} = \iiint\displaylimits_{0<x<y<z<1} \frac{{\rm d}x {\rm d}y {\rm d}z}{(1-x)yz}, \end{gather} and more generally multiple zeta values are periods by means of their integral representation~\eqref{eq: zint}. \item[$(g)$] Convergent Feynman integrals, as in~\eqref{eq: I_G_last}, are periods. Moreover, removing the convergence requirement, the statement suitably extends to a wider class of Feynman integrals.\footnote{Under some assumptions, Bogner and Weinzierl~\cite{BWdiv} showed that the coefficients appearing in the Laurent series of any scalar multi-loop integral are periods.} \item[$(h)$] Special values at algebraic arguments of hypergeometric functions, values of modular forms at suitable arguments, and values of various kinds of L-functions are periods.
\end{itemize} Because the integral representation of a period is not unique, it is possible that a certain integral of a transcendental function admits a representation as a period as well. For example, $\log(2)$ is a period, and yet it can be written as the following integral of a transcendental function \begin{gather} \int\displaylimits_{0}^{1} \frac{x}{\log \frac{1}{1-x}}\, {\rm d}x. \end{gather} Indeed, there seems to be no general principle able to predict if a certain infinite sum or integral of a transcendental function is a period according to Definition~\ref{def: period}, or able to determine whether two periods, given by explicit integrals, are equal or different. A~number in $\bar{\mathbb{Q}}$ also admits apparently different expressions, but those same techniques that work for checking the equality of algebraic numbers do not in general work for periods. In fact, two different periods may be~numerically very close and yet be distinct.\footnote{For example, the approximation $\pi = \frac{6}{\sqrt{3502}} \log(2 u) + 7.37 \times 10^{-82}$, where $u$ is the product of four quartic units, has been found by Shanks~\cite{Sha82}.} However, the following conjecture is presented by~Kontsevich and Zagier~\cite{KZ01}. \begin{Conjecture} \label{conj: per1} If a period has two different integral representations, then one expression can be transformed into the other by application of the three integral transformation rules of~addi\-ti\-vity, change of variables, and Stokes' formula, in which all integrands and domains of~integration are algebraic with algebraic coefficients. \end{Conjecture} We note that even a proof of Conjecture~\ref{conj: per1} would not solve the additional problem of finding an algorithm to determine whether or not two given numbers in $\mathcal{P}$ are equal, or whether or not a~given real number belongs to $\mathcal{P}$. 
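To make the last example concrete, the equality between the two integral representations of $\log 2$ given above can be checked by an elementary computation, which we sketch here; note that this ad hoc verification is not yet a decomposition into the three transformation rules of Conjecture~\ref{conj: per1}. The change of variables $x = 1-u$ and the identity $\int_0^1 u^t \, {\rm d}t = (u-1)/\log u$ give
\begin{gather}
\int_{0}^{1} \frac{x}{\log \frac{1}{1-x}}\, {\rm d}x = \int_{0}^{1} \frac{u-1}{\log u}\, {\rm d}u = \int_{0}^{1}\! \int_{0}^{1} u^t \, {\rm d}t \, {\rm d}u = \int_{0}^{1} \frac{{\rm d}t}{t+1} = \log 2,
\end{gather}
where the two inner integrations are exchanged by Fubini's theorem.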
Another fundamental open problem in the theory of periods is to explicitly exhibit one number which does not belong to $\mathcal{P}$. Such numbers must exist, because~$\mathcal{P}$ is a countable subset of $\mathbb{C}$, but the concrete identification of any such number has only been proposed conjecturally. Indeed, the base of natural logarithms $e$ and the Euler--Mascheroni constant $\gamma$ are conjecturally not periods. Several further questions on the arithmetic nature and transcendence of periods are open or only conjecturally answered.\footnote{See Waldschmidt~\cite{Wal06} for an overview of the topic.} Before moving to a more sophisticated definition of periods written in the language of algebraic geometry, which is essential to subsequent developments, we mention the fruitful interplay between the theory of periods and the theory of linear differential equations. When the integrands or the domains of integration depend on some set of parameters, the integrals, as functions of these parameters, usually satisfy linear differential equations with algebraic coefficients. The solutions of these differential equations generate periods when evaluated at algebraic arguments. The differential equations occurring in this way are called \textit{Picard--Fuchs differential equations}. The relation between periods and Picard--Fuchs equations has proved to be particularly productive in the case of elliptic curves, hypergeometric functions, modular forms and L-functions. \subsection{Algebra of motivic periods} \label{sec: mot_alg} The theory of periods is alternatively developed within the formalism of algebraic geometry. We~refer to Huber and M\"{u}ller-Stach~\cite{HM17}. \begin{Definition} \label{def: period2} Let $X$ be a smooth quasi-projective variety defined over $\bar{\mathbb{Q}}$, and $Y \subset X$ a~closed subvariety.
A~\textit{period} is a complex number which can be expressed as an integral of the form $\int_{\gamma} \omega \in \mathbb{C}$, where $\omega$ is a closed algebraic differential $k$-form on $X$ vanishing on $Y$, and $\gamma$ is a~singular $k$-chain on the complex manifold $X^{\rm an}$ with boundary contained in $Y^{\rm an}$ for some integer $k \ge 0$. \end{Definition} The equivalence of Definitions~\ref{def: period2} and~\ref{def: period} follows from the observation that the singular chain $\gamma$ can be deformed to a semi-algebraic chain and then broken up into small pieces, which can be bijectively projected onto open domains in $\mathbb{R}^n$ with algebraic boundary. Without loss of generality, we work with coefficients in $\mathbb{Q}$ instead of $\bar{\mathbb{Q}}$. We~note that, like Definition~\ref{def: period}, Definition~\ref{def: period2} also contains redundancy. The integral $\int_{\gamma} \omega$ can be formally decomposed into the quadruple \begin{gather} (X, Y, \omega, \gamma) \end{gather} and different quadruples can give the same resulting number. To get rid of this redundancy, the various forms of topological invariance of the integral must be suitably accounted for. Following Stokes' theorem, the integral is insensitive to the individual cycle and form, being instead determined by their homology and cohomology classes. Let~us associate to $\omega$ its cohomology class in the $k$-th algebraic de Rham cohomology group of $X$ relative to $Y$, and to $\gamma$ its homology class in the $k$-th Betti homology group of $X$ relative to $Y$. Then, the first step towards a unique algebraic description of periods consists of the following substitutions \begin{gather} \omega \longrightarrow [\omega] \in H^k_{\rm dR} (X, Y,\mathbb{Q}) , \\ \gamma \longrightarrow [\gamma] \in H_k^{{\rm B}}(X,Y,\mathbb{Q}) \end{gather} into the quadruple $(X,Y,\omega,\gamma)$.
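As an illustration, for the representation $\log 2 = \int_1^2 \frac{{\rm d}x}{x}$ of~\eqref{eq: log} one can take
\begin{gather}
(X, Y, \omega, \gamma) = \bigg( \mathbb{G}_m, \{1, 2\}, \frac{{\rm d}x}{x}, [1,2] \bigg),
\end{gather}
where $[1,2]$ denotes the straight path from $1$ to $2$, so that the substitutions above read $[\omega] \in H^1_{\rm dR}(\mathbb{G}_m, \{1,2\}, \mathbb{Q})$ and $[\gamma] \in H_1^{{\rm B}}(\mathbb{G}_m, \{1,2\}, \mathbb{Q})$.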
The problem of the coexistence of distinct, but similarly behaved, cohomologies associated to the same variety, which seems to imply an arbitrary choice here and in many other situations, has been tackled by Grothendieck\footnote{ Grothendieck proposed the notion of a motive in a letter to Serre in 1964. He himself did not author any publication on motives, although he mentioned them frequently in his correspondence. The first formal expositions of the theory of motives are due to Demazure~\cite{Dem71} and Kleiman~\cite{Kle72}, who based their work on Grothendieck's lectures on the topic.}~\cite{CS01} with the introduction of the \textit{theory of motives}. He suggested that there should exist a universal cohomology theory taking values in a $\mathbb{Q}$-category of motives. The notion of a motive is thus proposed to capture the intrinsic cohomological essence of a variety. Without delving into the category-theoretic details of the theory of motives\footnote{For a thorough introduction to the theory of motives, we refer to Voevodsky~\cite{Voe00}, Andr\'e~\cite{And04}, Deligne and Goncharov~\cite{DG05}, and Murre et al~\cite{MNP13}.}, we give here a conceptual introduction to its specific application to the theory of periods. Further discussion on the fundamental features of motives, as they appear in the study of periods, is presented in Section~\ref{sec: motives} in a more rigorous formalism. Recall from Theorem~\ref{th: comp} that there is a comparison isomorphism \begin{gather} \mathop{\rm comp}\colon\ H^k_{\rm dR} (X, Y,\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C} \xlongrightarrow{\sim} H^k_{{\rm B}}(X, Y,\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C} \end{gather} induced by the pairing \begin{align} H^k_{\rm dR} (X, Y,\mathbb{Q}) \times H_k^{\text{s}} \big(X^{\rm an}, Y^{\rm an}, \mathbb{Q}\big) & \longrightarrow \mathbb{C}, \\[1ex] ([\omega] , [\gamma]) & \longmapsto \int_{\gamma} \omega. 
\end{align} Momentarily neglecting the presence of filtrations for simplicity, the de Rham and Betti system of realisations of $X$ relative to $Y$ in degree $k$ is \begin{gather} H^k(X,Y) = \big(H^k_{\rm dR} (X, Y,\mathbb{Q}), H^k_{{\rm B}}(X, Y,\mathbb{Q}), {\rm comp}\big). \end{gather} In the same way that the cohomology class of a differential form singles out its cohomological behaviour, the $H$-system of an algebraic variety intuitively selects the content shared by its coexisting algebraic de Rham and Betti cohomologies, and it filters out everything else. It~is, therefore, a first approximation towards the realisation of Grothendieck's idea of a motive. We~define the motivic version of the period $\int_{\gamma} \omega$ as the triple \begin{gather} \big[H^k(X,Y), [\omega], [\gamma]\big]^{\rm m}, \end{gather} where $\text{m}$ in the superscript stands for motivic. We~call a period in this guise a \textit{motivic period}. This has proved to be the most profitable reformulation of the original notion of a period. However, a second source of redundancy in the description of periods via the integral formulation in Definition~\ref{def: period2}, corresponding to the same transformation rules in Conjecture~\ref{conj: per1}, has yet to be factored out. \begin{Definition} \label{def: algebra_p} The space $\mathcal{P}^{\rm m}$ of motivic periods is defined as the $\mathbb{Q}$-vector space\footnote{In what follows, we no longer display the field $\mathbb{Q}$ among the arguments of the cohomology groups for simplicity of notation.} generated by the symbols $[H^{\bullet}(X,Y), [\omega], [\gamma]]^{\rm m}$ after factorisation modulo the following three equivalence relations: \begin{itemize}\itemsep=0pt \item[(1)] \textit{Bilinearity}. $[H^{\bullet}(X,Y), [\omega], [\gamma]]^{\rm m}$ is bilinear in $[\omega]$ and $[\gamma]$. \item[(2)] \textit{Change of variables}. 
If~$f\colon (X_1,Y_1) \rightarrow (X_2,Y_2)$ is a $\mathbb{Q}$-morphism of pairs of algebraic varieties, $[\gamma_1] \in H^{{\rm B}}_{\bullet}(X_1,Y_1)$ and $[\omega_2] \in H^{\bullet}_{\rm dR} (X_2,Y_2)$, then \begin{gather} [H^{\bullet}(X_1,Y_1), f^* [\omega_2], [\gamma_1]]^{\rm m} = [H^{\bullet}(X_2,Y_2), [\omega_2], f_* [\gamma_1]]^{\rm m}, \end{gather} where $f^*$ and $f_*$ are the pull-back and the push-forward of $f$, respectively. \item[(3)] \textit{Stokes' formula}. Assume for simplicity that $X$ is a smooth affine algebraic variety over~$\mathbb{Q}$ of dimension $d$ and $D \subset X$ is a simple normal crossing divisor. The normalisation\footnote{$\tilde{D}$ is the disjoint union of the irreducible components of $D$.}~$\tilde{D}$ of~$D$ contains a simple normal crossing divisor $\tilde{D}_1$ coming from double points in $D$. If~$[\omega] \in H^{d-1}_{\rm dR} \big(\tilde{D}, \tilde{D}_1\big)$ and $[\gamma] \in H^{{\rm B}}_d(X,D)$, then \begin{gather} [H^d(X,D), \delta [\omega], [\gamma]]^{\rm m} = \big[H^{d-1}(\tilde{D}, \tilde{D}_1), [\omega], \partial [\gamma]\big]^{\rm m}, \end{gather} where $\delta\colon H^{d-1}_{\rm dR} \big(\tilde{D}, \tilde{D}_1\big) \rightarrow H^d_{\rm dR} (X,D)$ is the coboundary operator acting on the algebraic de Rham cohomology and $\partial\colon H_d^{{\rm B}} (X, D) \rightarrow H_{d-1}^{{\rm B}} \big(\tilde{D}, \tilde{D}_1\big)$ is the boundary operator acting on the Betti homology. \end{itemize} \end{Definition} We observe that the space of motivic periods $\mathcal{P}^{\rm m}$ is naturally endowed with an algebra structure. Indeed, new periods are obtained by taking sums and products of known ones. 
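At the level of the integral representations of Definition~\ref{def: period}, the product structure is transparent: the product of two periods is realised as an integral over the product of the domains of integration,
\begin{gather}
\int_{\sigma_1} f_1 \, {\rm d}x_1 \cdots {\rm d}x_n \cdot \int_{\sigma_2} f_2 \, {\rm d}y_1 \cdots {\rm d}y_m = \int_{\sigma_1 \times \sigma_2} f_1 f_2 \, {\rm d}x_1 \cdots {\rm d}x_n \, {\rm d}y_1 \cdots {\rm d}y_m,
\end{gather}
which is again of the form~\eqref{eq: per_int}. On motivic periods, the sum is the formal one in the $\mathbb{Q}$-vector space $\mathcal{P}^{\rm m}$, while the product is induced by the tensor product of the underlying $H$-systems together with the K\"unneth isomorphism.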
\subsection{Period map} \label{sec: periodmap} We call \textit{period map} the evaluation homomorphism \begin{align} \label{eq: per_map} \mathop{\rm per}\nolimits\colon\ \mathcal{P}^{\rm m} &\longrightarrow \mathcal{P} , \\ {}\big[H^k(X,Y), [\omega], [\gamma]\big]^{\rm m} &\longmapsto [\gamma] \circ \mathop{\rm comp} \circ [ \omega] = \int_{\gamma} \omega. \end{align} Following the construction in Section~\ref{sec: mot_alg}, the period map is explicitly surjective. Its injectivity is, on the other hand, not proven. Indeed, a period has a unique motivic realisation only conjecturally. Conjecture~\ref{conj: per1} is equivalent to the \textit{period conjecture} below. \begin{Conjecture} \label{conj: per2} The period map $\mathop{\rm per}\nolimits\colon \mathcal{P}^{\rm m} \rightarrow \mathcal{P}$ is an isomorphism. \end{Conjecture} Let us briefly discuss the key idea underlying the period conjecture. A~$\mathbb{Q}$-morphism $f\colon$ $(X_1,Y_1) \rightarrow (X_2,Y_2)$ between two pairs of algebraic varieties induces a \textit{change of coordinates} between the corresponding algebraic de Rham cohomologies by pull-back, that is \begin{equation} \begin{tikzcd} (X_1,Y_1)\arrow[r, squiggly] \arrow[d, "f"] & H^{\bullet}_{\rm dR}(X_1,Y_1) \\ (X_2,Y_2) \arrow[r, squiggly] & H^{\bullet}_{\rm dR}(X_2,Y_2). \arrow[u, "f^*"] \end{tikzcd} \end{equation} The same morphism $f$ acts on the topological spaces of complex points underlying the given algebraic varieties, and it induces a change of coordinates between the corresponding singular homologies by push-forward, that is \begin{equation} \begin{tikzcd} \big(X_1^{\rm an},Y_1^{\rm an}\big)\arrow[r, squiggly] \arrow[d, "f"] & H_{\bullet}^{\text{s}}\big(X_1^{\rm an},Y_1^{\rm an}\big) \arrow[d, "f_*"] \\ \big(X_2^{\rm an},Y_2^{\rm an}\big) \arrow[r, squiggly] & H_{\bullet}^{\text{s}}\big(X_2^{\rm an},Y_2^{\rm an}\big) . 
\end{tikzcd} \end{equation} By means of such changes of coordinates, one can easily derive two distinct integral representations of the same period. For example, taking $[\gamma_1] \in H_{\bullet}^{\text{s}}\big(X_1^{\rm an},Y_1^{\rm an}\big)$ and $[\omega_2] \in H^{\bullet}_{\rm dR}(X_2,Y_2)$, we~have \begin{gather} \int_{[\gamma_1]} f^*[\omega_2] = \int_{f_* [\gamma_1]} [\omega_2]. \end{gather} The corresponding two motivic representations of the same period \begin{gather} [H^{\bullet}(X_1,Y_1), f^*[\omega_2], [\gamma_1]]^{\rm m}, \qquad [H^{\bullet}(X_2,Y_2), [\omega_2], f_* [\gamma_1]]^{\rm m} \end{gather} could a priori be different motivic periods. However, they are identified with each other by change of variables. Indeed, the period conjecture corresponds to the statement that, whenever different motivic representations of the same period arise, they can always be interrelated by~the three equivalence relations in Definition~\ref{def: algebra_p}. \begin{Definition} \label{def: matrix} Let $X$ be a smooth quasi-projective $\mathbb{Q}$-variety, $Y \subset X$ a closed subvariety, and $H = H^{\bullet}(X,Y)$ the $H$-system of $X$ relative to $Y$. Assume that $\{[\omega_j]\}_{j=1}^{n}$ is a basis of the algebraic de Rham cohomology $H^{\bullet}_{\rm dR}(X, Y)$, and that $\{[\gamma_i]\}_{i=1}^{n}$ is a basis of the Betti homology $H_{\bullet}^{{\rm B}}(X, Y)$. We~denote by $\mathop{\rm per}\nolimits|_H$ the period map restricted to the motivic periods in $\mathcal{P}^{\rm m}$ that are built on the given $H$-system $H$. Observe that $\mathop{\rm per}\nolimits|_H$ is fully determined by the values that it takes when evaluated at $[H, [\omega_j], [\gamma_i]]^{\rm m}$, which are \begin{gather} \mathop{\rm per}\nolimits|_H([H, [\omega_j], [\gamma_i]]^{\rm m}) = \int_{\gamma_i} \omega_j \end{gather} for each pair of indices $(i,j)$ with $i,j = 1,\dots,n$.
We~define the \textit{period matrix} of $H$ as the $n \times n$-matrix with complex entries $(p_{ij})_{i,j = 1,\dots,n}$ given by \begin{gather} p_{ij} = \int_{\gamma_i} \omega_j. \end{gather} \end{Definition} The period matrix expresses in a different guise the same information contained in the period map, once it has been restricted to a specific $H$-system. \begin{Remark} For a given mixed $H$-system $H = (H_{\rm dR}, \, H_{{\rm B}}, \, \mathop{\rm comp})$, there is a canonical choice of bases on $H_{\rm dR}$ and $H^{{\rm B}}$ which is compatible\footnote{See Brown~\cite{Bro17_1} for details.} with the comparison isomorphism and with the Hodge and weight filtrations in $H$. We~often implicitly assume to work in the canonical bases when writing the period matrix. Let~us denote by $\{ e_i \}_{i = 1}^n$ and $\{ f_i \}_{i = 1}^n$ the canonical bases on the de Rham and Betti realisations of $H$, respectively. For $i=1, \dots, n$, the action of the comparison isomorphism on the $i$-th element of the canonical basis of $H_{\rm dR}$ is given by \begin{align} \mathop{\rm comp}\colon\ H_{\rm dR} \otimes_{\mathbb{Q}} \mathbb{C} & \xlongrightarrow{\sim} H_{{\rm B}} \otimes_{\mathbb{Q}} \mathbb{C}, \\ e_i \otimes 2 \pi {\rm i} & \longmapsto f_i^{\vee} \otimes 2 \pi {\rm i}, \end{align} where $f_i^{\vee}$ denotes the standard vector dual basis element of $f_i$ on $H_{{\rm B}}$. For $i, j=1, \dots, n$, the pairing map gives \begin{align} H_{\rm dR} \times H^{{\rm B}} & \longrightarrow \mathbb{C}, \\ (e_j , f_i) & \longmapsto \int_{f_i} e_j = p_{ij}. \end{align} For $i = 1, \dots, n$, we define the vector dual $e_i^{\vee}$ of the basis element $e_i$ to be $e_i^{\vee} = f_i$. Observe that, since we cannot easily make sense of a notion of de Rham homology, the dual of a basis of~$H_{\rm dR}$ is defined to be a basis of $H^{{\rm B}}$. \end{Remark} \begin{Example} \label{ex: ex10} Let $H= H^1(\mathbb{G}_m, \{ 1, z \})$ with $z \in \mathbb{Q} \backslash \{0, 1\}$. 
As shown in Examples~\ref{ex: ex0} and~\ref{ex: exx}, a basis of the Betti homology group $H_1^{{\rm B}}(\mathbb{G}_m, \{ 1, z \}) \simeq H^{\text{s}}_1(\mathbb{C}^*, \{ 1, z \})$ is given by $[\gamma_1]$, where $\gamma_1$ is a continuous oriented map from $1$ to $z$ which does not encircle the origin, and $[\gamma_2]$, where $\gamma_2$ is a counterclockwise cycle encircling the origin. A~basis of the algebraic de Rham cohomology group $H^1_{\rm dR}(\mathbb{G}_m, \{ 1, z \})$ is given by $[\omega_1] = \big[ \frac{{\rm d}x}{z-1} \big]$ and $[\omega_2] = \big[ \frac{{\rm d}x}{x} \big]$. Such a choice of bases is indeed canonical, and the period matrix of $H$ is \begin{gather} \begin{pmatrix} 1 & \log(z) \\ 0 & 2 \pi {\rm i} \end{pmatrix}\!. \end{gather} \end{Example} \subsection{Examples} \label{sec: examples} \subsubsection[Motivic $2 \pi {\rm i}$]{Motivic $\boldsymbol{2 \pi {\rm i}}$} The period $2 \pi {\rm i}$ is given by the contour integral \begin{gather} \label{eq: int_2pii} 2 \pi {\rm i} = \oint\displaylimits_{\gamma_2} \frac{{\rm d}x}{x}, \end{gather} where $\gamma_2$ is a counterclockwise cycle encircling the origin in the punctured complex plane $\mathbb{C}^*$. As observed in Example~\ref{ex: ex2}, the complex manifold $\mathbb{C}^*$ is isomorphic to the topological space of complex points $\mathbb{G}_m^{\rm an}$ underlying the algebraic variety $\mathbb{G}_m$ over $\mathbb{Q}$. As shown in Examples~\ref{ex: ex1} and~\ref{ex: ex4}, we have that \begin{gather} H_1^{{\rm B}}(\mathbb{G}_m) = \mathbb{Q}[\gamma_2], \qquad H^1_{\rm dR}(\mathbb{G}_m) = \mathbb{Q} \bigg[ \frac{{\rm d}x}{x} \bigg]. 
\end{gather} Recalling that $H^1(\mathbb{G}_m) = \big(H^1_{\rm dR}(\mathbb{G}_m), H^1_{{\rm B}}(\mathbb{G}_m), {\rm comp}\big)$, a motivic version of $2 \pi {\rm i} $ is \begin{gather} \label{eq: 2pii_1} (2 \pi {\rm i} )^{\rm m} = \bigg[ H^1(\mathbb{G}_m), \bigg[ \frac{{\rm d}x}{x} \bigg], [\gamma_2] \bigg]^{\rm m}, \end{gather} which is alternatively represented by the pairing \begin{align} H^1_{\rm dR}(\mathbb{G}_m) \times H_1^{{\rm B}}(\mathbb{G}_m) & \longrightarrow \mathbb{C} , \\ \bigg( \bigg[ \frac{{\rm d}x}{x} \bigg], [\gamma_2] \bigg) & \longmapsto \oint_{\gamma_2} \frac{{\rm d}x}{x} = 2 \pi {\rm i}. \end{align} A second integral representation of $2 \pi {\rm i}$ is given by \begin{gather} \label{eq: int2} 2 \pi {\rm i} = \int_{\mathbb{P}^1(\mathbb{C})} \frac{{\rm d}z \wedge {\rm d}\bar{z}}{(1 + z \bar{z})^2}, \end{gather} where $\frac{{\rm d}z \wedge {\rm d}\bar{z}}{(1 + z \bar{z})^2}$ is a closed smooth $2$-form on the projective manifold $\mathbb{P}^{1, \text{an}}$. Because $\mathbb{P}^{1, \text{an}}$ is compact and K\"ahler, Theorem~\ref{th: dec} applies, giving the Hodge decomposition \begin{gather} \label{eq: H2pi} H^2_{\rm dR}(\mathbb{P}^1) \otimes_{\mathbb{Q}} \mathbb{C} = \bigoplus_{p+q=2} H^{p,q}_{{\rm D}}\big(\mathbb{P}^1, \mathbb{C}\big) \end{gather} which implies that the pure $H$-system $H^2(\mathbb{P}^1)$ has weight 2. Recalling that the differential forms in $H^{p,q}_{{\rm D}}$ contain $p$ copies of the holomorphic differential ${\rm d}z$ and $q$ copies of the anti-holomorphic differential ${\rm d}\bar{z}$, we have that $\big[ \frac{{\rm d}z \wedge {\rm d}\bar{z}}{(1 + z \bar{z})^2} \big] \in H^{1,1}_{{\rm D}}\big(\mathbb{P}^1, \mathbb{C}\big)$. Therefore, the integral~\eqref{eq: int2} corresponds to the motivic period\footnote{Note that we are here using the intuitive definition of algebraic de Rham cohomology of non-affine varieties given in Section~\ref{sec: algdR}.
Although $\frac{{\rm d}z \wedge {\rm d}\bar{z}}{(1 + z \bar{z})^2}$ is not a global algebraic $2$-form on $\mathbb{P}^1(\mathbb{C})$, and indeed there are no non-zero global algebraic $2$-forms on $\mathbb{P}^1(\mathbb{C})$ for dimension reasons, one can still rigorously make sense of $2 \pi {\rm i}$ as a period of $H^2(\mathbb{P}^1)$ via the \v Cech construction mentioned in Section~\ref{sec: algdR}, that is, choosing a Zariski open affine covering of $\mathbb{P}^1$ and computing the algebraic de Rham cohomology as a hypercohomology of sheaves.} \begin{gather} \label{eq: 2pii_2} (2 \pi {\rm i} )^{\rm m} = \bigg[H^2(\mathbb{P}^1), \bigg[\frac{{\rm d}z \wedge {\rm d}\bar{z}}{(1 + z \bar{z})^2} \bigg], \left[\mathbb{P}^{1, \text{an}}\right] \bigg]^{\rm m}. \end{gather} \begin{Remark} The two apparently different motivic periods in~\eqref{eq: 2pii_1} and~\eqref{eq: 2pii_2} are the same, in accordance with the period conjecture. To show this, define \begin{gather} A = \mathbb{P}^{1, \text{an}} \backslash \{ \infty \} \cong \mathbb{C} \subset \mathbb{P}^{1, \text{an}} , \qquad B = \mathbb{P}^{1, \text{an}} \backslash \{ 0 \} \cong \mathbb{C} \subset \mathbb{P}^{1, \text{an}} \end{gather} which satisfy the relations \begin{gather} A \cap B \simeq \mathbb{C}^* \simeq \mathbb{G}_m^{\rm an} , \qquad A \cup B = \mathbb{P}^{1, \text{an}}. \end{gather} By the Mayer--Vietoris theorem applied to the singular homology groups, the following long exact sequence holds \begin{equation} \begin{tikzcd} 0 & H^{\text{s}}_0(A \cup B) \arrow[l] & H^{\text{s}}_0(A) \oplus H^{\text{s}}_0(B) \arrow[l] \\ \underbrace{H^{\text{s}}_1(A) \oplus H^{\text{s}}_1(B)}_{ \simeq 0} \arrow[r] & H^{\text{s}}_1(A \cup B) \arrow[r] & H^{\text{s}}_0(A \cap B) \arrow[u] \\ H^{\text{s}}_1(A \cap B) \arrow[u] & H^{\text{s}}_2(A \cup B) \arrow[l] & \underbrace{H^{\text{s}}_2(A) \oplus H^{\text{s}}_2(B)}_{ \simeq 0} \arrow[l]. \end{tikzcd} \end{equation} Hence, the connecting map $H^{\text{s}}_2(A \cup B) \rightarrow H^{\text{s}}_1(A \cap B)$ is an isomorphism, giving \begin{gather} H^{\text{s}}_1(\mathbb{G}_m^{\rm an}) \simeq H^{\text{s}}_2\big(\mathbb{P}^{1, \text{an}}\big). \end{gather} Similarly, one can prove that the whole $H$-systems $H^1(\mathbb{G}_m)$ and $H^2(\mathbb{P}^1)$ are isomorphic and that the change of coordinates occurring between them relates the cohomology classes $\big[\frac{{\rm d}z \wedge {\rm d}\bar{z}}{(1 + z \bar{z})^2} \big]$ and $\big[ \frac{{\rm d}x}{x} \big]$ and the homology classes $[\gamma_2]$ and $\left[\mathbb{P}^{1, \text{an}} \right]$ via pull-back and push-forward maps, respectively. \end{Remark} \subsubsection[Motivic $\log(z)$]{Motivic $\boldsymbol{\log(z)}$} \label{sec: log} Recall the integral representation of $\log(z)$, $z \in \mathbb{Q} \backslash \{ 0, 1 \}$, given by \begin{gather} \label{eq: log_num} \log(z) = \int_1^z \frac{{\rm d}x}{x}. \end{gather} As in the case of $2 \pi {\rm i}$, this is an integral over the punctured complex plane $\mathbb{C}^* = \mathbb{G}_m^{\rm an}$. However, contrary to the case of $2 \pi {\rm i}$, where the integration path $\gamma_2$ is closed, the integral~\eqref{eq: log_num} is performed on an open path, namely any continuous oriented path $\gamma_1 \subset \mathbb{C}^*$ starting at $1$ and ending at $z$ which does not encircle the origin. Since the integration path is open, we are required to work in the framework of relative homology. Let~$\mathbb{G}_m$ be the ambient variety. Then, $\mathbb{C}^*$ is the underlying topological space, and $\{ 1, z \} \subset \mathbb{C}^*$ with $z \in \mathbb{Q} \backslash \{ 0,1 \}$ is a simple normal crossing divisor. As~shown in Examples~\ref{ex: ex0} and~\ref{ex: exx}, we have \begin{gather} H_1^{{\rm B}}(\mathbb{G}_m, \{1, z\}) = \mathbb{Q}[\gamma_1, \gamma_2], \qquad H^1_{\rm dR}(\mathbb{G}_m, \{1, z\}) = \mathbb{Q} \bigg[\frac{{\rm d}x}{z-1}, \frac{{\rm d}x}{x}\bigg].
\end{gather} Observe that we can write $\big[ \big(\frac{{\rm d}x}{z-1},0,0 \big), \big(\frac{{\rm d}x}{x},0,0 \big) \big] = \big[ \frac{{\rm d}x}{z-1}, \frac{{\rm d}x}{x} \big]$ as a consequence of Proposition~\ref{prop: topdeg}. Setting as usual $H^1(\mathbb{G}_m, \{1, z\}) = \big(H^1_{\rm dR}(\mathbb{G}_m, \{ 1, z \}), H^1_{{\rm B}}(\mathbb{G}_m, \{ 1, z \}), {\rm comp}\big)$, a motivic version of $\log(z)$ is \begin{gather} \log(z)^{\rm m} = \bigg[ H^1(\mathbb{G}_m, \{ 1, z \}), \bigg[\frac{{\rm d}x}{x} \bigg], [ \gamma_1 ] \bigg]^{\rm m} \end{gather} which is alternatively represented by the pairing \begin{align} H^1_{\rm dR}(\mathbb{G}_m, \{1, z\}) \times H_1^{{\rm B}}(\mathbb{G}_m, \{1, z\}) & \longrightarrow \mathbb{C} , \\[1ex] \bigg(\bigg[ \frac{{\rm d}x}{x} \bigg], [\gamma_1] \bigg) & \longmapsto \int_{\gamma_1} \frac{{\rm d}x}{x} = \log(z). \end{align} \subsubsection{Elementary relations} Elementary relations among periods are often simply recast in the formalism of motivic periods. In fact, de Rham and Betti systems of realisations conjecturally capture all algebraic relations among periods. \begin{Example} For $a,b \in \mathbb{Q} \backslash \{ 0, 1 \}$, such that $ab \ne 1$, consider the following injective morphisms of pairs of $\mathbb{Q}$-spaces: \begin{gather} (\mathbb{G}_m, \{ 1, a \}) \hookrightarrow (\mathbb{G}_m, \{ b, ab \}) \hookrightarrow (\mathbb{G}_m, \{ 1, b, ab \}), \\ (\mathbb{G}_m, \{ 1, b \}) \hookrightarrow (\mathbb{G}_m, \{ 1, b, ab \}), \\ (\mathbb{G}_m, \{ 1, ab \}) \hookrightarrow (\mathbb{G}_m, \{ 1, b, ab \}). 
\end{gather} Since the differential form $\frac{{\rm d}x}{x}$ is invariant under rescaling of $x$, we have the motivic representations \begin{gather} \log(a)^{\rm m} = \bigg[H^1(\mathbb{G}_m, \{ 1, b, ab \}), \bigg[\frac{{\rm d}x}{x} \bigg], [b, ab] \bigg]^{\rm m} , \\ \log(b)^{\rm m}= \bigg[H^1(\mathbb{G}_m, \{ 1, b, ab \}), \bigg[\frac{{\rm d}x}{x} \bigg], [1, b] \bigg]^{\rm m}, \\ \log(ab)^{\rm m} = \bigg[H^1(\mathbb{G}_m, \{ 1, b, ab \}), \bigg[\frac{{\rm d}x}{x} \bigg], [1, ab] \bigg]^{\rm m}, \end{gather} where, for $z,w \in \{ 1, b, ab \}$, $[z, w]$ denotes the class in the Betti homology group $H_1^{{\rm B}}(\mathbb{G}_m, \{ 1, b, ab \})$ of a continuous oriented path in $\mathbb{C}^*$ which goes from $z$ to $w$ and does not encircle the origin. Additivity of the Betti homology classes, $[b, ab] \cup [1, b] = [1, ab]$, implies that motivic logarithms satisfy the expected relation \begin{gather} \log(ab)^{\rm m} = \log(a)^{\rm m} + \log(b)^{\rm m}. \end{gather} \end{Example} \begin{Example} Consider $H = H^1(\mathbb{G}_m, \{ 1, z \})$ for $z \in \mathbb{Q} \backslash \{ 0, 1 \}$. Let~$\gamma$ be the union of the paths $\gamma_1$ and $\gamma_2$ in the punctured complex plane, as shown in Fig.~\ref{fig: new_base}. \begin{figure}[htb!]
\centering \includegraphics[scale=.37]{new_baseN.png} \put(-172,98){\makebox(0,0)[lb]{\small \textcolor{ared}{$\gamma$}}} \put(-111,69){\makebox(0,0)[lb]{\small $\gamma_2$}} \put(-137,44){\makebox(0,0)[lb]{\small $0$}} \put(-84,44){\makebox(0,0)[lb]{\small $1$}} \put(-45,59){\makebox(0,0)[lb]{\small $\gamma_1$}} \put(-7,45){\makebox(0,0)[lb]{\small $z$}} \caption{The paths $\gamma_1$, $\gamma_2$, and $\gamma$ in $\mathbb{C}^*$.} \label{fig: new_base} \end{figure} The period obtained by integrating $\omega_2$ along $\gamma$ is \begin{gather} \int_{\gamma}\omega_2 = \int_{\gamma_1 \cup \gamma_2} \omega_2 = \int_{\gamma_1} \omega_2 + \int_{\gamma_2} \omega_2 = \log(z) + 2 \pi {\rm i}, \end{gather} which translates into the following relation among motivic periods \begin{gather} \begin{split} (\log(z) + 2 \pi {\rm i})^{\rm m} &= \left[H, [\omega_2], [\gamma] \right]^{\rm m} =\left[ H, [\omega_2], [\gamma_1 \cup \gamma_2] \right]^{\rm m}\\ &= \left[H, [\omega_2], [\gamma_1] \right]^{\rm m} + \left[ H,[\omega_2], [\gamma_2] \right]^{\rm m} \\ &= \log(z)^{\rm m} + (2 \pi {\rm i} )^{\rm m}, \end{split} \end{gather} where we have used the additivity of the Betti homology classes and the injective morphism $\mathbb{G}_m \hookrightarrow (\mathbb{G}_m, \{ 1, z \})$. Because $ (\log(z) + 2 \pi {\rm i})^{\rm m} \in \mathop{\rm per}\nolimits^{-1} (\log(z) + 2 \pi {\rm i})$ and $\log(z)^{\rm m} + (2 \pi {\rm i} )^{\rm m} \in \mathop{\rm per}\nolimits^{-1} (\log(z)) + \mathop{\rm per}\nolimits^{-1} (2 \pi {\rm i})$, it follows that \begin{gather} \mathop{\rm per}\nolimits^{-1} (\log(z) + 2 \pi {\rm i}) \cap \big(\mathop{\rm per}\nolimits^{-1} (\log(z)) + \mathop{\rm per}\nolimits^{-1} (2 \pi {\rm i} )\big) \ne \varnothing. \end{gather} Note that $\mathop{\rm per}\nolimits^{-1} (\log(z) + 2 \pi {\rm i}) = \mathop{\rm per}\nolimits^{-1} (\log(z)) + \mathop{\rm per}\nolimits^{-1} (2 \pi {\rm i} )$ only holds conjecturally. 
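For concreteness, the two contributions can be checked by direct parametrisation. The following is a routine sketch, in which we choose the unit circle as a representative of $[\gamma_2]$ and the principal branch of the logarithm along $\gamma_1$:

```latex
% Sketch: direct parametrisation of the two contributions.
% Representative choices: \gamma_2(t) = e^{2 \pi i t} for t \in [0,1],
% and the principal branch of \log along \gamma_1.
\begin{gather}
\int_{\gamma_2} \frac{{\rm d}x}{x}
   = \int_0^1 \frac{2 \pi {\rm i} \, {\rm e}^{2 \pi {\rm i} t}}{{\rm e}^{2 \pi {\rm i} t}} \, {\rm d}t
   = 2 \pi {\rm i},
\qquad
\int_{\gamma_1} \frac{{\rm d}x}{x}
   = \big[ \log(x) \big]_1^{z}
   = \log(z).
\end{gather}
```

Summing the two contributions recovers the value $\log(z) + 2 \pi {\rm i}$ computed above.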
\end{Example} Moreover, many new functional equations among motivic periods are found by means of the more abstract, yet more powerful, formalism that we discuss in Section~\ref{sec: motives}. By the period conjecture, new relations among motivic periods automatically translate into new algebraic relations among the corresponding numbers. \section{Feynman motives} \label{sec: motives} \subsection{Singularities and the blow up} Multiple zeta values and convergent Feynman integrals are periods by means of the integral representations~\eqref{eq: zint} and~\eqref{eq: I_G_last}, respectively. In both cases, singularities of the integrand can lie inside the domain of integration, a feature that does not occur in the examples of $2 \pi {\rm i}$ and $\log(z)$. Whenever singularities are present, they must be handled with particular care. \begin{Example} The period $\zeta(2)$ is given by the integral \begin{gather} \label{eq: zeta2} \zeta(2) = \int\displaylimits_{1 \ge x_1 \ge x_2 \ge 0} \frac{{\rm d}x_1}{x_1} \wedge \frac{{\rm d}x_2}{1-x_2} \end{gather} over the complex manifold $\mathbb{C}^2$. The domain of integration is the simplex \begin{gather} \sigma = \{(x_1,x_2) \in \mathbb{C}^2 \,|\, 1 \ge x_1 \ge x_2 \ge 0 \} \end{gather} and the integrand is the differential $2$-form \begin{gather} \omega = \frac{{\rm d}x_1}{x_1} \wedge \frac{{\rm d}x_2}{1-x_2}. \end{gather} Observing that $\mathbb{C}^2$ is isomorphic to the topological space of complex points $\mathbb{A}^2(\mathbb{C})$, underlying the affine\footnote{For any positive integer $n$, the $n$-dimensional affine variety over $\mathbb{Q}$ is defined as $\mathbb{A}^n = \mathop{\rm Spec} \mathbb{Q}[x_1, \dots, x_n]$. For any field extension $\mathbb{K} \supseteq \mathbb{Q}$, the space of $\mathbb{K}$-points of $\mathbb{A}^n$ is $\mathbb{A}^n(\mathbb{K}) = \mathbb{K}^n$.
The multiplicative group $\mathbb{G}_m = \mathop{\rm Spec} \mathbb{Q}[x, \frac{1}{x}]$ satisfies $\mathbb{G}_m = \mathop{\rm Spec} \mathbb{Q}[x_1, x_2] / (1 - x_1 x_2) \simeq \mathbb{A}^1 \backslash \{0\}$, that is, $\mathbb{G}_m$ embeds in $\mathbb{A}^2$ as a hyperbola.} $\mathbb{Q}$-algebraic variety $\mathbb{A}^2 = \mathop{\rm Spec} \mathbb{Q}[x_1,x_2]$, we may try to build $\zeta(2)^{\rm m}$ as we did for the examples in Section~\ref{sec: examples}. Consider the lines $l_0 = \{x_1 = 0 \}$ and $l_1 = \{x_2 = 1 \}$ in the affine plane $\mathbb{A}^2$. Since $L = l_0 \cup l_1$ is the locus of singular points of $\omega$, the latter is an algebraic $2$-form on $X = \mathbb{A}^2 \backslash L$. Thus, $[\omega ]$ is a class in the second algebraic de Rham cohomology group of $X$ and, consequently, we may want to consider the integral~\eqref{eq: zeta2} as a period of $X$ relative to some divisor containing the boundary of $\sigma$. In an attempt to do so, define the simple normal crossing divisor \begin{gather} D = \{ x_1 = x_2 \} \cup \{ x_1 = 1 \} \cup \{ x_2 = 0 \} \subset \mathbb{A}^2 \end{gather} containing $\partial \sigma$. Note that $D$ is not contained in $X$ because $D \cap L \ne \varnothing $. However, the divisor $D \backslash (D \cap L) \subset X$ no longer contains $\partial \sigma$. The problem arises from the fact that $\sigma$ itself is not contained in $X$, intersecting the singular locus $L$ in two points \begin{gather} p = (0,0) = \sigma \cap l_0 = D \cap l_0, \qquad q = (1,1) = \sigma \cap l_1 = D \cap l_1. \end{gather} Removing the singular points $p$, $q$ from $D$ and considering the second relative $H$-system $H^2(X, D$ $\backslash (D \cap L))$ does not resolve this technical issue, because $[\sigma]$ is not a class in $H_2^{{\rm B}}(X, D\backslash (D \cap L))$. See Fig.~\ref{fig: z2}. \begin{figure}[htb!]
\centering \includegraphics[scale=0.3]{z2N.png} \put(-40,120){\makebox(0,0)[lb]{\small \textcolor{ared}{$D$}}} \put(-40,93){\makebox(0,0)[lb]{\small \textcolor{ared}{$q$}}} \put(-125,8){\makebox(0,0)[lb]{\small \textcolor{ared}{$p$}}} \put(-125,55){\makebox(0,0)[lb]{\small \textcolor{agreen}{$l_0$}}} \put(-85,92){\makebox(0,0)[lb]{\small \textcolor{agreen}{$l_1$}}} \put(-75,42){\makebox(0,0)[lb]{\small \textcolor{ablue}{$\sigma$}}} \caption{Construction of $\zeta(2)^{\rm m}$ in the affine plane $\mathbb{A}^2$.} \label{fig: z2} \end{figure} \end{Example} The example of $\zeta(2)$ shows explicitly that a direct removal of singular points fails, and it motivates a more elaborate geometric construction, called \textit{blow up}, which proves successful in the case of $\zeta(2)$ and many more examples. Graphically, we may illustrate the procedure as the removal of a whole region of space centred at the singularity and the corresponding reshaping of the integration domain. See Fig.~\ref{fig: blow} for a qualitative representation of how the blow up of~the two singular points $p,q \in \mathbb{A}^2$ acts on $\sigma$ in the case of $\zeta(2)$. \begin{figure}[htb!] \centering \subfloat[][{Before the blow up}] {\includegraphics[scale=.3]{blow1.png}} \qquad \subfloat[][{After the blow up}] {\includegraphics[scale=.3]{blow2.png}} \caption{Qualitative illustration of the blow up of the singular points of $\zeta(2)$.} \label{fig: blow} \end{figure} \subsection{Motivic multiple zeta values} Consider $\zeta(2)$ again.
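Before performing the blow up, it is worth recording the elementary evaluation of~\eqref{eq: zeta2}; the following sketch expands $\frac{1}{1-x_2}$ as a geometric series and integrates term by term:

```latex
% Sketch: term-by-term evaluation of the iterated integral.
\begin{gather}
\int\displaylimits_{1 \ge x_1 \ge x_2 \ge 0} \frac{{\rm d}x_1}{x_1} \wedge \frac{{\rm d}x_2}{1-x_2}
   = \int_0^1 \frac{{\rm d}x_1}{x_1} \int_0^{x_1} \sum_{n \ge 0} x_2^n \, {\rm d}x_2
   = \sum_{n \ge 1} \int_0^1 \frac{x_1^{n-1}}{n} \, {\rm d}x_1
   = \sum_{n \ge 1} \frac{1}{n^2}
   = \zeta(2).
\end{gather}
```

The exchange of sum and integral is justified by monotone convergence, since all the terms are non-negative on the domain of integration.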
The blow up of the affine plane $\mathbb{A}^2$ along the singular points $p$, $q$ is defined as the closed subvariety \begin{gather} Y =\mathop{\rm Blow}_{p,q}\big(\mathbb{A}^2\big) \subset \mathbb{A}^2 \times \mathbb{P}^1 \times \mathbb{P}^1 \end{gather} given by the equations \begin{gather} x_1 \alpha_1 = x_2 \beta_1,\qquad (x_1 -1) \alpha_2 = (x_2 -1) \beta_2, \end{gather} where $[\alpha_i : \beta_i]$, $i=1,2$, are homogeneous coordinates on the two copies of $\mathbb{P}^1$. The projection of $Y$ onto the first factor in $\mathbb{A}^2 \times \mathbb{P}^1 \times \mathbb{P}^1$ is the proper surjective map \begin{align} \pi\colon\ Y & \longrightarrow \mathbb{A}^2 , \\ (x_1, x_2) \times [\alpha_1 : \beta_1] \times [\alpha_2 : \beta_2] & \longmapsto (x_1,x_2). \end{align} Let us write $\pi^{-1}$ to denote the inverse image operator\footnote{Observe that $\pi^{-1}$ is not a map defined on the affine plane $\mathbb{A}^2$ because $\pi$ is not invertible.} under the projection $\pi$. The inverse images of the singular points $p,q \in \mathbb{A}^2$ are the projective lines $E_p,E_q \subset Y$, called \textit{exceptional divisors}. Precisely, we have \begin{gather} \pi^{-1}(p) = \pi^{-1}(0,0) = (0,0) \times \mathbb{P}^1 \times [1:1] = E_p, \\ \pi^{-1}(q) = \pi^{-1}(1,1) = (1,1) \times [1:1] \times \mathbb{P}^1 = E_q. \end{gather} Moreover, the restriction of $\pi$ to the complement in $Y$ of the exceptional divisors $E_p$, $E_q$ \begin{align} \pi |_{Y \backslash (E_p \cup E_q)}\colon\ Y \backslash (E_p \cup E_q) & \longrightarrow \mathbb{A}^2 \backslash \{p,q\}, \\ (x_1,x_2) \times [x_2 : x_1] \times [x_2 - 1 : x_1 - 1] & \longmapsto (x_1,x_2) \end{align} is an isomorphism. For any closed subset $C \subset \mathbb{A}^2$, the inverse image $\pi^{-1}(C)$ is called the \textit{total transform} of $C$.
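The exceptional fibres can be read off directly from the defining equations of $Y$; the following quick check is a sketch:

```latex
% Over p = (0,0): the first equation reads 0 = 0, leaving [\alpha_1 : \beta_1] free,
% while the second reads -\alpha_2 = -\beta_2, forcing [\alpha_2 : \beta_2] = [1:1].
\begin{gather}
\pi^{-1}(0,0) = (0,0) \times \mathbb{P}^1 \times [1:1] = E_p.
\end{gather}
% Over q = (1,1): symmetrically, the first equation forces [\alpha_1 : \beta_1] = [1:1],
% while the second reads 0 = 0, leaving [\alpha_2 : \beta_2] free.
\begin{gather}
\pi^{-1}(1,1) = (1,1) \times [1:1] \times \mathbb{P}^1 = E_q.
\end{gather}
```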
The \textit{strict transform} of $C$, denoted $\hat{C}$, is instead the closed subset of $Y$ obtained by first removing the points $p,q$ if they belong to $C$, then taking the inverse image under $\pi$, and finally taking the Zariski closure, that is \begin{gather} \label{eq: strictdef} \hat{C} = \overline{\pi^{-1}(C \backslash \{p,q\})} \subseteq \pi^{-1}(C). \end{gather} It follows that the strict transforms of $l_0, l_1$ are the affine lines \begin{gather} L_0 = \hat{l}_0 = \big\{ (0,x_2) \times [1:0] \times [1-x_2:1] \,|\, x_2 \in \mathbb{A}^1 \big\}, \\ L_1 = \hat{l}_1 = \big\{ (x_1,1) \times [1:x_1] \times [0:1] \,|\, x_1 \in \mathbb{A}^1 \big\} \end{gather} and their total transforms are \begin{gather} \pi^{-1}(l_0) = L_0 \cup E_p, \qquad \pi^{-1}(l_1) = L_1 \cup E_q. \end{gather} We observe that $L_0$, $E_p$ and $L_1, E_q$ intersect in only one point each. Precisely \begin{gather} L_0 \cap E_p = \{ (0,0) \times [1:0] \times [1:1] \}, \\ L_1 \cap E_q = \{ (1,1) \times [1:1] \times [0:1] \}.\label{eq: LcapsE} \end{gather} Moreover, we have \begin{gather} L_1 \cap E_p = \varnothing = L_0 \cap E_q, \\ L_1 \cap L_0 = \{ (0,1) \times [1:0] \times [0:1] \}. \end{gather} In a similar way to~\eqref{eq: strictdef}, but taking the closure in the analytic topology, we define the strict transform $\hat{\sigma}$ of the domain of integration. Observing that the closed points of $E_p$ can be interpreted as lines passing through $p$, and analogously that the closed points of $E_q$ can be interpreted as lines passing through $q$, we obtain \begin{gather} \hat{\sigma} \cap E_p = \{ (0,0) \times [t:1] \times [1:1] \,|\, 0 \le t \le 1 \}, \\ \hat{\sigma} \cap E_q = \{ (1,1) \times [1:1] \times [1:t] \,|\, 0 \le t \le 1 \} \end{gather} which, combined with~\eqref{eq: LcapsE}, imply that \begin{gather} \label{eq: intersect} \hat{\sigma} \cap L_0 = \varnothing , \qquad \hat{\sigma} \cap L_1 = \varnothing. 
\end{gather} See Fig.~\ref{fig: blow_last} for a graphical representation of the blow up. \begin{figure}[htb!] \centering \includegraphics[scale=.3]{blow_lastN.png} \put(-125,77){\makebox(0,0)[lb]{\small \textcolor{agreen}{$L_0$}}} \put(-97,102){\makebox(0,0)[lb]{\small \textcolor{agreen}{$L_1$}}} \put(-70,47){\makebox(0,0)[lb]{\small \textcolor{ablue}{$\hat{\sigma}$}}} \put(-27,65){\makebox(0,0)[lb]{\small \textcolor{aorange}{$E_q$}}} \put(-75,10){\makebox(0,0)[lb]{\small \textcolor{aorange}{$E_p$}}} \caption{The strict transform of $\sigma$ in the blow up $Y$.} \label{fig: blow_last} \end{figure} While the domain of integration $\sigma$ is replaced by its strict transform $\hat{\sigma}$, the differential form $\omega$ is replaced by its pull-back $\pi^*(\omega)$, denoted by $\hat{\omega}$. Let~us now show that the pull-back $\hat{\omega}$ is singular\footnote{In principle, $\hat{\omega}$ might have singularities along the total transform of $l_0 \cup l_1$, i.e., $L_0 \cup L_1 \cup E_p \cup E_q$. However, in the case of $\zeta(2)$, it turns out that $\hat{\omega}$ has no singularities along the exceptional divisors. More generally, this condition determines whether the blow up prescription turns out to be successful or not for a given period.} only on the strict transform $L = L_0 \cup L_1$. We~use local coordinates on the blow up $Y$. In particular, consider a patch of $Y$ around the point $L_0 \cap E_p$ as shown in~Fig.~\ref{fig: patch}. \begin{figure}[htb!]
\centering \includegraphics[scale=.5]{patchN.png} \put(-70,105){\makebox(0,0)[lb]{\small \textcolor{agreen}{$L_0$}}} \put(-115,95){\makebox(0,0)[lb]{\small \textcolor{aorange}{$E_p$}}} \caption{Local patch of $Y$ around the intersection of $L_0$ and $E_p$.} \label{fig: patch} \end{figure} Here, a local system of coordinates is explicitly given by \begin{gather} t = \frac{x_1}{x_2}=\frac{\beta_1}{\alpha_1}, \qquad s = x_2, \end{gather} where $L_0$ and $E_p$ have equations $t=0$ and $s=0$, respectively. Applying this change of variables to $\hat{\omega}$, we have \begin{gather} \hat{\omega} = \frac{{\rm d} (st)}{st} \wedge \frac{{\rm d}s}{1-s} = \frac{{\rm d}s}{s} \wedge \frac{{\rm d}s}{1-s} + \frac{{\rm d}t}{t} \wedge \frac{{\rm d}s}{1-s} = \frac{{\rm d}t}{t} \wedge \frac{{\rm d}s}{1-s}. \end{gather} It follows that $\hat{\omega}$ is singular along the strict transform $L_0$, while it is smooth along the exceptional divisor $E_p$, because it has no pole at $s=0$. Analogously, we find that $\hat{\omega}$ is singular along $L_1$, but not along $E_q$. Then, the singular locus of $\hat{\omega}$ is $L$. Observe that the complement $Y \backslash L$ is the closed affine subvariety of $\mathbb{A}^2 \times \mathbb{A}^1 \times \mathbb{A}^1$ given by the equations \begin{gather} x_1 u = x_2, \qquad x_1 -1 = (x_2 -1) v, \end{gather} where $u = \alpha_1/\beta_1$ and $v = \beta_2/\alpha_2$ are affine coordinates on the two copies of $\mathbb{A}^1$, which are well defined on $Y \backslash L$ because $\beta_1$ and $\alpha_2$ are non-vanishing there. Therefore, the differential form $\hat{\omega}$ determines a class in $H^2_{\rm dR}(Y \backslash L)$. Moreover, it follows from~\eqref{eq: intersect} that, moving from the original affine plane $\mathbb{A}^2$ to the blow up $Y$, the singular locus of the differential form $\hat{\omega}$ and the domain of integration $\hat{\sigma}$ are disjoint. As usual, we may want to consider the integral~\eqref{eq: zeta2} as a period of~$Y \backslash L$ relative to some divisor containing the boundary of $\hat{\sigma}$.
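The analogous local computation near $L_1 \cap E_q$, omitted above, runs as follows. This is a sketch in local coordinates mirroring the previous patch, namely $t' = \frac{x_2-1}{x_1-1} = \frac{\alpha_2}{\beta_2}$ and $s' = x_1 - 1$, in which $L_1$ and $E_q$ have local equations $t' = 0$ and $s' = 0$, respectively:

```latex
% Local check near L_1 \cap E_q, with x_1 = 1 + s' and x_2 = 1 + s' t'.
\begin{gather}
\hat{\omega}
   = \frac{{\rm d}(1+s')}{1+s'} \wedge \frac{{\rm d}(s' t')}{- s' t'}
   = - \frac{{\rm d}s'}{1+s'} \wedge \bigg( \frac{{\rm d}s'}{s'} + \frac{{\rm d}t'}{t'} \bigg)
   = \frac{{\rm d}t'}{t'} \wedge \frac{{\rm d}s'}{1+s'},
\end{gather}
```

so that $\hat{\omega}$ has a pole along $L_1 = \{t' = 0\}$ but is regular along the exceptional divisor $E_q = \{s' = 0\}$, as claimed.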
The blow up construction is thus successful for the period $\zeta(2)$ if $[\hat{\sigma}]$ turns out to be a class in the given relative Betti homology group. To see this, recall that $\partial \sigma$ is contained in the union $D$ of the affine lines \begin{gather} m_1 = \{ x_1 = x_2 \}, \qquad m_2 = \{ x_1 = 1 \}, \qquad m_3 = \{ x_2 = 0 \}. \end{gather} Thus, we naturally consider the normal crossing divisor $M \subset Y$ defined by \begin{gather} M = \pi^{-1}(D) = \pi^{-1}(m_1 \cup m_2 \cup m_3) = E_p \cup E_q \cup M_1 \cup M_2 \cup M_3, \end{gather} where $M_i= \hat{m_i}$ denotes the strict transform of $m_i$ for $i=1,2,3$. Note that $L \cap M$ is the union of the points $L_0 \cap E_p$ and $L_1 \cap E_q$ expressed in~\eqref{eq: LcapsE}. Therefore, $\hat{\sigma}$ is contained in $Y \backslash L$ and $\partial \hat{\sigma}$ is contained in $M \backslash (M \cap L) \subset Y \backslash L$, implying \begin{gather} [\hat{\sigma}] \in H_2^{{\rm B}}(Y \backslash L, M \backslash (M \cap L)). \end{gather} Besides, the restriction of $\hat{\omega}$ to every irreducible component $M_i$, $i=1,2,3$, of $M$ gives zero, implying \begin{gather} [\hat{\omega}] \in H^2_{\rm dR}(Y \backslash L, M \backslash (M \cap L)). \end{gather} Setting $H = H^2(Y \backslash L, M \backslash (M \cap L))$, the resulting motivic version of $\zeta(2)$ is \begin{gather} \zeta(2)^{\rm m} = \left[ H, [\hat{\omega}], [\hat{\sigma}] \right]^{\rm m}. \end{gather} Indeed, the pairing of $[\hat{\sigma}]$ and $[\hat{\omega}]$ yields \begin{gather} \int_{\hat{\sigma}} \hat{\omega} = \int_{ \hat{\sigma}} \pi^*(\omega) = \int_{\pi_*(\hat{\sigma})} \omega = \int_{\sigma} \omega = \zeta(2) \end{gather} by the equivalence relation under change of variables in $\mathcal{P}^{\rm m}$. Moreover, the whole canonical period matrix of $H$ is \begin{gather} \label{eq: matrixZ2} \begin{pmatrix} 1 & \zeta(2) \\[1ex] 0 & (2 \pi {\rm i})^2 \end{pmatrix}\!. 
\end{gather} \subsection{Motivic Feynman integrals} \label{sec: intGmot} To overcome analogous singularity issues, the blow up procedure can be similarly applied to generic MZVs and other families of periods, such as convergent Feynman integrals. For an~expo\-sition of the general computation of the $H$-system of a blow up we refer to Voisin~\cite{Voi02}. Let $G$ be a primitive log-divergent Feynman graph, $E_G$ the collection of its edges, and \mbox{$n_G = |E_G|$}, as in Section~\ref{sec: primitive}. Recall that $x_e$ denotes the Schwinger parameter associated to $e \in E_G$, and $\Psi_G$, $I_G$, and $X_G$ denote the first graph polynomial, the Feynman integral, and the graph hypersurface, as given in~\eqref{eq: Psi_G},~\eqref{eq: I_G_last}, and~\eqref{eq: X_G}, respectively. Denote by $\omega_G$ and $\sigma$ the integrand and the domain of integration of $I_G$. Since $\omega_G$ is a top-degree algebraic differential form on~$\mathbb{P}^{n_G-1} \backslash X_G$, and $\partial \sigma$ is contained in the union $D$ of the coordinate hyperplanes $\{x_e = 0$, $e \in E_G \}$, we may intuitively try to build the motive $I_G^{\rm m}$ on the relative $H$-system \begin{gather} H^{n_G-1}\big(\mathbb{P}^{n_G-1} \backslash X_G, D \backslash (D \cap X_G)\big). \end{gather} However, this na\"ive attempt fails whenever the hypersurface $X_G$ intersects the integration cycle~$\sigma$ non-trivially, implying the presence of non-negligible singularities: in that case, $\sigma$ does not define a class in the corresponding na\"ive relative Betti homology group. To successfully build the motive $I_G^{\rm m}$ in the presence of singularities, the blow up technique is applied.
A linear subvariety $L \subset \mathbb{P}^{n_G-1}$ defined by the vanishing of a subset of the set of Schwinger parameters is called a \textit{coordinate linear space}, while its subspace of real points with non-negative coordinates is denoted by \begin{gather} L(\mathbb{R}_{\ge 0}) = \{ [x_e]_{e \in E_G} \in L \,|\, x_e \in \mathbb{R}_{\ge 0} \}. \end{gather} Since the coefficients of $\Psi_G$ are positive, the locus of problematic singularities is \begin{gather} \sigma \cap X_G(\mathbb{C}) = \bigcup_{L \subset X_G} L(\mathbb{R}_{\ge 0}), \end{gather} where the union is taken over all coordinate linear spaces $L \subset X_G$. \begin{Remark} The coordinate linear spaces $L \subset X_G$ are in one-to-one correspondence with the subgraphs $\gamma \subset G$ such that $l_{\gamma} > 0$. It~follows that \begin{gather} \sigma \cap X_G(\mathbb{C}) = \bigcup_{\gamma \subset G} L_{\gamma}(\mathbb{R}_{\ge 0}), \end{gather} where the union is taken over all subgraphs $\gamma \subset G$ with $l_{\gamma} > 0$, and $L_{\gamma}$ is the linear subvariety of $\mathbb{P}^{n_G-1}$ defined by the equations $\{ x_e = 0 , e \in E_{\gamma} \}$. \end{Remark} The following theorem is proven, and an explicit algorithmic construction of the blow ups is given, by Bloch, Esnault and Kreimer~\cite{BEK06}. \begin{Theorem} \label{th: BEK_th} Let $G$ be a primitive log-divergent Feynman graph. There exists a tower \begin{gather} \pi\colon\ P = P_r \rightarrow P_{r-1} \rightarrow \dots \rightarrow P_1 \rightarrow P_0 = \mathbb{P}^{n_G-1} \end{gather} such that, for each $i=1,\dots,r$, $P_i$ is obtained by blowing up $P_{i-1}$ along the strict transform of a coordinate linear space $L_i \subset X_G$, and the following conditions hold: \begin{itemize}\itemsep=0pt \item[$(1)$] The pulled-back differential $\hat{\omega}_G = \pi^* \omega_G$ has no poles along the exceptional divisors associated to the blow ups. 
\item[$(2)$] Let $B$ be the total transform of $D$ in $P$, that is \begin{gather} B = \pi^{-1}(D) = \pi^{-1} \bigg( \bigcup_{e \in E_G} \{ x_e = 0 \} \bigg). \end{gather} Then, $B \subset P$ is a normal crossing divisor such that none of the non-empty intersections of its irreducible components is contained in the strict transform $Y_G$ of $X_G$ in $P$. \item[$(3)$] The strict transform of $\sigma$ in $P$ does not meet $Y_G$, that is \begin{gather} \hat{\sigma} \cap Y_G(\mathbb{C}) = \varnothing. \end{gather} \end{itemize} \end{Theorem} As a consequence of Theorem~\ref{th: BEK_th}, the motive $I_G^{\rm m}$ associated to any primitive log-divergent Feynman graph $G$ can be written explicitly. Since $\partial \hat{\sigma} \subset B \backslash (B \cap Y_G)$, the domain of integration defines the class \begin{gather} [\hat{\sigma}] \in H_{n_G-1}^{{\rm B}}(P \backslash Y_G, B \backslash (B \cap Y_G)) \end{gather} called the \textit{Betti framing}, while the integrand defines the class \begin{gather} [\hat{\omega}_G] \in H^{n_G-1}_{\rm dR}(P \backslash Y_G, B \backslash (B \cap Y_G)) \end{gather} called the \textit{de Rham framing}. Brown and Doryn~\cite{BD13} present a method for explicit computation of the framings on the cohomology of Feynman graph hypersurfaces. Then, the de Rham and Betti system of realisations $H_G = H^{n_G-1}(P \backslash Y_G, B \backslash (B \cap Y_G))$ is called the \textit{graph $H$-system},\footnote{The graph $H$-system is also explicitly known in the general case of renormalised amplitudes of single-scale graphs due to the work of Brown and Kreimer~\cite{BK13}, who paved the way for the rigorous investigation of divergent Feynman graphs and their renormalised amplitudes from an algebro-geometric perspective.} and the motivic Feynman integral $I_G^{\rm m}$ is given by \begin{gather} I_G^{\rm m} = \big[H_G, [\hat{\omega}_G], [\hat{\sigma}]\big]^{\rm m}.
\end{gather} Indeed, the pairing of the classes $[\hat{\omega}_G]$ and $[\hat{\sigma}]$ yields the period \begin{gather} \int_{\hat{\sigma}} \hat{\omega}_G = \int_{ \hat{\sigma}} \pi^*(\omega_G) = \int_{\pi_*(\hat{\sigma})} \omega_G = \int_{\sigma} \omega_G = I_G \end{gather} by the equivalence relation under change of variables in $\mathcal{P}^{\rm m}$. \begin{Example} Adopting the notation \begin{gather} \mathcal{P}_{\rm log} = \mathbb{Q} \langle I_G \,|\, G \text{ is a primitive log-divergent Feynman graph} \rangle, \\ \mathcal{P}_{\phi^4} = \mathbb{Q} \big\langle I_G \,|\, G \text{ is a primitive log-divergent Feynman graph in $\phi^4$ theory} \big\rangle, \end{gather} we observe that the sequence of inclusions $\mathcal{P}_{\phi^4} \subset \mathcal{P}_{\rm log} \subset \mathcal{P}$ is preserved after promoting periods to periods of motives, that is $\mathcal{P}_{\phi^4}^{\rm m} \subset \mathcal{P}_{\rm log}^{\rm m} \subset \mathcal{P}^{\rm m}$. \end{Example} Many concrete results on the structure of $\mathcal{P}_{\rm log}$ follow from the study of the corresponding motivic space $\mathcal{P}_{\rm log}^{\rm m}$. For example, the following proposition on graph $H$-systems is proven by~Brown~\cite{Bro17_2} within the formalism of motivic Feynman integrals. \begin{Proposition} \label{prop: trivial} Let $G$ be a primitive log-divergent Feynman graph. If~$G$ has a single vertex, that is $v_G = 1$, or if $G$ has a single loop, that is $l_G =1$, then its $H$-system $H_G$ is isomorphic to the pure Hodge--Tate system $\mathbb{Q}(0)$. \end{Proposition} We observe that Proposition~\ref{prop: trivial} makes no restriction on the physicality of the graph $G$, which can have arbitrary vertex-degrees, so that $I_G$ belongs to $\mathcal{P}_{\rm log}$, but not necessarily to $\mathcal{P}_{\phi^4}$. 
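As a minimal illustration of Proposition~\ref{prop: trivial}, consider the one-loop graph $G$ with two edges, for which $n_G = 2$, $l_G = 1$, and $\Psi_G = x_1 + x_2$. The following sketch assumes the standard parametric normalisation $I_G = \int_{\sigma} \Omega_G / \Psi_G^2$, with $\Omega_G$ the standard projective $(n_G-1)$-form:

```latex
% One-loop graph with two edges: n_G = 2, l_G = 1, \Psi_G = x_1 + x_2.
% In the affine chart x_2 = 1 of \mathbb{P}^1, the Feynman period reduces to
\begin{gather}
I_G = \int_0^{\infty} \frac{{\rm d}x_1}{(x_1 + 1)^2} = 1 \in \mathbb{Q},
\end{gather}
```

a rational number, consistently with the fact that the periods of the pure Hodge--Tate system $\mathbb{Q}(0)$ are rational.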
\subsection{Tannakian formalism} We briefly introduce the fundamentals of the theory of \textit{Tannakian categories}, following the more detailed and comprehensive exposition by Deligne et al~\cite{Detal82}. The concept of a Tannakian category was first introduced by Saavedra Rivano~\cite{Saa72} to encode the properties of the category $\mathop{\rm Rep}\nolimits_{\mathbb{K}}(G)$ of the finite-dimensional $\mathbb{K}$-linear representations of an affine group scheme $G$ over a~field~$\mathbb{K}$. Let~us recall some preliminary notions in category theory. Let~$\mathbb{K}$ be a subfield of $\mathbb{C}$. \begin{Definition} A \textit{$\mathbb{K}$-linear category} $\mathcal{C}$ is an additive category such that, for each pair of objects~$X,Y \in \text{Ob}(\mathcal{C})$, the group $\mathop{\rm Hom}_{\mathcal{C}}(X,Y)$ is a $\mathbb{K}$-vector space and the composition maps are $\mathbb{K}$-bilinear. \end{Definition} \begin{Definition} Let $\mathcal{C}$ be a $\mathbb{K}$-linear category endowed with a $\mathbb{K}$-bilinear functor $\otimes\colon \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$. \begin{itemize}\itemsep=0pt \item[$(a)$] An \textit{associativity constraint} for $(\mathcal{C}, \otimes)$ is a natural transformation \begin{gather} \phi = \phi_{\cdot , \cdot , \cdot} \colon\ \cdot \otimes (\cdot \otimes \cdot) \longrightarrow (\cdot \otimes \cdot ) \otimes \cdot \end{gather} such that the following two conditions hold: \begin{itemize}\itemsep=0pt \item[$(a.1)$] For all $X,Y,Z \in \text{Ob}(\mathcal{C})$, the map $\phi_{X,Y,Z}$ is an isomorphism. 
\item[$(a.2)$] For all $X,Y,Z,T \in \text{Ob}(\mathcal{C})$, the following diagram commutes: \begin{equation} \begin{tikzcd}[column sep=-2.3em] & & X \otimes (Y \otimes (Z \otimes T)) \arrow[dll, "\text{Id} \otimes \phi_{Y,Z,T}"'] \arrow[drr, "\phi_{X,Y,Z \otimes T}"] & &\\ X \otimes ((Y \otimes Z) \otimes T) \arrow[dr, "\phi_{X,Y \otimes Z,T}"'] & & & & (X \otimes Y) \otimes (Z \otimes T) \arrow[dl, "\phi_{X \otimes Y,Z,T}"]\\ & (X \otimes (Y \otimes Z)) \otimes T \arrow[rr, "\phi_{X,Y,Z} \otimes \text{Id}"'] & & ((X \otimes Y) \otimes Z) \otimes T. & \end{tikzcd} \end{equation} \end{itemize} \item[$(b)$] A \textit{commutativity constraint} for $(\mathcal{C}, \otimes)$ is a natural transformation \begin{gather} \psi = \psi_{\cdot , *}\colon \cdot \otimes * \longrightarrow * \otimes \cdot \end{gather} such that the following two conditions hold: \begin{itemize}\itemsep=0pt \item[$(b.1)$] For all $X,Y \in \text{Ob}(\mathcal{C})$, the map $\psi_{X,Y}$ is an isomorphism. \item[$(b.2)$] For all $X,Y \in \text{Ob}(\mathcal{C})$, the following composition is the identity: \begin{gather} \psi_{Y,X} \circ \psi_{X,Y} \colon\ X \otimes Y \longrightarrow X \otimes Y. \end{gather} \end{itemize} \item[$(c)$] An associativity constraint and a commutativity constraint are \textit{compatible} if, for all $X,Y,\allowbreak Z \in \text{Ob}(\mathcal{C})$, the following diagram commutes: \begin{equation} \begin{tikzcd}[column sep=+3.8em] & X \otimes (Y \otimes Z) \arrow[r, "\phi_{X,Y,Z}"] \arrow[dl, "\text{Id} \otimes \psi_{Y,Z}"'] & (X \otimes Y) \otimes Z \arrow[dr, "\psi_{X \otimes Y, Z}"] & \\ X \otimes (Z \otimes Y) \arrow[dr, "\phi_{X,Z,Y}"'] & & & Z \otimes (X \otimes Y) \arrow[dl, "\phi_{Z,X,Y}"] \\ & (X \otimes Z) \otimes Y \arrow[r, "\psi_{X,Z} \otimes \text{Id}"'] & (Z \otimes X) \otimes Y.
& \end{tikzcd} \end{equation} \item[($d)$] A pair $(U,u)$ consisting of an object $U \in \text{Ob}(\mathcal{C})$ and an isomorphism $u\colon U \rightarrow U \otimes U$ is an \textit{identity object} if the functor $X \mapsto U \otimes X$ is an equivalence of categories. \end{itemize} \end{Definition} \begin{Definition} A \textit{$\mathbb{K}$-linear tensor category} is a tuple $(\mathcal{C}, \otimes, \phi, \psi)$ consisting of a $\mathbb{K}$-linear cate\-gory $\mathcal{C}$, a $\mathbb{K}$-bilinear functor $\otimes \colon \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$, and compatible associativity and commutativity constraints $\phi$, $\psi$ such that $\mathcal{C}$ contains an identity object. \end{Definition} \begin{Definition} An object $L \in \text{Ob}(\mathcal{C})$ is \textit{invertible} if the functor $X \mapsto L \otimes X$ is an equivalence of categories. Equivalently, $L$ is invertible if and only if there exists an object $L' \in \text{Ob}(\mathcal{C})$ such that $L \otimes L' \simeq \mathbf{1}$. Then, $L'$ is also invertible. \end{Definition} \begin{Definition} Let $(\mathcal{C}, \otimes)$ be a $\mathbb{K}$-linear tensor category, where we omit the constraints $\phi$, $\psi$ for simplicity, and let $X,Y \in \text{Ob}(\mathcal{C})$. Assume that there exists an object $Z \in \text{Ob}(\mathcal{C})$ such that, for all $T \in \text{Ob}(\mathcal{C})$, the functors $T \mapsto \mathop{\rm Hom}(T,Z)$ and $T \mapsto \mathop{\rm Hom}(T \otimes X, Y)$ admit a functorial isomorphism \begin{gather} \mathop{\rm Hom}(T,Z) \xlongrightarrow{\sim} \mathop{\rm Hom}(T \otimes X, Y). \end{gather} In this case, the functor $T \mapsto \mathop{\rm Hom}(T \otimes X, Y)$ is said to be \textit{representable} and the object $Z$ is called the \textit{internal Hom} between the objects $X$ and $Y$. It~is alternatively written as $\underline{\mathop{\rm Hom}}(X,Y)$ and it is unique up to isomorphism. 
\end{Definition} \begin{Definition} The \textit{dual} of an object $X \in \text{Ob}(\mathcal{C})$ is defined as $X^{\vee} = \underline{\mathop{\rm Hom}}(X,\mathbf{1})$. If~$X^{\vee}$ and~${(X^{\vee})}^{\vee}$ exist, then there is a natural morphism $X \rightarrow {\big(X^{\vee}\big)}^{\vee}$, and the object $X$ is \textit{reflexive} if this morphism is an isomorphism. \end{Definition} \begin{Definition} A $\mathbb{K}$-linear tensor category $(\mathcal{C}, \otimes)$ is \textit{rigid} if the following conditions hold: \begin{itemize}\itemsep=0pt \item[(1)] For all $X,Y \in \text{Ob}(\mathcal{C})$, $\underline{\mathop{\rm Hom}}(X,Y)$ exists. \item[(2)] For all $X_1,X_2,Y_1,Y_2 \in \text{Ob}(\mathcal{C})$, the natural morphism \begin{gather} \underline{\mathop{\rm Hom}}(X_1,Y_1) \otimes \underline{\mathop{\rm Hom}}(X_2,Y_2) \longrightarrow \underline{\mathop{\rm Hom}}(X_1 \otimes X_2,Y_1 \otimes Y_2) \end{gather} is an isomorphism. \item[(3)] All objects are reflexive. \end{itemize} \end{Definition} \begin{Definition} \label{def: tannaka} A \textit{Tannakian category} over the field $\mathbb{K}$ is a rigid abelian $\mathbb{K}$-linear tensor category $\mathcal{T}$ such that $\text{End}(\mathbf{1}) = \mathbb{K}$, and there exists an exact faithful $\mathbb{K}$-linear tensor functor $\omega\colon \mathcal{T} \rightarrow \text{Vec}_{\mathbb{K}}$, where $\text{Vec}_{\mathbb{K}}$ is the category of finite-dimensional vector spaces over $\mathbb{K}$. Any such functor is called a \textit{fibre functor}. \end{Definition} \begin{Example} The category $\text{Vec}_{\mathbb{K}}$ of finite-dimensional $\mathbb{K}$-vector spaces, together with the identity functor, is a Tannakian category over $\mathbb{K}$.
\end{Example} \begin{Example} The category $\text{GrVec}_{\mathbb{K}}$ of finite-dimensional graded $\mathbb{K}$-vector spaces, together with the forgetful functor $\omega\colon \text{GrVec}_{\mathbb{K}} \rightarrow \text{Vec}_{\mathbb{K}}$, sending $(V, (V_n)_{n \in \mathbb{Z}})$ to $V$, is~a~Tannakian category over $\mathbb{K}$. \end{Example} \begin{Example} The category $\mathop{\rm Rep}\nolimits_{\mathbb{K}}(G)$ of finite-dimensional $\mathbb{K}$-linear representations of an~abst\-ract group $G$, together with the functor $\omega\colon \mathop{\rm Rep}\nolimits_{\mathbb{K}}(G) \rightarrow \text{Vec}_{\mathbb{K}}$ that forgets the action of $G$, is~a~Tannakian category over $\mathbb{K}$. \end{Example} Let us fix a Tannakian category $\mathcal{T}$ over $\mathbb{K}$ and a fibre functor $\omega$ of $\mathcal{T}$. Let~$R$ be a $\mathbb{K}$-algebra. We~denote by $\underline{\mathop{\rm Aut}}^{\otimes}(\omega)(R)$ the collection of families $(\lambda_X)_{X \in \text{Ob}(\mathcal{T})}$ of $R$-linear automorphisms \begin{gather} \lambda_X\colon\ \omega(X) \otimes_{\mathbb{K}} R \longrightarrow \omega(X) \otimes_{\mathbb{K}} R \end{gather} which are compatible with the tensor structure and functorial. 
Here, compatibility with the tensor structure and functoriality mean\footnote{In the given diagrams, all unlabelled tensor products are over $\mathbb{K}$ and all unlabelled arrows are the natural isomorphisms.} that: \begin{itemize}\itemsep=0pt \item[(1)] For all $X_1, X_2 \in \text{Ob}(\mathcal{T})$, the following diagram commutes: \begin{equation} \begin{tikzcd}[column sep=+4.0em] \omega(X_1 \otimes X_2) \otimes R \arrow[r, "\lambda_{X_1 \otimes X_2}"] \arrow[d] & \omega(X_1 \otimes X_2) \otimes R \arrow[d] \\ \omega(X_1) \otimes \omega(X_2) \otimes R \arrow[d] & \omega(X_1) \otimes \omega(X_2) \otimes R \arrow[d] \\ (\omega(X_1) \otimes R) \otimes_R (\omega(X_2) \otimes R) \arrow[r,"\lambda_{X_1} \otimes_R \lambda_{X_2}"'] & (\omega(X_1) \otimes R) \otimes_R (\omega(X_2) \otimes R). \end{tikzcd} \end{equation} \item[(2)] The following diagram commutes: \begin{equation} \begin{tikzcd} \omega(\mathbf{1}) \otimes R \arrow[r, "\lambda_{\mathbf{1}}"] \arrow[d] & \omega(\mathbf{1}) \otimes R \arrow[d] \\ R \arrow[r, "\text{Id}"'] & R. \end{tikzcd} \end{equation} \item[(3)] For all $X,Y \in \text{Ob}(\mathcal{T})$ and for every morphism $\alpha \in \mathop{\rm Hom}(X,Y)$, the following diagram commutes: \begin{equation} \begin{tikzcd} \omega(X) \otimes R \arrow[r, "\lambda_X"] \arrow[d, "\omega(\alpha) \otimes \text{Id}"'] & \omega(X) \otimes R \arrow[d, "\omega(\alpha) \otimes \text{Id}"] \\ \omega(Y) \otimes R \arrow[r, "\lambda_Y"'] & \omega(Y) \otimes R. \end{tikzcd} \end{equation} \end{itemize} Deligne et al~\cite{Detal82} proved that all Tannakian categories are categories of finite-dimensional linear representations of a pro-algebraic group. \begin{Theorem} \label{th: aut} Let $\mathcal{T}$ be a Tannakian category over $\mathbb{K}$ with a fibre functor $\omega$. 
\begin{itemize}\itemsep=0pt \item[$(1)$] The functor $R \mapsto \underline{\mathop{\rm Aut}}^{\otimes}(\omega)(R)$ is representable by an affine group scheme over $\mathbb{K}$, which is denoted as $\underline{\mathop{\rm Aut}}^{\otimes}(\omega)$ or $G^{\omega}$, and is called the \textit{Tannaka group} of the pair $(\mathcal{T}, \omega)$. \item[$(2)$] For every $X \in \text{Ob}(\mathcal{T})$, the group $\underline{\mathop{\rm Aut}}^{\otimes}(\omega)$ acts naturally on $\omega(X)$ and the functor \begin{equation} \begin{tikzcd}[row sep = -0.1em] \mathcal{T} \arrow[r] & \mathop{\rm Rep}\nolimits_{\mathbb{K}}(G^{\omega}), \\[1ex] X \arrow[r, mapsto] & \omega(X) \arrow[loop right, "G^{\omega}"] \end{tikzcd} \end{equation} sending $X$ to the vector space $\omega(X)$ with this action of $\underline{\mathop{\rm Aut}}^{\otimes}(\omega)$, is an equivalence of~cate\-gories. \end{itemize} \end{Theorem} Given a second fibre functor $\omega'$, we analogously define $\underline{\mathop{\rm Isom}}^{\otimes}(\omega,\omega')(R)$ to be the collection of families $(\tau_X)_{X \in \text{Ob}(\mathcal{T})}$ of $R$-linear isomorphisms \begin{gather} \tau_X\colon\ \omega(X) \otimes_{\mathbb{K}} R \longrightarrow \omega'(X) \otimes_{\mathbb{K}} R \end{gather} which are compatible with the tensor structure and functorial. Deligne et al~\cite{Detal82} proved the following result. \begin{Theorem} Let $\mathcal{T}$ be a Tannakian category over $\mathbb{K}$ with two fibre functors $\omega$ and $\omega'$. The~fun\-c\-tor $R \mapsto \underline{\mathop{\rm Isom}}^{\otimes}(\omega,\omega')(R)$ is representable by an affine scheme over $\mathbb{K}$, which is denoted as $\underline{\mathop{\rm Isom}}^{\otimes}(\omega, \omega')$, and is a right torsor under $\underline{\mathop{\rm Aut}}^{\otimes}(\omega)$ and a left torsor under $\underline{\mathop{\rm Aut}}^{\otimes}(\omega')$. 
\end{Theorem} \begin{Remark} In what follows, we write $\mathop{\rm Aut}^{\otimes}(\omega) = \underline{\mathop{\rm Aut}}^{\otimes}(\omega)(\mathbb{C})$ for the group of $\mathbb{C}$-linear auto\-morphisms of the fibre functor $\omega$, and analogously $\mathop{\rm Isom}\nolimits^{\otimes}(\omega, \omega') = \underline{\mathop{\rm Isom}}^{\otimes}(\omega, \omega')(\mathbb{C})$ for the group of~$\mathbb{C}$-linear isomorphisms between the fibre functors $\omega$ and $\omega'$. \end{Remark} \subsection{Motivic Galois theory} \label{sec: mcategory} Recall the notions of pure and mixed $H$-systems associated with algebraic varieties over $\mathbb{Q}$ given in Sections~\ref{sec: pure} and~\ref{sec: mixed}. On the one hand, the algebraic de Rham and Betti cohomologies of a smooth projective $\mathbb{Q}$-variety are fundamentally described by a pure $H$-system. On the other hand, applying the resolution of singularities by Hironaka~\cite{Hir64-I,Hir64-II}, the algebraic de Rham and Betti cohomologies of an arbitrary quasi-projective $\mathbb{Q}$-variety can be expressed in terms of the cohomologies of smooth projective varieties. Since pure $H$-systems of different weights get mixed in this expression, such cohomologies are fundamentally described by a mixed system of realisations. Because we specifically look at the application of the theory of motives to the theory of periods, it is sufficient for our purposes to work with the partial realisation of Grothendieck's notion of motives provided by $H$-systems\footnote{In the literature on motivic periods, mixed de Rham and Betti systems of realisations are sometimes called motives themselves.}. However, it turns out to be necessary and fruitful to enhance the na\"ive construction given in Section~\ref{sec: mot_alg} to its rigorous category-theoretic formulation.
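Before applying the Tannakian machinery to motives, let us illustrate Theorem~\ref{th: aut} in a toy case; the identification below is classical, and the grading here plays the role that the weight structure plays for $H$-systems.
\begin{Example} Consider the Tannakian category $\text{GrVec}_{\mathbb{Q}}$ with its forgetful fibre functor $\omega$. For a $\mathbb{Q}$-algebra $R$, a family $(\lambda_V)_{V \in \text{Ob}(\text{GrVec}_{\mathbb{Q}})} \in \underline{\mathop{\rm Aut}}^{\otimes}(\omega)(R)$ is determined by its values on the one-dimensional spaces concentrated in a single degree $n$, where it acts by a unit $t_n \in R^{\times}$. Compatibility with the tensor structure forces $t_{m+n} = t_m t_n$, hence $t_n = t^n$ with $t = t_1$, and functoriality extends this action degree-wise to every object. Therefore \begin{gather} \underline{\mathop{\rm Aut}}^{\otimes}(\omega) \simeq \mathbb{G}_m, \qquad \text{GrVec}_{\mathbb{Q}} \simeq \mathop{\rm Rep}\nolimits_{\mathbb{Q}}(\mathbb{G}_m), \end{gather} in agreement with Theorem~\ref{th: aut}. \end{Example}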
Recall that $\mathbf{MHSy}(\mathbb{Q})$ is the category of mixed $H$-systems over $\mathbb{Q}$, and $\omega_{{\rm B}}$, $\omega_{\rm dR}$ are its two forgetful functors arising from the Betti and de Rham realisations, respectively. All the defining properties of a Tannakian category, encoded in Definition~\ref{def: tannaka}, apply to $\mathbf{MHSy}(\mathbb{Q})$, thus justifying the use of the Tannakian machinery in the study of motivic periods. \begin{Proposition} $\mathbf{MHSy}(\mathbb{Q})$ is a Tannakian category over $\mathbb{Q}$ and the functors $\omega_{{\rm B}}$, $\omega_{\rm dR}$ are fibre functors. \end{Proposition} In what follows, we write $ \mathcal{H} = \mathbf{MHSy}(\mathbb{Q}) $. The pro-algebraic group $\mathop{\rm Aut}^{\otimes}(\omega_{\rm dR})$ is denoted by $G^{\rm dR}$ and called the \textit{motivic Galois group}. Observe that, for every object $H \in \text{Ob}(\mathcal{H})$, the motivic Galois group acts on $\omega_{\rm dR}(H)$ through a subgroup $G^{\rm dR}(H) \subseteq {\rm GL}(\omega_{\rm dR}(H))$. Following Theorem~\ref{th: aut}, the category of mixed $H$-systems is equivalent to the category of finite-dimensional $\mathbb{Q}$-linear representations of the motivic Galois group, that is \begin{gather} \mathcal{H} \simeq \mathop{\rm Rep}\nolimits_{\mathbb{Q}}\big(G^{\rm dR}\big). \end{gather} \begin{Remark} We observe that the motivic Galois group can alternatively be realised via Betti cohomology as $G^{{\rm B}} = \mathop{\rm Aut}^{\otimes}(\omega_{{\rm B}})$ and the corresponding category of finite-dimensional $\mathbb{Q}$-linear representations is still the same category of mixed $H$-systems $\mathcal{H}$. \end{Remark} In the Tannakian formalism, the space of motivic periods $\mathcal{P}^{\rm m}$ is expressed as \begin{gather} \mathcal{P}^{\rm m}= \mathbb{Q} \big\langle [H,\omega,\sigma]^{\rm m} \,|\, H \in \text{Ob}(\mathcal{H}), \; \omega \in \omega_{\rm dR}(H), \, \sigma \in \omega_{{\rm B}}(H)^{\vee} \big\rangle \end{gather} with implicit factorisation modulo bilinearity and functoriality.
Recall that \begin{gather} H = (H_{\rm dR}, \, H_{{\rm B}}, \, \mathop{\rm comp}\nolimits_H \colon H_{\rm dR} \otimes_{\mathbb{Q}} \mathbb{C} \xrightarrow{\sim} H_{{\rm B}} \otimes_{\mathbb{Q}} \mathbb{C}), \end{gather} where $H_{\rm dR} = \omega_{\rm dR}(H)$, $H_{{\rm B}} = \omega_{{\rm B}}(H)$ are finite-dimensional $\mathbb{Q}$-vector spaces and $\mathop{\rm comp} \in \mathop{\rm Isom}\nolimits^{\otimes}(\omega_{\rm dR},\omega_{{\rm B}})$ is a $\mathbb{C}$-linear isomorphism. In this framework, an alternative but equivalent description of motivic periods follows. \begin{Proposition} $\mathcal{P}^{\rm m}$ is isomorphic to the space of regular functions on the affine $\mathbb{Q}$-scheme $\mathop{\rm Isom}\nolimits^{\otimes}(\omega_{\rm dR},\omega_{{\rm B}})$. \end{Proposition} We observe that such an isomorphism is explicitly written as \begin{align} \mathcal{P}^{\rm m} &\xlongrightarrow{\sim} \mathcal{O}\big(\mathop{\rm Isom}\nolimits^{\otimes}(\omega_{\rm dR},\omega_{{\rm B}})\big), \\ {}[H,\omega,\sigma ]^{\rm m} &\longmapsto \big[ (\lambda_X)_{X \in \text{Ob}(\mathcal{H})} \mapsto (\sigma \otimes_{\mathbb{Q}} \text{Id}_{\mathbb{C}}) \big( \lambda_H (\omega \otimes 1) \big) \big], \end{align} where we have \begin{gather} \begin{array}{rcccl} \omega_{\rm dR}(H) \otimes_{\mathbb{Q}} \mathbb{C} &\xlongrightarrow{\lambda_H} &\omega_{{\rm B}}(H) \otimes_{\mathbb{Q}} \mathbb{C} &\xlongrightarrow{\sigma \otimes_{\mathbb{Q}} \text{Id}_{\mathbb{C}}} & \mathbb{C}, \\[1ex] \omega &\longmapsto & \lambda_H(\omega) &\longmapsto & \sigma(\lambda_H(\omega)).
\end{array} \end{gather} Following Theorem~\ref{th: aut}, the motivic Galois group $G^{\rm dR}$ has a natural action on $\mathop{\rm Isom}\nolimits^{\otimes}(\omega_{\rm dR},\omega_{{\rm B}})$ which is written as \begin{gather} \label{eq: action} G^{\rm dR} \times \mathop{\rm Isom}\nolimits^{\otimes}(\omega_{\rm dR},\omega_{{\rm B}}) \longrightarrow \mathop{\rm Isom}\nolimits^{\otimes}(\omega_{\rm dR},\omega_{{\rm B}}) \end{gather} and which induces a dual coaction on the corresponding space of regular functions $\mathcal{P}^{\rm m}$, that is \begin{align} \label{eq: coaction} \Delta\colon\ \mathcal{P}^{\rm m} &\longrightarrow \mathcal{O}\big(G^{\rm dR}\big) \otimes \mathcal{P}^{\rm m}, \\ {}[H,\omega,\sigma]^{\rm m} &\longmapsto \sum_{i=1}^n \big[H,\omega,e_i^{\vee}\big]^{\rm dR} \otimes [H,e_i,\sigma]^{\rm m}, \end{align} where $\{ e_i \}_{i = 1, \dots, n}$ is a basis of $\omega_{\rm dR}(H)$, and $\big\{ e_i^{\vee} \big\}_{i = 1, \dots, n}$ denotes the associated dual basis of~$\omega_{{\rm B}}(H)^{\vee}$, as introduced in Section~\ref{sec: periodmap}. Here, $[H,e_i,\sigma]^{\rm m} \in \mathcal{P}^{\rm m}$ is called a \textit{Galois conjugate} of the motivic period $[H,\omega,\sigma]^{\rm m}$, while $\big[H,\omega,e_i^{\vee}\big]^{\rm dR} \in \mathcal{O}\big(G^{\rm dR}\big)$ is called a \textit{de Rham period}. We~denote by $\mathcal{P}^{\rm dR} = \mathcal{O}\big(G^{\rm dR}\big)$ the space of regular functions on the motivic Galois group, and we call it the \textit{space of de Rham periods}. The coaction $\Delta$ is known as the \textit{Galois coaction}. \begin{Remark} Note that the space of de Rham periods is naturally a Hopf algebra, while the space of motivic periods is not, thus making the coaction intrinsically asymmetric. Indeed, the Galois coaction turns the $\mathbb{Q}$-vector space $\mathcal{P}^{\rm m}$ into a comodule over the Hopf algebra $\mathcal{P}^{\rm dR}$.
Moreover, there is a canonical single-valued map that associates a number to each de Rham period, thus representing the de Rham analogue of the period map. For a detailed discussion we refer to Brown~\cite{Bro17_2, Bro17_1}. \end{Remark} \begin{Example} Consider the motivic logarithm $\log(z)^{\rm m}$ for $z \in \mathbb{Q} \backslash \{ 0,1 \}$. Following Section~\ref{sec: log}, we have \begin{gather} \log(z)^{\rm m} = \bigg[ H^1(\mathbb{G}_m, \{ 1, z \}), \bigg[ \frac{{\rm d}x}{x} \bigg], [ \gamma_1 ] \bigg]^{\rm m}. \end{gather} Let us denote $H=H^1(\mathbb{G}_m, \{ 1, z \})$ for simplicity. Adopting the canonical choice of bases, as in Example~\ref{ex: ex10}, the period matrix of $H$ is \begin{gather} \begin{pmatrix} 1 & \log(z) \\ 0 & 2 \pi {\rm i} \end{pmatrix}\!. \end{gather} Direct application of the prescription in~\eqref{eq: coaction} gives the explicit decomposition \begin{gather} \Delta\bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], [\gamma_1] \bigg]^{\rm m} = \bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], \bigg[ \frac{{\rm d}x}{z-1} \bigg]^{\vee} \bigg]^{\rm dR} \otimes \bigg[ H, \bigg[ \frac{{\rm d}x}{z-1} \bigg], [\gamma_1] \bigg]^{\rm m} \\ \hphantom{\Delta\bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], [\gamma_1] \bigg]^{\rm m} =} {} + \bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], \bigg[ \frac{{\rm d}x}{x} \bigg]^{\vee} \bigg]^{\rm dR} \otimes \bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], [\gamma_1] \bigg]^{\rm m}. 
\label{eq: coactLog00} \end{gather} Observing that $\big[\frac{{\rm d}x}{z-1}\big]^{\vee} = [\gamma_1]$ and $\big[\frac{{\rm d}x}{x} \big]^{\vee} = [\gamma_2]$, and identifying the de Rham periods \begin{gather} \bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], \bigg[ \frac{{\rm d}x}{z-1} \bigg]^{\vee} \bigg]^{\rm dR} = \log(z)^{\rm dR} , \qquad \bigg[ H, \bigg[ \frac{{\rm d}x}{x} \bigg], \bigg[ \frac{{\rm d}x}{x} \bigg]^{\vee} \bigg]^{\rm dR} =(2 \pi {\rm i})^{\rm dR}, \end{gather} we find that the expression~\eqref{eq: coactLog00} is equivalent to \begin{gather} \label{eq: coactLog} \Delta \log(z)^{\rm m} = \log(z)^{\rm dR} \otimes 1^{\rm m} + (2 \pi {\rm i})^{\rm dR} \otimes \log(z)^{\rm m}, \end{gather} where $1^{\rm m}$ and $\log(z)^{\rm m}$ are the Galois conjugates of $\log(z)^{\rm m}$. \end{Example} \begin{Example} As in the case of $\log(z)^{\rm m}$, the Galois coaction of the motivic multiple zeta value $\zeta(\mathbf{s})^{\rm m}$ can be computed explicitly. For example, from the period matrix~\eqref{eq: matrixZ2}, we have that \begin{gather} \Delta \zeta(2)^{\rm m} = \zeta(2)^{\rm dR} \otimes 1^{\rm m} + \big((2 \pi {\rm i})^{\rm dR}\big)^2 \otimes \zeta(2)^{\rm m}. \end{gather} As powers of $(2 \pi {\rm i})^{\rm dR}$ naturally appear among de Rham conjugates, the Galois coaction is often understood with an implicit factorisation\footnote{The operation of factorisation of an $H$-system modulo $2 \pi {\rm i}$ is called a \textit{Tate twist}, and it can indeed be formally defined in terms of the Hodge--Tate systems introduced in Example~\ref{ex: hodgetate}.} of $\mathcal{P}^{\rm dR}$ modulo the ideal generated by $(2 \pi {\rm i})^{\rm dR}$. Let~us assume so.
For $n \ge 1$, we have that \begin{gather} \label{eq: coactZ2} \Delta \zeta(2)^{\rm m} = \zeta(2)^{\rm dR} \otimes 1^{\rm m} + 1^{\rm dR} \otimes \zeta(2)^{\rm m}, \\ \Delta \zeta(2n+1)^{\rm m} = \zeta(2n+1)^{\rm dR} \otimes 1^{\rm m} + 1^{\rm dR} \otimes \zeta(2n+1)^{\rm m}, \\ \Delta (\zeta(2)^{\rm m} \zeta(2n+1)^{\rm m}) = \zeta(2n+1)^{\rm dR} \otimes \zeta(2)^{\rm m} + 1^{\rm dR} \otimes \zeta(2)^{\rm m} \zeta(2n+1)^{\rm m}. \end{gather} \end{Example} \subsection{Coaction conjecture} Let us look at the example of scalar massless $\phi^4$ quantum field theory and consider the Galois coaction restricted to $\mathcal{P}_{\phi^4}^{\rm m}$. This is a priori valued in the whole space $\mathcal{P}^{\rm dR} \otimes \mathcal{P}^{\rm m}$. However, after computing every known $\phi^4$-period with loop orders at most $7$ and several $\phi^4$-periods with higher loop orders, and explicitly verifying that in each case the Galois coaction preserves the space $\mathcal{P}_{\phi^4}^{\rm m}$, Panzer and Schnetz\footnote{Panzer and Schnetz~\cite{PS17} explicitly computed the first examples of $\phi^4$-amplitudes which are not MZVs. Such numbers are polylogarithms at 2nd and 6th roots of unity. The coaction conjecture is verified for them as well.}~\cite{PS17} proposed the following conjecture, known as the \textit{coaction conjecture}. \begin{Conjecture} \label{conj: PanzerSchnetz} The Galois coaction closes on $\phi^4$-periods. In other words, the Galois conjugates of a $\phi^4$-period are also $\phi^4$-periods, that is \begin{gather} \Delta\big(\mathcal{P}_{\phi^4}^{\rm m}\big)\subseteq\mathcal{P}^{\rm dR}\otimes\mathcal{P}_{\phi^4}^{\rm m}. \end{gather} \end{Conjecture} Such a conjecture implies the existence of a fundamental hidden symmetry underlying the class of $\phi^4$-periods that we do not yet properly understand. 
Indeed, the unexpected observations by Panzer and Schnetz, and the resulting conjecture, have greatly stimulated research, moti\-va\-ting the search for a mathematical mechanism able to distinguish $\phi^4$-periods from periods of all graphs, and thus explain this surprising evidence. Some advancements in this direction have already been made. Conjecture~\ref{conj: PanzerSchnetz} is the strongest among several reformulations of its statement obtained by suitably enlarging the space of ampli\-tu\-des under consideration. A~weaker version of the coaction conjecture has been proven by Brown~\cite{Bro17_2}. To a scalar Feynman graph $G$, we associate the finite-dimensional $\mathbb{Q}$-vector space~$\mathcal{P}_{a}^{\rm m}(G)$ consisting of the motivic realisations of all affine integrals of globally-defined algebraic differential forms on the usual integration domain $\sigma$, called the \textit{affine periods} of $G$. They include convergent affine integrals of the form \begin{gather} \int_0^{\infty} \cdots \int_0^{\infty}\frac{q}{\Psi_G^k} \bigg|_{x_{n_G} = 1} {\rm d}^{n_G-1}x, \end{gather} where $k \ge 1$ is an integer, and $q$ is a polynomial in $\mathbb{Q}[x_1, \dots, x_{n_G -1}]$. However, the denominator of the integrand of an affine period of $G$ can also possibly involve linear factors of the form $\sum_{e \in E_{\gamma}} x_e$, where $\gamma$ is a subgraph of~$G$. \begin{Theorem} \label{th: coactB} The Galois group acts on $\mathcal{P}_{a}^{\rm m}(G)$, that is $\Delta(\mathcal{P}_{a}^{\rm m}(G)) \subseteq \mathcal{P}^{\rm dR} \otimes \mathcal{P}_{a}^{\rm m}(G)$. \end{Theorem} We write $\mathcal{P}_{a}^{\rm m}$ to denote the space spanned by all $\mathcal{P}_{a}^{\rm m}(G)$ for any\footnote{We observe that restricting to $\phi^4$-graphs does not change the resulting space $\mathcal{P}_{a}^{\rm m}$, which is sometimes also denoted by $\mathcal{P}_{\tilde{\phi}^4 }^{\rm m}$.} scalar Feynman graph~$G$. 
Since direct computations show that $\mathcal{P}_{a}^{\rm m} \simeq \mathcal{P}_{\phi^4}^{\rm m}$ at low loop orders, Theorem~\ref{th: coactB} directly supports Conjecture~\ref{conj: PanzerSchnetz}. \begin{Remark} The affine motive $H_{a}$ of a scalar Feynman graph $G$ is defined analogously to the projective $H$-system $H = H^{n_G-1}(P \backslash Y_G, B \backslash (B \cap Y_G))$, described in Section~\ref{sec: intGmot}, after replacing the projective space $P$ with an affine open subspace $A \subset P$ obtained by removing its hyperplanes with strictly positive coefficients. The inclusion $A \backslash Y_G \hookrightarrow P \backslash Y_G$ induces a morphism of objects $H \rightarrow H_a$ in $\mathcal{H}$, which identifies the motivic periods of $H$ with a subset of those of $H_a$. Every motivic period of $H$ is also a motivic period of $H_{a}$, but the converse is not true. Many periods of $H_{a}$ are not periods of Feynman graphs in the ordinary sense. \end{Remark} Let $G$ be a scalar Feynman graph. We~define the \textit{generalised Feynman integrals} associated to $G$ as the projective integrals of the parametric form\footnote{Note that the ordinary Feynman integral~\eqref{eq: I_G} of $G$ can be written in this form.} \begin{gather}\label{eq: I_Gtilde} \int_{\sigma} \frac{p \, \Omega}{\Psi_G^k \, \Xi_G^h(\{p_j, m_e\})}, \end{gather} where $k,h \ge 1$ are integers, and $p$ is a homogeneous polynomial in $\mathbb{Q}[\{x_e\}]$ of degree $k l_G +h (l_G +1) - n_G$, so that the overall integrand is homogeneous of degree zero. Although possibly divergent, generalised Feynman integrals are periods, and they can be promoted to their motivic realisations after being suitably regularised. We~define the finite-dimensional $\mathbb{Q}$-vector space $\mathcal{P}_{g}^{\rm m}$ to be the space of motivic realisations of regularised versions of generalised Feynman integrals. An analogous statement to Conjecture~\ref{conj: PanzerSchnetz} is proposed by Brown~\cite{Bro17_2}.
\begin{Conjecture} $\mathcal{P}_{g}^{\rm m}$ is stable under the Galois coaction, that is $\Delta(\mathcal{P}_{g}^{\rm m}) \subseteq \mathcal{P}^{\rm dR} \otimes \mathcal{P}_{g}^{\rm m}$. \end{Conjecture} \subsection{Weights and the small graph principle} \label{sec: weight} Recall the notions of Hodge and weight filtrations introduced in Sections~\ref{sec: pure} and~\ref{sec: mixed}. For $H \in \text{Ob}(\mathcal{H})$, the $\mathbb{Q}$-vector space $\omega_{\rm dR}(H)$ is equipped with a Hodge filtration $F^{\bullet}$ and a weight filtration $W^{\rm dR}_{\bullet}$, while the $\mathbb{Q}$-vector space $\omega_{{\rm B}}(H)$ is provided with a weight filtration $W^{{\rm B}}_{\bullet}$ only. Mixed $H$-systems, contrary to pure ones, do not have a well-defined weight. However, their graded quotients with respect to the weight filtration do possess a pure structure of definite weight. These properties are used to define a notion of \textit{weight} for motivic periods. \begin{Definition} The weight filtration on $\omega_{\rm dR}(H)$ induces a weight filtration on the space of motivic periods by \begin{gather} W_{\bullet}^{\rm dR} \mathcal{P}^{\rm m} = \mathbb{Q} \big\langle [H,\omega,\sigma]^{\rm m} \,|\, \omega \in W_{\bullet}^{\rm dR} \big\rangle. \end{gather} We denote $W=W^{\rm dR}$ for simplicity. A~given motivic period $[H,\omega,\sigma]^{\rm m} $ is said to have weight at most $n$ if it belongs to $W_n \mathcal{P}^{\rm m}$, and to have weight $n$ if it is non-zero in the graded quotient ${\rm Gr}^W_n \mathcal{P}^{\rm m} = W_n \mathcal{P}^{\rm m} / W_{n-1} \mathcal{P}^{\rm m}$. \end{Definition} \begin{Remark} We observe that the weight of motivic periods can alternatively, but equivalently, be defined from the Betti realisation via the weight filtration induced on $\mathcal{P}^{\rm m}$ by $W^{{\rm B}}$. \end{Remark} \begin{Example} \label{ex: weightLog} Consider $H = H^1(\mathbb{G}_m, \{ 1, z \})$ for $z \in \mathbb{Q} \backslash \{ 0,1 \}$. 
Its weight filtration in the de Rham realisation is \begin{gather} W_{-1} = 0 \subseteq W_0 = W_1 = \mathbb{Q}(0) \subseteq W_2 = H^1(\mathbb{G}_m, \{ 1, z \}). \end{gather} Observing that $0,1 \in W_0$ and $2\pi {\rm i}, \log(z) \in W_2$, the weight of each entry of the period matrix of~$H$ is determined. Indeed, $0$, $1$ are periods of weight zero, while $2\pi {\rm i}$, $\log(z)$ have weight $2$. \end{Example} \begin{Example} The weight filtration can be used to systematically study $\mathcal{P}_{\phi^4}^{\rm m}$ weight by weight. For example, direct computation in low weight shows that $W_0 \mathcal{P}_{\phi^4}^{\rm m} =W_1 \mathcal{P}_{\phi^4}^{\rm m}=W_2\mathcal{P}_{\phi^4}^{\rm m} = \mathbb{Q}(0)$. \end{Example} More generally, the following proposition is due to Brown~\cite{Bro17_2}. \begin{Proposition} \label{prop: Q02} Let $G$ be a primitive log-divergent Feynman graph. Every Galois conjugate of its motivic Feynman integral $I_G^{\rm m}$ which has weight up to $2$ is a period of $\mathbb{Q}(0)$, that is a rational number. \end{Proposition} The Galois conjugates of Feynman periods are expected to satisfy the following conjecture by Brown~\cite{Bro17_2}, known as \textit{small graph principle}.\footnote{The small graph principle is also conjectured to hold for regularised versions of generalised Feynman integrals associated to arbitrary scalar Feynman graphs.} \begin{Conjecture} \label{conj: sgp} Let $G$ be a primitive log-divergent Feynman graph. Denote by $[ H_G, \omega_G, \sigma ]^{\rm m}$ the motivic realisation of its Feynman integral $I_G^{\rm m}$. The elements on the right-hand side of the coaction formula for $\Delta [ H_G, \omega_G, \sigma ]^{\rm m}$ can be expressed in the form \begin{gather} \prod_i [ H_{\gamma_i}, \omega_{\gamma_i}, \sigma ]^{\rm m}, \end{gather} where the product runs over a subset $\{ \gamma_i \}$ of the set of subgraphs and quotient graphs of $G$. 
\end{Conjecture} The small graph principle implies that the Galois conjugates of weight at most $k$ of the motivic amplitude of a primitive log-divergent Feynman graph are associated to its sub-quotient graphs\footnote{Sub-quotient graphs are formally obtained by contracting and deleting edges of the original graph.} with at most $k+1$ edges. In other words, when one is interested in Feynman periods of weight at most $k$, it suggests looking at graphs with up to $k+1$ edges. As a consequence, the topology of a given graph constrains the Galois theory of its amplitudes. \begin{Example} Consider the system of realisations $H = H^1(\mathbb{G}_m, \{ 1, p \})$ for $p \in \mathbb{Q} \backslash \{ 0,1 \}$ prime. Following Example~\ref{ex: weightLog}, $\log(p)^{\rm m}$ is a period of $H$ with weight 2. Then, the small graph principle suggests that any $\log(p)^{\rm m}$ appearing in the right-hand side of the coaction formula for a given $\phi^4$-period comes from graphs with at most three edges. Proposition~\ref{prop: trivial} implies that all two-edge graphs are trivial, meaning that the associated graph motive $H_G$ is the Hodge--Tate system~$\mathbb{Q}(0)$, which does not have $\log(p)^{\rm m}$ in its period matrix. Writing down all possible graphs with three edges, we get the graphs shown in Fig.~\ref{fig: log} along with their first graph polynomials in~the Schwinger parameters. \begin{figure}[htb!]
\centering \subfloat[][{$x_1+x_2+x_3$}] {\includegraphics[scale=0.55]{p1N.png}} \qquad \subfloat[][{$x_1x_2+x_2x_3+x_3 x_1$}] {\includegraphics[scale=0.55]{p2N.png}} \qquad \subfloat[][{$x_1(x_2+x_3)$}] {\includegraphics[scale=0.55]{p3n.png}} \quad \subfloat[][{$x_1x_2x_3$}] {\includegraphics[scale=0.55]{p4n.png}} \caption{Feynman graphs with 3 edges and their first graph polynomials.} \label{fig: log} \end{figure} The two outer graphs (a) and (d) are also trivial by Proposition~\ref{prop: trivial}, while the two middle graphs (b) and (c) satisfy $H_G = \mathbb{Q}(0) \oplus \mathbb{Q}(-1)$, which does not have $\log(p)^{\rm m}$ as a period. Indeed, $\log(p)$ cannot be obtained as an integral with a denominator equal to either of the first graph polynomials (b) or (c). We~conclude that $\log(p)^{\rm m}$ cannot be a Galois conjugate of a~$\phi^4$-period. By equation~\eqref{eq: coactLog}, we derive that $\log(p)^{\rm m} \notin \mathcal{P}_{\phi^4}^{\rm m}$. Note that this is consistent with Proposition~\ref{prop: Q02}. \end{Example} \begin{Example} Direct computation by Panzer and Schnetz~\cite{PS17} shows that all $\phi^4$-periods of loop order up to 6 are $\mathbb{Q}$-linear combinations of multiple zeta values. Following the small graph principle, we graphically order the set of MZVs by weight as \begin{equation} \begin{tikzcd}[row sep=-3.5, column sep=-2.5] 1 & \zeta(2) & \zeta(3) & \zeta(2)^2 & \zeta(5) & \zeta(3)^2 & \zeta(7) & \zeta(3,5) & \cdots \\ & & & & \zeta(2) \zeta(3) & \zeta(2)^3 & \zeta(2)\zeta(5) & \zeta(2)\zeta(3)^2 &\\ & & & & & & \zeta(2)^2\zeta(3) & \vdots & \end{tikzcd} \end{equation} As a consequence of the coaction conjecture and explicit expressions of the coaction formula, as~the ones in equation~\eqref{eq: coactZ2}, $\zeta(2)^{\rm m} \notin \mathcal{P}_{\phi^4}^{\rm m}$ implies that all MZVs which are linear in $\zeta(2)$ cannot be $\phi^4$-periods. 
Analogously, $ \big(\zeta(2)^2\big)^{\rm m} \notin \mathcal{P}_{\phi^4}^{\rm m}$ implies that all MZVs which are quadratic in $\zeta(2)$ are not $\phi^4$-periods. The set of MZVs that can appear as $\phi^4$-periods would then be reduced to \begin{equation} \begin{tikzcd}[row sep=-3.5, column sep=-2.5] 1 & & \zeta(3) & & \zeta(5) & \zeta(3)^2 & \zeta(7) & \zeta(3,5) & \cdots \\ & & & & & \zeta(2)^3 & & \vdots & \end{tikzcd} \end{equation} However, the statements $\zeta(2), \zeta(2)^2 \notin \mathcal{P}_{\phi^4}$ are only conjectural. More precisely, it is conjectured that $\zeta(2)^k \notin \mathcal{P}_{\phi^4}$ for $k \le 5$, while $\zeta(2)^k \in \mathcal{P}_{\phi^4}$ for $k \ge 6$. We~observe that such statements rely on~the control over weight drops, as conjectured by Panzer and Schnetz~\cite{PS17}, which excludes low-weight MZVs coming from high-loop-order graphs. \end{Example} From similar considerations, other highly non-trivial constraints at all loop orders in perturbation theory can be derived using the Galois coaction and weight filtrations. For example, by~Conjecture~\ref{conj: PanzerSchnetz}, whenever it is shown that a given period is not a $\phi^4$-period, we conjecturally deduce that all periods that have the given one among their Galois conjugates cannot appear in~$\mathcal{P}_{\phi^4}$ either. \begin{Remark} Structures even more fundamental than those captured by the coaction conjecture and the small graph principle underlie the space of motivic periods of Feynman graphs. Although not yet sufficiently explored in the literature, the notion of \textit{operad} in the category of~motives imposes strong constraints on the admissible periods and should be the object of~further investigation. The operad structure underlying the space of motivic Feynman integrals is, interestingly, the same structure governing the renormalisation group equation. Kaufmann and Ward~\cite{KW17} provide details on related notions in category theory.
\end{Remark} \section{Conclusions} Originally providing a framework for re-organising and re-interpreting much of the previous knowledge on Feynman integrals, the theory of motivic periods has revealed unexpected features, placing restrictions on the set of numbers which can occur as amplitudes and paving the way for a more comprehensive understanding of their general structure. Indeed, the coaction conjecture gives new constraints at each loop order, which in turn propagate to all higher loop orders because of the recursive structure inherent in perturbative quantum field theories. At the same time, the small graph principle turns finite low-loop computations into all-order results. Suppose we are given a Feynman integral of the form $\int_{\sigma} \omega$ in $\mathcal{P}$. The general prescription for its~investigation via the theory of motivic periods can be summarised as follows: \begin{itemize}\itemsep=0pt \item[(1)] Associate a motivic representation $[H,\omega, \sigma]^{\rm m}$ to the integral $\int_{\sigma} \omega$, deriving explicitly the corresponding algebraic de Rham and Betti system of realisations, and cohomology and homology classes. \item[(2)] Use all the known information about the mixed system of realisations $H$ to derive explicit filtrations. \item[(3)] Write down the period matrix of $H$. \item[(4)] Apply the Galois coaction and derive the Galois conjugates. \item[(5)] Apply the theory of weights of mixed Hodge structures to reduce the calculation of the Galois conjugates to the study of motivic periods of small graphs. \item[(6)] Analyse explicitly the few admissible small graphs and eliminate the excluded periods, sometimes called \textit{holes}. \item[(7)] Possibly use other known symmetries of the specific example at hand to draw conclusions. \end{itemize} This picture is, however, largely conjectural. The very first step of replacing periods with their motivic version requires the validity of the period conjecture.
Moreover, even disregarding the conjectural status of current statements, the present state of understanding of motivic amplitudes is still far from a complete theory. Although the given general prescription for the investigation of motivic Feynman integrals has been particularly fruitful for massless scalar $\phi^4$ quantum field theory, further advancements are needed to enlarge the reach of current results. Speculating in full generality, consider the whole class of Feynman integrals in perturbative quantum field theory. We~expect them to have a natural motivic representation, and thus to generate a space $\mathcal{M}$ of motivic periods, a space $\mathcal{A}$ of de Rham periods, and a corresponding coaction $\Delta\colon\mathcal{M} \longrightarrow \mathcal{P}^{\rm dR} \otimes \mathcal{P}^{\rm m}$. A~potential coaction principle would then state that $\Delta(\mathcal{M}) \subseteq \mathcal{A} \otimes \mathcal{M}$. Since $\mathcal{A}$ is a Hopf algebra, we can canonically introduce the group $C$ of homomorphisms from $\mathcal{A}$ to any commutative ring. It~would follow that the coaction principle can be recast in~terms of the group action $C \times \mathcal{M} \longrightarrow \mathcal{M}$, that is, the space of amplitudes is stable under the action of the group $C$, often referred to as the \textit{cosmic Galois group}. This speculative construction, which broadly reproduces the general prescription summarised above, motivates a programme of research leading towards a systematic study of scattering amplitudes via the representation theory of groups. Although practically harder than the $\phi^4$-case, similar attempts are already under way to gather information about the numbers that come from evaluating other classes of Feynman integrals.
\begin{itemize}\itemsep=0pt \item[$(a)$] Towards a general motivic description of scalar quantum field theories, Abreu et al.~\cite{Aetal17, Aetal18, Aetal20} give evidence suggesting that scalar Feynman integrals of small graphs with non-trivial masses and momenta satisfy similar properties to $\phi^4$-periods. A~diagrammatic coaction for specific families of integrals appearing in the evaluation of scalar Feynman diagrams, such as multiple polylogarithms and generalised hypergeometric functions, is proposed, and a connection between this diagrammatic coaction and graphical operations on Feynman diagrams is conjectured. At one-loop order, a fully explicit and very compact representation of the coaction in terms of one-loop integrals and their cuts is found. Moreover, Brown and Dupont~\cite{BD19} investigate a rigorous theory of motives associated to certain hypergeometric integrals. \item[$(b)$] A subsequent generalisation arises transitioning from scalar quantum field theories to gauge theories. The problem of dealing with much more involved parametric integrands which are not explicitly expressed in terms of the Symanzik polynomials of the associated Feynman graphs has only recently been tackled. A~combinatorial and graph-theoretic approach to Schwinger parametric Feynman integrals in quantum electrodynamics by Golz~\cite{Gol18} has revealed that the parametric integrands can be explicitly written in terms of new types of~graph polynomials related to specific subgraphs. The tensor structure of quantum electrodynamics is given a diagrammatic interpretation. The resulting significant simplification of the integrands paves the way for a systematic motivic description of gauge theories. \item[$(c)$] In the same research direction, a high-precision computation of the 4-loop contribution to the electron anomalous magnetic moment $g-2$ by Laporta~\cite{Lap17} shows the presence of polylogarithmic parts with fourth and sixth roots of unity.
This result is conjecturally recast into the motivic $f$-alphabet by Schnetz~\cite{Sch18}, giving a more compact expression which explicitly reveals a Galois structure. In this work, the $\mathbb{Q}$-vector spaces of Galois conjugates of the $g-2$ are conjectured up to weight four. \end{itemize} As a final remark, we mention that scattering amplitudes do not appear exclusively in perturbative quantum field theory. Among other settings, there are string perturbation theory and $\mathcal{N} = 4$ super Yang--Mills theory. In each of these theories, after suitably defining the space of integrals or amplitudes\footnote{In various modern approaches to $\mathcal{N} = 4~\text{SYM}$, including the bootstrap method, on-shell techniques, and the amplituhedron, the amplitude is constructed independently of the Feynman graphs. In these settings, the coaction principle operates on the entire amplitude, contrary to the case of perturbative quantum field theory, where it operates graph by graph.} under consideration, a version of the coaction principle is expected to hold and some promising preliminary results have already been found. We~refer to the work of Schlotterer, Stieberger and Taylor~\cite{SS13, ST14} and subsequent developments for superstring perturbation theory, and to the work of Caron-Huot et al.~\cite{CDetal20, CDetal19} for the planar limit of $\mathcal{N} = 4$ super Yang--Mills theory. \subsection*{Acknowledgements} I thank Francis Brown and Lionel Mason for useful discussions. I thank the three anonymous referees for their detailed reports that have provided a valuable guide in improving the paper. Finally, I thank Evgeny Mukhin and the rest of the organisers of the Conference on Representation Theory and Integrable Systems (ETH Z\"urich, 2019) for the opportunity to speak and to contribute to the special issue.
This work is partially supported by the Italian Department of Edu\-ca\-tion, Research and University (Torno Subito 13474/19.09.2018 POR-Lazio-FSE/2014-2020) and the Swiss National Centre of Competence in Research SwissMAP (NCCR 51NF40-141869 The Mathematics of Physics). \addcontentsline{toc}{section}{References}
\section{Introduction} Jamming attacks pose a serious threat to the continuous operability of wireless communication systems \cite{economist2021satellite, topgun}. Effective methods to mitigate such attacks are of paramount importance as wireless systems become increasingly critical to modern infrastructure~\cite{popovski2014ultra, pirayesh2022jamming}. In the uplink of massive multi-user multiple-input multiple-output (MU-MIMO) systems, effective jammer mitigation is made possible by the asymmetry in the number of antennas between the basestation (BS), which has many antennas, and a mobile jamming device, which typically has one or few antennas. One possibility, for instance, is to project the receive signals onto the subspace orthogonal to the jammer's channel~\cite{marti2021snips,yan2016jamming}. Unfortunately, such methods require accurate knowledge of the jammer's channel. If a jammer transmits permanently and with a static signature (often called barrage jamming), the~BS~can estimate its channel, for instance during a dedicated period in which the user equipments (UEs) are not transmitting~\cite{marti2021snips} or in which they transmit predefined symbols~\cite{yan2016jamming}. In contrast to barrage jamming, however, a smart jammer might jam the system only at specific time instants, such as when the UEs are transmitting data symbols, and thereby prevent the BS from estimating the jammer's channel using simple estimation algorithms. \subsection{State of the Art} Multi-antenna wireless systems offer the unique potential to effectively mitigate jamming attacks. Consequently, a variety of multi-antenna methods have been proposed for the mitigation of jamming attacks in MIMO systems \cite{pirayesh2022jamming, marti2021snips, shen14a, hoang2021suppression, yan2016jamming,zeng2017enabling, vinogradova16a, do18a, akhlaghpasand20a, akhlaghpasand20b, marti2021hybrid, wan2022robust, darsena2022anti}.
Common to all of them~is the assumption---in one way or another---that information about the jammer's transmit characteristics (e.g., the jammer's channel, or the covariance matrix between the UE transmit signals and the jammed receive signals) can be estimated using some specific subset of the receive samples.\footnote{The method of \cite{vinogradova16a} is to some extent an exception as it estimates the~UEs' subspace and projects the receive signals thereon. This method, however, distinguishes the UEs' from the jammer's subspace based on the receive power, thereby presuming that the UEs and the jammer transmit with different power.} \fref{fig:traditional}~illustrates the approach of such methods: The data phase is preceded by an augmented training phase in which the jammer's transmit characteristics as well as the channel matrix are estimated. This augmented training phase may (i) complement a traditional pilot phase with a dedicated period during which the UEs do not transmit in order to enable jammer estimation (e.g., \cite{marti2021snips, shen14a, hoang2021suppression}) or (ii) consist of an extended pilot phase so that there exist pilot sequences that are unused by the UEs and on whose span the receive signals can be projected to estimate the jammer subspace~\mbox{(e.g., \cite{do18a, akhlaghpasand20a, akhlaghpasand20b}).} The estimated jammer characteristics are then used to perform jammer-mitigating data detection. Such an approach succeeds in the case of barrage jammers, but is unreliable for estimating the propagation characteristics of smart jammers; see \fref{sec:example}: A smart jammer can evade estimation and, thus, circumvent mitigation by not transmitting during the training phase, for instance because it is aware of the defense mechanism or simply because it jams in short bursts only.
For this reason, our proposed method does not estimate the jammer channel based on a dedicated training phase, but instead utilizes the entire transmission period and unifies jammer estimation and mitigation, channel estimation and data detection; see \fref{fig:maed}. Many studies have already shown how smart jammers can disrupt wireless communication systems by targeting only specific parts of the wireless transmission process \cite{miller2010subverting, miller2011vulnerabilities, clancy2011efficient, sodagari2012efficient, lichtman2013vulnerability, lichtman20185g, lichtman2016lte, girke2019towards,lapan2012jamming} instead of using barrage jamming. Jammers that target only the pilot phase have received considerable attention \cite{miller2010subverting,miller2011vulnerabilities,clancy2011efficient,sodagari2012efficient,lichtman2013vulnerability}, as such attacks can be more energy-efficient than barrage jamming in disrupting communication systems that do not defend themselves against jammers~\cite{clancy2011efficient,sodagari2012efficient,lichtman2013vulnerability}. However, if a jammer is active during the pilot phase, then a BS that \emph{does} defend itself against attacks can estimate the jammer's channel by exploiting knowledge of the UE transmit symbols during the pilot phase, for instance with the aid of unused pilot sequences~\cite{do18a, akhlaghpasand20a, akhlaghpasand20b}. To disable such jammer-mitigating communication systems, a smart jammer might thus refrain from jamming the pilot phase and only target the data phase, even though such jamming attacks have received little attention so far \cite{lichtman2016lte, girke2019towards}.
Other threat models that have been analyzed include attacks on control channels \cite{lichtman2013vulnerability, lichtman2016lte, lichtman20185g}, the beam alignment procedure \cite{darsena2022anti}, and the time synchronization phase~\cite{lapan2012jamming}, but this paper does not consider such protocol- and control-channel attacks. \begin{figure}[tp] \vspace{-1mm} \centering ~ \subfigure[Existing methods separate jammer estimation (JEST) and channel~estimation (CHEST) from the jammer-resilient data detection (DET). They~are ineffective against jammers that jam the data phase but not the training~phase.]{ \includegraphics[width=0.95\columnwidth]{figures/sota.pdf} \label{fig:traditional} } \newline \subfigure[Our method unifies jammer estimation and mitigation, channel estimation, and data detection to deal with jammers regardless of their activity~pattern.]{ \includegraphics[width=0.95\columnwidth]{figures/maed.pdf} \label{fig:maed} } \caption{ The approach to jammer mitigation taken by existing methods (a) compared to the proposed method (b). In the figure, $\bmy_1,\dots,\bmy_K$ are the receive signals, and $\hat\Hj, \hat\bH$, and $\hat\bS_D$ are the estimates of the jammer channel, the UE channel matrix, and the UE transmit symbols, respectively. } \vspace{-2mm} \label{fig:maed_vs_trad} \end{figure} \subsection{Contributions} To mitigate smart jammers, we propose a novel approach that does not depend on the jammer being active during \textit{any} specific period. Leveraging the fact that a jammer cannot change its subspace instantaneously, we utilize a problem formulation which unifies jammer estimation and mitigation, channel estimation, and data detection, instead of dealing with these tasks independently (cf.~\fref{fig:maed}). We support the soundness of the proposed optimization problem by proving that its global minimum is unique and recovers the transmitted data symbols, given that certain sensible conditions are satisfied.
By building on techniques for joint channel estimation and data detection \cite{vikalo2006efficient, xu2008exact, kofidis2017joint, castaneda2018vlsi, yilmaz2019channel, he2020model, song2021soft}, we then develop two efficient iterative algorithms for approximately solving the optimization problem. The first algorithm is called MAED (short for MitigAtion, Estimation, and Detection) and solves the problem approximately using forward-backward splitting (FBS) \cite{goldstein16a}. The second algorithm is called SO-MAED (short for Soft-Output MAED) and extends MAED with a more informative prior on the data symbols to produce soft symbol estimates. SO-MAED also relies on deep unfolding to optimize its parameters \cite{song2021soft, hershey2014deep, balatsoukas2019deep, goutay2020deep, monga2021algorithm}. We use simulations with different propagation models to demonstrate that MAED and SO-MAED effectively mitigate a wide variety of na\"ive and smart jamming attacks without requiring any knowledge about the attack type. \subsection{Notation} Matrices and column vectors are represented by boldface uppercase and lowercase letters, respectively. For a matrix~$\bA$, the transpose is $\tp{\bA}$, the conjugate transpose is $\herm{\bA}$, the Moore--Penrose pseudoinverse is $\pinv{\bA}$, the entry in the $\ell$th row and $k$th column is $[\bA]_{\ell,k}$, the $k$th column is $\bma_k$, the submatrix consisting of the columns from $n$ through $m$ is $\bA_{[n:m]}$, and the Frobenius norm is $\| \bA \|_F$. The $N\!\times\!N$ identity matrix is $\bI_N$. For a vector~$\bma$, the $\ell_2$-norm is $\|\bma\|_2$, the real part is $\Re\{\bma\}$, the imaginary part is $\Im\{\bma\}$, and the span is $\textit{span}(\bma)$. The $k$th standard unit vector is denoted $\bme_k$, where the dimension is implicit. Expectation with respect to a random vector~$\bmx$ is denoted by $\Ex{\bmx}{\cdot}$. We define $i^2=-1$.
The complex $n$-hypersphere of radius $r$ is denoted by $\mathbb{S}_r^n$, and~$[n:m]$ are the integers from $n$ through~$m$. \section{System Setup}\label{sec:setup} We consider the uplink of a massive MU-MIMO system in which $U$ single-antenna UEs transmit data to a $B$-antenna BS in the presence of a single-antenna jammer. The channels are assumed to be frequency-flat and block-fading with coherence time $K=T+D$. The first $T$ time slots are used to transmit pilot symbols; the remaining $D$ time slots are used to transmit data symbols. The UE transmit matrix is $\bS = [\bS_T,\bS_D]$, where $\bS_T\in \opC^{U\times T}$ and $\bS_D\in\setS^{U\times D}$ contain the pilots~and the transmit symbols, respectively. The set $\setS$ is the transmit constellation, which is normalized to have unit average symbol energy. We assume that the jammer does not prevent the UEs and the BS from establishing synchronization, which allows us to use the following discrete-time input-output relation: \begin{align} \bY = \bH\bS + \Hj\tp{\bsj} + \bN. \label{eq:io} \end{align} Here, $\bY\in\opC^{B\times K}$ is the BS receive matrix that contains the \mbox{$B$-dimensional} receive vectors over all $K$ time slots, \mbox{$\bH\!\in\!\opC^{B\times U}$} models the MIMO uplink channel, $\Hj\in\opC^B$ models the channel between the jammer and the BS, $\bsj=\tp{[\tp{\bsj_T},\tp{\bsj_D}]}\in\opC^K$ contains the jammer transmit symbols over all $K$ time slots, and $\bN\in\opC^{B\times K}$ models thermal noise consisting of independently and identically distributed (i.i.d.) circularly-symmetric complex Gaussian entries with variance~$N_0$. Unless stated otherwise, we assume that the jammer's transmit symbols $\bsj$ are independent of $\bS$. No other assumptions about the distribution of $\bsj$ are made; in particular, we do not assume that these entries are~i.i.d.
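As a quick numerical illustration of the input-output relation \eqref{eq:io}, the following Python sketch assembles the receive matrix $\bY$ from its three components. All dimensions, the i.i.d.\ Rayleigh-fading channels, and the QPSK-like constellation are illustrative assumptions of ours, not prescribed by the model:

```python
import numpy as np

rng = np.random.default_rng(0)
B, U, T, D = 64, 16, 16, 48        # illustrative sizes: BS antennas, UEs, pilot/data slots
K = T + D                          # coherence interval

def crandn(*shape):
    """Circularly-symmetric complex Gaussian entries with unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H  = crandn(B, U)                  # UE channel matrix (i.i.d. Rayleigh fading, an assumption)
hJ = crandn(B)                     # jammer channel
# QPSK symbols with unit average symbol energy
S  = (rng.choice([-1, 1], (U, K)) + 1j * rng.choice([-1, 1], (U, K))) / np.sqrt(2)
sJ = crandn(K)                     # jammer transmit sequence (arbitrary; need not be i.i.d.)
N0 = 0.01
N  = np.sqrt(N0) * crandn(B, K)    # thermal noise with variance N0

Y = H @ S + np.outer(hJ, sJ) + N   # receive matrix: Y = H S + hJ sJ^T + N
```

Note that the jammer contribution $\Hj\tp{\bsj}$ enters as a rank-one outer product, which is the structural property exploited later in the paper.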
In what follows, we use plain symbols for the true channels and transmit signals, variables with a tilde for optimization variables, and quantities with a hat for (approximate) solutions to optimization problems, e.g., $\hat\bS_D$ is the estimate of the UE transmit symbol matrix~$\bS_D$ as determined by solving an optimization problem with respect to $\tilde\bS_D$. \section{Motivating Example} \label{sec:example} We start by considering the motivating example of \fref{fig:example}, which shows uncoded bit error-rates (BERs) of different receivers for an i.i.d. Rayleigh fading MU-MIMO system with \mbox{$B=128$} BS antennas and $U=32$ UEs that transmit 16-QAM symbols under~a jamming attack. In \fref{fig:example:success} the system is attacked by a barrage jammer that transmits i.i.d. Gaussian symbols and whose receive power exceeds that of the average UE by 30\,dB. The ``LMMSE'' curve shows the performance of a non-mitigating receiver that estimates the UE channel matrix based on orthogonal pilots with a least squares (LS) estimator followed by a linear minimum mean square error (LMMSE) detector. The \mbox{``JL-LMMSE''} curve shows the performance of an identical receiver, but for a jammerless system, i.e., one without a jamming attack. The ``geniePOS'' receiver is furnished with ground-truth knowledge of the jammer channel~$\Hj$. This baseline method nulls the jammer by projecting the receive signals onto the orthogonal complement of $\textit{span}(\Hj)$ using the matrix $\bP_\Hj = \bI_B - \Hj\pinv{\Hj}$, where $\pinv{\Hj}=\herm{\Hj}/\|\Hj\|_2^2$,~as \begin{align} \bP_\Hj\bY &= \bP_\Hj\,\bH\bS + \bP_\Hj\,\Hj\tp{\bsj} + \bP_{\Hj}\,\bN \label{eq:pos} \\ &= \bP_\Hj\,\bH\bS + \bP_\Hj\,\bN, \end{align} since $\bP_\Hj\,\Hj=\mathbf{0}$. The result is an effective jammerless system with receive signal $\bY_{\bP} = \bP_\Hj \bY$, effective channel matrix \mbox{$\bH_{\bP} = \bP_\Hj\bH$}, and (colored) noise $\bN_\bP = \bP_\Hj \bN \sim \setC\setN(\mathbf{0},\No\bP_\Hj)$.
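The nulling step \eqref{eq:pos} is easy to verify numerically: the projector $\bP_\Hj = \bI_B - \Hj\pinv{\Hj}$ annihilates the rank-one jammer interference exactly. A minimal sketch (dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
B, K = 32, 20                       # illustrative dimensions
hJ = rng.standard_normal(B) + 1j * rng.standard_normal(B)   # jammer channel
sJ = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # jammer symbols

# Orthogonal projector onto the complement of span(hJ):
# P = I - hJ hJ^+ with hJ^+ = hJ^H / ||hJ||^2
P = np.eye(B) - np.outer(hJ, hJ.conj()) / np.linalg.norm(hJ) ** 2

J = np.outer(hJ, sJ)                # rank-one jammer interference hJ sJ^T
print(np.linalg.norm(P @ J))        # numerically zero: the interference is nulled
```

The projector is Hermitian and idempotent ($\bP_\Hj^2 = \bP_\Hj$), so applying it costs one degree-of-freedom in the receive signal but otherwise leaves the UE subspace intact.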
Finally, geniePOS performs LS channel estimation and subsequent LMMSE data detection in this projected system \cite{marti2021snips}. The ``POS'' receiver works analogously to geniePOS, except that it is not furnished with ground-truth knowledge of the jammer channel---instead, this method estimates the jammer subspace $\Hj/\|\Hj\|_2$ based on ten receive samples in which the UEs do not transmit and only the jammer is active. If the matrix received in that period is denoted by $\bY_\text{J}$, then the jammer subspace is estimated as the left-singular vector associated with the largest singular value of $\bY_\text{J}$. \fref{fig:example:success} shows that geniePOS effectively mitigates the jammer, achieving a performance virtually identical to that of the jammer-free \mbox{JL-LMMSE} receiver. Indeed, geniePOS nulls the jammer perfectly, so that the only performance loss comes from the loss of one degree-of-freedom in the receive signal. POS is not as effective, since it nulls the jammer only imperfectly due to its noisy estimate of the jammer subspace. However, this method still mitigates the jammer with a loss of less than 2\,dB in SNR (at $0.1\%$ BER) compared to the jammer-free JL-LMMSE receiver. \begin{remark} We point out that reserving time slots for jammer estimation, during which the UEs cannot transmit, directly reduces the achievable data~rates. \end{remark} In contrast, in \fref{fig:example:fail} the attacking (smart) jammer is aware of the POS receiver's mitigation scheme and suspends transmission during the time slots that are used to estimate its subspace. The POS receiver's subspace estimate is thus based entirely on noise and is completely independent of the jammer's true channel~$\Hj$. Consequently, the mitigation mechanism fails spectacularly, yielding a bit error-rate identical to that of the non-mitigating LMMSE receiver. \begin{figure}[tp] \centering \!\!
\subfigure[mitigation of a barrage jammer]{ \includegraphics[height=3.85cm]{figures/128x32_16QAM_I1_D128_barrage_gaussian_rho30_100Trials_success} \label{fig:example:success} }\!\!\! \subfigure[failed mitigation of a smart jammer]{ \includegraphics[height=3.85cm]{figures/128x32_16QAM_I1_D128_barrage_gaussian_rho30_100Trials_fail} \label{fig:example:fail} }\!\! \caption{Example that illustrates how methods that estimate the jammer's channel based on a subset of samples fail when facing a smart jammer.} \label{fig:example} \end{figure} \section{Joint Jammer Estimation and Mitigation, Channel Estimation, and Data Detection} The foregoing example has demonstrated the danger of estimating the jammer's subspace (or other characteristics of~the jammer, such as its spatial covariance) based on a certain subset of receive samples when facing a smart jammer. We therefore propose a method that does not depend on the jammer being active during any specific period. This independence is achieved by considering the receive signal over an entire coherence interval at once and exploiting the fact that the jammer subspace stays fixed within that period, regardless of the jammer's activity pattern or transmit sequence. Specifically, we first propose a novel optimization problem that combines a tripartite goal of (i) mitigating the jammer's interference by locating its subspace $\textit{span}(\Hj)$ and projecting the receive matrix $\bY$ onto the orthogonal complement of~that subspace, (ii) estimating the channel matrix~$\bH$, and (iii) recovering the data matrix $\bS_D$. We then establish the soundness of the proposed optimization problem by proving that, under certain sensible conditions, and assuming negligible thermal noise, its minimum is unique and corresponds to the desired solution; in particular, the problem recovers the data matrix~$\bS_D$. Finally, we develop efficient iterative algorithms that approximately solve the proposed optimization problem. 
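For concreteness, the subspace-estimation step of the POS baseline from \fref{sec:example} can be sketched in a few lines, together with its failure mode against a silent smart jammer. Sizes, noise level, and the alignment metric are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
B, L = 64, 10                        # BS antennas, jammer-only samples (illustrative sizes)
hJ = rng.standard_normal(B) + 1j * rng.standard_normal(B)   # jammer channel
sJ = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # jammer symbols
noise = 0.1 * (rng.standard_normal((B, L)) + 1j * rng.standard_normal((B, L)))

def est_subspace(YJ):
    """POS-style estimate: leading left-singular vector of the jammer-only samples."""
    return np.linalg.svd(YJ)[0][:, 0]

def alignment(u, h):
    """|u^H h| / ||h|| in [0, 1]; 1 means the estimate matches span(h) perfectly."""
    return abs(np.vdot(u, h)) / np.linalg.norm(h)

u_barrage = est_subspace(np.outer(hJ, sJ) + noise)   # barrage jammer: active while estimating
u_smart   = est_subspace(noise)                      # smart jammer: silent, only noise remains

print(alignment(u_barrage, hJ))   # close to 1: subspace recovered up to a phase
print(alignment(u_smart, hJ))     # small: the estimate carries no information about hJ
```

This mirrors the two panels of \fref{fig:example}: a high alignment yields near-jammerless performance, whereas the noise-only estimate is independent of $\Hj$ and nulling along it accomplishes nothing.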
\subsection{The Optimization Problem} We start our derivation by considering the maximum-likelihood problem for joint channel estimation and data detection in the absence of jamming, which is \cite{vikalo2006efficient} \begin{align} \big\{\hat\bH, \hat\bS_D\big\} &= \argmin_{\substack{\hspace{1.3mm}\tilde\bH\in\opC^{B\times U}\\ \tilde\bS_D\in\setS^{U\times D}}}\! \big\|\bY - \tilde\bH \tilde\bS \big\|^2_F, \label{eq:ml_jed} \end{align} where we define $\tilde\bS \triangleq [\bS_T,\tilde\bS_D]$ for brevity and leave the dependence on $\tilde\bS_D$ implicit. This objective already integrates the goals of estimating the channel matrix and detecting the data symbols: If the noise $\bN$ is small enough to be negligible, the problem is minimized by the true channel and data matrices, \begin{align} \|\bY - \bH \bS \|^2_F \approx 0, \end{align} where the pilot matrix $\bS_T$ ensures uniqueness.\footnote{If the noise $\bN$ is not strictly equal to zero, then the channel estimate $\hat\bH$ for which \eqref{eq:ml_jed} is minimized does not coincide \emph{exactly} with the true channel matrix~$\bH$. But thanks to the discrete search space, the minimizing data estimate $\hat\bS_D$ still coincides exactly with the true data matrix $\bS_D$ if $\bN$ is small enough.} However, in case of a jamming attack, the jammer will cause a residual \begin{align} \|\bY - \bH\bS\|^2_F &= \|\Hj\tp{\bsj} + \bN\|^2_F \approx \|\Hj\tp{\bsj}\|^2_F \gg 0 \label{eq:jed_residual} \end{align} when plugging the true channel and data matrices into \fref{eq:ml_jed}, and there is no reason to assume that there exists no tuple $\{\tilde\bH,\tilde\bS_D\}$ with $\tilde\bS_D\neq\bS_D$ such that $\|\bY - \tilde\bH \tilde\bS\|^2_F < \|\bY - \bH\bS\|^2_F$. Note, however, that the residual $\Hj\tp{\bsj}$ in \eqref{eq:jed_residual} is a rank-one matrix whose columns are all in $\textit{span}(\Hj)$, regardless of the jamming signal~$\bsj$. 
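The rank-one structure of the residual in \eqref{eq:jed_residual} is straightforward to verify numerically: in the noiseless jammed case, $\bY - \bH\bS = \Hj\tp{\bsj}$ has rank one, and every column lies in $\textit{span}(\Hj)$, so projecting onto the orthogonal complement of $\textit{span}(\Hj)$ makes the residual vanish. A sketch with illustrative dimensions of our choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
B, U, K = 32, 8, 40                  # illustrative dimensions
H  = rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))
S  = (rng.choice([-1, 1], (U, K)) + 1j * rng.choice([-1, 1], (U, K))) / np.sqrt(2)
hJ = rng.standard_normal(B) + 1j * rng.standard_normal(B)
sJ = rng.standard_normal(K) + 1j * rng.standard_normal(K)

Y = H @ S + np.outer(hJ, sJ)         # noiseless receive matrix under jamming

R = Y - H @ S                        # residual at the TRUE channel and data matrices
print(np.linalg.matrix_rank(R))      # 1: rank-one residual, regardless of sJ

# Projecting onto the complement of span(hJ) annihilates the residual,
# which is why the projected objective is minimized by the true quantities.
P = np.eye(B) - np.outer(hJ, hJ.conj()) / np.linalg.norm(hJ) ** 2
print(np.linalg.norm(P @ R))         # numerically zero
```

This is precisely the observation that motivates inserting a rank-one-nulling projection into the objective in the derivation that follows.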
Consider therefore what happens when we take the~matrix\footnote{The dependence of $\tilde\bP$ on $\tilde\bmp$ is left implicit here and throughout the paper.} \begin{align} \tilde\bP\triangleq\bI-\tilde\bmp\herm{\tilde\bmp},~\tilde\bmp\in \mathbb{S}_1^B, \end{align} which projects a signal onto the orthogonal complement of some arbitrary one-dimensional subspace $\textit{span}(\tilde\bmp)$, and then apply that projection to the objective of \eqref{eq:ml_jed} as follows: \begin{align} \|\tilde\bP(\bY - \tilde\bH\tilde\bS)\|^2_F. \label{eq:ml_p_jed} \end{align} If we now plug the true channel and data matrices into \fref{eq:ml_p_jed} (still assuming negligibility of the noise $\bN$), then we obtain \begin{align} \|\tilde\bP(\bY - \bH\bS)\|^2_F &= \|\tilde\bP\Hj\tp{\bsj} + \tilde\bP\bN\|^2_F \\ &\approx \|\tilde\bP\Hj\tp{\bsj}\|^2_F \geq 0, \end{align} with equality if and only if $\tilde\bmp$ is collinear with $\Hj$. In other words, the unit vector $\tilde\bmp$ which in combination with the true channel and data matrices minimizes \eqref{eq:ml_p_jed} is collinear with the jammer's channel, in which case $\tilde\bP$ is the POS matrix $\bP_\Hj$ from~\eqref{eq:pos}. Thus, if the noise $\bN$ is negligible, and if (i)~$\tilde\bP$ is the projection onto the orthogonal complement of $\textit{span}(\Hj)$, (ii)~$\tilde\bH$ is the true channel matrix, and (iii) $\tilde\bS$ contains the true data matrix, then the tuple $\{\tilde\bmp,\tilde\bH,\tilde\bS\}$ minimizes \eqref{eq:ml_p_jed}. These~are, of course, exactly the goals which we want to attain. % We thus formulate our joint jammer estimation and mitigation, channel estimation, and data detection problem as follows: \begin{align} \big\{\hat\bmp, \hat\bH_\bP, \hat\bS_D\big\} &= \argmin_{\substack{\tilde\bmp\in \mathbb{S}_1^B\hspace{1.4mm}\\ \hspace{1.3mm}\tilde\bH_\bP\in\opC^{B\times U}\\ \tilde\bS_D\in\setS^{U\times D}}}\! \big\|\tilde\bP\bY - \tilde\bH_\bP \tilde\bS \big\|^2_F.\! 
\label{eq:obj1} \end{align} Note that, compared to \eqref{eq:ml_p_jed}, we have absorbed the projection matrix $\tilde\bP$ directly into the unknown channel matrix $\tilde\bH_\bP$, which replaces the product $\tilde\bP\tilde\bH$ in \eqref{eq:ml_p_jed}. This approach avoids the issue that the columns~of~$\tilde\bH$ would be indeterminate with respect to the length of their components in the direction of $\tilde\bmp\approx\Hj$, so that there would be no distinction between channel estimates $\tilde\bH + \Hj\tp{\tilde{\bsj}}$ with different jamming sequences~$\tilde{\bsj}$. \subsection{Theory}\label{sec:theory} We have derived the optimization problem \eqref{eq:obj1} based on intuitive but non-rigorous arguments. Thus, we will now support the soundness of \eqref{eq:obj1} by proving that, under certain sensible conditions, and assuming that the noise is negligible, its solution is unique and guaranteed to recover the true data matrix. We make the following assumptions: The channel matrix $\bH$ has full column rank $U$, the jammer channel $\Hj$ is not included in the columnspace of $\bH$, and the pilot matrix $\bS_T$ has full row rank $U$. In addition, we define a concept which may seem cryptic at first, but which will be clarified later. \begin{defi} \label{def:eclipse} We say that the jammer is \emph{eclipsed} in a given coherence interval if there exists a matrix $\tilde{\bS}_D\in\setS^{U\times D}$ such that \mbox{$\text{rank}(\bS_D - \tilde{\bS}_D)\leq 1$} and $\tp{\bsj_D} = \tp{\bsj_T}\pinv{\bS_T}\tilde{\bS}_D$. \end{defi} We can now state our result; the proof is in \fref{app:proof1}. \begin{thm} \label{thm:maed} In the absence of noise, $\bN=\mathbf{0}$, and if the jammer is not eclipsed, then the problem in \eqref{eq:obj1} has the unique solution $\{\hat\bmp, \hat\bH_\bP, \hat\bS_D\}=\{\bmp, \bP\bH, \bS_D\}$. (In fact, $\hat{\bmp}$ is unique only up to an immaterial complex rotation, $\hat{\bmp}=\alpha\bmp, |\alpha|=1$.) 
\end{thm} In other words, as long as the jammer is not eclipsed, the problem in \eqref{eq:obj1} is uniquely minimized by the true jammer subspace, projected channel matrix, and data matrix. We now shed light on the notion of eclipsedness, and we answer in the affirmative the important question of whether one can expect the jammer to typically be un-eclipsed. In essence, the jammer is eclipsed if its jamming signal $\bmw$ is such that multiple possible ``explanations'' of the receive signal $\bY$ exist which are consistent with the pilot matrix $\bS_T$ and under some of which the jammer is not recognized as the jammer; cf. the discussion of \eqref{eq:eclipsing_equation} in \fref{app:proof1}. This is best explained by considering the two emblematic cases of an eclipsed jammer: \subsubsection{An inactive jammer (or no jammer)} Clearly, if $\bsj=\mathbf{0}$, then $\tp{\bsj_D}=\tp{\bsj_T}\pinv{\bS_T}\tilde{\bS}_D$ for all $\tilde{\bS}_D$, including \mbox{$\tilde{\bS}_D=\bS_D$}.~In~this case, there is a mismatch between the jammerless actual wireless transmission and the jammed model in \eqref{eq:io}. Since there is no jammer subspace to identify, the choice of the projection~$\tilde\bP$ is undetermined, so that \fref{thm:maed} no longer~applies. \subsubsection{The jammer transmits a valid pilot sequence} If the jammer transmits the $k$th UE's pilot sequence in the training phase and constellation symbols in the data phase, then there are no formal grounds for the receiver to distinguish between the jammer and the $k$th UE.
It can readily be shown that, besides the desired solution $\{\hat\bmp, \hat\bH_\bP, \hat\bS_D\}=\{\bmp, \bP\bH, \bS_D\}$, there then exists another solution to~\eqref{eq:obj1} which identifies the $k$th UE as the jammer, nulls that UE by setting $\hat\bmp = \bmh_k/\|\bmh_k\|$, and instead identifies the jammer as the $k$th UE by estimating \begin{align} \hat\bH_\bP &= \hat\bP [ \bmh_1, \dots, \bmh_{k-1}, \Hj, \bmh_{k+1}, \dots, \bmh_U ], \\ \hat\bS_D &= \tp{[ \tp{\bms_{D,1}}, \dots, \tp{\bms_{D,k-1}}, \tp{\bmw_D}, \tp{\bms_{D,k+1}}, \dots, \tp{\bms_{D,U}} ]}, \end{align} where $\bms_{D,u}$ is the $u$th row of $\bS_D$. In addition to these two emblematic cases, eclipsing can also happen in more accidental cases where the symbol error matrix $\bS_D - \tilde\bS_D$ has rank one for some $\tilde\bS_D$ such that, by some (un)fortunate coincidence, $\tp{\bsj_D} = \tp{\bsj_T}\pinv{\bS_T}\tilde{\bS}_D$ holds true. However, we will now show that if the jammer does not know the pilot sequences, e.g., because they are drawn at random by the BS and secretly communicated to the UEs, then an active jammer (where $\bsj\neq \mathbf{0}$) is typically not eclipsed. To show this, we consider a case in which the pilot matrix $\bS_T$ is square; the proof is relegated to \fref{app:proof2}. \begin{thm} \label{thm:maed2} If the pilot matrix $\bS_T$ is drawn uniformly over the set of $U\times U$ unitary matrices and if $\bsj\neq\mathbf{0}$ is independent of $\bS_T$, then the probability that the jammer is eclipsed is zero. \end{thm} \begin{remark} \label{rem:rare} It is by no means necessary to use random pilots to avoid eclipsing. Another sufficient condition for the jammer to be eclipsed only with zero probability is if $\tp{\bmw_D}$ is independent of $\tp{\bsj_T}\pinv{\bS_T}$ and if at least one of the marginals of $\tp{\bmw_D}$ or of $\tp{\bsj_T}\pinv{\bS_T}$ has no mass points.
In essence, unless the jammer chooses its input sequence as some (partially randomized) function of the pilot matrix $\bS_T$, eclipsing is the rare exception, not the norm. In this regard, see also the simulation results in \fref{sec:results}. \end{remark} \begin{remark} The fact that error-free communication in the presence of jamming can be assured if the BS and UEs~share~a common secret that enables them to use a randomized~communication scheme, but not otherwise, is reminiscent of~information-theoretic results which prove a similar dichotomy on a more fundamental level. See \cite[Sec. V]{lapidoth1998reliable} and references therein. \end{remark} \section{Forward-Backward Splitting with a Box Prior} We now provide the first of two algorithms for approximately solving the joint jammer estimation and mitigation, channel estimation, and data detection problem in \eqref{eq:obj1}. Note first of all that the objective is quadratic in $\tilde\bH_\bP$, so we can derive the optimal value of $\tilde\bH_\bP$ as a function of $\tilde\bP$ and $\tilde\bS$ as \begin{align} \hat\bH_\bP = \tilde\bP\bY\pinv{\tilde\bS}, \end{align} where $\pinv{\tilde\bS}=\herm{\tilde\bS}\inv{(\tilde\bS\herm{\tilde\bS})}$. Substituting $\hat\bH_\bP$ back into \eqref{eq:obj1} yields an optimization problem which depends only on $\tilde\bmp$ and $\tilde\bS_D$: \begin{align} \big\{\hat\bmp, \hat\bS_D\big\} = \argmin_{\substack{\tilde\bmp\in \mathbb{S}_1^B\hspace{1.4mm}\\ \tilde\bS_D\in\setS^{U\times D}}} \big\|\tilde\bP\bY(\bI_K - \pinv{\tilde\bS}\tilde\bS)\big\|^2_F. \label{eq:obj3} \end{align} Solving \eqref{eq:obj3} remains difficult due to its combinatorial nature, so we resort to solving it approximately. First, we relax the constraint set $\setS$ to its convex hull $\setC\triangleq\textit{conv}(\setS)$ as in \cite{castaneda2018vlsi}.
This can be viewed as replacing the probability mass function over the constellation $\setS$, which represents the true symbol prior, with a box prior that is uniform over $\setC$ and zero elsewhere \cite{jeon2021mismatched}. We then approximately solve this~relaxed problem formulation in an iterative fashion by alternating between a forward-backward splitting descent step in $\tilde\bS$ and a minimization step in $\tilde\bP$. \subsection{Forward-Backward Splitting Step in $\tilde\bS$} \label{sec:fbs} Forward-backward splitting (FBS) \cite{goldstein16a}, also called proximal gradient descent, is an iterative method for solving convex optimization problems of the form \begin{align} \argmin_{\tilde\bms}\, f(\tilde\bms) + g(\tilde\bms), \label{eq:fbs1} \end{align} where $f$ is convex and differentiable, and $g$ is convex but not necessarily differentiable, smooth, or bounded. Starting from an initialization vector $\tilde\bms^{(0)}$, FBS solves the problem in~\eqref{eq:fbs1} iteratively by computing \begin{align} \tilde\bms^{(t+1)} = \proxg\big(\tilde\bms^{(t)} - \tau^{(t)}\nabla f(\tilde\bms^{(t)}); \tau^{(t)}\big). \label{eq:fbs2} \end{align} Here, $\tau^{(t)}$ is the stepsize at iteration $t$, $\nabla f(\tilde\bms)$ is the gradient~of $f(\tilde\bms)$, and $\proxg$ is the proximal operator of $g$, defined as \cite{parikh13a} \begin{align} \proxg(\bmx, \tau) = \argmin_{\tilde\bmx} \tau g(\tilde\bmx) + \frac12 \|\bmx - \tilde\bmx\|_2^2. \end{align} For a suitable sequence of stepsizes $\{\tau^{(t)}\}$, FBS solves convex optimization problems exactly. FBS can also be used to approximately solve non-convex problems, although there are typically no guarantees for optimality or even convergence~\cite{goldstein16a}. 
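As a concrete illustration of the FBS update \eqref{eq:fbs2}, the following sketch applies it to a toy box-constrained least-squares problem (all data are synthetic and unrelated to the system model above; a constant stepsize $1/L$, with $L$ the gradient's Lipschitz constant, replaces the Barzilai-Borwein rule used later in the text):

```python
import numpy as np

# Toy illustration of FBS: f(s) = ||y - A s||_2^2 is smooth and convex,
# g is the indicator of the box [-1, 1]^n, and prox_g is entrywise clipping.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))
s_true = np.clip(rng.standard_normal(4), -1.0, 1.0)  # ground truth inside the box
y = A @ s_true

def grad_f(s):
    return 2.0 * A.T @ (A @ s - y)               # gradient of the smooth part f

def prox_g(x, tau):
    return np.clip(x, -1.0, 1.0)                 # prox of the box indicator g

s = np.zeros(4)
tau = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)    # stepsize 1/L
for _ in range(2000):
    s = prox_g(s - tau * grad_f(s), tau)         # the FBS update

print(np.allclose(s, s_true, atol=1e-4))         # True
```

Because this toy problem is convex (indeed strongly convex), FBS converges to the exact minimizer; for the non-convex problem \eqref{eq:obj3}, the same update serves only as an approximate solver.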
For the optimization problem in \fref{eq:obj3}, we define $f$ and $g$ as \begin{align} f(\tilde\bS) &= \big\|\tilde\bP\bY(\bI_K - \pinv{\tilde\bS}\tilde\bS)\big\|^2_F \end{align} and \begin{align} g(\tilde\bS) &= \begin{cases} 0 &\text{if }\,\tilde\bS_{[1:T]}=\bS_T \text{ and } \tilde\bS_{[T+1:K]}\in\setC^{U\times D} \!\!\!\\ \infty &\text{else}. \end{cases} \end{align} The gradient of $f$ in $\tilde\bS$ is given by \begin{align} \nabla f(\tilde\bS) = -\herm{(\bY\pinv{\tilde\bS})}\tilde\bP\bY(\bI_K - \pinv{\tilde\bS}\tilde\bS), \label{eq:gradient} \end{align} and the proximal operator for $g$ is simply the orthogonal projection onto $\setC$, which acts entrywise on $\tilde\bS$~as \begin{align} [\proxg(\tilde\bS)]_{u,k} = \begin{cases} [\bS_T]_{u,k} &\text{ if } k\in[1:T] \\ \text{proj}_\setC([\tilde\bS_{u,k}]) &\text{ else,} \end{cases} \label{eq:proxg} \end{align} where the function $\text{proj}_\setC$ is given as \begin{align} \text{proj}_\setC(x) =\, & \min\{\max\{\Re(x),-\lambda\},\lambda\} \nonumber\\ &+ i\min\{\max\{\Im(x),-\lambda\},\lambda\}, \end{align} with $\lambda=\sqrt{\sfrac{1}{2}}$ for a QPSK constellation and $\lambda=\sqrt{\sfrac{9}{10}}$ for a 16-QAM constellation, see \fref{fig:constellations}. To select the per-iteration stepsizes~$\{\tau^{(t)}\}$, we use the Barzilai-Borwein method \cite{barzilai1988two}. \subsection{Minimization Step in $\tilde\bP$} After each FBS step in $\tilde\bS$, we minimize \eqref{eq:obj3} with respect to the vector~$\tilde\bmp$. Defining the residual matrix $\tilde\bE\triangleq \bY(\bI_K - \pinv{\tilde{\bS}}\tilde{\bS})$ and performing standard algebraic manipulations yields \begin{align} \hat\bmp &= \argmin_{\tilde\bmp\in \mathbb{S}_1^B} \big\|\tilde\bP \tilde\bE \big\|^2_F \\ &= \argmax_{\tilde\bmp\in \mathbb{S}_1^B} \, \herm{\tilde\bmp} \tilde\bE \herm{\tilde\bE} \tilde\bmp. 
\label{eq:rayleigh} \end{align} It follows that the vector $\hat\bmp$ minimizing \eqref{eq:obj3} for a fixed~$\tilde\bS$ is the unit vector that maximizes the Rayleigh quotient of $\tilde\bE \herm{\tilde\bE}$. The~solution is the unit-length eigenvector $\bmv_1(\tilde\bE \herm{\tilde\bE})$ associated with the largest eigenvalue of $\tilde\bE \herm{\tilde\bE}$~\cite[Thm.\,4.2.2]{horn2013matrix}, \begin{align} \hat\bmp=\bmv_1(\tilde\bE \herm{\tilde\bE}). \end{align} Calculating this eigenvector in every iteration of our algorithm would be computationally expensive, so we approximate it using a single power iteration \cite[Sec.\,8.2.1]{GV96}, i.e., we estimate \begin{align} \hat\bmp^{(t+1)} = \frac{\tilde\bE^{(t+1)} \herm{(\tilde\bE^{(t+1)})}\hat\bmp^{(t)}}{\|\tilde\bE^{(t+1)} \herm{(\tilde\bE^{(t+1)})}\hat\bmp^{(t)}\|_2}, \end{align} where the power method is initialized with the subspace estimate $\hat\bmp^{(t)}$ from the previous algorithm iteration. \subsection{Preprocessing} If the algorithm starts directly with a gradient descent step in the direction of \eqref{eq:gradient}, one runs the risk of advancing significantly in the wrong direction---especially if the jammer is extremely strong, since a strong jammer will also lead to a large gradient amplitude. Empirically, we observe that such a large initial digression can be problematic (if, e.g., the jammer is $\geq\!50$\,dB stronger than the average UE). It might therefore be tempting to start the algorithm directly with a projection step: If one initializes $\tilde\bS^{(0)}=\mathbf{0}_{U\times D}$, then $\tilde\bE^{(0)}=\bY$, so that the algorithm starts by nulling the dimension of $\bY$ which contains the most energy. In the presence of a strong jammer, this is a sensible strategy since this dimension then corresponds to the jammer subspace.
However, if the received jamming energy is small compared to the energy received from the UEs (e.g., because the jammer does not transmit at all during a given coherence interval), then such a projection would inadvertently null the strongest UE. To thread the needle between these two cases---largely removing a strong jammer before the first gradient step, but not removing any legitimate UEs when a strong jammer is absent---we propose to start with a \emph{regularized} projection step: The algorithm starts with a projection onto the orthogonal complement of the eigenvector associated with the largest eigenvalue of \begin{align} \bY\herm{\bY} + \mathbf{\Gamma}, \label{eq:regularizer} \end{align} where $\mathbf{\Gamma}\in\opC^{B\times B}$ is a constant regularization matrix. The basic idea is that this regularization matrix is still overshadowed by very strong jammers, so that these are largely nulled within the preprocessing, while, in the presence of only a weak jammer (or no jammer), the regularization matrix has a sufficiently diverting impact on the eigenvectors to prevent the nulling of a legitimate UE. There are countless ways of choosing such a regularization matrix. (Note, however, that $\mathbf{\Gamma}$ should not be a multiple of the identity matrix $\bI_B$, since that would not affect the eigenvectors of \eqref{eq:regularizer}.) For simplicity, we set $\mathbf{\Gamma}$ to the all-zero matrix, except for the top left entry, which is set to $0.1BUK$. \subsection{Algorithm Complexity} We now have all the ingredients for MAED, which is summarized in \fref{alg:maed}. Its only input is the receive matrix~$\bY$, as it does not even require an estimate of the thermal noise variance $\No$. MAED is initialized with $\tilde\bS^{(0)}=[\bS_T, \mathbf{0}_{U\times D}]$ and $\tau^{(0)}=\tau=0.1$, and runs for a fixed number of $t_{\max}$~iterations.
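For concreteness, the per-iteration updates of MAED (gradient step, box prox, residual computation, power iteration, and projection update) can be sketched as follows. This is an illustrative sketch with random placeholder data: the dimensions are arbitrary, a fixed stepsize replaces the Barzilai-Borwein rule, and the preprocessing regularizer $\mathbf{\Gamma}$ is omitted.

```python
import numpy as np

# Illustrative sketch of the MAED iterations (random placeholder data,
# fixed stepsize, regularizer Gamma omitted; dimensions are arbitrary).
rng = np.random.default_rng(2)
B, U, T, D = 16, 4, 4, 8
K = T + D
Y = rng.standard_normal((B, K)) + 1j * rng.standard_normal((B, K))
S_T = np.eye(U, T)                               # placeholder pilot matrix
S = np.hstack([S_T, np.zeros((U, D))]).astype(complex)
p = np.linalg.eigh(Y @ Y.conj().T)[1][:, -1]     # p^(0): dominant eigenvector
tau, lam = 0.1, np.sqrt(1 / 2)                   # stepsize; QPSK box radius

for t in range(20):
    P = np.eye(B) - np.outer(p, p.conj())        # project onto span(p)^perp
    S_pinv = np.linalg.pinv(S)
    G = -(Y @ S_pinv).conj().T @ P @ Y @ (np.eye(K) - S_pinv @ S)  # gradient
    X = S - tau * G                              # gradient step
    S = X.copy()
    S[:, :T] = S_T                               # keep the pilots fixed
    S[:, T:] = (np.clip(X[:, T:].real, -lam, lam)
                + 1j * np.clip(X[:, T:].imag, -lam, lam))  # box prox
    E = Y @ (np.eye(K) - np.linalg.pinv(S) @ S)  # residual matrix E
    p = E @ E.conj().T @ p                       # single power iteration
    p = p / np.linalg.norm(p)                    # renormalize subspace estimate

print(S.shape)                                   # (4, 12)
```

With real data, `Y` would be the receive matrix, `S_T` the pilot matrix, and the final `S[:, T:]` the (relaxed) data estimate to be mapped onto the constellation.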
The complexity of MAED is dominated by the eigenvector calculation in the preprocessing step (which could be reduced by approximating it using the power method) as well as the gradient computation in line 5 of \fref{alg:maed}, which has a complexity of $O(3BUK+2U^2K+U^3)$. The overall complexity of MAED is therefore \mbox{$O(t_{\max}(3BUK+2U^2K+U^3))$.} Note, however, that MAED detects $D$ data vectors at once. Thus, the computational complexity per detected symbol is only \mbox{$O(t_{\max}(3BK+2KU+U^2)/D)$.} \section{Soft-Output Estimates with Deep Unfolding} MAED, which corresponds to the algorithm proposed in~\cite{marti2022smart} (with the newly added preprocessing step), already attains the goal of mitigating smart jammers, see \fref{sec:results}. However, its detection performance can be suboptimal, especially when higher-order transmit constellations such as 16-QAM are used. The culprit is the box prior of MAED, which does not fully exploit the discrete nature of the transmit constellation. In particular, the box prior is uninformative about the constellation symbols in the interior. To improve detection performance, especially in the cases where MAED performs suboptimally, we now provide a second algorithm for approximately solving the problem in \eqref{eq:obj1}. This second algorithm builds on MAED but replaces the proximal operator in \eqref{eq:proxg}, which enforces MAED's box prior, by an approximate posterior mean estimator (PME) based on the discrete symbol prior as in \cite{song2021soft}. Since the PME also enables meaningful soft-output estimates of the bits that underlie the transmitted data symbols, we refer to this second algorithm as soft-output MAED (SO-MAED). \begin{algorithm}[tp] \caption{MAED} \label{alg:maed} \begin{algorithmic}[1] \setstretch{1.0} \State {\bfseries input:} $\bY$ \State \text{initialize} $\tilde\bS^{(0)} \!=\! \left[\bS_T, \mathbf{0}_{U\!\times\! D} \right]\!, \tilde\bmp^{(0)} \!=\!
\bmv_1(\bY \herm{\bY} \!+ \mathbf{\Gamma} ), \tau^{(0)} \!= \tau$ \State $\tilde\bP^{(0)} = \bI_B - \tilde\bmp^{(0)}\tilde\bmp^{(0)}{}^\text{H}$ \For{$t=0$ {\bfseries to} $t_{\max}-1$} \State $\nabla f(\tilde\bS^{(t)}) = -\herm{\big(\bY\tilde\bS^{(t)}{}^\dagger\big)} \tilde\bP^{(t)}\bY(\bI_K - \tilde\bS^{(t)}{}^\dagger \tilde\bS^{(t)})$ \State $\tilde\bS^{(t+1)} = \proxg\big(\tilde\bS^{(t)} - \tau^{(t)}\nabla f(\tilde\bS^{(t)})\big)$ \State $\tilde\bE^{(t+1)} = \bY(\bI_K - \tilde\bS^{(t+1)}{}^\dagger \tilde\bS^{(t+1)})$ \State $\tilde\bmp^{(t+1)} = \tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H}\, \tilde\bmp^{(t)}/\|\tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H}\, \tilde\bmp^{(t)}\|_2$ \State $\tilde\bP^{(t+1)} = \bI_B - \tilde\bmp^{(t+1)}\tilde\bmp^{(t+1)}{}^\text{H}$ \State compute $\tau^{(t+1)}$ by following \cite[Sec.\,4.1]{goldstein16a} \EndFor \State \textbf{output:} $\tilde\bS^{(t_{\max})}_{[T+1:K]}$ \end{algorithmic} \end{algorithm} \subsection{Approximate Posterior-Mean Estimation} To replace the proximal operator following the gradient descent step in \eqref{eq:fbs2} with a more appropriate data symbol estimator which takes into account the discrete constellation~$\setS$, we model the per-iteration outputs of the gradient descent step \begin{align} \tilde\bX^{(t)} = \tilde\bS^{(t)} - \tau^{(t)}\nabla f(\tilde\bS^{(t)}) \end{align} as \begin{align} \tilde\bX^{(t)} &= \bS + \bZ^{(t)} = \big[ \bS_T, \bS_D \big] + \big[ \bZ_T^{(t)}, \bZ_D^{(t)} \big], \label{eq:symbol_noise} \end{align} i.e., as the true transmit symbol matrix $\bS$ corrupted by an additive error~$\bZ^{(t)}$. If the distribution of $\bZ^{(t)}$ were known, one could compute the posterior mean $\mathbb{E}[\bS\,|\,\tilde\bX^{(t)}]$ and use it as a constellation-aware replacement of the proximal step \eqref{eq:proxg}, \begin{align} \tilde\bS^{(t+1)} = \mathbb{E}\big[\bS\,|\,\tilde\bX^{(t)}\big]. 
\end{align} Unfortunately, the distribution of $\bZ^{(t)}$ is unknown in practice, but the~submatrix $\bS_T$ of $\bS$ (and hence its mean) is known at the receiver. To estimate the mean of the submatrix $\bS_D$, we assume that the entries of $\bZ_D^{(t)}$ are distributed independently~of $\bS$ as i.i.d.\ circularly-symmetric complex Gaussians with variance~$\nu^{(t)}$, \begin{align} \big[\bZ^{(t)}\big]_{u,k}\sim\setC\setN(0,\nu^{(t)}). \end{align} The variances $\{\nu^{(t)}\}_{t=0}^{t_{\max}-1}$ are treated as algorithm parameters and will be optimized using deep unfolding as detailed below. Based on this idealized model, we utilize a three-step procedure as in \cite{song2021soft} for computing symbol estimates. First, we use~\fref{eq:symbol_noise} to compute log-likelihood ratios (LLRs) for every transmitted bit. We then convert these LLRs to the probabilities of the respective bits being $1$. This step also provides the aforementioned soft-output estimates. Finally, we convert the bit probabilities back to symbol estimates by calculating the symbol mean. \mbox{The LLRs can be computed following \cite{collings2004low, jeon2021mismatched} as} \begin{align} L_{i,u,k}^{(t)} = \frac{\ell\big(\tilde X_{u,k}^{(t)}\big)\!}{\nu^{(t)}}, ~ i\!\in\![1\!:\!\log_2\!|\setS|],~ u\!\in\![1\!:\!U],~ k\!\in\![T\!+\!1\!:\!K], \label{eq:llrs} \end{align} where $\ell(\cdot)$ is specified in Table~\ref{tab:llr} (cf.~\fref{fig:constellations}). The LLR values are exact for QPSK and use the max-log approximation for 16-QAM~\cite{jeon2021mismatched}. The LLRs can then be converted to probabilities~via \begin{align} p_{i,u,k}^{(t)} = \frac12\left( 1 + \tanh{\left(\frac{L_{i,u,k}^{(t)}}{2}\right)} \right). \label{eq:bit_probabs} \end{align} Finally, the probabilities of \eqref{eq:bit_probabs} can be used to compute symbol estimates according to Table~\ref{tab:symbol_estimates}. 
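For QPSK, the three-step procedure described above (LLRs per Table~\ref{tab:llr}, bit probabilities via \eqref{eq:bit_probabs}, and the symbol mean per Table~\ref{tab:symbol_estimates}) reduces to a few scalar operations. The following sketch uses an illustrative observation $x$ and error variance $\nu$:

```python
import numpy as np

# Minimal sketch of the QPSK posterior-mean approximation: LLRs, bit
# probabilities via tanh, and the symbol mean. x and nu are illustrative.
lam = np.sqrt(1 / 2)                 # QPSK scaling

def pma_qpsk(x, nu):
    L1 = 4 * lam * x.real / nu       # LLR of bit 1
    L2 = 4 * lam * x.imag / nu       # LLR of bit 2
    p1 = 0.5 * (1 + np.tanh(L1 / 2)) # probability that bit 1 equals 1
    p2 = 0.5 * (1 + np.tanh(L2 / 2))
    return lam * (2 * p1 - 1) + 1j * lam * (2 * p2 - 1)  # posterior symbol mean

x = 0.9 * lam - 1.1j * lam           # noisy observation near the point lam - 1j*lam
print(pma_qpsk(x, nu=1.0))           # soft estimate, pulled toward that point
print(pma_qpsk(x, nu=0.01))          # ~0.7071-0.7071j: low variance -> hard decision
```

As the assumed error variance $\nu$ shrinks, the estimator transitions smoothly from a soft (near-linear) to a hard (slicer-like) decision, which is exactly the behavior the unfolded per-iteration variances exploit.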
To summarize, SO-MAED replaces MAED's proximal operator in \eqref{eq:proxg} with the symbol estimator that consists of \eqref{eq:llrs}, \eqref{eq:bit_probabs}, and Table~\ref{tab:symbol_estimates}. We refer to this symbol estimation as posterior-mean approximation (PMA) and denote it as \begin{align} \tilde\bS^{(t+1)} = \pma(\tilde\bX^{(t)},\nu^{(t)}), \end{align} where the subscript $\setS$ makes explicit the dependence of the PMA on the symbol constellation. Since the PMA involves only scalar computations, its complexity is negligible compared to the matrix-vector and matrix-matrix operations of SO-MAED. The complexity order of SO-MAED is therefore identical to that of MAED, namely \mbox{$O(t_{\max}(3BUK+2U^2K+U^3))$.} \subsection{Deep Unfolding of SO-MAED} The procedure outlined in the previous subsection requires the variances $\{\nu^{(t)}\}_{t=0}^{t_{\max}-1}$ of the per-iteration estimation errors~$\bZ^{(t)}$, which are generally unknown. We treat these variances as parameters of SO-MAED and optimize them using deep unfolding \cite{song2021soft, hershey2014deep, balatsoukas2019deep, goutay2020deep, monga2021algorithm}. Deep unfolding is an emerging paradigm under which iterative algorithms are unfolded into artificial neural networks with one layer per iteration, so that the algorithm parameters can be regarded as trainable weights of that network. These weights are then learned from training data with standard deep learning tools~\cite{Baydin2018automatic, tensorflow2015whitepaper}. To improve stability of learning, we use the error precisions $\{\rho^{(t)}\}_{t=0}^{t_{\max}-1}$ instead of the variances $\{\nu^{(t)}\}_{t=0}^{t_{\max}-1}$ as parameters of the unfolded network, with $\rho^{(t)}=1/\nu^{(t)}$. In addition, we also regard the gradient step sizes $\{\tau^{(t)}\}_{t=0}^{t_{\max}-1}$ as trainable weights (instead of computing them according to the Barzilai-Borwein method). 
Furthermore, we add a momentum term with per-iteration weights $\{\lambda^{(t)}\}_{t=0}^{t_{\max}-1}$ to our gradient descent procedure. Finally, inspired by the Bussgang decomposition \cite{bussgang52a, minkoff85a}, we add per-iteration scale factors $\{\alpha^{(t)}\}_{t=0}^{t_{\max}-1}$ to the output of \eqref{eq:symbol_noise}, with the goal of accommodating uncorrelatedness (if not independence) between $\bZ_D^{(t)}$ and $\bS_D$ in \eqref{eq:symbol_noise}. The final algorithm is summarized in \fref{alg:so_maed}. We implement the unfolded algorithm in TensorFlow \cite{tensorflow2015whitepaper} and train it using, as the cost function, the empirical binary cross-entropy (BCE) between the transmitted bits and the estimated bit probabilities \eqref{eq:bit_probabs} from the last iteration, which form the output of our network. The loss function is given as \begin{align} \sum_{\text{sample}\in\setD} \beta_{\text{sample}} \!\left( \sum_{i=1}^{\log_2|\setS|} \sum_{u=1}^U \sum_{k=T+1}^K \text{BCE}(b_{i,u,k}, p_{i,u,k}^{t_{\max}}) \right)\!\!,\! \end{align} where \begin{align} \text{BCE}(b,p(b)) = -\big(b \log_2(p(b)) + (1-b)\log_2(1-p(b))\big), \end{align} and where $\beta_{\text{sample}}$ are the weights given to the different samples in the training set $\setD$. We learn only a single set of weights per system dimensions $\{U,B,K\}$, which is used for all signal-to-noise ratios (SNRs) and, most importantly, all jamming attacks (since a receiver does not typically know in advance which type of jamming attack it is facing). For this reason, we train using samples from different SNRs and different jamming attacks. We also have to avoid overfitting to a specific type of jamming attack. If our evaluation in \fref{sec:results} were to feature only the exact same types of jammers that were used for training, this would raise questions about the ability of SO-MAED to generalize to jamming attacks which differ from those explicitly included in the training set.
However, the principles underlying the SO-MAED algorithm are essentially invariant with respect to the type of jamming attack. For this reason, we train on only a single type of jammer, namely pilot jammers, cf. \fref{sec:setup} (which we have empirically recognized to be the most difficult to mitigate), while evaluating the trained algorithm on many other jammer types besides pilot jammers, cf. \fref{sec:results}. The attacks used for training also comprise different jammer receive strengths, namely $\{-\infty\,\text{dB}, 0\,\text{dB}, 10\,\text{dB}, 20\,\text{dB}, 40\,\text{dB}, 80\,\text{dB}\}$ compared to the average UE. \begin{figure}[tp] \centering \subfigure[QPSK\hspace{1cm}]{ \includegraphics[height=3.4cm]{figures/constellation_qpsk} \label{fig:constellations:qpsk} } \subfigure[16-QAM\hspace{1cm}]{ \includegraphics[height=3.4cm]{figures/constellation_16qam} \label{fig:constellations:16qam} } \caption{Transmit constellations $\setS$ (including the Gray mapping used) and their convex hulls $\setC$. $\lambda=\sqrt{\sfrac{1}{2}}$ for QPSK and $\lambda=\sqrt{\sfrac{9}{10}}$ for 16-QAM. } \label{fig:constellations} \end{figure} \begin{table}[tp!] \centering \caption{LLR computation according to \cite[Tbl. 1]{collings2004low}, \cite{jeon2021mismatched}} \vspace{-5mm} \setstretch{1.1} \begin{tabular}[t]{@{}lcl@{}} \toprule &\!\!Bit $i$\!\!& $\ell(x)$ \\ \midrule \multirow{2}{*}{\!QPSK\!}& $1$ & $4\lambda\Re\{x\}$ \\ & $2$ & $4\lambda\Im\{x\}$ \vspace{1mm} \\ \multirow{4}{*}{\!16-QAM\!}& $1$ & $\frac{2\lambda}{3} \left( 4\Re\{x\} + \left|\Re\{x\} \!-\! \frac{2\lambda}{3} \right| - \left|\Re\{x\} \!+\! \frac{2\lambda}{3} \right| \right)\!$ \\ & $2$ & $\frac{4\lambda}{3} \left( \frac{2\lambda}{3} - \left|\Re\{x\} \right| \right) $ \\ & $3$ & $\frac{2\lambda}{3} \left( 4\Im\{x\} + \left|\Im\{x\} \!-\! \frac{2\lambda}{3} \right| - \left|\Im\{x\} \!+\!\frac{2\lambda}{3} \right| \right)\!
$ \\ & $4$ & $\frac{4\lambda}{3} \left( \frac{2\lambda}{3} - \left|\Im\{x\} \right| \right) $ \\ \bottomrule \end{tabular} \vspace{5mm} \label{tab:llr} \centering \caption{Mapping the probabilities in \eqref{eq:bit_probabs} to symbol estimates \cite{jeon2021mismatched, tomasoni2006low}} \vspace{-5mm} \setstretch{1.1} \begin{tabular}[t]{@{}lcc@{}} \toprule & $\Re\{\hat{s}\}$\!& $\Im\{\hat{s}\}$ \\ \midrule QPSK & $\lambda(2p_{1}-1)$ & $\lambda(2p_{2}-1)$ \\ 16-QAM& $\lambda(2p_1-1)(3-2p_2)$ & $\lambda(2p_3-1)(3-2p_4)$ \\ \bottomrule \end{tabular} \vspace{-2.5mm} \label{tab:symbol_estimates} \end{table} Also with regard to the jammer receive strength, the evaluation in \fref{sec:results} will consider jammers with strengths different from those used for training. The sample weights $\beta_{\text{sample}}$ are used to prevent certain training samples (e.g., those at low SNR with strong jammers) from dominating the learning process by drowning out the loss contribution from training samples with inherently lower BCE. For this, we fix a ``baseline performance'' and select the weight of a training sample as the inverse of this sample's BCE loss according to the baseline. The baseline is set by an untrained version of SO-MAED with reasonably initialized weights (its performance in general already exceeds that of~MAED). For training, we use the Adam optimizer \cite{kingma2014adam} from Keras with default values. The batch size starts at one sample, but is increased first to five and then to ten samples whenever the training loss does not improve for two consecutive epochs. \begin{algorithm}[tp] \caption{SO-MAED} \label{alg:so_maed} \begin{algorithmic}[1] \setstretch{1.0} \State {\bfseries input:} $\bY, \{\tau^{(t)}, \alpha^{(t)}, \lambda^{(t)}, \rho^{(t)} \}_{t=0}^{t_{\max} -1}$ \State \text{init} $\tilde\bS^{(0)} \!=\! \left[\bS_T, \mathbf{0}_{U\!\times\! D} \right], \tilde\bmp^{(0)} \!=\!
\bmv_1(\bY \herm{\bY} \!+ \mathbf{\Gamma} ), \mathbf{\Delta}^{(-1)} = \mathbf{0}$ \State $\tilde\bP^{(0)} = \bI_B - \tilde\bmp^{(0)}\tilde\bmp^{(0)}{}^\text{H}$ \For{$t=0$ {\bfseries to} $t_{\max}-1$} \State $\nabla f(\tilde\bS^{(t)}) = -\herm{\big(\bY\tilde\bS^{(t)}{}^\dagger\big)} \tilde\bP^{(t)}\bY(\bI_K - \tilde\bS^{(t)}{}^\dagger \tilde\bS^{(t)})$ \State $\mathbf{\Delta}^{(t)} = - \tau^{(t)} \nabla f(\tilde\bS^{(t)}) + \lambda^{(t)} \mathbf{\Delta}^{(t-1)}$ \State $\tilde\bX^{(t)} = \tilde\bS^{(t)} + \mathbf{\Delta}^{(t)} $ \State $\tilde\bS^{(t+1)} = \pma\big( \alpha^{(t)}\tilde\bX^{(t)}, 1/\rho^{(t)} \big)$ \State $\tilde\bE^{(t+1)} = \bY(\bI_K - \tilde\bS^{(t+1)}{}^\dagger \tilde\bS^{(t+1)})$ \State $\tilde\bmp^{(t+1)} = \tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H}\, \tilde\bmp^{(t)}/\|\tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H}\, \tilde\bmp^{(t)}\|_2$ \State $\tilde\bP^{(t+1)} = \bI_B - \tilde\bmp^{(t+1)}\tilde\bmp^{(t+1)}{}^\text{H}$ \EndFor \State \textbf{output:} $\tilde\bS^{(t_{\max})}_{[T+1:K]}$ \end{algorithmic} \end{algorithm} \section{Simulation Results} \label{sec:results} \subsection{Simulation Setup} \label{sec:setup} We simulate a massive MU-MIMO system with $B=128$~BS antennas, $U=32$ single-antenna UEs, and one single-antenna jammer. The UEs transmit for $K=160$ time slots, where the first $T=32$ slots are used for orthogonal pilots $\bS_T$ in the form of a $32\times32$ Hadamard matrix with unit symbol energy. The remaining $D=128$ slots are used to transmit QPSK or 16-QAM payload data. Unless noted otherwise, the channels are modelled as i.i.d. Rayleigh fading. We define the average receive signal-to-noise ratio (SNR) as \begin{align} \textit{SNR} \define \frac{\Ex{\bS}{\|\bH\bS\|_F^2}}{\Ex{\bN}{\|\bN\|_F^2}}. 
\end{align} We consider four different jammer types: (J1) barrage jammers that transmit i.i.d.\ jamming symbols during the entire coherence interval, (J2) pilot jammers that transmit i.i.d.\ jamming symbols during the pilot phase but do not jam the data phase, (J3) data jammers that transmit i.i.d.\ jamming symbols during the data phase but do not jam the pilot phase, and (J4) sparse jammers that transmit i.i.d.\ jamming symbols during some fraction $\alpha$ of randomly selected bursts of unit length (i.e., one time slot). The jamming symbols are either circularly symmetric complex Gaussian or drawn uniformly from the transmit constellation (i.e., QPSK or 16-QAM). They are also independent of the UE transmit symbols $\bS$, unless stated otherwise. We quantify the strength of the jammer's interference relative to the strength of the average UE, either as the ratio between total receive~\textit{energy} \begin{align} \rE \define \frac{\Ex{\bsj}{\|\Hj\bsj\|_2^2}}{\frac1U\Ex{\bS}{\|\bH\bS\|_2^2}}, \end{align} or as the ratio between receive \textit{power during those phases that the jammer is jamming} \begin{align} \rP \triangleq \frac{\rE}{\gamma}, \end{align} where $\gamma$ is the jammer's duty cycle and equals $1,\frac{T}{K},\frac{D}{K}$, or~$\alpha$ for barrage, pilot, data, or sparse jammers, respectively. This allows us to either precisely control the jammer energy (for jammers which are assumed to be essentially energy-limited) or the transmit intensity (for jammers which may want to pass themselves off as a legitimate UE, for instance). \subsection{Performance Baselines} \label{sec:baseline} We set the number of iterations for MAED and \mbox{SO-MAED} to $t_{\max}=20$ and emphasize again that we use only two different sets of weights for \mbox{SO-MAED}: one for QPSK transmission and one for 16-QAM transmission. Neither \mbox{SO-MAED} nor MAED is adapted to the different jammer scenarios. 
We compare our algorithms to the following baseline methods: The first baseline is the ``LMMSE'' method from \fref{sec:example}, which does not mitigate the jammer and separately performs least-squares (LS) channel estimation and LMMSE data detection. The second baseline is the ``geniePOS'' method from \fref{sec:example}, which projects the receive signals onto the orthogonal complement of the true jammer subspace and then separately performs LS channel estimation and LMMSE data detection in this projected space. The last baseline, \mbox{``JL-SIMO,''} serves as a lower bound for attainable error-rate performance. This method operates in a jammerless but otherwise equivalent system and implements (with perfect channel knowledge) the single-input multiple-output (SIMO) bound corresponding to the idealized case in which no inter-user interference is present. \begin{figure*}[tp] \!\!\!\!\! \subfigure[strong barrage jammer (J1)]{ \includegraphics[height=4cm]{figures/firm_final_weights/rayleigh_qpsk/128x32_QPSK_I1_D128_barrage_gaussian_rho30_NJE1_T20_1000Trials} \label{fig:qpsk:strong:static} }\!\! \subfigure[strong pilot jammer (J2)]{ \includegraphics[height=4cm]{figures/firm_final_weights/rayleigh_qpsk/128x32_QPSK_I1_D128_pilot_gaussian_rho30_NJE1_T20_1000Trials} \label{fig:qpsk:strong:pilot} }\!\! \subfigure[strong data jammer (J3)]{ \includegraphics[height=4cm]{figures/firm_final_weights/rayleigh_qpsk/128x32_QPSK_I1_D128_data_gaussian_rho30_NJE1_T20_1000Trials} \label{fig:qpsk:strong:data} }\!\! \subfigure[strong sparse jammer (J4)]{ \includegraphics[height=4cm]{figures/firm_final_weights/rayleigh_qpsk/128x32_QPSK_I1_D128_sparse_gaussian_rho30_NJE1_T20_1000Trials} \label{fig:qpsk:strong:burst} }\!\! 
\caption{Uncoded bit error-rate (BER) for \emph{QPSK} transmission in the presence of a \emph{strong} ($\rE=30$\,dB) jammer which transmits Gaussian symbols (a) during the entire coherence interval, (b) during the pilot phase only, (c) during the data phase only, or (d) in random unit-symbol bursts with a duty cycle of $\alpha=20\%$. }
\label{fig:qpsk_strong_jammers}
\end{figure*}

\begin{figure*}[tp]
\vspace{-1mm}
\!\!\!\!\!
\subfigure[strong barrage jammer (J1)]{
\includegraphics[height=4cm]{figures/firm_final_weights/30dB_16qam/128x32_16QAM_I1_D128_barrage_gaussian_rho30_NJE1_T20_1000Trials}
\label{fig:strong:static} }\!\!
\subfigure[strong pilot jammer (J2)]{
\includegraphics[height=4cm]{figures/firm_final_weights/30dB_16qam/128x32_16QAM_I1_D128_pilot_gaussian_rho30_NJE1_T20_1000Trials}
\label{fig:strong:pilot} }\!\!
\subfigure[strong data jammer (J3)]{
\includegraphics[height=4cm]{figures/firm_final_weights/30dB_16qam/128x32_16QAM_I1_D128_data_gaussian_rho30_NJE1_T20_1000Trials}
\label{fig:strong:data} }\!\!
\subfigure[strong sparse jammer (J4)]{
\includegraphics[height=4cm]{figures/firm_final_weights/30dB_16qam/128x32_16QAM_I1_D128_sparse_gaussian_rho30_NJE1_T20_1000Trials}
\label{fig:strong:burst} }\!\!
\caption{Uncoded bit error-rate (BER) for \emph{16-QAM} transmission in the presence of a \emph{strong} ($\rE\!=\!30$\,dB) jammer which transmits Gaussian symbols~(a)~during the entire coherence interval, (b) during the pilot phase only, (c) during the data phase only, or (d) in random unit-symbol bursts with a duty cycle of $\alpha=20\%$. }
\label{fig:strong_jammers}
\end{figure*}

\begin{figure*}[h!]
\vspace{-1mm}
\!\!\!\!\!
\subfigure[weak barrage jammer (J1)]{
\includegraphics[height=4cm]{figures/firm_final_weights/0dB_16qam/128x32_16QAM_I1_D128_barrage_constellation_rho0_NJE0_T20_1000Trials}
\label{fig:weak:static} }\!\!
\subfigure[weak pilot jammer (J2)]{
\includegraphics[height=4cm]{figures/firm_final_weights/0dB_16qam/128x32_16QAM_I1_D128_pilot_constellation_rho0_NJE0_T20_1000Trials}
\label{fig:weak:pilot} }\!\!
\subfigure[weak data jammer (J3)]{
\includegraphics[height=4cm]{figures/firm_final_weights/0dB_16qam/128x32_16QAM_I1_D128_data_constellation_rho0_NJE0_T20_1000Trials}
\label{fig:weak:data} }\!\!
\subfigure[weak sparse jammer (J4)]{
\includegraphics[height=4cm]{figures/firm_final_weights/0dB_16qam/128x32_16QAM_I1_D128_sparse_constellation_rho0_NJE0_T20_1000Trials}
\label{fig:weak:burst} }\!\!
\caption{Uncoded bit error-rate (BER) for \emph{16-QAM} transmission in the presence of a \emph{weak} ($\rP=0$\,dB) jammer which transmits 16-QAM symbols (a) during the entire coherence interval, (b) during the pilot phase only, (c) during the data phase only, or (d) in random unit-symbol bursts with a duty cycle of $\alpha=20\%$. }
\label{fig:weak_jammers}
\end{figure*}

\begin{figure*}[tp]
\!\!\!\!\!
\subfigure[barrage jammers (J1)]{
\includegraphics[height=4cm]{figures/firm_final_weights/many_barrage_jammers/128x32_16QAM_I1_D128_barrage_gaussian_NJE0_T20_1000Trials}
\label{fig:many:static} }\!\!
\subfigure[pilot jammers (J2)]{
\includegraphics[height=4cm]{figures/firm_final_weights/many_pilot_jammers/128x32_16QAM_I1_D128_pilot_gaussian_NJE0_T20_1000Trials}
\label{fig:many:pilot} }\!\!
\subfigure[data jammers (J3)]{
\includegraphics[height=4cm]{figures/firm_final_weights/many_data_jammers/128x32_16QAM_I1_D128_data_gaussian_NJE0_T20_1000Trials}
\label{fig:many:data} }\!\!
\subfigure[sparse jammers (J4)]{
\includegraphics[height=4cm]{figures/firm_final_weights/many_sparse_jammers/128x32_16QAM_I1_D128_sparse_gaussian_NJE0_T20_1000Trials}
\label{fig:many:burst} }\!\!
\caption{Uncoded bit error-rate (BER) performance curves of SO-MAED in the presence of jammers with different receive powers compared to the average UE, $\rP\in\{-20\,\text{dB}, -10\,\text{dB}, 0\,\text{dB}, 10\,\text{dB}, 20\,\text{dB}, 40\,\text{dB}, 80\,\text{dB}\}$. The subfigures correspond to the different jammer types (J1)\,-\,(J4) and show one curve per~jammer power (plotted with 25\% opacity to depict the degree of overlap between curves). Curves that level off into an error floor are labeled with their jammer power, e.g., in \fref{fig:many:static}, the barrage jammer with receive power $\rP=-20$\,dB has an error floor while all other barrage jammers have virtually identical BER curves. } \label{fig:many_jammers} \end{figure*} \subsection{Mitigation of Strong Gaussian Jammers} We first investigate the ability of MAED and SO-MAED to mitigate strong jamming attacks. For this, we simulate Gaussian jammers with \mbox{$\rE=30$\,dB} of all four types introduced in Section~\ref{sec:setup} and evaluate the performance of our algorithms compared to the baselines of Section~\ref{sec:baseline} for QPSK transmission (\fref{fig:qpsk_strong_jammers}) as well as for 16-QAM transmission (\fref{fig:strong_jammers}). We note at this point that the performances of geniePOS and JL-SIMO are independent of the considered jammer type: geniePOS uses the genie-provided jammer channel to null the jammer perfectly, regardless of its transmit sequence, and JL-SIMO operates on a jammer-free system. Unsurprisingly, the jammer-oblivious LMMSE baseline performs significantly worse than the jammer-robust geniePOS baseline under all attack scenarios, with the data jamming attack turning out to be the most harmful and the pilot jamming attack the least harmful. 
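The jammer-nulling projection underlying geniePOS can be illustrated in a few lines of NumPy (a toy sketch with reduced dimensions and variable names of our own choosing, not the actual simulator code):

```python
import numpy as np

rng = np.random.default_rng(0)
B, U, K = 16, 4, 20  # reduced dimensions for illustration

# random channels, UE symbols, and jamming sequence (toy stand-ins)
H = rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))
j = rng.standard_normal((B, 1)) + 1j * rng.standard_normal((B, 1))
S = rng.standard_normal((U, K)) + 1j * rng.standard_normal((U, K))
w = rng.standard_normal((1, K)) + 1j * rng.standard_normal((1, K))

Y = H @ S + j @ w  # receive signal with rank-one jamming interference

# project onto the orthogonal complement of the (genie-provided) jammer subspace
p = j / np.linalg.norm(j)
P = np.eye(B) - p @ p.conj().T

assert np.allclose(P @ Y, P @ H @ S)        # jammer contribution is nulled exactly
assert np.isclose(np.trace(P).real, B - 1)  # at the cost of one receive dimension
```

Channel estimation and data detection then proceed on the projected quantities, which is why geniePOS sacrifices one of the $B$ receive dimensions even in the jammerless case.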
Both MAED and SO-MAED succeed in mitigating all four jamming attacks with high effectiveness, even outperforming the genie-assisted geniePOS method by a considerable margin.\footnote{The potential for MAED and SO-MAED to outperform geniePOS is a consequence of the superiority of joint channel estimation and data detection over separating channel estimation from data detection.} Their efficacy is further reflected in the fact that SO-MAED and MAED approach the performance of the jammerless and MU interference-free JL-SIMO lower bound to within less than $2$\,dB and $3$\,dB at $0.1$\%~BER, respectively, in all considered~scenarios. The behavior is largely similar when 16-QAM instead of QPSK is used as the transmit constellation~(\fref{fig:strong_jammers}). However, due to the decreased informativeness of the box prior for such higher-order constellations, MAED now performs closer to geniePOS, while SO-MAED still performs within $2$\,dB (at $0.1$\% BER) of the JL-SIMO lower bound. The increased performance gap between them notwithstanding, both MAED and SO-MAED are able to effectively mitigate all four attack types.

\subsection{Mitigation of Weak Constellation Jammers} \label{sec:results:weak}

We now turn to the analysis of more restrained jamming attacks in which the jammer transmits constellation symbols with relative power $\rP=0$\,dB during its on-phase (to pass itself off as just another UE, for instance \cite{vinogradova16a}). Simulation results for 16-QAM transmission under all four types of jamming attacks are shown in~\fref{fig:weak_jammers}. Because of the weaker jamming attacks, the jammer-oblivious LMMSE baseline now performs closer to the jammer-resistant geniePOS baseline than it does in \fref{fig:strong_jammers}. MAED again mitigates all attack types rather successfully, outperforming geniePOS in the low-SNR regime but slightly leveling off at high SNR.
Interestingly, MAED shows worse performance under these weak jamming attacks than under the strong jamming attacks of \fref{fig:strong_jammers}. The reason is the following: MAED searches for the jamming subspace by looking for the dominant dimension of the iterative residual error $\tilde\bE^{(t)}$, see~\fref{eq:rayleigh}. If the received jamming energy is small compared to the received signal energy, then it becomes hard to distinguish the residual errors caused by the jamming signal from those caused by errors in estimating the channel and data matrices $\tilde\bH_\bP^{(t)}$ and $\tilde\bS_D^{(t)}$. In contrast, due to its superior signal prior, the corresponding performance loss of SO-MAED is so small as to be virtually unnoticeable. Thus, SO-MAED outperforms MAED by a large margin and still approaches the JL-SIMO bound to within less than $2$\,dB at a BER of $0.1$\%.\footnote{We mention that MAED does not suffer such a performance loss under weak jamming attacks when the transmit constellation is QPSK, since in that case the box signal prior of MAED is sufficiently accurate, see also \cite{marti2022smart}.}

\subsection{How Versatile is SO-MAED Really?}

In the remainder of our evaluation, we focus mostly on \mbox{SO-MAED}, since it is clearly the better of the two proposed algorithms. To show that our approach indeed succeeds in mitigating arbitrary jamming attacks without the need for fine-tuning of the algorithm or its parameters, \fref{fig:many_jammers} depicts performance results for a series of jamming attacks spanning a dynamic range from $\rP=-20$\,dB to $\rP=80$\,dB. Specifically, \fref{fig:many_jammers} shows results for all four jammer types, where every subfigure plots BER curves for jamming attacks with $\rP\in\{-20\,\text{dB}, -10\,\text{dB}, 0\,\text{dB}, 10\,\text{dB}, 20\,\text{dB}, 40\,\text{dB}, 80\,\text{dB}\}$.
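To make the subspace-tracking mechanism discussed above concrete: the per-iteration update in Algorithm~2 is a power iteration on $\tilde\bE\tilde\bE^\text{H}$, which recovers the jammer direction whenever the jamming energy dominates the residual. A toy NumPy sketch (dimensions, noise level, and iteration count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
B, K = 16, 40

# residual matrix dominated by a rank-one jammer component plus weak noise
p_true = rng.standard_normal((B, 1)) + 1j * rng.standard_normal((B, 1))
p_true /= np.linalg.norm(p_true)
E = 10 * p_true @ (rng.standard_normal((1, K)) + 1j * rng.standard_normal((1, K)))
E += 0.1 * (rng.standard_normal((B, K)) + 1j * rng.standard_normal((B, K)))

# power iteration on E E^H, mirroring the (SO-)MAED update of p~
p = rng.standard_normal((B, 1)) + 1j * rng.standard_normal((B, 1))
for _ in range(20):
    p = E @ (E.conj().T @ p)
    p /= np.linalg.norm(p)

# p matches the dominant left singular vector up to a unit-modulus phase
u = np.linalg.svd(E)[0][:, :1]
assert abs((u.conj().T @ p).item()) > 0.99
```

When the jamming energy is instead comparable to the estimation errors in the residual, the dominant direction of $\tilde\bE\tilde\bE^\text{H}$ no longer singles out the jammer, which is the failure mode described above for MAED under weak attacks.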
The purpose of these plots is to illustrate that, apart from jamming attacks where the jammer is significantly weaker than the average UE\footnote{A jammer that is much weaker than the average UE resembles a non-transmitting, and thus eclipsed, jammer; see Sections \ref{sec:theory} and \ref{sec:results:eclipsed}.}, the curves are virtually indistinguishable, meaning that the performance of SO-MAED is virtually independent of the specific type of jamming attack that it is facing. \subsection{Eclipsed Jammers} \label{sec:results:eclipsed} \begin{figure}[tp] \centering \!\!\!\!\! \subfigure[no jammer]{ \includegraphics[height=3.95cm]{figures/firm_final_weights/eclipsed/128x32_16QAM_I1_D128_all-zero_eclipsed_gaussian_rho0_NJE0_T20_1000Trials} \label{fig:eclipsed:no_jam} }\!\!\! \subfigure[weak UE-impersonating jammer]{ \includegraphics[height=3.95cm]{figures/firm_final_weights/eclipsed/128x32_16QAM_I1_D128_pilot_eclipsed_gaussian_rho0_NJE0_T20_1000Trials} \label{fig:eclipsed:weak_impers} }\!\! \\ \!\!\!\!\! \subfigure[strong UE-impersonating jammer]{ \includegraphics[height=3.95cm]{figures/firm_final_weights/eclipsed/128x32_16QAM_I1_D128_pilot_eclipsed_gaussian_rho30_NJE0_T20_1000Trials} \label{fig:eclipsed:strong_impers} }\!\!\! \subfigure[data-dependent jammer]{ \includegraphics[height=3.95cm]{figures/firm_final_weights/eclipsed/128x32_16QAM_I1_D128_row-difference_eclipsed_gaussian_rho30_NJE0_T20_1000Trials} \label{fig:eclipsed:row_eclipsed} }\!\! \caption{Uncoded bit error-rate of SO-MAED for different types of eclipsed jammers: (a) no jammer, (b) $\rP=0$\,dB jammer impersonating the $j$th UE by transmitting its pilot sequence ($\text{UE}_j$ denotes the BER of the impersonated UE, and $\overline{\text{UE}}_j$ the BER among all other UEs), (c) $\rP=30$\,dB jammer impersonating the $j$th UE by transmitting its pilot sequence, and (d) jammer causes eclipsing by transmitting a jamming sequence that depends on the UE transmit matrix~$\bS$. 
Dashed lines represent the BER of the impersonated UE, transparent lines represent the BER among the UEs that are not impersonated by the jammer. } \label{fig:eclipsed} \vspace{-2mm} \end{figure} Up to this point, the jamming signal $\bmw$ has always been generated independently from the UE transmit matrix $\bS$. The strong performance results of both MAED and SO-MAED have supported the claim in Remark~\ref{rem:rare} that, in this case, eclipsing is the (rare) exception, not the norm. We now turn to an empirical analysis of how SO-MAED behaves when eclipsing \emph{does} occur (\fref{fig:eclipsed}). To this end, we consider scenarios in which the jammer is eclipsed because there is no jamming activity (\fref{fig:eclipsed:no_jam}), because the jammer transmits a UE's pilot sequence (\fref{fig:eclipsed:weak_impers}, \fref{fig:eclipsed:strong_impers}), or because the jamming sequence~$\bmw$ depends on the transmit matrix $\bS$ (which in reality would be unknown to the jammer) in a way that causes eclipsing (\fref{fig:eclipsed:row_eclipsed}). In the case of no jammer (\fref{fig:eclipsed:no_jam}), or no jamming activity within a coherence interval, we see that SO-MAED still reliably detects the transmit data. However, it now suffers from an error floor (albeit significantly below $0.1$\% BER). The reason for this error floor is that, in the absence of jamming energy to guide the choice of the nulled direction $\tilde\bmp$, there is the temptation to instead ``cover up'' detection errors (similar to the phenomenon discussed in \fref{sec:results:weak}). However, the low level of the error floor shows that this potential pitfall does not cause a systematic breakdown of SO-MAED. We emphasize also that SO-MAED does not simply null the strongest UE. Such (degenerate) behavior would only occur if one UE were \emph{far} stronger than the others. With any reasonable power control scheme, UE nulling is not an issue.\footnote{This is exemplified by our experiments with i.i.d. 
Rayleigh fading channels, which also exhibit minor imbalances in receive power between different UEs. Cf. also our results in \fref{sec:beyond}, where we use $\pm1.5$\,dB power control.} In the case of a jammer that impersonates the $j$th UE by transmitting its pilot sequence in the training phase and constellation symbols in the data phase, with the same power ($\rP=0$\,dB) as the average~UE, SO-MAED indeed suffers a performance breakdown (\fref{fig:eclipsed:weak_impers}). However, closer analysis shows that this error floor is caused solely by errors in detecting the symbols of the impersonated UE. This is not surprising: The jammer is statistically indistinguishable from the $j$th UE, so that it is impossible to reliably separate the UE transmit symbols from the fake jammer transmit symbols. In this regard, we refer again to the information-theoretic discussion of \mbox{\cite[Sec. V]{lapidoth1998reliable}}. Such impersonation attacks could be forestalled by using encrypted pilots~\cite{basciftci2015securing}. If the jammer transmits the $j$th UE's pilot sequence and constellation symbols, but with much more power ($\rP=30$\,dB), then the iterative detection procedure of \mbox{SO-MAED} will separate the jammer subspace from the $j$th UE's subspace (\fref{fig:eclipsed:strong_impers}), since, being so much stronger than any UE, the jammer subspace will dominate the residual matrix $\tilde\bE^{(t)}$ in~\fref{eq:rayleigh}. Finally, \fref{fig:eclipsed:row_eclipsed} shows results for a case where the jammer knows $\bS$ and chooses~$\tilde\bS_D$ to differ from $\bS_D$ in a single row (with valid constellation symbols in the differing row), so that $\text{rank}(\bS_D-\tilde\bS_D)=1$. It first draws $\bmw_T\sim\setC\setN(\mathbf{0},\bI_T)$ and then sets $\tp{\bsj_D} = \tp{\bsj_T}\pinv{\bS_T}\tilde{\bS}_D$ to cause eclipsing. The jammer strength is $\rP=30$\,dB. The results show an error floor at roughly 0.2\% BER caused by the presence of an alternative spurious solution.
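The rank-one nature of this data-dependent eclipsing construction is easy to verify numerically: with $\tp{\bsj_D} = \tp{\bsj_T}\pinv{\bS_T}\tilde\bS_D$, the jamming term in \eqref{eq:eclipsing_equation} cancels, and the remaining term has rank one, so a single nulled direction can absorb it. A toy NumPy sketch (reduced dimensions; the QPSK-like symbols and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
B, U, T, D = 16, 4, 4, 8  # reduced dimensions; square unitary pilots (T = U)

H = rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))

# unitary pilots (QR of a complex Gaussian matrix) and QPSK-like data
S_T = np.linalg.qr(rng.standard_normal((U, U)) + 1j * rng.standard_normal((U, U)))[0]
S_D = np.sign(rng.standard_normal((U, D))) + 1j * np.sign(rng.standard_normal((U, D)))

# spurious data matrix differing from S_D in a single row -> rank-one gap
S_D_alt = S_D.copy()
S_D_alt[0, :] = np.sign(rng.standard_normal(D)) + 1j * np.sign(rng.standard_normal(D))

# the jammer picks w_D so that the jamming term in the eclipsing equation cancels
w_T = rng.standard_normal((1, T)) + 1j * rng.standard_normal((1, T))
w_D = w_T @ np.linalg.pinv(S_T) @ S_D_alt

M = H @ (S_D - S_D_alt) + H[:, :1] * 0 + \
    (rng.standard_normal((B, 1)) + 1j * rng.standard_normal((B, 1))) @ \
    (w_D - w_T @ np.linalg.pinv(S_T) @ S_D_alt)

assert np.linalg.matrix_rank(M) <= 1  # a single nulled direction can absorb M
```

The second summand vanishes identically by construction, so $M$ inherits the rank of $\bS_D-\tilde\bS_D$; the assertion confirms that the projector can null the entire residual despite $\tilde\bS_D\neq\bS_D$, which is exactly the spurious solution behind the observed error floor.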
However, the results in Figs.~\ref{fig:qpsk_strong_jammers}\,--\,\ref{fig:many_jammers} show that, when the jammer has to select~$\bmw$ without knowing $\bS_D$, such accidental eclipsing is extremely rare.

\subsection{Beyond i.i.d. Rayleigh Fading} \label{sec:beyond}

So far, our experiments were based on i.i.d. Rayleigh fading channels, but our method does not depend in any way on this particular channel model. To demonstrate that MAED and SO-MAED are also applicable in scenarios that deviate strongly from the i.i.d. Rayleigh model, we now evaluate our algorithms on realistic mmWave channels generated with the commercial Wireless InSite ray-tracer \cite{Remcom}. The simulated scenario is depicted in \fref{fig:remcom_scenario}. We simulate a mmWave massive MU-MIMO system with a carrier frequency of $60$\,GHz and a bandwidth of $100$\,MHz. The BS is placed at a height of $10$\,m and consists of a horizontal uniform linear array with $B=128$ omnidirectional antennas spaced at half a wavelength. The omnidirectional single-antenna UEs and the jammer are located at a height of $1.65$\,m and placed in a $150^\circ$ sector spanning $180$\,m$\times$$90$\,m in front of the BS; see \fref{fig:remcom_scenario}. The~UEs~and the jammer are drawn at random from a grid with $5$\,m pitch while ensuring that the minimum angular separation between any two UEs, as well as between the jammer and any UE, is $2.5^\circ$. We assume $\pm1.5$\,dB power control, so that the ratio between the maximum and minimum per-UE receive power is~2.

\begin{figure}[tp]
\centering
\includegraphics[width=0.9\columnwidth]{figures/sector_marked_cropped}
\caption{Simulated scenario. The location of the BS is highlighted by the white circle, while the red squares depict all possible UE locations.}
\label{fig:remcom_scenario}
\vspace{-2mm}
\end{figure}

\begin{figure}[tp]
\centering
\!\!\!\!\!
\subfigure[barrage jammer (J1)]{
\includegraphics[height=3.95cm]{figures/firm_final_weights/remcom/128x32_QPSK_I1_D128_barrage_gaussian_rho30_NJE1_T30_1000Trials}
\label{fig:remcom:barrage} }\!\!\!
\subfigure[pilot jammer (J2)]{
\includegraphics[height=3.95cm]{figures/firm_final_weights/remcom/128x32_QPSK_I1_D128_pilot_gaussian_rho30_NJE1_T30_1000Trials}
\label{fig:remcom:pilot} }\!\! \\
\!\!\!\!\!
\subfigure[data jammer (J3)]{
\includegraphics[height=3.95cm]{figures/firm_final_weights/remcom/128x32_QPSK_I1_D128_data_gaussian_rho30_NJE1_T30_1000Trials}
\label{fig:remcom:data} }\!\!\!
\subfigure[sparse jammer (J4)]{
\includegraphics[height=3.95cm]{figures/firm_final_weights/remcom/128x32_QPSK_I1_D128_sparse_gaussian_rho30_NJE1_T30_1000Trials}
\label{fig:remcom:sparse} }\!\!
\caption{Uncoded bit error-rate (BER) for \emph{QPSK} transmission over realistic mmWave channels in the presence of a \emph{strong} ($\rE=30$\,dB) jammer.}
\label{fig:remcom_results}
\vspace{-2mm}
\end{figure}

The high correlation exhibited by these mmWave channels slows convergence of MAED and SO-MAED, so we increase their number of iterations to $t_{\max}=30$. We also retrain the parameters of SO-MAED on mmWave channels (while making a clear split between the training set and the~evaluation~set). The results for QPSK transmission in the presence of an $\rE=30$\,dB jammer are shown in \fref{fig:remcom_results}. The performance hierarchy is identical to that in the equivalent Rayleigh-fading setup of \fref{fig:qpsk_strong_jammers}: geniePOS is clearly outperformed by MAED, which is in turn outperformed by SO-MAED. However, the more challenging nature of mmWave channels amplifies performance differences: Due to its artificial immunity from the high inter-user interference of mmWave channels, JL-SIMO is now in a class of its own. Nevertheless, MAED and SO-MAED gain almost $4$\,dB and $6$\,dB in SNR on geniePOS at $0.1\%$ BER, respectively, regardless of the jammer type.
This shows that MAED and SO-MAED are also well suited for scenarios that deviate significantly from the i.i.d. Rayleigh model. \vspace{-1mm}

\section{Conclusions}

We have proposed an approach for the mitigation of smart jamming attacks on the massive MU-MIMO uplink and supported its basic soundness with theoretical results. In contrast to existing mitigation methods, our approach does not rely on jamming activity during any particular time instant. Instead, our approach utilizes a newly proposed problem formulation which exploits the fact that the jammer's subspace remains constant within a coherence interval. We have developed two efficient iterative algorithms, MAED and SO-MAED, which approximately solve the proposed optimization problem. Our simulation results have shown that MAED and SO-MAED are able to effectively mitigate a wide range of jamming attacks. In particular, they succeed in mitigating attack types like data jamming and sparse jamming, for which---to the best of our knowledge---no mitigation methods have existed so far. Future work could focus on jammer mitigation with iterative detection and decoding, for which the soft-output estimates of SO-MAED are well suited. Other directions for future work are the extension to multi-antenna jammers as well as to jammer mitigation in wideband systems.

\appendices \vspace{-1mm}

\section{Proof of \fref{thm:maed}} \label{app:proof1}

Obviously, if $\{\hat\bmp, \hat\bH_\bP, \hat\bS_D\}=\{\bmp, \bP\bH, \bS_D\}$, then \begin{align} \hat\bP\bY - \hat\bH_\bP \hat\bS &= \bP\bY - \bP\bH \bS \\ &= \bP(\bH\bS + \Hj\tp{\bsj}) - \bP\bH \bS \\ &= \bP \Hj\tp{\bsj} = \mathbf{0}, \end{align} and so the objective value of \eqref{eq:obj1} is zero. Since the objective function in \eqref{eq:obj1} is nonnegative, it follows that $\{\hat\bmp, \hat\bH_\bP, \hat\bS_D\}=\{\bmp, \bP\bH, \bS_D\}$ is a solution to \eqref{eq:obj1}. It remains to prove uniqueness.
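The zero-objective property just established is straightforward to check numerically (a toy NumPy sketch under the noiseless model of the theorem; the dimensions are arbitrary choices of ours). The uniqueness part, in contrast, requires the rank argument that follows.

```python
import numpy as np

rng = np.random.default_rng(3)
B, U, K = 12, 3, 10

H = rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))
j = rng.standard_normal((B, 1)) + 1j * rng.standard_normal((B, 1))
S = rng.standard_normal((U, K)) + 1j * rng.standard_normal((U, K))
w = rng.standard_normal((1, K)) + 1j * rng.standard_normal((1, K))

Y = H @ S + j @ w  # noiseless receive signal, as in the theorem

# true configuration: p collinear with the jammer channel, H_P = P H, S as is
p = j / np.linalg.norm(j)
P = np.eye(B) - p @ p.conj().T
H_P = P @ H

# P Y - H_P S = P (H S + j w) - P H S = P j w = 0
assert np.linalg.norm(P @ Y - H_P @ S) < 1e-9
```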
For this, we rewrite the objective in \eqref{eq:obj1} as \begin{align} \big\|\tilde\bP\bY \!- \tilde\bH_\bP \tilde\bS \big\|^2_F = \big\|\tilde\bP\bY_T \!- \tilde\bH_\bP \bS_T \big\|^2_F + \big\|\tilde\bP\bY_D \!- \tilde\bH_\bP \tilde\bS_D \big\|^2_F. \label{eq:decomposition} \end{align} The objective can only be zero if both terms on the right-hand side (RHS) of \eqref{eq:decomposition} are zero. The first term is zero iff \begin{align} & \tilde\bP\bY_T - \tilde\bH_\bP \bS_T = \mathbf{0}, \end{align} which implies \begin{align} \tilde\bH_\bP = \tilde\bP\bY_T \pinv{\bS_T}, \label{eq:optimal_h} \end{align} since $\bS_T$ has full row rank. Plugging this back into the second term on the RHS of \eqref{eq:decomposition} gives \begin{align} & \tilde\bP\bY_D - \tilde\bH_\bP \tilde\bS_D \\ &= \tilde\bP(\bY_D - \bY_T \pinv{\bS_T} \tilde\bS_D) \\ &= \tilde\bP \left(\bH\bS_D + \Hj\tp{\bsj_D} - (\bH\bS_T + \Hj\tp{\bsj_T}) \pinv{\bS_T} \tilde\bS_D \right) \\ &= \tilde\bP \left(\bH [\bS_D- \tilde\bS_D] + \Hj[\tp{\bsj_D} - \tp{\bsj_T}\pinv{\bS_T} \tilde\bS_D] \right). \label{eq:term_mat} \end{align} The second term on the RHS of \eqref{eq:decomposition} (and, hence, the objective) is zero if and only if the matrix in \eqref{eq:term_mat} is the zero matrix. The projector $\tilde{\bP}$ can null a matrix of (at most) rank one. It follows that the objective function in \eqref{eq:decomposition} can be zero only if \begin{align} \bH [\bS_D- \tilde\bS_D] + \Hj[\tp{\bsj_D} - \tp{\bsj_T}\pinv{\bS_T} \tilde\bS_D] \label{eq:eclipsing_equation} \end{align} is a matrix of (at most) rank one. Since $\bH$ has full column rank and $\Hj$ is not contained in the column space of $\bH$, this requires that either $\text{rank}(\bS_D- \tilde\bS_D)=1$ and $\tp{\bsj_D} - \tp{\bsj_T}\pinv{\bS_T} \tilde\bS_D=\mathbf{0}$, or that $\bS_D - \tilde\bS_D=\mathbf{0}$. But the first case is precluded since it implies that the jammer is eclipsed, contrary to our assumption.
In the second case, we have $\tilde\bS_D = \bS_D$, so the estimated data matrix coincides with the true data matrix. In that case,~\eqref{eq:term_mat} is \begin{align} \tilde\bP \Hj[\tp{\bsj_D} - \tp{\bsj_T}\pinv{\bS_T} \tilde\bS_D], \end{align} which (again by the assumption that the jammer is not eclipsed) is zero if and only if $\tilde\bmp$ is collinear with $\bmj$, meaning that $\tilde\bmp = \alpha \bmp, |\alpha|=1$. This means that the estimated jammer subspace also coincides with the true jammer subspace. Finally, plugging this value of $\tilde\bmp$ back into \eqref{eq:optimal_h} yields \begin{align} \tilde\bH_\bP &= \tilde\bP\bY_T \pinv{\bS_T} \\ &= \tilde\bP (\bH\bS_T + \Hj\tp{\bsj_T}) \pinv{\bS_T} \\ &= \tilde\bP\bH\bS_T \pinv{\bS_T} = \bH_\bP, \end{align} showing that the estimated channel matrix also coincides with the projection of the true channel matrix. We have thereby shown that $\big\|\tilde\bP\bY - \tilde\bH_\bP \tilde\bS \big\|^2_F$ is zero if and only if $\tilde\bS_D=\bS_D$, $\tilde\bmp = \alpha\bmp, |\alpha|=1$, and $\tilde\bH_\bP = \bH_\bP$. \hfill $\blacksquare$

\section{Proof of \fref{thm:maed2}} \label{app:proof2}

$\bS_T$ is unitary, so the jammer is eclipsed if \begin{align} \tp{\bsj_D} &= \tp{\bsj_T}\pinv{\bS_T}\tilde{\bS}_D \\ &= \tp{\bsj_T}\bS_T^H\tilde{\bS}_D = (\underbrace{\bS_T\bsj_T^\ast}_{\triangleq \bmx})^H \tilde{\bS}_D \end{align} for some $\tilde{\bS}_D\in\setS^{U\times D}$ such that $\text{rank}(\bS_D - \tilde{\bS}_D)\leq 1$. Since~$\bS_T$ is Haar distributed\footnote{The uniform distribution over unitary matrices is called the Haar distribution.}, the vector $\bmx$ is distributed uniformly over the complex $U$-dimensional sphere of radius $\|\bmw\|$ \cite[p. 16]{meckes2019random}. In particular, $\bmx$ is independent of $\bmw_D$.
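The distributional facts just stated can be mimicked numerically: a Haar-distributed unitary can be sampled via the QR decomposition of a complex Gaussian matrix (with a standard phase fix), and unitarity places $\bmx = \bS_T\bsj_T^\ast$ on the sphere of radius $\|\bmw\|$. A minimal NumPy sketch (the sampling recipe follows the standard construction and is not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
U = 8

# sample a Haar-distributed unitary: QR of a complex Gaussian matrix,
# with column phases fixed by the diagonal of R
A = rng.standard_normal((U, U)) + 1j * rng.standard_normal((U, U))
Q, R = np.linalg.qr(A)
d = np.diag(R)
Q = Q * (d / np.abs(d))  # multiplies column k of Q by the phase of R[k, k]

w = rng.standard_normal((U, 1)) + 1j * rng.standard_normal((U, 1))
x = Q @ w.conj()  # x = S_T w^*, as in the proof

assert np.allclose(Q.conj().T @ Q, np.eye(U))          # Q is unitary
assert np.isclose(np.linalg.norm(x), np.linalg.norm(w))  # x on sphere of radius ||w||
```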
We can now show that \begin{align} \tp{\bmw_D} \neq \bmx^H\tilde{\bS}_D \end{align} holds with probability one by showing that, already for the first entry, we have \begin{align} w_{D,1} \neq \bmx^H \tilde\bms_{D,1} \label{eq:first_entry_criterion} \end{align} with probability one, where $ \tilde\bms_{D,1}$ is the leftmost column of~$\tilde{\bS}_D$: We can interpret $\bmx^H \tilde\bms_{D,1}=\|\tilde\bms_{D,1}\| \langle \tilde\bms_{D,1}/\|\tilde\bms_{D,1}\|, \bmx \rangle$ as the top left entry of a matrix product $\|\tilde\bms_{D,1}\|\bX\bZ$, where $\bX$ is a Haar-distributed matrix whose first row is $\bmx^H$, and where $\bZ$ is a unitary matrix whose first column is $\tilde\bms_{D,1}/\|\tilde\bms_{D,1}\|$. It then follows from \cite[p. 7]{meckes2019random} that $\bX\bZ$ is Haar distributed. Thus, the distribution of its top left entry and hence also of $\bmx^H\tilde\bms_{D,1}$ has no mass points. Since $w_{D,1}$ is independent of $\bmx$, \eqref{eq:first_entry_criterion} must therefore hold with probability one. \hfill $\blacksquare$
\section{Introduction}

In this paper we study the dimensional reduction of $\mathcal{N}=1$ 4d s-confining theories to 3d in the brane setup. We show that a key role is played by the exotic contribution of stringy instantons. A general procedure for reducing 4d dualities to 3d has been furnished in \cite{Aharony:2013dha,Aharony:2013kma} (see also \cite{Niarchos:2012ah} for an earlier discussion). It is based on the observation that a naive compactification of dual theories on a circle generally spoils the 4d duality. This is because when reducing the theories the scale of validity of the 3d duality is lower than the effective scale at which the new theories have to be considered. Alternatively, one can think that in this process extra symmetries, anomalous in 4d, are generated in 3d. These symmetries lead the 4d dual theories to different fixed points in 3d. This problem can be avoided if the finite-size effects of the circle are considered. In this way there is an effective 3d duality at the scale set by the size of the circle. This is the scale of the KK monopoles, which act as instantons in this setup and generate non-perturbative superpotentials. These superpotentials break the extra symmetries discussed above. The theories on the circle can then be considered as an effective 3d dual pair. More conventional dual pairs are obtained by a real mass flow. This procedure is general and can be applied to any 4d duality between gauge theories. It has been shown \cite{Amariti:2015yea,Amariti:2015mva} that if the theories have a IIA brane realization the mechanism can be reproduced in string theory. This is based on T-duality, and the effects of the monopoles are reproduced by the action of D1 branes. There is another possible IR behavior of UV-free theories in 4d: confinement. In this case the low-energy dynamics is not described by a dual gauge theory but rather in terms of mesons and baryons.
In this case the reduction to 3d is more complicated and the prescription of \cite{Aharony:2013dha,Aharony:2013kma} requires some modification. Before describing the reduction in the confining phase we discuss some 4d aspects of these theories that will be useful in the following. In supersymmetry, some confining theories correspond to limiting cases of electric/magnetic Seiberg dualities \cite{Seiberg:1994pq}. The simplest example is $\mathcal{N}=1$ SQCD with $N$ colors and $N+1$ flavors. This theory is s-confining in the IR \cite{Seiberg:1994bz}, and in this regime it is described by the mesonic and the baryonic degrees of freedom, interacting through a superpotential. This superpotential corresponds to the classical constraint on the moduli space. Equivalently, this low-energy description has been obtained by adding a massive flavor in the gauge theory and by studying the large mass regime \cite{Seiberg:1994pq}. In this case the theory originally has $N+2$ flavors and the strongly coupled phase can be described by a dual IR-free $SU(2)$ gauge theory. Integrating out the massive flavor in the electric theory corresponds to a total higgsing of the $SU(2)$ dual gauge group. In this case the superpotential is reproduced by a scale matching relation on the instantonic contribution of the totally broken $SU(2)$. Gauge instantons have a counterpart in string theory: they are related to Dp branes placed on stacks of D(p+4) branes (see \cite{Tong:2005un,Blumenhagen:2009qh} and references therein for reviews). The stack of D(p+4) branes corresponds to the non-abelian gauge theory and the Dp branes reproduce the effect of the gauge instantons. The gauge instanton effect discussed above can be captured in a different way in string theory, without the need to UV complete the confining phase to an $SU(2)$ gauge theory.
This effect can be observed in a IIA brane engineering, by considering a single D4 brane extended between two non-parallel NS branes, together with some D6 flavor branes. In 4d the abelian gauge factor associated to this D4 brane decouples in the IR. Unexpectedly, also in this case, the D instanton, a Euclidean D0 brane, reproduces the superpotential effect of the gauge instanton of the broken $SU(2)$ \cite{GarciaEtxebarria:2007zv,Krefl:2008gs,Amariti:2008xu}. For this reason this D instanton has been called ``exotic'' in the literature. Observe that the size of the instantonic correction is different in the two regimes, the stringy and the field theory one, signaling that the two descriptions are accurate at different scales \cite{Krefl:2008gs}.
\\ \\
In this paper we study the fate of this type of stringy instanton when the s-confining gauge theories are compactified on a circle and reduced to 3d. We perform the reduction separately in the confining and in the confined phase\footnote{Observe that in 4d, in the limiting case of $SU(N)$ Seiberg duality, the electric theory becomes a confining phase while the magnetic theory is identified with the confined phase. In the 3d case, where the theories are conformal, we have a duality. For this reason, with a slight abuse of notation, in the 3d case, we refer to the two dual phases as the electric and the magnetic theory.} both in the field theory regime and in the string theory regime. In the field theory regime we follow the procedure of \cite{Aharony:2013dha,Aharony:2013kma} for reducing the confining phase. In the confined regime the gauge theory is absent. In this case there is a different prescription \cite{Csaki:2014cwa,Amariti:2015kha} stating that the effective confined theory on the circle has the same field content and superpotential as the 4d parent. In the string theory regime we follow the arguments of \cite{Amariti:2015yea,Amariti:2015mva} for reducing the confining gauge theory on the circle.
In the confined case we observe that the extra contribution is captured by the T-dual version of the D0 stringy instanton. This corresponds to a Euclidean D1 brane, or D string, \emph{i.e.} a monopole acting as an instanton in 3d. Thanks to this observation we reproduce the results obtained in field theory with the prescription of \cite{Csaki:2014cwa,Amariti:2015kha}. We study the reduction of 4d confining theories both with unitary and with symplectic gauge groups. Furthermore we study this mechanism in terms of the reduction of the 4d superconformal index to the 3d partition function. In section \ref{sec:4ds} we review the 4d s-confining $SU(N)$ SQCD and its relation with the stringy instanton. In section \ref{sec:redSU} we study the reduction of this theory to 3d in field theory and in string theory. We study the role of the stringy instanton in the brane engineering of this theory. In the 3d limit, by gauging the baryonic symmetry, we arrive at the $U(N)$ case with $N$ flavors, and reproduce the limiting case of Aharony duality \cite{Aharony:1997gp}. We conclude this section with the reduction of the superconformal index to the partition function, which confirms the validity of the procedure. In section \ref{sec:redSP} we repeat the analysis for the symplectic case. In section \ref{sec:conc} we conclude. \section{s-confinement and exotic instantons} \label{sec:4ds} In this section we review the exotic effects of stringy instantons in 4d $\mathcal{N}=1$ supersymmetric gauge theories. This instanton configuration has been extensively studied in cascading quiver gauge theories \footnote{Similar ideas have been discussed in the context of matrix models \cite{Aganagic:2003xq}.
The relation with the stringy instanton has been shown in \cite{GarciaEtxebarria:2008iw}.}, associated to the CY singularity probed by a stack of D3 branes in AdS/CFT \cite{GarciaEtxebarria:2007zv,Florea:2006si,Argurio:2007vqa,Aharony:2007pr, Petersson:2007sc,Kachru:2008wt, Argurio:2008jm,Ferretti:2009tz}. The RG cascading quivers are obtained by the addition of fractional D3 branes. There are cases where some of the nodes have rank $N=1$, i.e. a single D3 brane is left on such nodes at the end of the duality cascade. With an abuse of notation we denote these nodes as $SU(1)$ nodes, having in mind the decoupling of the $U(1)$ factors in the IR. Even if there are no gauge dynamical degrees of freedom from the singularities associated to these nodes, the latter play a role in the dynamics if Euclidean rigid D(-1) instantons are wrapped on them. The instantons generate a non-perturbative dynamics that modifies the effective superpotential. This is obtained by considering the bosonic and fermionic zero modes in the ADHM construction, together with the relative action and constraints. Many of the zero modes are lifted, except for two fermionic modes that correspond to two fields, $\alpha$ and $\beta$, also called Ganor strings \cite{Ganor:1996pe}, connecting the $SU(1)$ node to the rest of the quiver. These modes are lifted by an interaction $\alpha M \beta$, giving rise to the superpotential \begin{equation} \label{eq:W-stringy-unitary} W = \int d \alpha \, d\beta \, e^{\alpha_a M_{ab} \beta_b} \simeq \det M \end{equation} Observe that the fermionic zero modes and the generalized meson $M$ have an index structure inherited from the quiver. This is reminiscent of the mesonic superpotential appearing in the confined phase of $SU(N)$ SQCD with $N+1$ flavors \cite{GarciaEtxebarria:2007zv,Krefl:2008gs,Amariti:2008xu}. As discussed in the introduction this theory confines in the IR and it can be obtained as a limiting case of Seiberg duality.
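The Grassmann integral in (\ref{eq:W-stringy-unitary}) can be evaluated explicitly: for two sets of $n$ Grassmann variables $\alpha_a$ and $\beta_b$ one has the standard fermionic Gaussian identity
\begin{equation}
\int \prod_{a=1}^{n} d\alpha_a \, d\beta_a \; e^{\alpha_a M_{ab} \beta_b} = \det M
\end{equation}
up to an overall sign fixed by the ordering of the measure. Only the term in the expansion of the exponential that saturates all the fermionic zero modes survives, which is why the result is the determinant of the generalized meson.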
The low energy theory in this case has a superpotential of the form \begin{equation} \label{eq:W-IR-SU(N)-SQCD} W = b M \tilde b + \det M \end{equation} The second term in (\ref{eq:W-IR-SU(N)-SQCD}) has been interpreted as a gauge instanton in field theory, by UV completing the s-confining phase to $SU(N)$ SQCD with $N+2$ flavors. In this case there is a Seiberg dual description in terms of an $SU(2)$ gauge theory with $N+2$ dual flavors. If a mass term $W = m Q_{N+2}^{\alpha} \tilde Q_{\alpha}^{N+2}$ is added, the IR theory has $N+1$ light flavors. In the magnetic theory the mass term enforces the total higgsing of the $SU(2)$ gauge group, forced by the $F$ term of the meson $M_{N+2}^{N+2}$. In the higgsed phase the dual quarks are identified with the baryons of the electric theory with $N+1$ flavors. The gauge instanton associated to the broken $SU(2)$ generates a contribution proportional to $\det M$ in the superpotential. This construction reproduces the superpotential (\ref{eq:W-IR-SU(N)-SQCD}). As discussed above this contribution can also be obtained from an instantonic calculation, by engineering the gauge theory in a Hanany-Witten (HW) \cite{Hanany:1996ie} setup. The electric SQCD theory is represented as the low energy limit of a stack of $N$ D4 branes in type IIA string theory. Here we review the construction of this theory. \begin{figure} \begin{center} \includegraphics[width=10cm]{GAUGESUN.pdf} \caption{The top figure represents the IIA brane setup describing $SU(N)$ SQCD with $N+1$ flavors, the bottom figure represents the theory after an HW transition.} \label{fig:SU(N)-SQCD anD branes} \end{center} \end{figure} The 4d gauge theory is described by a stack of $N$ D4 branes suspended between an NS and an NS' brane. When $N+1$ D6 branes are considered this setup describes $SU(N)$ SQCD with $N+1$ flavors.
The branes are extended in the ten dimensional space-time as follows: \begin{center} \begin{tabular}{l||cccccccccc} &0& 1&2&3&4&5&6&7&8&9 \\ \hline D4 & $\times$ & $\times$ & $\times$ & $\times$ &&&$\times$&&\\ D6 & $\times$ & $\times$ & $\times$ & $\times$ &&&&$\times$&$\times$ & $\times$\\ NS & $\times$ & $\times$ & $\times$ & $\times$&$\times$ & $\times$ \\ NS' & $\times$ & $\times$ & $\times$ & $\times$ &&&&&$\times$ & $\times$\\ D0 &&&&&&&$\times$\\ \end{tabular} \end{center} The brane setup is shown in Figure \ref{fig:SU(N)-SQCD anD branes}. The confined theory is obtained by exchanging the NS and the NS' branes. A single D4 brane is stretched between the NS and the NS' brane after the transition. This corresponds to a $U(1)$ gauge theory. In 4d the vector multiplet of an abelian gauge theory decouples in the IR. We are in the presence of the $SU(1)$ factor discussed above. The light degrees of freedom in this setup are the baryons, oriented strings connecting the $N+1$ flavor D4 branes with the D4 brane stretched between the NS and the NS' brane, and the meson, open strings with both endpoints on the $N+1$ flavor D4 branes. The meson is dynamical because it is associated to the freedom of moving the $N+1$ flavor D4 branes in the directions $(8,9)$. This motion corresponds to an effective mass for the baryons and it can be represented as a superpotential term \begin{equation} \label{WMbb} W=M b \tilde b \end{equation} In other words the dynamical mass term of the baryons corresponds to the vev of the meson $M$ in the $(8,9)$ directions. The description of the confined theory in terms of D branes also requires a term proportional to $\det M$ in (\ref{WMbb}). This term is obtained from the action of the exotic instanton, \emph{i.e.} a Euclidean D0 brane sitting on the D4 brane stretched between the NS and the NS' branes.
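As a consistency check on (\ref{eq:W-IR-SU(N)-SQCD}), one can restore the dynamical scale, with a normalization that may differ by convention-dependent factors:
\begin{equation}
W = \frac{b \, M \, \tilde b + \det M}{\Lambda^{2N-1}}
\end{equation}
Since the baryons have classical dimension $N$ and the meson dimension $2$, both terms have dimension three once divided by $\Lambda^{2N-1}=\Lambda^{3N-(N+1)}$, which is precisely the one-instanton factor of the s-confining theory.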
\section{Reduction to 3d of $SU(N_c)$ SQCD with $N_c+1$ flavors} \label{sec:redSU} In this section we reduce the 4d theories discussed above to 3d, in the field theory and in the string theory regime. We also study the reduction of the 4d superconformal index to the 3d partition function. \subsection*{Dimensional reduction in the field theory regime} When reducing a 4d duality to 3d a straight compactification may be too naive, and the 3d pairs obtained are not necessarily dual. Indeed 4d anomalous abelian symmetries may arise in 3d and mix with the current $J_\mu^R$. This mixing can induce different RG flows in the two phases, spoiling the original 4d duality \cite{Aharony:2013dha}. There is a general procedure to reduce a 4d dual pair to a 3d one. It consists of considering the 4d theories on $\mathbb{R}^3 \times S^1$ as effective 3d theories. The finite size effects of the circle are encoded in an effective superpotential \cite{Aharony:2013dha}. This effective interaction prevents the generation of 4d anomalous symmetries in 3d. This procedure generates 3d dual effective theories. More conventional dual pairs are recovered by shrinking the circle $S^1$, although this limit is not always possible. In some cases, like the $SU(N)$ and $SP(N)$ theories considered here, it requires a further real mass deformation, leading to an RG flow. This general procedure is valid when both phases of the duality correspond to a gauge theory. In these cases the extra non-perturbative effects can be thought of as the fractionalization of a 4d instanton into 3d monopoles \cite{Brodie:1998bv}. These monopoles, acting as instantons in 3d, are of two types: the BPS monopoles, that survive the compactification, and the KK monopoles, that encode the information of the circle at finite radius \cite{Hanany:1996ie,deBoer:1997ka,Davies:1999uw,Davies:2000nw}.
In the limiting cases considered here, where instead of a dual gauge theory the IR physics is described by a confined phase, there are no instanton contributions in the low energy theory, because the gauge group is absent. Nevertheless there is a contribution to the superpotential of non-perturbative origin. This effect corresponds to the instanton of the completely broken $SU(2)$ discussed above. One can study the $SU(2)$ dual gauge theory on $\mathbb{R}^3 \times S^1$, consider the effect of the KK monopole and completely break the dual gauge symmetry as done in the 4d case. A different strategy has been proposed in \cite{Csaki:2014cwa}: when reducing a confined phase on the circle the effective theory is formally identical to its 4d parent. We will adopt this strategy in the rest of the discussion. First we reduce the electric gauge theory, $SU(N)$ SQCD with $N+1$ flavors, on $\mathbb{R}^3 \times S^1$. Here there is a KK monopole contribution, through the effective superpotential \begin{equation} \label{eq:eta} W=\eta Y \quad \text{where} \quad Y=e^{(\sigma_1-\sigma_{N})/g_3^2+i(\varphi_1-\varphi_{N})}. \end{equation} We will refer to this superpotential as the $\eta Y$ superpotential in the rest of the paper. The fields $\sigma$ are the real scalars in the vector multiplet and the fields $\varphi$ correspond to the dual photons on the Coulomb branch. These fields organize into a chiral multiplet $\Sigma=\sigma/g_3^2+i \varphi$ that parameterizes the directions of the Coulomb branch. The operator $Y= e^{\Sigma}$ corresponds to a monopole operator in the high energy description. The 4d gauge coupling $g_4$ reduces to the 3d one through the relation $g_4^2 = 2 \pi r g_3^2$. The extra superpotential (\ref{eq:eta}) is associated to the holomorphic scale of the 4d theory by $\eta = \Lambda^b = e^{-8\pi^2/g_4^2} = e^{-\frac{4 \pi}{r g_3^2}}$. The confined theory on $\mathbb{R}^3 \times S^1$ corresponds to the set of mesons and baryons discussed above.
The superpotential of this theory is again (\ref{WMbb}). The pure 3d duality is obtained from the duality on the circle by perturbing the electric theory with a real mass deformation. We assign a large real mass to a pair of fundamental and anti-fundamental fields and reduce the number of flavors from $N+1$ to $N$. This is done by weakly gauging a combination of generators of the baryonic symmetry and of the non abelian symmetry, as discussed in the appendix of \cite{Csaki:2014cwa}. We choose a combination that assigns opposite signs to the real masses of the two fields. In the large mass limit the electric theory becomes $SU(N)$ SQCD with $N$ flavors. Since the masses have opposite signs the flow does not generate any CS term. The real mass deformation induces real masses also in some components of the mesons and of the baryons. These masses are assigned consistently with the global symmetry structure. In the dual theory, if we split the fields as \begin{equation} M = \left( \begin{array}{cc} M_{i}^{i} & M_{i}^{N+1}\\ M_{N+1}^{i}&M_{N+1}^{N+1} \end{array} \right) \quad B = \left( \begin{array}{cc} B_{i} &B_{N+1} \end{array} \right) \quad \tilde B = \left( \begin{array}{l} \tilde B^{i}\\ \tilde B^{N+1} \end{array} \right) \end{equation} the massless components are $M_{i}^{i}$, $M_{N+1}^{N+1}$, $B_{N+1}$ and $\tilde B^{N+1}$. The superpotential for the massless fields is \begin{equation} W = M_{N+1}^{N+1} (B_{N+1} \tilde B^{N+1} + \det M_{i}^{i}) \end{equation} The singlet $M_{N+1}^{N+1}$ has the same global charges as the electric monopole $Y$ defined in (\ref{eq:eta}). We identify $M_{N+1}^{N+1}$ with $Y$ and obtain the 3d duality corresponding to the limiting case of $SU(N)$ Aharony duality \cite{Aharony:1997gp}. We can also gauge the $U(1)_B$ baryonic symmetry. The electric theory in this case becomes $U(N)$ SQCD with $N$ flavors. In the dual phase we have a $U(1)$ gauge theory with one charged fundamental and one charged anti-fundamental.
This theory is mirror dual to the $\mathcal{XYZ}$ model \cite{Aharony:1997bx}. Here we associate the field $\mathcal{X}$ to the gauge invariant combination $\tilde B^{N+1} B_{N+1}$, while the other two chiral fields can be denoted as $\mathcal{Y}=v_+$ and $\mathcal{Z}=v_-$. The superpotential of the mirror dual theory becomes \begin{equation} W = Y \mathcal{X} + Y \det M_{i}^{i} + v_+ v_- \mathcal{X} \end{equation} By integrating out the massive fields we obtain the relation $Y=v_+ v_-$. Eventually the superpotential of the dual theory is \begin{equation} \label{Wfin} W = v_+ v_- \det M_{i}^i \end{equation} and it corresponds to the superpotential of the limiting case of $U(N)$ Aharony duality \cite{Aharony:1997gp}. \subsection*{Brane interpretation} Here we provide a brane interpretation of the reduction discussed above. A general procedure for reducing 4d dualities engineered in type IIA string theory to 3d dualities in IIB setups has been developed in \cite{Amariti:2015yea} for unitary theories with fundamental flavors, and extended in \cite{Amariti:2015mva} to more general gauge groups and field contents. The procedure is based on compactification and T-duality. The KK monopole effects are captured by D1 branes or, by S-duality, by F-strings. Reducing to pure 3d pairs requires a double scaling limit on the radius and on the real masses, associated to the position of some flavor branes on the circle. Here we consider the 4d brane setup of Figure \ref{fig:SU(N)-SQCD anD branes}, and compactify the theory on $x_3$. By T-duality along this direction the IIA system becomes a IIB system and describes the effective 4d field theory on $\mathbb{R}^3 \times S_r^1$, where $r$ is the radius of the circle. The NS and NS' branes are left invariant by T-duality, while the D4 and the D6 branes become D3 and D5 branes respectively. The D0 branes become D1 branes. At large T-dual radius $\alpha'/r$ this brane setup describes an effective 3d theory.
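For reference, the T-duality rules assumed in this construction are the standard ones:
\begin{equation}
r \;\rightarrow\; \tilde r = \frac{\alpha'}{r}\,, \qquad g_s \;\rightarrow\; \tilde g_s = g_s \, \frac{\sqrt{\alpha'}}{r}
\end{equation}
so that a small 4d circle corresponds to a large T-dual circle. Dp branes wrapping $x_3$ map to D(p-1) branes, while Dp branes transverse to $x_3$ map to D(p+1) branes, consistently with the map of the D4, D6 and D0 branes described above.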
When considering a 3d duality at brane level we must also gauge the $U(1)_B$ symmetry \cite{Amariti:2015yea}. At brane level we associate this symmetry to the relative position of the center of mass of the stack of gauge D3 branes with respect to the center of mass of the flavor D3/D5 branes. By fixing the position of the center of mass of one stack and allowing the motion of the other one, we can consider the $U(1)_B$ symmetry as gauged or not. In the $U(N)$ theory on the circle the effect of the monopoles is encoded in the D1 branes stretched between the D3 and the NS branes along the directions $x_3$ and $x_6$. Equivalently it is associated to the spectrum of the S-dual F-strings. In the 3d decompactified case these effects, corresponding to the repulsive interactions between the parallel D3 branes, are associated to the BPS monopole superpotential, $W \simeq \sum Y_i^{-1}$. In the compact case there is an additional contribution, corresponding to the KK monopole \cite{deBoer:1997ka,Davies:1999uw}. The analysis of the reduction of the confining phase differs from the one of \cite{Amariti:2015yea}. Indeed in this case the $\eta Y$ superpotential cannot arise, because of the absence of the dual gauge group. In the brane picture there is still a D3 brane and we can consider the effect of a D1 brane wrapping the $x_3$ direction. This is the effect of the 4d stringy instanton once the theory is reduced on the circle. In 4d it gave rise to the extra term $\det M$ in the superpotential. Here, in the 3d description, this effect is captured by the T-dual D1 brane. We can further flow to a pure 3d duality by a real mass flow. In the electric case we move a D5 brane to $x_3 = \pi r$ on the circle. By fixing the D3 branes at the position $x_3 = 0$ on the T-dual circle, we locate a D5 at the position $x_3 = \pi \alpha'/r$, defined as the mirror point in \cite{Amariti:2015mva}.
By considering the limit $r \rightarrow 0$ the sector at this mirror point can be decoupled and we are left with 3d $U(N)$ SQCD with $N$ flavors. In the dual theory the motion of a D5 at the mirror point generates a D3 brane by the HW effect. In this case there are no D3 branes left in the gauge sector at $x_3=0$. The baryons at $x_3=0$ are massive and there is an $N \times N$ meson $M$ left. At small $r$ an S-duality produces an $\mathcal{XYZ}$-like model at the mirror point. This is not the usual $\mathcal{XYZ}$ model because there are $N$ D3 branes stretched between a D5 and a parallel NS' brane. This signals the presence of a mass term for one of the singlets, associated to the freedom of moving the D3 branes between the D5 and the parallel NS' brane. The superpotential is $W=\mathcal{X} Y + \mathcal{XYZ}$, where $Y$ parameterizes the motion of the flavor D3 brane placed at the mirror point. In the small $r$ limit this field $Y$ interacts with the meson $M_{i}^{i}$ at $x_3=0$ through the superpotential generated by the D1 brane. This interaction indeed corresponds to moving one of the D5 branes to $x_3=\pi \alpha'/r$. The original term $\det M$ becomes in this case $Y \det M_{i}^{i}$. Putting everything together the superpotential becomes $W=\mathcal{Y}\mathcal{Z} \det M_{i}^{i}$, which corresponds to (\ref{Wfin}) once the fields $\mathcal{Y}$ and $\mathcal{Z}$ are identified with the monopole and the anti-monopole of the $U(N)$ theory, $v_{\pm}$. \subsection*{The partition function} The reduction of 4d $SU(N)$ SQCD with $N+1$ flavors can also be studied at the level of the 4d superconformal \footnote{We keep the usual abuse of notation in this terminology because the index does not require a superconformal theory \cite{Festuccia:2011ws}, but just the presence of a conserved R symmetry. More correctly we should refer to the supersymmetric partition function on $S^3 \times_q S^1$.} index \cite{Kinney:2005ej,Romelsberger:2005eg}.
The index reduces to the 3d partition function \cite{Dolan:2011rp,Gadde:2011ia,Imamura:2011uw,Agarwal:2012hs}, computed on a squashed three sphere $S_b^3$ \cite{Hama:2011ea}, preserving a $U(1)^2$ isometry of $SO(4)$. The subscript $b$ represents the squashing parameter. The index for the 4d theories has been studied in \cite{Spiridonov:2009za,Spiridonov:2014cxa}. After we reduce the 4d index we obtain the 3d partition function of $SU(N)$ SQCD with $N+1$ flavors with the extra $\eta Y$ superpotential. This partition function can be written as \begin{eqnarray} \label{eleS1} \mathcal{Z}_e(\mu+m_B;\nu-m_B) &=& \int \prod_{i=1}^{N} d\sigma_i \prod_{a=1}^{N+1}\Gamma_{h}(\mu_a+\sigma_i+m_B) \Gamma_{h}(\nu_a-\sigma_i-m_B) \nonumber \\ &\times& \prod_{1\leq i<j \leq N} \Gamma_{h}^{-1} (\pm(\sigma_i-\sigma_j)) \delta \Big(\sum_{i=1}^{N} \sigma_i\Big) \end{eqnarray} The functions $\Gamma_h$, hyperbolic gamma functions \cite{vandebult}, are the exact one-loop determinants of the matter and vector multiplets computed via localization in \cite{Hama:2011ea,Jafferis:2010un,Hama:2010av}. The parameters $\mu_a$ and $\nu_a$ correspond to the real masses associated to the $SU(N+1)_l \times SU(N+1)_r$ flavor symmetry. These masses have an imaginary part corresponding to the $R$-symmetry, i.e. $\mu_a = m_a + \omega \Delta$ and $\nu_a = \tilde m_a + \omega \Delta$, where $m_a$ and $\tilde m_a$ are real mass parameters satisfying $\sum m_a=\sum \tilde m_a = 0$, $\Delta$ is the $R$-charge and $\omega \equiv i( b+1/b)/2$. The parameter $m_B$ is associated to the real mass for the baryonic $U(1)_B$ symmetry. The real coordinate $\sigma_i$ corresponds to the scalar in the vector multiplet, and the constraint $\sum \sigma_i=0$ is enforced by the $\delta$-function. The real scalar parameterizes the fundamental representation as $(+ \sigma_i)$, the anti-fundamental as $(-\sigma_i)$ and the adjoint as $(\sigma_i - \sigma_j)$. We used the definition $\Gamma_h(\pm z) = \Gamma_h(z) \Gamma_h(-z)$.
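For completeness, a common (formally $\zeta$-regularized) product representation of the hyperbolic gamma function is
\begin{equation}
\Gamma_h(z;\omega_1,\omega_2) = \prod_{m,n=0}^{\infty} \frac{(m+1)\,\omega_1+(n+1)\,\omega_2-z}{m\,\omega_1+n\,\omega_2+z}
\end{equation}
with $\omega_1 = i b$ and $\omega_2 = i/b$, in conventions where $2\omega=\omega_1+\omega_2$. From this representation the reflection property $\Gamma_h(z)\Gamma_h(2\omega-z)=1$ follows directly, since the numerator of one factor cancels the denominator of the other.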
There is a constraint among the real masses, which corresponds to the condition imposed by the presence of the superpotential (\ref{eq:eta}). This constraint is \begin{equation} \label{balancing} \sum_{a=1}^{N+1} \mu_a + \sum_{a=1}^{N+1} \nu_a = 2\omega \end{equation} We can also study the effect of the gauging of the baryonic $U(1)_B$ symmetry on the partition function. First we introduce a factor $e^{2 \pi i m_B \Lambda N}$, where $\Lambda$ is an arbitrary real parameter. Then we Fourier transform the $\delta$-function, shift $\sigma_i \rightarrow \sigma_i-m_B$ and obtain a new $\delta$-function $\delta (\Lambda -\xi)$. After performing the integral over $\xi$ we obtain \begin{equation} \mathcal{Z}_e(\mu;\nu;\Lambda) = \frac{1}{N} \int e^{2 \pi i \Lambda \sum_i \sigma_i} \prod_{i=1}^{N} d\sigma_i \prod_{a=1}^{N+1}\Gamma_{h}(\mu_a+\sigma_i) \Gamma_{h}(\nu_a-\sigma_i) \!\!\!\!\! \prod_{1\leq i<j \leq N} \!\!\!\!\! \Gamma_{h}^{-1} (\pm(\sigma_i-\sigma_j)) \end{equation} When reduced on the circle, the partition function of the 4d confined phase corresponds to the product of the contributions of the mesons and of the baryons. The 3d partition function of the dual effective theory is \begin{equation} \label{magS1} \mathcal{Z}_m=\mathcal{Z}_M \mathcal{Z}_b \mathcal{Z}_{\tilde b} = \prod_{a,b}^{N+1} \Gamma_{h} (\mu_a+\nu_b) \prod_{a=1}^{N+1} \Gamma_h(\omega+N m_B-\mu_a) \Gamma_h(\omega-N m_B-\nu_a) \end{equation} The partition function (\ref{eleS1}) coincides with (\ref{magS1}) if the parameters satisfy (\ref{balancing}). As done in the electric case we can further add the extra factor $e^{2 \pi i m_B \Lambda N}$ and gauge the $U(1)_B$ symmetry. We obtain \begin{equation} \mathcal{Z}_m = \prod_{a,b}^{N+1} \Gamma_{h} (\mu_a+\nu_b) \int d m_B e^{2 \pi i N m_B \Lambda}\prod_{a=1}^{N+1} \Gamma_h(\omega+N m_B-\mu_a) \Gamma_h(\omega-N m_B-\nu_a) \end{equation} The decompactification limit requires a real mass flow in the field theory analysis.
This real mass flow is reproduced if we consider the assignment \begin{eqnarray} \mu_a = \left\{ \begin{array}{ll} m_a +m_A & a=1,\dots, N\\ m - m_A N + \omega & \end{array}\right. \quad\quad \nu_a = \left\{ \begin{array}{ll} ~~\tilde m_a +m_A & a=1,\dots, N\\ -m - m_A N +\omega& \end{array}\right. \nonumber \\ \end{eqnarray} with the constraint $\sum m_a=\sum \tilde m_a=0$. The flow is reproduced by the limit $m\rightarrow \infty$ on the partition function. On the hyperbolic gamma functions this limit is computed with the formula \cite{vandebult} \begin{equation} \label{kargeZmass} \lim_{x \rightarrow \infty} \Gamma_h(x) = e^{i \pi \text{sign}(x) (x-\omega)^2} \end{equation} We study this limit on the partition function on both sides of the duality. The partition function of the electric theory becomes (we omit the large $m$ dependence in the following because we checked that it coincides with the one computed in the dual frame) \begin{eqnarray} \label{Zelefin} \mathcal{Z}_e = \!\!\! \int \frac{e^{2 \pi i \Lambda \text{Tr} \sigma} }{N} \prod_{i=1}^{N} d\sigma_i \prod_{a=1}^{N} \Gamma_{h}(m_a+m_A+\sigma_i) \Gamma_{h}(\tilde m_a+m_A -\sigma_i) \!\!\!\!\! \prod_{1\leq i<j \leq N} \!\!\!\!\! \Gamma_{h}^{-1} (\pm(\sigma_i-\sigma_j)) \nonumber \\ \end{eqnarray} This formula corresponds to the partition function of the $U(N)$ gauge theory with $N$ flavors. In the magnetic case we first rescale $m_B$ as $m_B/N$ and then assign the real masses. In the large $m$ limit we have \begin{equation} \mathcal{Z}_m = \Gamma_h(2 \omega-2 N m_A ) \prod_{a,b}^{N} \Gamma_{h} (m_a+\tilde m_b+2 m_A) \int \frac{d m_B}{N} e^{2 \pi i m_B \Lambda} \Gamma_h(\pm m_B+N m_A) \end{equation} The last step consists of using the duality between SQED with one flavor and the $\mathcal{XYZ}$ model.
On the partition function this duality is encoded in the relation \cite{Benini:2011mf} \begin{equation}\int d m_B e^{2 \pi i m_B \Lambda} \Gamma_h(\pm m_B+N m_A) = \Gamma_h(2 N m_A) \Gamma_h\Big(\pm\frac{\Lambda}{2}- N m_A +\omega \Big) \end{equation} The product $\Gamma_h(2 N m_A) \Gamma_h(2\omega-2 N m_A)$ is compatible with a superpotential mass term and it can be simplified by the relation $\Gamma_h(x) \Gamma_h(2 \omega-x)=1$. We obtain \begin{equation} \label{Zmagnfin} \mathcal{Z}_m = \prod_{a,b}^{N} \Gamma_{h} (m_a+\tilde m_b+2 m_A) \Gamma_h\bigg(\pm\frac{\Lambda}{2}- N m_A +\omega \bigg ) \end{equation} which is equivalent to (\ref{Zelefin}) and describes the partition function of the dual theory with superpotential (\ref{Wfin}). Observe the role of $\Lambda$: it is an FI term in the electric theory, added when gauging the baryonic symmetry, and it becomes a real mass parameter in the magnetic theory. This is expected because the FI parameter corresponds to the real mass parameter of the $U(1)_J$ symmetry, the topological symmetry that shifts the dual photon and that arises only in the $U(N)$ case. In the dual theory the topological symmetry does not disappear, even if the dual gauge theory is trivial, because there are gauge singlets, the electric monopole operators, carrying a non trivial charge under $U(1)_J$. \section{The symplectic case} \label{sec:redSP} Another exotic instanton contribution has been obtained for models with symplectic gauge groups. In this case a stringy instanton contributes to the effective Lagrangian of $SP(0)$ theories (as in the $SU(1)$ case, here we use a similar abuse of terminology) and corresponds to an $O(1)$ instanton. \subsection*{The $O(1)$ instanton in 4d} Here we consider a 4d $\mathcal{N}=1$ $SP(2N)$ gauge theory \footnote{We use the convention $SP(2)\simeq SU(2)$.} with $2(N+2)$ flavors. The theory confines in the IR \cite{Intriligator:1995ne}.
In the symplectic case there are no baryons and the low energy description consists of a mesonic operator $M= Q Q$ with superpotential \begin{equation} \label{Pf} W = \text{Pf} \, M \end{equation} This theory can be thought of as the limiting case of Seiberg duality for an $SP(2N)$ gauge group with $2F$ flavors, where the dual gauge group is $SP(2(F-N-2))$. For this reason we denote the confined theory as an $SP(0)$ gauge theory. It has been shown that in quiver gauge theories, if there are confining symplectic groups, the effective superpotential (\ref{Pf}) is obtained by wrapping a D instanton on the singularity associated to that node \cite{GarciaEtxebarria:2007zv,Argurio:2007vqa}. For example, consider a IIA description of an elliptic quiver, with a circular D4 brane intersecting NS and NS' branes. By adding fractional branes the number of D4 branes between two consecutive and non parallel NS branes is reduced by an HW transition. We can also add orientifolds to this geometry, O4 or O6 planes. The O4 plane can be placed on the circular D4 branes and it switches its charge each time it crosses an NS brane. In this case we have a quiver with alternating SO/SP groups. On the other hand, if we consider the action of O6 planes there can be both real and unitary gauge groups. Consider a node of an elliptic quiver associated to a segment with $2N_i$ D4 branes, stretched between a pair of NS branes. When an O4$^+$ or an O6$^-$ plane \footnote{The sign represents the action of the orientifold on the NS sector. The charge is associated to the projection of an $SU(2N_i)$ gauge theory.} acts on this node the gauge group is projected to $SP(2N_i)$. If $N_i=0$ the group is $SP(0)$. Nevertheless, if we consider a D(-1) brane wrapping this node we have an $O(1)$ instanton. The orientifold projects out the extra fermionic zero modes in the ADHM construction and the instanton contributes to the superpotential.
There are only two Ganor strings \cite{Ganor:1996pe} connecting this node to the other(s) and the instanton contributes to the superpotential with a term of the form (\ref{Pf}), where the meson is built in terms of the other bi-fundamentals of the quiver. This construction can also be used for non-quiver theories, for example for the $SP(2N)$ SQCD with $2(N+2)$ flavors discussed above. There are two possible brane configurations. In one case we place an O6$^-$ plane orthogonal to the stack of $N+2$ D4 branes. The orientifold is extended along the directions $(0,1,2,3,4,5)$. The NS and the D6 branes in the setup are in this case rotated along $(4,5)$ and $(8,9)$ to preserve $\mathcal{N}=1$ supersymmetry in four dimensions. In the second case we consider the setup of section \ref{sec:4ds} and add an O4$^{+}$ plane on a stack of $2(N+2)$ D4 branes. It becomes an O4$^-$ plane when the NS brane is crossed. The two cases are summarized in Figure \ref{O46}. \begin{figure} \begin{center} \includegraphics[width=10cm]{Sinstanton.pdf} \caption{In this figure we reproduce the IIA brane setup describing $SP(2N)$ SQCD with $2(N+2)$ flavors. We represent both the realizations in terms of the O6 and O4 plane.} \label{O46} \end{center} \end{figure} We can add the D instanton in both cases and perform an HW transition. After the transition the number of D4 branes extended between the NS and the NS' brane vanishes because of charge conservation \footnote{This charge is the linking number defined in \cite{Hanany:1996ie}. In this case the cancellation occurs because the orientifold modifies this charge.}. Even in the absence of D4 branes the D instanton contributes to the superpotential. The exotic contribution is \begin{equation} \label{WSP} W = \int d \alpha \, e^{\alpha M \alpha^T} = \text{ Pf } M \end{equation} and coincides with (\ref{Pf}). \subsection*{Reduction to 3d} Also in this case we can dimensionally reduce the theories to 3d.
When the $SP(2N)$ theory is reduced on the circle there is an extra superpotential term, $\eta Y$, as explained in \cite{Aharony:2013dha}. Moreover one can flow to a pure 3d theory by assigning opposite large real masses to two fundamentals. In the confined case we use again the prescription of \cite{Csaki:2014cwa}: when considering the theory on the circle we keep the same field content and interactions of the 4d case. This theory can be further reduced to a pure 3d theory. This is done by assigning the real masses to the meson consistently with the masses of the fundamental flavors. In this case there are two massless components in the low energy spectrum: one of them corresponds to the reduced meson $M_{red.}$ of the theory with $2(N+1)$ flavors, while the second massless field is the component $M_{2N+3}^{2N+4}$. The superpotential of the 3d theory is $W=M_{2N+3}^{2N+4} \, \text{Pf} \, M_{red.}$. The field $M_{2N+3}^{2N+4}$ has the same quantum numbers as the electric monopole $Y$ parametrizing the Coulomb branch of the electric theory with $2(N+1)$ flavors in the pure 3d case. With this identification the superpotential of the dual theory is \begin{equation} \label{Wsympl} W = Y \text{ Pf } M_{red.} \end{equation} \subsection*{Brane interpretation} We can interpret the reduction of the 4d confining $SP(2N)$ theory with $2(N+2)$ flavors and of its confined phase in terms of D branes. We consider the system discussed above. Here we focus on the case with the O4 planes; a similar discussion applies to the case with the O6 plane. We compactify the direction $x_3$ and T-dualize along it. After T-duality we have a stack of $2N$ D3 branes stretched between an NS and an NS' brane. The D6 branes become D5 branes and the D0 instanton becomes a D1 brane, extended along $x_3$. When this brane wraps the compact direction $x_3$ it encounters two orientifold planes, because after T-duality the O4$^+$ splits into the pair (O3$^+$,O3$^-$) \cite{Hanany:2000fq}.
The first plane is fixed at $x_3=0$ and the second one is at the \emph{mirror} point $\pi \alpha'/r$ \cite{Amariti:2015mva}. Here we consider a situation with $2(N+2)$ D5 branes such that the dual theory, obtained by HW transition, becomes an $SP(0)$ theory, i.e. there are no D3 branes left between the NS branes. We can study the contribution of the D1 branes to the theory or of their magnetic duals, corresponding to the F1 strings. Their contribution to the effective action descends from the contribution of the stringy instanton, corresponding to the superpotential (\ref{WSP}). The flow to the pure 3d limit follows from integrating out two fundamentals with large real mass. On the brane side this real mass is obtained by moving two D5 branes to the mirror point on the T-dual circle. The electric theory at $x_3=0$ is an $SP(2N)$ theory with $2(N+1)$ fundamentals. At the mirror point the D5s have no effect on the HW transition because their charge cancels against the orientifold charge. In this case there is an $\eta Y$ superpotential but no extra sectors. In the dual picture moving the two D5s to the mirror point has the same effect discussed in \cite{Amariti:2015mva}: it amounts to a scale matching in the mesonic superpotential. The extra D5 branes at the mirror point break the meson into two massless components. This breaking induces the superpotential (\ref{Wsympl}). This interaction involves the massless fields and it has to be considered also in the large mass limit. \subsection*{The partition function} Here we study the effects of the reduction of the confining $SP(2N)$ theory on the superconformal index. The reduction of the index to the partition functions for symplectic theories appeared in \cite{Aharony:2013kma,Gahramanov:gka}.
In the case with $2(N+2)$ fundamentals the relation between the squashed three sphere partition functions of the two phases considered on the circle is \begin{equation} \int \prod_{i=1}^{N} d \sigma_i \left(\prod_{a=1}^{2(N+2)} \Gamma_h(\pm \sigma_i+\mu_a) \! \right) \! \Gamma_h^{-1}(\pm 2 \sigma_i) \!\!\! \prod_{1\leq i<j \leq N} \!\!\!\Gamma_h^{-1}(\pm \sigma_i \pm \sigma_j) = \!\!\!\!\!\!\! \prod_{1\leq a<b \leq 2(N+2)} \!\!\!\! \Gamma_h (\mu_a + \mu_b) \end{equation} where we have to enforce the constraint $\sum \mu_a = 2 \omega$. This constraint corresponds to the presence of the superpotential $\eta Y$ in the electric theory and to the superpotential (\ref{WSP}) in the dual phase. The flow that reduces this duality to a pure 3d one is obtained by assigning the real masses as \begin{equation} \mu_a = \left\{ \begin{array}{lr} ~~m_a + m_A + \omega \Delta_1 & a =1,\dots, 2N+2 \\ ~~m ~- (N+1) m_A + \omega (1-(N+1) \Delta_1) & a = 2N+3 \\ -m ~- (N+1) m_A + \omega (1-(N+1) \Delta_1) & a = 2N+4 \\ \end{array} \right. \end{equation} and computing the large $m$ limit. The expected partition function for the $SP(2N)$ theory with $2(N+1)$ fundamental flavors and without the superpotential $\eta Y$ is obtained in the large $m$ limit. This is computed by using formula (\ref{kargeZmass}). In the magnetic theory we obtain the contribution of the reduced meson $M_{red.}$ and, in addition, we have an extra term of the form $$ \Gamma_h ( 2 \omega(1-(N+1)\Delta_1) - 2 (N+1) m_A) = \Gamma_h(2 \omega - \sum \mu_a) $$ where $\sum \mu_a = 2 (N+1) m_A$. This corresponds to the contribution of the electric monopole $Y$ acting as a singlet in the dual phase. \section{Conclusions} \label{sec:conc} In this paper we studied the reduction to 3d of a class of 4d s-confining theories, in the field theory and in the string theory regime, and we obtained 3d dualities.
In the string theory regime the structure of the 4d interaction of the confined phase is determined by an exotic D instanton configuration: this contribution corresponds to the effective T-dual contribution of a D1 brane, when the compactification circle is kept at finite size. We also checked the validity of the dualities by studying the reduction of the 4d superconformal index to the 3d partition function on the squashed three sphere. In this paper we did not discuss the reduction of orthogonal theories. It would be interesting to perform the analysis in the $SO(N)$ case for both even and odd $N$. Another interesting aspect concerns the reduction of the $\mathcal{N}=2$ stringy instanton studied in \cite{Ghorbani:2010ks,Ghorbani:2011xh,Argurio:2012iw,Ghorbani:2013xga}. \section*{Acknowledgments} We are grateful to Claudius Klare and Alberto Mariotti for comments on the draft. A.~A.~is funded by the European Research Council (\textsc{erc}-2012-\textsc{adg}\_20120216) and acknowledges support by \textsc{anr} grant 13-\textsc{bs}05-0001. A.~A.~would like to thank \textsc{ccny}, \textsc{ucsd}, Milano-Bicocca and Bern University for hospitality during various stages of this work. \bibliographystyle{JHEP}
\section{Introduction} The decay behavior of the entries of functions of banded and sparse matrices has attracted considerable interest over the years. It has been known for some time that if $A$ is a banded Hermitian matrix and $f$ is a smooth function with no singularities in a neighborhood of the spectrum of $A$, then the entries in $f(A)$ usually exhibit rapid decay in magnitude away from the main diagonal. The decay rates are typically exponential, with even faster decay in the case of entire functions. The interest in the decay behavior of matrix functions stems largely from its importance for a number of applications including numerical analysis \cite{Benzi.Golub.99,CanutoSimonciniVeraniJOMP.14,DelLopPel05,Demko,EP88,Meurant,ye13}, harmonic analysis \cite{Bask1,Gro10,Jaffard}, quantum chemistry \cite{BBR13,BM12,Lin14,Shao}, signal processing \cite{KSW,Strohmer}, quantum information theory \cite{CE06,CEPD06,ECP10}, multivariate statistics \cite{Aune}, queueing models \cite{Bini05}, control of large-scale dynamical systems \cite{Haber14}, quantum dynamics \cite{Giscard14}, random matrix theory \cite{Molinari}, and others. The first case to be analyzed in detail was that of $f(A) = A^{-1}$, see \cite{Demko,DMS,EP88,Kershaw}. In these papers one can find exponential decay bounds for the entries of the inverse of banded matrices. A related, but quite distinct line of research concerned the study of inverse-closed matrix algebras, where the decay behavior in the entries of a (usually infinite) matrix $A$ is ``inherited'' by the entries of $A^{-1}$. Here we mention \cite{Jaffard}, where it was observed that a similar decay behavior occurs for the entries of $f(A) = A^{-1/2}$, as well as \cite{Bask1,Bask2,Gro10,GroLei06,KSW}, among others.
The study of the decay behavior for general analytic functions of banded matrices, including the important case of the matrix exponential, was initiated in \cite{Benzi.Golub.99,iserles} and continued for possibly non-normal matrices and general sparsity patterns in \cite{Benzi2007}; further contributions in these directions include \cite{BB14,DelLopPel05,mastronardi,Shao}. Collectively, these papers have largely elucidated the question of when one can expect exponential decay in the entries of $f(A)$, in terms of conditions that the function $f$ and the matrix $A$ must satisfy. Some of these papers also address the important problem of when the rate of decay is asymptotically independent of the dimension $n$ of the problem, a condition that allows, at least in principle, for the approximation of $f(A)$ with a computational cost scaling linearly in $n$ (see, e.g., \cite{BBR13,Benzi2007,BM12}). A limitation of these papers is that they provide decay bounds for the entries of $f(A)$ that are often pessimistic and may not capture the correct, non-monotonic decay behavior actually observed in many situations of practical interest. A first step to address this issue was taken in \cite{CanutoSimonciniVeraniLAA.14}, where new bounds for the inverses of matrices that are Kronecker sums of banded matrices (a kind of structure of considerable importance in the numerical solution of PDE problems) were obtained; see also \cite{Meurant} for an early such analysis for a special class of matrices, and \cite{mastronardi} for functions of multiband matrices. In this paper we build on the work in \cite{CanutoSimonciniVeraniLAA.14} to investigate the decay behavior in (Hermitian) matrix functions where the matrix is a Kronecker sum of banded matrices. We also present new bounds for functions of banded (more generally, sparse) Hermitian matrices. 
For certain broad classes of analytic functions that frequently arise in applications (including as special cases the resolvent, the inverse square root, and the exponential) we obtain improved decay bounds that capture much more closely the actual decay behavior of the matrix entries than previously published bounds. A significant difference with previous work in this area is that our bounds are expressed in integral form, and in order to apply the bounds to specific matrix functions it may be necessary to evaluate these integrals numerically. The paper is organized as follows. In section~\ref{sec:pre} we provide basic definitions and material from linear algebra and analysis utilized in the rest of the paper. In section~\ref{sec:prev} we briefly recall earlier work on decay bounds for matrix functions. New decay results for functions of banded matrices are given in section~\ref{sec:banded}. Generalizations to more general sparse matrices are briefly discussed in section~\ref{sec:ext}. Functions of matrices with Kronecker sum structure are treated in section~\ref{sec:Kron}. Concluding remarks are given in section~\ref{sec:Conc}. \section{Preliminaries}\label{sec:pre} In this section we give some basic definitions and background information on the types of matrices and functions considered in the paper. \subsection{Banded matrices and Kronecker sums} We begin by recalling two standard definitions. \begin{definition} We say that a matrix $M\in \CC^{n\times n}$ is $\beta$-banded if its entries $M_{ij}$ satisfy $M_{ij} = 0$ for $|i-j|>\beta$. \end{definition} \begin{definition} Let $M_1, M_2\in \CC^{n\times n}$. We say that a matrix ${\cal A}\in \CC^{n^2\times n^2}$ is the {\em Kronecker sum} of $M_1$ and $M_2$ if \begin{eqnarray}\label{eqn:kron} {\cal A} = M_1\oplus M_2 := M_1\otimes I + I\otimes M_2\,, \end{eqnarray} where $I$ denotes the $n\times n$ identity matrix.
\end{definition} In this paper we will be especially concerned with the case $M_1=M_2=M$, where $M$ is $\beta$-banded and Hermitian positive definite (HPD). In this case $\cal A$ is also HPD. The definition of Kronecker sum can easily be extended to three or more matrices. For instance, we can define $$ {\cal A} = M_1\oplus M_2 \oplus M_3 := M_1\otimes I\otimes I + I\otimes M_2\otimes I + I\otimes I\otimes M_3 . $$ The Kronecker sum of two matrices is well-behaved under matrix exponentiation. Indeed, the following relation holds (see, e.g., \cite[Theorem 10.9]{Higham2008}): \begin{eqnarray}\label{eqn:exp_kron} \exp(M_1\oplus M_2) = \exp(M_1)\otimes \exp(M_2) . \end{eqnarray} Similarly, the following matrix trigonometric identities hold for the matrix sine and cosine \cite[Theorem 12.2]{Higham2008}: \begin{eqnarray}\label{eqn:sin_kron} \sin(M_1\oplus M_2) = \sin(M_1)\otimes \cos(M_2) + \cos(M_1)\otimes \sin(M_2) \end{eqnarray} and \begin{eqnarray}\label{eqn:cos_kron} \cos(M_1\oplus M_2) = \cos(M_1)\otimes \cos(M_2) - \sin(M_1)\otimes \sin(M_2). \end{eqnarray} As we will see, identity (\ref{eqn:exp_kron}) will be useful in extending decay results for functions of banded matrices to functions of matrices with Kronecker sum structure. \subsection{Classes of functions defined by integral transforms}\label{sec:classes} We will be concerned with analytic functions of matrices. It is well known that if $f$ is a function analytic in a domain $\Omega \subseteq \CC$ containing the spectrum of a matrix $A\in \CC^{n\times n}$, then \begin{eqnarray}\label{eqn:contour} f(A) = \frac {1}{2\pi i}\int_{\Gamma} f(z)(zI - A)^{-1} {\rm d}z\,, \end{eqnarray} where $i= \sqrt{-1}$ is the imaginary unit and $\Gamma$ is any simple closed curve surrounding the eigenvalues of $A$ and entirely contained in $\Omega$, oriented counterclockwise. 
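Returning to the Kronecker-sum identities above, (\ref{eqn:exp_kron}) is easy to verify numerically. The following sketch (illustrative only, not part of the paper's development) uses NumPy/SciPy with small random real symmetric matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
# Two small random real symmetric matrices M1, M2.
M1 = rng.standard_normal((n, n)); M1 = (M1 + M1.T) / 2
M2 = rng.standard_normal((n, n)); M2 = (M2 + M2.T) / 2
I = np.eye(n)

# Kronecker sum A = M1 (+) M2 = M1 x I + I x M2, as in (eqn:kron).
A = np.kron(M1, I) + np.kron(I, M2)

# Identity (eqn:exp_kron): exp(M1 (+) M2) = exp(M1) x exp(M2).
lhs = expm(A)
rhs = np.kron(expm(M1), expm(M2))
print(np.allclose(lhs, rhs))  # True
```

The identity holds because the two Kronecker terms $M_1\otimes I$ and $I\otimes M_2$ commute, so the exponential of their sum factors.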
Our main results concern certain analytic functions that can be represented as integral transforms of measures, in particular, {\em strictly completely monotonic functions} (associated with the Laplace--Stieltjes transform) and {\em Markov functions} (associated with the Cauchy--Stieltjes transform). Here we briefly review some basic properties of these functions and the relationship between the two classes. We begin with the following definition (see \cite{Widder.46}). \vskip 0.01in \begin{definition} Let $f$ be defined in the interval $(a,b)$ where $-\infty \le a < b \le +\infty$. Then, $f$ is said to be {\em completely monotonic} in $(a,b)$ if $$(-1)^{k}f^{(k)} (x) \ge 0 \quad {\rm for\ all} \quad a < x < b \quad {\rm and\ all} \quad k=0,1,2,\ldots $$ Moreover, $f$ is said to be {\em strictly completely monotonic} in $(a,b)$ if $$(-1)^{k}f^{(k)} (x) > 0 \quad {\rm for\ all} \quad a < x < b \quad {\rm and\ all} \quad k=0,1,2,\ldots $$ \end{definition} Here $f^{(k)}$ denotes the $k$th derivative of $f$, with $f^{(0)}\equiv f$. It is shown in \cite{Widder.46} that if $f$ is completely monotonic in $(a,b)$, it can be extended to an analytic function in the open disk $|z - b| < b - a$ when $b$ is finite. When $b=+\infty$, $f$ is analytic in $\Re(z) > a$. Therefore, for each $y\in (a,b)$ we have that $f$ is analytic in the open disk $|z - y| < R(y)$, where $R(y)$ denotes the radius of convergence of the power series expansion of $f$ about the point $z=y$. Clearly, $R(y) \ge y-a$ for $y\in (a,b)$. In \cite{Bernstein.29} Bernstein proved that a function $f$ is completely monotonic in $(0,\infty)$ if and only if $f$ is the Laplace--Stieltjes transform of $\alpha (\tau)$; \begin{equation}\label{bern} f(x) = \int_0^\infty {\rm e}^{-\tau x} {\rm d}\alpha(\tau), \end{equation} where $\alpha (\tau)$ is nondecreasing and the integral in (\ref{bern}) converges for all $x>0$. 
Moreover, under the same assumptions $f$ can be extended to an analytic function on the positive half-plane $\Re(z) > 0$. A refinement of this result (see \cite{Dub40}) states that $f$ is strictly completely monotonic in $(0,\infty)$ if it is completely monotonic there and moreover the function $\alpha (\tau)$ has at least one positive point of increase, that is, there exists a $\tau_0 > 0$ such that $\alpha(\tau_0+\delta) > \alpha(\tau_0)$ for any $\delta >0$. Prominent examples of strictly completely monotonic functions include (see \cite{Varga.68}): \begin{enumerate} \item $f_1(x) = 1/x = \int_0^\infty {\rm e}^{-x\tau} d\alpha_1(\tau)$ for $x>0$, where $\alpha_1(\tau) = \tau$ for $\tau\ge 0$. \item $f_2(x) = {\rm e}^{-x} = \int_0^\infty {\rm e}^{-x\tau} d\alpha_2(\tau)$ for $x>0$, where $\alpha_2(\tau) = 0$ for $0\le \tau < 1$ and $\alpha_2(\tau) = 1$ for $\tau\ge 1$. \item $f_3(x) = (1 - {\rm e}^{-x})/x = \int_0^\infty {\rm e}^{-x\tau} d\alpha_3(\tau)$ for $x>0$, where $\alpha_3 (\tau) = \tau$ for $0\le \tau \le 1$, and $\alpha_3(\tau) = 1$ for $\tau\ge 1$. \end{enumerate} \vspace{0.1in} Other examples include the functions $x^{-\sigma}$ (for any $\sigma > 0$), $\log(1+1/x)$ and $\exp(1/x)$, all strictly completely monotonic on $(0,\infty)$. Also, products and positive linear combinations of strictly completely monotonic functions are strictly completely monotonic, as one can readily check. A closely related class of functions is given by the Cauchy--Stieltjes (or Markov-type) functions, which can be written as \begin{eqnarray}\label{eqn:markov} f(z) = \int_\Gamma \frac {{\rm d}\gamma(\omega)} {z-\omega}, \quad z\in \CC \setminus \Gamma\,, \end{eqnarray} where $\gamma$ is a (complex) measure supported on a closed set $\Gamma \subset \CC$ and the integral is absolutely convergent. 
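The Bernstein representations listed above lend themselves to a quick numerical check; for instance, for $f_3$ one has ${\rm d}\alpha_3(\tau)={\rm d}\tau$ on $[0,1]$ and zero beyond. A minimal sketch (in Python, illustrative only):

```python
import numpy as np
from scipy.integrate import quad

# f_3(x) = (1 - exp(-x))/x with alpha_3(tau) = tau on [0,1] and
# alpha_3(tau) = 1 for tau >= 1, so d(alpha_3) = d(tau) restricted to [0,1].
def f3(x):
    return (1.0 - np.exp(-x)) / x

for x in [0.5, 1.0, 3.0, 10.0]:
    val, _ = quad(lambda tau: np.exp(-x * tau), 0.0, 1.0)
    assert abs(val - f3(x)) < 1e-10
print("Laplace-Stieltjes representation of f_3 verified")
```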
In this paper we are especially interested in the special case $\Gamma = (-\infty, 0]$ so that $$f(x) = \int_{-\infty}^0 \frac {{\rm d}\gamma(\omega)} {x - \omega}, \quad x\in \CC \setminus (-\infty, 0]\,, $$ where $\gamma$ is now a (possibly signed) real measure. The following functions, which occur in various applications (see, e.g., \cite{Guettel.13} and references therein), fall into this class: \begin{eqnarray*} && z^{-\frac 1 2} = \int_{-\infty}^0 \frac 1 {z-\omega} \frac 1 {\pi \sqrt{-\omega}} {\rm d}\omega, \\ && \frac{{\rm e}^{-t\sqrt{z}}-1}{z} = \int_{-\infty}^0 \frac 1 {z-\omega} \frac {\sin(t\sqrt{-\omega})}{-\pi \omega} {\rm d}\omega, \\ && \frac{\log(1+z)}{z} = \int_{-\infty}^{-1} \frac 1 {z-\omega} \frac 1 {(-\omega)} {\rm d}\omega. \end{eqnarray*} The two classes of functions just introduced overlap. Indeed, it is easy to see (e.g., \cite{Merkle}) that functions of the form $$f(x) = \int_0^{\infty} \frac {{\rm d}\mu(\omega)}{ x + \omega },$$ with $\mu$ a positive measure, are strictly completely monotonic on $(0,\infty)$; but every such function can also be written in the form $$f(x) = \int_{-\infty}^0 \frac {{\rm d}\gamma(\omega)}{ x - \omega}, \quad \gamma(\omega) = - \mu(-\omega),$$ and therefore it is a Cauchy--Stieltjes function. We note that $f(x) = \exp(-x)$ is an example of a function that is strictly completely monotonic but not a Cauchy--Stieltjes function. In the rest of the paper, the term {\em Laplace--Stieltjes function} will be used to denote a function that is strictly completely monotonic on $(0,\infty)$. \section{Previous work} \label{sec:prev} In this section we briefly review some previous decay results from the literature.
Given an $n\times n$ Hermitian positive definite $\beta$-banded matrix $M$, it was shown in \cite{DMS} that \begin{eqnarray}\label{eqn:demko} |(M^{-1})_{ij} | \le C q^{\frac{|i-j|}{\beta}} \end{eqnarray} for all $i,j=1,\ldots ,n$, where $q =(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$, $\kappa$ is the spectral condition number of $M$, $C = \max\{1/\lambda_{\rm min}(M), \hat C\}$, and $\hat C= (1+\sqrt{\kappa})^2/(2\lambda_{\max}(M))$. In this bound the diagonal elements of $M$ are assumed not to be greater than one, which can always be satisfied by dividing $M$ by its largest diagonal entry, after which the bound (\ref{eqn:demko}) will have to be multiplied by its reciprocal. The bound is known to be sharp, in the sense that it is attained for a certain tridiagonal Toeplitz matrix. We mention that (\ref{eqn:demko}) is also valid for infinite and bi-infinite matrices as long as they have finite condition number, i.e., both $M$ and $M^{-1}$ are bounded. Using the identity $M^{-1} = (M^*M)^{-1}M^*$, simple decay bounds were also obtained in \cite{DMS} for non-Hermitian matrices. Similarly, if $M$ is $\beta$-banded and Hermitian and $f$ is analytic on a region of the complex plane containing the spectrum $\sigma (M)$ of $M$, then there exist positive constants $C$ and $q<1$ such that \begin{eqnarray}\label{eqn:BG} |(f(M))_{ij} | \le C q^{\frac{|i-j|}{\beta}}, \end{eqnarray} where $C$ and $q$ can be expressed in terms of the parameter of a certain ellipse surrounding $\sigma (M)$ and of the maximum modulus of $f$ on this ellipse; see \cite{Benzi.Golub.99}. The bound (\ref{eqn:BG}), in general, is not sharp; in fact, since there are infinitely many ellipses containing $\sigma(M)$ in their interior and such that $f$ is analytic inside the ellipse and continuous on it, one should think of (\ref{eqn:BG}) as a parametric family of bounds rather than a single bound.
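Before continuing, we note that the bound (\ref{eqn:demko}) is easy to check numerically; a small sketch (illustrative, not from the paper), using the tridiagonal test matrix ${\rm tridiag}(-1,4,-1)$ rescaled so its diagonal entries equal one, as the theorem requires:

```python
import numpy as np

# The bound (eqn:demko) for a tridiagonal SPD matrix, rescaled so that its
# diagonal entries equal one, as required by the theorem.
n, beta = 60, 1
M = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / 4.0

ev = np.linalg.eigvalsh(M)
lmin, lmax = ev[0], ev[-1]
kappa = lmax / lmin
q = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
C = max(1.0 / lmin, (1 + np.sqrt(kappa)) ** 2 / (2 * lmax))

Minv = np.linalg.inv(M)
i, j = np.indices((n, n))
bound = C * q ** (np.abs(i - j) / beta)
# small slack absorbs rounding noise in the computed inverse at
# distances where the true entries underflow the bound by many orders
print(np.all(np.abs(Minv) <= bound + 1e-12))
```

For this Toeplitz example the decay rate $q$ essentially matches the true asymptotic rate $2-\sqrt{3}$ of the inverse, which illustrates the sharpness claim above.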
By tuning the parameter of the ellipse one can obtain different bounds, usually involving a trade-off between the values of $C$ and $q$. This result was extended in \cite{Benzi2007} to the case where $M$ is a sparse matrix with a general sparsity pattern, using the graph distance instead of the distance from the main diagonal; see also \cite{CE06,Jaffard} and section \ref{sec:ext} below. Similar bounds for analytic functions of non-Hermitian matrices can be found in \cite{BB14,Benzi2007}. Practically all of the above results consist of exponential decay bounds on the magnitude of the entries of $f(M)$. However, for entire functions the actual decay is typically superexponential, rather than exponential. Such bounds have been obtained by Iserles for the exponential of a tridiagonal matrix in \cite{iserles}. This paper also presents superexponential decay bounds for the exponential of banded matrices, but the bounds only apply at sufficiently large distances from the main diagonal. None of these bounds require $M$ to be Hermitian. Superexponential decay bounds for the exponential of certain infinite tridiagonal skew-Hermitian matrices arising in quantum mechanical computations have been recently obtained in \cite{Shao}. \section{Decay estimates for functions of a banded matrix} \label{sec:banded} In this section we present new decay bounds for functions of matrices $f(M)$, where $M$ is banded, Hermitian, and positive definite. First, we make use of an important result from \cite{HocLub97} to obtain decay bounds for the entries of the exponential of a banded, Hermitian, positive semidefinite matrix $M$. This result will then be used to obtain bounds or estimates on the entries of $f(M)$, where $f$ is strictly completely monotonic.
In a similar manner, we will obtain bounds or estimates on the entries of $f(M)$ where $f$ is a Markov function by making use of the classical bounds of Demko et al.~\cite{DMS} for the entries of the inverses of banded positive definite matrices. In section \ref{sec:Kron} we will use these results to obtain bounds for matrix functions $f({\cal A})$, where $\cal A$ is a Kronecker sum of banded matrices and $f$ belongs to one of the two above-mentioned classes of functions. \subsection{The exponential of a banded Hermitian matrix} We first recall (with a slightly different notation) an important result due to Hochbruck and Lubich \cite{HocLub97}. Here the $m$ columns of $V_m\in \CC^{n\times m}$ form an orthonormal basis for the Krylov subspace $K_m(M,v)={\rm span}\{v, Mv, \ldots, M^{m-1}v\}$ with $\|v\|=1$, and $H_m = V_m^* M V_m$. \begin{theorem}\label{th:HL} Let $M$ be a Hermitian positive semidefinite matrix with eigenvalues in the interval $[0,4\rho]$. Then the error in the Arnoldi approximation of $\exp(-\tau M) v$ with $\|v\|=1$, namely $\varepsilon_m:= \|\exp(-\tau M) v - V_m \exp(-\tau H_m) e_1 \|$, is bounded in the following ways: \begin{enumerate} \item[i)] $\varepsilon_m \le 10 \exp(-m^2/(5\rho\tau))$, for $\rho\tau\ge 1$ and $\sqrt{4\rho\tau}\le m \le 2\rho\tau$; \item[ii)] $\varepsilon_m \le 10 (\rho\tau)^{-1} \exp(-\rho\tau) \left ( \frac{{\rm e}\rho\tau}{m}\right)^m$ for $m\ge 2\rho\tau$. \end{enumerate} \end{theorem} \vskip 0.01in With this result we can establish bounds for the entries of the exponential of a banded Hermitian matrix. \vskip 0.01in \begin{theorem}\label{th:boundexp} Let $M$ be as in Theorem \ref{th:HL}. Assume in addition that $M$ is $\beta$-banded.
Then, with the notation of Theorem \ref{th:HL} and for $k\ne t$: \begin{enumerate} \item[i)] For $\rho\tau\ge 1$ and $\sqrt{4\rho\tau}\le |k-t|/\beta\le 2\rho\tau$, $$ | (\exp(-\tau M) )_{kt}| \le 10 \exp\left(-\frac{(|k-t|/\beta)^2}{5 \rho\tau}\right) ; $$ \item[ii)] For $|k-t|/\beta \ge 2\rho\tau$, $$ | (\exp(-\tau M) )_{kt}| \le 10 \frac{\exp(-\rho\tau)}{\rho\tau} \left ( \frac{{\rm e}\rho\tau}{\frac{|k-t|}{\beta}}\right)^{\frac{|k-t|}{\beta}} . $$ \end{enumerate} \end{theorem} \begin{proof} We first note that an element of the Krylov subspace $K_m(M,v)$ is a polynomial in $M$ times a vector, so that $V_m \exp(-\tau H_m) e_1 = p_{m-1}(\tau M) v$ for some polynomial $p_{m-1}$ of degree at most $m-1$. Because $M$ is Hermitian and $\beta$-banded, the matrix $p_{m-1}(\tau M)$ is at most $(m-1)\beta$-banded. Let now $k,t$ with $k\ne t$ be fixed, and write $|k-t| = (m-1)\beta + s$ for some $m\ge 1$ and $s=1, \ldots, \beta$; in particular, we see that $(p_{m-1}(\tau M))_{kt}=0$; moreover, $|k-t|/\beta \le m$. Consider first case ii). If $m \ge 2\rho\tau$, for $v=e_t$ we obtain \begin{eqnarray*} | (\exp(-\tau M) )_{kt}| & = & | (\exp(-\tau M) )_{kt} - (p_{m-1}(\tau M))_{kt}| \\ &= & | e_k^T ( \exp(-\tau M) e_t - p_{m-1}(\tau M) e_t)| \\ &\le & \|\exp(-\tau M) e_t - p_{m-1}(\tau M) e_t \| \\ &\le& 10 (\rho\tau)^{-1} \exp(-\rho\tau) \left ( \frac{{\rm e}\rho\tau\beta}{|k-t|}\right)^{\frac{|k-t|}{\beta}} , \end{eqnarray*} where in the last inequality Theorem~\ref{th:HL} was used for $m \ge |k-t|/\beta \ge 2\rho\tau$. An analogous result is obtained for $m$ in the finite interval, so as to verify i). \end{proof} As remarked in \cite{HocLub97}, the restriction to positive semidefinite $M$ leads to no loss of generality, since a shift from $M$ to $M+\delta I$ entails a change by a factor ${\rm e}^{\tau \delta}$ in the quantities of interest.
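The superexponential bound of case ii) in Theorem \ref{th:boundexp} can be checked directly on a small example; a sketch (illustrative only, assuming the tridiagonal test matrix used later in the paper, shifted so as to be positive semidefinite):

```python
import numpy as np
from scipy.linalg import expm

# Check case ii) of the theorem on a shifted tridiagonal matrix (beta = 1):
# the spectrum of M is translated into [0, 4*rho] as discussed above.
n, beta, tau = 100, 1, 1.0
M = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
ev = np.linalg.eigvalsh(M)
M = M - ev[0] * np.eye(n)            # positive semidefinite shift
rho = (ev[-1] - ev[0]) / 4.0

E = expm(-tau * M)
t = n // 2
ok = True
for k in range(n):
    d = abs(k - t) / beta
    if d >= 2 * rho * tau:           # region covered by case ii)
        bound = 10 * np.exp(-rho * tau) / (rho * tau) * (np.e * rho * tau / d) ** d
        # small slack absorbs floating-point noise in expm at large distances,
        # where the true entries underflow far below machine precision
        ok = ok and abs(E[k, t]) <= bound + 1e-12
print(ok)
```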
We also notice that in addition to Theorem \ref{th:HL} other asymptotic bounds exist for estimating the error in the exponential function with Krylov subspace approximation; see, e.g., \cite{DK1, DK2}. An advantage of Theorem \ref{th:HL} is that it provides explicit upper bounds, which can then be easily used for our purposes. \begin{example}\label{ex:exp} {\rm Figure \ref{fig:expM} shows the behavior of the bound in Theorem \ref{th:boundexp} for two typical matrices. The plot on the left refers to the tridiagonal matrix $M={\rm tridiag}(-1,4,-1)$ ($\beta=1$) of size $n=200$, with $\tau=4$, so that $\tau\rho \approx 3.9995$. The $t$th column with $t=127$ is reported, and only the values above $10^{-60}$ are shown. The plot on the right refers to the pentadiagonal matrix $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$ ($\beta=2$) of size $n=200$, with $\tau=4$, so that $\tau\rho \approx 4.4989$. The same column $t=127$ is shown. Note the superexponential decay behavior. In both cases, the estimate seems to be rather sharp. } \end{example} \begin{figure}[t] \centering \includegraphics[width=2.5in,height=2.5in]{tridiag_tau4_n200.eps} \includegraphics[width=2.5in,height=2.5in]{penta_tau4_n200.eps} \caption{Example \ref{ex:exp}. Bounds for $|\exp(-\tau(M-\lambda_{\min}I))|_{:,t}$, $t=127$, using Theorem \ref{th:boundexp}. $M$ of size $n=200$ and $\tau=4$. Left: $M={\rm tridiag}(-1,4,-1)$. Right: $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$. Logarithmic scale. \label{fig:expM}} \end{figure} \subsection{Bounds for Laplace--Stieltjes functions} By exploiting the connection between the exponential function and Laplace--Stieltjes functions, we can apply Theorem \ref{th:boundexp} to obtain bounds or estimates for the entries of Laplace--Stieltjes matrix functions. \begin{theorem}\label{th:LS} Let $M=M^*$ be $\beta$-banded and positive definite, and let $\widehat M = M-\lambda_{\min} I$, with the spectrum of $\widehat M$ contained in $[0,4\rho]$. 
Assume $f$ is a Laplace--Stieltjes function, so that it can be written in the form $f(x) = \int_0^\infty {\rm e}^{-x\tau} {\rm d}\alpha(\tau)$. Then, with the notation and assumptions of Theorem \ref{th:boundexp} and for $|k-t|/\beta\ge 2$: \begin{eqnarray} |f(M)|_{k,t} &\le& \int_0^\infty \exp(-\lambda_{\min}\tau) |(\exp(-\tau\widehat M))_{k,t}| {\rm d}\alpha(\tau) \nonumber \\ &\le & 10 \int_0^{\frac{|k-t|}{2\rho\beta}} \exp(-\lambda_{\min}\tau) \frac{\exp(-\rho\tau)}{\rho\tau} \left ( \frac{{\rm e}\rho\tau}{\frac{|k-t|}{\beta}}\right)^{\frac{|k-t|}{\beta}} {\rm d}\alpha(\tau) \label{eqn:LSbound} \\ && \quad +10 \int_{\frac{|k-t|}{2\rho\beta}}^{\frac{|k-t|^2}{4\rho\beta^2}} \exp(-\lambda_{\min}\tau) \exp\left(-\frac{(|k-t|/\beta)^2}{5 \rho\tau}\right) {\rm d}\alpha(\tau) \nonumber \\ && + \int_{\frac{|k-t|^2}{4\rho\beta^2}}^\infty \exp(-\lambda_{\min}\tau) |(\exp(-\tau\widehat M))_{k,t}| {\rm d}\alpha(\tau) = I + II + III. \nonumber \end{eqnarray} \end{theorem} \vskip 0.01in In general, these integrals may have to be evaluated numerically. We observe that in the above bound, the last term (III) does not significantly contribute provided that $|k - t|$ is sufficiently large while $\rho$ and $\beta$ are not too large. As an illustration, consider the function $f(x) = 1/\sqrt{x}$. For this function we have ${\rm d}\alpha(\tau) = \frac{\tau^{-1/2}}{\Gamma(\frac 1 2)}\,{\rm d}\tau = \frac{{\rm d}\tau}{\sqrt{\pi\tau}}$ with $\tau \in (0,\infty)$.
We have \begin{eqnarray*} I + II &= & \frac{10}{\sqrt{\pi}} \int_0^{\frac{|k-t|}{2\rho\beta}} \frac{\exp(-\lambda_{\min}\tau)}{\sqrt{\tau}} \frac{\exp(-\rho\tau)}{\rho\tau} \left ( \frac{{\rm e}\rho\tau}{\frac{|k-t|}{\beta}}\right)^{\frac{|k-t|}{\beta}} {\rm d}\tau \\ && + \frac{10}{\sqrt{\pi}} \int_{\frac{|k-t|}{2\rho\beta}}^{\frac{|k-t|^2}{4\rho\beta^2}} \frac{\exp(-\lambda_{\min}\tau)}{\sqrt{\tau}} \exp\left(-\frac{(|k-t|/\beta)^2}{5 \rho\tau}\right) {\rm d}\tau , \end{eqnarray*} while, using $|(\exp(-\tau\widehat M))_{k,t}|\le 1$, $$ III \le \frac{1}{\sqrt{\pi}} \int_{\frac{|k-t|^2}{4\rho\beta^2}}^\infty \frac{\exp(-\lambda_{\min}\tau) }{\sqrt{\tau}} {\rm d}\tau . $$ Figure \ref{fig:LS-1/2} shows two typical bounds for the entries of $M^{-\frac 1 2}$ for the same matrices $M$ considered in Example \ref{ex:exp}. The integrals $I$ and $II$ and the one appearing in the upper bound for $III$ have been evaluated accurately using the built-in Matlab function {\tt quad}. Note that the decay is now exponential. In both cases, the decay is satisfactorily captured. \begin{figure}[thb] \centering \includegraphics[width=2.5in,height=2.5in]{tridiag_n200_minusonehalf_new.eps} \includegraphics[width=2.5in,height=2.5in]{pentadiag_n200_minusonehalf_new.eps} \caption{Estimates for $|M^{-1/2}|_{:,t}$, $t=127$, using I+II and the upper bound for III. Size $n=200$, Log-scale. Left: $M={\rm tridiag}(-1,4,-1)$. Right: $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$. \label{fig:LS-1/2}} \end{figure} As yet another example, consider the entire function $f(x)=(1-\exp(-x))/x$, which is a Laplace--Stieltjes function with ${\rm d}\alpha(\tau)={\rm d}\tau$ for $\tau\in [0,1]$ and ${\rm d}\alpha(\tau)=0$ for $\tau>1$ (see section \ref{sec:classes}). Starting from (\ref{eqn:LSbound}) we can determine new terms $I, II$, and estimate $III$ as was done for the inverse square root. Due to the small interval size, the first term $I$ accounts for the whole bound for most choices of $k,t$.
For the same two matrices used above, the actual (superexponential) decay and its approximation are reported in Figure~\ref{fig:f3}. \begin{figure}[thb] \centering \includegraphics[width=2.5in,height=2.5in]{f3_exp_minus_x_over_x_tridiag_new.eps} \includegraphics[width=2.5in,height=2.5in]{f3_exp_minus_x_over_x_penta_new.eps} \caption{Estimates for $|M^{-1}(I-\exp(-M))|_{:,t}$, $t=127$, using I+II and the upper bound for III. size $n=200$, Log-scale. Left: $M={\rm tridiag}(-1,4,-1)$. Right: $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$. \label{fig:f3}} \end{figure} We remark that for the validity of Theorem \ref{th:LS}, we cannot relax the assumption that $M$ be positive definite. This makes sense, since we are considering functions $f$ defined on $(0,\infty)$. If $M$ is not positive definite but $f$ happens to be defined on a larger interval containing the spectrum of $M$, for instance on all of $\RR$, it may still be possible, in some cases, to obtain bounds for $f(M)$ from the corresponding bounds on $f(M + \delta I)$, where the shifted matrix $M + \delta I$ is positive definite. \vskip 0.05in \begin{remark}\label{rem:shiftedM} We observe that if $f(M+i \zeta I)$ is well defined for $\zeta\in\RR$, then the estimate (\ref{eqn:LSbound}) also holds for $|f(M+i \zeta I)|_{k,t}$, since $|\exp(i\zeta)|=1$. \end{remark} \subsection{Bounds for Cauchy--Stieltjes functions} Bounds for the entries of $f(M)$, where $f$ is a Cauchy--Stieltjes function and $M=M^*$ is positive definite, can be obtained in a similar manner, with the bound (\ref{eqn:demko}) of Demko et al.~\cite{DMS} replacing the bounds on $\exp(-\tau M)$ from Theorem \ref{th:boundexp}. For a given $\omega \in \Gamma = (-\infty, 0]$, set $\tau = -\omega \ge 0$ and let $\kappa=\kappa(\tau)=(\lambda_{\max}+\tau)/(\lambda_{\min}+\tau)$, $q = q(\tau) = (\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$, $C=C(\tau)=\max\{1/(\lambda_{\min}+\tau), C_0\}$, with $C_0 = C_0(\tau) = (1+\sqrt{\kappa})^{2}/(2(\lambda_{\max}+\tau))$.
We immediately obtain the following result. \begin{theorem}\label{th:CS} Let $M=M^*$ be positive definite and let $f$ be a Cauchy--Stieltjes function. Then for all $k$ and $t$ we have \begin{eqnarray}\label{eqn:Markov_general} |f(M)_{kt}|\le \int_{-\infty}^0 C(\omega) q(\omega)^{\frac{|k-t|}{\beta}} {\rm d}\omega. \end{eqnarray} \end{theorem} For specific functions we can be more explicit, and provide more insightful upper bounds by evaluating or bounding the integral on the right-hand side of (\ref{eqn:Markov_general}). As an example, let us consider again $f(x) = x^{-\frac12}$, which happens to be both a Laplace--Stieltjes and a Cauchy--Stieltjes function. In this case we find the bound \begin{equation}\label{bound_sqrt} |M_{kt}^{-\frac12}| \le \frac 2 {\pi} (C(0)+C_2) \left( \frac{ \sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{ \sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}} \right )^{ \frac{|k-t|}{\beta}}, \end{equation} where $C_2= \max\left\{ 1, \frac 1 2 (1+ \sqrt{\kappa(0)})^{\frac 1 2}\right\}$. Indeed, for the given function and upon substituting $\tau=-\omega$, (\ref{eqn:Markov_general}) becomes \begin{eqnarray} |M_{kt}^{-\frac12}| &\le& \frac{1}{\pi} \int_0^\infty C(\tau) \left( \frac{ \sqrt{\lambda_{\max}+\tau} - \sqrt{\lambda_{\min}+\tau}}{ \sqrt{\lambda_{\max}+\tau} + \sqrt{\lambda_{\min}+\tau}} \right )^{ \frac{|k-t|}{\beta}} \frac{1}{\sqrt{\tau}} {\rm d}\tau\\ &\le& \frac{1}{\pi} \left( \frac{ \sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{ \sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}} \right )^{ \frac{|k-t|}{\beta}} \int_0^\infty C(\tau) \frac{1}{\sqrt{\tau}} {\rm d}\tau. \end{eqnarray} Let $\phi(\tau)$ be the integrand function. We split the integral as $\int_0^\infty \phi(\tau) {\rm d}\tau = \int_0^1 \phi(\tau) {\rm d}\tau + \int_1^\infty \phi(\tau) {\rm d}\tau$.
For the first integral, we observe that $C(\tau) \le C(0)$, so that \begin{eqnarray*} \int_0^1 C(\tau) \frac{1}{\sqrt{\tau}} {\rm d}\tau \le C(0) \int_0^1 \frac{1}{\sqrt{\tau}} {\rm d}\tau = 2 C(0). \end{eqnarray*} For the second integral, we observe that $C(\tau) \le C_2 \frac 1 {\tau}$ where $C_2= \max\{ 1, (1+\sqrt{\kappa(0)})^{1/2}/2\}$, so that \begin{eqnarray*} \int_1^\infty C(\tau) \frac{1}{\sqrt{\tau}} {\rm d}\tau \le C_2 \int_1^\infty \frac{1}{\tau\sqrt{\tau}} {\rm d}\tau = 2 C_2. \end{eqnarray*} Collecting all results the final upper bound (\ref{bound_sqrt}) follows. We note that for this particular matrix function, using the approach just presented results in much more explicit bounds than those obtained earlier using the Laplace--Stieltjes representation, which required the numerical evaluation of three integrals. Also, since the bound (\ref{eqn:demko}) is known to be sharp (see \cite{DMS}), it is to be expected that the bounds (\ref{bound_sqrt}) will be generally better than those obtained in the previous section. Figure~\ref{fig:CS-1/2} shows the accuracy of the bounds in (\ref{bound_sqrt}) for the same matrices as in Figure~\ref{fig:LS-1/2}, where the Laplace--Stieltjes bounds were used. For both matrices, the quality of the Cauchy--Stieltjes bound is clearly superior. \begin{figure}[thb] \centering \includegraphics[width=2.5in,height=2.5in]{tridiag_n200_minusonehalf_CS.eps} \includegraphics[width=2.5in,height=2.5in]{pentadiag_n200_minusonehalf_CS.eps} \caption{Estimates for $|M^{-1/2}|_{:,t}$, $t=127$, using (\ref{bound_sqrt}), size $n=200$, Log-scale. Left: $M={\rm tridiag}(-1,4,-1)$. Right: $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$. \label{fig:CS-1/2}} \end{figure} We conclude this section with a discussion on decay bounds for functions of $M- i \zeta I$, where $\zeta\in\RR$. These estimates may be useful when the integral is over a complex curve. We first recall a result of Freund for $(M-i \zeta I)^{-1}$. 
To this end, we let again $\lambda_{\min}, \lambda_{\max}$ be the extreme eigenvalues of $M$ (assumed to be HPD), and we let $\lambda_1 = \lambda_{\min} - i\zeta$, $\lambda_2 = \lambda_{\max} - i \zeta$. \begin{proposition}\label{prop:Freund} Assume $M$ is Hermitian positive definite and $\beta$-banded. Let $R>1$ be defined as $R=\alpha + \sqrt{\alpha^2-1}$, with $\alpha=(|\lambda_1|+|\lambda_2|)/|\lambda_2-\lambda_1|$. Then for $k\ne t$, $$ |( M - i\zeta I )^{-1}|_{tk} \le C(\zeta) \left (\frac{1}{R}\right )^{\frac{|t -k|}{\beta}} \,\, {\rm with}\,\, C(\zeta) = \frac{2R}{|\lambda_1-\lambda_2|} \frac{4R^2}{(R^2-1)^2} . $$ \end{proposition} With this bound, we can modify (\ref{eqn:Markov_general}) so as to handle more general matrices as follows. Once again, we let $\lambda_{\min}, \lambda_{\max}$ be the extreme eigenvalues of $M$, and now we let $\lambda_1 = \lambda_{\min} - i\zeta - \omega$, $\lambda_2 = \lambda_{\max} - i \zeta - \omega$; $\alpha$ and $R$ are defined accordingly. \begin{eqnarray}\label{eqn:shiftedM_CS} |f(M-i\zeta I)|_{kt} \le \int_{-\infty}^0 C \left (\frac{1}{R}\right )^{\frac{|k -t|}{\beta}} {\rm d}\gamma(\omega), \quad k\ne t . \quad \end{eqnarray} Since $R=R(\zeta,\omega)$ is defined in terms of spectral information of the shifted matrix $M-i\zeta I - \omega I$, we also obtain $C=C(\zeta,\omega) = \frac{2R(\zeta,\omega)}{|\lambda_{\max}-\lambda_{\min}|} \frac{4R(\zeta,\omega)^2}{(R(\zeta,\omega)^2-1)^2}$. \section{Extensions to more general sparse matrices} \label{sec:ext} Although all our main results so far have been stated for matrices that are banded, it is possible to extend the previous bounds to functions of matrices with general sparsity patterns. Following the approach in \cite{CEPD06} and \cite{Benzi2007}, let $G=(V,E)$ be the undirected graph describing the nonzero pattern of $M$. Here $V$ is a set of $n$ vertices (one for each row/column of $M$) and $E$ is a set of edges. 
The set $E\subseteq V\times V$ is defined as follows: there is an edge $(i,j)\in E$ if and only if $M_{ij}\ne 0$ (equivalently, $M_{ji}\ne 0$ since $M=M^*$). Given any two nodes $i$ and $j$ in $V$, a {\em path of length $k$} between $i$ and $j$ is a sequence of nodes $i_0=i,i_1,i_2,\ldots ,i_{k-1},i_k=j$ such that $(i_{\ell},i_{\ell +1})\in E$ for all $\ell = 0,1,\ldots ,k-1$ and $i_{\ell}\ne i_m$ for $\ell \ne m$. If $G$ is connected (equivalently, if $M$ is irreducible, which we will assume to be the case), then there exists a path between any two nodes $i,j\in V$. The {\em geodesic distance} $d(i,j)$ between two nodes $i,j\in G$ is then the length of the shortest path joining $i$ and $j$. With this distance, $(G,d)$ is a metric space. We can then extend every one of the bounds seen so far for banded $M$ to a general sparse matrix $M=M^*$ simply by systematically replacing the quantity $\frac{|k-t|}{\beta}$ by the geodesic distance $d(k,t)$. Hence, the decay in the entries of $f(M)$ is to be understood in terms of distance from the nonzero pattern of $M$, rather than away from the main diagonal. We refer again to \cite{Benzi2007} for details. We note that this extension easily carries over to the bounds presented in the following section. Finally, we observe that all the results in this paper apply to the case where $M$ is an infinite matrix with bounded spectrum, provided that $f$ has no singularities on an open neighborhood of the spectral interval $[\lambda_{\min}, \lambda_{\max}]$. This implies that our bounds apply to all the $n\times n$ principal submatrices (``finite sections") of such matrices, and that the bounds are uniform in $n$ as $n\to \infty$. \section{Estimates for functions of Kronecker sums of matrices}\label{sec:Kron} The decay pattern for matrices with Kronecker structure has a rich structure. 
In addition to a decay away from the diagonal, which depends on the matrix bandwidth, a ``local'' decay can be observed within the bandwidth; see Figure \ref{fig:DecKron}. This particular pattern was described for $f(x)=x^{-1}$ in \cite{CanutoSimonciniVeraniLAA.14}; here we largely expand on the class of functions for which the phenomenon can be described. \begin{figure}[thb] \centering \includegraphics[width=2.5in,height=2.5in]{exp_new.eps} \includegraphics[width=2.5in,height=2.5in]{isqrt_new.eps} \caption{Three-dimensional decay plots for $[f({\cal A})]_{ij}$ where $\cal A$ is the 5-point finite difference discretization of the negative Laplacian on the unit square on a $10\times 10$ uniform grid with zero Dirichlet boundary conditions. Left: $f({\cal A}) = \exp(-5 {\cal A})$. Right: $f({\cal A}) = {\cal A}^{-1/2}$. \label{fig:DecKron}} \end{figure} \vskip 0.03in Some matrix functions enjoy properties that make their application to Kronecker sums of matrices particularly simple. This is the case, for instance, for the exponential and for certain trigonometric functions such as $\sin(x)$ and $\cos(x)$. For these, bounds for their entries can be directly obtained from the estimates of the previous sections. \subsection{The exponential function} Recall the relation (\ref{eqn:exp_kron}), which implies that \begin{equation} \label{eqn:kron_tau} \exp(-\tau {\cal A}) = \exp(-\tau M)\otimes \exp(-\tau M), \quad \tau\in \RR \end{equation} when ${\cal A} = M\otimes I + I \otimes M$. Here and in the following, a lexicographic ordering of the entries will be used, so that each row or column index $k$ of ${\cal A}$ corresponds to the pair $k=(k_1,k_2)$ in the two-dimensional Cartesian grid.
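The factorization (\ref{eqn:kron_tau}) and the lexicographic indexing can be verified directly on a small example; a sketch, with our own choices of $n$, $\tau$ and indices ($0$-based, so $k=(k_1,k_2)$ maps to row $k_1 n + k_2$):

```python
import numpy as np

# Check (exp(-tau*A))_{kt} = (exp(-tau*M))_{k1 t1} (exp(-tau*M))_{k2 t2}
# for A = M (x) I + I (x) M; n, tau and the indices are our own choices.
n, tau = 6, 0.3
M = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
w, V = np.linalg.eigh(M)
E = (V * np.exp(-tau * w)) @ V.T                   # exp(-tau*M)
A = np.kron(M, np.eye(n)) + np.kron(np.eye(n), M)
wA, VA = np.linalg.eigh(A)
EA = (VA * np.exp(-tau * wA)) @ VA.T               # exp(-tau*A)
k1, k2, t1, t2 = 1, 4, 3, 2                        # 0-based; k = k1*n + k2
lhs = EA[k1 * n + k2, t1 * n + t2]
rhs = E[k1, t1] * E[k2, t2]
```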
Furthermore, for any fixed values of $\tau, \rho, \beta >0$, define \begin{equation}\label{eqn:Phi} \Phi (i,j) := \left\{\begin{array}{ll} 10\exp \left (-\frac{(|i-j|/\beta)^2}{5\rho \tau}\right ), & \textnormal{for}\quad \sqrt{4\rho\tau} \le \frac{|i-j|}{\beta} \le 2\rho \tau,\\ 10\frac{\exp(-\rho\tau)}{\rho \tau} \left ( \frac{{\rm e}\rho\tau}{\frac{|i-j|}{\beta}} \right )^{\frac{|i-j|}{\beta}} , & \textnormal{for}\quad \frac{|i-j|}{\beta} \ge 2\rho\tau . \end{array}\right . \end{equation} Note that $\Phi(i,j)$ is only defined for $|i-j| > \sqrt{4\rho \tau} \beta$. With these notations, the following bounds can be obtained. \begin{theorem}\label{th:exp} Let $M$ be Hermitian and positive semidefinite with bandwidth $\beta$ and spectrum contained in $[0,4\rho]$, and let ${\cal A}=I\otimes M + M \otimes I$. Then $$ (\exp(-\tau {\cal A}))_{kt} = (\exp(-\tau M))_{k_1 t_1} (\exp(-\tau M))_{k_2 t_2} . $$ Therefore, for $\tau > 0$ $$|(\exp(-\tau {\cal A}))_{kt}| \le \Phi(k_1,t_1) \Phi (k_2,t_2)$$ for all $t=(t_1,t_2)$ and $k=(k_1,k_2)$ with $\min \{|t_1-k_1|, |t_2-k_2|\}\ge \sqrt{4\rho\tau} \beta$. \end{theorem} \begin{proof} Using (\ref{eqn:exp_kron}) we obtain $$ e_k^T \exp(-{\tau \cal A}) e_t = e_k^T \exp(-\tau M)\otimes \exp(-\tau M) e_t. $$ Let $E_{t_1 t_2}$ be the $n\times n$ matrix such that $e_t={\rm vec}(E_{t_1 t_2}) \in\RR^{n^2}$, and in particular $E_{t_1 t_2} = e_{t_1} e_{t_2}^T$, with $e_{t_1}, e_{t_2} \in\RR^n$. 
Then \begin{eqnarray*} e_k^T \exp(-\tau M)\otimes \exp(-\tau M) e_t &=& e_k^T {\rm vec}( \exp(-\tau M) E_{t_1 t_2} \exp(-\tau M)^*) \\ &=&e_k^T {\rm vec}( \exp(-\tau M) e_{t_1} e_{t_2}^T \exp(-\tau M)^*) \\ &=& e_k^T \begin{bmatrix} \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)^*)e_1 \\ \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)^*)e_2 \\ \vdots \\ \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)^*)e_n \end{bmatrix} \\ &=& e_{k_1}^T \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)^* e_{k_2}) , \end{eqnarray*} which proves the first relation for $M$ Hermitian. For the bound, it is sufficient to use (\ref{eqn:Phi}) to obtain the desired conclusion. \end{proof} The result can be easily generalized to a broader class of matrices. \begin{corollary} Let ${\cal A}=I\otimes M_1 + M_2 \otimes I$ with $M_1$ and $M_2$ having bandwidth $\beta_1$ and $\beta_2$, respectively. Also, let the spectrum of $M_1$ be contained in the interval $[0,4\rho_1]$ and that of $M_2$ in the interval $[0,4\rho_2]$, with $\rho_1,\rho_2 > 0$. Then for $t=(t_1,t_2)$ and $k=(k_1,k_2)$, with $|t_\ell-k_\ell|\ge \sqrt{4\rho_\ell\tau} \beta_\ell$, $\ell=1,2$, $$ (\exp(-\tau {\cal A}))_{kt} = (\exp(-\tau M_1))_{k_1 t_1} (\exp(-\tau M_2))_{t_2 k_2} . $$ Therefore, $$ |(\exp(-\tau {\cal A}))_{kt}| \le \Phi_1 (k_1,t_1) \Phi_2(k_2,t_2) $$ where $\Phi_\ell (i,j)$ is defined as $\Phi (i,j)$ in (\ref{eqn:Phi}) with $\rho_\ell$, $\beta_\ell$ replacing $\rho$, $\beta$. \end{corollary} Generalization to the case of Kronecker sums of more than two matrices is relatively straightforward. Consider for example the case of three summands. A lexicographic order of the entries is again used, so that each row or column index $k$ of ${\cal A} = M\otimes I\otimes I + I\otimes M\otimes I + I\otimes I\otimes M$ corresponds to a triplet $k=(k_1,k_2,k_3)$ in the three-dimensional Cartesian grid.
\begin{corollary} Let $M$ be $\beta$-banded, Hermitian and with spectrum contained in $[0,4\rho]$, and let ${\cal A} = M\otimes I\otimes I + I\otimes M\otimes I + I\otimes I\otimes M$ and $k = (k_1, k_2, k_3)$ and $t=(t_1,t_2,t_3)$. Then $$ (\exp(-\tau {\cal A}))_{kt} = (\exp(-\tau M))_{k_1,t_1} (\exp(-\tau M))_{t_2,k_2} (\exp(-\tau M))_{t_3,k_3} , $$ from which it follows $$ |(\exp(-\tau {\cal A}) )_{kt}| \le \Phi(k_1,t_1)\Phi(k_2,t_2)\Phi(k_3,t_3), $$ for all $(k_1,t_1), (k_2,t_2), (k_3,t_3)$ with $\min \{|k_1-t_1|,|k_2-t_2|,|k_3-t_3|\} > \sqrt{4\tau \rho}\beta$. \end{corollary} \begin{proof} We write ${\cal A} = M\otimes (I\otimes I) + I\otimes (M\otimes I + I\otimes M )$, so that \begin{eqnarray*} \exp(-\tau {\cal A}) &=& \exp(-\tau M) \otimes \exp(-\tau M\otimes I + I\otimes (-\tau M) ) \\ &=& \exp(-\tau M) \otimes \exp(-\tau M) \otimes \exp(-\tau M). \end{eqnarray*} Therefore, using $i=(t_1,t_2)$, \begin{eqnarray*} (\exp(-\tau {\cal A}))_{kt} &=& e_k^T {\rm vec} \left ( (\exp(-\tau M)\otimes \exp(-\tau M))e_{i} e_{t_3}^T \exp(-\tau M) \right ) \\ &=& e_k^T {\rm vec}\left( {\rm vec}(\exp(-\tau M) e_{t_1} e_{t_2}^T \exp(-\tau M) ) e_{t_3}^T \exp(-\tau M) \right ) \\ &=& e_k^T {\rm vec} \left ( \begin{bmatrix} \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)e_1) \\ \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)e_2) \\ \vdots \\ \exp(-\tau M) e_{t_1} (e_{t_2}^T \exp(-\tau M)e_n) \end{bmatrix} \right ) e_{t_3}^T \exp(-\tau M) \\ &=& (e_{k_1}^T \exp(-\tau M) e_{t_1}) \,(e_{t_2}^T \exp(-\tau M)e_{k_2})\, (e_{t_3}^T \exp(-\tau M)e_{k_3}) . \end{eqnarray*} The rest follows as in the proof of Theorem \ref{th:exp}. \end{proof} \begin{remark} Using (\ref{eqn:sin_kron}), one can obtain similar bounds for $\cos({\cal A})$ and $\sin({\cal A})$, where ${\cal A} = M_1\otimes I + I \otimes M_2$ with $M_1$, $M_2$ banded. 
\end{remark} \subsection{Laplace--Stieltjes functions} If $f$ is a Laplace--Stieltjes function, then $f({\cal A})$ is well-defined and exploiting the relation (\ref{eqn:exp_kron}) we can write $$ f({\cal A}) = \int_{0}^\infty \exp (-\tau{\cal A}) {\rm d}\alpha(\tau) = \int_{0}^\infty \exp (-\tau M)\otimes \exp (-\tau M) {\rm d}\alpha(\tau). $$ Thus, using $k=(k_1,k_2)$ and $t=(t_1,t_2)$, \begin{eqnarray} (f({\cal A}))_{kt} & = & \int_{0}^\infty e_k^T\exp (-\tau M)\otimes \exp (-\tau M)e_t {\rm d}\alpha(\tau)\nonumber\\ &=& \int_{0}^\infty (\exp (-\tau M))_{k_1 t_1}(\exp (-\tau M))_{t_2 k_2}{\rm d}\alpha(\tau) . \nonumber \end{eqnarray} With the notation of Theorem \ref{th:LS}, we have \begin{equation}\label{eqn:LS_kron} |f({\cal A})|_{kt}\le \int_0^{\infty} \exp(-2\lambda_{\min} \tau) |\exp(-\tau \widehat M)|_{k_1 t_1} |\exp(-\tau \widehat M)|_{k_2 t_2} {\rm d}\alpha(\tau) . \end{equation} In this form, the bound (\ref{eqn:LS_kron}), of course, is not particularly useful. Explicit bounds can be obtained, for specific examples of Laplace--Stieltjes functions, by evaluating or bounding the integral on the right-hand side of (\ref{eqn:LS_kron}). For instance, using once again the inverse square root, so that $\alpha(\tau) = \sqrt{\pi/\tau}$, we obtain \begin{eqnarray} |{\cal A}^{-\frac 1 2}|_{kt}&\le& \sqrt{\pi} \int_0^{\infty} \frac {1}{\sqrt{\tau^3}} \exp(-2\lambda_{\min} \tau) |\exp(-\tau \widehat M)|_{k_1 t_1} |\exp(-\tau \widehat M)|_{k_2 t_2} {\rm d}\tau \label{eqn:LS-1/2kron}\\ &\le & \sqrt{\pi} \left(\int_0^{\infty} \left (\frac 1 {\tau^{3/4}} \exp(-\lambda_{\min} \tau) |\exp(-\tau \widehat M)|_{k_1 t_1}\right)^2 {\rm d}\tau\right)^{\frac 1 2} \cdot \nonumber\\ & & \qquad \left(\int_0^{\infty} \left (\frac 1 {\tau^{3/4}} \exp(-\lambda_{\min} \tau) |\exp(-\tau \widehat M)|_{k_2 t_2}\right)^2 {\rm d}\tau\right)^{\frac 1 2} \nonumber . \end{eqnarray} The two integrals can then be bounded as done in Theorem~\ref{th:LS}.
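The entrywise representation underlying (\ref{eqn:LS_kron}) can be checked numerically for $f(x)=(1-\exp(-x))/x$, for which ${\rm d}\alpha(\tau)={\rm d}\tau$ on $[0,1]$. A sketch (small sizes, indices and quadrature grid are our own choices; $M$ is real symmetric, so the order of the indices in the second factor is immaterial):

```python
import numpy as np

# Check (f(A))_{kt} = \int_0^1 (exp(-s*M))_{k1 t1} (exp(-s*M))_{k2 t2} ds
# for f(x) = (1 - exp(-x))/x and A = M (x) I + I (x) M.
n = 6
M = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
w, V = np.linalg.eigh(M)
A = np.kron(M, np.eye(n)) + np.kron(np.eye(n), M)
wA, VA = np.linalg.eigh(A)
fA = (VA * ((1 - np.exp(-wA)) / wA)) @ VA.T        # f(A) spectrally
k1, k2, t1, t2 = 0, 3, 2, 1
s = np.linspace(0.0, 1.0, 20_001)
decay = np.exp(-np.outer(s, w))                    # e^{-s*w_i} on the grid
e1 = decay @ (V[k1] * V[t1])                       # (exp(-s*M))_{k1 t1}
e2 = decay @ (V[k2] * V[t2])                       # (exp(-s*M))_{k2 t2}
quad = 0.5 * np.sum((e1[1:] * e2[1:] + e1[:-1] * e2[:-1]) * np.diff(s))
entry = fA[k1 * n + k2, t1 * n + t2]
```

The trapezoidal value of the $\tau$-integral agrees with the directly computed entry of $f({\cal A})$ up to quadrature error.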
For the other example we have considered earlier, namely the function $f(x)=(1-\exp(-x))/x$, the bound is the same except that $\sqrt{\pi}/\sqrt{\tau^3}$ is replaced by one, and the integration interval reduces to $[0,1]$; see also Example~\ref{ex:kron_f3} next. \vskip 0.03in \begin{example}\label{ex:kron_f3} {\rm We consider again the function $f(x) = (1-\exp(-x))/x$, and the two choices of matrix $M$ in Example~\ref{ex:exp}; for each of them we build $\cal A$ as the Kronecker sum ${\cal A}=M\otimes I + I\otimes M$. The entries of the $t$th column with $t=94$, that is $(t_1,t_2)=(14,5)$ are shown in Figure~\ref{fig:kronLS_f3}, together with the bound obtained above. The oscillating pattern is well captured in both cases, with a particularly good accuracy also in terms of magnitude in the tridiagonal case. The lack of approximation near the diagonal reflects the condition $|k_i - t_i|/\beta\ge 2$, $i=1,2$. } \end{example} \begin{figure}[thb] \centering \includegraphics[width=2.5in,height=2.5in]{f3_exp_minus_x_over_x_tridiag_A.eps} \includegraphics[width=2.5in,height=2.5in]{f3_exp_minus_x_over_x_pentadiag_A.eps} \caption{Example \ref{ex:kron_f3}. True decay and estimates for $|f(A)|_{:,t}$, $t=94$, $A=M\otimes I +I \otimes M$ of size $n=400$. Left: $M={\rm tridiag}(-1,4,-1)$. Right: $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$. \label{fig:kronLS_f3}} \end{figure} \vskip 0.03in \begin{remark} These results can be easily extended to the case where ${\cal A} = M_1\otimes I + I \otimes M_2$ with $M_1$, $M_2$ both Hermitian positive definite and having bandwidths $\beta_1$, $\beta_2$. It can also be generalized to the case where ${\cal A}$ is the Kronecker sum of three or more banded matrices. 
\end{remark} \subsection{Cauchy--Stieltjes functions} If $f$ is a Cauchy--Stieltjes function and $\cal A$ has no eigenvalues on the closed set $\Gamma \subset \CC$, then $$ f({\cal A}) = \int_\Gamma ({\cal A} -\omega I)^{-1} {\rm d}\gamma (\omega) , $$ so that $$ e_k^Tf({\cal A}) e_t = \int_\Gamma e_k^T({\cal A} - \omega I)^{-1}e_t {\rm d}\gamma (\omega) . $$ We can write ${\cal A} - \omega I = M\otimes I + I \otimes ( M - \omega I)$. Each column $t$ of the shifted inverse, $x_t := ({\cal A} - \omega I)^{-1}e_t$, may be viewed as the matrix solution $X_t=X_t(\omega)\in\CC^{n\times n}$ to the following Sylvester matrix equation: $$ M X_t + X_t(M - \omega I) = E_t, \qquad x_t = {\rm vec}(X_t), \quad e_t = {\rm vec}(E_t) , $$ where the only nonzero element of $E_t$ is in position $(t_1,t_2)$; here the same lexicographic order of the previous sections is used to identify $t$ with $(t_1,t_2)$. From now on, we assume that $\Gamma = (-\infty, 0]$. We observe that the Sylvester equation has a unique solution, since no eigenvalue of $M$ can be an eigenvalue of $\omega I - M$ for $\omega \le 0$ (recall that $M$ is Hermitian positive definite). Following Lancaster (\cite[p.556]{Lancaster1970}), the solution matrix $X_t$ can be written as $$ X_t = \int_{0}^{\infty} \exp(-\tau M)E_t \exp(-\tau(M - \omega I)) {\rm d}\tau . $$ For $k=(k_1,k_2)$ and $t=(t_1,t_2)$ this gives \begin{eqnarray}\label{eqn:boundA} e_k^T({\cal A} - \omega I)^{-1}e_t & = & e_{k_1}^T X_t e_{k_2} \nonumber\\ &=& \int_{0}^{\infty} e_{k_1}^T \exp(-\tau M)e_{t_1} e_{t_2}^T \exp(-\tau(M - \omega I)) e_{k_2} {\rm d}\tau . \end{eqnarray} Therefore, in terms of the original matrix function component, \begin{eqnarray*} e_k^Tf({\cal A}) e_t = \int_{-\infty}^0 \int_{0}^{\infty} e_{k_1}^T \exp(-\tau M)e_{t_1} e_{t_2}^T \exp(-\tau(M - \omega I)) e_{k_2} {\rm d}\tau {\rm d}\gamma (\omega) .
\end{eqnarray*} We can thus bound each entry as \begin{eqnarray*} |e_k^Tf({\cal A}) e_t| \le \int_{0}^{\infty} \left( |\exp(-\tau M)|_{k_1 t_1} |\exp(-\tau M )|_{k_2 t_2} \int_{-\infty}^0 \exp(\tau \omega ) {\rm d}\gamma (\omega) \right) {\rm d}\tau . \end{eqnarray*} It is thus apparent that $|e_k^Tf({\cal A}) e_t|$ can be bounded in a way analogous to the case of Laplace--Stieltjes functions, once the term $\int_{-\infty}^0 \exp(\tau \omega ) {\rm d}\gamma (\omega)$ is completely determined. In particular, for $f(x) = x^{-1/2}$, substituting $\omega = -\eta^2$ we obtain \begin{eqnarray*} \int_{-\infty}^0 \exp(\tau \omega ) {\rm d}\gamma (\omega) &=& \frac{1}{\pi} \int_{-\infty}^0 \exp(\tau \omega )\frac{1}{\sqrt{-\omega}} {\rm d}\omega \\ &=& \frac{2}{\pi} \int_{0}^{\infty} \exp(-\tau \eta^2 ) {\rm d}\eta \\ &=& \frac{2}{\pi} \frac{\sqrt{\pi}}{2\sqrt{\tau}} = \frac{1}{\sqrt{\pi}} f(\tau). \end{eqnarray*} Therefore, \begin{eqnarray}\label{eqn:A-1/2_CS} \!\! |{\cal A}^{-\frac 1 2}|_{kt} \le \frac{1}{\sqrt{\pi}} \!\! \left ( \int_0^\infty \!\!\! |\exp(-\tau M)|_{k_1 t_1}^2 f(\tau) {\rm d}\tau \right )^{\frac 1 2} \!\! \left ( \int_0^\infty \!\!\! |\exp(-\tau M)|_{k_2 t_2}^2 f(\tau) {\rm d}\tau \right )^{\frac 1 2}. \end{eqnarray} Using once again the bounds in Theorem~\ref{th:boundexp} a final integral upper bound can be obtained, in the same spirit as for Laplace--Stieltjes functions. We explicitly mention that the solution matrix $X_t$ could be alternatively written in terms of the resolvent $(M - \zeta i I)^{-1}$, with $\zeta\in\RR$ \cite{Lancaster1970}. This would allow us to obtain an integral upper bound for $|e_k^Tf({\cal A}) e_t|$ by means of Proposition~\ref{prop:Freund} and of (\ref{eqn:shiftedM_CS}). We omit the quite technical computations; however, the final results are qualitatively similar to those obtained above.
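The closed form of $\int_{-\infty}^0 \exp(\tau\omega)\,{\rm d}\gamma(\omega)$ used above can be confirmed by direct quadrature; a sketch (the value of $\tau$ and the truncation of the integration range are our own choices):

```python
import numpy as np

# Check: (1/pi) * int_{-inf}^0 e^{tau*omega} (-omega)^{-1/2} d omega
#      = (2/pi) * int_0^inf e^{-tau*eta^2} d eta = 1/sqrt(pi*tau)
# (substitution omega = -eta^2); tau and the truncation are our choices.
tau = 0.7
eta = np.linspace(0.0, 12.0, 400_001)      # e^{-tau*eta^2} ~ 0 beyond eta = 12
y = (2.0 / np.pi) * np.exp(-tau * eta**2)
val = 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(eta))
ref = 1.0 / np.sqrt(np.pi * tau)
```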
\begin{example}\label{ex:A-1/2_CS} {\rm In Figure~\ref{fig:A-1/2_CS} we report the actual decay and our estimate following (\ref{eqn:A-1/2_CS}) for the inverse square root, again using the two matrices of our previous examples. We observe that having used estimates for the exponential to handle the Kronecker form, the approximations are slightly less sharp than previously seen for Cauchy--Stieltjes functions. Nonetheless, the qualitative behavior is captured in both instances. } \end{example} \begin{figure}[thb] \centering \includegraphics[width=2.5in,height=2.5in]{tridiag_A_minusonehalf_CS.eps} \includegraphics[width=2.5in,height=2.5in]{pentadiag_A_minusonehalf_CS.eps} \caption{Example \ref{ex:A-1/2_CS}. True decay and estimates for $|A^{-\frac 1 2}|_{:,t}$, $t=94$, $A=M\otimes I +I \otimes M$ of size $n=400$. Left: $M={\rm tridiag}(-1,4,-1)$. Right: $M={\rm pentadiag}(-0.5,-1,4,-1,-0.5)$. \label{fig:A-1/2_CS}} \end{figure} \begin{remark} {\rm As before, the estimate for $(f({\cal A}))_{k,t}$ can be generalized to the sum ${\cal A} = M_1 \otimes I + I \otimes M_2$, with $M_1, M_2$ both Hermitian positive definite. } \end{remark} \begin{remark} {\rm Using the previous remark, the estimate for the matrix function entries can be generalized to matrices that are sums of several Kronecker products. For instance, if $$ {\cal A} = M\otimes I\otimes I + I\otimes M\otimes I + I\otimes I\otimes M , $$ then we can write $$ {\cal A} = M\otimes (I\otimes I) + I\otimes (M\otimes I + I\otimes M ) =: M \otimes I + I \otimes M_2 , $$ so that, following the same lines as in (\ref{eqn:boundA}) we get \begin{eqnarray*} e_k^Tf({\cal A})e_t &=& \int_{\Gamma} e_k^T ({\cal A} - \omega I)^{-1}e_t {\rm d}\gamma (\omega) \\ & = & \int_\Gamma \int_0^{\infty} e_{k_1}^T\exp(-\tau M) e_{t_1} e_{t_2}^T \exp(-\tau(M_2-\omega I)) e_{k_2} {\rm d}\tau {\rm d}\gamma(\omega) .
\end{eqnarray*} Since $M_2= M\otimes I + I\otimes M$, we then obtain $e_{t_2}^T \exp(-\tau M_2) e_{k_2} = e_{t_2}^T \exp(-\tau M)\otimes \exp(-\tau M) e_{k_2}$. Splitting $t_2, k_2$ in their one-dimensional indices, the available bounds can be employed to obtain a final integral estimate. } \end{remark} \section{Conclusions} \label{sec:Conc} In this paper we have obtained new decay bounds for the entries of certain analytic functions of banded and sparse matrices, and used these results to obtain bounds for functions of matrices that are Kronecker sums of banded (or sparse) matrices. The results apply to strictly completely monotonic functions and to Markov functions, which include a wide variety of functions arising in mathematical physics, numerical analysis, network science, and so forth. The new bounds are in many cases considerably sharper than previously published bounds and they are able to capture the oscillatory, non-monotonic decay behavior observed in the entries of $f({\cal A})$ when $\cal A$ is a Kronecker sum. Also, the bounds capture the superexponential decay behavior observed in the case of entire functions. A major difference with previous decay results is that the new bounds are given in integral form, therefore their use requires some work on the part of the user. If desired, these quantities can be further bounded for specific function choices. In practice, the integrals can be evaluated numerically to obtain explicit bounds on the quantities of interest. Although in this paper we have focused mostly on the Hermitian case, extensions to functions of more general matrices may be possible, as long as good estimates on the entries of the matrix exponential and resolvent are available. We leave the development of this idea for possible future work.
\section{Introduction} The last 20 years have seen a revolution in neutrino physics. The observation of neutrino oscillations has established that neutrinos have masses, and this implies physics beyond the Standard Model. This fact has a clear impact not only on particle physics, but also on astroparticle physics and cosmology. Nevertheless, neutrinos are still quite unknown particles. At the moment we know that there are three light neutrinos, although some theoretical models propose the existence of sterile neutrinos (not interacting weakly with matter). Neutrinos are much lighter than their charged leptonic partners and they interact very weakly with matter. In addition, during the last 13 years, neutrino experiments have proved that neutrinos have mass, contrary to the zero-neutrino-mass hypothesis of the Standard Model. Neutrinos oscillate when they propagate through space. During the past few years the solar neutrino problem has been solved and the solar and atmospheric oscillation parameters have been confirmed using artificial sources. A period of precision measurements in neutrino physics has started. However, many fundamental questions still remain unanswered: What are the values of the neutrino masses? Are neutrinos Majorana or Dirac particles? What is the mass hierarchy? What are the values of the neutrino oscillation parameters, in particular $\theta_{13}$ and $\theta_{23}$ (and is the latter maximal or not)? Is there CP-violation in the leptonic sector? Are there more than three neutrinos? What is the relation with the quark sector? Can neutrinos be related to leptogenesis? Why are neutrinos much lighter than other fermions?~\ldots\ In summary, there are many aspects of neutrinos still unknown. The history of neutrinos began with the investigation of beta decays. During the early decades of the past century, radioactivity was explored and nuclear beta decay was observed.
In this process, a radioactive nucleus emits an electron and increases its positive charge by one unit to become the nucleus of another element. Beta decay was studied and, by energy conservation, the electron was expected always to carry away the same amount of energy. A line in the energy spectrum was expected. However, in 1914, Chadwick showed that electrons follow a continuous spectrum of energies up to the expected value. Some of the energy released in the decay appeared to be lost. To explain this observation, only two solutions seemed possible: either energy is not conserved (Bohr's preference) or an additional undetectable particle carrying away the additional energy was emitted (Pauli's preference). To solve the energy crisis, in 1930 Pauli wrote his famous letter explaining that he had invented a desperate remedy to save the energy conservation law. There could exist in the nuclei electrically neutral particles that were emitted in beta decays and able to cross the detectors without leaving any trace. These particles (which he wished to call neutrons) would carry all the missing energy. These particles have spin 1/2 and obey the exclusion principle. Their mass should be of the same order of magnitude as the electron mass. Later on, in 1932, Chadwick discovered the neutron; and, in 1934, Fermi took Pauli's idea and on its basis developed a theory of beta decay. Fermi named this particle ``neutrino''. The weak force is so weak that the probability of inverse beta decay was calculated to be close to zero. The possibility of detecting a neutrino seemed remote. However, the development of very intense sources of neutrinos (fission bombs and fission reactors) changed the prospect. In 1951, Reines thought about using an intense burst of antineutrinos from bombs in an experiment designed to detect them. In the end, they decided to use fission reactors as sources, in particular the Hanford reactor.
In collaboration with Cowan at the Los Alamos Scientific Laboratory, they began the ``Poltergeist Project''. They chose inverse beta decay on protons to detect the free neutrino. The detection principle was a coincidence measurement of the 511~keV photons associated with positron annihilation and a neutron capture reaction a few microseconds later. The idea was to build a large detector filled with liquid scintillator loaded with Cd to increase the probability of capturing a neutron. The process releases 9~MeV gammas a few microseconds later than the positron detection. This delayed coincidence provides a powerful means to discriminate the signature of inverse beta decay from background noise. The 300~litre neutrino detector was read by 90 two-inch photomultipliers (PMTs) and surrounded by homemade boron--paraffin shielding intermixed with lead to stop reactor neutrons and gamma rays from entering the detector and producing unwanted background. The expected rate for delayed coincidences from neutrino-induced events was 0.1--0.3 counts per minute. However, the delayed-coincidence background, present whether or not the reactor was on, was about 5 counts per minute, many times higher than the expected signal rate. The background was due to cosmic rays entering the detector. The small increase observed when the reactor was on was not sufficient. The results of this first experiment were not conclusive. Nevertheless, after the unsuccessful trial, they redesigned the experiment to better distinguish between events induced by cosmic rays and those initiated by reactor neutrinos. Two large flat plastic target tanks were filled with water. The protons in the water provided the target for inverse beta decay. Cadmium chloride dissolved in the water provided the Cd nuclei that would capture the neutrons. The target tanks were sandwiched between three large scintillator detectors having 110 PMTs to collect scintillation light and produce electronic signals.
With this detector, neutrinos were detected for the first time in 1956 by Reines and Cowan using nuclear reactor neutrinos from the Savannah River Plant in South Carolina~\cite{reines}. Several tests confirmed that the signal was due to reactor antineutrinos. The experiment was also able to provide a measurement of the cross-section for inverse beta decay. This detection was rewarded with the Nobel Prize in 1995. Other important historical facts related to neutrinos were the detection of muon neutrinos in 1962, the detection of solar neutrinos by Davis in 1970, the discovery of neutral current neutrino interactions in 1973 with a bubble chamber experiment in a $\nu_\mu$ beam at CERN, the detection of neutrinos from a type-II supernova explosion in 1987 with large underground neutrino detectors, and the determination at the Large Electron--Positron Collider (LEP) of three light neutrinos by measuring the total decay width of the Z resonance. \section{Neutrinos in the Standard Model} In the Standard Model (SM) of particle physics, fermions come in three families. Among them, neutrinos are the least known particles. We know that they have zero electric charge and they only interact via weak interactions. The SM is based on the gauge group $\mbox{SU}(3)_C \times \mbox{SU}(2)_L \times \mbox{U}(1)_Y$ that is spontaneously broken to the subgroup $\mbox{SU}(3)_C \times \mbox{U}(1)_{EM}$. All the fermions of the SM are representations of this group with the quantum numbers indicated in Table~\ref{tab:reps}, where the family structure is shown. Neutrinos are the partners of the charged leptons. They form left-handed weak isospin doublets under the \mbox{SU}(2) gauge symmetry. In the SM, neutrinos are strictly massless. They do not carry electromagnetic or colour charge but only the weak charge. They are extremely weakly interacting.
\begin{table}[ht] \caption[]{Fermionic representations in the Standard Model.} \label{tab:reps} \[ \begin{array}{@{}cc|ccc@{}} \hline\hline \Rule[-1em]{2.5em} L_L(1,2,-\frac{1}{2}) & Q_L(3,2,\frac{1}{6})~~ & ~~E_R(1,1,-1) & U_R(3,1,\frac{2}{3}) & D_R(3,1,-\frac{1}{3}) \\\hline \Rule[-2em]{4em} \begin{pmatrix}\nu_\rme \\ \rme \end{pmatrix}_{L} & \begin{pmatrix} \qku \\ \qkd \end{pmatrix}_{L} & \rme_R & \qku_R & \qkd_R \\ \Rule[-2em]{4em} \begin{pmatrix}\nu_\mu \\ \mu\end{pmatrix}_{L} & \begin{pmatrix} \qkc \\ \qks \end{pmatrix}_{L} & \mu_R & \qkc_R & \qks_R \\ \Rule[-2em]{4em} \begin{pmatrix} \nu_\tau \\ \tau\end{pmatrix}_{L} & \begin{pmatrix} \qkt \\ \qkb \end{pmatrix}_{L} & \tau_R & \qkt_R & \qkb_R \\ \hline \end{array} \] \end{table} Under \ced{charge, parity and time reversal symmetry (CPT)} conservation, for any left-handed fermion there exists a right-handed antiparticle with opposite charge. However, the right-handed particle state need not exist. This is precisely what happens with neutrinos in the SM. Since neutrino masses were compatible with zero when the SM was formulated, neutrinos were taken to be Weyl fermions: the left-handed particle was the neutrino and the right-handed antiparticle was the antineutrino. A neutrino of a flavour $l$ is defined by the charged-current (CC) interaction with the corresponding charged lepton $l$. For example, the muon neutrino always comes with the charged muon. The CC interactions between neutrinos and their corresponding charged leptons are given by \begin{equation} -\mathcal{L}_{\rm CC} =\frac{g}{\sqrt{2}}\sum_l \bar{\nu}_{Ll}\gamma^{\mu} l_{L} W^+_{\mu} + {\rm h.c.} \label{eq:a1} \end{equation} The SM neutrinos also have neutral-current (NC) interactions, given by \begin{equation} -\mathcal{L}_{\rm NC} =\frac{g}{2\cos\theta_W}\sum_l \bar{\nu}_{Ll}\gamma^{\mu}\nu_{Ll} Z^0_{\mu}.
\label{eq:a2} \end{equation} From this equation, one can determine the decay width of the Z$^0$ boson into neutrinos, which is proportional to the number of light left-handed neutrinos. Thanks to neutrinos, we know that there are exactly three families in the SM. An extra SM family with quarks and charged leptons so heavy that they remain unobserved would also have massless neutrinos that would have been produced in Z decay, modifying its width, which has been measured at LEP with impressive precision. The combined result from the four LEP experiments is $N_{\nu} = 2.984 \pm 0.008$~\cite{PDG}. The SM presents an accidental global symmetry. This is a consequence of the gauge symmetry and the representations of the physical states. The total lepton number given by $L = L_\rme + L_\mu + L_\tau$ is conserved. Other neutrino properties summarized in the Particle Data Book~\cite{PDG} are upper limits on neutrino masses, on neutrino decay processes and on the neutrino magnetic moment. \section{Neutrino interactions and detection} Neutrinos are produced copiously in natural sources: in the burning of stars, in the interaction of cosmic rays, in the Earth's radioactivity, in supernova explosions and even as relics of the Big Bang. In the laboratory, neutrinos are produced in nuclear reactors and particle accelerators. The neutrino energies span a huge range: from $10^3$~eV to $10^{15}$~eV. In the low-energy range there are neutrinos from double-beta decay, geoneutrinos, nuclear reactors, supernovas and the Sun. Artificial neutrinos from particle accelerators, beta beams or neutrino factories have energies in the medium range. Atmospheric neutrinos extend from medium to high energies, while neutrinos coming from extragalactic sources can reach very high energies. Another important ingredient for neutrino detection is the neutrino interaction cross-section. Neutrino cross-sections are not equally well known in the whole range (Fig.~\ref{fig:nu_int}).
For neutrino energies lower than 100~MeV, cross-sections are well known because the interaction processes are dominated by the inverse beta decay, elastic scattering, and CC and NC interactions with nuclei. These processes are known more precisely from theory than from experiment. For neutrino energies above 100~GeV, up to $10^7$~GeV (ultrahigh energies), they are also accurately known. However, in the intermediate range, critical for atmospheric and accelerator experiments with neutrino energies around 1~GeV, cross-sections are poorly known (with uncertainties of 20--40\%) due to the complexity of processes such as quasi-elastic (QE) scattering, single-pion production and deep inelastic scattering (DIS), as well as nuclear effects and form factors. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/nu_int.png} \caption{Neutrino interaction cross-sections.} \label{fig:nu_int} \end{center} \end{figure} MINER$\nu$A~\cite{minerva} (Main Injector Experiment for $\nu$-A) is a detector designed to precisely study neutrino--nucleus interactions in the 1--10~GeV range in the NuMI high-intensity neutrino beam at Fermilab. This experiment will improve our knowledge of neutrino cross-sections at low energy and study the $A$ dependence in neutrino interactions. These data will be important to reduce the systematic errors in long-baseline neutrino oscillation experiments. They will study four main reaction channels: QE, resonance production, deep inelastic scattering, and coherent neutrino--nucleus reactions (CC and NC coherent single-pion production). The MINER$\nu$A detector is a fine-grained tracking calorimeter with a fully active solid-scintillator tracker. The active detector consists of solid-scintillator strips of triangular cross-section providing a spatial resolution of 2.5~mm. The scintillation light due to a charged particle is collected by a wavelength-shifting optical fibre located at the centre of each strip and routed to PMTs.
The detector is made of hexagonal modules containing one or two active planes. After the tracker region there is the electromagnetic calorimeter (ECAL) and the hadronic calorimeter (HCAL) to contain forward-going particles. Both calorimeters also surround the inner detector to contain particles with high transverse momentum. At the back of the detector they use the MINOS Near detector as a muon spectrometer to measure the energy and charge of muons. Upstream of the tracker there is a region of nuclear targets (liquid He, carbon, iron, lead and water) interleaved with tracking planes, preceded by a veto wall and a steel shield. The MINER$\nu$A collaboration built a first prototype of 24 full-size modules that was first commissioned with cosmic rays in 2008--2009 and then moved underground into the NuMI beam upstream of the MINOS Near detector with an iron target prototype and a veto wall. It \ced{began operating} in summer 2009. The complete detector was finished in March 2010. Modules of four types (120 in total) -- nuclear target, tracker, ECAL and HCAL -- were built with a total mass of $\sim$~200~ton. MINER$\nu$A has taken data in the low-energy (peak at 3~GeV) antineutrino beam since November 2009 with 55\% of the full detector. In March 2010 they took data in low-energy neutrino mode until September 2010. \ced{After November 2010 they took antineutrino data,} turning again to low-energy neutrino mode in spring 2011. In summer 2012 Fermilab will switch to the medium-energy beam (peak at 6~GeV) for NO$\nu$A, and MINER$\nu$A will continue to take data. Different technologies have been used by past and present neutrino detectors. {\it Radiochemical techniques} were used by the first solar neutrino experiments like Homestake, SAGE and GALLEX. They use the interaction of neutrinos with Cl or Ga isotopes, producing Ar or Ge, and developed methods to extract these isotopes using different solutions. They were not real-time detectors.
At present, one of the most common technologies exploited is that used by {\it Cerenkov detectors} like Super-Kamiokande, SNO, MiniBooNE, Antares, IceCube, etc. They detect the Cerenkov light of the charged leptons produced by neutrinos using PMTs. The pattern of the detected rings allows electrons to be distinguished from muons. This is the best technique for low rates and low-multiplicity events with energies below 1~GeV and also at very high energies. A different technique used by some neutrino accelerator experiments like MINOS, MINER$\nu$A and NO$\nu$A is {\it tracking calorimetry}. They use alternating planes of absorber material (such as lead) with detector planes for tracking (essentially liquid or plastic scintillators read by PMTs). This is appropriate for high-rate and high-multiplicity events with energies around 1~GeV. Another type of detector is the {\it unsegmented scintillator calorimeter} like KamLAND, Borexino and \ced{Double Chooz}. They provide large light yields at MeV energies. This is very convenient for the detection of reactor and solar neutrinos. \ced{\textit{Liquid argon time projection chambers} (LAr TPCs)} like ICARUS have high granularity and are potentially good for large masses. Finally we have the {\it emulsion technique}, which is in fashion again with the OPERA experiment. This is the only technique providing the micrometre-level spatial resolution needed, for example, to detect tau neutrinos. \section{Massive neutrinos} As already mentioned, there are only upper limits to neutrino masses. The direct limits come from the precise measurement of the endpoint of the lepton energy spectrum in weak decays, which gets modified if neutrinos are massive.
The SM predicts that neutrinos are precisely massless. In order to add a mass to the neutrino, the SM has to be extended. The SM gauge invariance does not imply lepton number symmetry. Total lepton number may or may not be a symmetry, depending on the nature of the neutrino. Neutrino masses can nevertheless be easily accommodated by extending the SM. A massive fermion necessarily has two states of helicity. The mass is the strength of the coupling between the two helicity states. To introduce such a coupling in the SM for the neutrinos, we need to identify the neutrino right-handed states, which in the SM are absent. There are two ways to proceed: \begin{enumerate} \item We introduce a right-handed neutrino coupled to the matter just through the neutrino masses and impose lepton number conservation ({\it Dirac neutrinos}) \begin{equation} \mathcal{L}=\mathcal{L}_{\rm SM} - M_{\nu}\overline{\nu_R}\nu_L + {\rm h.c.} \label{eq:a3} \end{equation} \item We do not impose lepton number conservation and we identify the right-handed state with the antiparticle of the left-handed state ({\it Majorana neutrinos}) \begin{equation} \mathcal{L}=\mathcal{L}_{\rm SM} - \tfrac{1}{2} M_{\nu}\overline{\nu_{L}^C}\nu_L + {\rm h.c.} \label{eq:a4} \end{equation} \end{enumerate} In the first case, we enlarge the SM by adding a set of three right-handed neutrino states, which would be singlets under $\mbox{SU}(3) \times \mbox{SU}(2) \times \mbox{U}(1)_Y$, but coupled to matter just through the neutrino masses. This coupling has to be of the Yukawa type to preserve the gauge symmetry. Masses are proportional to the vacuum expectation value of the Higgs field, like for the remaining fermions. One important consequence of this is a new hierarchy problem: Why are neutrinos much lighter than the remaining leptons? In the second case, Majorana identified the right-handed state with the antiparticle of the left-handed state.
$C$ is the operator of charge conjugation in spinor space that connects particle and anti\-particles: \begin{eqnarray} \nu_R \rightarrow (\nu_L)^c = C \bar{\nu}^T_L = C \gamma_0 \nu_L^* . \end{eqnarray} The Majorana neutrino masses are of the form \begin{eqnarray} m_\nu = \alpha_\nu \frac{v^2}{\Lambda}. \end{eqnarray} If $\Lambda$ is much higher than the electroweak scale $v$, a strong hierarchy between neutrino and charged lepton masses arises naturally. A Majorana mass violates the conservation of all charges carried by the fermion, including global charges such as lepton number. The simplest example to explain the origin of the scale $\Lambda$ in the Majorana masses is the famous {\it see-saw mechanism} \cite{see-saw}. In this case, the scale of the mass eigenvalues is much higher than the scale of electroweak symmetry breaking ($\Lambda \gg v$). The Majorana effective interaction results from the interchange of very heavy right-handed Majorana neutrinos. The new physics scale is simply related to the masses of the heavy Majorana neutrinos and the Yukawa couplings. Neutrino masses imply neutrino mixing, as happens in the quark sector. The Majorana and Dirac possibilities differ in the number of observable phases. The physical parameters are the mass eigenvalues, the mixing angles and the CP-violating phases. In the case of three families, there are three mixing angles and one phase for the Dirac case or three phases for the Majorana case. \section{Neutrino oscillations in vacuum and matter} If neutrinos have masses and mix, there can be neutrino flavour change. Oscillations appear because of the misalignment between the neutrino interaction eigenstates and the propagation mass eigenstates.
The neutrino flavour eigenstates, $\nu_\alpha$, produced in a weak interaction are linear combinations of the mass eigenstates $\nu_j$: \begin{eqnarray} |\nu_\alpha \rangle = \sum_{j=1}^n U_{\alpha j}^* |\nu_j \rangle , \end{eqnarray} where $n$ is the number of light neutrino species and $U$ is the \ced{Pontecorvo--Maki--Nakagawa--Sakata (PMNS)} mixing matrix. A standard parametrization of the mixing matrix is given by \begin{equation} U_\text{PMNS} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix} \begin{pmatrix} c_{13} & 0 & s_{13} \rme^{-\rmi \delta} \\ 0 & 1 & 0 \\ -s_{13} \rme^{\rmi \delta} & 0 & c_{13} \end{pmatrix} \begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \rme^{\rmi \alpha_1} & 0\\ 0 & 0 & \rme^{\rmi \alpha_2} \end{pmatrix} , \label{mns} \end{equation} where $c_{ij}=\cos\theta_{ij}$ and $s_{ij}=\sin\theta_{ij}$, with $\theta_{12}$, $\theta_{13}$ and $\theta_{23}$ the three mixing angles, $\delta$ is the Dirac CP-violating phase, and $\alpha_1$ and $\alpha_2$ are the Majorana phases, not accessible by oscillation experiments. After travelling a distance $L$ (or, equivalently for relativistic neutrinos, time $t$), a neutrino originally produced with a flavour $\alpha$ evolves as follows: \begin{eqnarray} |\nu_\alpha(t) \rangle = \sum_{j=1}^n U_{\alpha j}^* |\nu_j(t) \rangle .
\end{eqnarray} Using the standard approximation that the neutrino state is a plane wave $|\nu_j(t) \rangle = \rme^{-\rmi E_j t} |\nu_j(0) \rangle$, that neutrinos are relativistic with \begin{eqnarray} E_j = \sqrt{p_j^2 + m_j^2} \approx p + \frac{m_j^2}{2E} \end{eqnarray} and the orthogonality relation $\langle \nu_i(0)|\nu_j(0) \rangle = \delta_{ij}$, the transition probability between $\nu_\alpha$ and $\nu_\beta$ is \begin{eqnarray} P(\nu_\alpha{\rightarrow}\nu_\beta) = |\langle \nu_\beta |\nu_\alpha(t)\rangle|^2 &=& \left| \sum_{j=1}^n \sum_{k=1}^n U_{\alpha j}^* U_{\beta k} \langle \nu_k |\nu_j(t)\rangle \right|^2 \nonumber \\ &\approx& \sum_{j,k} U^*_{\alpha j} U_{\beta j} U_{\alpha k} U^*_{\beta k} \, \rme^{{-\rmi} {\Delta m^2_{jk} L}/{2E}} , \end{eqnarray} with $\Delta m^2_{jk} = m_j^2 - m_k^2$. The probability for flavour transition is a periodic function of the distance between the source and the detector. Dominant oscillations are well described by effective two-flavour oscillations. The three-flavour oscillation neutrino effects are suppressed because of the small value of $\theta_{13}$ and the hierarchy between the two mass splittings, $\Delta m^2_{21} \ll \Delta m^2_{32}$. In most cases the problem can be reduced to two-flavour oscillations. In the simplest case of two-family mixing, the mixing matrix depends on just one mixing angle and there is only one mass square difference. The probability that a neutrino $\nu_\alpha$ of energy $E_\nu$ oscillates into a neutrino $\nu_\beta$ after travelling a distance $L$ is given by \begin{eqnarray} P(\nu_\alpha{\rightarrow}\nu_\beta) = \sin^2 2\theta \sin^2\!\left(\frac{\Delta m^2 L}{4 E_\nu}\right), \qquad\quad \alpha\neq\beta . \label{eq:wk} \end{eqnarray} The probability is the same for neutrinos and antineutrinos, since there are no imaginary entries in the mixing matrix. 
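As a quick numerical check of the two-flavour formula above, the sketch below evaluates the oscillation probability in practical units, where the phase $\Delta m^2 L/4E$ becomes $1.267\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]$. The atmospheric-like parameter values are illustrative round numbers, not quoted from the text:

```python
import math

def p_osc(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavour vacuum appearance probability:
    P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative, atmospheric-like parameters with maximal mixing assumed:
dm2, L = 2.4e-3, 735.0                     # eV^2, km
E_max = 1.267 * dm2 * L / (math.pi / 2)    # energy of the first oscillation maximum
# p_osc(1.0, dm2, L, E_max) is ~1 at the first maximum and oscillates below it
```

Scanning `E_GeV` (or `L_km`) maps out the oscillation pattern that experiments fit to extract $\sin^2 2\theta$ and $\Delta m^2$.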
The transition probability has an oscillatory behaviour with a period determined by the oscillation length ($L_{\rm osc}$), which is proportional to the neutrino energy and inversely proportional to the neutrino mass-squared difference, and an amplitude given by $\sin^2 2\theta$. Hence the name ``neutrino oscillations'': \begin{eqnarray} L_{\rm osc} = \frac{4 \pi E_\nu}{\Delta m^2}. \end{eqnarray} If $L \gg L_{\rm osc}$, the oscillating phase goes through many cycles before detection and is averaged to 1/2. Experimentally, the free parameters are the source--detector distance and the neutrino energy. In order to be sensitive to a given value of $\Delta m^2$, the experiment has to be set up with $E/L \approx \Delta m^2$. For example, to measure $\theta_{23}$ and $\Delta m^2_{32}$ parameters, one should look for an $L/E$ of around 500~km/GeV (which is the case for atmospheric neutrinos). To measure $\theta_{12}$ and $\Delta m^2_{21}$ parameters $L/E$ should be around 15\,000~km/GeV (solar neutrino case). In the most general case of three neutrino families, the oscillation probability can be rewritten in one term conserving CP and another term violating CP, as follows: \begin{eqnarray} P(\nu_\alpha{\rightarrow}\nu_\beta) = \delta_{\alpha\beta} -4 \sum_{i<j}^n {\rm{Re}}[J^{\alpha\beta}_{ij}] \sin^2\!\left(\frac{\Delta m^2_{ij} L}{4 E_\nu}\right) \pm 2 \sum_{i<j}^n {\rm{Im}}[J^{\alpha\beta}_{ij}] \sin\!\left(\frac{\Delta m^2_{ij}L}{2 E_\nu}\right), \label{eq:prob} \end{eqnarray} with $J^{\alpha\beta}_{ij} \equiv U_{\alpha i} U_{\beta i}^* U_{\alpha j}^* U_{\beta j}$. The two terms have opposite sign for neutrinos and antineutrinos. By comparing neutrino and antineutrino oscillation probabilities, we could test the violation of CP.
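The oscillation length and the quoted $L/E$ scales can be recovered numerically. In the sketch below the mass splittings are illustrative round values (atmospheric-like and solar-like), not numbers taken from the text:

```python
import math

def L_osc_km(E_GeV, dm2_eV2):
    """Oscillation length L_osc = 4*pi*E/dm2, expressed in km:
    the phase 1.267*dm2*L/E completes one sin^2 period when it reaches pi."""
    return math.pi * E_GeV / (1.267 * dm2_eV2)

# The first oscillation maximum sits at L_osc/2:
atm = L_osc_km(1.0, 2.4e-3) / 2   # atmospheric-like splitting -> ~500 km/GeV scale
sol = L_osc_km(1.0, 7.6e-5) / 2   # solar-like splitting -> ~15 000 km/GeV scale
```

This is why accelerator experiments sit at baselines of a few hundred kilometres, while the solar splitting needs Earth-scale baselines (or reactor experiments at lower energies).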
From the experimental point of view, to measure neutrino oscillations, we need to compute or to measure the flavour composition and the flux and energy spectra of the produced neutrinos (near data) and also the interaction cross-section at their energies. After propagation of neutrinos through a distance $L$, we need to measure the flavour composition and energy spectrum (far data) with a detector. By comparing predictions with observations or near/far data, we can measure neutrino oscillations and determine the oscillation parameters. When neutrinos propagate in matter, the interactions with the medium affect their properties. Their propagation amplitude is modified by coherent forward scattering on electrons and nucleons. Different flavours have different interactions. The effect of the medium can be described by an effective potential that depends on the density and composition of the matter. The effective potential for the evolution of $\nu_\rme$ in a medium with electrons, protons and neutrons due to its CC interactions is given by \begin{equation} V_{\text{CC}} = \pm \sqrt{2} G_F n_\rme , \end{equation} where $n_\rme$ is the electron number density and $G_F$ is the Fermi constant. The effective potential has opposite signs for neutrinos and antineutrinos. For example, the matter potential at the Earth's core is $\sim 10^{-13}$~eV while at the solar core it is $\sim 10^{-12}$~eV. In spite of these tiny values, these effects are non-negligible in neutrino oscillations. For $\nu_\mu$ and $\nu_\tau$, the potential due to CC interactions is zero, since neither muons nor taus are present in the medium. The effective potential for any active neutrino due to neutral-current interactions in a neutral medium can be written as \begin{equation} V_{\text{NC}} = \mp \frac{\sqrt{2}}{2} \, G_F n_\rmn , \end{equation} where $n_\rmn$ is the number density of neutrons.
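The orders of magnitude quoted for the matter potential follow directly from $V_{\rm CC}=\sqrt{2}\,G_F n_\rme$. The short sketch below checks them; the densities and electron fractions are rough illustrative values, not numbers from the text:

```python
import math

G_F = 1.1663787e-5       # Fermi constant [GeV^-2]
HBARC = 1.97327e-14      # hbar*c [GeV*cm], to convert cm^-3 to GeV^3
N_A = 6.02214e23         # Avogadro's number [mol^-1]

def v_cc_eV(rho_g_cm3, Y_e):
    """Charged-current potential V_CC = sqrt(2)*G_F*n_e, in eV,
    for a medium of mass density rho [g/cm^3] and electron fraction Y_e."""
    n_e = Y_e * rho_g_cm3 * N_A * HBARC**3   # electron number density in GeV^3
    return math.sqrt(2.0) * G_F * n_e * 1e9  # GeV -> eV

# Rough illustrative inputs:
earth_core = v_cc_eV(12.0, 0.5)    # a few times 1e-13 eV
solar_core = v_cc_eV(150.0, 0.7)   # ~1e-12..1e-11 eV
```

Despite these tiny numbers, the potential enters the evolution multiplied by $2E/\Delta m^2$, which is why it matters for oscillations.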
In general, the electron number density in the medium changes along the neutrino trajectory and so does the effective potential. We can describe neutrino oscillations in a medium as in vacuum but with an effective mass matrix (${\tilde M}_\nu^2$) that depends on the neutrino energy and the matter density, as follows: \begin{equation} {\tilde M}_\nu^2 = M_\nu^2 \pm 2 E V_{\rm m} , \end{equation} with \begin{eqnarray} V_{\rm m} = \begin{pmatrix} V_\rme = V_{\text{CC}} + V_{\text{NC}} & 0 & 0\\ 0 & V_{\mu} = V_{\text{NC}} & 0 \\ 0 & 0 & V_{\tau} = V_{\text{NC}} \end{pmatrix} . \end{eqnarray} In the case of two flavours, the mixing angle and effective masses in matter can be written as \begin{equation} \tan 2\theta_{\rm m} = \frac{\Delta m^2 \sin 2\theta}{\Delta m^2 \cos 2\theta - A} \end{equation} and \begin{equation} \mu^2_{1,2}(x) = \frac{m_1^2 + m_2^2}{2} + E (V_\alpha + V_\beta) \mp \frac{1}{2} \sqrt{(\Delta m^2 \cos 2\theta - A)^2 + (\Delta m^2 \sin 2\theta)^2}. \end{equation} They depend on the matter density and neutrino energy. The $-$ (+) sign corresponds to neutrinos (antineutrinos). The quantity $A$ is defined as $A \equiv 2 E (V_\alpha - V_\beta)$, the potential difference factor between $\alpha$ and $\beta$ flavours. Depending on the sign of $A$, the mixing angle in matter can be larger or smaller than in vacuum. For constant potential, the mixing angle and effective masses are constant along the neutrino evolution. Matter effects are important when the potential difference factor $A$ is comparable to the mass difference term $\Delta m^2 \cos 2\theta$. The oscillation amplitude has a resonance when the neutrino energy satisfies this relation: \begin{equation} A_R= \Delta m^2 \cos 2\theta . \end{equation} Even if the mixing angle in vacuum is very small, we will have maximal mixing at the resonance condition. The resonance happens for neutrinos or antineutrinos but not for both, and depends on the sign of $\Delta m^2 \cos 2\theta$.
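The resonance behaviour of the effective mixing angle can be verified directly from the $\tan 2\theta_{\rm m}$ expression above. In the sketch below the vacuum angle and mass splitting are illustrative values, not parameters quoted in the text:

```python
import math

def theta_m(theta, dm2, A):
    """Effective mixing angle from tan(2*theta_m) = dm2*sin(2t)/(dm2*cos(2t) - A);
    atan2 keeps theta_m in the correct quadrant as A crosses the resonance."""
    return 0.5 * math.atan2(dm2 * math.sin(2 * theta),
                            dm2 * math.cos(2 * theta) - A)

theta, dm2 = 0.1, 7.6e-5            # illustrative small vacuum angle, eV^2
A_R = dm2 * math.cos(2 * theta)     # resonance condition A_R = dm2*cos(2*theta)
# theta_m(theta, dm2, 0.0)    -> vacuum angle theta
# theta_m(theta, dm2, A_R)    -> pi/4, maximal mixing at the resonance
# theta_m(theta, dm2, 10*A_R) -> approaches pi/2 far above the resonance
```

This reproduces the statement in the text: even for a small vacuum angle, the mixing in matter becomes maximal when $A = A_R$.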
The value of the mixing angle in matter changes if the density is changing along the neutrino trajectory. The quantity $\tan 2\theta_{\rm m}$ changes sign at $A_R$. For $A \gg A_R$, we have $\theta_{\rm m} = \pi/2$. For $A = A_R$, $\theta_{\rm m} = \pi/4$. For a neutrino system that is travelling across a monotonically varying matter potential, the dominant flavour component of a given mass eigenstate changes when crossing the region with $A = A_R$. This phenomenon is known as {\it level crossing}. For constant or sufficiently slowly varying matter potential, the instantaneous mass eigenstates behave approximately as energy eigenstates and they do not mix in the evolution. This is the {\it adiabatic transition} approximation. The \ced{Mikheyev--Smirnov--Wolfenstein (MSW)} effect~\cite{msw} describes the adiabatic flavour neutrino conversion in a medium with varying density. We can consider the propagation of a two-family neutrino system in the matter density of the Sun. The solar density decreases monotonically with the distance to the centre of the Sun. The eigenstates in matter can be written as \begin{eqnarray} |\nu_1^{\rm m}\rangle &=& |\nu_\rme\rangle \cos\theta_{\rm m} - |\nu_\mu\rangle \sin\theta_{\rm m} ,\\ |\nu_2^{\rm m}\rangle &=& |\nu_\rme\rangle \sin\theta_{\rm m} + |\nu_\mu\rangle \cos\theta_{\rm m} . \label{eq:eigenmat} \end{eqnarray} Neutrinos are produced close to the centre where the electron density ($n_\rme(0)$) is very large. The potential is much larger than the resonance potential \begin{equation} 2 E \sqrt{2} G_F n_\rme(0) \gg \Delta m^2 \cos 2\theta , \end{equation} and therefore the mixing angle in matter is $\theta_{\rm m} = \pi/2$. In this case, the electron neutrino is mostly the second mass eigenstate ($\nu_\rme \approx \nu_2^{\rm m}$). When neutrinos exit the Sun, the matter density falls to zero and the effective mixing angle is the one in vacuum, $\theta_{\rm m} = \theta$.
If $\theta$ is small, the eigenstate $\nu_2^{\rm m}$ is mostly $\nu_\mu$. There is maximum conversion $\nu_\rme \rightarrow \nu_\mu$ if the adiabatic approximation is correct (Fig.~\ref{fig:msw}). This is the MSW effect. There is a level crossing in the absence of mixing. As we will explain later, the deficit of electron neutrinos coming from the Sun has been interpreted in terms of an MSW effect in neutrino propagation in the Sun. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{GilBotelaFigs/msw2.png} \caption{Effective masses acquired in the medium by a system of two massive neutrinos as a function of the potential $A$.} \label{fig:msw} \end{center} \end{figure} \section{Experimental results from neutrino oscillation experiments} Over the years, neutrino experiments have provided spectacular evidence for neutrino oscillations. There are essentially three pieces of evidence: one provided by solar and reactor neutrinos, a second by atmospheric and accelerator neutrinos, and a third by the \ced{Liquid Scintillator Neutrino Detector (LSND)} experiment. They correspond to three values of mass-squared differences of different orders of magnitude. There is no consistent explanation of all three signals based on oscillations among the three known neutrinos, since there are only two independent mass-squared differences. In the next sections, I will describe these experimental results in detail. \subsection{Solar neutrinos} Solar electron neutrinos are produced in thermonuclear reactions happening in the Sun through two main processes, the \ced{proton--proton (pp) chain} and the \ced{carbon--nitrogen--oxygen (CNO) cycle.} There are five reactions that produce $\nu_\rme$ in the pp chain and three in the CNO cycle.
Figure~\ref{fig:bahcall} shows the solar neutrino spectrum as predicted by Bahcall~\cite{bahcall} from the eight reactions. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{GilBotelaFigs/bahcall.png} \caption{Neutrino fluxes from the pp chain reactions and the CNO cycle reactions as a function of the neutrino energy.} \label{fig:bahcall} \end{center} \end{figure} The standard solar model (SSM) is the theoretical model describing the evolution of the Sun and allows one to predict the spectra and the fluxes of all the solar neutrino sources. As a consequence, solar neutrinos provide a unique probe for studying both the nuclear fusion reactions that power the Sun and the fundamental properties of neutrinos. The first indication of oscillations came in the 1970s from measurements of the solar neutrino flux. Radiochemical experiments were trying to understand the energy production mechanism in the Sun and they found a huge difference between what they measured and what was expected from solar models. The Davis experiment was installed in the Homestake mine in South Dakota~\cite{homestake}. They built a 615~ton tank of perchloroethylene C$_2$Cl$_4$ to measure the $\nu_\rme$ interaction with Cl, which produces the radioactive isotope $^{37}$Ar that can be extracted and counted. \begin{equation} \nu_\rme + \text{$^{37}$Cl} \rightarrow \text{$^{37}$Ar} + \rme^- \end{equation} The energy threshold for this reaction is 0.814~MeV, so the relevant fluxes are the $^7$Be and $^8$B neutrinos. The $^{37}$Ar produced is extracted radiochemically approximately every three months and the number of $^{37}$Ar decays is measured in a proportional counter. In the 1990s, other radiochemical experiments like GALLEX/GNO \cite{gallex} in Italy and SAGE~\cite{sage} in Russia tried to measure the solar neutrinos using a $^{71}$Ga target and extracting Ge isotopes.
\begin{equation} \nu_\rme + \text{$^{71}$Ga} \rightarrow \text{$^{71}$Ge} + \rme^- \end{equation} This reaction has a very low energy threshold ($E_\nu > 0.233$~MeV) and a large cross-section for the lower-energy pp neutrinos. The extraction of $^{71}$Ge takes place every 3--4 weeks. The GALLEX programme was completed in the autumn of 1997 and its successor GNO started taking data in spring 1998. All the radiochemical neutrino experiments found a solar neutrino flux much lower than the predicted value (between 30\% and 50\% of it). They could provide information on neither the directionality nor the energy of the neutrinos. The Kamiokande experiment~\cite{kamioka} pioneered a new technique to observe solar neutrinos using water Cerenkov detectors. This was a real-time experiment and provided information on the directionality and the energy of neutrinos by measuring the electrons scattered in the elastic reaction \begin{equation} \nu_\rme + \rme^- \rightarrow \nu_\rme + \rme^- \end{equation} producing Cerenkov light, which is detected by photomultipliers. The threshold for this type of experiment is much higher and they are only able to measure the $^8$B neutrinos. Later on, the Super-Kamiokande (SK) experiment~\cite{sk_solar}, with 50~kton of water, measured the solar neutrinos with unprecedented precision in the energy region 5--20~MeV. Figure~\ref{fig:solar_sk} shows the reconstructed direction of the incoming neutrinos correlated to the Sun direction as measured by SK during the first phase of operation (1996--2001). \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{GilBotelaFigs/sk_solar2-eps-converted-to.pdf} \caption{Solar neutrino direction as a function of the zenith angle of the Sun in Super-Kamiokande phase I (from Ref.~\cite{sk_solar}).} \label{fig:solar_sk} \end{center} \end{figure} In 2001 the SNO experiment showed clear evidence that solar neutrinos oscillate.
This allowed the solar model predictions to be studied independently of the neutrino properties. SNO~\cite{sno} is a Cerenkov detector made of 1~kton of heavy water (D$_2$O) located underground in the Sudbury mine in Canada and is able to detect $^8$B solar neutrinos via three different reactions: \begin{itemize} \item CC interactions on deuterons in which only electron neutrinos participate \begin{equation} \nu_\rme + \rmd \rightarrow \rmp + \rmp + \rme^{-} \end{equation} \item elastic scattering (ES) sensitive to other neutrino flavours but dominated by electron neutrinos \begin{equation} \nu_x + \rme^- \rightarrow \nu_x + \rme^- \end{equation} \item NC interactions with equal sensitivity to all flavours and an energy threshold of 2.2~MeV \begin{equation} \nu_x + \rmd \rightarrow \rmp + \rmn + \nu_x \end{equation} \end{itemize} In the case of no oscillations, the neutrino fluxes from the three interactions should be equal since there are only electron neutrinos coming from the Sun. However, Fig.~\ref{fig:sno} shows the neutrino fluxes measured by the three reactions by SNO. The flux of non-electron neutrinos ($\phi_{\mu\tau}$) is plotted as a function of the electron neutrino flux ($\phi_\rme$). The NC events give a measure of the total solar neutrino flux, which is in good agreement with the SSM theoretical predictions. SNO can test whether the deficit of solar $\nu_\rme$ is due to changes in the flavour composition of the solar neutrino beam, since the ratio CC/NC compares the number of $\nu_\rme$ interactions with those from all active flavours. This comparison is independent of the overall flux normalization.
\begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/sno-eps-converted-to.pdf} \caption{Flux of $^8$B solar neutrinos that are $\mu$ or $\tau$ flavour versus flux of electron neutrinos deduced from the three neutrino reactions in SNO (from Ref.~\cite{sno-I}).} \label{fig:sno} \end{center} \end{figure} The SNO detector operated during 1999--2006 in three phases with different detection techniques to detect NC neutrons: phase~I, in pure heavy water; phase~II, 2000~kg of salt were dissolved in the heavy water, increasing the neutron capture cross-section; and phase~III, the salt was removed and ultra-pure $^3$He counters were deployed into the SNO detector. SNO finished data taking in November 2006. From the latest results including the SNO-III phase, the ratio between the CC and NC events is~\cite{sno_ratio} \begin{equation} \frac{\phi_{\rm CC}}{\phi_{\rm NC}} = 0.301 \pm 0.033. \end{equation} This result provides clear evidence for solar neutrino oscillations independently of the solar model. From the SNO results, it is possible to constrain the neutrino mixing parameters. Figure~\ref{fig:sno_region} shows the allowed regions of parameters from SNO data (left) and from the global analysis including data from all the solar experiments (right). Of all the possible solutions, only the one at the largest mixing angle and mass-squared difference survives, the famous \ced{large-mixing-angle (LMA)} solution, for which matter effects in the Sun are important.
\begin{figure}[ht] \begin{center} \includegraphics[width=0.6\linewidth]{GilBotelaFigs/sno_reg1-eps-converted-to.pdf} \includegraphics[width=0.3\linewidth]{GilBotelaFigs/sno_reg2-eps-converted-to.pdf} \caption{Allowed oscillation parameters from the analysis of SNO neutrino data (left) and from the global analysis of all solar neutrino data (right) in terms of neutrino oscillations (from Ref.~\cite{sno_ratio}).} \label{fig:sno_region} \end{center} \end{figure} The phase~I and II data from SNO have been reanalysed (see Fig.~\ref{fig:sno_new})~\cite{sno_new} with a lower effective electron kinetic energy threshold (3.5~MeV). The total uncertainty on the flux of $^8$B solar neutrinos has been reduced by more than a factor of~2 compared to the best previous SNO results. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/sno_new.png} \caption{Total $^8$B neutrino flux results using the NC reaction from both unconstrained signal extraction fits (LETA) in comparison to unconstrained fit results from previous SNO analyses (from Ref.~\cite{sno_new}).} \label{fig:sno_new} \end{center} \end{figure} One of the most important results of the last few years has come from the Borexino detector in the Gran Sasso laboratory. Borexino \cite{borexino} is a 300~ton ultra-pure liquid scintillator detector using elastic scattering on electrons to measure the low-energy flux and spectrum of solar neutrinos. The main goal is the measurement of the monochromatic $^7$Be solar neutrinos at 0.862~MeV. Thanks to its excellent radiopurity, Borexino also measures $^8$B neutrinos with an energy threshold of only 3~MeV, the lowest ever reached in real-time experiments. Before Borexino, radiochemical experiments measured the very low energy range (where oscillations happen essentially in vacuum), while SNO and SK measured the $^8$B part of the spectrum.
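The distinction between these two regimes can be made explicit. In the MSW--LMA picture, the electron-neutrino survival probability approaches two standard limiting forms (quoted here for orientation, in the two-flavour approximation):
\begin{equation}
P_{\rm ee} \simeq 1 - \tfrac{1}{2}\sin^2 2\theta_{12} \quad \mbox{(low energy, vacuum-averaged)}, \qquad
P_{\rm ee} \simeq \sin^2\theta_{12} \quad \mbox{(high energy, matter-dominated)}.
\end{equation}
For $\sin^2\theta_{12} \approx 0.31$, this predicts $P_{\rm ee} \approx 0.57$ at low energies and $P_{\rm ee} \approx 0.31$ at high energies, which is precisely the transition probed by a detector covering both regions.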
Borexino has measured the $^7$Be spectrum and provided a confirmation of the MSW--LMA model. This is the first direct measurement of the survival probability for solar electron neutrinos in the transition region between matter-enhanced and vacuum-driven oscillations~\cite{borexino_results1}. A prediction of the MSW--LMA model is that neutrino oscillations are dominated by vacuum oscillations at low energies ($<1$~MeV) and by resonant matter-enhanced oscillations taking place in the Sun's core at high energies ($>5$~MeV). A measurement of the survival probability as a function of the neutrino energy is therefore very important to confirm the MSW--LMA solution. Figure~\ref{fig:borexino_results} shows the survival probability ($P_{\rm ee}$) before (left) and after (right) including the Borexino data, together with the fit assuming LMA oscillations. The MSW--LMA model is confirmed at the 4.2$\sigma$ level. For the first time, the same apparatus can measure two different oscillation regions predicted by the MSW--LMA model. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\linewidth]{GilBotelaFigs/borex_1.png} \includegraphics[width=0.4\linewidth]{GilBotelaFigs/borex_2.png} \caption{Comparison of solar neutrino fluxes as a function of the energy measured by several solar neutrino experiments before (left) and after (right) Borexino data (from Ref.~\cite{borexino_results2}).} \label{fig:borexino_results} \end{center} \end{figure} ``Geoneutrinos'' are electron antineutrinos produced by beta decays of nuclei in the decay chains of $^{238}$U and $^{232}$Th. They are direct messengers of the abundances and distribution of radioactive elements within our planet. By measuring their flux and spectrum, it is possible to reveal the distribution of long-lived radioactivity in the Earth and to assess the radiogenic contribution to the total heat balance of the Earth.
Measuring these antineutrinos therefore serves as a cross-check of the radiogenic heat production rate. KamLAND was the first detector to conduct an investigation of geoneutrinos~\cite{kamland_geo}: in 2005 it provided the first experimental indication for geoneutrinos. Borexino has also been able to measure geoneutrinos, at 4.2$\sigma$~\cite{borex_geo}. Both detectors use the inverse beta decay to detect geoneutrinos. \subsection{Atmospheric neutrinos} Atmospheric neutrinos are produced in the collisions of primary cosmic rays (typically protons) with nuclei in the upper atmosphere. This creates a shower of hadrons, mostly pions. The pions decay to a muon and a muon neutrino, and the muons in turn decay to an electron, another muon neutrino and an electron neutrino. Based on this simple kinematic chain, one predicts a flux ratio of two muon neutrinos to one electron neutrino. The first experiment proving neutrino oscillations without ambiguities was, in 1998, the Super-Kamiokande experiment, located 1000~m underground in the Kamioka mine in Japan. This 50~kton water Cerenkov detector (22.5~kton fiducial mass) measured the atmospheric neutrinos produced by cosmic-ray collisions with the atmosphere, with energies between 0.1 and 100~GeV. More than 11\,000 20-inch PMTs covering 40\% of the surface detect the Cerenkov light coming from the neutrino CC interactions. By measuring the number of events of each type, as a function of energy and direction, one can find out whether neutrino oscillations are affecting the results. SK has shown a large deficit of muon neutrinos, dependent on the energy and at distances compatible with neutrino oscillations. The distributions of electrons and muons as a function of the zenith angle show an asymmetry between upward- and downward-going muon neutrinos (Fig.~\ref{fig:sk_atm}).
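This up--down asymmetry follows directly from the baseline dependence of the two-flavour survival probability (standard form):
\begin{equation}
P(\nu_\mu \rightarrow \nu_\mu) = 1 - \sin^2 2\theta_{23} \, \sin^2\!\left(\frac{\Delta m^2_{32} L}{4E}\right),
\end{equation}
since downward-going neutrinos travel only $L \sim 15$~km, for which the oscillation phase is negligible at GeV energies, while upward-going neutrinos cross the Earth with $L$ up to $\sim 13\,000$~km, long enough for the oscillation to fully develop.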
The muon neutrinos traversing the Earth present a clear deficit, which is not the case for downward-going muon neutrinos or for electron neutrinos. This deficit is compatible with $\nu_\mu$--$\nu_\tau$ oscillation~\cite{sk_osc}. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/sk_atm.png} \caption{Zenith angle distribution of SK data. Dots, solid line and dashed line correspond to data, Monte Carlo without oscillation, and Monte Carlo with best-fit oscillation parameters, respectively.} \label{fig:sk_atm} \end{center} \end{figure} On 12 November 2001, about 6600 of the photomultiplier tubes in the Super-Kamiokande detector imploded, apparently in a chain reaction due to a shock wave. The detector was partially restored by redistributing the photomultiplier tubes that did not implode. In 2005, 6000 PMTs were reinstalled, and the new phase was called SK-III. The zenith-angle two-flavour analysis of the data taken before the SK PMT implosion (SK-I and SK-II) has been updated~\cite{sk_oscpar}, allowing better constraints on the $\Delta m^2_{32}$ and $\theta_{23}$ oscillation parameters. SK was also able to observe the expected dip in the $L/E$ spectrum due to oscillations (Fig.~\ref{fig:sk_dip}). Alternative hypotheses have been excluded at the 4.1$\sigma$ and 5$\sigma$ levels. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/sk_dip-eps-converted-to.pdf} \caption{Ratio of the data to the non-oscillated Monte Carlo events (points) with the best-fit expectation for two-flavour $\nu_\mu \rightarrow \nu_\tau$ oscillation analysis (solid line) as a function of $L/E$ (from Ref.~\cite{sk_dip}).} \label{fig:sk_dip} \end{center} \end{figure} The latest zenith angle and $L/E$ analysis results from SK-I, II and III data~\cite{sk_latest} are consistent and provide the most stringent constraint on $\sin^2\theta_{23}$. \subsection{Reactor neutrinos} Reactor neutrinos have also played a crucial role in neutrino oscillations.
They have helped to understand the solar anomaly and they have provided unique information on the $\theta_{13}$ mixing angle, still unknown. Nuclear reactors are the major source of human-generated neutrinos. They are very intense, pure and isotropic sources of antineutrinos, coming from the beta decay of the neutron-rich fission fragments. The four main isotopes contributing to the antineutrino flux are $^{235}$U, $^{238}$U, $^{239}$Pu and $^{241}$Pu. On average, each fission produces $\sim 200$~MeV and six antineutrinos. For typical modern commercial light-water reactors with a thermal power of the order of 3~GW$_{\rm th}$, the typical yield is \mbox{$\sim 6 \times 10^{20}$} antineutrinos per core per second. However, not all these neutrinos can be detected. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/anue_spect.png} \caption{Typical energy spectrum of antineutrinos from nuclear reactors.} \label{fig:anue_spect} \end{center} \end{figure} The observed neutrino spectrum is the product of the reactor neutrino flux and the inverse beta decay cross-section, as shown in Fig.~\ref{fig:anue_spect}. The inverse beta decay (Eq.~(\ref{eq:ibd})) has an energy threshold of 1.8~MeV, and only about 1.5~$\bar{\nu}_\rme$/fission (25\% of the total) can be detected. \begin{equation} \bar{\nu}_\rme + \rmp \rightarrow \rme^+ + \rmn \label{eq:ibd} \end{equation} Past reactor experiments looked for the disappearance of reactor $\bar{\nu}_\rme$ at short baselines, with the goal of solving the atmospheric problem. All of them found negative results. The most sensitive of these experiments was CHOOZ. CHOOZ~\cite{chooz_result} looked for the disappearance of electron antineutrinos from the Chooz nuclear power plant in France in the 1990s. CHOOZ was a relatively simple liquid scintillator detector doped with 0.1\% Gd, located 1.05~km away from the reactors. It was hosted in a cylindrical pit 7~m in diameter and height.
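As a rough consistency check of the antineutrino yield quoted above (a back-of-envelope estimate using only the numbers given in the text): a core with a thermal power of 3~GW$_{\rm th}$, releasing $\sim 200$~MeV per fission, sustains
\begin{equation}
\frac{3\times 10^{9}~{\rm J/s}}{200~{\rm MeV} \times 1.6\times 10^{-13}~{\rm J/MeV}} \approx 9\times 10^{19}~\mbox{fissions per second},
\end{equation}
which, at six antineutrinos per fission, indeed gives $\approx 6\times 10^{20}$~$\bar{\nu}_\rme$ per core per second.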
The cylindrical steel tank was surrounded by 75~cm of low-radioactivity sand contained in an acrylic vessel and covered by cast iron. The target was 5~ton of 0.1\% Gd-loaded liquid scintillator contained in a transparent acrylic vessel. A 17~ton non-Gd-loaded liquid scintillator region contained 192 eight-inch PMTs, and a muon veto region was read by two rings of 24 eight-inch PMTs. This experiment has strongly influenced the present and upcoming reactor experiments. It had the unique opportunity of having periods with both reactors off and periods with only one of the reactors on, which allowed a good measurement of the backgrounds. The ratio of measured to expected events was $1.01 \pm 2.8\%$~(stat.) $\pm\ 2.7\%$~(sys.). No evidence for $\bar{\nu}_\rme$ disappearance at the $10^{-3}$~eV$^2$ scale was found. Although the result was negative, it allowed a region of the oscillation parameter space to be excluded (Fig.~\ref{fig:chooz_excl}). The upper bound obtained is $\sin^2(2\theta_{13}) < 0.12\mbox{--}0.2$ at 90\% confidence level (CL), depending on the value of $\Delta m^2_{32}$. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/exclplota-eps-converted-to.pdf} \caption{Exclusion region in the parameter space by the CHOOZ data (from Ref.~\cite{chooz_result}).} \label{fig:chooz_excl} \end{center} \end{figure} The CHOOZ constraint is also relevant to the global interpretation of the solar and atmospheric neutrino data in the framework of three-neutrino mixing. Neutrino oscillation in the solar range has been confirmed with reactor neutrinos by the KamLAND long-baseline reactor experiment \cite{kamland}.
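For reactor antineutrinos at such long baselines, the relevant disappearance channel is governed, in the two-flavour approximation (neglecting $\theta_{13}$), by
\begin{equation}
P(\bar{\nu}_\rme \rightarrow \bar{\nu}_\rme) = 1 - \sin^2 2\theta_{12} \, \sin^2\!\left(\frac{\Delta m^2_{21} L}{4E}\right),
\end{equation}
so for $\Delta m^2_{21} \sim 8 \times 10^{-5}$~eV$^2$ and few-MeV antineutrinos the oscillation effect becomes large only at baselines of order 100~km, which is precisely the regime probed by KamLAND.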
KamLAND is a 1~kton liquid scintillator detector located at the Kamioka mine in Japan (at the old Kamiokande site), at an average distance of $L_{0} = 180$~km from 55 nuclear reactors and at a depth of 2700~mwe (metres of water equivalent). It looked for the disappearance of electron antineutrinos at $\Delta m^2 \sim E/L \sim 10^{-5}$~eV$^2$, in the oscillation range indicated by the solar data. It started taking data in 2002 and finished in 2007. The liquid scintillator is contained in a 13~m diameter spherical nylon balloon surrounded by oil in an 18~m diameter spherical stainless-steel vessel, which holds the 1879 PMTs with a photocathode coverage of 34\%. A cylinder filled with water surrounds the previous volumes, acting as a Cerenkov veto against backgrounds (cosmic muons, gamma rays and neutrinos from the surrounding rock). This is the largest scintillator detector ever constructed. Neutrinos are detected through the inverse beta decay reaction (Eq.~(\ref{eq:ibd})), the neutrons being captured on protons, emitting 2.22~MeV photons. KamLAND reported the first evidence for the disappearance of reactor electron antineutrinos in 2002~\cite{kamland_1}. In Fig.~\ref{fig:kamland_results} we can see the ratio of observed to expected events (without oscillations) as a function of the distance. The deficit measured by KamLAND ($R = 0.611 \pm 0.085$~(stat.) $\pm\ 0.041$~(syst.) for $\bar{\nu}_\rme > 3.5$~MeV) is compared with previous unsuccessful reactor experiments. This was consistent with the LMA region. KamLAND presented the first evidence of spectral distortion in 2004~\cite{kamland_2}. Figure~\ref{fig:kamland_results} shows data compared with the non-oscillation scenario and with the best-fit oscillation spectrum as a function of the prompt event energy ($E_{\rm prompt} \approx E_{\bar{\nu}_\rme} + m_\rmp - m_\rmn + m_\rme$). The shaded band indicates the systematic error in the best-fit reactor spectrum above 2.6~MeV.
The observed energy spectrum disagrees with the expected spectral shape in the absence of neutrino oscillation at 99.6\% significance and prefers the distortion expected from oscillation effects. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\linewidth]{GilBotelaFigs/kamland_1.png} \includegraphics[width=0.5\linewidth]{GilBotelaFigs/kamland_2.png} \caption{Evidence for $\bar{\nu}_\rme$ disappearance (left) and spectral distortion (right) measured by KamLAND.} \label{fig:kamland_results} \end{center} \end{figure} KamLAND has presented new results~\cite{kamland_3} with more statistics and a lower energy threshold (0.9~MeV compared to 2.6~MeV). The fiducial volume has been enlarged and a campaign to purify the liquid scintillator has been carried out. The systematic uncertainty on the number of target protons and on the background has been reduced to 4.1--4.5\%. The significance of the spectral distortion is now $>5\sigma$. The KamLAND results can be interpreted in terms of $\bar{\nu}_\rme$ oscillations. Figure~\ref{fig:kamland_3} shows the allowed contours in the oscillation parameter space for solar and KamLAND data from the two-flavour oscillation analysis (assuming $\theta_{13} = 0$). The solar region is in agreement with the KamLAND data. The $\Delta m^2_{21}$ parameter is strongly determined by the KamLAND experiment. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm, angle=-90]{GilBotelaFigs/kamland_3-eps-converted-to.pdf} \caption{Two-flavour neutrino oscillation analysis including solar and KamLAND data (from Ref.~\cite{kamland_3}).} \label{fig:kamland_3} \end{center} \end{figure} The ratio of the background-subtracted neutrino spectrum to the non-oscillation expectation as a function of $L_{0}/E_{\nu}$ is shown in Fig.~\ref{fig:kamland_4}. We can clearly see the oscillation pattern over almost two full cycles. The oscillatory signature is distorted because the reactor sources are distributed across multiple baselines.
\begin{figure}[ht] \begin{center} \includegraphics[width=6cm, angle=-90]{GilBotelaFigs/kamland_4-eps-converted-to.pdf} \caption{Ratio of the observed $\bar{\nu}_\rme$ spectrum to the expectation for non-oscillation versus $L_0/E$ for the KamLAND data (from Ref.~\cite{kamland_3}).} \label{fig:kamland_4} \end{center} \end{figure} In summary, KamLAND confirmed neutrino oscillation, providing the most precise value of $\Delta m^2_{21}$ to date and improving the precision of $\tan^2\theta_{12}$ in combination with solar data. The indication of an excess of low-energy antineutrinos consistent with an interpretation as geoneutrinos persists. The scientific goals of the KamLAND experiment are now being expanded towards solar neutrino detection and neutrinoless double-beta decay detection using enriched Xe. \subsection{Accelerator neutrinos} Accelerator neutrinos have not only confirmed neutrino oscillations in the atmospheric region and proved flavour appearance, but have also opened new questions in neutrino physics. Neutrinos are produced from the collision of a proton beam with a target, producing pions and kaons. These are focused and decay, giving muons and neutrinos. The muons and any electrons are absorbed, and the surviving particles are about 98\% muon neutrinos with a contamination of around 2\% electron neutrinos. There are two types of searches that can be undertaken at accelerators: disappearance searches, with experiments like K2K and MINOS, where the beam energy is not enough to produce the charged lepton of the appearing flavour in the CC reaction; and appearance searches, with experiments like MiniBooNE and OPERA, where there is enough energy to produce it. These experiments are mainly focused on the measurement of $\Delta m^2_{32}$ and $\theta_{23}$ and have very limited sensitivity to $\theta_{13}$. The first accelerator-based long-baseline neutrino oscillation experiment was K2K~\cite{k2k}, which started in 1999 and ran until 2004.
They looked for muon neutrino disappearance using a beam provided by KEK, detecting the oscillated neutrinos 250~km away with the SK detector. The comparison between near and far detectors yielded 112 measured events, whereas 158 were expected, and revealed a clear distortion of the energy spectrum (Fig.~\ref{fig:k2k}). The best-fit parameters are compatible with the SK atmospheric oscillation results. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\linewidth]{GilBotelaFigs/k2k_spect.png} \includegraphics[width=0.4\linewidth]{GilBotelaFigs/k2k_param.png} \caption{(left) Distribution of $\nu_\mu$ events in K2K as a function of the reconstructed neutrino energy and (right) allowed regions from the analysis of K2K data compared to the $L/E$ SK analysis.} \label{fig:k2k} \end{center} \end{figure} More recently, the MINOS long-baseline experiment~\cite{minos} has presented a positive result on neutrino oscillations. MINOS is composed of two similar magnetized steel/scintillator calorimeters looking for the disappearance of muon neutrinos from the NuMI beam at Fermilab. The 1.5~kton near detector is located close to the source at Fermilab and the 5~kton far detector is placed 735~km away in the Soudan mine. \begin{figure}[ht] \begin{center} \includegraphics[width=0.43\linewidth]{GilBotelaFigs/minos_1.png} \includegraphics[width=0.35\linewidth]{GilBotelaFigs/minos_2.png} \caption{(left) Distribution of the neutrino energy at the far MINOS detector compared to the non-oscillation case and (right) the corresponding allowed regions in the oscillation parameter space (from Ref.~\cite{minos_nus}).} \label{fig:minos_nus} \end{center} \end{figure} Figure~\ref{fig:minos_nus} shows the MINOS far detector data, with a significant deficit compared to the non-oscillation case and very good agreement with the $\nu_\mu$--$\nu_\tau$ oscillation scenario. The allowed region in the parameter space is shown in this plot together with the results from the SK $L/E$ analysis and K2K.
The measurement of $\Delta m^2_{32}$ is dominated by MINOS, while the angle $\theta_{23}$ is essentially determined by SK. A similar study to that discussed previously has been performed on the antineutrino dataset~\cite{minos_anus}, since MINOS is also able to distinguish between muon neutrinos and antineutrinos. A total of $1.7 \times 10^{20}$ protons on target (POT) were accumulated between September 2009 and March 2010. The reconstructed energy spectrum of $\bar{\nu}_\mu$ CC events at the far detector shows a deficit in the low-energy region. The best-fit oscillation parameters for the $\bar{\nu}$ data are shown in Fig.~\ref{fig:minos_anus}, where the corresponding contours for neutrino and antineutrino oscillations can be compared. Antineutrinos favour a slightly higher $\Delta m^2$ than the neutrino data, which, if confirmed, would violate CPT. However, the results are compatible at the 2$\sigma$ level, and more data are being taken to understand whether this is a statistical fluctuation. Matter effects are too small to explain this discrepancy. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/minos_3.png} \caption{Allowed regions in the oscillation parameter space for neutrino and antineutrino data (from Ref.~\cite{minos_anus}).} \label{fig:minos_anus} \end{center} \end{figure} A subdominant transition $\nu_\mu \rightarrow \nu_\rme$ would be expected if $\theta_{13} \neq 0$. MINOS was optimized for muon identification, so the reconstruction of electromagnetic showers is difficult, and an artificial neural network technique is used for this analysis. Recent results looking for $\nu_\rme$ appearance have shown a very small excess of data (0.7$\sigma$ over the expected background)~\cite{minos_th13}. This measurement is consistent with no $\nu_\rme$ appearance, and a limit has been set around the CHOOZ value. Since MINOS is sensitive to matter effects, the limits differ depending on the sign of $\Delta m^2$ (Fig.~\ref{fig:minos_th13}).
\begin{figure}[ht] \begin{center} \includegraphics[width=0.4\linewidth]{GilBotelaFigs/minos_unico-eps-converted-to.pdf} \caption{Values of $2\sin^2(2\theta_{13})\sin^2\theta_{23}$ and $\delta_{\rm CP}$ that produce a number of candidate events in the far MINOS detector consistent with the observation for (top) the normal hierarchy and (bottom) the inverted hierarchy. Black lines are the best fit and red (blue) regions show the 90\% (68\%) CL intervals (from Ref.~\cite{minos_th13}).} \label{fig:minos_th13} \end{center} \end{figure} Among the short-baseline accelerator neutrino experiments, LSND was the first to claim the observation of neutrino oscillation appearance~\cite{lsnd}. It took data from 1993 to 1998, looking for the appearance of electron antineutrinos in a muon antineutrino beam produced at Los Alamos National Laboratory. The detector was a tank filled with 167~ton of dilute liquid scintillator, located about 30~m from the neutrino source. The experiment observed an excess of events above the MC predictions (at 3.8$\sigma$) that could be interpreted in terms of $\bar{\nu}_\mu \rightarrow \bar{\nu}_\rme$ oscillations. The corresponding $\Delta m^2$ is in the range shown in Fig.~\ref{fig:lnsd}. These results created a huge controversy, since together with the atmospheric and solar results they cannot be explained assuming three-flavour oscillations. \begin{figure}[ht] \begin{center} \includegraphics[width=6cm]{GilBotelaFigs/lnsd.png} \caption{Allowed regions in the parameter space including atmospheric, solar and LSND data.} \label{fig:lnsd} \end{center} \end{figure} The region of parameter space favoured by the LSND observations has been partly tested by other experiments, such as KARMEN \cite{karmen}, with negative results on neutrino oscillations; KARMEN excluded part of the LSND region. Another experiment was needed to definitively confirm or refute this excess.
This is the case of the MiniBooNE experiment, designed to test the neutrino oscillation interpretation of the LSND signal. This experiment was proposed in 1997 and started running in 2002. MiniBooNE \cite{miniboone} is an 800~ton mineral oil Cerenkov detector placed at 540~m from the neutrino source, using the $\nu_\mu$ beam produced by the Booster Neutrino Beamline at Fermilab. The $L/E$ is similar to that of LSND, but the baseline and neutrino energies are one order of magnitude higher; therefore, the MiniBooNE systematic errors are completely different. MiniBooNE also has higher statistics and takes data in both neutrino and antineutrino modes. Figure~\ref{fig:miniboone_nue} shows the MiniBooNE results for $\nu_\mu$--$\nu_\rme$ oscillations in terms of the reconstructed energy distribution of $\nu_\rme$ candidates~\cite{miniboone_nue}. Points are data with their statistical error and the histogram is the background prediction with systematic errors. For the analysis region between 475~MeV and 1.25~GeV, there is no evidence of oscillations: the data are consistent with background, and MiniBooNE has excluded two-neutrino oscillations in the LSND region at 98\% CL. However, in the low-energy part of the spectrum (between 200 and 475~MeV) a sizeable excess of data has been found. The excess at low energy has a significance of 1.7$\sigma$ or 3.4$\sigma$ and is incompatible with LSND-type oscillations. The source of this excess remains unknown.
\begin{figure}[ht] \begin{center} \includegraphics[width=0.5\linewidth]{GilBotelaFigs/miniboone_1.png} \includegraphics[width=0.37\linewidth]{GilBotelaFigs/mblimit-eps-converted-to.pdf} \caption{(left) Neutrino energy distribution for $\nu_\rme$ CCQE data and background and (right) 90\% CL limit (thick curve) and sensitivity (dashed curve) for events with energy $> 475$~MeV within a two-neutrino $\nu_\mu \rightarrow \nu_\rme$ oscillation model (from Ref.~\cite{miniboone_nue}).} \label{fig:miniboone_nue} \end{center} \end{figure} MiniBooNE has also reported results from the search for $\bar{\nu}_\mu$--$\bar{\nu}_\rme$ oscillations~\cite{miniboone_anue}. For the oscillation study, no contribution from the low-energy neutrino-mode excess has been included in the $\bar{\nu}$ prediction. In Fig.~\ref{fig:miniboone_anue} the $\bar{\nu}_\rme$ charged-current quasi-elastic (CCQE) energy distribution for data and background events is shown. From 200 to 3000~MeV there is a total excess of $43.2 \pm 22.5$ events, present in both the low ($< 475$~MeV) and high ($> 475$~MeV) energy regions. Many checks have been performed on the data to ensure that the backgrounds are correctly estimated; any single background would have to be increased by more than 3$\sigma$ to explain the observed excess of events. In the right-hand plot of Fig.~\ref{fig:miniboone_anue} the 90, 95 and 99\% CL contours for $\bar{\nu}_\mu \rightarrow \bar{\nu}_\rme$ oscillations in the energy range $> 475$~MeV are shown. The allowed regions are in agreement with the LSND allowed regions. The probability of the background-only fit relative to the best oscillation fit is 0.5\%. A comparison between MiniBooNE and LSND as a function of $L/E$ also shows consistency between the two results.
\begin{figure}[ht] \begin{center} \includegraphics[width=0.47\linewidth]{GilBotelaFigs/miniboone_3.png} \includegraphics[width=0.37\linewidth]{GilBotelaFigs/miniboone_4.png} \caption{(left) Neutrino energy distribution for $\bar{\nu}_\rme$ CCQE data and background and (right) the 90, 95 and 99\% CL allowed regions for events with energy $> 475$~MeV within a two-neutrino $\bar{\nu}_\mu \rightarrow \bar{\nu}_\rme$ oscillation model (from Ref.~\cite{miniboone_anue}).} \label{fig:miniboone_anue} \end{center} \end{figure} The source of the excess observed by MiniBooNE at low energy and the difference found between the neutrino and antineutrino results are still under study. CNGS~\cite{cngs} is the neutrino beam facility in Europe. It is mainly a $\nu_\mu$ beam from the CERN SPS with a mean energy of $\sim$17~GeV, a 4\% $\bar{\nu}_\mu$ contamination and a 0.9\% ($\nu_\rme + \bar{\nu}_\rme$) contamination. Two experiments, OPERA and ICARUS, are located in the LNGS laboratory in Italy, 730~km away from the neutrino source. Their main goal is to detect $\nu_\mu \rightarrow \nu_\tau$ transitions in appearance mode. Physics operation started in 2007, and neutrino beams were provided in 2008, 2009 and 2010. OPERA looks for $\nu_\tau$ CC interactions through the measurement of $\tau$ decay kinks in different channels. The detector consists of a large set of emulsion--lead targets combined with electronic detectors and a magnetic spectrometer. This technique provides a very good spatial resolution, of the order of micrometres. In 2010 the OPERA collaboration presented the first $\nu_\tau$ candidate~\cite{opera}. With the available statistics, 0.5 $\nu_\tau$ candidates were expected, and the statistical significance of this first $\nu_\tau$ candidate is 2.36$\sigma$. For five years of data taking at the nominal CERN performance of $4.5 \times 10^{19}$~POT, 10~$\tau$ events are expected.
The ICARUS T600 detector at LNGS~\cite{icarus} is a 600~ton LAr TPC providing 3D imaging of any ionizing event. T600 is presently taking data and has smoothly reached optimal working conditions. Neutrino interactions have already been observed. The first data analysis is ongoing, together with the development of fully automated reconstruction software. One or two $\tau$ events are expected to be measured in the next two years. \subsection{Global analysis of oscillation data} In the previous sections, all the current neutrino oscillation results have been summarized. Three pieces of neutrino oscillation evidence have been described, corresponding to three different values of the mass-squared difference. The mixing of three standard neutrinos can only explain two of these anomalies; explaining all three sets of data would require the existence of sterile $\nu$ species, since only three light neutrinos can couple to the Z$^0$ boson. In the case of the solar and atmospheric neutrino indications, several experiments agree on the existence of the effect, and they have been confirmed by terrestrial reactor and accelerator experiments. Therefore, the standard scenario is to consider three-neutrino mixing without the LSND result. Several attempts have been made in the literature to also accommodate the LSND data (including a fourth sterile neutrino, or breaking CPT symmetry so that neutrinos and antineutrinos have different masses). However, the present phenomenological situation is that none of these explanations can successfully describe all the neutrino data. Table~\ref{tab:global} summarizes the present values of the oscillation parameters from a recent global three-flavour neutrino oscillation analysis of the experimental data~\cite{schwetz}. The upper (lower) row corresponds to normal (inverted) mass hierarchy.
\begin{table}[htbp] \begin{center} \caption{Neutrino oscillation parameter summary (from Ref.~\cite{schwetz}).} \label{tab:global} \begin{tabular}{cccc} \hline\hline {\bf Parameter} & {\bf Best fit $\pm 1\sigma$} & {\bf $2\sigma$} & {\bf $3\sigma$} \\ \hline \Rule{1.5em} $\Delta m^2_{21}$ ($10^{-5}$~eV$^2$) & 7.59$^{+0.20}_{-0.18}$ & 7.24--7.99 & 7.09--8.19 \\ \Rule{2em} $\Delta m^2_{32}$ ($10^{-3}$~eV$^2$) & \begin{tabular}{c} 2.45 $\pm$ 0.09 \\ $-$(2.34$^{+0.10}_{-0.09}$) \\ \end{tabular} & \begin{tabular}{c} 2.28--2.64 \\ $-$(2.17--2.54) \\ \end{tabular} & \begin{tabular}{c} 2.18--2.73 \\ $-$(2.08--2.64) \\ \end{tabular} \\ \Rule{2em} $\sin^2\theta_{12}$ & 0.312$^{+0.017}_{-0.015}$ & 0.28--0.35 & 0.27--0.36 \\ \Rule{2em} $\sin^2\theta_{23}$ & \begin{tabular}{c} 0.51 $\pm$ 0.06 \\ 0.52 $\pm$ 0.06 \\ \end{tabular} & \begin{tabular}{c} 0.41--0.61 \\ 0.42--0.61 \\ \end{tabular} & 0.39--0.64 \\ \Rule{2em} $\sin^2\theta_{13}$ & \begin{tabular}{c} 0.010$^{+0.009}_{-0.006}$ \\[3pt] 0.013$^{+0.009}_{-0.007}$ \\ \end{tabular} & \begin{tabular}{c} $\leq$ 0.027 \\[3pt] $\leq$ 0.031 \\ \end{tabular} & \begin{tabular}{c} $\leq$ 0.035 \\[3pt] $\leq$ 0.039 \\ \end{tabular} \\ \vspace{-0.3cm} \\ \hline \hline \end{tabular} \end{center} \end{table} There are two possible mass orderings, which we denote as normal ($\Delta m^2_{32} > 0$) and inverted ($\Delta m^2_{32} < 0$); the two orderings are often referred to in terms of sgn($\Delta m^2_{32}$). As can be seen, not all of the neutrino oscillation parameters have been measured: the value of the $\theta_{13}$ angle, the sign of $\Delta m^2_{32}$ (the mass hierarchy) and the CP violation phase are still unknown. Past and present experiments have tried to measure the $\theta_{13}$ mixing angle without success. We only have an upper limit on its value, indicating that this angle must be very small. However, the best-fit point of this parameter is not zero.
There are independent hints for $\theta_{13} > 0$, obtained from different datasets, with significances ranging between 1.4$\sigma$ and 2.8$\sigma$. Figure~\ref{fig:limit_th13} illustrates the interplay of the various datasets in the plane of $\sin^2\theta_{13}$ and $\Delta m^2_{32}$. The latest T2K results have not been considered in this analysis.
\begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{GilBotelaFigs/limit_th13.png} \caption{Bound on $\sin^2\theta_{13}$ using global data, corresponding to (left) normal hierarchy and (right) inverted hierarchy (from Ref.~\cite{schwetz}).} \label{fig:limit_th13} \end{center} \end{figure}
\section{Current and future neutrino oscillation experiments}
In the previous section I provided a summary of the present situation in terms of experimental neutrino oscillation results and data analysis. As pointed out, there are still many questions to be answered by forthcoming neutrino oscillation experiments, both in the next few years and further in the future. The main topics that will be addressed by the current and near-future experiments are:
\begin{itemize}
\item measurement of the $\theta_{13}$ mixing angle;
\item accurate measurements of the other oscillation parameters ($\Delta m^2_{32}$, $\theta_{23}$, is $\theta_{23}$ maximal?);
\item understanding of the LSND/MiniBooNE anomalies;
\item understanding of the differences observed between neutrinos and antineutrinos in accelerator experiments;
\item searching for CP violation in the leptonic sector; and
\item determining the sign of $\Delta m^2_{32}$.
\end{itemize}
Current experiments that will try to study these questions are the accelerator experiments MINOS, OPERA, ICARUS and MiniBooNE, which will continue their operation to accumulate more statistics, and the new ones, T2K in Japan and NO$\nu$A in the USA.
Concerning reactor neutrinos, there are essentially three new experiments that aim to measure $\theta_{13}$: Double Chooz in France is already taking data, RENO in Korea is coming soon, and Daya Bay in China a bit later. In addition, more news on SK and Borexino concerning natural sources will be reported. The first goal of the near-future experiments is to measure the $\theta_{13}$ mixing angle. There are essentially two ways of studying this parameter: with long-baseline accelerator experiments or with reactor experiments. The long-baseline accelerator experiments will try to measure the $\theta_{13}$ mixing angle by looking for the appearance of electron neutrinos in a muon neutrino beam generated far from the detector. The main difficulty in measuring $\theta_{13}$ with accelerator experiments is that the oscillation probability depends on several parameters, in such a way that the measurement of $\theta_{13}$ is affected by correlations and degeneracies between parameters, which reduce the sensitivity. Owing to the long baseline, they can also be sensitive to matter effects. Reactor neutrino experiments, on the other hand, are unique in providing an unambiguous determination of $\theta_{13}$. The electron antineutrino disappearance probability depends neither on the CP phase nor on the sign of $\Delta m^2_{32}$; it depends essentially on $\theta_{13}$, with only a weak dependence on $\Delta m^2_{21}$. Therefore, unlike appearance experiments, they do not suffer from parameter degeneracies. Moreover, matter effects are negligible owing to the short baselines. They will thus provide a clean measurement of the mixing angle. In addition, combined with accelerator experiments, they can help to solve the $\theta_{23}$ degeneracy (the octant of $\theta_{23}$ if it is not maximal, $\theta_{23} > \pi/4$ or $< \pi/4$).
The experimental challenges of neutrino accelerator experiments are related to the neutrino beam intensity, the contamination of other flavours in the beam, the uncertainties on the neutrino flux properties and the neutrino--nucleus interactions. Reactor neutrino experiments, on the other hand, have a pure antineutrino flux without flavour contamination, the flux is known at the few per cent level, and the cross-section is high, so the detectors needed are smaller (and cheaper) than in accelerator experiments. However, they need to deal with backgrounds and reduce the systematic uncertainties to provide a precise measurement, while accelerator experiments are able to provide other measurements, such as CP violation. In any case, both kinds of experiments are necessary: they provide independent and complementary information.

\subsection{New reactor experiments}
The main goal of the new reactor experiments is to measure the $\theta_{13}$ mixing angle. In order to achieve this, several improvements with respect to previous reactor measurements are needed. It will be necessary to increase the statistics: more powerful reactors, longer exposures and larger detector masses are desirable. Backgrounds should also be further reduced with a better detector design, using veto detectors and external shields against muons and external radioactivity. Finally, an important reduction of the systematic uncertainties is fundamental to reach the high precision needed. This can be achieved by performing relative measurements with two identical detectors and comparing them to minimize the reactor errors. A detailed calibration programme will be needed. Reactor experiments will look for the disappearance of electron antineutrinos coming from nuclear reactors. The corresponding oscillation probability (Eq.~\eqref{eq:posc_react}) essentially depends on $\Delta m^2_{32}$ and $\sin^2 2\theta_{13}$.
The last term corresponds to a second oscillation, dominated by the solar parameters, which has been measured by the KamLAND experiment:
\begin{equation} P(\nu_\rme{\rightarrow}\nu_\rme)= 1 - \sin^2 2\theta_{13} \sin^2\left(\frac{\Delta m_{32}^2 L}{4 E_\nu}\right) - \cos^4\theta_{13} \sin^2 2\theta_{12} \sin^2\left(\frac{\Delta m_{21}^2 L}{4 E_\nu}\right). \label{eq:posc_react} \end{equation}
Figure~\ref{fig:oscprob_react} shows the survival probability as a function of the detection distance for a typical reactor neutrino energy of 4~MeV. Owing to the low neutrino energies, reactor neutrino experiments are disappearance experiments located at short distances in order to maximize the disappearance probability. At $\sim$1--2~km from the neutrino source, a small antineutrino deficit is expected on top of a large neutrino flux; high precision will be necessary to measure the mixing angle.
\begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{GilBotelaFigs/oscprob_react.png} \caption{Survival oscillation probability for a typical reactor neutrino experiment.} \label{fig:oscprob_react} \end{center} \end{figure}
The reactor antineutrinos are detected through the inverse beta decay reaction (Eq.~\eqref{eq:ibd}), giving a prompt signal due to the $\rme^+$ annihilation and a delayed signal from the neutron capture ($\sim$30~$\mu$s later), with photons totalling $\sim$8~MeV in the case of capture on Gd. In the case of capture on H, the delayed signal occurs $\sim$200~$\mu$s later and a single $\sim$2.2~MeV photon is emitted. The detected spectrum peaks around 3.6~MeV and the neutrino energy threshold is 1.8~MeV. The signature of a neutrino interaction can be mimicked by two types of background events: accidental or correlated. All backgrounds are linked to the cosmic muon rate and the detector radiopurity. Compared to CHOOZ, backgrounds can be reduced with a better detector design and \textit{in situ} measurements.
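As a numerical illustration of Eq.~\eqref{eq:posc_react}, the short sketch below evaluates the survival probability using the global-fit splittings of Table~\ref{tab:global}. The value $\sin^2 2\theta_{13} = 0.1$ is an assumed illustrative input near the current bound, not a measurement; the $1.267$ factor converts $\Delta m^2 L/4E$ to the usual units (eV$^2$, km, GeV).

```python
import math

def survival_probability(L_km, E_MeV,
                         sin2_2th13=0.1,     # illustrative value near the current bound
                         sin2_2th12=0.861,   # from sin^2(th12) = 0.312
                         dm2_32=2.45e-3,     # eV^2, global-fit best value
                         dm2_21=7.59e-5):    # eV^2
    """Reactor anti-nu_e survival probability of Eq. (posc_react)."""
    E_GeV = E_MeV * 1e-3
    phase13 = 1.267 * dm2_32 * L_km / E_GeV
    phase12 = 1.267 * dm2_21 * L_km / E_GeV
    # cos^4(th13) from sin^2(2 th13), valid for th13 < 45 degrees
    cos4_th13 = (0.5 * (1.0 + math.sqrt(1.0 - sin2_2th13))) ** 2
    return (1.0
            - sin2_2th13 * math.sin(phase13) ** 2
            - cos4_th13 * sin2_2th12 * math.sin(phase12) ** 2)

# First oscillation maximum for a 4 MeV antineutrino: phase13 = pi/2,
# which gives L ~ 2 km, consistent with the ~1-2 km baselines quoted above.
L_max = (math.pi / 2) * 4e-3 / (1.267 * 2.45e-3)
```

At the oscillation maximum the deficit is essentially $\sin^2 2\theta_{13}$ itself, which is why per-cent-level control of rates and systematics is required.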
The accidental events occur when a neutron-like event falls by chance within the $\sim$100~$\mu$s time window after an event in the scintillator with an energy above 0.5--0.7~MeV. The positron-like signal comes from natural radioactivity of the rock or of the detector materials, in general dominated by the PMT radioactivity. The delayed background (neutron-like signal) comes from neutron captures on Gd: energy deposits above 6~MeV isolated in time from other deposits. The correlated background consists of events that mimic both parts of the coincidence signal: one single process induces both a fake positron and a neutron signal. Such events come from fast neutrons induced by cosmic muons, which slow down by scattering in the scintillator, deposit more than 0.5~MeV of visible energy and are captured on Gd. Correlated background can also be produced by long-lived isotopes like $^8$He, $^9$Li or $^{11}$Li, which undergo beta decay with neutron emission. There are several reactor neutrino experiments that aim to measure the $\theta_{13}$ angle with sensitivities on $\sin^2 2\theta_{13}$ down to 0.01: Double Chooz in France, RENO in Korea and Daya Bay in China. Table~\ref{tab:reactors} summarizes the three reactor neutrino experiments in progress. Double Chooz~\cite{dchooz} is the most advanced of the three reactor experiments, since it is already taking data. The three detectors are quite similar, with slight variations between them. \begin{table}[htbp] \begin{center} \caption{Comparison between reactor neutrino experiments.} \label{tab:reactors} \begin{tabular}{ccccccc} \hline\hline {\bf Experiment} & {\bf Location} & {\bf Th. power} & {\bf Distances } & {\bf Depth near} & {\bf Target} & {\bf Expect.} \\ & & {\bf (GW)} & {\bf near/far (m)} & {\bf /far (mwe)} & {\bf mass (ton)} & {\bf sensit.
(3 yr)} \\ \hline Double Chooz & France & 8.5 & 400/1050 & 115/300 & 10/10 & 0.03 \\ RENO & Korea & 16.4 & 290/1380 & 120/450 & 15/15 & 0.02 \\ Daya Bay & China & 11.6 & 360(500)/ & 260/910 & 40$\times$2/80 & 0.01 \\ & & (17.4) & 1985(1613) & & & \\ \hline\hline \end{tabular} \end{center} \end{table} The antineutrinos used in Double Chooz are produced by the pair of reactors (type N4) located at the Chooz-B nuclear power station in France. The maximum operating thermal power of each core amounts to 4.27~GW. The idea of Double Chooz is to use two almost identical neutrino detectors of medium size, each containing 10.3~m$^3$ of liquid scintillator target doped with 0.1\% gadolinium. The neutrino laboratory of the first CHOOZ experiment is located 1.05~km from the two cores; the far detector is already installed at this site, shielded by about 300~mwe of rock. In order to cancel the systematic errors originating from the limited knowledge of the $\bar{\nu}_\rme$ flux and spectrum, as well as to reduce the systematic errors related to the detector and the event selection procedure, a second detector will be installed close to the nuclear cores, at $\sim$400~m. The Double Chooz detector consists of concentric cylinders (Fig.~\ref{fig:DC_detect}). A target cylinder of 1.2~m radius and 2.5~m height, providing a volume of 10.3~m$^3$, is filled with a liquid scintillator doped with gadolinium (1~g/l); this is the volume for neutrino interactions. Surrounding the target is the gamma-catcher region of 22.6~m$^3$, containing non-loaded liquid scintillator with the same optical properties (light yield, attenuation length) as the $\bar{\nu}_\rme$ target. This extra volume for gamma interactions is needed to measure the gammas from the neutron capture on Gd, to measure the positron annihilation and to reject the background from fast neutrons.
Surrounding the gamma-catcher acrylic tank there is a 1~m thick non-scintillating (oil) buffer contained in a stainless-steel tank. The goal of this region is to decrease the level of accidental background, mainly the contribution from PMT radioactivity. The photomultiplier tubes are mounted on the interior surface of the buffer vessel and collect the light from the target volume and the gamma-catcher: there are 390 10-inch PMTs per detector, providing $\sim$13\% coverage. Then a 50~cm thick inner veto region is filled with liquid scintillator to tag the muon-related background events. Finally, a 15~cm thick steel shielding protects the detector from the natural radioactivity of the rocks around the pit, with a significant gamma reduction. An additional outer muon veto (plastic scintillator planes) will help identify muons, which could cause neutrons or other cosmogenic backgrounds.
\begin{figure}[ht] \begin{center} \includegraphics[width=6cm]{GilBotelaFigs/DC_detect.png} \caption{Sketch of the Double Chooz detector.} \label{fig:DC_detect} \end{center} \end{figure}
The statistical error in CHOOZ was 2.8\%, while in Double Chooz after three years it is expected to be $\sim$0.5\%: the fiducial volume has been increased with respect to CHOOZ and a longer data taking is foreseen. Concerning the systematic errors, in CHOOZ the total systematic error was 2.7\%, dominated by the reactor antineutrino flux and spectrum uncertainties (1.9\%). In Double Chooz the uncertainty related to the reactor is cancelled by using two identical detectors, and the relative normalization between the two detectors becomes the most important source of error. The goal of Double Chooz is to reduce the overall systematic uncertainty to 0.6\%. Table~\ref{tab:DC_sys} summarizes the expected systematic errors in Double Chooz.
\begin{table}[h] \begin{center} \caption{Summary of Double Chooz systematic errors.} \label{tab:DC_sys} \begin{tabular}{lcc} \hline\hline & \textbf{CHOOZ} & \textbf{Double Chooz}\\ \hline Reactor uncertainties & 2.1\% & $<$ 0.1\% \\ ($\nu$ flux and reactor power) & & \\ Number of protons & 0.8\% & $<$ 0.2\% \\ Detector efficiency & 1.5\% & $<$ 0.5\% \\ {\bf Systematic error} & {\bf 2.7\%} & {\bf $<$ 0.5\%} \\ \hline\hline \end{tabular} \end{center} \end{table} The Double Chooz far detector started taking data at the end of 2010. Figure~\ref{fig:first_DC} shows an internal view of the detector with all PMTs installed inside the buffer volume and, on the right, one of the first signals of a few photoelectrons contained in the inner detector. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{GilBotelaFigs/dc_inner.png} \includegraphics[width=5cm]{GilBotelaFigs/dc_event1.png} \includegraphics[width=3.5cm]{GilBotelaFigs/dc_event2.png} \caption{(left) Internal view of the Double Chooz detector with the PMTs installed and (right) an event of a few photoelectrons contained in the inner detector.} \label{fig:first_DC} \end{center} \end{figure} The expected Double Chooz sensitivity to $\sin^2 2\theta_{13}$ as a function of time, assuming no signal is observed, is shown in Fig.~\ref{fig:dc_sensi}. Double Chooz will operate in two phases. In the first one, after 1.5 years of data taking with the far detector alone, a limit of $\sin^2 2\theta_{13} < 0.06$ at 90\% CL can be reached. Using both detectors, it is possible to measure $\sin^2 2\theta_{13}$ down to 0.05 at 3$\sigma$, or to obtain a limit down to 0.03 at 90\% CL, after three years of data taking. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/dc_sensi.png} \caption{Expected Double Chooz sensitivity on $\sin^2 2\theta_{13}$.} \label{fig:dc_sensi} \end{center} \end{figure} The RENO~\cite{reno} reactor neutrino experiment is under construction at YongGwang in South Korea.
The plant consists of six equally spaced reactors in line, spanning $\sim$1.3~km, with a total average thermal power of $\sim$16.4~GW. One near detector and one far detector are placed on the iso-flux line from the reactors. The near detector is located $\sim$290~m from the cores' barycentre with an overburden of $\sim$120~mwe; the far detector is at 1380~m, under 450~mwe. The design of the RENO detectors is quite similar to that of Double Chooz. The inner detector is bigger (16~ton) and the main difference is the muon veto system, a 30~cm concrete vessel filled with water and observed by 60 PMTs. The goal of the experiment is to reach a systematic error of the order of 0.5\% and a statistical error of $\sim$0.3\%. The expected limit is $<0.02$, with a 3$\sigma$ discovery reach down to 0.04, after three years of data taking. Both detectors are being commissioned, and data taking with the two detectors is expected to start at the end of 2011. A third reactor neutrino experiment, Daya Bay~\cite{dayabay}, is under construction in China, at the Ling Ao and Daya Bay nuclear power plants. The power plant complex is currently composed of two pairs of reactors, Daya Bay and Ling Ao-I; two further reactors, Ling Ao-II, are under construction and should be operational in 2011. Each core yields 2.9~GW, so the site currently totals 11.6~GW and will reach 17.4~GW. Near detectors are needed to monitor the different reactors: two detectors will be installed at $\sim$360~m from Daya Bay, two detectors at $\sim$500~m from Ling Ao, and four far detectors at 1.6~km from the barycentre of the Ling Ao sites and $\sim$2~km from Daya Bay. Each detector contains 20~ton of liquid scintillator doped with Gd. Horizontal tunnels connect the detector halls for cross-calibration. The design of the detector is very similar to the Double Chooz one, except for the shielding.
The detectors of each site are submerged in a swimming pool filled with purified water, giving protection against radiation and fast neutrons. The water pool is instrumented with PMTs to read the Cherenkov light and tag muons, together with resistive plate chambers (RPCs) placed on top of the water pool. This system is under production. The excavation of access tunnels and experimental halls is nearing completion, and two near detectors are completed. The expected systematic error is 0.38\%. There is the ambitious idea of swapping the detectors of different sites, moving them through the tunnels, to reduce the systematic errors from the relative detector normalization to 0.12\%. The expected sensitivity at 90\% CL with three years of data taking (assuming a systematic error of 0.4\%) is 0.01. Daya Bay plans to start taking data with the first near site in summer 2011 and with the three sites operational at the end of 2012.

\subsection{New accelerator experiments}
Complementary to reactors, new accelerator experiments will measure neutrino oscillations in the next few years. Their main goal is to look for $\nu_\rme$ appearance in a muon neutrino beam. The approximate formula for the oscillation probability can be written as
\begin{eqnarray} P{(\nu_\mu{\rightarrow}\nu_\rme )} & \approx & s_{23}^2 \sin^22\theta_{13} \sin^2\!\left(\frac{\Delta m^2_{32}L}{4E_\nu}\right ) + P_{\text{sol}}(\theta_{12}, \Delta m^2_{21}) \nonumber \\ &&{} \pm \sin2\theta_{13} {F}_{\text{solar}} {F}(\sin2\theta_{23}, |\Delta m^2_{32}|) {F}(\delta_\text{CP}, \Delta m^2_{32}). \end{eqnarray}
The first term corresponds to the atmospheric oscillation, the second one is the solar term, and there is an interference term, which carries the information on the $\delta_{\rm CP}$ phase and also depends on the sign of $\Delta m^2_{32}$. The $+$~($-$) sign applies to neutrinos (antineutrinos), respectively.
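To get a feel for the size of the leading (atmospheric) term of the appearance probability above, the sketch below evaluates it for a T2K-like configuration; the inputs $\sin^2 2\theta_{13} = 0.1$ and $\sin^2\theta_{23} = 0.51$ are assumed for illustration only, and the solar and interference terms are neglected.

```python
import math

def leading_appearance_prob(L_km, E_GeV, sin2_2th13=0.1,
                            sin2_th23=0.51, dm2_32=2.45e-3):
    """Leading (atmospheric) term of P(nu_mu -> nu_e):
    sin^2(th23) * sin^2(2 th13) * sin^2(1.267 dm2[eV^2] L[km] / E[GeV])."""
    phase = 1.267 * dm2_32 * L_km / E_GeV
    return sin2_th23 * sin2_2th13 * math.sin(phase) ** 2

# T2K-like configuration: 295 km baseline and a ~0.6 GeV off-axis beam,
# which sits close to the first oscillation maximum; the appearance
# probability is then at the few per cent level.
p = leading_appearance_prob(295.0, 0.6)
```

A per-cent-level appearance probability is why beam-intrinsic $\nu_\rme$ contamination and NC $\pi^0$ backgrounds, discussed below, must be kept well under control.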
Accelerator experiments will try to measure the $\theta_{13}$ mixing angle, provide more precise measurements of the atmospheric parameters and, in principle, look for CP violation,
\begin{equation} P(\nu_{\alpha}{\rightarrow}\nu_{\beta})-P(\bar{\nu}_{\alpha}{\rightarrow}\bar{\nu}_{\beta}) \neq 0 \qquad\quad (\alpha \neq \beta), \end{equation}
and matter effects. There are two effects, one from the CP phase and another from matter effects, that produce differences between neutrinos and antineutrinos, and we need to disentangle them using different experimental set-ups. At short distances, CP-violating effects dominate, while at long distances, matter effects completely hide CP-violating effects; they can be distinguished by their different neutrino energy dependence. In order to achieve this, an improvement of the present beams is needed, with much higher intensities and almost monochromatic spectra. New detectors at accelerators are located off-axis in order to reduce the beam energy and obtain a more monochromatic beam. This technique allows experiments to pick the energy corresponding to the maximum oscillation signal and, at the same time, to get rid of the high-energy part, which contributes most of the background. The $\nu_\rme$ contamination from the beam can be reduced below the 1\% level. In Fig.~\ref{fig:off-axis} the neutrino energy spectrum is shown for different off-axis angles: the energy peak is reduced and becomes narrower with increasing off-axis angle, and the contamination of $\nu_\rme$ from the beam is greatly reduced. The drawback of this technique is the reduced rate; thus, large detectors and intense proton sources are needed.
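The off-axis effect follows from standard two-body pion decay kinematics (this relation is not given in the text, but is textbook material): for $\pi^+ \rightarrow \mu^+ \nu_\mu$ in flight, the neutrino energy at angle $\theta$ is $E_\nu = (m_\pi^2 - m_\mu^2)/2(E_\pi - p_\pi\cos\theta)$, which becomes almost independent of $E_\pi$ at a fixed off-axis angle. A minimal numerical sketch:

```python
import math

M_PI = 0.13957   # charged-pion mass (GeV)
M_MU = 0.10566   # muon mass (GeV)

def off_axis_energy(E_pi, theta_rad):
    """Neutrino energy from pi+ -> mu+ nu_mu decay in flight
    (exact two-body kinematics in the lab frame)."""
    p_pi = math.sqrt(E_pi**2 - M_PI**2)
    return (M_PI**2 - M_MU**2) / (2.0 * (E_pi - p_pi * math.cos(theta_rad)))

# On axis, E_nu grows linearly with the pion energy (~0.43 E_pi);
# at 2.5 degrees off axis (the T2K choice), E_nu stays in a narrow band
# around 0.5-0.7 GeV over a wide range of pion energies.
theta = math.radians(2.5)
on_axis = [off_axis_energy(E, 0.0) for E in (2.0, 4.0, 8.0)]
off_axis = [off_axis_energy(E, theta) for E in (2.0, 4.0, 8.0)]
```

This flattening of $E_\nu$ versus $E_\pi$ is what produces the narrow off-axis spectra of Fig.~\ref{fig:off-axis}.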
\begin{figure}[ht] \begin{center} \includegraphics[width=6cm]{GilBotelaFigs/off-axis_v2.pdf} \caption{Neutrino energy spectrum variation as a function of the off-axis angle.} \label{fig:off-axis} \end{center} \end{figure} In accelerator experiments, the main neutrino signal will be CCQE interactions, and the experiments will look for the muons or electrons coming from these reactions. They have to deal with backgrounds coming essentially from the $\nu_\rme$ contamination of the beam and from $\pi^0$ production in neutral currents. The main long-baseline project that has begun operation this year is T2K~\cite{t2k}. The neutrino beam is produced in the accelerator complex of J-PARC in Japan and travels 295~km to Kamioka, where the SK detector is located. Near and far detectors are used to control the beam and measure the oscillations. Both detectors are 2.5$^\degree$ off the neutrino beam axis, which gives a peak neutrino beam energy of $\sim$600~MeV. Owing to the short distance, this experiment is not sensitive to matter effects, but it can provide information on CP if $\theta_{13}$ is not too small. T2K has different detectors along the beam line. A muon monitor, located after the beam dump, measures the direction and intensity of the beam; it is used as a proton beam, target and horn monitor. The INGRID on-axis detector, at 280~m from the target, is made of steel and scintillator layers and measures the intensity and direction of the neutrino beam, monitoring it using muons from CC neutrino interactions. The ND280 off-axis near detector is a magnetized detector inside the former UA1 magnet donated by CERN to this experiment (0.2~T). It is composed of several subdetectors: a $\pi^0$ detector, a tracker made of fine-grained detectors and TPCs to detect charged particles and measure their momentum, and an electromagnetic calorimeter. The Side Muon Range Detector detects muons and measures their momenta.
This detector measures the neutrino flux and spectrum before oscillations, the different interaction cross-sections and also the backgrounds for $\nu_\rme$ appearance. Finally, the SK detector expects 10 $\nu_\mu$ events per day at full beam power. T2K completed its first run in the first half of 2010. A total of $3.23 \times 10^{19}$ protons were delivered at 30~GeV, with the beam working at 50~kW of power. T2K has analysed both $\nu_\mu$ and $\nu_\rme$ samples. The $\nu_\mu$ disappearance analysis is consistent with previous disappearance experiments. In the $\nu_\rme$ appearance channel, six $\nu_\rme$ candidates have been observed, while the expected number of events in a three-flavour neutrino oscillation scenario with $|\Delta m^2_{32}| = 2.4 \times 10^{-3}$~eV$^2$, $\sin^22\theta_{23} = 1$ and $\sin^2 2\theta_{13} = 0$ was $1.5 \pm 0.3$ (syst.)~\cite{t2k_nue}. At 90\% CL, the data are consistent with $0.03~(0.04) < \sin^2 2\theta_{13} < 0.28~(0.34)$ for $\delta_{\rm CP} = 0$ and normal (inverted) hierarchy. The goal for 2011 was to accumulate 150~kW $\times$ $10^7$~s by July and then increase the beam power; however, due to the March 2011 earthquake, the experiment has been somewhat delayed. More data are required to firmly establish $\nu_\rme$ appearance and to determine the $\theta_{13}$ angle. Assuming the beam runs at 750~kW for five years, Fig.~\ref{fig:t2k_sensit} shows the sensitivity that T2K expects to reach for $\sin^2 2\theta_{13}$ as a function of $\Delta m^2_{32}$ at 90\% CL and for different systematic errors, assuming $\delta_{\rm CP} = 0$. T2K could be sensitive down to 0.01 at 90\% CL; the final sensitivity will depend on the value of $\delta_{\rm CP}$.
\begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{GilBotelaFigs/t2k_sensit.png} \caption{Expected sensitivity of the T2K experiment to $\sin^2 2\theta_{13}$ for five years of data taking with a 750~kW beam assuming $\delta_{\rm CP}=0$ and normal mass hierarchy (from Ref.~\cite{t2k_sensit}).} \label{fig:t2k_sensit} \end{center} \end{figure} There is another approved experiment that will start its operation in 2013: the NO$\nu$A experiment (NuMI Off-axis Neutrino Appearance)~\cite{nova}. It will search for $\nu_\rme$ appearance using an upgraded version of the NuMI beam at 700~kW with two functionally identical detectors: a 220~ton near detector located close to the source, at 1~km, and a 15~kton far detector 810~km away at Ash River, Minnesota, USA. Both are active tracking liquid scintillator calorimeters with very good electron identification capability. The unique feature of this experiment is that, depending on the value of $\theta_{13}$, NO$\nu$A could be the only approved experiment with the sensitivity to determine the neutrino mass hierarchy. The detectors will be placed off-axis at 0.8$^\degree$ to tune the neutrino energy to 2~GeV and maximize the $\nu_\rme$ appearance. The experiment can run in neutrino and antineutrino modes. The NuMI beam will be upgraded from 320~kW to 700~kW during the shutdown of 2012, and NO$\nu$A plans to run for three years in neutrino mode and three years in antineutrino mode, taking advantage of the large matter effects. Figure~\ref{fig:nova} shows the NO$\nu$A sensitivity to matter effects depending on the $\delta_{\rm CP}$ value: NO$\nu$A alone can resolve the mass ordering only in this region of the parameter space, and only if the hierarchy is normal. For the rest of the parameter space, these measurements need to be combined with other experiments, like T2K, which will improve the sensitivity somewhat. NO$\nu$A plans to have the far detector completed in October 2013.
\begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{GilBotelaFigs/nova_matter.png} \caption{The 95\% resolution of the mass ordering as a function of $\delta_{\rm CP}$ for six years of NO$\nu$A running, split evenly between neutrinos and antineutrinos, for different beam powers in the case of normal mass ordering (from Ref.~\cite{nova}).} \label{fig:nova} \end{center} \end{figure} In the next few years, it is possible that these experiments will provide a measurement of the $\theta_{13}$ mixing angle, if $\sin^2 2\theta_{13} > 0.01$, and solve the $\theta_{23}$ degeneracy. However, they will have limited sensitivity to CP violation and matter effects: more than 70\% of the parameter space will not be accessible. The ultimate goals of the future generation will depend on these measurements, but in principle they will focus on CP violation (new measurements are needed to solve degeneracies) and on the mass hierarchy. To achieve these measurements, many improvements are needed from the experimental point of view: upgraded beams that are more energetic, more powerful and purer; huge detectors (one order of magnitude bigger), with better granularity and energy resolution; and, to solve the degeneracies, different energies, baselines (longer baselines to enhance matter effects) and detection channels. New facilities and experiments are being proposed that can address some (or all) of the pending issues: \begin{itemize} \item[(a)] {\it Superbeams} are more powerful versions of conventional pion decay-based beams. They could be obtained with new megawatt proton sources and will need to be coupled with huge detectors at longer distances to explore matter effects. In these facilities, the main beam consists of $\nu_\mu$ and the experiments will search for both $\nu_\mu$ disappearance and $\nu_\rme$ appearance.
Several possibilities are under study: a CERN upgraded beam to large detectors located in European underground laboratories (LAGUNA), a new beamline from an upgraded accelerator complex (2.3~MW beam power) sent to a large detector located in the DUSEL underground laboratory (1300~km), and an upgraded version of the J-PARC beam (1.66~MW) sent to T2HK in Japan or to another detector in Korea. \item[(b)] {\it Beta-beams} are very pure $\nu_\rme$ or $\bar{\nu}_\rme$ beams made by allowing accelerated radioactive ions to decay in a storage ring. Both $\nu_\rme$ disappearance and $\nu_\mu$ appearance are sought; however, $\nu_\mu$ disappearance cannot be studied. \item[(c)] {\it Neutrino factories} are facilities where muons are produced by pion decay, cooled, injected into a storage ring and allowed to decay in straight sections. This provides very clean $\nu_\mu$ and $\bar{\nu}_\rme$ beams (or vice versa) with a well-known energy spectrum. The dominant search is the appearance of ``wrong-sign'' muons from the oscillation of $\bar{\nu}_\rme$, although other oscillation channels can also be observed. These facilities will need detectors with the capability to distinguish between $\mu^+$ and $\mu^-$. \end{itemize} \noindent Figure~\ref{fig:mezzetto} (from Ref.~\cite{mezzetto}) compares the $\sin^2 2\theta_{13}$ discovery reach at 3$\sigma$ of different future facilities. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{GilBotelaFigs/mezzetto.png} \caption{Physics reach of different future facilities in $\sin^2 2\theta_{13}$ (from Ref.~\cite{mezzetto}).} \label{fig:mezzetto} \end{center} \end{figure} \section{Direct measurements of neutrino mass} The properties of neutrinos, and especially their rest mass, play an important role in cosmology, particle physics and astroparticle physics. Neutrino oscillation experiments provide compelling evidence that neutrinos are massive, but they cannot provide the absolute mass values.
There are two complementary approaches for measuring the neutrino mass in laboratory experiments: one is the precise spectroscopy of beta decay at its kinematic endpoint, and the other is the search for neutrinoless double-beta decay ($0\nu\beta\beta$). The $0\nu\beta\beta$ process requires the neutrino to be a Majorana particle, and the effective Majorana mass $m_{\beta\beta}$ can be determined as \begin{equation} m_{\beta\beta} = \left|\sum_i U_{\rme i}^2 m_{i}\right| . \label{eq:mbb} \end{equation} This is the coherent sum of all mass eigenstates $m_i$ weighted by the elements $U_{\rme i}$ of the PMNS mixing matrix; $m_{\beta\beta}$ depends on complex CP phases, with the possibility of cancellations, so this approach implies a model-dependent determination of the Majorana mass. Experiments investigating single-beta decay offer a direct and model-independent method to determine the absolute neutrino mass; $m_{\nu_\rme}$ is determined as an incoherent sum over all mass eigenstates according to the PMNS matrix: \begin{equation} m_{\nu_\rme}^2 = \sum_i |U_{\rme i}|^2 m_{i}^2 . \end{equation} The experiments looking for $0\nu\beta\beta$ decay have the potential to probe $m_{\beta\beta}$ in the 20--50~meV region, while the new single-$\beta$ experiments will increase the sensitivity on $m_{\nu_\rme}$ by one order of magnitude, down to 200~meV. The basic principle of the model-independent single-beta decay method is based on kinematics and energy conservation. The idea is to measure the spectral shape of the beta decay electrons close to their kinematic endpoint (Eq.~\eqref{eq:lepton_spect}), where $E_0 - E$ is small and the mass term $m_i$ becomes significant. A non-zero neutrino mass will not only shift the endpoint but also change the spectral shape: \begin{equation} \frac{\rmd\Lambda_i}{\rmd E} = C p (E + m_\rme) (E_0 - E) \sqrt{(E_0 - E)^2 - m_i^2} \, F(E, Z) \Theta(E_0 - E - m_i) .
\label{eq:lepton_spect} \end{equation} Here $E_0$ is the maximum energy, $F(E, Z)$ is the Fermi function and $m_i$ is the neutrino mass. The experimental requirements for this measurement are a $\beta$ source with a low endpoint (so that a large fraction of electrons lie in the endpoint region), high energy resolution and very low background. Figure~\ref{fig:limits_mnu} shows the evolution of the experimental bounds on neutrino masses with time. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/limits_mnu.png} \caption{Limits on neutrino masses versus year (from Ref.~\cite{PDG}).} \label{fig:limits_mnu} \end{center} \end{figure} At present the best experimental limits from single-beta decay have been determined by the Mainz and Troitsk experiments~\cite{mainz} through the tritium beta decay: \begin{equation} {}^3{\rm H} \rightarrow {}^3{\rm He} + \rme^- + \bar{\nu}_\rme \qquad\quad (m_{\nu_\rme} < 2.2~\text{eV at 95\% CL}). \end{equation} The direct limits on the other two neutrino masses are much weaker. The muon neutrino mass limit ($m_{\nu_\mu} < 170$~keV) has been determined from the endpoint spectrum of the pion decay $\pi^+ \rightarrow \mu^+ \nu_\mu$. The tau neutrino mass ($m_{\nu_\tau} < 18.2$~MeV) has been measured using the tau hadron decay $\tau \rightarrow 5 \pi \nu_\tau$. There are two complementary experimental approaches (calorimetry and spectroscopy) for measuring the neutrino mass from single-$\beta$ decays with different systematics. In the calorimeter approach, the source is identical to the detector. The best choice for the source is $^{187}$Re, operated in the form of crystal bolometers, and the entire beta decay energy is measured as a differential energy spectrum. $^{187}$Re has the lowest endpoint (2.47~keV) but, due to its rather long half-life ($4.3 \times 10^{10}$~yr), the activity is rather low. Since bolometers are modular, their number can be scaled in order to increase the sensitivity.
This approach is being followed in the MARE experiment \cite{mare}. In the spectrometer approach, an external tritium source is used. Electrons are magnetically or electrostatically selected and transported to the counter. The kinetic energy of the beta electrons is analysed as an integral spectrum by an electrostatic spectrometer. The material is a high-purity molecular tritium source with a low endpoint at 18.6~keV and a short half-life providing high activity. This approach has reached its ultimate size and precision in the KATRIN experiment~\cite{katrin}. The KATRIN set-up (Fig.~\ref{fig:katrin}) extends over 70~m. KATRIN uses a molecular gaseous tritium source. Electrons emitted by the T$_2$ decay are guided by strong magnetic fields (3.6~T in the source and 5.6~T in the transport section) to the transport section and finally to the spectrometer section. The gas flow is reduced by 14 orders of magnitude by active and cryogenic pumping. The pre-spectrometer can be used to transmit only electrons with energies close to the T$_2$ endpoint. Only electrons of the endpoint region would enter the main spectrometer for precise energy analysis. The low-energy part of the spectrum is filtered out. Then, when electrons enter the spectrometer, the magnetic field drops by several orders of magnitude. Only electrons able to cross the retarding potential in the spectrometer are counted. The main spectrometer offers a resolution of 0.93~eV for 18.6~keV electrons by applying a magnetic field ratio of 1/20\,000. The selected electrons are counted in a final detector (Si PIN diodes with energy resolution of 1~keV). The main inconvenience here is that the source is external and results suffer from many systematic uncertainties, since the final energy of the electrons needs to be corrected for the energy lost in the different steps.
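The 0.93~eV figure quoted above is just the resolution of a MAC-E filter, $\Delta E = E \, B_{\rm min}/B_{\rm max}$, evaluated at the tritium endpoint. A minimal numeric check (illustrative only; the variable names are ours, not KATRIN's):

```python
# MAC-E filter energy resolution: Delta E / E = B_min / B_max
# (adiabatic magnetic collimation with electrostatic filtering).
E_endpoint_eV = 18_600        # tritium beta endpoint, ~18.6 keV
field_ratio = 1 / 20_000      # B_min / B_max quoted in the text

delta_E = E_endpoint_eV * field_ratio
print(f"filter width: {delta_E:.2f} eV")  # 0.93 eV
```

This reproduces the quoted sub-eV resolution directly from the stated field ratio.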
\begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{GilBotelaFigs/katrin.png} \caption{Set-up of the KATRIN experiment.} \label{fig:katrin} \end{center} \end{figure} The main spectrometer is going to follow a test programme in 2011 and they plan to have the system integrated by late 2012. After three years of data taking, they plan to reach a sensitivity for the neutrino mass of $< 0.2$~eV at 90\% CL, or they would be able to detect a neutrino mass down to 0.35~eV at 5$\sigma$ significance. The MARE experiment aims to make a direct and calorimetric measurement of the $\nu_\rme$ mass with sub-eV sensitivity. They plan to use $^{187}$Re (or $^{163}$Ho) as beta emitter and they will measure all the energy released in the decay except the $\nu_\rme$ energy. The systematic uncertainties from the external electron source are eliminated. On the other hand, because they detect all the decays occurring over the entire beta decay spectrum, the source activity must be limited to avoid pulse pile-up at the endpoint. Thus, the statistics at the endpoint will be limited. They use thermal microcalorimeters whose absorbers contain the beta decay isotope with a low $Q$-value ($\sim$2.5~keV). They plan to improve the energy resolution to 1--2~eV. The MARE project is subdivided into two phases: MARE~I is an R\&D phase focused on the choice of the best isotope and the best detector technology for the final experiment. They will use 300 bolometers and plan to take data for three years to investigate masses between 2 and 4~eV. MARE~II will be the final large-scale detector with sub-eV sensitivity (improving the mass sensitivity by one order of magnitude), able to investigate the KATRIN region. They plan to use 50\,000 bolometers and five years of data taking. New ideas have recently emerged to measure the neutrino mass. Project~8 \cite{project8} aims to make use of radio-frequency techniques to measure the kinetic energy of electrons from a gaseous tritium source.
When a relativistic electron moves in a uniform magnetic field, cyclotron radiation is emitted. The characteristic frequency is inversely proportional to the energy of the electron. An array of antennas would capture the cyclotron radiation emitted by the electrons when moving and, by measuring the frequency, the energy of the electron could be obtained. The authors claim that they can obtain sensitivities of 0.1~eV. They are now preparing a proof-of-principle experiment to show the feasibility of detecting electrons and determining their kinetic energy. \section{Neutrinoless double-beta decay} Direct information on neutrino masses can also be obtained from neutrinoless double-beta decay ($0\nu\beta\beta$) searches. This process violates the total lepton number and requires Majorana neutrinos. Therefore, the detection of such a process would prove that neutrinos are their own antiparticles. The double-beta ($\beta\beta$) decay process is allowed when single-beta decay is energetically forbidden or strongly suppressed. The double-beta decay is characterized by a nuclear process that changes the charge $Z$ by two units while leaving the mass number $A$ unchanged: \begin{equation} (A, Z) \rightarrow (A, Z+2) + 2 \rme^- + 2 \bar{\nu}_\rme . \end{equation} For this, it is necessary that $m(A,Z) > m(A, Z+2)$. This condition is fulfilled in several nuclei, with lifetimes between $10^{18}$ and $10^{21}$ years. In the $0\nu\beta\beta$ decay process, only two electrons are emitted: \begin{equation} (A, Z) \rightarrow (A, Z+2) + 2 \rme^- . \end{equation} The process can be mediated by the exchange of a light Majorana neutrino or other particles. The existence of $0\nu\beta\beta$ decay requires a Majorana neutrino mass and a violation of total lepton number conservation, no matter what the actual mechanism is. A limit on the half-life of this process implies a limit on the effective Majorana neutrino mass.
In the case of $\beta\beta$ decay, we should observe a continuous energy spectrum corresponding to the two electrons up to the endpoint of the decay (Fig.~\ref{fig:spect_0nbb}). In the case of $0\nu\beta\beta$ decay, we should only see a line at the endpoint, since the two electrons carry all the available energy and no neutrinos carry away part of the energy of the process. In that sense, to observe and be sensitive to $0\nu\beta\beta$, we need good energy resolution to separate this line from the possible background (including the $\beta\beta$ spectrum extending up to the $Q$-value). \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/spect_0nbb.png} \caption{Energy spectrum of $\beta\beta$ and $0\nu\beta\beta$ processes.} \label{fig:spect_0nbb} \end{center} \end{figure} The inverse half-life ($T^{-1}_{1/2}$) of the neutrinoless double-beta decay rate is proportional to the square of the effective Majorana mass and also depends on the phase space factor ($G^{0\nu}$) and the nuclear matrix elements ($M^{0\nu}$), which are difficult to evaluate. While the phase space can be calculated reliably, the computation of the nuclear matrix elements is subject to uncertainty. This gives a factor $\sim$3 uncertainty in the derived $m_{\beta\beta}$ values: \begin{equation} T^{-1}_{1/2} \simeq G^{0\nu} |M^{0\nu}|^2 \langle m_{\beta\beta} \rangle^2 . \end{equation} The effective neutrino mass $m_{\beta\beta}$ depends directly on the assumed form of lepton number-violating interactions. The simplest one is a light Majorana neutrino exchange. Assuming this, the effective Majorana neutrino mass can be written as the sum of the mass eigenvalues multiplied by the mixing matrix elements and the CP phases: \begin{equation} m_{\beta\beta} = | m_1 c_{12}^2 c_{13}^2 + m_2 s_{12}^2 c_{13}^2 \, \rme^{\rmi \alpha_1} + m_3 s_{13}^2 \, \rme^{\rmi \alpha_2} | . \end{equation} The individual neutrino masses can be expressed in terms of the smallest neutrino mass and the mass-squared differences.
For the normal mass hierarchy (NH), \begin{equation} m_3 \simeq \sqrt{\Delta m_{\rm atm}^2} \gg m_2 \simeq \sqrt{\Delta m_{\rm sun}^2} \gg m_1 , \end{equation} the effective mass is \begin{equation} \left<m_{\beta\beta}\right>^{\rm NH} \simeq \big| (m_1 c_{12}^2 + \sqrt{\Delta m_{\rm sun}^2} \, s_{12}^2 \,\rme^{\rmi \alpha_1}) c_{13}^2 + \sqrt{\Delta m_{\rm atm}^2} \, s_{13}^2 \,\rme^{\rmi \alpha_2} \big| \end{equation} For the inverted mass hierarchy (IH), the smallest neutrino mass is $m_3$, \begin{equation} m_2 \simeq m_1 \simeq \sqrt{\Delta m_{\rm atm}^2} \gg m_3 , \end{equation} and the effective mass can be written as \begin{equation} \left<m_{\beta\beta}\right>^{\rm IH} \approx \sqrt{\Delta m_{\rm atm}^2} \, c_{13}^2 \, \big| c_{12}^2 + s_{12}^2 \, \rme^{\rmi \alpha_1} \big| . \end{equation} In the quasi-degenerate case (QD), \begin{equation} m_0^2 \equiv m_1^2 \simeq m_2^2 \simeq m_3^2 \gg \Delta m_{\rm atm}^2 , \end{equation} the effective mass is \begin{equation} \left<m_{\beta\beta}\right>^{\rm QD} \approx m_0 \big| (c_{12}^2 + s_{12}^2 \,\rme^{\rmi \alpha_1}) c_{13}^2 + s_{13}^2 \,\rme^{\rmi \alpha_2} \big| . \end{equation} Given our present knowledge of the neutrino oscillation parameters, one can derive the relation between the effective Majorana mass and the mass of the lightest neutrino, as shown in Fig.~\ref{fig:mbb_plot}. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/mbb_plot.png} \caption{Effective Majorana neutrino mass as a function of the smallest neutrino mass.} \label{fig:mbb_plot} \end{center} \end{figure} In principle, a determination of the Majorana mass would allow us to distinguish between these regions. The three different mass hierarchies allowed by the oscillation data result in different projections. The width of the innermost dark bands reflects the uncertainty introduced by the unknown Majorana phases. 
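The inverted-hierarchy band in Fig.~\ref{fig:mbb_plot} can be reproduced from $\left<m_{\beta\beta}\right>^{\rm IH}$ above by letting the Majorana phase $\alpha_1$ vary between 0 and $\pi$. A short numeric sketch, assuming representative oscillation parameters ($\Delta m^2_{\rm atm} \approx 2.4\times 10^{-3}$~eV$^2$, $\sin^2\theta_{12} \approx 0.31$, $\cos^2\theta_{13} \approx 1$; these values are illustrative, not a global fit):

```python
import math

# Inverted-hierarchy band of <m_bb>^IH, using representative
# (assumed) oscillation parameters.
dm2_atm = 2.4e-3          # eV^2, atmospheric mass-squared difference
s12_sq = 0.31             # sin^2(theta_12)
c13_sq = 1.0              # cos^2(theta_13) ~ 1 for small theta_13

scale = math.sqrt(dm2_atm) * c13_sq
# |c12^2 + s12^2 e^{i alpha_1}| ranges between cos(2 theta_12) and 1
m_bb_min = scale * (1 - 2 * s12_sq)   # alpha_1 = pi, maximal cancellation
m_bb_max = scale                      # alpha_1 = 0, no cancellation
print(f"IH band: {1e3 * m_bb_min:.0f}--{1e3 * m_bb_max:.0f} meV")
```

With these inputs the band comes out at roughly 19--49~meV, consistent with the 20--50~meV region quoted earlier for the reach of $0\nu\beta\beta$ experiments.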
Because of the overlap of the different mass scenarios, a measurement of $m_{\beta\beta}$ in the degenerate or hierarchical ranges would not determine the hierarchy. Naturally, if $m_{\beta\beta} < 0.01$~eV, normal hierarchy becomes the only possible scenario. The most sensitive double-beta experiments, Heidelberg--Moscow HM-1 and IGEX, have used $^{76}$Ge as source and detector, and reach sensitivities around 0.3~eV in the effective neutrino mass. Both collaborations have reported almost the same lower limit on the half-life of $1.6 \times 10^{25}$~yr, corresponding to a mass range of 0.33 to 1.3~eV~\cite{hm_igex}. However, part of the Heidelberg--Moscow collaboration claimed in 2001 the observation of the $0\nu\beta\beta$ process~\cite{klapdor1} with five enriched high-purity $^{76}$Ge detectors (10.96~kg of active volume). New results were presented in 2004 with collected statistics of 71.7~kg~yr~\cite{klapdor2}. The background achieved in the energy region of $0\nu\beta\beta$ decay is very low (0.11~events/kg~yr~keV). The confidence level for the neutrinoless signal was improved to 4.2$\sigma$ with a $T_{1/2} = 0.69\mbox{--}4.18 \times 10^{25}$~yr corresponding to $\left< m_{\beta\beta} \right> = 0.24\mbox{--}0.58$~eV. This would imply a degenerate neutrino mass hierarchy. This result has been much criticized and remains controversial (it contradicts the HM-1 and IGEX limits, only part of the collaboration agrees with the result, and not all the background peaks are explained), and needs to be confirmed or refuted by other experiments. The latest reanalysis of data from 1990 to 2003 shows a 6$\sigma$ excess of counts at the decay energy, which corresponds to a Majorana neutrino mass of $0.32 \pm 0.03$~eV at 68\% CL (Fig.~\ref{fig:klapdor}).
\begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{GilBotelaFigs/klapdor.jpg} \caption{The claim by the Heidelberg--Moscow experiment to have observed the $0\nu\beta\beta$ process at $>6\sigma$ (from Ref.~\cite{klapdor3}).} \label{fig:klapdor} \end{center} \end{figure} \subsection{Experimental detection} Neutrinoless double-beta decay is a very rare process. The half-life sensitivity of this process depends on whether there is background or not. The sensitivity (without background) is proportional to the exposure (mass $M$ $\times$ time of measurement $t$) and the isotopic abundance $a$; with background, it is inversely proportional to the background rate $B$ and the energy resolution $\Delta E$: \begin{equation} \begin{array}{rcl@{\qquad\quad}l} T_{1/2} & \propto & a M \epsilon \, t & \text{(background free),} \\ T_{1/2} & \propto & a \epsilon \sqrt{\displaystyle\frac{M t}{\Delta E \, B}} & \text{(background limited).} \end{array} \end{equation} Therefore, the basic experimental requirements for detecting this process are a large and highly efficient source mass, excellent energy resolution and an extremely low background in the $0\nu\beta\beta$ peak region. The neutrinoless double-beta decay experiments can be classified into two types, depending on whether or not the source is the same as the detector. The first experimental approach is calorimetric detectors, where the source is the detector. They have good energy resolution and good scaling-up, but modest background discrimination. Thus, strong requirements on radiopurity and shielding are needed. The semiconductors, cryogenic bolometers, scintillators and liquid and gaseous Xe TPCs are in this category. The second approach involves detectors where the source is different from the detector. This is the case for the combined tracking and calorimetry (tracko-calo) experiments, where foils of $\beta\beta$ source are surrounded by a tracking detector that provides direct detection of the two electron tracks emitted in the decay. They have a moderate energy resolution and are difficult to scale up. However, they can provide information on the event topology. The main goals of the future $0\nu\beta\beta$ experiments will be to reach sensitivities of the order of $\left< m_{\beta\beta} \right> \sim 0.01\mbox{--}0.1$~eV (IH mass region) using different isotopes and different experimental techniques. Table \ref{tab:0nbb_exp} shows a summary of the forthcoming $0\nu\beta\beta$ experiments. \begin{table}[h] \begin{center} \caption{Overview of upcoming 0$\nu\beta\beta$ experiments.} \label{tab:0nbb_exp} \begin{tabular}{cccccc} \hline\hline \textbf{Experiment} & \textbf{Isotope} & \textbf{Mass (kg)} & \textbf{Technique} & \textbf{Sensit. $T^{0\nu}_{1/2}$ (yr)} & \textbf{Status} \\ \hline GERDA & $^{76}$Ge & 40 & ionization & 2 $\times$ $10^{26}$ & in progress \\ Majorana & $^{76}$Ge & 30 & ionization & 1 $\times$ $10^{26}$ & in progress \\ COBRA & $^{116}$Cd, $^{130}$Te & t.b.d. & ionization & t.b.d. & R\&D \\ CUORE & $^{130}$Te & 200 & bolometers & 6.5 $\times$ $10^{26}$ & in progress \\ EXO & $^{136}$Xe & 200 & liquid TPC & 6.4 $\times$ $10^{25}$ & in progress \\ NEXT & $^{136}$Xe & 100 & gas TPC & 1.8 $\times$ $10^{26}$ & in progress \\ SNO+ & $^{150}$Nd & 56 & liquid scintillator & 4.5 $\times$ $10^{24}$ & in progress\\ KamLAND-Zen & $^{136}$Xe & 400 & liquid scintillator & 4 $\times$ $10^{26}$ & in progress \\ SuperNEMO & $^{82}$Se, $^{150}$Nd & 100 & tracko-calo & 1--2 $\times$ $10^{26}$ & in progress \\ \hline\hline \end{tabular} \end{center} \end{table} The GERDA and Majorana experiments will search for $0\nu\beta\beta$ in $^{76}$Ge using arrays of high-purity germanium detectors.
This is a well-established technique offering outstanding energy resolution (better than 0.2\% full width at half-maximum (FWHM) at the $Q_{\beta\beta}$ value) and high efficiency ($\sim$80\%) but limited methods to reject backgrounds. The GERDA detector~\cite{gerda} is made of an array of bare $^{76}$Ge crystals enriched to 86\%, immersed in LAr and surrounded by 10~cm of lead and 2~m of water. In phase~I, they will operate the refurbished HM and IGEX enriched detectors ($\sim$18~kg). They will verify or reject the Heidelberg--Moscow claim with the same detectors. They expect a background rate of the order of 0.01 counts/keV~kg~yr. In phase~II, they will add 20~kg of segmented detectors to arrive at a background level of $\sim$0.001 counts/keV~kg~yr. Depending on the outcome, there could be a phase~III, merging the GERDA and Majorana detectors, to reach a mass of the order of 1~ton and test the IH mass region. The Majorana experiment~\cite{majorana} is located in Sanford Laboratory and will be composed of 30~kg of enriched $^{76}$Ge crystals with a passive Cu and Pb shielding providing a low background. They anticipate a background rate of 0.001 counts/keV~kg~yr. The COBRA experiment~\cite{cobra} aims to search for the $0\nu\beta\beta$ decay of $^{116}$Cd and $^{130}$Te with CdZnTe semiconductors. It is currently in the R\&D phase and they have a test set-up working at the Gran Sasso laboratory. The idea is to use an array of CdZnTe room-temperature semiconductors. The exploration of pixellated detectors will add tracking capabilities to the pure energy measurements and allow even further background reduction by particle identification. A scientific proposal is foreseen by the end of 2012. CUORICINO was an experiment at the Gran Sasso laboratory running from 2003 to 2008. It was composed of cryogenic bolometers of TeO$_2$ crystals.
The $0\nu\beta\beta$ decay was not observed and the experiment has been able to set the world's most stringent lower limit for the half-life for $0\nu\beta\beta$ in $^{130}$Te, namely, $T_{1/2} \ge 2.8 \times 10^{24}$~yr at 90\% CL~\cite{cuoricino}. The CUORE detector~\cite{cuore} will consist of an array of 988 TeO$_2$ crystals that contain 27\% $^{130}$Te as the source of $0\nu\beta\beta$ with $\sim$200~kg of $^{130}$Te for a total detector mass of about 740~kg. The crystals will be cooled inside a specially built dilution refrigerator -- one of the world's largest -- to a temperature of $\sim$10~mK, at which point they have such a small heat capacity that the energy deposited by individual particles or gamma rays in a crystal produces a temporary, measurable rise of its temperature. The measured temperature pulses will be used to construct an energy spectrum of the interactions occurring inside the crystals, and the spectrum is then inspected for a small peak at 2527~keV. The next project goal for CUORE will be the construction and operation of CUORE-0, the first 52-crystal tower produced by the CUORE detector assembly line. The CUORE-0 tower will be installed in the existing CUORICINO cryostat, and it will take data for the next two years while the 19 CUORE towers are assembled. CUORE-0 is primarily intended to serve as a test of the CUORE detector assembly protocols and to verify the functionality of the experimental components, but it will nevertheless represent a significant measurement: it will be comparable in size to CUORICINO, yet its energy spectrum will have a lower background due to improvements in materials and assembly procedures. The advantages and disadvantages of the technique are similar to those of germanium experiments, with about the same energy resolution and efficiency for the signal. The expected sensitivity for a background of 0.001 counts/keV~kg~yr and $\Delta E = 5$~keV is $\sim 6.5 \times 10^{26}$~yr. 
SNO+~\cite{sno+} proposes to fill the Sudbury Neutrino Observatory (SNO) with liquid scintillator. A mass of several tens of kilograms of $\beta\beta$ decaying material can be added to the experiment by dissolving a neodymium salt in the scintillator. The natural abundance in the $^{150}$Nd isotope is 5.6\%. Given the liquid scintillator light yield and photocathode coverage of the experiment, a modest energy resolution performance (about 6\% FWHM at $Q_{\beta\beta}$) is expected. This could be compensated by large quantities of isotope and low backgrounds. They plan to use enriched Nd to increase the mass. KamLAND-Zen~\cite{kamland-zen} plans to dissolve 400~kg of $^{136}$Xe in the liquid scintillator of KamLAND in the first phase of the experiment, and up to 1~ton in a projected second phase. Xenon is relatively easy to dissolve (with a mass fraction of more than 3\% being possible) and also easy to extract. The major modification to the existing KamLAND experiment is the construction of an inner, very radiopure and very transparent balloon to hold the dissolved xenon. The balloon, 1.7~m in radius, would be shielded from external backgrounds by a large, very radiopure liquid scintillator volume. While the energy resolution at $Q_{\beta\beta}$ (about 10\%) is inferior to that of SNO+, the detection efficiency is much better (80\%) due to its double envelope. The NEMO-3 experiment~\cite{nemo3} combines calorimetry and tracking techniques. The foils of the source are surrounded by a tracking detector that provides a direct detection of the two electron tracks emitted in the decay. NEMO-3 is installed in the Frejus underground laboratory and is searching for neutrinoless double-beta decay for two main isotopes ($^{100}$Mo and $^{82}$Se) and studying the two-neutrino double-beta decay of seven isotopes. The experiment has been taking data since 2003 and, up to the end of 2009, showed no evidence for neutrinoless double-beta decay. 
SuperNEMO~\cite{supernemo} uses the NEMO-3 approach with series of modules, each one consisting of a tracker and a calorimeter that surround a thin foil of the isotope. In SuperNEMO the target will likely be $^{82}$Se, although other isotopes such as $^{150}$Nd or $^{48}$Ca are also being considered. The mass of the target is limited to a few kilograms (typically 5--7~kg) by the need to build it foil-like, and to minimize multiple scattering and energy loss. The tracker and calorimeter can record the trajectory of the charged particles and measure their energies independently. This technique, which maximally exploits the topological signature of the events, leads to excellent background rejection. However, the selection efficiency is relatively low (about 30\%), and the resolution rather modest (4\% FWHM at $Q_{\beta\beta}$). Moreover, this technique is very hard to extrapolate to large masses due to the size, complexity and cost of each module. Another technique used in $0\nu\beta\beta$ experiments is the xenon time projection chamber. Xenon is a suitable detection medium, providing both scintillation and ionization signals. It has a $\beta\beta$-decaying isotope, $^{136}$Xe, with a natural abundance of about 10\%. Compared to other sources, xenon is easy (thus relatively cheap) to enrich in the candidate isotope. When an event occurs, the energetic electrons produced interact with the liquid xenon (LXe) to create scintillation light that is detected, for example, with avalanche photodiodes (APDs). The electrons also ionize some of the xenon and the ionized electrons drift in an electric field to charge collection wires at the ends of the vessel. The time between the light pulse and the arrival of the electrons at the wires tells us where in the detector the event occurred, since the drift velocity is known. There are two possibilities for a xenon TPC: a cryogenic liquid xenon time projection chamber (LXe TPC), or a (high-pressure) xenon (HPXe) gas chamber.
EXO~\cite{exo} is a LXe TPC with a modest energy resolution (3.3\% FWHM at $Q_{\beta\beta}$) through ionization and scintillation readout. A 200~kg detector of 80\% enriched $^{136}$Xe is currently being installed at the Waste Isolation Pilot Plant (WIPP) in New Mexico, USA. This experiment aims to measure the -- as yet unobserved -- two-neutrino mode of double-beta decay of $^{136}$Xe and provide a competitive limit on neutrinoless double-beta decay. Background rates of order 0.001 counts/keV~kg~yr are expected in EXO-200. The improvement with respect to the high-resolution calorimeters comes from the event topological information. The collaboration is undergoing extensive R\&D to develop the xenon detector and a way to ``tag'' the products of the decay ($^{136}\mathrm{Ba}^{2+}$ tagging) in order to eliminate all backgrounds. The NEXT experiment~\cite{next} proposes to build a 100~kg high-pressure gaseous xenon (enriched at 90\% in $^{136}$Xe) TPC. The experiment aims to take advantage of both good energy resolution ($\leq 1$\% FWHM at $Q_{\beta\beta}$) and the presence of a $0\nu\beta\beta$ topological signature for further background suppression. NEXT plans to rely on electroluminescence to amplify the ionization signal, using two separate photodetection schemes for an optimal measurement of both calorimetry and tracking. Figure~\ref{fig:jj} shows the background rate in the region of interest (1~FWHM around $Q_{\beta\beta}$) versus the energy resolution (FWHM) for different past and present experiments~\cite{pau}. The (green) circles correspond to measured data, while the (blue) squares and (red) diamonds correspond, respectively, to the R (reference) and O (optimistic) background assumptions of the experiments, according to Ref.~\cite{pau}. The results for the $m_{\beta\beta}$ sensitivity (90\% CL) of the proposals as a function of exposure are also shown. The filled circles indicate 10 years of run-time according to the reference scenario.
\begin{figure}[ht] \begin{center} \includegraphics[width=7.5cm]{GilBotelaFigs/sensit_0nbb.png} \includegraphics[width=6cm]{GilBotelaFigs/sensit_pau.png} \caption{(left) Background rate as a function of the energy resolution (FWHM) for different past and present experiments and (right) $m_{\beta\beta}$ sensitivity (at 90\% CL) as a function of the exposure (from Ref.~\cite{pau}).} \label{fig:jj} \end{center} \end{figure} NEXT and CUORE have the best sensitivities, reaching 66 and 73~meV at 90\% CL, respectively. KamLAND-Zen, EXO and SuperNEMO follow, with sensitivities in the 82--87~meV range. GERDA and SNO+ reach sensitivities of 94 and 96~meV, respectively. In the optimistic scenario, the lower background regime for all experiments allows significantly better sensitivities to be obtained. In summary, the goals of the next generation of $0\nu\beta\beta$ experiments are to push the $m_{\beta\beta}$ limit down to 100~meV to confirm or discard the Heidelberg--Moscow claim. In a second and more ambitious step, they should reach $m_{\beta\beta} \sim 50$~meV to fully explore the degenerate spectrum. Finally, depending on their capability to scale their technology to larger masses ($\sim$~ton scale), they will try to partially explore the inverted hierarchy down to $\sim 10$~meV. \section{Supernova neutrinos} Type~II supernovae (SNe) originate from massive stars, which begin their lives composed mostly of hydrogen. Hydrogen undergoes nuclear fusion in the core and, when all the H there is converted into He, the core contracts until it is hot enough for He to fuse; He then follows the same cycle as H. The same happens for the rest of the elements up to Fe. The Fe fusion reaction absorbs more energy than it releases, so the core shrinks, heats up and produces no new, more massive elements. The star can no longer support itself against gravity and collapses. The collapse leads to an explosion, which is known as a type~II~SN.
In the core-collapse mechanism, three stages are important from the point of view of neutrino emission: \begin{itemize} \item[(1)] The collapse of the core: a first electron neutrino burst is emitted since the high density of matter enhances the electron capture by protons. \item[(2)] Then, neutrinos are trapped and an elastic bounce of the core is produced, which results in a shock wave. When the shock crosses the electron neutrino sphere, an intense burst of $\nu_\rme$ is produced, called the {\it shock breakout} or {\it neutronization burst}, and a total energy of $3 \times 10^{51}$~erg is radiated in milliseconds. \item[(3)] The process will finish in an explosion. Then, the external layers of the star are expelled into space. After this, the star loses energy by emitting neutrinos of all flavours and the {\it cooling process} ($\sim 10$~s) starts until a neutron star or a black hole is formed. \end{itemize} \noindent The total energy released during this process is enormous: $\approx 3 \times 10^{53}$~erg. Some 99\% of the gravitational binding energy of the star ($E_B$) is released in the form of neutrinos of all flavours: 1\% are produced during the neutronization process while the rest are $\nu$--$\bar{\nu}$ pairs from later cooling reactions. The expected supernova rate in our Galaxy is about three per century. In 1987 astrophysics entered a new era with the detection of the neutrinos from SN1987A~\cite{sn1987}, which exploded in the Large Magellanic Cloud at a distance of $\sim 50$~kpc. The burst of light was visible to the naked eye. Around three hours before the optical observation of the SN, a burst of neutrinos was detected by three detectors: Kamiokande, IMB and Baksan. This observation confirmed important parts of the supernova neutrino theory such as total energy, mean temperature and time duration.
However, limited quantitative information on the neutrino spectrum was obtained due to the small statistics (only about 20 events) recorded. The flavour composition, energy spectrum and time structure of the neutrino burst from a supernova can give information about the explosion mechanism and the mechanisms of proto-neutron star cooling. In addition, the intrinsic properties of the neutrino such as flavour oscillations can also be studied. The neutrinos in the cooling stage are in equilibrium with their surrounding matter density and their energy spectra can be described by a function close to a Fermi--Dirac distribution. The flux of an emitted neutrino $\nu_{\alpha}$ can then be written as \cite{Lunardini:2003eh} \begin{equation} \phi_{\alpha}(E_{\alpha}, L_{\alpha}, D, T_{\alpha}, \eta_{\alpha}) = \frac{L_{\alpha}}{4\pi D^2 F_3(\eta_\alpha) T^4_{\alpha}} \, \frac{E_{\alpha}^2}{\rme^{E_{\alpha}/T_{\alpha}-\eta_{\alpha}}+1} , \end{equation} \noindent where $L_{\alpha}$ is the luminosity of the flavour $\nu_{\alpha}$ ($E_B = \sum L_{\alpha}$), $D$ is the distance to the supernova, $E_{\alpha}$ is the energy of the $\nu_{\alpha}$ neutrino, $T_{\alpha}$ is the neutrino temperature inside the neutrinosphere and $\eta_{\alpha}$ is the ``pinching'' factor. The original $\nu_\mu$, $\nu_\tau$, $\bar{\nu}_\mu$ and $\bar{\nu}_\tau$ fluxes are approximately equal and therefore we treat them as $\nu_x$. An energy hierarchy between the different neutrino flavours is generally believed to hold and implies $\langle E_{\nu_\rme} \rangle < \langle E_{\bar{\nu}_\rme} \rangle < \langle E_{\nu_x} \rangle$. However, the specific neutrino spectra remain a matter of detailed calculations. In particular, recent simulations seem to indicate that the energy differences between flavours could be very small and possible collective neutrino flavour conversions could arise for either mass hierarchy depending on the primary fluxes~\cite{Choubey:2010up}. 
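As a consistency check, the flux expression above is normalized so that the energy carried by $\nu_\alpha$ integrates back to the luminosity, $\int E_\alpha \, \phi_\alpha \, \rmd E_\alpha = L_\alpha / 4\pi D^2$, since $F_3(\eta_\alpha)$ is the corresponding Fermi integral. A short numerical sketch, in arbitrary units and with illustrative (assumed) parameter values:

```python
import math

# Check that the pinched Fermi-Dirac flux integrates back to the
# luminosity: int E * phi(E) dE = L / (4 pi D^2). Arbitrary units.

def fermi_integral(order, eta, x_max=60.0, steps=100_000):
    """Riemann-sum approximation of F_n(eta) = int_0^inf x^n/(e^(x-eta)+1) dx."""
    dx = x_max / steps
    return sum((i * dx) ** order / (math.exp(i * dx - eta) + 1.0) * dx
               for i in range(1, steps + 1))

L, D, T, eta = 1.0, 1.0, 4.0, 0.0   # luminosity, distance, temperature, pinching
norm = L / (4 * math.pi * D ** 2 * fermi_integral(3, eta) * T ** 4)

def phi(E):
    """Flux spectrum phi_alpha(E) as written in the text."""
    return norm * E ** 2 / (math.exp(E / T - eta) + 1.0)

# Energy carried by the flux should equal L / (4 pi D^2).
dE = 0.01
carried = sum(E * phi(E) * dE for E in (i * dE for i in range(1, 10_000)))
expected = L / (4 * math.pi * D ** 2)
print(carried / expected)   # close to 1
```

For $\eta = 0$ the normalization integral has the closed form $F_3(0) = 7\pi^4/120 \approx 5.68$, which the numerical Fermi integral reproduces.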
Neutrino oscillations and matter effects in the supernova will change the neutrino fluxes significantly and, therefore, the number of events expected in the detectors. If the neutrino energy spectra are different, then $\theta_{13}$ and the mass hierarchy can be probed. For small mixing angle ($\sin^2 \theta_{13} < 2 \times 10^{-6}$), there are no effects on $\theta_{13}$ and we cannot distinguish among mass hierarchies. Only an upper bound on $\sin^2\theta_{13}$ can be set. For intermediate $\theta_{13}$ ($2 \times 10^{-6} < \sin^2\theta_{13} < 3 \times 10^{-4}$), maximal sensitivity to the angle is achieved and measurements of the angle are possible in this region. For large mixing angle ($\sin^2\theta_{13} > 3 \times 10^{-4}$), maximal conversions occur. The mass hierarchy can be probed but only a lower bound on $\theta_{13}$ can be established. In addition to matter effects in the SN matter, when neutrinos traverse the Earth, regeneration effects can produce a distortion of the neutrino energy spectrum. If we compare the signals from different detectors in different locations, we could probe such an effect. \subsection{Supernova neutrino detection in terrestrial experiments} Most of the current and near-future supernova neutrino experiments \cite{KateTAUP} are water Cerenkov or liquid scintillator detectors and, therefore, primarily sensitive to the $\bar{\nu}_\rme$ component of the signal, via inverse beta decay $\bar{\nu}_\rme + \rmp \rightarrow \rmn + \rme^+$. For supernova burst detection, not only statistics but also diversity of flavour sensitivity is needed: neutral current sensitivity, which gives access to the $\nu_\mu$ and $\nu_\tau$ components of the flux, and $\nu_\rme$ sensitivity are particularly valuable. Only two near-future experiments will be mainly sensitive to the $\nu_\rme$. 
The HALO detector~\cite{HALO} is under construction at SNOlab and it uses 80~tons of lead blocks instrumented with the unused SNO NCD counters to record neutrons and electromagnetic signals. However, this technique has some limitations since no energy or pointing information can be obtained and only rates are provided. The ICARUS detector at Gran Sasso~\cite{ICARUS} is a 600~ton LAr TPC with excellent $\nu_\rme$ sensitivity via $^{40}$Ar CC interactions, for which de-excitation gammas will be visible. All current supernova neutrino experiments participate in the Supernova Early Warning System (SNEWS)~\cite{SNEWS}, the network of SN neutrino observatories whose main goal is to provide the astronomical community with a prompt alert for the next galactic core-collapse supernova explosion. Very promising for the future are a number of planned mega-detectors exploring essentially three technologies: megaton-scale water Cerenkov detectors, like LBNE in DUSEL~\cite{LBNE}, Hyper-K in Japan~\cite{HK} and Memphys in Europe~\cite{Memphys}; 100~kton-scale LAr TPC detectors, like GLACIER in Europe~\cite{Glacier} or LAr LBNE in DUSEL \cite{LBNE-LAr}; and 50~kton-scale liquid scintillator detectors, like LENA in Europe~\cite{LENA} or Hanohano in Hawaii~\cite{Hano-Hano}. Some such detectors can hope to collect individual neutrino events every few years from beyond the Local Group of galaxies (a few megaparsecs), assuming that background can be reduced sufficiently. The LAGUNA~\cite{LAGUNA} project in Europe is studying the performance of these three technologies for detecting supernova neutrinos. The three proposed large-volume detector neutrino observatories can guarantee continuous exposure for several decades, so that a high statistics supernova neutrino signal could eventually be observed. The expected numbers of events for GLACIER, LENA and MEMPHYS are reported in Ref.~\cite{LAGUNA_detect}, including the neutronization burst rates and diffuse supernova neutrino background. 
\subsection{Diffuse supernova neutrino background} The diffuse supernova neutrino background (DSNB) is the flux of neutrinos and antineutrinos emitted by all core-collapse supernovae that have occurred so far in the Universe. It will appear isotropic and time-independent in feasible observations. The DSNB has not been detected yet, but discovery prospects are excellent. The Super-Kamiokande experiment established an upper limit on the $\bar{\nu}_\rme$ flux of $\Phi (\bar{\nu}_\rme) < 1.2~\mathrm{cm^{-2}~s^{-1}}$ for neutrino energies higher than 19.3~MeV~\cite{sk-dsnb}, close to the predictions. Figure~\ref{fig:sk-dsnb} shows the energy spectrum of DSNB candidates. Points are data and the expected total atmospheric neutrino background is shown by the thick solid line. The largest allowed DSNB signal is shown by the shaded region added to the atmospheric background. \begin{figure}[ht] \begin{center} \includegraphics[width=6.2cm]{GilBotelaFigs/relic_beacom.png} \includegraphics[width=6.5cm]{GilBotelaFigs/relic+_beacom_new.png} \caption{(left) Energy spectrum of DSNB candidates measured by SK and (right) expected detection rates in SK with dissolved gadolinium (from Ref.~\cite{beacom}).} \label{fig:sk-dsnb} \end{center} \end{figure} If Super-Kamiokande is modified with dissolved gadolinium to reduce detector backgrounds and increase the energy range for analysis, then the DSNB could be detected at a rate of a few events per year~\cite{beacom}. LAr TPCs would be able to detect mainly the $\nu_\rme$ component of the DSNB signal, providing complementary information with respect to Super-Kamiokande. The main background sources for these events in the relevant neutrino energy range of 10--50~MeV are solar and low-energy atmospheric neutrinos. 
Depending on the theoretical predictions for the DSNB flux, a 100~kton LAr detector running for five years would obtain a more than 4$\sigma$ measurement of the DSNB flux~\cite{cocco}. \section{Conclusions} Neutrinos are responsible for some of the most important discoveries of the past few years in particle and astroparticle physics. Experimental data have proved that neutrinos oscillate and, therefore, that they are massive particles. Nevertheless, fundamental questions regarding neutrinos remain unanswered, and present and future neutrino experiments will try to provide an answer to them. The main goals of such a research programme include the measurement of the unknown $\theta_{13}$ mixing angle, the sign of $\Delta m^2_{32}$ (type of mass hierarchy), the determination of whether or not CP violation occurs in the leptonic sector, the value of the neutrino masses, and the Majorana or Dirac nature of neutrinos, among others. New facilities and detectors are being proposed to answer these questions, using both oscillation and non-oscillation experiments. Neutrinos still have surprises for us, and the near future is going to be very exciting. We will have a better understanding of neutrino physics thanks to the experimental programme of the coming years. \section*{Acknowledgements} I would like to thank the organizers for inviting me to this great School and in particular I am very grateful to the discussion leaders and students for creating a friendly environment and for very interesting discussions.
\section{Introduction} This paper is devoted to classifying the structure of standing waves for the following system of Schr\"odinger-Poisson type, \begin{equation}\label{mains} \left\{\begin{aligned} &2 i\dot{v}_+-\Delta v_++(\mu_{11}\phi_{v_+}-\mu_{12}\phi_{v_-})v_+ -\frac{1}{2\pi}\int_0^{2\pi}g(v_++e^{i\theta}\overline{v}_-)d\theta=0,\\ & 2i\dot{v}_--\Delta v_-+(\mu_{22}\phi_{v_-}-\mu_{12}\phi_{v_+})v_- -\frac{1}{2\pi}\int_0^{2\pi}g(v_-+e^{i\theta}\overline{v}_+)d\theta=0, \end{aligned}\right. \end{equation} where $v_\pm(t,x):\mathbb{R}^{1+3}\rightarrow \mathbb{C}$, $\phi_u=(4\pi|\cdot|)^{-1} \ast |u|^2$, $\mu_{ij} >0$ for $i,j=1,2$ and $g:\mathbb{C} \rightarrow \mathbb{C} $ is a nonlinearity satisfying $g(e^{is} v)=e^{is}g(v)$ for all $s\in \mathbb{R}$. This system arises as a limit system of the nonlinear Maxwell-Klein-Gordon equations when we consider the nonrelativistic limit, taking the speed of light to infinity. The nonlinear Maxwell-Klein-Gordon equations are written as follows: \begin{equation}\tag{NMKG} \left\{\begin{aligned} & D_\alpha D^{\alpha}\psi = (mc)^2\psi -g(\psi), \\ & \partial^{\beta}F_{\alpha\beta} = \frac{1}{c}\text{Im}(\psi\overline{D_\alpha\psi}), \end{aligned}\right. \end{equation} where $\psi(t,x):\mathbb{R}^{1+3}\rightarrow \mathbb{C}$ is a wave function, $g :\mathbb{C}\rightarrow \mathbb{C}$ is a nonlinear potential satisfying $g(e^{is}v)=e^{is}g(v)$ for all $s\in \mathbb{R}$, $D_\alpha =\partial_\alpha +\frac{i}{c}A_\alpha$, $\alpha = 0, 1, 2, 3$, is the covariant derivative and $F_{\alpha\beta} =\partial_\alpha A_\beta-\partial_{\beta}A_{\alpha}$ is the curvature tensor. Here, we write $\partial_0 = \frac{\partial}{c\partial t}$ and $\partial_j = \frac{\partial}{\partial x_{j}}$, $j = 1, 2, 3$. Indices are raised with the Minkowski metric $g_{\alpha\beta} = \text{diag}(-1, 1, 1, 1)$, i.e., $X^{\alpha} = g^{\alpha\beta}X_{\beta}$. 
The positron part $v_+$ and electron part $v_-$ of a solution $\psi$ to (NMKG) are given by \[ v_+ \coloneqq e^{-ic^2t}\frac12\left(\psi-i\langle\nabla\rangle_c^{-1}D^+_0\psi\right),\quad v_- \coloneqq e^{-ic^2t}\frac12\left(\bar\psi-i\langle\nabla\rangle_c^{-1}D^-_0\bar\psi\right), \] where $\langle \nabla\rangle_c$ denotes the Fourier multiplier with the symbol $\sqrt{|\cdot|^2+c^2}$ and $D_\alpha^{\pm} = \partial_\alpha \pm \frac{i}{c}A_\alpha$. A formal computation (see, for example, \cite{MN}) shows that in the nonrelativistic limit $c\to\infty$, $(v_+,v_-)$ converges to a solution of the system \eqref{mains} with $\mu_{ij} = 2$. In other words, a solution of (NMKG) can be approximated by two different modes of oscillation, namely $\psi=e^{ic^2t}v_++e^{-ic^2t}\overline{v_-}+o_c(1)$ as $c\rightarrow \infty$, where $(v_+,v_-)$ is a solution to the system \eqref{mains}. The integral $\frac{1}{2\pi}\int_0^{2\pi}g(v_\pm+e^{i\theta}\overline{v}_\mp)d\theta$ in \eqref{mains} describes the resonances; if we take the cubic nonlinearity $g(v)=|v|^2v$, it simply becomes $(|v_{\pm}|^2+2|v_\mp|^2)v_\pm$. In \cite{MN, MN1}, Masmoudi and Nakanishi rigorously justified the convergence of the nonrelativistic limit in the space $C([0,T]; H^1)$ when either the Maxwell gauge terms are not involved, that is, $A_\alpha=0$ for $\alpha = 0, 1, 2, 3$, or the nonlinear potential term $g$ is absent. In these cases, the system \eqref{mains} reduces, respectively, to the equation \begin{equation}\label{coupledNLS} 2i\dot{v}_{\pm}-\Delta v_{\pm}-\frac{1}{2\pi}\int_0^{2\pi}g(v_\pm+e^{i\theta}\overline{v_\mp})d\theta=0 \end{equation} or the system \begin{equation}\label{htr} \left\{\begin{aligned} &2 i\dot{v}_+-\Delta v_++(\mu_{11}\phi_{v_+}-\mu_{12}\phi_{v_-})v_+ =0,\\ & 2i\dot{v}_--\Delta v_-+(\mu_{22}\phi_{v_-}-\mu_{12}\phi_{v_+})v_- =0. \end{aligned}\right. 
\end{equation} We note that the system \eqref{mains} serves as a physically meaningful generalization of several well-studied semilinear PDEs. For example, when either $v_+$ or $v_-$ vanishes, the system \eqref{mains} reduces to the so-called Schr\"odinger-Poisson equation \[ 2i\dot{v} -\Delta v + \mu \phi_{v} v - g(v )=0, \] which has been extensively studied during the past two decades. We refer to the literature \cite{C, CW, DM1, DM, MRS, R} for the existence and qualitative properties of positive standing waves. The system \eqref{coupledNLS} with $g(u)=|u|^2u$ is called the coupled system of cubic nonlinear Schr\"odinger equations and appears in the theory of Bose–Einstein condensates and nonlinear optics (see \cite{AA,AC, BDW, LW, MCSS, MS, PW, RCF,S1, TBDC,WY} and references therein). Moreover, if we take $v_+ = v_-$ and $\mu_{11} = \mu_{22}$ in the system \eqref{htr}, we obtain the nonlinear Hartree equation $$ 2 i\dot{v} -\Delta v + \mu \phi_{v} v =0, $$ which arises in the self-gravitational collapse of a quantum mechanical system and in the physics of laser beams. We refer to \cite{L, Li, Le, MZ} and references therein for the study of ground states and standing waves. In relation to the system \eqref{htr}, the existence of positive standing waves is studied in \cite{WS} when the parameters $\mu_{11}$ and $\mu_{22}$ are negative. The main novelty of this paper is therefore to investigate the existence of standing waves having nontrivial components $v_+,\, v_-$ for the full system \eqref{mains} with positive coupling parameters $\mu_{ij} > 0$. We will see that the nontrivial standing waves of the aforementioned reduced systems play a significant role in analyzing the structure of standing waves to \eqref{mains}. 
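As an illustration of the resonance average, one can verify the cubic formula quoted above directly. Writing $w=v_++e^{i\theta}\overline{v}_-$ and expanding $|w|^2w=w^2\overline{w}$, every term carrying a factor $e^{\pm i\theta}$ or $e^{2i\theta}$ vanishes upon integration in $\theta$, so that
\[
\frac{1}{2\pi}\int_0^{2\pi}\big|v_++e^{i\theta}\overline{v}_-\big|^2\big(v_++e^{i\theta}\overline{v}_-\big)\,d\theta
=|v_+|^2v_++2|v_-|^2v_+=\big(|v_+|^2+2|v_-|^2\big)v_+,
\]
and similarly for the other component.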
Now, we insert the standing wave ansatz \[ v_+(x,t)=u_1(x)e^{i\frac{\lambda}{2} t} \mbox{ and } v_-(x,t)=u_2(x)e^{i\frac{\lambda}{2} t} \] into the system \eqref{mains} to obtain \begin{equation}\label{gmee} \begin{cases} -\Delta u_1+\lambda u_1+(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1=\frac{1}{2\pi}\int_0^{2\pi} g(u_1+e^{i\theta}u_2)d\theta \ \ \mbox{ in }\ \ \mathbb{R}^3, \\ -\Delta u_2+\lambda u_2+(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2=\frac{1}{2\pi}\int_0^{2\pi} g(u_2+e^{i\theta}u_1)d\theta \ \ \mbox{ in }\ \ \mathbb{R}^3, \end{cases} \end{equation} where $u_1, u_2:\mathbb{R}^3\rightarrow \mathbb{R}$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. We denote by $\vec{u}$ a pair of functions $(u_1,\, u_2)$. Here we define some notions of triviality and positivity of a vector function $\vec{u}$. \begin{defn} A vector function $\vec{u} = (u_1,u_2)$ is said to be \smallskip \begin{enumerate}[$*$] \item nontrivial if either $u_1 \neq 0$ or $u_2 \neq 0$; \item semi-trivial if it is nontrivial but either $u_1 = 0$ or $u_2 = 0$; \item vectorial if both $u_1$ and $u_2$ are not zero; \item nonnegative if $u_1 \geq 0$ and $u_2 \geq 0$; \item positive if $u_1 > 0$ and $u_2 > 0$. \end{enumerate} \end{defn} We denote by $H^1 = H^1(\mathbb{R}^3)$ the completion of $C^\infty_c(\mathbb{R}^3)$, the space of real-valued smooth functions with compact support, with respect to the norm \[ \|u \|_{\lambda}=\Big(\int_{\mathbb{R}^3}|\nabla u|^2+\lambda u^2dx\Big)^\frac12. \] The notation $H_r^1$ denotes the space of radially symmetric functions in $H^1$. We also define ${\bf H} \coloneqq H_r^1\times H_r^1$ equipped with the norm \[ \|\vec{u}\|_{\bf{H}}^2= \|u_1 \|_{\lambda}^2+\|u_2 \|_{\lambda}^2. \] We are now ready to state the main results. 
Our first result deals with the case $g = 0$, in which the system \eqref{gmee} is reduced to \begin{equation}\label{bme} \begin{cases} -\Delta u_1+\lambda u_1+(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1=0 \ \ \mbox{ in }\ \ \mathbb{R}^3,\\ -\Delta u_2+\lambda u_2+(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2=0\ \ \mbox{ in }\ \ \mathbb{R}^3. \end{cases} \end{equation} In this case we can provide a satisfactory picture of the solution structure of \eqref{bme}. Interestingly, it turns out that the sign of the determinant of the matrix $(\mu_{ij})$ completely determines the existence of nontrivial solutions to \eqref{bme}. \begin{thm}\label{btm} Let $\lambda, \mu_{ij} >0$ for $i, j=1,2$. If $\mu_{11}\mu_{22}\ge \mu_{12}^2$, then \eqref{bme} has only the trivial solution in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$. If $\mu_{11}\mu_{22}< \mu_{12}^2$, then the system \eqref{bme} admits a positive vector solution $\vec{u} \in \mathbf{H}$, which is unique in the class of nontrivial nonnegative functions in {\bf H}. In fact, it is given by $\vec{u}=(V,a_0V)$, where $a_0=\sqrt{\frac{\mu_{11}+\mu_{12}}{\mu_{22}+\mu_{12}}}$ and $V$ is the unique positive radial solution of the equation \begin{equation}\label{bme1} -\Delta u+\lambda u+ \frac{\mu_{11}\mu_{22}-\mu_{12}^2}{\mu_{22}+\mu_{12}} \phi_{u} u=0. \end{equation} \end{thm} \begin{rmk}\em It seems interesting to compare the system \eqref{bme} with the corresponding single equation, the so-called nonlinear Hartree equation \[ -\Delta u+\lambda u +\mu \phi_uu = 0. \] It is proved in \cite{L, Le} that it admits a unique positive radial solution when $\mu < 0$. (Note that this makes the statement of Theorem \ref{btm} valid.) By multiplying by $u$ and integrating, it is also easy to see that it has only the trivial solution if $\mu \geq 0$. Here, we point out that the sign of $\text{det}(\mu_{ij})$ for the system \eqref{bme} plays the same role as the sign of $\mu$. 
\end{rmk} \begin{rmk}\label{rmk-no-semi}\em It is straightforward to show that the system \eqref{bme} does not have any semi-trivial solution. Suppose, for example, that $u_2 = 0$ for a solution $\vec{u}$, so that \eqref{bme} reduces to $-\Delta u_1+\lambda u_1+\mu_{11}\phi_{u_1}u_1=0$. Then, as mentioned in the previous remark, we see that $u_1 = 0$. \end{rmk} We next turn our attention to the case $g \neq 0$. In this paper, we focus only on the standard power nonlinearity $g(v)=|v|^{p-1}v$. Then we can write the system \eqref{gmee} as \begin{equation}\label{gme} \begin{cases} -\Delta u_1+\lambda u_1+(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1=\frac{1}{2\pi}\int_0^{2\pi} |u_1+e^{i\theta}u_2|^{p-1}(u_1+e^{i\theta}u_2)d\theta \ \ \mbox{ in }\ \ \mathbb{R}^3, \\ -\Delta u_2+\lambda u_2+(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2=\frac{1}{2\pi}\int_0^{2\pi} |u_2+e^{i\theta}u_1|^{p-1}(u_2+e^{i\theta}u_1) d\theta \ \ \mbox{ in }\ \ \mathbb{R}^3. \end{cases} \end{equation} In this case, one can compare the system \eqref{gme} with \begin{equation}\label{NSP} -\Delta u+ \lambda u + \mu\phi_{u} u= |u|^{p-1} u \mbox{ in } \mathbb{R}^3, \end{equation} which is called the nonlinear Schr\"odinger-Poisson equation. It is proved in \cite{DM} that when $p \in (0, 1] \cup [5,\infty)$, there exists no nontrivial solution to \eqref{NSP}. In the interesting paper \cite{R}, Ruiz investigated the existence of positive radial solutions to \eqref{NSP} in the subcritical range $p \in (1,5)$. He shows that while a positive radial solution exists when $p \in (2, 5)$ for each $\lambda, \mu > 0$, the situation changes in the case $p \in (1,2]$: there exists no nontrivial solution for large $\mu > 0$, while there exist two positive radial solutions for small $\mu > 0$. It is therefore natural to expect that the system \eqref{gme} admits only the trivial solution if the exponent $p > 0$ is either sub-linear or $H^1$ super-critical. 
The second result of this paper confirms this whenever $\det{(\mu_{ij})} \geq 0$. \begin{thm}\label{th0} Let $\lambda, \mu_{ij} >0$ for $i,j=1,2$ satisfying $\mu_{11}\mu_{22}\ge \mu_{12}^2$. If $p\in(0,1]\cup [5,\infty)$, then \eqref{gme} has only the trivial solution in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$. \end{thm} \begin{rmk}\em The condition $\mu_{11}\mu_{22}\ge \mu_{12}^2$ is not merely a technical assumption. Indeed, we note that, when $p=1$, \eqref{gme} becomes the system \begin{equation}\label{endc} \begin{cases} -\Delta u_1+(\lambda-1) u_1+(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1=0 \ \ \mbox{ in }\ \ \mathbb{R}^3, \\ -\Delta u_2+(\lambda-1)u_2+(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2=0 \ \ \mbox{ in }\ \ \mathbb{R}^3.\end{cases} \end{equation} Then, by Theorem \ref{btm}, we deduce that if $\lambda >1$ and $\mu_{ij}>0$ with $\mu_{11}\mu_{22}<\mu_{12}^2$, then \eqref{endc} has a positive solution. \end{rmk} In the subsequent theorems, we investigate the structure of ground states to \eqref{gme} in the super-linear and sub-critical range $p \in (1, 5)$ in search of positive vector solutions. Observe that the system \eqref{gme} is the Euler-Lagrange system of the action functional \begin{align*} I(\vec{u})&=\frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\frac14\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\qquad - \frac{1}{p+1}\frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi} |u_1+e^{i\theta}u_2|^{p+1}d\theta dx. \end{align*} We say that a solution $\vec{u} \in \mathbf{H}$ to \eqref{gme} is a (radial) ground state if $\vec{u}$ is nontrivial and minimizes the value of $I$ among all nontrivial solutions to \eqref{gme} in $\mathbf{H}$. Similarly to the results on the nonlinear Schr\"odinger-Poisson equation, it turns out that the solution structure of \eqref{gme} changes at the borderline $p = 2$. 
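It is worth recording how the $\theta$-average simplifies on the diagonal. If $u_1=u_2=U$, then $|U+e^{i\theta}U|^{p-1}(U+e^{i\theta}U)=(2+2\cos\theta)^{\frac{p-1}{2}}(1+e^{i\theta})|U|^{p-1}U$, and since the imaginary part integrates to zero,
\[
\frac{1}{2\pi}\int_0^{2\pi}\big|U+e^{i\theta}U\big|^{p-1}\big(U+e^{i\theta}U\big)\,d\theta
=\frac{2^{\frac{p-1}{2}}}{2\pi}\int_0^{2\pi}(1+\cos\theta)^{\frac{p+1}{2}}\,d\theta\;|U|^{p-1}U .
\]
This computation is the origin of the constant $c_p$ appearing below, and it reduces the system \eqref{gme} on the diagonal to a single Schr\"odinger-Poisson equation when $\mu_{11}=\mu_{22}$.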
The next theorem is concerned with the range $p \in (2,5)$. \begin{thm}\label{th3} Let $2<p<5$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. Then there exists a non-negative ground state solution $\vec{w} $ of \eqref{gme}. If either $2<p<5$ and $\mu_{11}=\mu_{22}$, or $\frac13 (-2 + \sqrt{73})\le p<5$, then $\vec{w}$ is positive. Moreover, if $2<p<5$ and $\mu_{11}=\mu_{22}$, then $\vec{w}$ must be of the form $\vec{w} =(U ,U )$, where $U$ is a positive ground state solution of \begin{equation}\label{sie} -\Delta u+ \lambda u + ( {\mu_{11} - \mu_{12}}) \phi_{u} u= \frac{c_p}{2}|u|^{p-1} u \mbox{ in } \mathbb{R}^3 \end{equation} with $c_p = \frac{2^{\frac{p-1}{2}}}{ \pi}\int_0^{2\pi} (1+\cos \theta)^{\frac{p+1}{2}}d\theta$. \end{thm} \begin{rmk}\em We note that the number $\frac13 (-2 + \sqrt{73})\approx 2.18133$ is slightly larger than 2. We believe that this number appears only for technical reasons and that any ground state is actually positive in the whole range $2< p <5$ without the assumption $\mu_{11} = \mu_{22}$. \end{rmk} Now, we turn to the remaining case $p \in (1, 2]$. It turns out that the solution structure varies with the sign of $\det{(\mu_{ij})}$. We first construct a positive solution of \eqref{gme} when $\mu_{12}>0$ is taken to be large, i.e., $\det(\mu_{ij}) < 0$ and $|\det(\mu_{ij})|$ is large. The solution is obtained as a perturbation of $(V,V)$, where $V$ is the solution of the nonlinear Hartree equation \[ -\Delta V+\lambda V-\phi_V V=0 \mbox{ in } \mathbb{R}^3. \] \begin{thm}\label{th5} Let $1<p\le2$ and $\lambda, \mu_{11},\, \mu_{22} >0$ be fixed. Then there exists a constant $\mu_0 > 0$ such that if $\mu_{12} > \mu_0$, the system \eqref{gme} admits a positive solution. \end{thm} The next two theorems cover the case $\det(\mu_{ij}) > 0$. We will see that, unlike the case $\det(\mu_{ij}) < 0$, the system \eqref{gme} does not admit any nontrivial solution when $\det(\mu_{ij})$ is taken to be large. 
\begin{thm} \label{th31} Let $1<p\le2$ and $\mu_{ij} >0$ for $i,j=1,2$. Assume that $\lambda\ge2$, $\mu_{11}>4$ and $(\mu_{11}-4)(\mu_{22}-4)>\mu_{12}^2.$ Then \eqref{gme} has only the trivial solution in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$. \end{thm} Our final result deals with the case where $\det(\mu_{ij}) > 0$ and the $\mu_{ij}$ are small. We construct two positive solutions to \eqref{gme} such that one has positive energy and the other is an energy minimizer with negative energy. This corresponds to the work of Ruiz \cite{R}, in which the same kind of positive solutions for \eqref{sie} are found. \begin{thm}\label{th4} Let $1<p<2$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. If $\mu_{11}\mu_{22}-\mu_{12}^2>0$ and the $\mu_{ij}$, $i,j=1,2$, are small enough, then there are at least two different positive radial solutions $\vec{u}_1$, $\vec{u}_2$ of \eqref{gme}, where $\vec{u}_1$ is a positive minimizer for \eqref{gme} with negative energy, and $\vec{u}_2$ is a positive solution of \eqref{gme} with positive energy. Moreover, if we assume $\mu_{11}\mu_{22}-\mu_{12}^2>0$, $\mu_{11}=\mu_{22}$ and the $\mu_{ij}>0$, $i,j=1,2$, are small enough, then $\vec{u}_1$ and $\vec{u}_2$ have the form $\vec{u}_1=(U_1,U_1)$ and $\vec{u}_2=(U_2,U_2)$, where $U_1$ is a positive minimizer for \eqref{sie} with negative energy, and $U_2$ is a positive solution of \eqref{sie} with positive energy. \end{thm} \begin{rmk}\em Theorems \ref{th5} and \ref{th4} show that the value of $\det(\mu_{ij})$ affects the solution structure of \eqref{gme} in the case $p \in (1,2]$. If $\det(\mu_{ij})$ is negative with large absolute value, then there exists a positive solution bifurcating from the ground state of the nonlinear Hartree equation. If $\det(\mu_{ij})$ is positive and the coefficients are small, then there exist two positive solutions bifurcating from the two positive solutions of the nonlinear Schr\"odinger-Poisson equation. \end{rmk} The rest of the paper is organized as follows. 
In Section \ref{bmepr}, we study the system \eqref{gmee} with $g=0$ and prove Theorem \ref{btm}. Section \ref{strucsect} is devoted to providing classification results for positive solutions to \eqref{gme}, as well as some properties of the semi-trivial solutions to \eqref{gme}, such as their energy levels and Morse indices. In Section \ref{nonlin}, we study the existence of positive solutions to \eqref{gme} for $2<p<5$ and prove Theorem \ref{th3}. In Section \ref{negadet}, we construct positive solutions to \eqref{gme} for $1<p\le 2$ and $\det(\mu_{ij})<0$, and prove Theorem \ref{th5}. In Section \ref{posidet}, we present the multiplicity and non-existence results for \eqref{gme} for $1<p<2$ and $\det(\mu_{ij})>0$, and give the proofs of Theorem \ref{th31} and Theorem \ref{th4}. In Appendix \ref{append1}, we prove a Pohozaev identity for \eqref{gme} and give non-existence results for \eqref{gme} when $p\in (0,1]\cup [5,\infty)$, which provide the proof of Theorem \ref{th0}. Finally, in Appendix \ref{regularity}, we discuss some regularity properties of the energy functional for \eqref{gme}. \section{Classification of solutions in the zero potential case} \label{bmepr} In this section, we prove Theorem \ref{btm}, which gives a complete classification of nonnegative solutions of \eqref{bme}. Throughout this section, we assume $\lambda, \mu_{ij} >0$ for $i,j=1,2$, and $\vec{u}=(u_1,u_2)\in H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ always denotes an arbitrary solution of \eqref{bme}. \subsection{Triviality of $\vec{u}$ when $\det(\mu_{ij}) \geq 0$} We first deal with the case $\det(\mu_{ij}) \geq 0$, i.e., $\mu_{11}\mu_{22}\ge \mu_{12}^2$. Our claim is that $\vec{u}$ is trivial. 
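In the computations below, we repeatedly use the fact that $\phi_u=(4\pi|\cdot|)^{-1}\ast u^2$ solves the Poisson equation $-\Delta \phi_u=u^2$. In particular, integration by parts gives
\[
\int_{\mathbb{R}^3}\phi_{u_1}u_2^2\,dx=\int_{\mathbb{R}^3}\phi_{u_1}(-\Delta\phi_{u_2})\,dx=\int_{\mathbb{R}^3}\nabla\phi_{u_1}\cdot\nabla\phi_{u_2}\,dx=\int_{\mathbb{R}^3}\phi_{u_2}u_1^2\,dx .
\]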
Multiplying the first equation of \eqref{bme} by $u_1$ and the second equation by $u_2$ respectively, and integrating, we see that \begin{equation}\label{ch5} \begin{aligned} 0&=\int_{\mathbb{R}^3}\Big(-\Delta u_1+\lambda u_1+(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1\Big)u_1dx =\int_{\mathbb{R}^3}|\nabla u_1|^2+\lambda u_1^2 +\mu_{11}\phi_{u_1}u_1^2-\mu_{12}\phi_{u_2}u_1^2dx\\ &=\int_{\mathbb{R}^3}|\nabla u_1|^2+\lambda u_1^2 +\mu_{11}\phi_{u_1}u_1^2-\mu_{12}\phi_{u_1}u_2^2dx \end{aligned} \end{equation} and \begin{equation}\label{ch6} 0=\int_{\mathbb{R}^3}\Big( -\Delta u_2+\lambda u_2+(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2\Big)u_2dx =\int_{\mathbb{R}^3}|\nabla u_2|^2+\lambda u_2^2 +\mu_{22}\phi_{u_2}u_2^2-\mu_{12}\phi_{u_1}u_2^2dx. \end{equation} Note that, since $\mu_{ij}>0$, $\mu_{11}\mu_{22}\ge \mu_{12}^2$ and $ \nabla \phi_{u_1}\cdot \nabla \phi_{u_2}\le \frac{\mu_{11}}{2\mu_{12}}|\nabla \phi_{u_1}|^2+\frac{\mu_{12}}{2\mu_{11}}|\nabla \phi_{u_2}|^2, $ we have \begin{equation}\label{mupos} \mu_{11}|\nabla \phi_{u_1}|^2+\mu_{22}|\nabla \phi_{u_2}|^2 -2\mu_{12}\nabla\phi_{u_1}\cdot\nabla\phi_{u_2} \ge \frac{\mu_{11}\mu_{22}-\mu_{12}^2}{\mu_{11}}|\nabla \phi_{u_2}|^2\ge0. \end{equation} Then, adding \eqref{ch5} and \eqref{ch6} and using \eqref{mupos}, we get \begin{align*} 0&= \int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2 +\lambda u_2^2+\mu_{11}\phi_{u_1}u_1^2+\mu_{22}\phi_{u_2}u_2^2 -2\mu_{12}\phi_{u_1}u_2^2 dx \\ & = \int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2 +\lambda u_2^2+\mu_{11}|\nabla \phi_{u_1}|^2+\mu_{22}|\nabla \phi_{u_2}|^2 -2\mu_{12}\nabla\phi_{u_1}\cdot\nabla\phi_{u_2} dx \\ & \ge \int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2 +\lambda u_2^2dx, \end{align*} which implies that $\vec{u} \in H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ is trivial. \subsection{Explicit form of $\vec{u}$ when $\det(\mu_{ij}) < 0$} We next assume $\mu_{11}\mu_{22}< \mu_{12}^2$. 
Suppose that $\vec{u}$ is nontrivial, nonnegative and radial. We have already seen from Remark \ref{rmk-no-semi} that $\vec{u}$ is vectorial. Then the strong maximum principle applies to show that $\vec{u}$ is positive. We now state a lemma which clarifies the structure of $\vec{u}$. \begin{lem}\label{betaunq} Assume $\lambda, \mu_{12} >0$ and $\mu_{ii} \ge 0$ for $i=1,2$. Then for any positive radial solution $\vec{u}=(u_1,u_2)$ of \eqref{bme}, we have $u_2\equiv a_0u_1$, where $a_0=\sqrt{\frac{\mu_{11}+\mu_{12}}{\mu_{22}+\mu_{12}}}$. \end{lem} \begin{proof} Let $\vec{u}=(u_1,u_2)$ be a positive radial solution of \eqref{bme}. By Newton's theorem and H\"older's inequality, we see that \begin{equation}\label{phexp} \phi_{u_i}(x)=\frac{1}{|x|}\int_0^\infty u_i^2(s)s^2\min\{|x|s^{-1},1\}ds\le C\|u_i\|_{H^1}^2\min\{1, |x|^{-1}\} \end{equation} for $i=1,2$. Thus, by the comparison principle, for any $0<\lambda^\prime<\sqrt{\lambda}<\lambda^{\prime \prime}$, there exist constants $C_1, C_2>0$ such that \[ C_1\exp(-\lambda^{\prime \prime}|x|)\le u_i(x)\le C_2\exp(-\lambda^{\prime }|x|) \mbox{ for } x\in \mathbb{R}^3, \] and thus $ u_i^2/ u_j \in H^1(\mathbb{R}^3)$ for $i,j=1,2$. Denote $v_2\equiv a_0^{-1}u_2$, where $a_0=\sqrt{\frac{\mu_{11}+\mu_{12}}{\mu_{22}+\mu_{12}}}$. Then, since $\phi_{u_2}=a_0^2\phi_{v_2}$, we have \begin{equation}\label{bbme} \begin{cases} -\Delta u_1+\lambda u_1+(\mu_{11}\phi_{u_1}-a_0^2\mu_{12}\phi_{v_2})u_1 =0, \\ -\Delta v_2+\lambda v_2+(a_0^2\mu_{22}\phi_{v_2}-\mu_{12}\phi_{u_1})v_2 =0.\end{cases} \end{equation} Dividing the first and second equations of \eqref{bbme} by $u_1$ and $v_2$, respectively, and subtracting the second equation from the first, we have \begin{align*} 0&=-(\Delta u_1)u_1^{-1}+(\Delta v_2)v_2^{-1}+\Big((\mu_{11}+\mu_{12})\phi_{u_1}-a_0^2(\mu_{22}+\mu_{12})\phi_{v_2}\Big) \\ &=-(\Delta u_1)u_1^{-1}+(\Delta v_2)v_2^{-1}+(\mu_{11}+\mu_{12}) (\phi_{u_1}- \phi_{v_2} ). 
\end{align*} Multiplying the above equation by $u_1^2-v_2^2$ and integrating by parts, we obtain \begin{align*} 0&=\int_{\mathbb{R}^3}\nabla u_1\cdot \nabla \Big(\frac{u_1^2-v_2^2}{u_1}\Big)-\nabla v_2\cdot \nabla\Big(\frac{u_1^2-v_2^2}{v_2}\Big)+(\mu_{11}+\mu_{12}) (\phi_{u_1}- \phi_{v_2} )(u_1^2-v_2^2)dx \\ &=\int_{\mathbb{R}^3}(u_1^2+v_2^2)\Big|\frac{\nabla u_1}{u_1}-\frac{\nabla v_2}{v_2}\Big|^2+(\mu_{11}+\mu_{12}) |\nabla( \phi_{u_1}- \phi_{v_2}) |^2dx, \end{align*} which implies that $u_1\equiv v_2$. \end{proof} By applying Lemma \ref{betaunq}, we immediately see that $\vec{u}$ has the form $\vec{u}=(V,a_0V)$, where $a_0=\sqrt{\frac{\mu_{11}+\mu_{12}}{\mu_{22}+\mu_{12}}}$ and $V$ is a positive solution of the Hartree equation \begin{equation}\label{bme11} -\Delta u+\lambda u+ \frac{\mu_{11}\mu_{22}-\mu_{12}^2}{\mu_{22}+\mu_{12}} \phi_{u} u=0. \end{equation} We finally refer to \cite[Theorem 10]{L} or \cite[Lemma 9]{Le} to conclude that the positive radial solution of \eqref{bme11} is unique when $\mu_{11}\mu_{22}< \mu_{12}^2$. This completes the proof of Theorem \ref{btm}. \section{Structure theorems for nonnegative solutions in the power potential case}\label{strucsect} The rest of the paper is devoted to the study of the system \eqref{gme}, that is, the power potential case. 
We begin by defining the energy functional for \eqref{gme}: \begin{align*} I(\vec{u})&=\frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\frac14\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\qquad - \frac{1}{p+1}\frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi} |u_1+e^{i\theta}u_2|^{p+1}d\theta dx\\ &=\frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\frac14\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\qquad - \frac{1}{p+1} \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta dx. \end{align*} We prove in Appendix \ref{regularity} that $I$ is twice continuously differentiable on $\mathbf{H}$ and that its critical points are solutions to \eqref{gme}. By finding nonzero critical points of $I$, one can therefore produce nontrivial solutions to \eqref{gme}. This shall be done throughout the subsequent sections. One fundamental task of this paper is to distinguish these solutions from the semi-trivial ones. We will see that this can be accomplished by obtaining some information on the mathematical structure of the semi-trivial solutions, such as their energy levels or Morse indices. The main purpose of this section is to investigate such structures of semi-trivial solutions. We also prove a rigidity theorem for positive solutions, which is required to classify positive solutions. \subsection{Morse indices of semi-trivial solutions} In this subsection, we compute a lower bound for the Morse indices of semi-trivial critical points of $I$. We say that $n$ is the Morse index of a critical point $\vec{u}$ of $I$ if $n$ is the maximal dimension of subspaces $V \subset \mathbf{H}$ satisfying \[ I^{\prime \prime}(\vec{u})[h, h] < 0 \quad \text{for all } h \in V. 
\] The following identity is straightforward from \eqref{hss1} in Appendix \ref{regularity} but is frequently invoked throughout this section. \begin{lem}\label{semimorse} Assume $1<p<5$. Let $\lambda>0$, $\mu_{ij} \ge0$ for $i,j=1,2$, $(t_1,t_2)\in \mathbb{R}^2$ and $(u_1,u_2)\in {\bf H}$. Then we have \begin{equation}\label{abhss1} \begin{aligned} I^{\prime \prime}(u_1,0)\Big[(t_1u_1, t_2u_1),(t_1u_1, t_2u_1)\Big]&=t_1^2\int_{\mathbb{R}^3} |\nabla u_1|^2+\lambda u_1^2+3\mu_{11} \phi_{u_1} u_1^2- p |u_1 |^{p+1}dx\\ &\quad +t_2^2\int_{\mathbb{R}^3} |\nabla u_1|^2+\lambda u_1^2-\mu_{12} \phi_{u_1} u_1^2 - \frac{p+1}{2} |u_1|^{p+1}dx \end{aligned} \end{equation} and \begin{align*} I^{\prime \prime}(0,u_2)\Big[(t_1u_2, t_2u_2),(t_1u_2, t_2u_2)\Big] &=t_1^2\int_{\mathbb{R}^3} |\nabla u_2|^2+\lambda u_2^2-\mu_{12} \phi_{u_2} u_2^2 - \frac{p+1}{2} |u_2|^{p+1}dx\\ &\quad +t_2^2\int_{\mathbb{R}^3} |\nabla u_2|^2+\lambda u_2^2+3\mu_{22} \phi_{u_2} u_2^2- p |u_2 |^{p+1}dx. \end{align*} \end{lem} \begin{prop}\label{mmors} Assume that $1<p<5$. Let $\lambda$ and $\mu_{ij}$ be positive constants for $i,j=1,2$. Then every semi-trivial critical point of $I$ has Morse index larger than or equal to $1$. \end{prop} \begin{proof} By the symmetry of the system \eqref{gme}, it suffices to deal with the semi-trivial critical point $(u_1,0)$. Since $u_2 = 0$, $u_1$ solves $$-\Delta u_1+\lambda u_1+ \mu_{11}\phi_{u_1} u_1 = |u_1|^{p-1} u_1$$ so that one has $$ \int_{\mathbb{R}^3} |\nabla u_1|^2+\lambda u_1^2+\mu_{11} \phi_{u_1} u_1^2 - |u_1|^{p+1}dx=0. $$ Then by \eqref{abhss1}, we see that for $t\in \mathbb{R}\setminus \{0\}$, \begin{equation*} \begin{aligned} &I^{\prime \prime}(u_1,0)\Big[(0, tu_1),(0, tu_1)\Big] \\ &= t^2\int_{\mathbb{R}^3} |\nabla u_1|^2+\lambda u_1^2-\mu_{12} \phi_{u_1} u_1^2 - \frac{p+1}{2} |u_1|^{p+1}dx\\ &=-t^2\int_{\mathbb{R}^3} (\mu_{11}+\mu_{12}) \phi_{u_1} u_1^2+\frac{p-1}{2} |u_1|^{p+1}dx<0.
\end{aligned} \end{equation*} This shows that the Morse index of $(u_1,0)$ is at least $1$. \end{proof} \begin{prop}\label{msi1} Let $\lambda$ and $\mu_{ij}$ be positive constants for $i,j=1,2$. Assume that $\frac13 (-2 + \sqrt{73})\le p<5$. Then every semi-trivial critical point of $I$ has Morse index larger than or equal to $2$. \end{prop} \begin{proof} Let $(u_1,0)$ be a semi-trivial critical point of $I$ so that $u_1$ solves $$-\Delta u_1+\lambda u_1+ \mu_{11}\phi_{u_1} u_1 = |u_1|^{p-1} u_1.$$ Define $ a= \int_{\mathbb{R}^3} |\nabla u_1|^2dx,\ \ b= \int_{\mathbb{R}^3}\lambda u_1^2dx,\ \ c= \int_{\mathbb{R}^3}\mu_{11} \phi_{u_1} u_1^2dx,\ \ d= \int_{\mathbb{R}^3} |u_1 |^{p+1}dx. $ Then we have \begin{equation}\label{esee1} \begin{cases} a+b+c-d=0,\\ \frac12 a+\frac32 b+\frac54 c-\frac{3}{p+1}d=0. \end{cases} \end{equation} Here, the second equation is the so-called Pohozaev identity; we refer to \cite{R} or to Proposition \ref{pz} in the Appendix, applied to the solution $(u_1,0)$. By \eqref{abhss1}, we have \begin{equation}\label{ese2} \begin{aligned} I^{\prime \prime}(u_1,0)\Big[(t_1u_1, t_2u_1),(t_1u_1, t_2u_1)\Big] &=t_1^2\int_{\mathbb{R}^3} |\nabla u_1|^2+\lambda u_1^2+3\mu_{11} \phi_{u_1} u_1^2- p |u_1 |^{p+1}dx\\ &\quad +t_2^2\int_{\mathbb{R}^3} |\nabla u_1|^2+\lambda u_1^2-\mu_{12} \phi_{u_1} u_1^2 - \frac{p+1}{2} |u_1|^{p+1}dx\\ &=t_1^2 (\rom{1})+t_2^2(\rom{2}). \end{aligned} \end{equation} Since $a+b+c-d=0$, we have \begin{equation}\label{ese3} \begin{aligned} (\rom{2})=-\int_{\mathbb{R}^3} (\mu_{11}+\mu_{12}) \phi_{u_1} u_1^2+\frac{p-1}{2} |u_1|^{p+1}dx<0. \end{aligned} \end{equation} On the other hand, by \eqref{esee1}, we have $$ a=\frac{1}{3}b+\frac{5p-7}{3(p+1)}d,\ \ c=-\frac{4}{3}b+\frac{2(5-p)}{3(p+1)}d $$ and $$ (\rom{1})=a + b + 3 c - p d=\frac13 \Big(-8b-\frac{3p^2+4p-23}{p+1}d\Big).
$$ Since $\frac{3p^2+4p-23}{p+1}\ge0$ if $p\ge\frac13 (-2 + \sqrt{73})\approx 2.18133$, we have $(\rom{1})\le -\frac{8}{3}b<0$ if $p\ge\frac13 (-2 + \sqrt{73})$, and consequently the Morse index of $(u_1,0)$ is larger than or equal to 2. By symmetry, the Morse index of the semi-trivial solution $(0,u_2)$ of \eqref{gme} is also larger than or equal to two. \end{proof} \subsection{Energy comparison of semi-trivial solutions} In this subsection, we prove that a semi-trivial solution to \eqref{gme} is not a ground state when $\mu_{11} = \mu_{22}$ and $2 < p < 5$. We accomplish this by comparing the energy of the semi-trivial solutions with the energy of the solution of the form $(W,W)$, whose existence is guaranteed in the case $\mu_{11} = \mu_{22}$. In fact, if $\mu_{11} = \mu_{22}$ then $(W,W)$ gives a solution of \eqref{gme} for any solution $W$ of the equation \[ -\Delta u+ \lambda u + ( {\mu_{11} - \mu_{12}}) \phi_{u} u= \frac{c_p}{2}|u|^{p-1} u, \] where we define \begin{equation}\label{cp} \begin{aligned} c_p \coloneqq \frac{1}{2\pi}\int_0^{2\pi} |1+e^{i\theta}|^{p+1}d\theta=\frac{2^{\frac{p-1}{2}}}{ \pi}\int_0^{2\pi} (1+\cos \theta)^{\frac{p+1}{2}}d\theta. \end{aligned} \end{equation} We shall show that the energy level of $(W,W)$ is lower than that of the semi-trivial solutions. We first prepare some auxiliary lemmas. \begin{lem}\label{uniqct}\cite[Lemma 3.3]{R} Let $2<p<5$, $a,c>0$ and $b\in \mathbb{R}$. Define $h:[0,\infty)\rightarrow \mathbb{R}$ by $$ h(t)=at+bt^3-ct^{2p-1}. $$ Then $h$ has a unique critical point, which corresponds to its maximum. \end{lem} Let us denote $$ I_\gamma(u) \coloneqq \int_{\mathbb{R}^3}\frac12\Big(|\nabla u|^2+u^2\Big)+\frac{\gamma}{4} u^2 \phi_{u} -\frac{1}{p+1} |u|^{p+1}dx. $$ \begin{lem}\label{l1} Let $v$ and $w$ be positive ground state solutions of \[ -\Delta v+v+ \gamma \phi_{v} v= |v|^{p-1} v \ \mbox{ and }\ -\Delta w+w+ \mu \phi_{w} w = |w|^{p-1} w, \] respectively, where $2<p<5$ and $ \gamma, \mu\in \mathbb{R}$ with $\gamma>\mu.
$ Then we have $$ I_\gamma(v) > I_\mu(w). $$ \end{lem} \begin{rmk}\em The existence of a positive radial ground state for $I_\gamma$ with $2< p < 5$ was proved in \cite{R} when $\gamma > 0$ and in \cite{V} when $\gamma < 0$. \end{rmk} \begin{proof} We recall that $$ I_\gamma(v)=\inf_{u\neq 0}\max_{t\ge 0}I_\gamma(u_t) \mbox{ and } I_\mu(w)=\inf_{u\neq 0}\max_{t\ge 0} I_\mu(u_t), $$ where $u_t(x)\equiv t^2u(tx)$ (see \cite[Lemma 2.4]{AP}). Combining Nehari's identity $$ \int_{\mathbb{R}^3} |\nabla v|^2+ v^2+\gamma v^2\phi_v- |v|^{p+1}dx=0 $$ and the Pohozaev identity (see \cite{R} or Proposition \ref{pz}) $$ \int_{\mathbb{R}^3}\frac12 |\nabla v|^2+\frac32 v^2+\frac{5\gamma}{4}v^2\phi_v-\frac{3}{p+1}v^{p+1}dx=0, $$ we get $$ \int_{\mathbb{R}^3}\frac32 |\nabla v|^2+\frac12 v^2+\frac{3\gamma}{4}v^2\phi_v-\frac{2p-1}{p+1}v^{p+1}dx=0. $$ From this and Lemma \ref{uniqct}, we see that $I_\gamma(v) =\max_{t\ge 0}I_\gamma(v_t)$, and thus \begin{align*} I_\gamma(v)&=\max_{t\ge 0}I_\gamma(v_t) =\max_{t\ge 0}\bigg[\int_{\mathbb{R}^3}\frac12\Big(t^3|\nabla v|^2+tv^2\Big)+\frac{\gamma}{4}t^3 v^2 \phi_{v} -\frac{t^{2p-1}}{p+1} |v|^{p+1}dx\bigg]\\ &>\max_{t\ge 0}\bigg[\int_{\mathbb{R}^3}\frac12\Big(t^3|\nabla v|^2+tv^2\Big)+\frac{\mu}{4}t^3 v^2 \phi_{v} -\frac{t^{2p-1}}{p+1} |v|^{p+1}dx\bigg] \ge I_\mu(w). \end{align*} \end{proof} A direct computation shows that the following scaling property holds. \begin{lem}\label{rmk1} Let $v$ be a solution of $$ -\Delta v+a v+ b\phi_{v} v= c|v|^{p-1} v, $$ where $1<p<5$, $a,c>0$ and $b\in \mathbb{R}$, which is the Euler--Lagrange equation of \[ I_{a,b,c}(v) = \int_{\mathbb{R}^3}\frac12|\nabla v|^2+\frac{a}{2}v^2+\frac{b}{4}\phi_v v^2-\frac{c}{p+1}|v|^{p+1}dx.
\] Then a rescaled function $\tilde{v}(x)=\Big(\frac{c}{a}\Big)^{\frac{1}{p-1}}v\Big(\frac{x}{\sqrt{a}}\Big)$ solves $$ -\Delta \tilde{v}+ \tilde{v}+ \gamma(a,b,c)\phi_{\tilde{v}} \tilde{v}= |\tilde{v}|^{p-1} \tilde{v}, $$ where $\gamma(a,b,c) = \frac{b}{a^2}\left(\frac{a}{c}\right)^\frac{2}{p-1}$ and satisfies \[ I_{\gamma(a,b,c)}(\tilde{v}) = I_{1,\gamma(a,b,c),1}(\tilde{v}) = \Big(\frac{c}{a}\Big)^{\frac{2}{p-1}}a^\frac{1}{2} I_{a,b,c}(v). \] \end{lem} Now, we are ready to prove the main result of this subsection. \begin{prop}\label{egc1} Assume $2<p<5$, $\lambda>0, \mu_{ij}>0$, $\mu_{11}=\mu_{22}$, where $i,j =1,2$. Then we have \begin{equation}\label{engc} I(V,0)=I(0,V) > I(W,W), \end{equation} where $W$ and $V$ are positive ground state solutions of $$ -\Delta u+ \lambda u + ( {\mu_{11} - \mu_{12}}) \phi_{u} u= \frac{c_p}{2}|u|^{p-1} u \ \mbox{ and }\ -\Delta u+\lambda u+ \mu_{11}\phi_{u} u= |u|^{p-1} u, $$ respectively. \end{prop} \begin{proof} Denote $\tilde{W}(x)=\Big(\frac{c_p}{2\lambda }\Big)^\frac{1}{p-1}W\Big(\frac{x}{\sqrt{\lambda}}\Big) $ and $\tilde{V}(x)=\Big(\frac{1}{\lambda }\Big)^\frac{1}{p-1}V \Big(\frac{x}{\sqrt{\lambda}}\Big)$. Then, by Lemma \ref{rmk1}, we see that $\tilde{W}$ and $\tilde{V} $ are positive ground state solutions of $$ -\Delta u+u +\frac{ 2^\frac{2}{p-1}(\mu_{11}-\mu_{12})}{ \lambda^\frac{2p-4}{p-1} c_p^\frac{2}{p-1}}\phi_{u} u= |u|^{p-1} u \ \mbox{ and }\ -\Delta u+u+ \frac{\mu_{11}}{\lambda^\frac{2p-4}{p-1} } \phi_{u} u= |u|^{p-1} u, $$ respectively. By Jensen's inequality, we see that for $1<p<5$, $$ 1=\left(\frac{1}{2\pi}\int_0^{2\pi}(1+\cos\theta)d\theta\right)^\frac{p+1}{2}<\frac{1}{2\pi}\int_0^{2\pi}(1+\cos\theta)^\frac{p+1}{2}d\theta, $$ which implies that $2^{\frac{p+1}{2}}< c_p$ for $1<p<5$. Note that this means $\Big(\frac{c_p}{2}\Big)^\frac{2}{p-1}> 1$ for $2<p<5$, and thus we have $ \mu_{11} >\frac{ 2^\frac{2 }{p-1}(\mu_{11} - \mu_{12}) }{ c_p^\frac{2}{p-1}}$.
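As a numerical aside (not part of the proof), the strict inequality $2^{(p+1)/2}<c_p$ obtained from Jensen's inequality can be checked directly from the definition \eqref{cp} by approximating the integral with a midpoint rule; the sketch below is ours, and the function name \texttt{c\_p} is an assumption of this illustration.

```python
import math

def c_p(p, n=20000):
    # midpoint-rule approximation of
    # c_p = (2^{(p-1)/2} / pi) * \int_0^{2 pi} (1 + cos t)^{(p+1)/2} dt
    dt = 2 * math.pi / n
    s = sum((1 + math.cos((k + 0.5) * dt)) ** ((p + 1) / 2) for k in range(n))
    return 2 ** ((p - 1) / 2) / math.pi * s * dt

# Jensen's inequality predicts 2^{(p+1)/2} < c_p for every 1 < p < 5
for p in [1.5, 2.0, 2.5, 3.0, 4.0, 4.9]:
    assert 2 ** ((p + 1) / 2) < c_p(p)

# for p = 3 the integral is elementary: c_3 = (1/2pi) \int |1+e^{it}|^4 dt = 6
print(abs(c_p(3.0) - 6.0) < 1e-9)  # prints True
```

For $p=3$ the integrand is a trigonometric polynomial, so the midpoint rule is exact up to rounding, which makes that case a convenient consistency check.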
Then, by Lemma \ref{l1} and Lemma \ref{rmk1}, \begin{align*} &\int_{\mathbb{R}^3}\frac12\Big(|\nabla \tilde{V}|^2+\tilde{V}^2\Big)+\frac{\mu_{11}}{4\lambda ^\frac{2p-4}{p-1} } \tilde{V}^2 \phi_{\tilde{V}} -\frac{1}{p+1} |\tilde{V}|^{p+1}dx\\ &= \lambda^{-\frac{5-p}{2(p-1)}}\int_{\mathbb{R}^3}\frac12\Big(|\nabla V|^2+\lambda V^2\Big)+\frac{\mu_{11}}{4} V^2 \phi_{V} -\frac{1}{p+1} |V|^{p+1}dx\\ &=\lambda^{-\frac{5-p}{2(p-1)}}I(V,0) \\ &> \int_{\mathbb{R}^3}\frac12\Big(|\nabla \tilde{W}|^2+\tilde{W}^2\Big) +\frac{ 2^{\frac{2}{p-1}}(\mu_{11} - \mu_{12}) }{4\lambda^\frac{2p-4}{p-1} c_p^\frac{2}{p-1}}\phi_{\tilde{W}}\tilde{W}^2 -\frac{1}{p+1} |\tilde{W}|^{p+1}dx \\ &=\frac{c_p^\frac{2}{p-1} }{ 2^\frac{2}{ p-1 }\lambda^\frac{5-p}{2(p-1)}} \int_{\mathbb{R}^3}\frac12\Big(|\nabla W|^2+ \lambda W^2\Big) +\frac{\mu_{11}-\mu_{12}}{4}\phi_W W^2 -\frac{c_p}{2(p+1)} |W|^{p+1}dx\\ &=\frac{c_p^\frac{2}{p-1} }{ 2^\frac{2}{ p-1 } \lambda^\frac{5-p}{2(p-1)}}\frac12 I(W,W). \end{align*} Note that, since $2^{\frac{p+1}{2}}< c_p$ for $2<p<5$, we get $ \lambda^{-\frac{5-p}{2(p-1)}} <\frac{c_p^\frac{2}{p-1} }{2^{ \frac{p+1}{p-1}} \lambda^{ \frac{5-p}{2(p-1)}}}.$ From this, we see that for $2<p<5$, $$ I(V,0)= I(0,V) > I(W,W). $$ \end{proof} \subsection{Rigidity results for positive solutions}\label{rigid} Analogously to Lemma \ref{betaunq}, we can show that the two components of a positive solution to \eqref{gme} coincide when $\mu_{11} = \mu_{22}$. \begin{prop}\label{rig1} Assume $1< p<5$, $\lambda>0$, $\mu_{ij}\ge0$ and $\mu_{11}=\mu_{22}$, where $i,j=1,2$. For any positive radial solution $\vec{u}=(u_1,u_2)$ of \eqref{gme}, we have $u_1\equiv u_2$.
\end{prop} \begin{proof} Let $\vec{u}=(u_1,u_2)\in {\bf H}$ be a positive solution of $$ \begin{cases} -\Delta u_1+\lambda u_1+(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1=\frac{1}{2\pi}\int_0^{2\pi}(u_1+u_2 \cos \theta )\Big(u_1^2+2u_1u_2 \cos \theta+u_2^2\Big)^\frac{p-1}{2}d\theta, \\ -\Delta u_2+\lambda u_2+(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2=\frac{1}{2\pi}\int_0^{2\pi}(u_2+u_1 \cos \theta )\Big(u_1^2+2u_1u_2 \cos \theta+u_2^2\Big)^\frac{p-1}{2}d\theta. \end{cases} $$ By \eqref{ibp}, we see that \begin{align*} \int_0^{2\pi}(u_1+u_2 \cos \theta )\Big(u_1^2+2u_1u_2 \cos \theta+u_2^2\Big)^\frac{p-1}{2}d\theta\le Cu_1(u_1^{p-1}+u_2^{p-1})\le \tilde{C}|x|^{-(p-1)}u_1, \end{align*} where we used the Strauss inequality $|u_i(x)|\le C\|u_i\|_{H^1}|x|^{-1}$ for $i=1,2$. Moreover, by Newton’s Theorem, $$ \phi_{u_i}(x)=\frac{1}{|x|}\int_0^\infty u_i^2(s)s\min\{|x|,s\}ds\le C\min\{1, \|u_i\|_{H^1}^2|x|^{-1}\} $$ for $i=1,2$. Thus, by the comparison principle, for any $\lambda^\prime<\lambda<\lambda^{\prime \prime}$, there exist constants $C_1, C_2>0$ such that \begin{equation}\label{uqz2} C_1\exp(-\lambda^{\prime \prime}|x|)\le u_i(x)\le C_2\exp(-\lambda^{\prime }|x|) \mbox{ for } x\in \mathbb{R}^3, \end{equation} where $i=1,2$. From \eqref{uqz2}, we deduce that $ u_i^2/ u_j \in H^1(\mathbb{R}^3)$, where $i,j=1,2$. Moreover, by \eqref{uq}, we see that for $a,b>0$, \begin{equation}\label{uqz1} \begin{aligned} &\int_0^{2\pi}\frac{a+b \cos \theta }{a}\Big(a^2+2ab \cos \theta+b^2\Big)^\frac{p-1}{2} -\frac{b+a \cos \theta }{b}\Big(a^2+2ab \cos \theta+b^2\Big)^\frac{p-1}{2}d\theta\\ &= \frac{b^2-a^2}{ab}\int_0^{2\pi} \cos\theta \Big(a^2+2ab \cos \theta+b^2\Big)^\frac{p-1}{2} d\theta\\ &= (p-1)(b^2 -a^2 )\int_0^{2\pi} \sin^2\theta \Big(a^2+2ab \cos \theta+b^2\Big)^\frac{p-3}{2} d\theta. 
\end{aligned} \end{equation} Then, by dividing the first and second equations of \eqref{gme} by $u_1$ and $u_2$, respectively, and subtracting the second one from the first one, we see from \eqref{uqz1} that \begin{align*} 0&=-(\Delta u_1)u_1^{-1}+(\Delta u_2)u_2^{-1}+\Big((\mu_{11}+\mu_{12})\phi_{u_1}-(\mu_{22}+\mu_{12})\phi_{u_2}\Big)\\ &\qquad +\frac{1}{2\pi }\int_0^{2\pi} \bigg[\frac{u_2+u_1 \cos \theta }{u_2}-\frac{(u_1+u_2 \cos \theta )}{u_1}\bigg](u_1^2+2u_1u_2 \cos \theta+u_2^2 )^\frac{p-1}{2} d\theta \\ &=-(\Delta u_1)u_1^{-1}+(\Delta u_2)u_2^{-1}+(\mu_{11}+\mu_{12}) (\phi_{u_1}- \phi_{u_2} )\\ &\quad +\frac{(p-1)( u_1^2-u_2^2 )}{2\pi }\int_0^{2\pi} \sin^2\theta \Big(u_1^2+2u_1u_2 \cos \theta+u_2^2\Big)^\frac{p-3}{2} d\theta. \end{align*} By \eqref{uqz2} and the fact that $$ (u_1^2-u_2^2)^2(u_1^2+2u_1u_2 \cos \theta+u_2^2 )^\frac{p-3}{2}\le C\begin{cases}(u_1+u_2)^2|u_1 -u_2 |^{p-1} &\mbox{ if } 1<p\le 3,\\ (u_1^4+u_2^4)(u_1^{p-3}+u_2^{p-3}) &\mbox{ if } 3<p<5, \end{cases} $$ multiplying the above equation by $u_1^2-u_2^2$ and integrating by parts yields \begin{equation}\label{eqid} \begin{aligned} 0&=\int_{\mathbb{R}^3}\nabla u_1\cdot \nabla \Big(\frac{u_1^2-u_2^2}{u_1}\Big)-\nabla u_2\cdot \nabla\Big(\frac{u_1^2-u_2^2}{u_2}\Big)+(\mu_{11}+\mu_{12}) (\phi_{u_1}- \phi_{u_2} )(u_1^2-u_2^2)\\ &\qquad + \frac{(p-1)(u_1^2-u_2^2)^2}{2\pi }\int_0^{2\pi} \sin^2\theta \Big(u_1^2+2u_1u_2 \cos \theta+u_2^2\Big)^\frac{p-3}{2} d\theta dx\\ &=\int_{\mathbb{R}^3}(u_1^2+u_2^2)\Big|\frac{\nabla u_1}{u_1}-\frac{\nabla u_2}{u_2}\Big|^2+(\mu_{11}+\mu_{12}) |\nabla( \phi_{u_1}- \phi_{u_2}) |^2\\ &\qquad + \frac{(p-1)(u_1^2-u_2^2)^2}{2\pi }\int_0^{2\pi} \sin^2\theta \Big(u_1^2+2u_1u_2 \cos \theta+u_2^2\Big)^\frac{p-3}{2} d\theta dx.
\end{aligned} \end{equation} If $u_1\not\equiv u_2$ in $\mathbb{R}^3$, then we may assume that $\Omega=\{x\in \mathbb{R}^3 : u_1(x)<u_2(x)\}\neq \emptyset.$ Since $$ u_1^2+2u_1u_2 \cos \theta+u_2^2=(u_2-u_1)^2+2u_1u_2(1+\cos\theta)>0 \mbox{ for all }\theta\in [0,2\pi] \mbox{ and }x\in \Omega,$$ the last term in \eqref{eqid} is strictly positive on $\Omega$ unless $u_1\equiv u_2$ there; as every term in \eqref{eqid} is nonnegative, \eqref{eqid} forces $u_1\equiv u_2$ in $\Omega$, which is a contradiction. Thus, we have $u_1 \equiv u_2$ in $\mathbb{R}^3$. \end{proof} \section{Power potential case with $2<p<5$}\label{nonlin} In this section, we prove the existence results for \eqref{gme} with $2 < p < 5$, and give the proof of Theorem \ref{th3}. \subsection{Construction of a nonnegative ground state}\label{p25} In this subsection, we construct a nontrivial nonnegative ground state solution of \eqref{gme} for $2<p<5$. We mainly follow the approach adopted in \cite{R}, but we need to modify the arguments because the term \[ \int_{\mathbb{R}^3}\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2}\,dx \] appearing in the energy functional $I$ is not sign definite.
We begin by setting up the manifold \begin{equation}\label{mmf} \mathcal{M}\equiv \{ \vec{u}=(u_1,u_2)\in {\bf H} \setminus \{\vec{0}\} \ |\ G(\vec{u})=0 \}, \end{equation} where \begin{align*} G(\vec{u})&\equiv 2J(\vec{u})+P(\vec{u})\\ &= \int_{\mathbb{R}^3} \frac32\Big(|\nabla u_1|^2+|\nabla u_2|^2\Big)+\frac{1}{2}\lambda( u_1^2+ u_2^2)+\frac{3}{4} \Big(\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2} \Big) \\ &\qquad -\frac{2p-1}{p+1}\bigg[ \frac{1}{2\pi}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta\bigg] dx \end{align*} and $J$ and $P$ are given by \begin{align*} J(\vec{u})&\equiv I^\prime(\vec{u})\vec{u}\\ &=\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\qquad - \frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi}\Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2} d\theta dx, \end{align*} \begin{align*} P(\vec{u})&\equiv \int_{\mathbb{R}^3}-\frac12\Big(|\nabla u_1|^2+|\nabla u_2|^2\Big)-\frac{3}{2}\lambda( u_1^2+ u_2^2)dx-\frac{5}{4} \int_{\mathbb{R}^3}\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2} dx \\ &\qquad + \frac{3}{p+1}\int_{\mathbb{R}^3}\frac{1}{2\pi}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta dx.
\end{align*} Note that, since \begin{equation}\label{gx} \begin{aligned} &G^\prime(\vec{u})[\psi_1,\psi_2]\\ &= \int_{\mathbb{R}^3} 3(\nabla u_1\cdot \nabla \psi_1 + \nabla u_2\cdot \nabla \psi_2)+\lambda u_1 \psi_1+\lambda u_2\psi_2dx\\ &\quad + 3\int_{\mathbb{R}^3}\mu_{11} u_1 \phi_{u_1} \psi_1 + \mu_{22} u_2 \phi_{u_2} \psi_2 - \mu_{12} u_1 \phi_{u_2} \psi_1 -\mu_{12} u_2 \phi_{u_1} \psi_2 dx\\ &\quad - (2p-1) \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi}\Big((u_1+u_2\cos \theta ) \psi_1+( u_1\cos \theta+u_2) \psi_2\Big)\Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p-1}{2} d\theta dx\\ &= \int_{\mathbb{R}^3} 3(\nabla u_1\cdot \nabla \psi_1 + \nabla u_2\cdot \nabla \psi_2)+\lambda u_1 \psi_1+\lambda u_2\psi_2dx\\ &\quad + 3\int_{\mathbb{R}^3}\mu_{11} u_1 \phi_{u_1} \psi_1 + \mu_{22} u_2 \phi_{u_2} \psi_2 - \mu_{12} u_1 \phi_{u_2} \psi_1 -\mu_{12} u_2 \phi_{u_1} \psi_2 dx\\ &\quad - (2p-1) \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi}|u_1+e^{i\theta}u_2|^{p-1}(u_1+e^{i\theta}u_2) \psi_1+|u_2+e^{i\theta}u_1|^{p-1}(u_2+e^{i\theta}u_1) \psi_2 d\theta dx, \end{aligned} \end{equation} if $G^\prime(\vec{u})=0$, $\vec{u}$ satisfies \begin{equation}\label{eqg} \begin{cases} -3\Delta u_1+\lambda u_1+3(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1=(2p-1)\frac{1}{2\pi}\int_0^{2\pi} |u_1+e^{i\theta}u_2|^{p-1}(u_1+e^{i\theta}u_2) d\theta, \\ -3\Delta u_2+\lambda u_2+3(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2=(2p-1)\frac{1}{2\pi}\int_0^{2\pi}|u_2+e^{i\theta}u_1|^{p-1}(u_2+e^{i\theta}u_1) d\theta. \end{cases} \end{equation} \begin{lem}\label{constr1} Let $2<p<5$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. Then $\mathcal{M}$ is non-empty and $\vec{0}\notin \partial\mathcal{M}$. Moreover, $\inf_{\mathcal{M}} I >0$, and if $\inf_{\mathcal{M}} I=I(\vec{u})$, $G^\prime(\vec{u}) \neq 0$. 
\end{lem} \begin{proof} Note that, by Jensen's inequality, for $\vec{u}=(u_1,u_2)\in {\bf H}\setminus \{\vec{0}\}$ \begin{equation}\label{je1} \begin{aligned} 0<\int_{\mathbb{R}^3}(u_1^2+u_2^2)^\frac{p+1}{2}dx&=\int_{\mathbb{R}^3}\Big(\frac{1}{2\pi}\int_0^{2\pi} u_1^2+2 u_1 u_2 \cos \theta+ u_2^2d\theta\Big)^\frac{p+1}{2}dx\\ &\le\frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi} \Big( u_1^2+2 u_1 u_2 \cos \theta+ u_2^2\Big)^\frac{p+1}{2}d\theta dx. \end{aligned} \end{equation} For any $\vec{u}=(u_1,u_2)\in {\bf H}\setminus \{\vec{0}\}$, one has that \begin{align*} &t^2(u_1(t\cdot),u_2(t\cdot))\in \mathcal{M} \\ & \iff \int_{\mathbb{R}^3} \frac32 t^3\Big(|\nabla u_1|^2+|\nabla u_2|^2\Big)+\frac{1}{2}t\lambda( u_1^2+ u_2^2)+\frac{3}{4} t^3\Big(\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2} \Big)dx \\ &\qquad \qquad =t^{2p-1}\frac{2p-1}{p+1}\int_{\mathbb{R}^3}\bigg[ \frac{1}{2\pi}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta\bigg] dx. \end{align*} From this, \eqref{je1} and Lemma \ref{uniqct}, for all $\vec{u}=(u_1,u_2)\in {\bf H}\setminus \{\vec{0}\}$, there exists a unique $t>0$ such that $t^2\vec{u}(t\cdot) \in \mathcal{M}$, which implies that $\mathcal{M}$ is non-empty.
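The uniqueness of the projection parameter $t$ amounts to the defining equation having the form $\alpha t^3+\beta t=\gamma t^{2p-1}$ with $\beta,\gamma>0$, where $\alpha$ may have either sign because the Hartree term is sign-indefinite. As a rough numerical illustration (not part of the proof), one can count the sign changes of $f(t)=\alpha t^3+\beta t-\gamma t^{2p-1}$ on a grid; the coefficient values below are arbitrary placeholders of ours.

```python
def positive_roots(alpha, beta, gamma, p, t_max=20.0, n=40000):
    # count sign changes of f(t) = alpha*t^3 + beta*t - gamma*t^(2p-1) on (0, t_max]
    f = lambda t: alpha * t ** 3 + beta * t - gamma * t ** (2 * p - 1)
    count, prev = 0, f(t_max / n)
    for k in range(2, n + 1):
        cur = f(k * t_max / n)
        if prev != 0 and cur != 0 and (cur > 0) != (prev > 0):
            count += 1
        if cur != 0:
            prev = cur
    return count

# beta, gamma > 0, while alpha (the Hartree contribution) may have either sign;
# for 2 < p < 5 the crossing is unique in each sampled case
for p in [2.5, 3.0, 4.0]:
    for alpha in [-1.0, 0.0, 2.0]:
        assert positive_roots(alpha, 1.0, 1.0, p) == 1
```

The uniqueness is transparent after dividing by $t$: the left-hand side $\beta+\alpha t^2$ meets the strictly convex, increasing right-hand side $\gamma t^{2p-2}$ (note $2p-2>2$) exactly once on $(0,\infty)$.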
By the facts that $ \int_{\mathbb{R}^3}|u|^{p+1} dx\le C_1 \|u\|_{H^1}^{p+1} $ and \begin{equation}\label{ch1} \int_{\mathbb{R}^3}u^2\phi_vdx\le \| u\|_{L^\frac{12}{5}}^2\|\phi_v\|_{L^6} \le \frac12\Big( \| u\|_{L^\frac{12}{5}}^4 +\|\phi_v\|_{L^6}^2\Big)\le C_2\Big(\|u\|_{H^1}^4 +\|v\|_{H^1}^4\Big) , \end{equation} where $C_1, C_2>0$ are constants, we have $$ G(\vec{u})\ge\frac12 (\|u_1\|_{\lambda}^2+\|u_2\|_{\lambda}^2)\left(1-C\frac{\|u_1\|_{\lambda}^4+\|u_2\|_{\lambda}^4}{\|u_1\|_{\lambda}^2+\|u_2\|_{\lambda}^2}-C\frac{\|u_1\|_{\lambda}^{p+1}+\|u_2\|_{\lambda}^{p+1}}{\|u_1\|_{\lambda}^2+\|u_2\|_{\lambda}^2}\right), $$ and hence we deduce that there is $\rho>0$ such that \begin{equation}\label{b1} \|u_1\|_{\lambda}^2+\|u_2\|_{\lambda}^2\ge \rho \mbox{ for all } \vec{u}\in \mathcal{M}, \end{equation} which implies that $\vec{0}\notin \partial\mathcal{M}$. We claim that $\inf_{\mathcal{M}} I >0$. Let $k=I(\vec{u})$, where $\vec{u}=(u_1,u_2)\in \mathcal{M}$. Define \begin{equation}\label{nota1} \begin{aligned} a &=\int_{\mathbb{R}^3} |\nabla u_1|^2+|\nabla u_2|^2dx, \ \ b=\int_{\mathbb{R}^3}\lambda u_1^2+\lambda u_2^2dx, \\ c &=\int_{\mathbb{R}^3}\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2} dx, \ \ d=\int_{\mathbb{R}^3}\bigg[ \frac{1}{2\pi}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta\bigg] dx. \end{aligned} \end{equation} Note that $a,b,c$ and $d$ satisfy \begin{equation}\label{abcd-rel} \begin{cases} \frac12 a+\frac12 b+\frac14 c-\frac{1}{p+1} d=k,\\ \frac32 a+\frac12 b+\frac34c-\frac{2p-1}{p+1} d=0. \end{cases} \end{equation} Solving the above system for $c$ and $d$ in terms of $a$, $b$ and $k$, we get $$ c=-2\frac{a(p-2)+b(p-1)+k(1-2p)}{p-2}, \ \ d=-\frac{b(p+1)-3k(p+1)}{2(p-2)}. $$ From the latter equality, we have \begin{equation}\label{b3} (p+1)b+2(p-2)d=3k(p+1) \end{equation} so that, since $b,d>0$ and $p>2$, we get $k > 0$ and consequently $\inf_{\mathcal{M}}I \geq 0$.
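The closed forms for $c$ and $d$ can be verified mechanically by substituting them back into \eqref{abcd-rel}; the snippet below (a standalone check of ours, not part of the argument) does this in exact rational arithmetic for a few sample values of $p$, $a$, $b$ and $k$, which are arbitrary placeholders.

```python
from fractions import Fraction as F

def check_cd(p, a, b, k):
    # closed-form solution of the 2x2 linear system for c and d
    c = -2 * (a * (p - 2) + b * (p - 1) + k * (1 - 2 * p)) / (p - 2)
    d = -(b * (p + 1) - 3 * k * (p + 1)) / (2 * (p - 2))
    eq1 = a / 2 + b / 2 + c / 4 - d / (p + 1)                        # should equal k
    eq2 = 3 * a / 2 + b / 2 + 3 * c / 4 - (2 * p - 1) * d / (p + 1)  # should equal 0
    identity_b3 = (p + 1) * b + 2 * (p - 2) * d == 3 * k * (p + 1)
    return eq1 == k and eq2 == 0 and identity_b3

# exact rational arithmetic leaves no room for rounding doubt
assert check_cd(F(3), F(1), F(1), F(1))
assert check_cd(F(5, 2), F(2), F(7), F(3))
assert check_cd(F(4), F(0), F(5), F(2))
```

The third assertion inside the function also confirms the derived identity $(p+1)b+2(p-2)d=3k(p+1)$, from which the positivity of $k$ follows when $b,d>0$ and $p>2$.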
Moreover, since $2<\frac{12}{5}<p+1$, \begin{align*} \int_{\mathbb{R}^3}|\nabla \phi_{u}|^2dx&=\int_{\mathbb{R}^3} \phi_{u} u^2dx\le \|\phi_{u}\|_{L^6}\|u\|_{L^\frac{12}{5}}^2 \le C\|\phi_{u}\|_{D^{1,2}}\|u\|_{L^2}^{2\Lambda}\|u\|_{L^{p+1}}^{2(1-\Lambda)}, \end{align*} where $\Lambda=\frac{5p-7}{6(p-1)}$, and hence by \eqref{je1} we deduce that \begin{equation}\label{err1} |c|\le C(\|\phi_{u_1}\|_{D^{1,2}}^2+\|\phi_{u_2}\|_{D^{1,2}}^2)\le C(b^{4\Lambda}+d^{\frac{8(1-\Lambda)}{p+1}}). \end{equation} Now, suppose that $\inf_{\mathcal{M}}I = 0$. Then there exists a sequence $\{\vec{u}_n\} \subset \mathcal{M}$ satisfying $k_n = I(\vec{u}_n) \to 0$ as $n \to \infty$. We define the constants $a_n, b_n, c_n, d_n$ as in \eqref{nota1} for $\vec{u}_n$. Since $p > 2$ and $b_n,d_n >0$, \eqref{b3} tells us that $b_n,d_n \to 0$ as $n\to\infty$. By \eqref{err1}, this shows that $c_n \to 0$ as $n\to \infty$, which contradicts \eqref{b1} and \eqref{abcd-rel}. Thus we conclude that $\inf_{\mathcal{M}} I >0$. Finally, we claim that if $\inf_{\mathcal{M}} I=I(\vec{u})=k$, then $G^\prime(\vec{u}) \neq 0$. Suppose, contrary to the claim, that $\inf_{\mathcal{M}} I=I(\vec{u})=k$ and $G^\prime(\vec{u}) =0$. By \eqref{gx}, one has \begin{align*} G^\prime(\vec{u})\vec{u}&= \int_{\mathbb{R}^3} 3 (|\nabla u_1|^2+|\nabla u_2|^2 )+ \lambda( u_1^2+ u_2^2)+3 \Big(\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2} \Big) \\ &\qquad -(2p-1) \bigg[ \frac{1}{2\pi}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta\bigg] dx=0. \end{align*} Then we have $$ \begin{cases} \frac12 a+\frac12 b+\frac14 c-\frac{1}{p+1} d=k,\\ \frac32 a+\frac12 b+\frac34c-\frac{2p-1}{p+1} d=0,\\ 3a+b+3c-(2p-1)d=0,\\ \frac32 a+ \frac32 b+\frac{15}{4}c-\frac{3(2p-1)}{p+1}d=0. \end{cases} $$ The fourth equation is the Pohozaev identity (see Proposition \ref{pz}) applied to \eqref{eqg}.
Then we see that $$ a=-\frac{k(2p-1)}{4(p-2)},\ \ b=\frac{3k(2p-1)}{2(p-1)},\ \ c=-\frac{k(2p-1)}{2(p-2)},\ \ d=-\frac{3k(p+1)}{4(p-1)(p-2)}, $$ and thus, since $k>0$ and $2<p<5$ force $a<0$, we contradict $a\ge 0$. \end{proof} \begin{lem}\label{constr2} Let $2<p<5$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. If $\vec{u}\in {\bf H}$ is a minimizer of $I$ on $\mathcal{M}$, then $\vec{u}$ is a non-trivial critical point of $I$. \end{lem} \begin{proof} Suppose that $\vec{u}=(u_1,u_2)\in {\bf H}$ is a minimizer of $I$ on $\mathcal{M}$. Then, by Lemma \ref{constr1}, there exists $\omega\in \mathbb{R}$ such that $$ I^\prime(\vec{u})=\omega G^\prime(\vec{u}), $$ which can be written, in a weak sense, as \begin{equation}\label{b2} \begin{cases} &-(1-3\omega)\Delta u_1+(1-\omega)\lambda u_1+(1-3\omega)(\mu_{11}\phi_{u_1}-\mu_{12}\phi_{u_2})u_1\\ &\qquad +\Big(-1+(2p-1)\omega\Big)\frac{1}{2\pi}\int_0^{2\pi} |u_1+e^{i\theta}u_2|^{p-1}(u_1+e^{i\theta}u_2)d\theta =0,\\ & -(1-3\omega)\Delta u_2+(1-\omega)\lambda u_2+(1-3\omega)(\mu_{22}\phi_{u_2}-\mu_{12}\phi_{u_1})u_2\\ &\qquad +\Big(-1+(2p-1)\omega\Big)\frac{1}{2\pi}\int_0^{2\pi} |u_2+e^{i\theta}u_1|^{p-1}(u_2+e^{i\theta}u_1)d\theta=0. \end{cases} \end{equation} Then, setting $k=I(\vec{u})$, from $\vec{u}\in \mathcal{M}$, $I^\prime(\vec{u})\vec{u}=\omega G^\prime(\vec{u})\vec{u}$ and the Pohozaev identity (see Proposition \ref{pz}) applied to \eqref{b2}, we have \begin{align*} \begin{cases}\frac12 a+\frac12 b+\frac{1}{4} c-\frac{1}{p+1} d=k,\\ \frac32a+\frac12 b+\frac34 c-\frac{2p-1}{p+1}d=0,\\ (1-3\omega)a+(1-\omega)b+(1-3\omega)c+\Big(-1+(2p-1)\omega\Big)d=0,\\ (1-3\omega)\frac12 a+(1-\omega)\frac32 b+(1-3\omega)\frac54c-\Big(1-(2p-1)\omega\Big)\frac{3}{p+1}d=0, \end{cases} \end{align*} where $a,b,c,d$ are given in \eqref{nota1}. Then since $b,d>0$ and $2<p<5$, we can prove $\omega=0$ (see \cite[Theorem 3.2]{R}).
\end{proof} \begin{lem}\label{es1} It holds that $$ \|\phi_u-\phi_v\|_{D^{1,2}}\le C\|u-v\|_{L^3}\|u+v\|_{L^2}, $$ where $C>0$ is a constant. \end{lem} \begin{proof} Since $-\Delta(\phi_u-\phi_v)=u^2-v^2$, we have \begin{align*} \int_{\mathbb{R}^3}|\nabla(\phi_u-\phi_v)|^2dx&= \int_{\mathbb{R}^3}(u-v)(u+v)(\phi_u-\phi_v)dx\le \|u-v\|_{L^3}\|u+v\|_{L^2}\|\phi_u-\phi_v\|_{L^6}\\ &\le C\|u-v\|_{L^3}\|u+v\|_{L^2}\|\phi_u-\phi_v\|_{D^{1,2}}. \end{align*} \end{proof} \begin{prop}\label{proth} Let $2<p<5$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. Then $I$ has a global minimum $\vec{u}$ on $\mathcal{M}$. Moreover, $\vec{u}$ is a non-trivial, non-negative critical point of $I$. \end{prop} \begin{proof} Let $ \vec{u}_n =(u_{1,n},u_{2,n})\in \mathcal{M}$ be such that $I(\vec{u}_n)\rightarrow \inf_{\mathcal{M}}I$. We claim that $\{\vec{u}_n\}$ is bounded in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$. Denote \begin{equation}\label{je2} \begin{aligned} &a_n=\int_{\mathbb{R}^3} |\nabla u_{1,n}|^2+|\nabla u_{2,n}|^2dx, \ \ b_n=\int_{\mathbb{R}^3}\lambda u_{1,n}^2+\lambda u_{2,n}^2dx, \\ &c_n =\int_{\mathbb{R}^3}\mu_{11}u_{1,n}^2 \phi_{u_{1,n}}+\mu_{22}u_{2,n}^2 \phi_{u_{2,n}} -2 \mu_{12} u_{1,n}^2 \phi_{u_{2,n}} dx, \\ &d_n=\int_{\mathbb{R}^3}\bigg[ \frac{1}{2\pi}\int_0^{2\pi} \Big(u_{1,n}^2+2u_{1,n} u_{2,n} \cos \theta+u_{2,n}^2\Big)^\frac{p+1}{2}d\theta\bigg] dx. \end{aligned} \end{equation} From \eqref{b3}, we get that $b_n$ and $d_n$ are bounded, and hence by \eqref{err1}, $c_n$ is bounded. Thus, since $ \vec{u}_n =(u_{1,n},u_{2,n})\in \mathcal{M}$ and $b_n, c_n, d_n$ are bounded, we have that $a_n$ is bounded. We may assume that $\vec{u}_n\rightharpoonup \vec{u}=(u_1,u_2)$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ and $\vec{u}_n\rightarrow \vec{u}$ in $L^q(\mathbb{R}^3)\times L^q(\mathbb{R}^3)$, where $2<q<6$.
Define \begin{align*} &a =\int_{\mathbb{R}^3} |\nabla u_1|^2+|\nabla u_2|^2dx, \ \ b=\int_{\mathbb{R}^3}\lambda u_1^2+\lambda u_2^2dx, \\ &c =\int_{\mathbb{R}^3}\mu_{11}u_1^2 \phi_{u_1}+\mu_{22}u_2^2 \phi_{u_2} -2 \mu_{12} u_1^2 \phi_{u_2} dx, \ \ d=\int_{\mathbb{R}^3}\bigg[ \frac{1}{2\pi}\int_0^{2\pi} \Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2}d\theta\bigg] dx\\ &\bar{a}=\lim_{n\rightarrow \infty} a_n, \ \ \bar{b}=\lim_{n\rightarrow \infty} b_n, \ \ \bar{c}=\lim_{n\rightarrow \infty} c_n, \ \ \bar{d}=\lim_{n\rightarrow \infty} d_n, \end{align*} where $a_n, b_n, c_n$ and $d_n$ are given in \eqref{je2}. Then, by the compactness of the embedding $H_r^1\rightarrow L^q$ for $2<q<6$ and Lemma \ref{es1}, we have $c=\bar{c}$ and $d=\bar{d}$. We claim that $\vec{u}_n\rightarrow \vec{u}$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ and $\vec{u}\in \mathcal{M}$. To prove the claim, it suffices to show that $a+b=\bar{a}+\bar{b}$. From the weak convergence, we have $a\le \bar{a}$ and $b\le \bar{b}$. Suppose to the contrary that $a+b<\bar{a}+\bar{b}$. Since $I(\vec{u}_n)\rightarrow \inf_{\mathcal{M}}I$ and $G(\vec{u}_n)=0$, we have \begin{equation}\label{b4} \begin{cases} \frac12\bar{a}+\frac12\bar{b}+\frac14\bar{c}-\frac{1}{p+1}\bar{d}=\inf_{\mathcal{M}} I,\\ \frac32\bar{a}+\frac12\bar{b}+\frac34\bar{c}-\frac{2p-1}{p+1} \bar{d}=0.\end{cases} \end{equation} By \eqref{b1} and the second equation of \eqref{b4}, $0<\frac{p+1}{2p-1}\Big(\frac32\bar{a}+\frac12\bar{b}\Big)=-\frac{3(p+1)}{4(2p-1)}\bar{c}+\bar{d}=-\frac{3(p+1)}{4(2p-1)}c+d$, which implies that $\vec{u}\neq \vec{0}$. Then, by \eqref{je1}, we have $\bar{a}\ge a>0, \bar{b}\ge b>0$ and $\bar{d}=d>0$.
Define \begin{align*} \bar{h}(t)=\frac12\bar{a}t^3+\frac12\bar{b}t+\frac14\bar{c}t^3-\frac{1}{p+1} \bar{d}t^{2p-1}\ \ \mbox{ and } \ \ h(t)=\frac12at^3+\frac12bt+\frac14ct^3-\frac{1}{p+1} dt^{2p-1}. \end{align*} By Lemma \ref{uniqct}, both functions have a unique critical point, corresponding to their maxima. From \eqref{b4}, the maximum of $\bar{h}$ is equal to $\inf_{\mathcal{M}} I$ and is achieved at $t=1$. We observe that, since $c=\bar{c}$, $d=\bar{d}$, $a\le \bar{a}$, $b\le \bar{b}$ and $a+b<\bar{a}+\bar{b}$, we have $h(t)<\bar{h}(t)$ for all $t>0$. Let $t_0>0$ be the point at which the maximum of $h$ is achieved. Then $h^\prime(t_0)=0$ and $h(t_0)<\max \bar{h}=\inf_{\mathcal{M}} I.$ Define $\vec{v}_0(x) =(t_0^2u_1(t_0x),t_0^2u_2(t_0x)).$ Then we have \begin{align*} I(\vec{v}_0)&=\frac12at_0^3+\frac12bt_0+\frac14ct_0^3-\frac{1}{p+1} dt_0^{2p-1}=h(t_0)<\inf_{\mathcal{M}} I,\\ G(\vec{v}_0)&=\frac32at_0^3+\frac12bt_0+\frac34ct_0^3-\frac{2p-1}{p+1}d t_0^{2p-1}=t_0h^\prime(t_0)=0. \end{align*} Thus, $\vec{v}_0\in \mathcal{M}$ and $I(\vec{v}_0)<\inf_{\mathcal{M}}I$, which is a contradiction. Hence, we have $\vec{u}_n\rightarrow \vec{u}$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ and $\vec{u}\in \mathcal{M}$. To prove the existence of a non-negative non-trivial critical point of $I$, we claim that \begin{equation}\label{posti2} \begin{aligned} \int_0^{2\pi} \Big(a^2+2a b \cos \theta+b^2\Big)^\frac{p+1}{2}d\theta=\int_0^{2\pi} \Big(a^2+2|a| |b| \cos \theta+b^2\Big)^\frac{p+1}{2}d\theta, \end{aligned} \end{equation} where $ a,b\in \mathbb{R}$.
Indeed, since $t^2-2t\cos\theta+1>0, t^2+2t\cos\theta+1>0$ for $|t|<1$, $0\le\theta\le 2\pi$ and $$ \int_0^{2\pi}(2t\cos\theta+t^2)^nd\theta=\int_0^{2\pi}(-2t\cos\theta+t^2)^nd\theta, $$ where $n\in \mathbb{N}$ and $|t|<1$, we have \begin{align*} \int_0^{2\pi}(1+2t\cos\theta+t^2)^\frac{p+1}{2}d\theta&=\sum_{n=0}^\infty\frac{h^{(n)}(0)}{n!}\int_0^{2\pi}(2t\cos\theta+t^2)^nd\theta\\ &=\sum_{n=0}^\infty\frac{h^{(n)}(0)}{n!}\int_0^{2\pi}(-2t\cos\theta+t^2)^nd\theta\\ &=\int_0^{2\pi}(1-2t\cos\theta+t^2)^\frac{p+1}{2}d\theta, \end{align*} where $|t|<1$ and $h(r)=(1+r)^{\frac{p+1}{2}}$, and thus we deduce the claim \eqref{posti2}. By \eqref{posti2}, if $\vec{u}=(u_1,u_2)$ is a minimizer of $I$ restricted to $\mathcal{M}$, then so is $ (|u_1|,|u_2|)$. Then, by Lemma \ref{constr2}, we see that $ (|u_1|,|u_2|)$ is a non-negative non-trivial critical point of $I$. \end{proof} \subsection{Proof of Theorem \ref{th3}} The existence part was proved in Subsection \ref{p25}. By Proposition \ref{proth}, we see that for $2<p<5$, there exists a non-trivial, non-negative ground state solution $\vec{w}$ of \eqref{gme}. Moreover, note that by Lemma \ref{constr1}, $\vec{w}$ is a minimizer for $I$ in $\mathcal{M}$, $\mathcal{M}$ is smooth in a neighborhood of $\vec{w}$ and $\mathcal{M}$ is of codimension $1$ in a neighborhood of $\vec{w}$, where $\mathcal{M}$ is given in \eqref{mmf}. Hence, a non-negative ground state solution $\vec{w}$ for \eqref{gme} has Morse index less than or equal to $1$. For $\mu_{ij}>0$, let $W$ be a ground state solution of \begin{equation}\label{sequa} -\Delta u+ \lambda u + ( {\mu_{11} - \mu_{12}}) \phi_{u} u= \frac{c_p}{2}|u|^{p-1} u, \end{equation} where $c_p$ is given in \eqref{cp}. The existence of a ground state solution to \eqref{sequa} was proved in \cite{V} when $\mu_{11}-\mu_{12}<0$ and in \cite[Theorem 3.2]{R} when $\mu_{11}-\mu_{12}>0$. Clearly, there exists a ground state to \eqref{sequa} when $\mu_{11}=\mu_{12}$.
We note that if $\mu_{11}=\mu_{22}$, $(W,W)$ is a solution of \eqref{gme}, and thus $(W,W)\in \mathcal{M}$. First, we assume that $\frac13 (-2 + \sqrt{73})\le p<5$ and $\mu_{ij}>0$, where $i,j=1,2$. Then, by Proposition \ref{msi1}, a ground state solution $\vec{w}$ of \eqref{gme} is positive, where we used the fact that a non-negative ground state solution $\vec{w}$ for \eqref{gme} has Morse index less than or equal to $1$. Next, we assume that $2<p<5$ and $\mu_{11}=\mu_{22}$. Then, by Proposition \ref{egc1} and the fact that $I(W,W)\ge I(\vec{w})$, we see that a ground state solution $\vec{w}$ is positive. Moreover, by Proposition \ref{rig1}, we deduce that $\vec{w}=(\tilde{W},\tilde{W})$, where $\tilde{W}$ is a positive ground state solution of \eqref{sequa}. $\Box$ \section{Power potential case with $1<p\le2$ and $\det(\mu_{ij}) <0$}\label{negadet} In this section, we prove Theorem \ref{th5}, the existence of a positive solution of \eqref{gme} for $1<p\le2$ and large $\mu_{12}>0$. This can be considered as an extreme case of $\det(\mu_{ij}) <0$. \subsection{Proof of Theorem \ref{th5}} Let $v_i(x)=\sqrt{\mu_{12}}u_i(x)$ for $i=1,2$.
Then \begin{align*} I_+(\vec{u})&=\frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx\\ &\quad +\frac14\int_{\mathbb{R}^3} \mu_{11}(u_1)_+^2 \phi_{(u_1)_+} + \mu_{22}(u_2)_+^2 \phi_{(u_2)_+}-2\mu_{12} (u_1)_+^2 \phi_{(u_2)_+} dx\\ &\qquad - \frac{1}{p+1} \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi} \Big((u_1)_+^2+2(u_1)_+ (u_2)_+ \cos \theta+(u_2)_+^2\Big)^\frac{p+1}{2}d\theta dx\\ &=\mu_{12}^{-1}\bigg[\frac{1}{2}\int_{\mathbb{R}^3}|\nabla v_1|^2+|\nabla v_2|^2+\lambda v_1^2+\lambda v_2^2dx\\ &\qquad +\frac{1}{4\mu_{12}}\int_{\mathbb{R}^3} \mu_{11}(v_1)_+^2 \phi_{(v_1)_+} + \mu_{22}(v_2)_+^2 \phi_{(v_2)_+}-2\mu_{12} (v_1)_+^2 \phi_{(v_2)_+} dx\\ &\qquad - \frac{1}{(p+1)\mu_{12}^{\frac{p-1}{2}}} \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi} \Big((v_1)_+^2+2(v_1)_+ (v_2)_+ \cos \theta+(v_2)_+^2\Big)^\frac{p+1}{2}d\theta dx\bigg]\\ &=\mu_{12}^{-1} \Big(I_0(\vec{v})+H_{\mu_{12}}(\vec{v})\Big), \end{align*} where $\vec{u}=(u_1,u_2)$, $\vec{v}=(v_1,v_2)$, $$ I_0(\vec{v})=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla v_1|^2+|\nabla v_2|^2+\lambda v_1^2+\lambda v_2^2dx-\frac12\int_{\mathbb{R}^3} (v_1)_+^2 \phi_{(v_2)_+} dx $$ and \begin{align*} H_{\mu_{12}}(\vec{v})&=\frac{1}{4\mu_{12}}\int_{\mathbb{R}^3} \mu_{11}(v_1)_+^2 \phi_{(v_1)_+} + \mu_{22}(v_2)_+^2 \phi_{(v_2)_+}dx\\ &\qquad - \frac{1}{2\pi(p+1)\mu_{12}^{\frac{p-1}{2}}} \int_{\mathbb{R}^3}\int_0^{2\pi} \Big((v_1)_+^2+2(v_1)_+ (v_2)_+ \cos \theta+(v_2)_+^2\Big)^\frac{p+1}{2}d\theta dx. \end{align*} By applying the result in \cite[Theorem 1]{1JS}, we try to find a critical point $\vec{v}$ of $I_0+H_{\mu_{12}}$. 
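The powers of $\mu_{12}$ in the identity above can be checked term by term from the substitution $u_i=\mu_{12}^{-1/2}v_i$. We record the bookkeeping (a verification sketch added here, assuming the normalization $-\Delta\phi_u=u^2$, which is consistent with the identity $\int_{\mathbb{R}^3}|u|^3dx=\int_{\mathbb{R}^3}(-\Delta\phi_u)|u|dx$ used elsewhere in the paper, so that $\phi_{u_i}=\mu_{12}^{-1}\phi_{v_i}$):

```latex
\int_{\mathbb{R}^3}|\nabla u_i|^2+\lambda u_i^2\,dx
 =\mu_{12}^{-1}\int_{\mathbb{R}^3}|\nabla v_i|^2+\lambda v_i^2\,dx,
\qquad
\mu_{12}\int_{\mathbb{R}^3}(u_1)_+^2\phi_{(u_2)_+}\,dx
 =\mu_{12}^{-1}\int_{\mathbb{R}^3}(v_1)_+^2\phi_{(v_2)_+}\,dx,
```

while the power term carries $\big((u_1)_+^2+2(u_1)_+(u_2)_+\cos\theta+(u_2)_+^2\big)^{\frac{p+1}{2}}=\mu_{12}^{-\frac{p+1}{2}}\big((v_1)_+^2+2(v_1)_+(v_2)_+\cos\theta+(v_2)_+^2\big)^{\frac{p+1}{2}}$, and $\mu_{12}^{-\frac{p+1}{2}}=\mu_{12}^{-1}\cdot\mu_{12}^{-\frac{p-1}{2}}$ is precisely the factor appearing in $H_{\mu_{12}}$.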
Indeed, we observe that by \eqref{ibp}, Lemma \ref{es1}, the compact embeddings $H^1_r(\mathbb{R}^3)\subset L^q(\mathbb{R}^3)$, where $2<q<6$, and the fact that \begin{align*} H_{\mu_{12}}^\prime(\vec{v})\vec{\psi}&=\frac{1}{\mu_{12}}\int_{\mathbb{R}^3} \mu_{11}(v_1)_+ \psi_1 \phi_{(v_1)_+} + \mu_{22}(v_2)_+\psi_2 \phi_{(v_2)_+}dx\\ & - \frac{1}{2\pi \mu_{12}^{\frac{p-1}{2}}}\int_{\mathbb{R}^3} \int_0^{2\pi} \Big((v_1)_+^2+2(v_1)_+ (v_2)_+ \cos \theta+(v_2)_+^2\Big)^\frac{p-1}{2}\\ &\qquad \times \left(\Big((v_1)_++1_{\{v_1>0\}}(v_2)_+\cos\theta\Big)\psi_1+\Big((v_2)_++1_{\{v_2>0\}}(v_1)_+\cos\theta\Big)\psi_2\right) d\theta dx, \end{align*} where $\vec{\psi}=(\psi_1,\psi_2)$, we see that $H_{\mu_{12}}:{\bf H}\rightarrow \mathbb{R}$ is a $C^1$ functional, that $H_{\mu_{12}}$ and $H_{\mu_{12}}^\prime:{\bf H}\rightarrow {\bf H}^{-1}$ are compact, and that for any $M>0$, $$ \lim_{\mu_{12}\rightarrow \infty}\sup_{\|\vec{v}\|_{\bf{H}} \le M}|H_{\mu_{12}}(\vec{v})|=\lim_{\mu_{12}\rightarrow \infty}\sup_{\|\vec{v}\|_{\bf{H}} \le M}\|H_{\mu_{12}}^\prime(\vec{v})\|_{{\bf H}^{-1}}=0. $$ Also, note that $I_0$ satisfies the Palais-Smale condition and the following properties: \begin{enumerate} \item there exist $c, r>0$ such that if $\|\vec{u}\|_{\bf{H}}=r$, then $I_0(\vec{u})\ge c$, and there exists $\vec{w}\in {\bf H}$ such that $\|\vec{w}\|_{\bf{H}}>r$ and $I_0(\vec{w})<0$; \item $$\min_{\gamma \in \Gamma}\max_{s\in[0,1]}I_0(\gamma(s))=\inf\{I_0(\vec{v}) \ | \ \vec{v}\in {\bf H}\setminus \{\vec{0}\}, I_0^\prime(\vec{v})=0\},$$ where $\Gamma=\{\gamma\in C([0,1],{\bf H})\ | \ \gamma(0)=0, \gamma(1)=\vec{w}\}$; \item there exists a curve $\gamma_0(s)\in \Gamma$ passing through $\vec{v}_0$ at $s=s_0$ and satisfying $$ I_0(\vec{v}_0)>I_0(\gamma_0(s)) \mbox{ for all } s\neq s_0.
$$ \end{enumerate} Indeed, using the compact embeddings $H^1_r(\mathbb{R}^3)\subset L^q(\mathbb{R}^3)$, where $2<q<6$, we can prove that the functional $I_0$ satisfies the Palais-Smale condition, that is, for any sequence $\{\vec{u}_n \}\subset {\bf H}$ satisfying $$|I_0(\vec{u}_n)|\le C \mbox{ uniformly in } n, \mbox{ and } \|I_0^\prime(\vec{u}_n)\|_{{\bf H}^{-1}}\rightarrow 0 \mbox{ as } n\rightarrow \infty,$$ $\{\vec{u}_n \}$ has a strongly convergent subsequence in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$. For $w\in C_c^\infty(\mathbb{R}^3)\setminus\{0\}$ with $w>0$ and for large $t>0$, $$ I_0(tw,tw)=t^2\int_{\mathbb{R}^3}|\nabla w|^2 +\lambda w^2dx-\frac{t^4}{2}\int_{\mathbb{R}^3} w^2 \phi_{w} dx<0 $$ and for some $C_1>0$, \begin{align*} I_0(\vec{u})&\ge \frac12(\|u_1\|_{\lambda}^2+\|u_2\|_{\lambda}^2)-\frac12\|\phi_{u_2}\|_{L^6}\|u_1\|_{L^\frac{12}{5}}^2\ge \frac12(\|u_1\|_{\lambda}^2+\|u_2\|_{\lambda}^2)-C_1\| u_2 \|_{\lambda}^2\|u_1\|_{\lambda}^2\\ &\ge \frac12\Big[ \|u_1\|_{\lambda}^2(1-C_1\|u_1\|_{\lambda}^2)+ \|u_2\|_{\lambda}^2(1-C_1\|u_2\|_{\lambda}^2) \Big], \end{align*} which imply that there exist $c,r>0$ such that if $\|\vec{u}\|_{{\bf H}}=r$, then $I_0(\vec{u})\ge c>0$ and there exists $(w,w)\in {\bf H}$ such that $\|(w,w)\|_{\bf{H}}>r$ and $I_0(w,w)<0$. This proves $(1)$. By the mountain pass theorem \cite{AR}, we see that there exists a critical point $\vec{v}_0\in {\bf H}\setminus\{0\}$ of $I_0$ such that $$ I_0(\vec{v}_0)=\inf_{\gamma\in \Gamma} \max_{s\in[0,1]}I_0(\gamma(s))>0, $$ where $\Gamma=\{\gamma\in C([0,1],{\bf H})\ |\ \gamma(0)=0, \gamma(1)=(w,w)\}$.
On the other hand, let $\vec{v}\in {\bf H}\setminus \{0\}$ and $I_0^\prime(\vec{v})=0.$ By a strong maximum principle and the fact that $\vec{v}=(v_1,v_2)$ satisfies \begin{equation*} \begin{cases} -\Delta v_1+\lambda v_1 - \phi_{(v_2)_+} (v_1)_+=0\ \ \mbox{ in }\ \ \mathbb{R}^3, \\ -\Delta v_2+\lambda v_2 - \phi_{(v_1)_+} (v_2)_+=0 \ \ \mbox{ in }\ \ \mathbb{R}^3,\end{cases} \end{equation*} we see that $\vec{v}$ is positive. Then, by Lemma \ref{betaunq}, $\vec{v}=(V,V)$, where $V\in H_r^1$ satisfies $V> 0$ and $-\Delta V+\lambda V-\phi_V V=0$. Moreover, from \cite[Lemma 9]{Le}, we see that $V$ is a unique radial, positive solution of $-\Delta w+\lambda w-\phi_w w=0$. Thus, we see that $\vec{v}_0=(V,V)$, $$ I_0(\vec{v}_0)=\inf\{I_0(\vec{u})\ |\ \vec{u}\in {\bf H}\setminus \{0\}, I_0^\prime(\vec{u})=0\} $$ and for all $ s\neq 1$, $$ I_0(\vec{v}_0)>I_0(\gamma_0(s))=s^2\int_{\mathbb{R}^3}|\nabla V|^2 +\lambda V^2dx-\frac{s^4}{2}\int_{\mathbb{R}^3} V^2 \phi_{V} dx =\left(s^2-\frac{s^4}{2}\right)\int_{\mathbb{R}^3}|\nabla V|^2 +\lambda V^2dx, $$ where $\gamma_0(s)=(sV,sV)$. These prove $(2)$ and $(3)$. Thus, by \cite[Theorem 1]{1JS}, we can find a critical point $\vec{v}_{\mu_{12}}$ of $I_0 +H_{\mu_{12}} $ such that $\vec{v}_{\mu_{12}}\rightarrow \vec{v}_0=(V,V)$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ as $\mu_{12}\rightarrow \infty$. 
Then from this, \eqref{ibp}, a strong maximum principle and the fact that $\vec{v}_{\mu_{12}}=(v_{1,\mu_{12}},v_{2,\mu_{12}})$ satisfies \begin{equation*} \begin{cases} -\Delta v_1+\lambda v_1+\frac{1}{\mu_{12}}(\mu_{11}\phi_{(v_1)_+}-\mu_{12}\phi_{(v_2)_+})(v_1)_+\\ \ \ =\frac{1}{2\pi\mu_{12}^{\frac{p-1}{2}}}\int_0^{2\pi}\Big((v_1)_+^2+(v_2)_+^2+2(v_1)_+(v_2)_+\cos\theta\Big)^{\frac{p-1}{2}}\Big((v_1)_++1_{\{v_1>0\}}(v_2)_+\cos\theta\Big)d\theta \ \mbox{ in }\ \mathbb{R}^3, \\ -\Delta v_2+\lambda v_2+\frac{1}{\mu_{12}}(\mu_{22}\phi_{(v_2)_+}-\mu_{12}\phi_{(v_1)_+})(v_2)_+\\ \ \ =\frac{1}{2\pi\mu_{12}^{\frac{p-1}{2}}}\int_0^{2\pi}\Big((v_1)_+^2+(v_2)_+^2+2(v_1)_+(v_2)_+\cos\theta\Big)^{\frac{p-1}{2}}\Big((v_2)_++1_{\{v_2>0\}}(v_1)_+\cos\theta\Big)d\theta \ \mbox{ in }\ \mathbb{R}^3, \end{cases} \end{equation*} we see that for large $\mu_{12}$, $\vec{v}_{\mu_{12}}$ is positive. Thus, we deduce that for large $\mu_{12}$, $(\mu_{12})^{-\frac12}\vec{v}_{\mu_{12}}$ is a positive critical point of $I$. $\Box$ \section{Power potential case with $1 < p < 2$ and $\det(\mu_{ij}) > 0$}\label{posidet} In this section, we prove Theorem \ref{th31} and Theorem \ref{th4}, which give the non-existence result for \eqref{gme} when $\mu_{11}$ and $\mu_{22}$ are large, and the existence of two positive solutions to \eqref{gme} when $\det(\mu_{ij}) > 0$ and $\mu_{ij}$ is small, respectively. \subsection{Triviality of $\vec{u}$ when $\mu_{11}$ and $\mu_{22}$ are large} We first prove that any solution $\vec{u}$ to \eqref{gme} is trivial when $1<p\le 2$, $\lambda\ge2$, $\mu_{ij}>0$, $\mu_{11}>4$ and $(\mu_{11}-4)(\mu_{22}-4)>\mu_{12}^2.$ \noindent{\bf Proof of Theorem \ref{th31}.} Suppose that $\vec{u}=(u_1,u_2)\in H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ is a solution of \eqref{gme}.
Then we have \begin{align*} I^\prime(\vec{u})\vec{u} &=\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\qquad - \frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi}\Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2} d\theta dx=0. \end{align*} We note that for $a,b\in \mathbb{R}$, $$ \frac{1}{2\pi}\int_0^{2\pi}\Big(a^2+2a b \cos \theta+b^2\Big)^\frac{p+1}{2} d\theta\le (|a| +|b| )^{p+1} \le 2^p(|a|^{p+1}+|b|^{p+1}) $$ and $$ 4\int_{\mathbb{R}^3}|u|^3dx= 4\int_{\mathbb{R}^3}(-\Delta \phi_{u})|u|dx= 4\int_{\mathbb{R}^3} \nabla \phi_{u} \cdot \nabla |u|dx\le \int_{\mathbb{R}^3} |\nabla u|^2+4|\nabla \phi_{u}|^2dx. $$ Then we see that if $\mu_{11}>4$ and $\Big(\mu_{11}-4\Big)\Big(\mu_{22}-4\Big)>\mu_{12}^2,$ \begin{align*} 0= I^\prime(\vec{u})\vec{u} &\ge \int_{\mathbb{R}^3}\lambda u_1^2+\lambda u_2^2dx+\int_{\mathbb{R}^3} \Big(\mu_{11}-4\Big)u_1^2 \phi_{u_1} + \Big(\mu_{22}-4\Big)u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\quad +4\int_{\mathbb{R}^3}|u_1|^3+|u_2|^3dx-2^p\int_{\mathbb{R}^3}|u_1|^{p+1}+|u_2|^{p+1}dx\\ &\ge \int_{\mathbb{R}^3}\lambda u_1^2+\lambda u_2^2dx +4\int_{\mathbb{R}^3}|u_1|^3+|u_2|^3dx-2^p\int_{\mathbb{R}^3}|u_1|^{p+1}+|u_2|^{p+1}dx. \end{align*} To complete the proof, we claim that if $\lambda\ge2$ and $1<p\le2$, then $h(t)\ge 0$ for $t\ge0$, where $h:\mathbb{R}^+\rightarrow \mathbb{R}$ is given by $h(t)=\lambda +4t-2^p t^{p-1}$. We observe that $$ h^\prime(t)\begin{cases}<0 &\mbox{ if } t< \frac12(p-1)^\frac{1}{2-p},\\ >0 &\mbox{ if } t>\frac12(p-1)^\frac{1}{2-p}, \\ =0 &\mbox{ if } t=\frac12(p-1)^\frac{1}{2-p},\end{cases} $$ $$h(t)\ge 0 \ \ \mbox{ for }t\ge 0 \ \ \mbox{ if }\ \ \lambda\ge 2(p-1)^\frac{p-1}{2-p}-2(p-1)^\frac{1}{2-p}= 2 (p-1)^\frac{p-1}{2-p}(2-p), $$ and for $1<p\le 2$, $ (p-1)^\frac{p-1}{2-p}(2-p)< (p-1)^\frac{p-1}{2-p}\le 1$. 
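As a concrete illustration of the claim (a sanity check added here, not part of the original proof), take $p=\frac32$ in $h(t)=\lambda+4t-2^pt^{p-1}$:

```latex
t_\ast=\frac12(p-1)^{\frac{1}{2-p}}=\frac12\Big(\frac12\Big)^{2}=\frac18,
\qquad
h(t_\ast)=\lambda+4\cdot\frac18-2^{\frac32}\Big(\frac18\Big)^{\frac12}
=\lambda+\frac12-1=\lambda-\frac12\ge\frac32>0
```

for $\lambda\ge2$, in agreement with the general lower bound $h(t_\ast)=\lambda-2(p-1)^{\frac{p-1}{2-p}}(2-p)$.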
From these, we can prove the claim, and thus, $(u_1,u_2)=(0,0)$ if $\lambda \ge 2$ and $1<p\le 2$. $\Box$ \subsection{Construction of two positive solutions}\label{p12} Following the idea in \cite{R}, we study the existence of two positive solutions of \eqref{gme} for $1<p<2$ and $\mu_{11}\mu_{22}-\mu_{12}^2>0$. \begin{prop}\label{psc} Let $\lambda, \mu_{ij} >0$ for $i,j=1,2$. Assume $1<p<2$ and $\mu_{11}\mu_{22}-\mu_{12}^2>0$. Then we have \begin{enumerate}[(i)] \item $\inf_{\vec{u}\in {\bf H}} I(\vec{u})>-\infty$;\\ \item $I$ satisfies (PS) condition. \end{enumerate} \end{prop} \begin{proof} We first claim that $\inf_{\vec{u}\in {\bf H}} I(\vec{u})>-\infty.$ We note that for $a,b\in \mathbb{R}$, $$ \frac{1}{2\pi} \int_0^{2\pi}\Big(a^2+2ab \cos \theta+b^2\Big)^\frac{p+1}{2} d\theta\le (|a|+|b|)^{p+1}\le 2^{p+1}(|a|^{p+1}+|b|^{p+1}), $$ and that, since $\mu_{11}\mu_{22}-\mu_{12}^2>0$, we can choose $0<\tilde{\mu}_{11}<\mu_{11}$, $0<\tilde{\mu}_{22}<\mu_{22}$ and $\tilde{\mu}_{12}>\mu_{12}$ such that $\tilde{\mu}_{11}\tilde{\mu}_{22}-\tilde{\mu}_{12}^2>0$.
Then we have \begin{align*} I(\vec{u})&\ge \frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx\\ &\quad +\frac14\int_{\mathbb{R}^3}(\mu_{11}-\tilde{\mu}_{11})u_1^2 \phi_{u_1} + (\mu_{22}-\tilde{\mu}_{22})u_2^2 \phi_{u_2}+ 2(\tilde{\mu}_{12}-\mu_{12})u_1^2 \phi_{u_2}dx\\ &\quad +\frac14\int_{\mathbb{R}^3} \tilde{\mu}_{11}u_1^2 \phi_{u_1} + \tilde{\mu}_{22}u_2^2 \phi_{u_2}-2\tilde{\mu}_{12} u_1^2 \phi_{u_2} dx - \frac{2^{p+1}}{p+1} \int_{\mathbb{R}^3} |u_1|^{p+1}+|u_2|^{p+1} dx\\ &\ge \frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\frac{\hat{\mu}}{4}\int_{\mathbb{R}^3} u_1^2 \phi_{u_1} + u_2^2 \phi_{u_2}+ u_1^2 \phi_{u_2} dx \\ &\quad - \frac{2^{p+1}}{p+1} \int_{\mathbb{R}^3} |u_1|^{p+1}+|u_2|^{p+1} dx, \end{align*} where $\hat{\mu}\in\Big(0,\min\{\mu_{11}-\tilde{\mu}_{11}, \mu_{22}-\tilde{\mu}_{22},\tilde{\mu}_{12}-\mu_{12}\}\Big).$ From this and the fact that $$ \sqrt{\frac{ \hat{\mu}}{8}}\int_{\mathbb{R}^3}|u|^3dx=\sqrt{\frac{\hat{\mu}}{8}}\int_{\mathbb{R}^3}(-\Delta \phi_{u})|u|dx=\sqrt{\frac{\hat{\mu}}{8}}\int_{\mathbb{R}^3} \nabla \phi_{u} \cdot \nabla |u|dx\le \int_{\mathbb{R}^3}\frac14 |\nabla u|^2+\frac{\hat{\mu}}{8}|\nabla \phi_{u}|^2dx, $$ we obtain \begin{equation}\label{xn1} \begin{aligned} I(\vec{u})&\ge \frac14\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2dx+ \frac{\lambda}{2}\int_{\mathbb{R}^3} u_1^2+ u_2^2dx+\frac{ \hat{\mu}}{8}\int_{\mathbb{R}^3}u_1^2 \phi_{u_1} +u_2^2 \phi_{u_2}+ 2u_1^2 \phi_{u_2}dx \\ &\quad + \sqrt{\frac{ \hat{\mu}}{8}}\int_{\mathbb{R}^3} |u_1|^3+ |u_2|^3dx- \frac{2^{p+1}}{p+1} \int_{\mathbb{R}^3} |u_1|^{p+1}+|u_2|^{p+1} dx. \end{aligned} \end{equation} Define $$ h :\mathbb{R}^+\times \mathbb{R}^+\rightarrow \mathbb{R},\ \ h(s,t)=\frac{\lambda}{4}(s^2+ t^2)+\sqrt{\frac{ \hat{\mu}}{8}}(s^3+ t^3)-\frac{2^{p+1}}{p+1}(s^{p+1}+t^{p+1}), $$ where $p\in (1,2)$. 
We claim that \begin{equation}\label{xn2} h \mbox{ is positive for } \{(s,t)\in \mathbb{R}^+\times \mathbb{R}^+\ |\ s^2+t^2<r_0 \mbox{ or } s^2+t^2>R_0\}, \end{equation} where $r_0>0$ is small enough and $R_0>0$ is large enough. Indeed, note that if $(s,t)\in \{(s,t)\in \mathbb{R}^+\times \mathbb{R}^+\ |\ s^2+t^2<r_0\}$, $ \frac{s^{p+1}+t^{p+1}}{s^2+t^2}<r_0^{\frac{p-1}{2}} $ and if $(s,t)\in \{(s,t)\in \mathbb{R}^+\times \mathbb{R}^+ \ |\ s<t, \ s^2+t^2>R_0\},$ $ \frac{s^{p+1}+t^{p+1}}{s^3+t^3}\le \frac{2t^{p+1}}{t^3}<2\Big(\frac{2}{R_0}\Big)^\frac{2-p}{2}. $ These imply \eqref{xn2}. Define $m=\inf_{s>0,t>0}h(s,t)$. If $m\ge0$, we are done, because $I(\vec{u})\ge0$, and thus we assume $m<0$. Then, by \eqref{xn1} and \eqref{xn2}, we have $$\{(s,t)\in \mathbb{R}^+\times \mathbb{R}^+\ | \ h(s,t)<0\}\subset \{(s,t)\in \mathbb{R}^+\times \mathbb{R}^+\ |\ r_0<s^2+t^2<R_0\}$$ and \begin{equation}\label{bddbelow} \begin{aligned} I(\vec{u})&\ge \frac14\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2dx+ \frac{\lambda}{4}\int_{\mathbb{R}^3} u_1^2+ u_2^2dx+\frac{\hat{\mu}}{8}\int_{\mathbb{R}^3}u_1^2 \phi_{u_1} +u_2^2 \phi_{u_2}+ 2u_1^2 \phi_{u_2}dx \\ &\quad + \int_{\mathbb{R}^3}h(|u_1|,|u_2|) dx\\ &\ge \frac14\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2dx+ \frac{\lambda}{4}\int_{\mathbb{R}^3} u_1^2+ u_2^2dx+\frac{\hat{\mu}}{8}\int_{\mathbb{R}^3}u_1^2 \phi_{u_1} +u_2^2 \phi_{u_2}+ 2u_1^2 \phi_{u_2}dx \\ &\quad + m \Big|\{x\in \mathbb{R}^3\ |\ h(|u_1(x)|,|u_2(x)|)<0\} \Big|\\ &\ge \frac14\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2dx+ \frac{\lambda}{4}\int_{\mathbb{R}^3} u_1^2+ u_2^2dx+\frac{\hat{\mu}}{8}\int_{\mathbb{R}^3}u_1^2 \phi_{u_1} +u_2^2 \phi_{u_2}+ 2u_1^2 \phi_{u_2}dx +m|A|, \end{aligned} \end{equation} where $A=\{x\in \mathbb{R}^3\ |\ r_0<u_1^2(x)+u_2^2(x) <R_0\}.$ Note that $A$ is spherically symmetric. Suppose to the contrary that there exists $\{\vec{u}_n\}\subset {\bf H}$ such that $I(\vec{u}_n)\rightarrow -\infty$.
Clearly, $\|u_{1,n}\|_{\lambda}+\|u_{2,n}\|_{\lambda}\rightarrow \infty$ as $n\rightarrow \infty$. For each $\vec{u}_n=(u_{1,n},u_{2,n})$, define $A_n=\{x\in \mathbb{R}^3\ |\ r_0<u_{1,n}^2(x)+u_{2,n}^2(x) <R_0\}$ and $\rho_n=\sup\{|x|\ |\ x\in A_n\}$. By \eqref{bddbelow} and the fact that $I(\vec{u}_n)<0$, we have $$ |m||A_n|\ge \frac14( \|u_{1,n}\|_{\lambda}^2 +\|u_{2,n}\|_{\lambda}^2), $$ which implies that $|A_n|\rightarrow \infty$ as $n\rightarrow \infty$. Since $$|u(x)|\le C|x|^{-1}\|u\|_{H^1} \mbox{ for } u\in H_r^1,$$ where $C>0$ (see \cite{S}), we see that for $x\in \mathbb{R}^3$ with $|x|=\rho_n$, $u_{1,n}^2(x)+u_{2,n}^2(x)=r_0$ and $$ r_0=u_{1,n}^2(x)+u_{2,n}^2(x)\le C^2\rho_n^{-2}(\|u_{1,n}\|_{H^1}^2+\|u_{2,n}\|_{H^1}^2)\le C_1\rho_n^{-2}|m||A_n|, $$ which implies that $\rho_n\le C_2|A_n|^\frac12,$ where $C_1, C_2>0$. On the other hand, by \eqref{bddbelow} and the fact that $I(\vec{u}_n)<0$, we have $$ \frac{\hat{\mu}}{8}\int_{\mathbb{R}^3} u_{1,n}^2 \phi_{u_{1,n}} +u_{2,n}^2 \phi_{u_{2,n}}+ 2u_{1,n}^2 \phi_{u_{2,n}}dx \le |m||A_n|. $$ Then from this and the facts that $ |x-y|\le 2\rho_n \mbox{ for } x,y \in A_n $ and \begin{align*} &\int_{\mathbb{R}^3} u_{1,n}^2 \phi_{u_{1,n}} +u_{2,n}^2 \phi_{u_{2,n}}+ 2u_{1,n}^2 \phi_{u_{2,n}}dx\\ &= \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{u_{1,n}^2(x)u_{1,n}^2(y)+u_{2,n}^2(x)u_{2,n}^2(y)+2u_{1,n}^2(x)u_{2,n}^2(y)}{|x-y|} dxdy\\ &=\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\Big(u_{1,n}^2(x)+u_{2,n}^2(x)\Big)\Big(u_{1,n}^2(y)+u_{2,n}^2(y)\Big)}{|x-y|} dxdy, \end{align*} we have \begin{align*} \frac{8}{\hat{\mu}}|m||A_n|&\ge \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\Big(u_{1,n}^2(x)+u_{2,n}^2(x)\Big)\Big(u_{1,n}^2(y)+u_{2,n}^2(y)\Big)}{|x-y|} dxdy\\ &\ge \int_{A_n}\int_{A_n} \frac{\Big(u_{1,n}^2(x)+u_{2,n}^2(x)\Big)\Big(u_{1,n}^2(y)+u_{2,n}^2(y)\Big)}{|x-y|} dxdy\ge r_0^2\frac{|A_n|^2}{2\rho_n}.
\end{align*} This implies that $|A_n|\le C_3\rho_n$, where $C_3>0$, which is a contradiction to the facts that $|A_n|\rightarrow \infty$ as $n\rightarrow \infty$ and $\rho_n\le C_2|A_n|^\frac12.$ Thus, we have $\inf_{\vec{u}\in {\bf H}} I(\vec{u})>-\infty.$ Next, we prove that $I$ satisfies (PS) condition. Let $\vec{u}_n=(u_{1,n},u_{2,n})$ be a sequence in ${\bf H}$ such that $I^\prime(\vec{u}_n)\rightarrow 0$ as $n\rightarrow \infty$. Then, for large $n$, we have\begin{align*} \|u_{1,n}\|_{\lambda}+\|u_{2,n}\|_{\lambda}&\ge I^\prime(\vec{u}_n)\vec{u}_n\\ &=\int_{\mathbb{R}^3}|\nabla u_{1,n}|^2+|\nabla u_{2,n}|^2+\lambda u_{1,n}^2+\lambda u_{2,n}^2dx+\int_{\mathbb{R}^3} \mu_{11}u_{1,n}^2 \phi_{u_{1,n}} + \mu_{22}u_{2,n}^2 \phi_{u_{2,n}}-2\mu_{12} u_{1,n}^2 \phi_{u_{2,n}} dx\\ &\qquad - \frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi}\Big(u_{1,n}^2+2u_{1,n} u_{2,n} \cos \theta+u_{2,n}^2\Big)^\frac{p+1}{2} d\theta dx. \end{align*} Following the first part of the proof (cf. \eqref{xn1}), we deduce that for some $\bar{\mu}>0$, \begin{equation}\label{engb1} \begin{aligned} \|u_{1,n}\|_{\lambda}+\|u_{2,n}\|_{\lambda}&\ge \frac12\int_{\mathbb{R}^3}|\nabla u_{1,n}|^2+|\nabla u_{2,n}|^2+\lambda u_{1,n}^2+\lambda u_{2,n}^2dx\\ &\quad +\frac{\bar{\mu}}{2}\int_{\mathbb{R}^3} u_{1,n}^2 \phi_{u_{1,n}} + u_{2,n}^2 \phi_{u_{2,n}}+ 2u_{1,n}^2 \phi_{u_{2,n}} dx +\int_{\mathbb{R}^3}g(|u_{1,n}|,|u_{2,n}|)dx, \end{aligned} \end{equation} where $g(s,t)=\frac12\lambda (s^2+ t^2)+\sqrt{\bar{\mu}}(s^3+t^3)-2^{p+1}(s^{p+1}+t^{p+1})$. If $\|u_{1,n}\|_{\lambda}+\|u_{2,n}\|_{\lambda}$ is unbounded in $n$, then by \eqref{engb1}, for large $n$, we have \begin{align*} &\frac13\int_{\mathbb{R}^3}|\nabla u_{1,n}|^2+|\nabla u_{2,n}|^2+\lambda u_{1,n}^2+\lambda u_{2,n}^2dx+\frac{\bar{\mu}}{2}\int_{\mathbb{R}^3} u_{1,n}^2 \phi_{u_{1,n}} + u_{2,n}^2 \phi_{u_{2,n}}+ 2u_{1,n}^2 \phi_{u_{2,n}} dx \\ &\quad +\int_{\mathbb{R}^3}g(|u_{1,n}|,|u_{2,n}|)dx<0. \end{align*} Then, by the same arguments as in \eqref{bddbelow}, we deduce a contradiction.
Since $\|u_{1,n}\|_{\lambda}+\|u_{2,n}\|_{\lambda}$ is bounded, we may assume that, up to a subsequence, $\vec{u}_n\rightharpoonup \vec{u}=(u_1,u_2)$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ and $\vec{u}_n\rightarrow \vec{u}$ in $L^q(\mathbb{R}^3)\times L^q(\mathbb{R}^3)$, where $2<q<6$. From this and Lemma \ref{es1}, we see that \begin{align*} &\lim_{n\rightarrow \infty} I^\prime(\vec{u}_n)\vec{u}\\ &= \lim_{n\rightarrow \infty}\bigg[\int_{\mathbb{R}^3} \nabla u_{1,n}\cdot \nabla u_1 + \nabla u_{2,n}\cdot \nabla u_2+\lambda u_{1,n} u_1+\lambda u_{2,n}u_2dx\\ &\quad + \int_{\mathbb{R}^3}\mu_{11} u_{1,n} \phi_{u_{1,n}} u_1 + \mu_{22} u_{2,n} \phi_{u_{2,n}} u_2 - \mu_{12} u_{1,n} \phi_{u_{2,n}} u_1 -\mu_{12} u_{2,n} \phi_{u_{1,n}} u_2 dx\\ & - \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi} \Big((u_{1,n}+u_{2,n} \cos \theta ) u_1+ (u_{2,n}+u_{1,n} \cos \theta )u_2\Big)\\ &\quad \times \Big(u_{1,n}^2+2u_{1,n} u_{2,n} \cos \theta+u_{2,n}^2\Big)^\frac{p-1}{2} d\theta dx\bigg]\\ &=\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx+\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2 \phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\quad - \frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi}\Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2} d\theta dx=0 \end{align*} and \begin{align*} &\lim_{n\rightarrow \infty}I^\prime(\vec{u}_n)\vec{u}_n\\ &=\lim_{n\rightarrow \infty}\bigg[\int_{\mathbb{R}^3}|\nabla u_{1,n}|^2+|\nabla u_{2,n}|^2+\lambda u_{1,n}^2+\lambda u_{2,n}^2dx\\ &\quad +\int_{\mathbb{R}^3} \mu_{11}u_{1,n}^2 \phi_{u_{1,n}} + \mu_{22}u_{2,n}^2 \phi_{u_{2,n}}-2\mu_{12} u_{1,n}^2 \phi_{u_{2,n}} dx\\ &\quad - \frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi}\Big(u_{1,n}^2+2u_{1,n} u_{2,n} \cos \theta+u_{2,n}^2\Big)^\frac{p+1}{2} d\theta dx\bigg]\\ &=\lim_{n\rightarrow \infty}\bigg[\int_{\mathbb{R}^3}|\nabla u_{1,n}|^2+|\nabla u_{2,n}|^2+\lambda u_{1,n}^2+\lambda u_{2,n}^2dx\bigg]+\int_{\mathbb{R}^3} \mu_{11}u_1^2 \phi_{u_1} + \mu_{22}u_2^2
\phi_{u_2}-2\mu_{12} u_1^2 \phi_{u_2} dx\\ &\quad - \frac{1}{2\pi} \int_{\mathbb{R}^3}\int_0^{2\pi}\Big(u_1^2+2u_1 u_2 \cos \theta+u_2^2\Big)^\frac{p+1}{2} d\theta dx=0. \end{align*} Thus, we deduce that $\vec{u}_n\rightarrow \vec{u}=(u_1,u_2)$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$. \end{proof} \begin{prop}\label{existwo} Let $1<p<2$ and $\lambda, \mu_{ij} >0$ for $i,j=1,2$. If $\mu_{11}\mu_{22}-\mu_{12}^2>0$ and $\mu_{ij}$ is small enough, where $i,j=1,2$, then there are at least two different non-negative radial solutions $\vec{u}_{1,\mu_{ij}}$, $\vec{u}_{2,\mu_{ij}}$ of \eqref{gme}, where $\vec{u}_{1,\mu_{ij}}$ is a non-negative minimizer of \eqref{gme} with negative energy, and $\vec{u}_{2,\mu_{ij}}$ is a positive solution of \eqref{gme} with positive energy. \end{prop} \begin{proof} We observe that if $\mu_{ij}=0$ for $i,j=1,2$, $I$ is not bounded below, and thus there exists $\mu_0>0$ such that if $0<\mu_{ij}<\mu_0$, then $\inf_{\vec{u}\in {\bf H}} I(\vec{u})<0$. By Proposition \ref{psc}, $I$ is bounded below and satisfies (PS) condition. Then, by the Ekeland variational principle \cite{E}, the infimum is achieved, that is, there exists $\vec{u}_{1,\mu_{ij}}\in {\bf H}\setminus \{(0,0)\}$ such that $\inf_{\vec{u}\in {\bf H}} I(\vec{u})= I(\vec{u}_{1,\mu_{ij}})<0$ for $0<\mu_{ij}<\mu_0$. Moreover, by \eqref{posti2}, we may assume that the minimizer $\vec{u}_{1,\mu_{ij}}$ is non-negative. 
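The unboundedness claim for $\mu_{ij}=0$ can be seen by a one-line scaling check (added here; it is not spelled out in the original text): for fixed $\vec u=(u_1,u_2)$ with, say, $u_1=u_2>0$ non-trivial,

```latex
I(t\vec u)\big|_{\mu_{ij}=0}
=t^2\cdot\frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2\,dx
-t^{p+1}\cdot\frac{1}{p+1}\frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi}
\Big(u_1^2+2u_1u_2\cos\theta+u_2^2\Big)^{\frac{p+1}{2}}d\theta\,dx
\longrightarrow-\infty
```

as $t\rightarrow\infty$, since $p+1>2$ and the second integral is positive for this choice of $\vec u$.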
Next, in order to prove the existence of a second positive solution to \eqref{gme}, we consider a limit problem \begin{equation}\label{limg} \begin{cases} -\Delta u_1+\lambda u_1 =\frac{1}{2\pi} \int_0^{2\pi}\Big((u_1)_+^2+(u_2)_+^2+2(u_1)_+(u_2)_+\cos\theta\Big)^{\frac{p-1}{2}} ((u_1)_++1_{\{u_1>0\}}(u_2)_+\cos\theta )d\theta, \\ -\Delta u_2+\lambda u_2 =\frac{1}{2\pi}\int_0^{2\pi}\Big((u_1)_+^2+(u_2)_+^2+2(u_1)_+(u_2)_+\cos\theta\Big)^{\frac{p-1}{2}} ((u_2)_++1_{\{u_2>0\}}(u_1)_+\cos\theta )d\theta \end{cases} \end{equation} and the corresponding energy functional $$ \begin{aligned} I_{+,0}(\vec{u})=&\frac12\int_{\mathbb{R}^3}|\nabla u_1|^2+|\nabla u_2|^2+\lambda u_1^2+\lambda u_2^2dx\\ &\quad - \frac{1}{p+1} \frac{1}{2\pi}\int_{\mathbb{R}^3}\int_0^{2\pi} \Big((u_1)_+^2+2(u_1)_+ (u_2)_+ \cos \theta+(u_2)_+^2\Big)^\frac{p+1}{2}d\theta dx. \end{aligned} $$ Then by \eqref{ibppa} and a maximum principle, $I_{+,0}$ is $C^1$ and any critical point of $I_{+,0}$ is a non-negative solution of \eqref{limg}. It is standard to show that \begin{equation}\label{cond1} I_{+,0} \mbox{ satisfies (PS) condition } \end{equation} because of the arguments in Proposition \ref{psc} (ii) and the fact that $I_{+,0}(\vec{u})-\frac{1}{p+1}I_{+,0}'(\vec{u})\vec{u}=(\frac12-\frac{1}{p+1})\|\vec{u}\|_{\bf{H}}^2$. Moreover, we see that there exist $c, r>0$ such that \begin{equation}\label{cond22} \mbox{if } \|\vec{u}\|_{\bf{H}}=r, \mbox{ then }I_{+,0}(\vec{u})\ge c, \mbox{ and there exists } \vec{w}\in {\bf H} \mbox{ such that } \|\vec{w}\|_{\bf{H}}>r \mbox{ and } I_{+,0}(\vec{w})<0. \end{equation} Then by the mountain pass theorem of Ambrosetti and Rabinowitz \cite{AR}, we can prove that there exists a non-negative mountain pass solution $\vec{v}$ of \eqref{limg} whose Morse index is less than or equal to $1$.
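The identity invoked above for \eqref{cond1} is a direct consequence of homogeneity. Writing $I_{+,0}(\vec u)=\frac12\|\vec u\|_{\bf H}^2-\frac{1}{p+1}F(\vec u)$, where $F$ denotes the double integral term (notation introduced only for this check) and is positively homogeneous of degree $p+1$, Euler's identity $F'(\vec u)\vec u=(p+1)F(\vec u)$ gives

```latex
I_{+,0}^\prime(\vec u)\vec u=\|\vec u\|_{\bf H}^2-F(\vec u),
\qquad
I_{+,0}(\vec u)-\frac{1}{p+1}I_{+,0}^\prime(\vec u)\vec u
=\Big(\frac12-\frac{1}{p+1}\Big)\|\vec u\|_{\bf H}^2,
```

which yields the boundedness of Palais-Smale sequences and hence \eqref{cond1}.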
Note that by Lemma \ref{semimorse}, the Morse indices of the semi-trivial solutions $(w,0)$ and $(0,w)$ for \eqref{limg} are greater than or equal to $2$, where $w$ is the unique positive solution of \begin{equation}\label{schr1} -\Delta w+\lambda w=w^p. \end{equation} Thus, a non-negative mountain pass solution $\vec{v}$ is a positive solution. Then by Proposition \ref{rig1}, we see that $\vec{v}=(U,U),$ where $U$ is the unique positive solution of \begin{equation}\label{lseq} -\Delta U+\lambda U=\frac{c_p}{2} U^p \end{equation} and $c_p$ is given in \eqref{cp}. Moreover, \begin{equation}\label{cond12} I_{+,0}(U,U)=(\frac12-\frac{1}{p+1})\|(U,U)\|_{\bf{H}}^2>0. \end{equation} We employ the result in \cite[Theorem 1]{1JS} to find a critical point $\vec{u}_{2,\mu_{ij}}$ of $I_+$ such that $\vec{u}_{2,\mu_{ij}}$ converges to $(U,U)$ as $\mu_{ij}\rightarrow 0$. We write \begin{align*} I_+(\vec{u})&=I_{+,0}(\vec{u})+H_{\mu_{11},\mu_{22},\mu_{12}}(\vec{u}), \end{align*} where $I_+$ is given in \eqref{funpl} and $$ H_{\mu_{11},\mu_{22},\mu_{12}}(\vec{u})=\frac14\int_{\mathbb{R}^3} \mu_{11}(u_1)_+^2 \phi_{(u_1)_+} + \mu_{22}(u_2)_+^2 \phi_{(u_2)_+}-2\mu_{12} (u_1)_+^2 \phi_{(u_2)_+} dx. $$ Observe that since $J_{+,0}'(\vec{u})\vec{u}<-\rho$ if $\vec{u}\in {\bf{H}}\setminus \{(0,0)\}$ and $I_{+,0}'(\vec{u})\vec{u}=0$, where $J_{+,0}(\vec{u})= I_{+,0}'(\vec{u})\vec{u}$ and $\rho$ is a positive constant, we get \begin{align*} \inf\{ I_{+,0}(\vec{u}) \ |\ \vec{u}\in {\bf{H}}\setminus \{(0,0)\}, I_{+,0}'(\vec{u})\vec{u}=0\}&=\inf\{ I_{+,0}(\vec{u}) \ |\ \vec{u}\in {\bf{H}}\setminus \{(0,0)\}, I_{+,0}'(\vec{u})=0\}\\ &=\min\{ I_{+,0}(U,U), I_{+,0}(w,0), I_{+,0}(0,w)\}, \end{align*} where $U$ is a positive solution of \eqref{lseq} and $w$ is a positive solution of \eqref{schr1}.
Then, by the fact that the Morse indices of the semi-trivial solutions $(w,0)$ and $(0,w)$ for \eqref{limg} are greater than or equal to $2$, we see that \begin{equation}\label{cond3} I_{+,0}(U,U)=\inf\{ I_{+,0}(\vec{u}) \ |\ \vec{u}\in {\bf{H}}\setminus \{(0,0)\}, I_{+,0}'(\vec{u})=0\}. \end{equation} Also, we see that for $s\neq1$, \begin{equation}\label{cond4} I_{+,0}(U,U)> I_{+,0}(\gamma(s))=\left(s^2-\frac{2}{p+1}s^{p+1}\right)\int_{\mathbb{R}^3}|\nabla U|^2+\lambda U^2dx, \end{equation} where $\gamma(s)=(sU,sU)$, and by the compact embeddings $H^1_r(\mathbb{R}^3)\subset L^q(\mathbb{R}^3)$, where $2<q<6$, $H_{\mu_{11},\mu_{22},\mu_{12}}:{\bf{H}}\rightarrow \mathbb{R}$ is a $C^1$ functional such that for any $M>0$, \begin{equation}\label{cond5} \begin{aligned} &\lim_{\mu_{11},\mu_{22},\mu_{12}\rightarrow 0}\sup_{\|\vec{u}\|_{{\bf H}}\le M }| H_{\mu_{11},\mu_{22},\mu_{12}}(\vec{u})|=\lim_{\mu_{11},\mu_{22},\mu_{12}\rightarrow 0}\sup_{\|\vec{u}\|_{{\bf H}}\le M }\|H_{\mu_{11},\mu_{22},\mu_{12}}'(\vec{u})\|_{{\bf{H}}^{-1}}=0,\\ &H_{\mu_{11},\mu_{22},\mu_{12}}:{\bf{H}}\rightarrow \mathbb{R} \mbox{ and } H_{\mu_{11},\mu_{22},\mu_{12}}':{\bf{H}}\rightarrow {\bf{H}}^{-1} \mbox{ are compact}. \end{aligned} \end{equation} Thus, by \eqref{cond1}, \eqref{cond22} and \eqref{cond3}-\eqref{cond5}, applying the result in \cite[Theorem 1]{1JS}, there exists a critical point $\vec{u}_{2,\mu_{ij}}$ of $I_+$ such that $\vec{u}_{2,\mu_{ij}}$ converges to $(U,U)$ in $H^1(\mathbb{R}^3)\times H^1(\mathbb{R}^3)$ as $\mu_{ij}\rightarrow 0$. Then from this, \eqref{ibp} and a strong maximum principle, we obtain that $\vec{u}_{2,\mu_{ij}}$ is positive.
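For completeness, we record how \eqref{cond4} can be obtained (a verification sketch; it assumes $c_p=\frac{1}{2\pi}\int_0^{2\pi}(2+2\cos\theta)^{\frac{p+1}{2}}d\theta$, which is consistent with the role $c_p$ plays in \eqref{lseq} but is stated here only as an assumption). For $\gamma(s)=(sU,sU)$,

```latex
I_{+,0}(sU,sU)
= s^2\int_{\mathbb{R}^3}|\nabla U|^2+\lambda U^2\,dx
-\frac{s^{p+1}}{p+1}\,c_p\int_{\mathbb{R}^3}U^{p+1}\,dx
=\Big(s^2-\frac{2}{p+1}s^{p+1}\Big)\int_{\mathbb{R}^3}|\nabla U|^2+\lambda U^2\,dx,
```

where the last equality uses $\int_{\mathbb{R}^3}|\nabla U|^2+\lambda U^2\,dx=\frac{c_p}{2}\int_{\mathbb{R}^3}U^{p+1}\,dx$, obtained by testing \eqref{lseq} with $U$; since $s\mapsto s^2-\frac{2}{p+1}s^{p+1}$ attains its maximum only at $s=1$, the strict inequality in \eqref{cond4} follows.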
Moreover, the convergence $\vec{u}_{2,\mu_{ij}}\rightarrow (U,U)$ and \eqref{cond12} imply that for small $\mu_{ij}>0$, $I_+(\vec{u}_{2,\mu_{ij}})>0.$ \end{proof} \subsection{Proof of Theorem \ref{th4}} In Proposition \ref{existwo}, we have shown the existence of a non-negative minimizer $\vec{u}_{1,\mu_{ij}}$ to \eqref{gme} with negative energy, and a positive solution $\vec{u}_{2,\mu_{ij}}$ to \eqref{gme} with positive energy if $\mu_{11}\mu_{22}-\mu_{12}^2>0$ and $\mu_{ij}>0$ is small, where $i,j=1,2$. We claim that a non-negative minimizer $\vec{u}_{1,\mu_{ij}}$ of \eqref{gme} is positive. Suppose to the contrary that a minimizer $\vec{u}_{1,\mu_{ij}}$ of \eqref{gme} is semi-trivial, that is, $\vec{u}_{1,\mu_{ij}}=(v_{1,\mu_{ij}},0).$ Then, by Proposition \ref{mmors}, we get a contradiction to the fact that $\vec{u}_{1,\mu_{ij}}$ is a minimizer of \eqref{gme}. Thus, we deduce that for small $\mu_{ij}>0$ with $\det(\mu_{ij}) > 0$, a minimizer $\vec{u}_{1,\mu_{ij}}$ of \eqref{gme} is positive. Finally, we further assume $\mu_{11}=\mu_{22}$. For small $\mu_{ij}>0$ with $\mu_{11}>\mu_{12}$, let $\vec{u}_{1,\mu_{ij}}$ and $\vec{u}_{2,\mu_{ij}}$ be a positive minimizer of \eqref{gme} with negative energy and a positive solution of \eqref{gme} with positive energy, respectively. Then, by Proposition \ref{rig1}, we deduce that $\vec{u}_{1,\mu_{ij}}$ and $\vec{u}_{2,\mu_{ij}}$ have the form $\vec{u}_{1,\mu_{ij}}=(U_{1,\mu_{ij}},U_{1,\mu_{ij}})$ and $\vec{u}_{2,\mu_{ij}}=(U_{2,\mu_{ij}},U_{2,\mu_{ij}})$, where $U_{1,\mu_{ij}}$ and $U_{2,\mu_{ij}}$ are solutions of \begin{equation}\label{sie1} -\Delta u+ \lambda u + ( {\mu_{11} - \mu_{12}}) \phi_{u} u= \frac{c_p}{2}|u|^{p-1} u. \end{equation} We claim that for small $\mu_{ij}>0$ with $\mu_{11}>\mu_{12}$, $U_{1,\mu_{ij}}$ is a positive minimizer of \eqref{sie1} with negative energy and $U_{2,\mu_{ij}}$ is a positive solution of \eqref{sie1} with positive energy.
Since $I(u,u)=2\tilde{I}(u)$, where $$ \tilde{I}(u)=\int_{\mathbb{R}^3}\frac12\Big(|\nabla u|^2+ \lambda u^2\Big) +\frac{\mu_{11}-\mu_{12}}{4}\phi_u u^2 -\frac{c_p}{2(p+1)} |u|^{p+1}dx, $$ we deduce that $U_{1,\mu_{ij}}$ is a minimizer of \eqref{sie1} with negative energy. On the other hand, by the construction in subsection \ref{p12}, we see that $U_{2,\mu_{ij}}\rightarrow U \not\equiv 0$ in $H^1(\mathbb{R}^3)$ as $\mu_{ij}\rightarrow 0$, where $i,j=1,2$, $U>0$ and $U$ satisfies \eqref{lseq}. Then we see that as $\mu_{ij}\rightarrow 0$, \[ \tilde{I}(U_{2,\mu_{ij}})\rightarrow \int_{\mathbb{R}^3}\frac12\Big(|\nabla U|^2+ \lambda U^2\Big) -\frac{c_p}{2(p+1)} |U|^{p+1}dx =\left(\frac12-\frac{1}{p+1}\right)\int_{\mathbb{R}^3}|\nabla U|^2+ \lambda U^2dx>0. \] Thus, for small $\mu_{ij}>0$, $\tilde{I}(U_{2,\mu_{ij}})>0$, that is, $U_{2,\mu_{ij}}$ is a positive solution of \eqref{sie1} with positive energy, which proves the claim. $\Box$
\section{Introduction} In the last decade, distributed adaptive signal processing has emerged as a vital topic because of the vast applications in need of decentralized real-time data processing over networked systems. For multi-agent networks, distributed adaptive algorithms only rely on local information exchange, i.e., information exchange among neighbor nodes, to estimate the unknowns. This trait endows distributed adaptive algorithms with low communication overhead, robustness to node/link failures and scalability to large networks. In the literature, the centralized least mean squares (LMS) and recursive least squares (RLS) \cite{Haykin:1996:AFT:230061} have been extended to their decentralized counterparts \cite{sayed2014adaptive,jiang2013distributed} to deal with estimation problems over networks. Furthermore, many natural signals are inherently sparse with most entries equal to zero such as the image signals and audio signals in \cite{duarte2008single,griffin2011single,donoho2006compressed,candes2006robust}. Sparsity of signals are particularly conspicuous in the era of big data: for many applications, redundant input features (e.g., a person's salary, education, height, gender, etc.) are collected to be fed into a learning system to predict a desired output (e.g., whether a person will resign his/her job). Most input features are unrelated to the output so that the weight vector between the input vector and the output is highly sparse. As such, several sparse adaptive algorithms have been proposed such as the sparse LMS in \cite{su2012performance,jin2010stochastic}, the sparse RLS in \cite{babadi2010sparls} and the distributed sparse RLS in \cite{liu2014distributed}. Most of the decentralized sparse adaptive algorithms are focused on the single task estimation problem, in which all nodes receive data associated with the same unknown vector and collaborate to estimate it. 
In contrast, many applications are inherently multitask-oriented, i.e., each node has its own unknown vector different from others'. For instance, in a sensor network, each node may want to estimate an unknown vector related to its specific location, so different nodes have different unknown vectors to be estimated. In fact, several decentralized multitask adaptive algorithms have been proposed in the literature, including the multitask diffusion LMS in \cite{chen2014multitask}, its asynchronous version in \cite{nassif2014multitask} and its application to the study of tremor in Parkinson's disease \cite{monajemi2016informed}. In particular, a sparse multitask LMS algorithm is proposed in \cite{nassif2015proximal} to promote sparsity of the estimated multitask vectors. To the best of our knowledge, all existing distributed adaptive algorithms for multitask estimation problems are based on various decentralized versions of the LMS. RLS based sparse multitask estimation has received little attention. It is well known that the RLS converges much faster than the LMS. Hence, the RLS is more suitable than the LMS for applications that require fast and accurate tracking of the unknowns, especially when the devices are capable of handling computations of moderately high complexity (which is increasingly the case as the computational capability of devices grows drastically). This motivates us to study the decentralized sparse multitask RLS problem over networks. The main contributions of this paper are summarized as follows. \begin{itemize} \item A global networked RLS minimization problem is formulated. In accordance with the multitask nature of the estimation problem, each node has its own weight vector. Since neighbor nodes often share analogous properties and thus similar weight vectors, we add a regularization term that penalizes deviations between neighbors' weight vectors.
To enforce sparsity of the weight vectors, we further introduce $l_1$ regularization. \item A decentralized online alternating direction method of multipliers (ADMM) algorithm is proposed for the formulated sparse multitask RLS problem. The proposed ADMM algorithm is simplified so that each iteration consists of simple closed-form computations and each node only needs to store and update one $M\times M$ matrix and six $M$ dimensional vectors, where $M$ is the dimension of the weight vectors. We show that the gaps between the outputs of the proposed ADMM algorithm and the optimal points of the formulated RLS problems converge to zero. \item To overcome the relatively high computational cost of the proposed ADMM algorithm, we further present a decentralized online subgradient method, which enjoys lower computational complexity. We theoretically analyze its convergence behavior and show that the tracking error of the weight vectors is upper bounded by a constant related to the network topology and the algorithm parameters. \item Numerical simulations are conducted to corroborate the effectiveness of the proposed algorithms. Their advantages over the single task sparse RLS algorithm in \cite{liu2014distributed} are highlighted. We also observe an accuracy-complexity tradeoff between the two proposed algorithms. \end{itemize} The remainder of this paper is organized as follows. In Section \Rmnum{2}, the sparse multitask RLS problem is formally formulated. In Section \Rmnum{3}, we propose and simplify a decentralized online ADMM algorithm for the formulated RLS problem. In Section \Rmnum{4}, we propose a decentralized online subgradient method for the formulated problem in order to reduce computational complexity. In Section \Rmnum{5}, numerical simulations are conducted. In Section \Rmnum{6}, we conclude this work. \section{The Statement of the Problem} We consider a network of $N$ nodes and some edges between these nodes.
We assume that the network is a simple graph, i.e., the network is undirected with no self-loop and there is at most one edge between any pair of nodes. Denote the set of neighbors of node $n$ (those who are linked with node $n$ by an edge) as $\Omega_n$. The network can be either connected or disconnected (there does not necessarily exist a path connecting every pair of nodes). Time is divided into discrete slots denoted as $t=1,2,...$. Each node $n$ has an unknown (slowly) time-variant $M$ dimensional weight vector $\widetilde{\mathbf{w}}_n(t)\in\mathbb{R}^M$ to be estimated. The formulated network is therefore a multitask learning network since different nodes have different weight vectors, as opposed to the traditional single task learning network \cite{sayed2014adaptive}, which is usually transformed into a consensus optimization problem framework \cite{shi2014linear,ling2015dlm,shi2015extra}. Each node $n$ has access to a sequence of private measurements $\{d_n(t),\mathbf{u}_n(t)\}_{t=1,2,...}$, where $\mathbf{u}_n(t)\in\mathbb{R}^M$ is the input regressor at time $t$ and $d_n(t)\in\mathbb{R}$ is the output observation at time $t$. The measurement data are private in the sense that node $n$ has access only to its own measurement sequence but not others'. The data at node $n$ are assumed to conform to a linear regression model with (slowly) time-variant weight vector $\widetilde{\mathbf{w}}_n(t)$: \begin{equation} d_n(t)=\mathbf{u}_n^\mathsf{T}(t)\widetilde{\mathbf{w}}_n(t)+e_n(t), \end{equation} where $e_n(t)$ is the output measurement noise at time $t$. In multitask learning networks, the benefit of cooperation between nodes comes from the fact that neighboring nodes have \emph{similar} weight vectors \cite{chen2014multitask}, where similarity is embodied by some specific distance measures. 
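For concreteness, the linear measurement model above can be simulated in a few lines. The following Python sketch is purely illustrative; the Gaussian regressor and noise distributions, and the helper name \texttt{generate\_measurement}, are our own assumptions, not part of the model:

```python
import random

def generate_measurement(w_true, noise_std, rng):
    """Draw one (u_n(t), d_n(t)) pair from the linear model d = u^T w + e.

    Gaussian regressors and noise are illustrative assumptions; the model
    itself only requires the linear relation d = u^T w + e.
    """
    u = [rng.gauss(0.0, 1.0) for _ in range(len(w_true))]
    d = sum(ui * wi for ui, wi in zip(u, w_true)) + rng.gauss(0.0, noise_std)
    return u, d

rng = random.Random(0)
u, d = generate_measurement([1.0, -2.0, 0.0], noise_std=0.0, rng=rng)
```

With \texttt{noise\_std=0}, the observation coincides with $\mathbf{u}_n^\mathsf{T}(t)\widetilde{\mathbf{w}}_n(t)$ exactly.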
By incorporating terms promoting similarity between neighbors and enforcing cooperation in the network, an estimator may achieve potentially higher performance than its non-cooperative counterpart. Moreover, many signals in practice are highly sparse, i.e., most entries in the signal are equal to zero, with examples encompassing image signals, audio signals, etc. \cite{duarte2008single,griffin2011single,donoho2006compressed,candes2006robust}. The sparsity of signals is especially conspicuous in today's big data era because redundant data are collected as input features, most of which are unrelated to the targeted output, leading to sparsity of the corresponding weight vectors. Furthermore, as per convention in adaptive algorithms \cite{Haykin:1996:AFT:230061}, we assume that the weight vectors $\widetilde{\mathbf{w}}_n(t)$ vary with time very slowly. This suggests that past data are of great value for estimating the current weight vector, which justifies the advantage of the RLS (studied in this paper) over the LMS (studied in all existing works on multitask estimation \cite{chen2014multitask,nassif2014multitask,nassif2015proximal,monajemi2016informed}) in terms of convergence speed. In summary, we propose an RLS based estimator to track the unknown weight vectors $\{\widetilde{\mathbf{w}}_n(t)\}_{n=1,2,...,N}$ while enforcing similarity between neighbors' weight vectors and sparsity of all weight vectors.
The estimator at time $T$ is the optimal solution of the following optimization problem: \begin{eqnarray}\label{RLS} \text{Minimize}_{\mathbf{w}_1,...,\mathbf{w}_N}~~\sum_{n=1}^N\sum_{t=1}^T\lambda^{T-t}\left(d_n(t)-\mathbf{u}_n^\mathsf{T}(t)\mathbf{w}_n\right)^2+\beta\sum_{n=1}^N\sum_{m\in\Omega_n}\|\mathbf{w}_n-\mathbf{w}_m\|_2^2+\gamma\sum_{n=1}^N\|\mathbf{w}_n\|_1, \end{eqnarray} where $0<\lambda<1,\beta>0,\gamma>0$ are the forgetting factor of the RLS algorithm, the regularization coefficient for similarity between neighbors' weight vectors and the regularization coefficient for sparsity, respectively. In the limit $\beta\rightarrow\infty$, problem \eqref{RLS} enforces consensus of the weight vectors across nodes and thus degenerates to the sparse RLS problem in \cite{liu2014distributed}. Note that the measurement data $\{\mathbf{u}_n(t),d_n(t)\}$ arrive in a sequential manner, which necessitates an online (real-time) algorithm for solving \eqref{RLS} due to the prohibitive computation and storage cost of offline methods. Further note that the private measurement data are distributed among the network nodes. Thus, a distributed algorithm for \eqref{RLS} is imperative, as centralized algorithms are vulnerable to link failures and can incur large communication costs, not to mention the privacy concerns associated with the private data. Therefore, we aim to find \emph{distributed online} algorithms for solving \eqref{RLS}. In the following two sections, we propose two different distributed online algorithms with complementary merits in accuracy and computational complexity. \section{The Decentralized Online ADMM} In this section, we propose an alternating direction method of multipliers (ADMM) based decentralized online algorithm for solving \eqref{RLS}. It is further simplified so that each iteration consists of simple closed-form computations and each node only needs to store and update one $M\times M$ matrix and six $M$ dimensional vectors.
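As a concrete restatement of the cost in \eqref{RLS}, the following Python sketch evaluates it directly for small instances; the data layout \texttt{data[n][t-1] = (u, d)} and the helper name are our own illustrative choices:

```python
def rls_objective(W, data, neighbors, lam, beta, gamma):
    """Evaluate the cost of the networked RLS problem at weights W.

    W[n]: weight vector of node n (list of floats)
    data[n]: list of (u, d) pairs for t = 1, ..., T
    neighbors[n]: the neighbor set Omega_n of node n (list of indices)
    """
    cost = 0.0
    T = len(data[0])
    for n in range(len(W)):
        for t in range(1, T + 1):  # exponentially weighted squared residuals
            u, d = data[n][t - 1]
            resid = d - sum(ui * wi for ui, wi in zip(u, W[n]))
            cost += lam ** (T - t) * resid ** 2
        for m in neighbors[n]:     # similarity regularization
            cost += beta * sum((a - b) ** 2 for a, b in zip(W[n], W[m]))
        cost += gamma * sum(abs(w) for w in W[n])  # l1 sparsity regularization
    return cost

# Hypothetical 2-node example: nodes 0 and 1 are neighbors, one sample each
cost = rls_objective(
    W=[[0.0, 0.0], [1.0, 0.0]],
    data=[[([1.0, 0.0], 1.0)], [([0.0, 1.0], 0.0)]],
    neighbors=[[1], [0]],
    lam=0.5, beta=0.5, gamma=2.0,
)
```

Note that each edge contributes twice to the similarity term (once for each endpoint), exactly as in the double sum of \eqref{RLS}.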
We show that the gaps between the outputs of the proposed ADMM algorithm and the optimal points of \eqref{RLS} converge to zero. Before the development of the algorithm, we first present some rudimentary knowledge of ADMM in the following subsection. \subsection{Preliminaries of ADMM} ADMM is an optimization framework widely applied to various signal processing applications, including wireless communications \cite{shen2012distributed}, power systems \cite{zhang2016admm} and multi-agent coordination \cite{chang2014proximal}. It enjoys fast convergence speed under mild technical conditions \cite{deng2016global} and is especially suitable for the development of distributed algorithms \cite{boyd2011distributed,bertsekas1989parallel}. ADMM solves problems of the following form: \begin{eqnarray}\label{admm_prime} \text{Minimize}_{\mathbf{x},\mathbf{z}} f(\mathbf{x})+g(\mathbf{z})~~\text{s.t.}~~\mathbf{Ax+Bz=c}, \end{eqnarray} where $\mathbf{A}\in\mathbb{R}^{p\times n},\mathbf{B}\in\mathbb{R}^{p\times m},\mathbf{c}\in\mathbb{R}^p$ are constants and $\mathbf{x}\in\mathbb{R}^n,\mathbf{z}\in\mathbb{R}^m$ are optimization variables. $f:\mathbb{R}^n\mapsto\mathbb{R}$ and $g:\mathbb{R}^m\mapsto\mathbb{R}$ are two convex functions. The augmented Lagrangian can be formed as: \begin{equation} \mathfrak{L}_\rho(\mathbf{x,z,y})=f(\mathbf{x})+g(\mathbf{z})+\mathbf{y}^\mathsf{T}(\mathbf{Ax+Bz-c})+\frac{\rho}{2}\|\mathbf{Ax+Bz-c}\|_2^2, \end{equation} where $\mathbf{y}\in\mathbb{R}^p$ is the Lagrange multiplier and $\rho>0$ is some constant.
The ADMM then iterates over the following three steps for $k\geq0$ (the iteration index): \begin{eqnarray} &&\mathbf{x}^{k+1}=\arg\min_\mathbf{x} \mathfrak{L}_\rho\left(\mathbf{x},\mathbf{z}^k,\mathbf{y}^k\right),\label{x_prime}\\ &&\mathbf{z}^{k+1}=\arg\min_\mathbf{z} \mathfrak{L}_\rho\left(\mathbf{x}^{k+1},\mathbf{z},\mathbf{y}^k\right),\label{z_prime}\\ &&\mathbf{y}^{k+1}=\mathbf{y}^{k}+\rho\left(\mathbf{Ax}^{k+1}+\mathbf{Bz}^{k+1}-\mathbf{c}\right).\label{multiplier_prime} \end{eqnarray} The ADMM is guaranteed to converge to the optimal point of \eqref{admm_prime} as long as $f$ and $g$ are convex \cite{boyd2011distributed,bertsekas1989parallel}. It has recently been shown that global linear convergence can be ensured provided that additional assumptions on problem \eqref{admm_prime} hold \cite{deng2016global}. \subsection{Development of Decentralized Online ADMM for \eqref{RLS}} To apply ADMM to \eqref{RLS}, we first transform it into the form of \eqref{admm_prime}. We introduce auxiliary variables $\mathbf{x}_n\in\mathbb{R}^M,n=1,...,N,$ and $\mathbf{v}_{n,i}\in\mathbb{R}^M,n=1,...,N,i=1,...,|\Omega_n|$, where $|\cdot|$ denotes the cardinality of a set. Denote the index of the $i$-th neighbor of node $n$ as $g(n,i)$. Thus, problem \eqref{RLS} can be equivalently transformed into the following problem: \begin{eqnarray} \begin{split}\label{admm} &\text{Minimize}~~\sum_{n=1}^N\sum_{t=1}^T\lambda^{T-t}\left(d_n(t)-\mathbf{u}_n(t)^\mathsf{T}\mathbf{x}_n\right)^2\\ &~~~~~~~~~~~~+\beta\sum_{n=1}^N\left[|\Omega_n|\|\mathbf{x}_n\|_2^2-2\left(\sum_{i=1}^{|\Omega_n|}\mathbf{v}_{n,i}\right)^\mathsf{T}\mathbf{x}_n+\sum_{i=1}^{|\Omega_n|}\|\mathbf{v}_{n,i}\|_2^2\right]+\gamma\sum_{n=1}^N\|\mathbf{w}_n\|_1\\ &\text{s.t.}~~\mathbf{x}_n=\mathbf{w}_n,n=1,...,N,\\ &~~~~~~\mathbf{v}_{n,i}=\mathbf{w}_{g(n,i)},n=1,...,N,i=1,...,|\Omega_n|, \end{split} \end{eqnarray} where the optimization variables are $\mathbf{w}_n,\mathbf{x}_n,\mathbf{v}_{n,i},n=1,...,N,i=1,...,|\Omega_n|$.
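Before specializing to \eqref{admm}, the generic iterations \eqref{x_prime}--\eqref{multiplier_prime} can be checked on a toy scalar instance of \eqref{admm_prime}: $f(x)=\frac12(x-a)^2$, $g(z)=\frac12(z-b)^2$ with the consensus constraint $x-z=0$, whose optimum is $x^\star=z^\star=(a+b)/2$. The closed-form minimizations below are worked out by hand for this particular example and are not part of the general method:

```python
def toy_admm(a, b, rho=1.0, iters=200):
    """ADMM on: minimize 0.5*(x-a)^2 + 0.5*(z-b)^2  s.t.  x - z = 0."""
    x = z = y = 0.0
    for _ in range(iters):
        x = (a - y + rho * z) / (1.0 + rho)  # x-update: argmin_x L_rho(x, z, y)
        z = (b + y + rho * x) / (1.0 + rho)  # z-update: argmin_z L_rho(x, z, y)
        y += rho * (x - z)                   # dual (multiplier) update
    return x, z

x_star, z_star = toy_admm(1.0, 3.0)
```

Both iterates approach the consensus optimum $(a+b)/2=2$, illustrating the convergence guarantee cited above.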
Note that optimization problem \eqref{admm} is in the form of \eqref{admm_prime} (regarding $\mathbf{x}_n$'s and $\mathbf{v}_{n,i}$'s as the variable $\mathbf{x}$ in \eqref{admm_prime} and $\mathbf{w}_n$'s as the variable $\mathbf{z}$ in \eqref{admm_prime}). Thus, we can apply ADMM to problem \eqref{admm}. Introducing Lagrange multipliers $\mathbf{y}_n\in\mathbb{R}^M,\mathbf{z}_{n,i}\in\mathbb{R}^M$, we can form the augmented Lagrangian of \eqref{admm} as follows: \begin{eqnarray} \begin{split} &\mathfrak{L}_\rho(\{\mathbf{x}_n,\mathbf{v}_{n,i},\mathbf{w}_n,\mathbf{y}_n,\mathbf{z}_{n,i}\}_{n=1,...,N,i=1,...,|\Omega_n|})\\ &=\sum_{n=1}^N\sum_{t=1}^T\lambda^{T-t}\left(d_n(t)-\mathbf{u}_n(t)^\mathsf{T}\mathbf{x}_n\right)^2+\beta\sum_{n=1}^N\left[|\Omega_n|\|\mathbf{x}_n\|_2^2-2\left(\sum_{i=1}^{|\Omega_n|}\mathbf{v}_{n,i}\right)^\mathsf{T}\mathbf{x}_n+\sum_{i=1}^{|\Omega_n|}\|\mathbf{v}_{n,i}\|_2^2\right]\\ &~~~+\gamma\sum_{n=1}^N\|\mathbf{w}_n\|_1+\sum_{n=1}^N\mathbf{y}_n^\mathsf{T}(\mathbf{x}_n-\mathbf{w}_n)+\sum_{n=1}^N\sum_{i=1}^{|\Omega_n|}\mathbf{z}_{n,i}^\mathsf{T}(\mathbf{v}_{n,i}-\mathbf{w}_{g(n,i)})\\ &~~~+\frac{\rho}{2}\sum_{n=1}^N\|\mathbf{x}_n-\mathbf{w}_n\|_2^2+\frac{\rho}{2}\sum_{n=1}^N\sum_{i=1}^{|\Omega_n|}\|\mathbf{v}_{n,i}-\mathbf{w}_{g(n,i)}\|_2^2. \end{split} \end{eqnarray} In the following, for ease of notation, we use $\mathbf{x}$ to represent all the $\{\mathbf{x}_n\}$ and similarly for $\mathbf{v,w,y,z}$.
We apply the ADMM updates \eqref{x_prime}, \eqref{z_prime} and \eqref{multiplier_prime} to problem \eqref{admm} as follows: \begin{eqnarray} &&\left\{\mathbf{x}^{k+1},\mathbf{v}^{k+1}\right\}=\arg\min_{\mathbf{x,v}} \mathfrak{L}_\rho\left(\mathbf{x},\mathbf{v},\mathbf{w}^k,\mathbf{y}^k,\mathbf{z}^k\right),\label{xv}\\ &&\mathbf{w}^{k+1}=\arg\min_{\mathbf{w}} \mathfrak{L}_\rho\left(\mathbf{x}^{k+1},\mathbf{v}^{k+1},\mathbf{w},\mathbf{y}^k,\mathbf{z}^k\right),\label{w}\\ &&\mathbf{y}_n^{k+1}=\mathbf{y}_n^k+\rho\left(\mathbf{x}_n^{k+1}-\mathbf{w}_n^{k+1}\right),\label{y}\\ &&\mathbf{z}_{n,i}^{k+1}=\mathbf{z}_{n,i}^k+\rho\left(\mathbf{v}_{n,i}^{k+1}-\mathbf{w}_{g(n,i)}^{k+1}\right).\label{z} \end{eqnarray} In the following, we detail how to implement the updates of the primal variables, i.e., \eqref{xv} and \eqref{w}, in a distributed and online fashion. \subsubsection{Updating $\mathbf{x}$ and $\mathbf{v}$} The update of $\mathbf{x}$ and $\mathbf{v}$ in \eqref{xv} can be decomposed across nodes. For each node $n$, the subproblem is: \begin{eqnarray} \begin{split}\label{xv1} &\left\{\mathbf{x}_n^{k+1},\left\{\mathbf{v}_{n,i}^{k+1}\right\}_{i=1,...,|\Omega_n|}\right\}\\ &=\arg\min_{\mathbf{x}_n,\left\{\mathbf{v}_{n,i}\right\}_{i=1,...,|\Omega_n|}}~\Bigg\{\sum_{t=1}^T\lambda^{T-t}\left(d_n(t)-\mathbf{u}_n(t)^\mathsf{T}\mathbf{x}_n\right)^2\\ &+\beta\left[|\Omega_n|\|\mathbf{x}_n\|_2^2-2\left(\sum_{i=1}^{|\Omega_n|}\mathbf{v}_{n,i}\right)^\mathsf{T}\mathbf{x}_n+\sum_{i=1}^{|\Omega_n|}\|\mathbf{v}_{n,i}\|_2^2\right]\\ &+\mathbf{y}_n^{k\mathsf{T}}\mathbf{x}_n+\sum_{i=1}^{|\Omega_n|}\mathbf{z}_{n,i}^{k\mathsf{T}}\mathbf{v}_{n,i}+\frac{\rho}{2}\left\|\mathbf{x}_n-\mathbf{w}_n^k\right\|_2^2+\frac{\rho}{2}\sum_{i=1}^{|\Omega_n|}\left\|\mathbf{v}_{n,i}-\mathbf{w}_{g(n,i)}^k\right\|_2^2\Bigg\}. 
\end{split} \end{eqnarray} Define the data dependent input correlation matrix and input-output cross correlation vector of node $n$ at time $T$ to be: \begin{eqnarray} &&\mathbf{R}_n(T)=\sum_{t=1}^T\lambda^{T-t}\mathbf{u}_n(t)\mathbf{u}_n(t)^\mathsf{T},\label{correlation}\\ &&\mathbf{p}_n(T)=\sum_{t=1}^T\lambda^{T-t}d_n(t)\mathbf{u}_n(t).\label{cross-correlation} \end{eqnarray} Note that the objective function in \eqref{xv1} is a convex quadratic function. Hence, the necessary and sufficient condition for optimality of problem \eqref{xv1} is that the gradient of the objective function vanishes. The gradient of the objective function, denoted $J_n^k(T)$, with respect to $\mathbf{x}_n$ and $\mathbf{v}_{n,i}$ can be computed as follows: \begin{eqnarray} &&\nabla_{\mathbf{x}_n}J_n^k(T)=(2\mathbf{R}_n(T)+2\beta|\Omega_n|\mathbf{I}+\rho\mathbf{I})\mathbf{x}_n-2\beta\sum_{i=1}^{|\Omega_n|}\mathbf{v}_{n,i}-2\mathbf{p}_n(T)+\mathbf{y}_n^k-\rho\mathbf{w}_n^k,\\ &&\nabla_{\mathbf{v}_{n,i}}J_n^k(T)=-2\beta\mathbf{x}_n+(2\beta+\rho)\mathbf{v}_{n,i}+\mathbf{z}_{n,i}^k-\rho\mathbf{w}_{g(n,i)}^k.
\end{eqnarray} Letting the gradients with respect to $\mathbf{x}_n$ and $\mathbf{v}_{n,i}$ be zero, we rewrite the update in \eqref{xv1} as: \begin{eqnarray}\label{xv_matrix} \begin{split} &\left[ \begin{array}{c;{2pt/2pt}cccc} 2\mathbf{R}_n(T)+2\beta|\Omega_n|\mathbf{I}+\rho\mathbf{I} & -2\beta\mathbf{I} & -2\beta\mathbf{I} & \cdots & -2\beta\mathbf{I}\\ \hdashline[2pt/2pt] -2\beta\mathbf{I} & (2\beta+\rho)\mathbf{I} & & & \\ -2\beta\mathbf{I} & & (2\beta+\rho)\mathbf{I} & & \text{\huge0}\\ \vdots & & & \ddots &\\ -2\beta\mathbf{I} & &\text{\huge0} & & (2\beta+\rho)\mathbf{I} \end{array} \right] \left[ \begin{array}{c} \mathbf{x}_n^{k+1}\\ \hdashline[2pt/2pt] \mathbf{v}_{n,1}^{k+1}\\ \mathbf{v}_{n,2}^{k+1}\\ \vdots\\ \mathbf{v}_{n,|\Omega_n|}^{k+1} \end{array} \right] \\&=\left[ \begin{array}{c} 2\mathbf{p}_n(T)-\mathbf{y}_n^k+\rho\mathbf{w}_n^k\\ \hdashline[2pt/2pt] -\mathbf{z}_{n,1}^k+\rho\mathbf{w}_{g(n,1)}^k\\ -\mathbf{z}_{n,2}^k+\rho\mathbf{w}_{g(n,2)}^k\\ \vdots\\ -\mathbf{z}_{n,|\Omega_n|}^k+\rho\mathbf{w}_{g(n,|\Omega_n|)}^k\\ \end{array} \right] \end{split} \end{eqnarray} To invert the matrix in \eqref{xv_matrix}, we use the following matrix inversion lemma. \begin{lem} For arbitrary matrices $\mathbf{A}\in\mathbb{R}^{m\times m},\mathbf{B}\in\mathbb{R}^{m\times n},\mathbf{C}\in\mathbb{R}^{n\times m},\mathbf{D}\in\mathbb{R}^{n\times n}$ such that all the matrix inversions on the R.H.S.
of \eqref{inversion_lemma} exist, we have: \begin{eqnarray}\label{inversion_lemma} \left[ \begin{array}{cc} \mathbf{A} & \mathbf{B}\\ \mathbf{C} & \mathbf{D} \end{array} \right]^{-1} =\left[ \begin{array}{cc} \left(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}\right)^{-1} & -\left(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}\right)^{-1}\mathbf{BD}^{-1}\\ -\mathbf{D}^{-1}\mathbf{C}\left(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}\right)^{-1} & \mathbf{D}^{-1}+\mathbf{D}^{-1}\mathbf{C}\left(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C}\right)^{-1}\mathbf{BD}^{-1}. \end{array} \right] \end{eqnarray} \end{lem} Define a new matrix: \begin{equation}\label{F} \mathbf{F}_n(T)=\left[2\mathbf{R}_n(T)+\left(\rho+\frac{2\beta\rho|\Omega_n|}{2\beta+\rho}\right)\mathbf{I}\right]^{-1}. \end{equation} By invoking the matrix inversion lemma \eqref{inversion_lemma}, we can solve for the update \eqref{xv_matrix} in closed form: \begin{align} &\mathbf{x}_n^{k+1}=\mathbf{F}_n(T)\left(2\mathbf{p}_n(T)-\mathbf{y}_n^k+\rho\mathbf{w}_n^k\right)+\frac{2\beta}{2\beta+\rho}\mathbf{F}_n(T)\sum_{i=1}^{|\Omega_n|}\left(-\mathbf{z}_{n,i}^k+\rho\mathbf{w}_{g(n,i)}^k\right),\label{xx}\\ &\mathbf{v}_{n,i}^{k+1}=\frac{2\beta}{2\beta+\rho}\mathbf{F}_n(T)\left(2\mathbf{p}_n(T)-\mathbf{y}_n^k+\rho\mathbf{w}_n^k\right)+\frac{1}{2\beta+\rho}\left(-\mathbf{z}_{n,i}^k+\rho\mathbf{w}_{g(n,i)}^k\right)\nonumber\\ &~~~~~~~~~+\left(\frac{2\beta}{2\beta+\rho}\right)^2\mathbf{F}_n(T)\sum_{j=1}^{|\Omega_n|}\left(-\mathbf{z}_{n,j}^k+\rho\mathbf{w}_{g(n,j)}^k\right).\label{vv} \end{align} \subsubsection{Updating $\mathbf{w}$} We note that the update for $\mathbf{w}$ in \eqref{w} can be decomposed not only across nodes but also across each entry of the vector $\mathbf{w}_n$. 
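A quick scalar-block ($m=n=1$) sanity check of \eqref{inversion_lemma}, comparing its right-hand side against a direct $2\times 2$ inverse (the numbers are arbitrary):

```python
# Direct inverse of [[A, B], [C, D]] via the 2x2 adjugate formula
A, B, C, D = 4.0, 1.0, 2.0, 3.0
det = A * D - B * C
direct = (D / det, -B / det, -C / det, A / det)

# The lemma's blocks, with S = A - B * D^{-1} * C (Schur complement of D)
S = A - B * (1.0 / D) * C
lemma = (
    1.0 / S,                                              # top-left block
    -(1.0 / S) * B * (1.0 / D),                           # top-right block
    -(1.0 / D) * C * (1.0 / S),                           # bottom-left block
    1.0 / D + (1.0 / D) * C * (1.0 / S) * B * (1.0 / D),  # bottom-right block
)
```

Both tuples agree entrywise, as the lemma predicts.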
For each node $n$, the $l$-th entry of $\mathbf{w}_n$ can be updated as follows: \begin{align} &w_n^{k+1}(l)=\arg\min_{w_n(l)}\Bigg\{\gamma|w_n(l)|-y_n^k(l)w_n(l)-\left(\sum_{g(m,i)=n}z_{m,i}^k(l)\right)w_n(l)+\frac{\rho}{2}\left[w_n(l)-x_n^{k+1}(l)\right]^2\nonumber\\ &~~~~~~~~~~~~+\frac{\rho}{2}\sum_{g(m,i)=n}\left[w_n(l)-v_{m,i}^{k+1}(l)\right]^2\Bigg\}\\ &=\arg\min_{w_n(l)}\Bigg\{\gamma|w_n(l)|+\frac{\rho}{2}(1+|\Omega_n|)\Bigg[w_n(l)-\frac{1}{\rho(1+|\Omega_n|)}\Bigg(y_n^k(l)+\rho x_n^{k+1}(l)\nonumber\\ &~~~~~~~~~~~~+\sum_{g(m,i)=n}\left(z_{m,i}^k(l)+v_{m,i}^{k+1}(l)\right)\Bigg)\Bigg]^2\Bigg\}\\ &=\mathcal{S}_\frac{\gamma}{\rho(1+|\Omega_n|)}\left(\frac{1}{\rho(1+|\Omega_n|)}\Bigg(y_n^k(l)+\rho x_n^{k+1}(l)+\sum_{g(m,i)=n}\left(z_{m,i}^k(l)+v_{m,i}^{k+1}(l)\right)\Bigg)\right),\label{soft} \end{align} where the soft-threshold function $\mathcal{S}$ is defined for $a\in\mathbb{R},\kappa>0$ as follows: \begin{align} \mathcal{S}_\kappa(a)= \begin{cases} a-\kappa,\text{~~if~~}a>\kappa,\\ 0,\text{~~if~~}|a|\leq\kappa,\\ a+\kappa,\text{~~if~~}a<-\kappa. \end{cases} \end{align} In \eqref{soft}, we have made use of the following fact. \begin{lem} For any $\lambda>0,\rho>0,v\in\mathbb{R}$, we have: \begin{align} \mathcal{S}_\frac{\lambda}{\rho}(v)=\arg\min_x~\left(\lambda |x|+\frac{\rho}{2}(x-v)^2\right). \end{align} \end{lem} Once we extend the definition of $\mathcal{S}$ to vectors in an entrywise way, we can write the update for $\mathbf{w}_n$ compactly as: \begin{align} \mathbf{w}_n^{k+1}=\mathcal{S}_\frac{\gamma}{\rho(1+|\Omega_n|)}\left(\frac{1}{\rho(1+|\Omega_n|)}\Bigg(\mathbf{y}_n^k+\rho \mathbf{x}_n^{k+1}+\sum_{g(m,i)=n}\left(\mathbf{z}_{m,i}^k+\mathbf{v}_{m,i}^{k+1}\right)\Bigg)\right).\label{ww} \end{align} \subsubsection{Online Algorithm with Varying $T$} So far, the derived ADMM algorithm is only suitable for one particular time slot $T$.
Since it takes iterations for ADMM to converge to the optimal point, for each time $T$, we ought to run multiple rounds of ADMM iterations $k=1,...,K$ for some sufficiently large $K$. After the ADMM has converged for this particular time $T$, we update the data related quantities ($\mathbf{R}_n(T)$ and $\mathbf{p}_n(T)$) and move to the next time slot. However, since the underlying weight vectors are varying across time (i.e., the underlying linear system is non-stationary), it is meaningless to estimate the weight vectors very accurately for every time slot. Thus, in the following, we choose $K=1$, i.e., only one iteration of ADMM update is executed in each time slot. This is inspired by many existing adaptive algorithms such as the LMS algorithm, where only one step of gradient descent is performed at each time slot \cite{Haykin:1996:AFT:230061}. As such, we replace $k$ with $T-1$ in the previously derived updates \eqref{xx}, \eqref{vv}, \eqref{ww} and get updates that are suitable for varying time $T$: \begin{align} &\mathbf{x}_n(T)=\mathbf{F}_n(T)\left(2\mathbf{p}_n(T)-\mathbf{y}_n(T-1)+\rho\mathbf{w}_n(T-1)\right)\nonumber\\ &~~~~~~~~~+\frac{2\beta}{2\beta+\rho}\mathbf{F}_n(T)\sum_{i=1}^{|\Omega_n|}\left(-\mathbf{z}_{n,i}(T-1)+\rho\mathbf{w}_{g(n,i)}(T-1)\right),\label{xxx}\\ &\mathbf{v}_{n,i}(T)=\frac{2\beta}{2\beta+\rho}\mathbf{F}_n(T)\left(2\mathbf{p}_n(T)-\mathbf{y}_n(T-1)+\rho\mathbf{w}_n(T-1)\right)\nonumber\\ &~~~~~~~~~+\frac{1}{2\beta+\rho}\left(-\mathbf{z}_{n,i}(T-1)+\rho\mathbf{w}_{g(n,i)}(T-1)\right)\nonumber\\ &~~~~~~~~~+\left(\frac{2\beta}{2\beta+\rho}\right)^2\mathbf{F}_n(T)\sum_{j=1}^{|\Omega_n|}\left(-\mathbf{z}_{n,j}(T-1)+\rho\mathbf{w}_{g(n,j)}(T-1)\right),\label{vvv}\\ &\mathbf{w}_n(T)=\mathcal{S}_\frac{\gamma}{\rho(1+|\Omega_n|)}\left(\frac{1}{\rho(1+|\Omega_n|)}\Bigg(\mathbf{y}_n(T-1)+\rho \mathbf{x}_n(T)+\sum_{g(m,i)=n}\left(\mathbf{z}_{m,i}(T-1)+\mathbf{v}_{m,i}(T)\right)\Bigg)\right).\label{www} \end{align} Moreover, the updates 
\eqref{y} and \eqref{z} for dual variables can be rewritten as: \begin{align} \label{y_algo}&\mathbf{y}_n(T)=\mathbf{y}_n(T-1)+\rho\left(\mathbf{x}_n(T)-\mathbf{w}_n(T)\right),\\ &\mathbf{z}_{n,i}(T)=\mathbf{z}_{n,i}(T-1)+\rho\left(\mathbf{v}_{n,i}(T)-\mathbf{w}_{g(n,i)}(T)\right).\label{zzz} \end{align} The correlation matrices and cross-correlation vectors can be updated as follows: \begin{align} &\mathbf{R}_n(T+1)=\lambda\mathbf{R}_n(T)+\mathbf{u}_n(T+1)\mathbf{u}_n(T+1)^\mathsf{T},\label{R_update}\\ &\mathbf{p}_n(T+1)=\lambda\mathbf{p}_n(T)+d_n(T+1)\mathbf{u}_n(T+1).\label{p_update} \end{align} Then $\mathbf{F}_n(T)$ is computed according to \eqref{F}. \begin{rem} The computation of $\mathbf{F}_n(T)$ in \eqref{F} necessitates the inversion of an $M\times M$ matrix, which incurs a computational complexity of $\mathcal{O}(M^3)$ unless special structure is present. For the special case of $\lambda=1$ (which is suitable for time-invariant weight vectors), this burden can be alleviated as follows. According to \eqref{F}, \eqref{R_update} and the condition that $\lambda=1$, we have: \begin{align} \mathbf{F}_n(T)&=\left[2\left(\mathbf{R}_n(T-1)+\mathbf{u}_n(T)\mathbf{u}_n(T)^\mathsf{T}\right)+\left(\rho+\frac{2\beta\rho|\Omega_n|}{2\beta+\rho}\right)\mathbf{I}\right]^{-1}\\ &=\left[\mathbf{F}_n^{-1}(T-1)+2\mathbf{u}_n(T)\mathbf{u}_n(T)^\mathsf{T}\right]^{-1}\\ &=\mathbf{F}_n(T-1)-\frac{\mathbf{F}_n(T-1)\mathbf{u}_n(T)\mathbf{u}_n(T)^\mathsf{T}\mathbf{F}_n(T-1)}{\frac{1}{2}+\mathbf{u}_n(T)^\mathsf{T}\mathbf{F}_n(T-1)\mathbf{u}_n(T)}.\label{special} \end{align} However, in the general case where $\lambda<1$, the matrix inversion incurred by the computation of $\mathbf{F}_n(T)$ is inevitable, which is the most computationally intensive part of the proposed ADMM algorithm. \end{rem} \subsubsection{Simplification of the ADMM Updates} So far, the ADMM updates involve primal variables $\{\mathbf{v}_{n,i}\}$ and dual variables $\{\mathbf{z}_{n,i}\}$.
For each node $n$, $\{\mathbf{v}_{n,i}\}$ and $\{\mathbf{z}_{n,i}\}$ comprise $2|\Omega_n|$ $M$-dimensional vectors, which are costly to maintain in terms of communication and storage overhead, especially when the numbers of neighbors (degrees) are large. This motivates us to simplify the ADMM updates \eqref{xxx}-\eqref{p_update} so that the number of vectors at each node is independent of its degree. To this end, we first define the following auxiliary variables: \begin{align} &\mathbf{\underline{z}}_n(T)=\sum_{i=1}^{|\Omega_n|}\mathbf{z}_{n,i}(T),\\ &\mathbf{\overline{z}}_n(T)=\sum_{g(m,i)=n}\mathbf{z}_{m,i}(T),\\ &\mathbf{\underline{v}}_n(T)=\sum_{i=1}^{|\Omega_n|}\mathbf{v}_{n,i}(T),\\ &\mathbf{\overline{v}}_n(T)=\sum_{g(m,i)=n}\mathbf{v}_{m,i}(T),\\ \label{w_up}&\mathbf{\overline{w}}_n(T)=\sum_{m\in\Omega_n}\mathbf{w}_m(T),\\ \label{eta}&\boldsymbol{\eta}_n(T)=\mathbf{F}_n(T)(2\mathbf{p}_n(T)-\mathbf{y}_n(T-1)+\rho\mathbf{w}_n(T-1)),\\ \label{theta}&\boldsymbol{\theta}_n(T)=\mathbf{F}_n(T)(-\mathbf{\underline{z}}_n(T-1)+\rho\mathbf{\overline{w}}_n(T-1)),\\ \label{eta_up}&\boldsymbol{\overline{\eta}}_n(T)=\sum_{m\in\Omega_n}\boldsymbol{\eta}_m(T),\\ \label{theta_up}&\boldsymbol{\overline{\theta}}_n(T)=\sum_{m\in\Omega_n}\boldsymbol{\theta}_m(T). \end{align} Thus, the update for $\mathbf{x}$ in \eqref{xxx} can be rewritten as: \begin{align}\label{x_algo} \mathbf{x}_n(T)=\boldsymbol{\eta}_n(T)+\frac{2\beta}{2\beta+\rho}\boldsymbol{\theta}_n(T). \end{align} Using \eqref{vvv} yields the update for $\mathbf{\underline{v}}_n(T)$ and $\mathbf{\overline{v}}_n(T)$: \begin{align}\label{v_under_algo} \mathbf{\underline{v}}_n(T)&=\frac{2\beta|\Omega_n|}{2\beta+\rho}\boldsymbol{\eta}_n(T)+\left(\frac{2\beta}{2\beta+\rho}\right)^2|\Omega_n|\boldsymbol{\theta}_n(T)+\frac{1}{2\beta+\rho}(-\mathbf{\underline{z}}_n(T-1)+\rho\mathbf{\overline{w}}_n(T-1)).
\end{align} \begin{align}\label{v_up_algo} \mathbf{\overline{v}}_n(T)&=\frac{2\beta}{2\beta+\rho}\boldsymbol{\overline{\eta}}_n(T)+\left(\frac{2\beta}{2\beta+\rho}\right)^2\boldsymbol{\overline{\theta}}_n(T)+\frac{1}{2\beta+\rho}(-\mathbf{\overline{z}}_n(T-1)+\rho|\Omega_n|\mathbf{w}_n(T-1)). \end{align} The update for $\mathbf{w}_n(T)$ can be rewritten as: \begin{align}\label{w_algo} \mathbf{w}_n(T)=\mathcal{S}_\frac{\gamma}{\rho(1+|\Omega_n|)}\left(\frac{1}{\rho(1+|\Omega_n|)}(\mathbf{y}_n(T-1)+\rho\mathbf{x}_n(T)+\mathbf{\overline{z}}_n(T-1)+\rho\mathbf{\overline{v}}_n(T))\right). \end{align} Similarly, from \eqref{zzz}, we can spell out the updates for $\mathbf{\underline{z}}_n(T)$ and $\mathbf{\overline{z}}_n(T)$: \begin{align} \label{z_under_algo}&\mathbf{\underline{z}}_n(T)=\mathbf{\underline{z}}_n(T-1)+\rho(\mathbf{\underline{v}}_n(T)-\mathbf{\overline{w}}_n(T)),\\ \label{z_up_algo}&\mathbf{\overline{z}}_n(T)=\mathbf{\overline{z}}_n(T-1)+\rho(\mathbf{\overline{v}}_n(T)-|\Omega_n|\mathbf{w}_n(T)). \end{align} Now, we are ready to formally present the proposed decentralized online ADMM algorithm for solving \eqref{RLS}, which is summarized in Algorithm \ref{a1}. Notice that the algorithm is completely distributed: each node only needs to communicate with its neighbors. It is also online (real-time): each node only needs to store and update one $M\times M$ matrix $\mathbf{R}_n(T)$ and six $M$ dimensional vectors $\mathbf{p}_n(T),\mathbf{w}_n(T),\mathbf{\overline{w}}_n(T),\mathbf{y}_n(T),\mathbf{\underline{z}}_n(T),\mathbf{\overline{z}}_n(T)$. All other involved quantities in Algorithm \ref{a1} are intermediate and can be derived from these stored matrices and vectors. 
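The only nonsmooth operation in the updates above is the entrywise soft-threshold in \eqref{w_algo}; a minimal Python sketch of $\mathcal{S}_\kappa$ applied to a vector (the function name is our own illustrative choice):

```python
def soft_threshold(v, kappa):
    """Apply the soft-threshold S_kappa entrywise (prox of kappa * l1-norm)."""
    out = []
    for a in v:
        if a > kappa:
            out.append(a - kappa)
        elif a < -kappa:
            out.append(a + kappa)
        else:
            out.append(0.0)  # entries with |a| <= kappa are shrunk to zero
    return out
```

Entries with magnitude at most $\kappa$ are set exactly to zero, which is what makes the estimated weight vectors sparse.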
\begin{algorithm}[!htbp] \renewcommand{\algorithmicrequire}{\textbf{Inputs:} } \renewcommand\algorithmicensure {\textbf{Outputs:} } \caption{The proposed decentralized online ADMM algorithm} \begin{algorithmic}[1]\label{a1} \REQUIRE~~\\ Measurement data stream at each node $\{\mathbf{u}_n(t),d_n(t)\}$, $n=1,2,...,N$, $t=1,2,...$ \ENSURE~~\\ Estimates of the unknown weight vectors at each node $\{\mathbf{w}_n(T)\}$, $T=1,2,...$ \STATE Initialize $\mathbf{R}_n(0)=\mathbf{0}_{M\times M}$ and $\mathbf{p}_n(0)=\mathbf{w}_n(0)=\mathbf{\overline{w}}_n(0)=\mathbf{y}_n(0)=\mathbf{\underline{z}}_n(0)=\mathbf{\overline{z}}_n(0)=\mathbf{0}_M$. $T=0$. \STATE \textbf{Repeat:} \STATE $T\leftarrow T+1$. \STATE Each node $n$ updates its correlation matrix and cross-correlation vector upon receiving the new data $\mathbf{u}_n(T),d_n(T)$: \begin{align} &\mathbf{R}_n(T)=\lambda\mathbf{R}_n(T-1)+\mathbf{u}_n(T)\mathbf{u}_n(T)^\mathsf{T},\\ &\mathbf{p}_n(T)=\lambda\mathbf{p}_n(T-1)+d_n(T)\mathbf{u}_n(T). \end{align} \STATE Each node $n$ computes $\mathbf{F}_n(T)$ according to \eqref{F}. \STATE Each node $n$ computes $\boldsymbol{\eta}_n(T)$ and $\boldsymbol{\theta}_n(T)$ according to \eqref{eta} and \eqref{theta} and then broadcasts the results to its neighbors. \STATE Each node $n$ receives $\boldsymbol{\eta}_m(T)$ and $\boldsymbol{\theta}_m(T)$ from its neighbors $m\in\Omega_n$ and forms $\boldsymbol{\overline{\eta}}_n(T)$ and $\boldsymbol{\overline{\theta}}_n(T)$ based on \eqref{eta_up} and \eqref{theta_up}. \STATE Each node $n$ computes $\mathbf{x}_n(T),\mathbf{\underline{v}}_n(T),\mathbf{\overline{v}}_n(T)$ according to \eqref{x_algo}, \eqref{v_under_algo} and \eqref{v_up_algo}, respectively. \STATE Each node $n$ computes $\mathbf{w}_n(T)$ based on \eqref{w_algo} and broadcasts the result to its neighbors. \STATE Each node $n$ receives $\mathbf{w}_m(T)$ from its neighbors $m\in\Omega_n$ and forms $\mathbf{\overline{w}}_n(T)$ according to \eqref{w_up}.
\STATE Each node $n$ updates $\mathbf{y}_n(T),\mathbf{\underline{z}}_n(T),\mathbf{\overline{z}}_n(T)$ according to \eqref{y_algo}, \eqref{z_under_algo} and \eqref{z_up_algo}. \end{algorithmic} \end{algorithm} \subsection{Convergence of Algorithm \ref{a1}} In this subsection, we briefly discuss the convergence of Algorithm \ref{a1} and show that the gap between its output, $\mathbf{w}_n(T)$, and the optimal point of problem \eqref{RLS}, which we denote as $\mathbf{w}_n^*(T)$, converges to zero. We make the following assumptions. \begin{assump} The true weight vector $\mathbf{\widetilde{w}}_n$ is time-invariant, i.e., the linear regression data model is $d_n(t)=\mathbf{u}_n(t)^\mathsf{T}\mathbf{\widetilde{w}}_n+e_n(t)$. \end{assump} \begin{assump} For each node $n$, the input process $\{\mathbf{u}_n(t)\}_{t=1,2,...}$ is independent across time with time-invariant correlation matrix $\mathbf{R}_n=\mathbb{E}\left[\mathbf{u}_n(t)\mathbf{u}_n(t)^\mathsf{T}\right]$. \end{assump} \begin{assump} For each node $n$, the noise process $\{e_n(t)\}_{t=1,2,...}$ has zero mean, i.e., $\mathbb{E}[e_n(t)]=0$, and is independent across time and independent of the input process $\{\mathbf{u}_n(t)\}$. \end{assump} Note that all of these assumptions are standard when analyzing the performance of adaptive algorithms in the literature \cite{Haykin:1996:AFT:230061}. From the definitions of $\mathbf{R}_n(T)$ and $\mathbf{p}_n(T)$, we know that they are weighted sums of i.i.d. terms. According to the strong law of large numbers for weighted sums \cite{chow1973limiting,babadi2010sparls}, as $T\rightarrow\infty$, $\mathbf{R}_n(T)$ converges to $\lim_{T\rightarrow\infty}\mathbb{E}[\mathbf{R}_n(T)]=\frac{\mathbf{R}_n}{1-\lambda}$. Similarly, $\mathbf{p}_n(T)$ converges to $\lim_{T\rightarrow\infty}\mathbb{E}[\mathbf{p}_n(T)]=\frac{\mathbf{R}_n\mathbf{\widetilde{w}}_n}{1-\lambda}$.
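The exponentially weighted sums underlying these limits are easy to check numerically in the scalar case: the recursion of the form \eqref{R_update} reproduces the direct sum in \eqref{correlation}, and the total weight $\sum_{t=1}^{T}\lambda^{T-t}$ tends to $1/(1-\lambda)$ (the sample values below are arbitrary):

```python
lam = 0.9
samples = [0.5, -1.2, 2.0, 0.3, 1.1]  # scalar stand-ins for u_n(t) u_n(t)^T
T = len(samples)

# Direct exponentially weighted sum, as in the definition of R_n(T)
direct = sum(lam ** (T - t) * samples[t - 1] for t in range(1, T + 1))

# Recursive form R(T) = lam * R(T-1) + new sample, as in the online update
rec = 0.0
for s in samples:
    rec = lam * rec + s

# The total weight approaches 1 / (1 - lam) as the horizon grows
weight = sum(lam ** k for k in range(500))
```

The recursive and direct forms agree, which is why the online updates need only constant memory per node.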
When $T\rightarrow\infty$, the optimization problem at time $T$, i.e., problem \eqref{RLS}, is to minimize (w.r.t. $\{\mathbf{w}_n\}$): \begin{align} &\sum_{n=1}^N\sum_{t=1}^T\lambda^{T-t}\left(\mathbf{w}_n^\mathsf{T}\mathbf{u}_n(t)\mathbf{u}_n(t)^\mathsf{T}\mathbf{w}_n-2d_n(t)\mathbf{u}_n(t)^\mathsf{T}\mathbf{w}_n\right)+\beta\sum_{n=1}^N\sum_{m\in\Omega_n}\|\mathbf{w}_n-\mathbf{w}_m\|_2^2+\gamma\sum_{n=1}^N\|\mathbf{w}_n\|_1\nonumber\\ &\approx\frac{1}{1-\lambda}\sum_{n=1}^N\left(\mathbf{w}_n^\mathsf{T}\mathbf{R}_n\mathbf{w}_n-2\mathbf{\widetilde{w}}_n^\textsf{T}\mathbf{R}_n\mathbf{w}_n\right)+\beta\sum_{n=1}^N\sum_{m\in\Omega_n}\|\mathbf{w}_n-\mathbf{w}_m\|_2^2+\gamma\sum_{n=1}^N\|\mathbf{w}_n\|_1.\label{convergence_admm} \end{align} Note that the R.H.S. of \eqref{convergence_admm} does not depend on $T$. Moreover, ADMM is guaranteed to converge to the optimal point for static convex optimization problems of the form \eqref{admm_prime}, and the R.H.S. of \eqref{convergence_admm} can be transformed into the form of \eqref{admm_prime} as we do in Subsection III-B. So, the output of Algorithm \ref{a1}, $\mathbf{w}_n(T)$, converges to the minimum point of the R.H.S. of \eqref{convergence_admm}. Due to \eqref{convergence_admm} and the definition of $\mathbf{w}_n^*(T)$, we know that $\mathbf{w}_n^*(T)$ also converges to the minimum point of the R.H.S. of \eqref{convergence_admm}. Hence, the difference between the output of Algorithm \ref{a1}, i.e., $\mathbf{w}_n(T)$, and the optimal point of \eqref{RLS}, i.e., $\mathbf{w}_n^*(T)$, converges to zero. \section{The Decentralized Online Subgradient Method} The implementation of the proposed Algorithm \ref{a1} necessitates the inversion of an $M\times M$ matrix at each time and each node, which may not be suitable for nodes with low computational capability. In fact, relatively high computational overhead is a general drawback of dual domain methods (e.g., ADMM) in optimization theory \cite{ling2015dlm}.
In contrast, primal domain methods such as the gradient descent method, though converging relatively slowly, enjoy low computational complexity \cite{nedic2009distributed}. As such, in this section, we present a distributed online subgradient method for problem \eqref{RLS} that trades off convergence speed and accuracy for low computational complexity. \subsection{Development of the Decentralized Online Subgradient Method} Recall the optimization problem at time $T$, i.e., problem \eqref{RLS}. Denote the objective function of \eqref{RLS} as $H_T(\mathbf{w})$. We derive the subdifferential (the set of subgradients \cite{boyd2006subgradient}) of $H_T$ at $\mathbf{w}$ to be: \begin{align}\label{subdifferential} \partial H_T(\mathbf{w})= \left[ \begin{array}{c} 2\mathbf{R}_1(T)\mathbf{w}_1-2\mathbf{p}_1(T)+2\beta\left(2|\Omega_1|\mathbf{w}_1-2\sum_{m\in\Omega_1}\mathbf{w}_m\right)+\gamma\sgn(\mathbf{w}_1)\\ \vdots\\ 2\mathbf{R}_N(T)\mathbf{w}_N-2\mathbf{p}_N(T)+2\beta\left(2|\Omega_N|\mathbf{w}_N-2\sum_{m\in\Omega_N}\mathbf{w}_m\right)+\gamma\sgn(\mathbf{w}_N) \end{array} \right], \end{align} where the sign (set) function is defined as: \begin{align}\label{definition_sgn} \sgn(x)= \begin{cases} 1,~~\text{if}~~x>0,\\ -1,~~\text{if}~~x<0,\\ [-1,1],~~\text{if}~~x=0. \end{cases} \end{align} The extension of the $\sgn$ function to vectors is entrywise. The subgradient method simply uses the iteration $\mathbf{w}(T)=\mathbf{w}(T-1)-\alpha \mathbf{g}$, where $\mathbf{g}\in \partial H_T(\mathbf{w}(T-1))$ is any subgradient of $H_T$ at $\mathbf{w}(T-1)$ and $\alpha>0$ is the step size \cite{boyd2006subgradient}.
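As a toy illustration of this generic iteration (our own scalar example, not part of the proposed algorithm), consider minimizing $h(w)=(w-3)^2+|w|$; at the kink $w=0$ any subgradient in $[-1,1]$ may be used, and we pick $0$:

```python
import numpy as np

def subgrad_step(w, alpha=0.05):
    # subgradient of h(w) = (w - 3)^2 + |w|; np.sign(0.0) = 0 picks 0 from [-1, 1]
    g = 2.0 * (w - 3.0) + np.sign(w)
    return w - alpha * g

w = 0.0
for _ in range(500):
    w = subgrad_step(w)

print(w)   # converges to the minimizer w* = 2.5
```

The iterate settles at $w^*=2.5$, where the slope $2(w-3)$ of the smooth part exactly cancels the subgradient $+1$ contributed by $|w|$.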
This naturally leads to the following decentralized online update: \begin{align}\label{subgradient_w} &\mathbf{w}_n(T)=\mathbf{w}_n(T-1)-\alpha\Bigg[2\mathbf{R}_n(T)\mathbf{w}_n(T-1)-2\mathbf{p}_n(T)+4\beta\sum_{m\in\Omega_n}(\mathbf{w}_n(T-1)-\mathbf{w}_m(T-1))\nonumber\\ &~~~~~~~~~~~~~+\gamma\sgn(\mathbf{w}_n(T-1))\Bigg], \end{align} where $\sgn(0)$ is any number within the interval $[-1,1]$\footnote{There is a standard abuse of notation for the $\sgn$ function: in \eqref{subdifferential} and \eqref{definition_sgn}, $\sgn(0)$ is defined to be the interval $[-1,1]$ while in \eqref{subgradient_w}, $\sgn(0)$ is defined to be any arbitrary number within $[-1,1]$. In the following, the latter definition will be used.}. By introducing an auxiliary variable $\mathbf{\overline{w}}_n(T)$, we propose the decentralized online subgradient method for \eqref{RLS} in Algorithm \ref{a2}. We observe that Algorithm \ref{a2} is completely decentralized as every node only communicates with its neighbors. It is also online since each node only needs to store and update one $M\times M$ matrix and three $M$ dimensional vectors. More importantly, Algorithm \ref{a2} is free of any matrix inversion, which is a major burden of Algorithm \ref{a1}. \begin{algorithm}[!htbp] \renewcommand{\algorithmicrequire}{\textbf{Inputs:} } \renewcommand\algorithmicensure {\textbf{Outputs:} } \caption{The proposed decentralized online subgradient algorithm} \begin{algorithmic}[1]\label{a2} \REQUIRE~~\\ Measurement data stream at each node $\{\mathbf{u}_n(t),d_n(t)\}$, $n=1,2,...,N$, $t=1,2,...$ \ENSURE~~\\ Estimates of the unknown weight vectors at each node $\{\mathbf{w}_n(T)\}$, $T=1,2,...$ \STATE Initialize $\mathbf{R}_n(0)=\mathbf{0}_{M\times M}$, $\mathbf{p}_n(0)=\mathbf{w}_n(0)=\mathbf{\overline{w}}_n(0)=\mathbf{0}_M$, $T=0$. \STATE \textbf{Repeat:} \STATE $T\leftarrow T+1$. 
\STATE Each node $n$ updates its correlation matrix and cross-correlation vector upon receiving the new data $\mathbf{u}_n(T),d_n(T)$: \begin{align} &\mathbf{R}_n(T)=\lambda\mathbf{R}_n(T-1)+\mathbf{u}_n(T)\mathbf{u}_n(T)^\mathsf{T},\\ &\mathbf{p}_n(T)=\lambda\mathbf{p}_n(T-1)+d_n(T)\mathbf{u}_n(T). \end{align} \STATE Each node $n$ updates $\mathbf{w}_n(T)$: \begin{align} &\mathbf{w}_n(T)=\mathbf{w}_n(T-1)-\alpha\Bigg[2\mathbf{R}_n(T)\mathbf{w}_n(T-1)-2\mathbf{p}_n(T)+4\beta(|\Omega_n|\mathbf{w}_n(T-1)-\mathbf{\overline{w}}_n(T-1))\nonumber\\ &~~~~~~~~~~~~~+\gamma\sgn(\mathbf{w}_n(T-1))\Bigg].\label{update_subgradient} \end{align} \STATE Each node $n$ broadcasts its $\mathbf{w}_n(T)$ to its neighbors. \STATE Each node $n$ receives $\mathbf{w}_m(T)$ from its neighbors $m\in\Omega_n$ and forms $\mathbf{\overline{w}}_n(T)$ as follows: \begin{align} \mathbf{\overline{w}}_n(T)=\sum_{m\in\Omega_n}\mathbf{w}_m(T). \end{align} \end{algorithmic} \end{algorithm} \subsection{Convergence Analysis of Algorithm \ref{a2}} In this subsection, we analyze the convergence behavior of Algorithm \ref{a2}. We make the same assumptions as in the analysis of the proposed ADMM algorithm in Subsection III-C, i.e., Assumptions 1-3. As explained in Subsection III-C, for large $T$, we can approximate $\mathbf{R}_n(T)$ by $\frac{\mathbf{R}_n}{1-\lambda}$ and $\mathbf{p}_n(T)$ by $\frac{\mathbf{R}_n\mathbf{\widetilde{w}}_n}{1-\lambda}$. Define the error vector of node $n$ at time $T$ to be: \begin{align} \mathbf{f}_n(T)=\mathbf{w}_n(T)-\mathbf{\widetilde{w}}_n.\label{error} \end{align} Thus, substituting \eqref{error} into the definition of $\mathbf{\overline{w}}_n(T-1)$ yields: \begin{align} \mathbf{\overline{w}}_n(T-1)=\sum_{m\in\Omega_n}\left(\mathbf{\widetilde{w}}_m+\mathbf{f}_m(T-1)\right).
\end{align} Hence, using \eqref{update_subgradient}, \eqref{error} and the approximation of $\mathbf{R}_n(T),\mathbf{p}_n(T)$, we can derive a recursive equation for the error vector: \begin{align} \mathbf{f}_n(T)&=\mathbf{w}_n(T-1)-\mathbf{\widetilde{w}}_n-\alpha\Bigg[2\mathbf{R}_n(T)\left(\mathbf{\widetilde{w}}_n+\mathbf{f}_n(T-1)\right)-2\mathbf{p}_n(T)\nonumber\\ &~~~+4\beta\left(|\Omega_n|\left(\mathbf{\widetilde{w}}_n+\mathbf{f}_n(T-1)\right)-\sum_{m\in\Omega_n}\left(\mathbf{\widetilde{w}}_m+\mathbf{f}_m(T-1)\right)\right)+\gamma\sgn(\mathbf{w}_n(T-1))\Bigg]\\ &\approx\mathbf{f}_n(T-1)-\alpha\Bigg[\frac{2\mathbf{R}_n}{1-\lambda}(\mathbf{\widetilde{w}}_n+\mathbf{f}_n(T-1))-\frac{2\mathbf{R}_n\mathbf{\widetilde{w}}_n}{1-\lambda}\nonumber\\ &~~~+4\beta\left(|\Omega_n|\mathbf{\widetilde{w}}_n-\sum_{m\in\Omega_n}\mathbf{\widetilde{w}}_m+|\Omega_n|\mathbf{f}_n(T-1)-\sum_{m\in\Omega_n}\mathbf{f}_m(T-1)\right)+\gamma\sgn(\mathbf{w}_n(T-1))\Bigg]\\ &=\left(\mathbf{I}-\frac{2\alpha\mathbf{R}_n}{1-\lambda}\right)\mathbf{f}_n(T-1)-4\alpha\beta\left(|\Omega_n|\mathbf{f}_n(T-1)-\sum_{m\in\Omega_n}\mathbf{f}_m(T-1)\right)\nonumber\\ &~~~-4\alpha\beta\left(|\Omega_n|\mathbf{\widetilde{w}}_n-\sum_{m\in\Omega_n}\mathbf{\widetilde{w}}_m\right)-\alpha\gamma\sgn(\mathbf{w}_n(T-1)). \end{align} Taking expectations yields: \begin{align} \label{expect}\mathbb{E}[\mathbf{f}_n(T)]=\left[(1-4\alpha\beta |\Omega_n|)\mathbf{I}-\frac{2\alpha\mathbf{R}_n}{1-\lambda}\right]\mathbb{E}[\mathbf{f}_n(T-1)]+4\alpha\beta\sum_{m\in\Omega_n}\mathbb{E}[\mathbf{f}_m(T-1)]+\mathbf{r}_n(T), \end{align} where $\mathbf{r}_n(T)$ is defined as: \begin{align}\label{r_def} \mathbf{r}_n(T)=-4\alpha\beta\left(|\Omega_n|\mathbf{\widetilde{w}}_n-\sum_{m\in\Omega_n}\mathbf{\widetilde{w}}_m\right)-\alpha\gamma\mathbb{E}[\sgn(\mathbf{w}_n(T-1))]. 
\end{align} Denote the adjacency matrix of the network as $\mathbf{A}\in\mathbb{R}^{N\times N}$, i.e., $A(i,j)=1$ if nodes $i,j$ are connected by an edge and $A(i,j)=0$ otherwise. Define $\mathbf{C}=\mathbf{A}\otimes\mathbf{I}_{M\times M}\in\mathbb{R}^{MN\times MN}$ to be the block adjacency matrix, where $\otimes$ denotes the Kronecker product. Define the matrix $\mathbf{B}\in\mathbb{R}^{MN\times MN}$ as follows: \begin{align} \mathbf{B}= \left[ \begin{array}{lllc} (1-4\alpha\beta|\Omega_1|)\mathbf{I}-\frac{2\alpha\mathbf{R}_1}{1-\lambda} & & &\\ & (1-4\alpha\beta|\Omega_2|)\mathbf{I}-\frac{2\alpha\mathbf{R}_2}{1-\lambda} & &\text{\huge0}\\ & & \ddots&\\ &\text{\huge0} & & (1-4\alpha\beta|\Omega_N|)\mathbf{I}-\frac{2\alpha\mathbf{R}_N}{1-\lambda} \end{array} \right]. \end{align} Furthermore, stack $\mathbf{f}_n(T)$ and $\mathbf{r}_n(T)$ into long vectors, respectively: \begin{align} \mathbf{f}(T)= \left[ \begin{array}{c} \mathbf{f}_1(T)\\ \vdots\\ \mathbf{f}_N(T) \end{array} \right]\in\mathbb{R}^{NM}, \mathbf{r}(T)= \left[ \begin{array}{c} \mathbf{r}_1(T)\\ \vdots\\ \mathbf{r}_N(T) \end{array} \right]\in\mathbb{R}^{NM}. \end{align} Hence, \eqref{expect} can be written in a more compact form: \begin{align}\label{expect_big} \mathbb{E}[\mathbf{f}(T)]=(\mathbf{B}+4\alpha\beta\mathbf{C})\mathbb{E}[\mathbf{f}(T-1)]+\mathbf{r}(T). \end{align} Define $\boldsymbol{\Phi}=\frac{1}{\alpha}(\mathbf{I}-\mathbf{B})$. Since $\boldsymbol{\Phi}-4\beta\mathbf{C}$ is symmetric, we can perform an eigendecomposition of it, i.e., $\boldsymbol{\Phi}-4\beta\mathbf{C}=\mathbf{Q\Lambda Q}^\mathsf{T}$ for some orthogonal matrix $\mathbf{Q}\in\mathbb{R}^{MN\times MN}$ and diagonal matrix $\boldsymbol{\Lambda}=\diag(\delta_1,...,\delta_{MN})$, where the $\delta_i$'s are the eigenvalues of $\boldsymbol{\Phi}-4\beta\mathbf{C}$.
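For concreteness, the block matrices $\mathbf{C}$, $\mathbf{B}$, and $\boldsymbol{\Phi}=\frac{1}{\alpha}(\mathbf{I}-\mathbf{B})$ can be assembled directly; a minimal sketch (a small illustrative ring network and random positive definite stand-ins for the $\mathbf{R}_n$, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 3
alpha, beta, lam = 0.01, 0.5, 0.9

# ring network: adjacency A and degrees |Omega_n|
A = np.zeros((N, N))
for n in range(N):
    A[n, (n + 1) % N] = A[(n + 1) % N, n] = 1.0
deg = A.sum(axis=1)

# random positive definite stand-ins for the correlation matrices R_n
Rs = []
for _ in range(N):
    X = rng.standard_normal((M, M))
    Rs.append(X @ X.T + M * np.eye(M))

C = np.kron(A, np.eye(M))                  # block adjacency C = A (x) I
B = np.zeros((N * M, N * M))               # block diagonal matrix B
for n in range(N):
    blk = (1 - 4 * alpha * beta * deg[n]) * np.eye(M) - 2 * alpha * Rs[n] / (1 - lam)
    B[n * M:(n + 1) * M, n * M:(n + 1) * M] = blk

Phi = (np.eye(N * M) - B) / alpha          # Phi = (I - B)/alpha
# Phi - 4*beta*C is symmetric, hence admits an orthogonal eigendecomposition
print(np.allclose(Phi - 4 * beta * C, (Phi - 4 * beta * C).T))   # → True
```

Since $\mathbf{A}$ and each diagonal block are symmetric, $\boldsymbol{\Phi}-4\beta\mathbf{C}$ is symmetric by construction, which is exactly what the eigendecomposition above relies on.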
Therefore, $\mathbf{B}+4\alpha\beta\mathbf{C}=\mathbf{I}-\alpha(\mathbf{\Phi}-4\beta\mathbf{C})=\mathbf{Q\Sigma Q}^\mathsf{T}$, where we define $\mathbf{\Sigma}=\diag(1-\alpha\delta_1,...,1-\alpha\delta_{MN})$. Hence, \eqref{expect_big} can be rewritten as: \begin{align} \mathbf{Q}^\mathsf{T}\mathbb{E}[\mathbf{f}(T)]=\mathbf{\Sigma Q^\mathsf{T}}\mathbb{E}[\mathbf{f}(T-1)]+\mathbf{Q}^\mathsf{T}\mathbf{r}(T), \end{align} or \begin{align}\label{epsilon_recursive} \boldsymbol{\epsilon}(T)=\boldsymbol{\Sigma\epsilon}(T-1)+\mathbf{\widetilde{r}}(T), \end{align} where we define $\boldsymbol{\epsilon}(T)=\mathbf{Q}^\mathsf{T}\mathbb{E}[\mathbf{f}(T)]$ and $\mathbf{\widetilde{r}}(T)=\mathbf{Q}^\mathsf{T}\mathbf{r}(T)$. We first bound $\mathbf{\widetilde{r}}(T)$. To this end, from \eqref{r_def} we derive: \begin{align} \|\mathbf{r}_n(T)\|_2&\leq\alpha\mathbb{E}\left[4\beta|\Omega_n|\|\mathbf{\widetilde{w}}_n\|_2+4\beta\sum_{m\in\Omega_n}\|\mathbf{\widetilde{w}}_m\|_2+\gamma\|\sgn(\mathbf{w}_n(T-1))\|_2\right]\\ &\leq\alpha\left(8\beta\max_{n=1,...,N}|\Omega_n|\max_{n=1,...,N}\|\mathbf{\widetilde{w}}_n\|_2+\gamma\sqrt{M}\right). \end{align} Therefore, \begin{align}\label{r_tilde_bound} \|\mathbf{\widetilde{r}}(T)\|_2^2=\|\mathbf{Q}^\mathsf{T}\mathbf{r}(T)\|_2^2=\|\mathbf{r}(T)\|_2^2=\sum_{n=1}^N\|\mathbf{r}_n(T)\|_2^2\leq N\alpha^2\left(8\beta\max_{n=1,...,N}|\Omega_n|\max_{n=1,...,N}\|\mathbf{\widetilde{w}}_n\|_2+\gamma\sqrt{M}\right)^2. \end{align} Moreover, recursive application of \eqref{epsilon_recursive} yields: \begin{align} \boldsymbol{\epsilon}(T)=\mathbf{\Sigma}^T\boldsymbol{\epsilon}(0)+\sum_{t=0}^{T-1}\boldsymbol{\Sigma}^t\mathbf{\widetilde{r}}(T-t), \end{align} where the superscript $T$ on $\mathbf{\Sigma}$ denotes the $T$-th power, not transposition.
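The geometric decay implied by this recursion can be illustrated numerically for a diagonal $\mathbf{\Sigma}$ with spectral norm below one and a bounded forcing term, mirroring \eqref{r_tilde_bound} (all values below are illustrative, chosen by us for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
Sigma = np.diag(rng.uniform(-0.8, 0.8, size=n))   # diagonal with ||Sigma||_2 < 1
r_bound = 0.1                                     # per-entry bound on the forcing

eps = rng.standard_normal(n)                      # epsilon(0)
for _ in range(2000):
    r = rng.uniform(-r_bound, r_bound, size=n)    # bounded forcing, ||r||_2 <= sqrt(n)*r_bound
    eps = Sigma @ eps + r                         # epsilon(T) = Sigma epsilon(T-1) + r(T)

s = np.abs(np.diag(Sigma)).max()                  # spectral norm of the diagonal Sigma
bound = np.sqrt(n) * r_bound / (1.0 - s)          # geometric-series steady-state bound
print(np.linalg.norm(eps) <= bound)               # → True
```

The transient term $\|\mathbf{\Sigma}\|_2^T\|\boldsymbol{\epsilon}(0)\|_2$ vanishes geometrically, leaving only the geometric-series bound on the accumulated forcing, which is exactly the structure of the final theorem.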
So, \begin{align}\label{key} \|\boldsymbol{\epsilon}(T)\|_2\leq\|\mathbf{\Sigma}\|_2^T\|\boldsymbol{\epsilon}(0)\|_2+\sum_{t=0}^{T-1}\|\mathbf{\Sigma}\|_2^t\|\mathbf{\widetilde{r}}(T-t)\|_2, \end{align} where $\|\mathbf{\Sigma}\|_2$ denotes the spectral norm of $\mathbf{\Sigma}$, i.e., the maximum singular value of $\mathbf{\Sigma}$. Since $\mathbf{\Sigma}$ is a diagonal matrix, its spectral norm is simply $\|\mathbf{\Sigma}\|_2=\max_{i=1,...,NM}|1-\alpha\delta_i|$. Now, we make two additional assumptions. \begin{assump} $\frac{2}{1-\lambda}\min_n\lambda_{\min}(\mathbf{R}_n)-4\beta(\max_{n}|\Omega_n|-\min_{n}|\Omega_n|)>0$. \end{assump} \begin{assump} $\alpha<\frac{2}{\max_{i=1,...,NM}\delta_i}$. \end{assump} We recall the following theorem of Weyl \cite{horn2012matrix}. \begin{lem} (Weyl) Let $\mathbf{A},\mathbf{B}\in\mathbb{R}^{n\times n}$ be symmetric matrices. Arrange the eigenvalues in increasing order, i.e., $\lambda_1(\mathbf{X})\leq...\leq\lambda_n(\mathbf{X})$ for $\mathbf{X}\in\{\mathbf{A,B,A+B}\}$. Then for any $k\in\{1,...,n\}$, we have: \begin{align} \lambda_k(\mathbf{A})+\lambda_1(\mathbf{B})\leq\lambda_k(\mathbf{A+B})\leq\lambda_k(\mathbf{A})+\lambda_n(\mathbf{B}). \end{align} \end{lem} Making use of Weyl's theorem, we obtain: \begin{align}\label{eigen} \lambda_{\min}(\mathbf{\Phi}-4\beta\mathbf{C})\geq\lambda_{\min}(\mathbf{\Phi})+\lambda_{\min}(-4\beta\mathbf{C})=\lambda_{\min}(\mathbf{\Phi})-4\beta\lambda_{\max}(\mathbf{C}). \end{align} Recall that $\mathbf{\Phi}=\diag(4\beta|\Omega_1|\mathbf{I}+\frac{2\mathbf{R}_1}{1-\lambda},...,4\beta|\Omega_N|\mathbf{I}+\frac{2\mathbf{R}_N}{1-\lambda})$. We have $\lambda_{\min}(\mathbf{\Phi})=\min_n\left\{4\beta|\Omega_n|+\frac{2}{1-\lambda}\lambda_{\min}(\mathbf{R}_n)\right\}$.
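Weyl's inequality is straightforward to verify numerically on random symmetric matrices (a sanity-check sketch of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # random symmetric A
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # random symmetric B

# eigvalsh returns eigenvalues of a symmetric matrix in ascending order
la = np.linalg.eigvalsh(A)
lb = np.linalg.eigvalsh(B)
lab = np.linalg.eigvalsh(A + B)

# Weyl: lambda_k(A) + lambda_1(B) <= lambda_k(A+B) <= lambda_k(A) + lambda_n(B)
for k in range(n):
    assert la[k] + lb[0] - 1e-9 <= lab[k] <= la[k] + lb[-1] + 1e-9
print("Weyl bounds hold for all k")
```

In the proof below, the lemma is applied with $\mathbf{A}=\boldsymbol{\Phi}$ and $\mathbf{B}=-4\beta\mathbf{C}$, so that the smallest eigenvalue of the sum is bounded by $\lambda_{\min}(\boldsymbol{\Phi})-4\beta\lambda_{\max}(\mathbf{C})$.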
Further noting that $\mathbf{C}=\mathbf{A}\otimes\mathbf{I}$ and defining $\|\mathbf{A}\|_1=\max_{1\leq j\leq N}\sum_{i=1}^N|A_{ij}|$ to be the maximum-column-sum norm of the matrix $\mathbf{A}$, we have \begin{align} \lambda_{\max}(\mathbf{C})=\lambda_{\max}(\mathbf{A})\leq|\lambda_{\max}(\mathbf{A})|\leq\rho(\mathbf{A})\leq\|\mathbf{A}\|_1=\max_n|\Omega_n|, \end{align} where $\rho(\mathbf{A})=\max_{i=1,...,N}|\lambda_i(\mathbf{A})|$ is the spectral radius of $\mathbf{A}$. So, continuing from \eqref{eigen} we have: \begin{align} \lambda_{\min}(\mathbf{\Phi}-4\beta\mathbf{C})&\geq\min_n\left\{4\beta|\Omega_n|+\frac{2}{1-\lambda}\lambda_{\min}(\mathbf{R}_n)\right\}-4\beta\max_{n}|\Omega_n|\\ &\geq\frac{2}{1-\lambda}\min_n\lambda_{\min}(\mathbf{R}_n)-4\beta(\max_{n}|\Omega_n|-\min_{n}|\Omega_n|)\\ &>0, \end{align} where the last step follows from Assumption 4. Since the $\delta_i$'s are the eigenvalues of $\mathbf{\Phi}-4\beta\mathbf{C}$, we thus have $\delta_i>0,i=1,...,MN$. This together with Assumption 5 implies that $\max_{i=1,...,NM}|1-\alpha\delta_i|<1$, i.e., $\|\mathbf{\Sigma}\|_2<1$. Hence, as $T\rightarrow\infty$, the first term of the R.H.S. of \eqref{key} converges to zero. Furthermore, since \eqref{r_tilde_bound} holds for any $T$, the second term of the R.H.S. of \eqref{key} is always bounded above by $\frac{\sqrt{N}\alpha}{1-\|\mathbf{\Sigma}\|_2}\left(8\beta\max_{n=1,...,N}|\Omega_n|\max_{n=1,...,N}\|\mathbf{\widetilde{w}}_n\|_2+\gamma\sqrt{M}\right)$. Altogether, from \eqref{key} and the fact that $\|\mathbb{E}[\mathbf{f}(T)]\|_2=\|\mathbf{Q}^\mathsf{T}\mathbb{E}[\mathbf{f}(T)]\|_2=\|\boldsymbol{\epsilon}(T)\|_2$, we have: \begin{align} \limsup_{T\rightarrow\infty}\|\mathbb{E}[\mathbf{f}(T)]\|_2\leq\frac{\sqrt{N}\alpha}{1-\|\mathbf{\Sigma}\|_2}\left(8\beta\max_{n=1,...,N}|\Omega_n|\max_{n=1,...,N}\|\mathbf{\widetilde{w}}_n\|_2+\gamma\sqrt{M}\right). \end{align} To summarize, we have proved the following theorem.
\begin{thm} For Algorithm \ref{a2}, under Assumptions 1-5, we have: \begin{align} \limsup_{T\rightarrow\infty}\|\mathbb{E}[\mathbf{f}(T)]\|_2\leq\frac{\sqrt{N}\alpha}{1-\|\mathbf{\Sigma}\|_2}\left(8\beta\max_{n=1,...,N}|\Omega_n|\max_{n=1,...,N}\|\mathbf{\widetilde{w}}_n\|_2+\gamma\sqrt{M}\right),\label{convergence_subgradient} \end{align} where $\|\mathbf{\Sigma}\|_2<1$ is a constant related to the network structure and the error vector $\mathbf{f}(T)$ is defined as: \begin{align} \mathbf{f}(T)= \left[ \begin{array}{c} \mathbf{w}_1(T)-\mathbf{\widetilde{w}}_1\\ \vdots\\ \mathbf{w}_N(T)-\mathbf{\widetilde{w}}_N \end{array} \right]. \end{align} \end{thm} Two remarks, regarding the result and the assumptions of the theorem respectively, are in order. \begin{rem} From the R.H.S. of \eqref{convergence_subgradient}, it may seem that the smaller the values of $\beta,\gamma$, the better the performance of Algorithm \ref{a2}. However, this is not true. The reason is that the R.H.S. of \eqref{convergence_subgradient} is only an upper bound on the error: when deriving this upper bound, the regularization terms are treated as biases whose influences we want to bound (similar remarks apply to the analyses of most regularized adaptive algorithms, e.g., \cite{liu2014distributed,babadi2010sparls}). In practice, as we have argued, the facts that (i) neighbors have similar weights and (ii) the weights are sparse constitute important prior knowledge, which can enhance the performance of the algorithm. To embody this prior knowledge, we need to assign appropriate positive values to $\beta$ and $\gamma$. Another way to decrease the R.H.S. of \eqref{convergence_subgradient} is to decrease $\alpha$. However, this comes at a cost: a smaller $\alpha$ leads to slower convergence. So, in practice, $\alpha$ should not be chosen too small. \end{rem} \begin{rem} Assumptions 1-3 are standard hypotheses for analyzing the convergence behavior of adaptive algorithms.
Assumption 5, i.e., a small enough step size $\alpha$, is also common in gradient-descent-type algorithms such as LMS and the subgradient method. The only nonconventional assumption is Assumption 4. In particular, Assumption 4 holds when $\lambda$ is sufficiently close to 1, which is the case (e.g., $\lambda=0.995,0.999$) for most applications of RLS \cite{Haykin:1996:AFT:230061,babadi2010sparls,liu2014distributed}. This is not surprising because in most applications the weight vectors vary fairly slowly (if the weight vectors changed too drastically across time, virtually no adaptive algorithm could track them well, as the past data would become useless and the one-shot data at the current time alone would not suffice to estimate the high dimensional weight vectors) and thus $\lambda$ is chosen very close to 1. Another interesting observation is that for networks with uniform degree, i.e., $\max_{n}|\Omega_n|=\min_{n}|\Omega_n|$, Assumption 4 is always satisfied. \end{rem} \section{Numerical Evaluation} In this section, numerical simulations are conducted to verify the effectiveness of the proposed decentralized online ADMM algorithm (ADMM algorithm for short), Algorithm 1, and the proposed decentralized online subgradient method (subgradient method for short), Algorithm 2. The performance of the global offline optimization of problem \eqref{RLS} (global optimizer henceforth) is shown as a benchmark. For the sake of comparison, the performance of the distributed single-task sparse RLS algorithm in \cite{liu2014distributed} (DSPARLS henceforth) is also presented to highlight the impact of the multitask setting. \begin{figure} \centering \includegraphics[scale=.2]{figs/network.eps}\\ \caption{The network topology.}\label{network} \end{figure} We consider a network with $N=20$ nodes and 40 random edges so that the average node degree is 4. The network topology is illustrated in Fig. \ref{network}. The dimension of the input data is $M=20$.
Each entry of the input data sequence $\{\mathbf{u}_n(t)\}$ is generated independently according to the uniform distribution over the interval $[0,1]$. The noise sequence $\{e_n(t)\}$ is generated independently according to the uniform distribution on $[0,N_0]$, where $N_0$ is a constant controlling the noise level of the observations. To achieve sparsity, we let 18 entries (whose positions are randomly selected) of the true weight vectors $\widetilde{\mathbf{w}}_n(t)$ be zero. The two remaining entries $\widetilde{\mathbf{w}}_n^\text{part}(0)\in\mathbb{R}^2$ of the initial weight vectors are generated in a way that enforces similarity between neighbors. Specifically, we first generate $N$ i.i.d. two dimensional random vectors $\{\boldsymbol{\phi}_n\}_{n=1,...,N}$ uniformly distributed on $[0,1]^2$. Then, we solve the following optimization problem to obtain $\widetilde{\mathbf{w}}_n^\text{part}(0)$: \begin{align} \{\widetilde{\mathbf{w}}_n^\text{part}(0)\}_{n=1,...,N}=\arg\min_{\mathbf{w}_n\in\mathbb{R}^2,n=1,...,N}\sum_{n=1}^N\|\mathbf{w}_n-\boldsymbol{\phi}_n\|_2^2+\frac{1}{2}\sum_{n=1}^N\sum_{m\in\Omega_n}\|\mathbf{w}_n-\mathbf{w}_m\|_2^2, \end{align} which promotes similarity between neighbors and can be easily solved since the objective function is a convex quadratic function. To capture the slowly time-varying nature of the weight vectors, the increment from $\widetilde{\mathbf{w}}_n(t)$ to $\widetilde{\mathbf{w}}_n(t+1)$, i.e., $\widetilde{\mathbf{w}}_n(t+1)-\widetilde{\mathbf{w}}_n(t)$, is generated according to the uniform distribution on $[-0.5N_1,0.5N_1]$, independently across time and nodes, where $N_1$ is a constant controlling the varying rate of the weight vectors.
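Since the objective above is a convex quadratic, its minimizer has a closed form: setting the gradient to zero gives $(\mathbf{I}_N+\mathbf{L})$ applied to the stacked weights equal to the stacked anchors $\boldsymbol{\phi}_n$ (coordinate-wise), where $\mathbf{L}$ is the graph Laplacian. A minimal sketch of this generation step (a small illustrative ring graph rather than the $N=20$ network):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6

# small ring graph: adjacency A and graph Laplacian L = D - A
A = np.zeros((N, N))
for n in range(N):
    A[n, (n + 1) % N] = A[(n + 1) % N, n] = 1.0
L = np.diag(A.sum(axis=1)) - A

phi = rng.uniform(0.0, 1.0, size=(N, 2))   # i.i.d. anchors phi_n in [0, 1]^2

# minimizer of sum_n ||w_n - phi_n||^2 + (1/2) sum_n sum_{m in Omega_n} ||w_n - w_m||^2:
# the gradient condition is (I + L) W = [phi_1; ...; phi_N], solved coordinate-wise
W = np.linalg.solve(np.eye(N) + L, phi)

# the edge disagreement x^T L x (sum of squared neighbor gaps) shrinks after smoothing
print(np.trace(W.T @ L @ W), np.trace(phi.T @ L @ phi))
```

Because the minimizer's objective value cannot exceed the value at the anchors themselves, the Laplacian quadratic form of $\mathbf{W}$ is never larger than that of the raw anchors, which is precisely the "similar neighbors" property the construction is after.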
\begin{figure} \renewcommand\figurename{\small Fig.} \centering \vspace*{8pt} \setlength{\baselineskip}{10pt} \subfigure[Learning curve of Scenario 1]{ \includegraphics[scale = 0.15]{figs/exp_2_learning_curve_network.eps}} \subfigure[Learning curve of Scenario 2]{ \includegraphics[scale = 0.15]{figs/exp_1_learning_curve_network.eps}} \caption{Learning curves of Scenario 1 and Scenario 2.} \label{learning_curve} \end{figure} Now, we choose the regularization parameters and forgetting factor as $\beta=\gamma=1$ and $\lambda=0.995$. We consider two scenarios in terms of the noise level $N_0$ and the varying rate of the weight vectors $N_1$. In Scenario 1, $N_0=0.1,N_1=0.02$ while in Scenario 2, $N_0=0.3,N_1=0.05$. The latter scenario has noisier observations and faster-varying weight vectors. Thus, the weight vectors of Scenario 2 are more difficult to track than those of Scenario 1. For the proposed ADMM algorithm, the proposed subgradient method, the DSPARLS algorithm in \cite{liu2014distributed} and the global optimizer, we plot the relative errors (defined to be $\|\mathbf{w}(t)-\widetilde{\mathbf{w}}(t)\|_2/\|\widetilde{\mathbf{w}}(t)\|_2$, where $\mathbf{w}(t)$ and $\widetilde{\mathbf{w}}(t)$ are concatenations of $\mathbf{w}_n(t)$ and $\widetilde{\mathbf{w}}_n(t)$ over all nodes, respectively) as functions of the time index, i.e., the learning curves, under Scenario 1 (Fig. \ref{learning_curve}-(a)) and Scenario 2 (Fig. \ref{learning_curve}-(b)), respectively. Each learning curve is the average of 300 independent trials. Several interesting observations can be made from Fig. \ref{learning_curve}. First, the relative errors of both the proposed ADMM algorithm and the proposed subgradient method converge to that of the global optimizer, i.e., the performance benchmark, as the observation data accumulate. On the contrary, the relative error of DSPARLS does not converge to that of the global optimizer.
This highlights the effectiveness of the two proposed algorithms in tracking multitask weight vectors, which cannot be tracked well by an existing single-task method (DSPARLS in this case). Second, comparisons between the learning curves of the two proposed algorithms indicate that the proposed ADMM algorithm needs far fewer observations, or equivalently much less time (about 100 time units), to track the weight vectors accurately than the proposed subgradient method does (about 600 time units). This is not surprising, as dual domain methods generally converge faster than primal domain methods in the optimization literature \cite{ling2015dlm,mokhtari2016dqm}. However, the advantage of the proposed ADMM algorithm in convergence speed comes at the cost of higher computational overhead per time unit than the proposed subgradient method. This accuracy-complexity tradeoff makes the two proposed algorithms appropriate for different applications, depending on the computational capability of the devices and the required tracking accuracy. Third, as one expects, Scenario 1 exhibits better tracking performance than Scenario 2: the ultimate relative error of the proposed algorithms in Scenario 1 is about 0.067 while that of Scenario 2 is about 0.17. So, higher noise levels and faster-varying weight vectors do result in lower tracking accuracy.
\begin{figure} \renewcommand\figurename{\small Fig.} \centering \vspace*{8pt} \setlength{\baselineskip}{10pt} \subfigure[Relative errors of each node at time 200 in Scenario 1]{ \includegraphics[scale = 0.15]{figs/exp_1_nodes_200.eps}} \subfigure[Relative errors of each node at time 500 in Scenario 1]{ \includegraphics[scale = 0.15]{figs/exp_1_nodes_500.eps}} \subfigure[Relative errors of each node at time 200 in Scenario 2]{ \includegraphics[scale = 0.15]{figs/exp_2_nodes_200.eps}} \subfigure[Relative errors of each node at time 500 in Scenario 2]{ \includegraphics[scale = 0.15]{figs/exp_2_nodes_500.eps}} \caption{Relative tracking errors of each node.} \label{nodes} \end{figure} Next, we investigate the tracking performance of each individual node. To this end, in Fig. \ref{nodes}, we show the relative tracking errors of each node at times $200$ and $500$ under Scenario 1 and Scenario 2. Several remarks are in order. First, we note that in all four cases of Fig. \ref{nodes}, the red curve (the proposed ADMM algorithm) and the blue curve (the global optimizer) coincide precisely for every node. This further confirms the previous observation from Fig. \ref{learning_curve} that the performance of the proposed ADMM algorithm converges to that of the global optimizer quickly. Second, the proposed subgradient method, though performing poorly at time 200, has relative errors close to those of the global optimizer at time 500. This suggests that the proposed subgradient method eventually attains performance close to the benchmark (the global optimizer), but this good performance necessitates a longer time (or equivalently more data) compared to the proposed ADMM algorithm. Third, the performance of the single-task learning algorithm DSPARLS never converges to that of the global optimizer. In particular, from Fig. \ref{nodes}-(b)(d), the performance of DSPARLS is worst at node 5 and node 17. Recall the network topology in Fig.
\ref{network}; we see that these two nodes are loosely connected to the other nodes, i.e., their degrees are low. Thus, the weight vectors at these two nodes can potentially deviate far from the weight vectors at the rest of the nodes and thus violate the single-task assumption of DSPARLS most severely among all nodes. This partially explains the poor performance of DSPARLS at nodes 5 and 17. \begin{figure} \renewcommand\figurename{\small Fig.} \centering \vspace*{8pt} \setlength{\baselineskip}{10pt} \subfigure[Number of successful trials for different $\beta$]{ \includegraphics[scale = 0.15]{figs/exp_2_beta_number.eps}} \subfigure[Average time needed to reach success among successful trials for different $\beta$]{ \includegraphics[scale = 0.15]{figs/exp_2_beta_time.eps}} \subfigure[Number of successful trials for different $\gamma$]{ \includegraphics[scale = 0.15]{figs/exp_2_gamma_number.eps}} \subfigure[Average time needed to reach success among successful trials for different $\gamma$]{ \includegraphics[scale = 0.15]{figs/exp_2_gamma_time.eps}} \subfigure[Number of successful trials for different $\lambda$]{ \includegraphics[scale = 0.15]{figs/exp_2_lambda_number.eps}} \subfigure[Average time needed to reach success among successful trials for different $\lambda$]{ \includegraphics[scale = 0.15]{figs/exp_2_lambda_time.eps}} \caption{Number of successful trials and the average time needed to reach success among successful trials.} \label{parameter} \end{figure} The previous experiments indicate that the proposed ADMM algorithm achieves faster and more accurate tracking than the proposed subgradient method. Next, we conduct a more thorough performance comparison between the two proposed algorithms for different regularization parameters $\beta,\gamma$ and different forgetting factors $\lambda$. We note that the global optimizer usually converges well before time $1000$, and we denote its steady relative error at time $1000$ by $\check{e}$.
We say a simulation trial of an algorithm (either the proposed ADMM algorithm or the proposed subgradient method) is \emph{successful} if, before time 1000, there exists a time window (i.e., interval) of length 20 over which the average relative error of the algorithm is lower than $1.1\check{e}$. The basic parameter setup is $\gamma=\beta=1,\lambda=0.995$. In each of the subfigures in Fig. \ref{parameter}-(a)(c)(e), we vary one parameter while keeping the remaining two parameters the same as in the basic setup. For each parameter setup, we conduct 100 independent trials and plot the number of successful trials in Fig. \ref{parameter}-(a)(c)(e). We observe that (i) the proposed ADMM algorithm is always successful, i.e., it can always converge to the steady performance of the global optimizer; (ii) the proposed subgradient method is successful in most trials as long as the forgetting factor $\lambda$ is sufficiently close to 1, which is the case in most applications (e.g., $\lambda=0.995$) as the weight vectors vary very slowly and a large $\lambda$ is needed to track them. Note that the large $\lambda$ is reminiscent of Assumption 4 in the performance analysis of the proposed subgradient method; see Remark 2 for a justification of large $\lambda$. Moreover, we investigate the average time needed to reach success (defined to be the middle point of the first time window over which the average relative error is less than $1.1\check{e}$) among successful trials. The results are shown in Fig. \ref{parameter}-(b)(d)(f). We remark that the proposed ADMM algorithm mostly needs no more than 150 time units to be successful, i.e., to be close to the steady performance of the global optimizer, while it takes the proposed subgradient method a much longer time (around 600 time units) to be successful. This further confirms our previous assertion that the proposed ADMM algorithm achieves faster tracking than the proposed subgradient method.
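Operationally, the success criterion amounts to a sliding-window scan over a learning curve; a minimal sketch (the exponentially decaying curve below is synthetic and purely illustrative, not simulation output):

```python
import numpy as np

def first_success_time(rel_err, e_check, window=20, horizon=1000):
    """Middle point of the first length-`window` interval before `horizon`
    whose average relative error is below 1.1 * e_check, or None."""
    rel_err = np.asarray(rel_err[:horizon], dtype=float)
    for start in range(len(rel_err) - window + 1):
        if rel_err[start:start + window].mean() < 1.1 * e_check:
            return start + window // 2   # middle of the window
    return None

# synthetic learning curve: exponential decay toward a floor of 0.05
t = np.arange(1000)
curve = 0.05 + 0.95 * np.exp(-t / 100.0)
print(first_success_time(curve, e_check=0.05))
```

A trial is then "successful" exactly when this function returns a time index rather than `None`, and the returned index is the quantity averaged in Fig. \ref{parameter}-(b)(d)(f).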
\section{Conclusion} In this paper, we study the decentralized sparse multitask RLS problem. We first propose a decentralized online ADMM algorithm for the formulated RLS problem. We simplify the algorithm so that each iteration consists of simple closed form computations, and each node only needs to store and update one $M\times M$ matrix and six $M$ dimensional vectors. We show that the gap between the iterates of the proposed ADMM algorithm and the optimal points of the formulated RLS problem converges to zero. Moreover, to further reduce the computational complexity, we propose a decentralized online subgradient method. We show that the iterates of the proposed subgradient method track the true weight vectors with an error upper bounded by a constant related to the network topology and the algorithm parameters. Compared with the ADMM algorithm, the subgradient method enjoys lower computational complexity at the expense of slower convergence. Both proposed algorithms are corroborated by numerical experiments, which highlight their effectiveness as well as the accuracy-complexity tradeoff between them.
\section{Mean field analysis} To study the mean field fixed points we employ the ansatz that only the single mode that shares its momentum with the drive is macroscopically occupied, i.e., $b_{k} = 0$ for $k\neq p$. The equation of motion for $b_{p}$ is given in this case by \begin{align} i \dot{b}_{p} = \frac{U}{M}|b_{p}|^2b_{p} -\left[J \cos(p) +\Omega+i\frac{\Gamma}{2}\right]b_{p} +\sqrt{M}D. \end{align} Denoting the steady state solution as $b_{p}(t\rightarrow \infty)=\sqrt{N_0}e^{-i\theta}$ we arrive at the following equation for the occupation of the condensate, $n_0=N_0/M$, \begin{align} \sqrt{n_0}\left[\cos (p)+\omega\right] = u n_0^{3/2} \pm \sqrt{d^2-\frac{\gamma^2}{4}n_0},\label{eq:n0} \end{align} and its phase $\theta=\sin^{-1}(\gamma\sqrt{n_0}/2d)$. Eq.~\eqref{eq:n0} has three roots for $n_0$, which determine three fixed points $b_p^{(\nu)}=\sqrt{n_0^{(\nu)}}e^{-i\theta^{(\nu)}}$, with $\nu=1,2,3$. The stability of $b_p^{(\nu)}$ is then determined by perturbing $b_j=b^{(\nu)}_j+\delta b^{(\nu)}_j$, where $b^{(\nu)}_j= b^{(\nu)}_p e^{-ipx_j}/\sqrt{M}$ is the steady state solution and $\delta b^{(\nu)}_j=e^{-\gamma s/2} e^{-i\theta^{(\nu)}}\delta \zeta_j$ with $s=J t$ the rescaled time. Taking into account only the leading terms in $\delta \zeta_j=e^{-ipx_j}(v_je^{-i\mu s}-u^*_je^{i\mu^* s})$, the equations of motion in the momentum basis $q$ prescribe \begin{equation} \mu_q\left(\begin{array}{c} u_{q}\\ v_{q} \end{array}\right) = \mathcal L_q \left(\begin{array}{c} u_{q}\\ v_{q} \end{array}\right). \end{equation} Here $\mathcal{L}_q= -\epsilon_0(q)+ i \sigma_y \epsilon_y+\sigma_z \epsilon_z(q)$, where $\epsilon_0(q)=\sin(q) \sin(p)$, $\epsilon_y=n_0 u$, $\epsilon_z(q)=2 n_0 u -\cos(q)\cos(p)-\omega$, $\sigma_i$ are the Pauli matrices and $\mu_q$ is the Bogoliubov dispersion, given by $\mu_q=-\epsilon_0(q) \pm \sqrt{\epsilon_z^2(q)-\epsilon_y^2}$.
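As an illustrative numerical cross-check of this analysis (in the rescaled units of the text, with hypothetical function names): squaring Eq.~\eqref{eq:n0} with $c=\cos(p)+\omega$ gives the cubic $u^2 n_0^3 - 2uc\, n_0^2 + (c^2+\gamma^2/4)\, n_0 - d^2 = 0$, whose physical roots are the candidate occupations, and the Bogoliubov dispersion $\mu_q$ can then be evaluated mode by mode against the loss-rate bound $|\mathrm{Im}\,\mu_q|\leq\gamma/2$ discussed next.

```python
import numpy as np

def fixed_point_occupations(u, p, omega, gamma, d):
    """Roots of Eq. (n0): squaring sqrt(n0)(cos p + omega) =
    u n0^{3/2} +/- sqrt(d^2 - gamma^2 n0 / 4) yields the cubic
    u^2 n0^3 - 2 u c n0^2 + (c^2 + gamma^2/4) n0 - d^2 = 0 with
    c = cos(p) + omega.  Physical roots are real, nonnegative and keep
    the square root real (n0 <= 4 d^2 / gamma^2)."""
    c = np.cos(p) + omega
    roots = np.roots([u**2, -2.0 * u * c, c**2 + gamma**2 / 4.0, -d**2])
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9
                  and 0.0 <= r.real <= 4.0 * d**2 / gamma**2)

def is_stable(n0, u, p, omega, gamma, M):
    """Bogoliubov stability of a fixed point with occupation n0:
    mu_q = -eps0(q) +/- sqrt(eps_z(q)^2 - eps_y^2) must satisfy
    |Im mu_q| <= gamma/2 for every q = 2 pi m / M."""
    q = 2.0 * np.pi * np.arange(M) / M
    eps0 = np.sin(q) * np.sin(p)
    eps_y = n0 * u
    eps_z = 2.0 * n0 * u - np.cos(q) * np.cos(p) - omega
    # complex sqrt so that eps_z^2 < eps_y^2 produces an imaginary part
    root = np.sqrt((eps_z**2 - eps_y**2).astype(complex))
    worst = max(np.abs((-eps0 + root).imag).max(),
                np.abs((-eps0 - root).imag).max())
    return bool(worst <= gamma / 2.0)
```

Depending on the parameters, the filter returns one or three physical roots, reproducing the single-fixed-point and multistable regimes described below.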
For a solution to be stable we require $\delta b_j$ to decay with time; thus stability is contingent upon $| \text{Im}\, \mu_q| \leq \gamma /2$ for all values of $q=\frac{2\pi}{M} m$, $m\in \{0,1,\ldots,M-1\}$. This is an extension~\cite{carusotto2013quantum} of the more familiar case without loss ($\gamma=0$). In the main text we describe the resulting phase diagram, which is presented in Fig.~\ref{fig:phase-diagram}a,b. Here we describe the phase diagram in terms of the existence and stability of the fixed points. In region I (III) there exists only a single fixed point that is also stable: the dim (bright) fixed point, $\nu=1$ ($\nu=3$), is the low (high) occupation fixed point. In region II, by contrast, there exist three fixed points, of which only the dim one ($\nu=1$) is stable. In addition, there is the bistable regime (denoted by ``Bi'') in which both $\nu=1$ and $\nu=3$ are stable, and the regime where no fixed point is stable (denoted by ``NFP'') in which the system flows into the chaotic attractor, which is not revealed by the linear analysis above. \section{Quantum analysis} We calculate the Wigner function displayed in Fig.~\ref{fig:WignerFunction} using the formula \begin{align} W(\boldsymbol{\beta};t)=\int d\boldsymbol{\eta}\, d\boldsymbol{\eta}^* \rho_{\boldsymbol{\beta};\boldsymbol{\eta}}(t) e^{\frac{1}{2}\left(\boldsymbol{\eta}^*\cdot\boldsymbol{\beta}-\boldsymbol{\beta}^*\cdot\boldsymbol{\eta}\right)}, \end{align} where $\rho_{\boldsymbol{\beta};\boldsymbol{\eta}}(t)=\langle \boldsymbol{\beta}-\frac{\boldsymbol{\eta}}{2}|\hat{\rho}(t)|\boldsymbol{\beta}+\frac{\boldsymbol{\eta}}{2}\rangle$. Here $\ket{\boldsymbol{\beta}}$ represents a product of momentum coherent states, i.e., $\ket{\boldsymbol \beta}= \bigotimes_k\ket{\beta_k},$ satisfying $\hat b_k \ket{\boldsymbol \beta} =\beta_k\ket{\boldsymbol \beta}$.
Expressing the density matrix using momentum occupation states $\ket{\boldsymbol{n}}=\bigotimes_k \ket{n_k}$, we get the following transform \begin{eqnarray}\label{eq:wigner} W(\boldsymbol{\beta};t)=\sum_{\boldsymbol{n},\boldsymbol{n}'}\rho_{\boldsymbol{n},\boldsymbol{n}'}(t)\prod_{k} F_{n_k,n_k'}(2\beta_k), \end{eqnarray} where \begin{align} F_{n,n'}(\xi)=\frac{4e^{-|\xi|^2/2}\xi^{n'-n}}{\sqrt{n!n'!}}U(-n,n'-n+1,|\xi|^2), \end{align} with $U(a,b,c)$ denoting the confluent hypergeometric function. The density matrix $\rho_{\boldsymbol{n},\boldsymbol{n}'}(t)$ is then calculated employing $2000$ quantum trajectories. In the main text we considered two types of Wigner functions. The first is the full 6-dimensional Wigner function of Eq.~\eqref{eq:wigner}, of which we displayed only the 2-dimensional cut representing the resonant plane ($\beta_k=0$ for $k\neq p$) at time $Jt_\text{m}$ (see Fig.~\ref{fig:WignerFunction}a-e). The second is the reduced density matrix relevant to the resonant momentum, i.e., the density matrix in Eq.~\eqref{eq:wigner} is replaced with $\hat \rho_{p}(t)= \mathrm{Tr}_{k\neq p} \hat \rho(t)$ (see Fig.~\ref{fig:WignerFunction}f-j). Using the same quantum trajectory Monte-Carlo formalism, we calculate the different photonic correlation functions as we now describe. To calculate the $g^{(2)}_p(t;\tau)=\langle \hat{b}^\dagger_p(t) \hat{b}^\dagger_p(t+\tau) \hat{b}_p(t+\tau) \hat{b}_p(t)\rangle/\langle \hat{b}^\dagger_p(t) \hat{b}_p(t)\rangle^2$ correlator, we employ the usual method of first enforcing a quantum jump at $t$ and then finding the evolution along $\tau$ following the same Monte-Carlo procedure. Calculating the OTOC in a dissipative-driven system is more challenging. The OTOC can be expanded into a sum of four two-time correlators with both forward and backward time evolutions. For each of these correlators, let $t_1$ be the time associated with the rightmost operator, and let $t_2$ be the other time.
We proceed according to the following steps: (i) we evolve the wavefunction to $t_1$ (i.e., either $t$ or $t+\tau$); (ii) we apply the operators associated with $t_1$ on both the evolved ket and bra or, where this is not possible, we define four helper states following M\o{}lmer et al.~\cite{molmer1993monte}; (iii) we apply either forward or backward evolution in time from $t_1$ to $t_2$, as necessary (i.e., forward evolution from $t_1=t$ to $t_2=t+\tau$ or backward evolution from $t_1=t+\tau$ to $t_2=t$). The latter is done according to $\hat{\mathcal{H}}\to-\hat{\mathcal{H}}$ but with the dissipative part of the Lindblad evolution remaining the same~\cite{tuziemski2019out}; finally, (iv) we find the expectation value of the operator or product of operators associated with time $t_2$.
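The coefficients $F_{n,n'}$ entering the Wigner transform above are polynomials in $|\xi|^2$, since $U(-n,a+1,z)=(-1)^n\, n!\, L_n^{(a)}(z)$ for integer $n\geq 0$, with $L_n^{(a)}$ the generalized Laguerre polynomial. A minimal pure-Python sketch of their evaluation (an illustration, assuming $n'\geq n$):

```python
import math

def laguerre(n, a, z):
    """Generalized Laguerre polynomial L_n^{(a)}(z) via the standard
    three-term recurrence."""
    if n == 0:
        return 1.0
    l_prev, l_curr = 1.0, 1.0 + a - z
    for k in range(1, n):
        l_prev, l_curr = l_curr, ((2 * k + 1 + a - z) * l_curr
                                  - (k + a) * l_prev) / (k + 1)
    return l_curr

def F(n, n_prime, xi):
    """F_{n,n'}(xi) from the Wigner transform, using the identity
    U(-n, n'-n+1, z) = (-1)^n n! L_n^{(n'-n)}(z); assumes n' >= n."""
    z = abs(xi) ** 2
    u_val = (-1) ** n * math.factorial(n) * laguerre(n, n_prime - n, z)
    return (4.0 * math.exp(-z / 2.0) * xi ** (n_prime - n)
            / math.sqrt(math.factorial(n) * math.factorial(n_prime)) * u_val)
```

For instance, $F_{0,0}(\xi)=4e^{-|\xi|^2/2}$ recovers the Gaussian Wigner weight of the vacuum in this convention.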
\section{Introduction} Consider the following question and its corresponding answer, taken from the Microsoft Machine Comprehension (MS MARCO) dataset v2.1 \citep{Nguyen2016ms}: \vspace{0.2cm} \textbf{Question}: \textit{Where is Putney Bridge?} \textbf{Answer}: \textit{Putney Bridge is a bridge crossing of the River Thames in west London.} \vspace{0.2cm} \noindent This \textit{where}-question is answered using a \emph{place description} -- a description that characterizes the location of interest (Putney Bridge) based on a few anchor places (River Thames and London). Place descriptions, however, are not the only way to answer where-questions. Where-questions can also be answered via other representations such as maps or sketches \citep{mapvstext:2010}. Invariant to the chosen representation, the answers localize the place in question based on its spatial relationships with chosen anchor places \citep{COUCLELIS198799}. Hence, answering where-questions poses the following challenges no matter what representation is used: \begin{itemize} \item Generating informative answers -- i.e., the answer should fill the inquirer's gap of knowledge: obvious or already-known responses should be avoided, and useful and necessary information should be included \citep{shanon1983answers}. In the example, obvious, inadequate or undetailed answers such as \textit{on Earth} or \textit{in the UK} or \textit{over a river} are avoided by the responder. \item Answering the question in a cognitively efficient manner \citep{wilson2002relevance} -- e.g., producing short and straightforward place descriptions \citep{agilePaper} and personalized map labelling strategies in map visualizations \citep{wilson2018systems}. In the example, the responder excludes unnecessary information such as the nearby theaters and restaurants to keep the answer as simple and relevant as possible.
\item Determining the level of granularity of answers -- e.g., a suitable zoom level for maps \citep{ballatore2019} and referring to places of suitable granularity in place descriptions \citep{hamzeiCOSIT}. In our example, the names of the roads and streets that are connected to the bridge are omitted from the answer based on the responder's judgement of the relevant scale level. \item Selecting places that can be assumed to be known by the inquirer -- e.g., labelling the places known to inquirers in maps \citep{suomela2009displaying} and referring to them in place descriptions as anchors. In the example, the locations of \emph{River Thames} and \emph{London} are assumed to be known to the inquirer. \end{itemize} Where these challenges are met by an answer, the communication succeeds. Addressing these challenges is a necessary step towards answering where-questions. To understand and imitate human selectivity in choosing anchor places, we investigate and characterize human-generated answers to where-questions. The results of our research are applied to generating answers to where-questions as natural language responses (place descriptions). Selecting relevant anchor places is an essential part of generating place descriptions that succeed in answering where-questions. Moreover, information about anchor places can be used in static maps to be visualized in a proper context frame \citep{ballatore2019}. Current geographic question answering systems are often focused on coordinate retrieval as answers to where-questions \citep[e.g.,][]{luque:2006, Stadler:2012}. While coordinates are useful for communication between location-based services to perform spatial analysis or visualization \citep{JIANG2006712}, they are not necessarily a relevant response to inquirers without a proper map visualization. Yet, a characterization of relevant anchor places to localize a place in question is still missing.
In this paper, we study human-generated answers to where-questions to inform the properties of such answers and to devise and test a method to imitate their structures in machine-generated responses. To achieve these goals, the information in a where-question and its answer is modelled as \textit{an ordered set of places} that are mentioned in their content. Then the properties of places in questions and corresponding answers are derived and further investigated. This model forms a template (i.e., an ordered set of place properties) that enables computers to learn and imitate human answering behavior. In other words, place properties are utilized to understand why certain places are chosen as anchors to localize the place in question and how this selectivity can be imitated by computers. The properties that are used in the templates are generic geographic information that describes the shared meaning of places in the form of generic types from a finite set of categories. Referring to the example above, the place in question is a \textit{bridge}, which is localized by referring to the river it goes over and the city it belongs to. Here, the template captures the structure of the answer as relationships between \textit{bridges and rivers}, and \textit{bridges and cities}. \subsection{Background: Geographic Question Answering} Geographic Question Answering (GeoQA) is defined as methods and algorithms that help inquirers to satisfy their information need by deriving answers to their geographic questions. In GeoQA, answering geographic questions can be based on diverse information sources such as textual information \citep{Mishra:2010, Ferres:2006}, geodatabases \citep{chen2014}, and spatially-enabled knowledge bases \citep{Ferres:2010}. GeoQA (and in general QA) architectures typically resolve three tasks: (a) question classification and intent analysis, (b) finding relevant sources, and (c) extracting answers from the sources \citep{Ferres:2006}.
The classification of the questions \citep{agilePaper, Mohasseb2018} enables GeoQA to coarsely identify the intent and purpose of asking questions (e.g., localization, or navigation). Next, the questions are translated into formal representations such as database queries or even just a vector representation of extracted keywords \citep{Punjani:2018:TQA:3281354.3281362}. Using the representations, the information sources can be searched or queried to look up the possible answers \citep{Zheng2019}. Finally, the factoid answers are retrieved from the sources -- e.g., a sentence in a Web document, a cell in a selected table, or a node in a graph knowledge base \citep{sun-etal-2018-open}. In recent years, several GeoQA studies were conducted for answering geographic questions \citep{Stadler:2012}, creating knowledge bases from unstructured data \citep{Mai:2018}, and relaxing unanswerable questions \citep{Mai:2020}. Focusing on answering geographic questions, previous studies provide solutions to retrieve responses from knowledge bases \citep{Stadler:2012} and documents \citep{Buscaldi:2006,luque:2006}. GeoQA studies are mostly focused on what/which questions about geographic places \citep[e.g.,][]{Scheider:2020, Vahedi:2016}. In answering where-questions, the task is either simplified into retrieving stored coordinates \citep{luque:2006, Stadler:2012}, or selecting a part of text without explicit adaptation to the question \citep{Buscaldi:2006}. When answering where-questions, the answer extraction step is particularly challenging. Without a well-designed approach to imitate human answering behavior, the extracted answers can easily be over-specified and consequently uninterpretable, or under-specified and thus obvious and uninformative to the inquirer \citep{shanon1983answers}. Hence, the challenge is to provide relevant answers by selecting a proper set of anchor places to localize the place in question.
\subsection{Rationale and Research Gap} To enable computers to provide responses with similar qualities to human-generated answers, the responses need to be relevant. An answer is relevant if its positive cognitive effects on inquirers are large and the processing effort to achieve the effect is small \citep{wilson2002relevance}. In other words, answers should be informative enough and as straightforward as possible. Assuming human-generated answers are relevant responses, machine-generated responses should imitate the selectivity in human-generated answers to provide useful pieces of information and avoid unnecessary ones. Generating such relevant responses is the major prerequisite of intelligent GeoQA as defined by \cite{winter2009spatial}. Generic information captures the shared meaning of geographic places. While generic geographic information has not been used in QA, it has been used to investigate and characterize place descriptions \citep{Richter_et_al_2013,Purves2007}, route descriptions \citep{Raubal:2002}, and regions \citep{tomko2008categorical}. This research hypothesizes that, at least in the English language, generic geographic information can be used to characterize human answering behavior and ultimately to generate templates for answering where-questions. We approach this hypothesis by addressing three sub-hypotheses.
\setcounter{hyp}{0} \begin{hyp}[Characteristics of the answers] \label{hyp:a} Human-generated answers to where-questions have special characteristics that can be described and characterized in terms of generic geographic information such as type, scale, and prominence; \end{hyp} \begin{hyp}[Relation between where-questions and their answers] \label{hyp:b} There is a strong relationship between generic information in the content of where-questions and their answers, which can be used to characterize human answering behavior; \end{hyp} \begin{hyp} [Generating answers to where-questions] \label{hyp:c} If Hypotheses 1 and 2 hold, the characteristics of human-generated answers and the relation between the questions and their answers can be used to generate templates for answering where-questions. \end{hyp} \noindent To investigate the hypotheses, the following research questions will be addressed: \begin{enumerate} \item How can the characterizing patterns of the human-generated answers be derived? \item How does generic geographic information in where-questions relate to the generic information in their human-generated answers? \item How can templates be generated to imitate the structure of human-generated answers? \end{enumerate} By addressing the research questions, we contribute: \begin{itemize} \item A generalized approach to investigate human answering behavior to where-questions using generic geographic information; \item An investigation of the human-generated answers to where-questions asked in Web search, using patterns of type, scale and prominence of places. \end{itemize} \section{Methodology} To investigate the hypotheses, we propose a generalized approach of \emph{specific-generic translation}. Next, a method using specific-generic translation is devised to investigate QA in the interaction of people with a general-purpose search engine.
Other QA scenarios (e.g., human-human dialogue, human-social bot interaction) may require a different design of specific-generic translation. \subsection{Specific-Generic Translation} Figure~\ref{fig:method} shows the proposed approach to derive characterizing patterns in human-generated answers and to generate templates for answering where-questions. The approach includes two stages: (1) \emph{learning generic patterns}, where the objective is to investigate and characterize human answering behavior into a machine learning model, and (2) \emph{answering questions}, where the model is used to generate answers. The novelty of the approach lies in encoding questions and answers into their generic meaning and in modelling the relation between questions and answers in their generic forms. Later, the model is used to generate generic forms of answers to where-questions. Finally, the answer is constructed by decoding the generic form (e.g., type) of the answer into its specific representation (i.e., a toponym). \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figures/generic_to_specific.eps} \caption{Specific-generic translation approach} \label{fig:method} \end{figure} The specific-generic translation approach involves the following steps: \begin{enumerate} \item Selecting a set of generic information classes (e.g., place type, scale, prominence, functions and access) based on the context of QA and availability of data; \item Defining a schema for each selected generic information class; \item Designing an information extraction approach to encode the questions and answers into generic forms (\textit{Generic Encoder} in Figure~\ref{fig:method}); \item Evaluating how effective each generic class is in capturing the relation between the questions and their human-generated answers (\textit{Generalized Patterns} in Figure~\ref{fig:method}). The results of the evaluation also provide insights about human answering behavior in the context of the QA problem.
\item Training a predictive model that can learn generalized patterns of human-generated answers (\textit{Predictive Model} in Figure~\ref{fig:method}); \item Defining a decoding approach to map generic forms of answers into specific (toponym) representations (\textit{Generic Decoder} in Figure~\ref{fig:method}). This step can be followed by additional steps such as natural language generation to be used in real-world applications. \end{enumerate} In this paper, we discuss the results of the first five steps for question answering in a Web search scenario in detail. The last step is only demonstrated using examples. \subsection{Type, Scale and Prominence (TSP) Encoding} Based on the specific-generic translation, TSP encoding is proposed to investigate where-questions constructed only with toponyms. The generic forms which are used to investigate these questions and their answers are the \emph{type}, \emph{scale} and \emph{prominence} of the toponyms. We first introduce our terminology before discussing the details of the proposed TSP encoding method. \subsubsection{Definitions} The investigated types of where-questions are defined as: \begin{itemize} \item \textbf{Toponym-Based Where-Question (TWQ)}: A toponym-based where-question is a geographical where-question that is generated completely using toponyms. For example, \emph{Where is Beverly Hills?} is a toponym-based where-question, while \emph{Where is Clueless filmed?} (without toponym) and \emph{Where can I buy furniture?} (affordance, buying furniture) do not belong to this type. \item \textbf{Simple Where-Question (SWQ)}: Simple where-questions are a sub-class of TWQs that contain only one toponym in their body (e.g., \emph{Where is Beverly Hills?}). \item \textbf{Detailed Where-Question (DWQ)}: Detailed where-questions are a sub-class of TWQs with more than one toponym in their content (e.g., \emph{Where is Beverly Hills, California?}).
In DWQs, contextual details provided in the content of the questions show what the inquirer already knows -- e.g., that Beverly Hills is located in California. \end{itemize} We use \emph{type}, \emph{scale} and \emph{prominence}, defined as: \begin{itemize} \item \textbf{Type}: A type (e.g., restaurant, mountain) is a reference to a group of places with similar characteristics (e.g., affordance, or physical properties). Type defines similar places and differentiates dissimilar ones, sometimes in a hierarchical or taxonomical manner. Here, the relation between a specific reference to a place (unambiguous toponym) and its type is considered as a one-to-one relation. \item \textbf{Scale}: Scale is defined as a finite hierarchically-organized ordinal set of levels grounded in the human understanding and differentiation of places based on their size and their relationships (i.e., containment). The relation between scale and an unambiguous toponym is considered as one-to-one. Due to the specific context of the QA scenario, very fine levels of scale of geographic entities, such as room-level or object-level, can be neglected here, while in everyday human-human communication these levels of scale may have a more important role. \item \textbf{Prominence}: Prominence is a measure of how well-known a place is. In this research, prominence is characterized by a finite ordinal set of levels. While the prominence of places is subjective and differs from person to person based on their experience, here prominence is considered as an absolute and objective measure to rank places, established through a proxy measure defined later. This approach avoids individual experiential biases and is supported by the success of day-to-day communication, in which an absolute evaluation of prominence is shared between hearers and speakers. \end{itemize} Type, scale and prominence are used to characterize place descriptions \citep{Richter_et_al_2013,Purves2007}.
These geographic concepts can be used to capture different kinds of relationships among places. These relationships can be used to understand the relation between where-questions and their answers. For example, relationships between rivers and seas (\textit{flows to}), and between cities and countries (\textit{part of}), can be captured using place type. Considering the relation between where-questions and their answers, \textit{containment} (different levels) and \textit{nearness} (the same level) can be captured through differences among scale levels. Finally, prominence is a measure to check whether the answer is interpretable by the inquirers -- i.e., more prominent places are expected to be better known by inquirers. Lastly, the aspects of human-generated answers investigated in this paper are defined below: \begin{itemize} \item \textbf{Content}: The content of an answer is a collection of distinct information units that are presented to satisfy the inquirer's information need. Content can be generic (e.g., type) or specific (e.g., toponym). Content is the most important aspect of the answers: the difference between correct and incorrect responses is based entirely on their content. \item \textbf{Style}: The style of an answer is the way that the content is presented. Style directly influences the perception of naturalness of the response. Referring to the introductory example, \textit{\ldots Putney Bridge is a bridge crossing of the River Thames in west London} and \textit{\ldots Putney Bridge is a bridge in west London which goes over River Thames} are two different styles of answers (with the same content) to the question. Here, the former is preferred by the responder. \end{itemize} \subsubsection{TSP Sequences} In TSP encoding, we use a sequence representation to model generic/specific information in the questions and answers. A sequence is defined as an ordered set of items (here, references to generic/specific geographic information).
We first model questions and their corresponding answers as sequences of toponyms (specific representation). Then, these toponym sequences are encoded into type, scale and prominence sequences by translating each specific toponym into its corresponding generic type, scale and prominence reference. Referring to the introductory example, the specific representations (toponym sequences) and the encoded type sequences (an example of a generic sequence) of question and answer are presented below: \begin{itemize} \item \textbf{Toponym sequences}: [Putney Bridge] [River Thames, London] \item \textbf{Type sequences}: [bridge] [river, city] \end{itemize} Here, the \emph{content} refers to the information items in the sequences, and their order defines the \emph{style} in which the information is presented. \section{Implementation} Figure~\ref{fig:method2} shows the proposed workflow\footnote{Additional details of implementation are presented in the supplementary material (Section 1)} to investigate TWQs and their answers. Here, we detail the dataset, extraction, encoding, generic patterns and prediction. A complete implementation of the proposed TSP encoding approach would also include decoding from generic to specific information. Here, the decoding step is demonstrated through examples, and a fully automated implementation remains beyond the scope of this paper. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figures/TSP_method.eps} \caption{The proposed implementation approach} \label{fig:method2} \end{figure} \subsection{Data} The questions in MS MARCO v2.1 \citep{Nguyen2016ms} are categorized into five categories using tags: (1) \emph{numeric}, (2) \emph{entity}, (3) \emph{location}, (4) \emph{person}, and (5) \emph{description} \citep{Nguyen2016ms}. Geographic questions can thus be easily extracted using the predefined \emph{location} tag. The dataset contains over one million records divided into \emph{training}, \emph{development} and \emph{testing} subsets.
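The toponym-to-TSP encoding illustrated above can be sketched with a toy gazetteer; the entries, scale labels and prominence levels below are hypothetical stand-ins for Geonames/OSM lookups, not real records.

```python
# Toy gazetteer: toponym -> (type, scale, prominence).  Entries are
# illustrative stand-ins for Geonames/OSM records, not real lookups.
GAZETTEER = {
    "Putney Bridge": ("bridge", "street", 4),
    "River Thames":  ("river",  "city",   6),
    "London":        ("city",   "city",   7),
}

def encode(toponym_seq, dimension):
    """Translate a toponym sequence into one of the generic TSP sequences."""
    idx = {"type": 0, "scale": 1, "prominence": 2}[dimension]
    return [GAZETTEER[t][idx] for t in toponym_seq]

# The introductory example: question [Putney Bridge], answer [River Thames, London]
question_types = encode(["Putney Bridge"], "type")         # ['bridge']
answer_types = encode(["River Thames", "London"], "type")  # ['river', 'city']
```

The same lookup, applied per dimension, yields the scale and prominence sequences used alongside the type sequences.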
Each record in the dataset includes a \emph{question}, \emph{human-generated answer(s)} (except for records in the \emph{test} dataset, where the answers are deliberately excluded), a \emph{tag}, and a \emph{set of documents} retrieved by the Microsoft Bing search engine\footnote{More information about the dataset can be found in: \url{https://github.com/dfcf93/MSMARCO}}. The `location questions' in MS MARCO (56,721 question-answer pairs) include 36,939 geographic questions, and the remainder are questions about fictional, mystic and other non-geographic places \citep{agilePaper}. Among the geographic questions, 13,195 pairs of questions and answers are geographic where-questions \citep{agilePaper}. There are several reasons to choose MS MARCO for this study over other available datasets such as SQuAD \citep{rajpurkar2016squad}: \begin{itemize} \item MS MARCO is the largest available QA dataset; \item The questions are labelled and geographic questions can be easily extracted; \item All questions are asked in a specific real-world scenario (i.e., Web search); \item Inquirers pose questions to resolve their information needs, while in some datasets such as SQuAD, questions are made from documents. In other words, questions in SQuAD are more about what a document can answer rather than what actual inquirers want to know. \item The answers are provided using an open-form strategy. The answerers can utilize suggested documents (one or more) and their own knowledge to answer a question. Hence, the answers are not directly extracted as a single span of a document. \end{itemize} \subsection{Extraction} We first extract the questions labelled as \emph{location} and starting with a \emph{where}-token. Next, the toponyms inside the questions and answers are identified using Named Entity Recognition (NER) and gazetteer lookups, using both OSM Nominatim and Geonames gazetteers.
Here, the Stanford NLP toolkit is used to extract named entities \citep{finkel2005incorporating}. In this step, if a compound noun phrase is tagged as location, the compound noun is first checked by gazetteer lookup; if it is not identified, then its constituent simple nouns are checked. If a compound or simple noun phrase is found in both gazetteers, it is stored as a toponym. For the extracted toponyms, we retain only records for which (1) the OSM and Geonames records have the same name, and (2) the Geonames' point-based coordinates are inside the region-based representation of their corresponding OSM records. The toponym disambiguation is then undertaken based on the \emph{minimum spatial context} heuristic \citep{Leidner:2003:GSN:1119394.1119399}. We use bounding boxes to determine which combination of the geographic locations satisfies the minimum spatial extent condition. In cases of duplicate places in GeoNames which lead to the same bounding boxes, the combination with more specific place types is selected. For example, populated place (PPL) is a place type in GeoNames that could refer to a village, city, state, or even a country. Hence, administrative divisions (e.g., state) are chosen over the populated places. Finally, if toponym ambiguity still exists, we use the importance value to select the more prominent combination. More sophisticated heuristics in toponym disambiguation \citep[e.g.,][]{wang:2010, Lieberman:2012} are not used due to their reliance on significant assumptions -- e.g., assumptions about the relation between place types in the toponyms, such as city-country relations. These heuristics constrain the relationships between the type, scale and prominence of resolved places in the text. This may impact the results of this study and lead to stronger associations based on type, scale and prominence between toponyms in questions and answers. Here, to present fair results, we avoid using these disambiguation methods.
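The minimum spatial context heuristic can be sketched as picking, over all combinations of candidate gazetteer records (one per toponym), the combination whose joint bounding box is smallest. The box format and names below are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def bbox_area(boxes):
    """Area of the minimum bounding box enclosing all candidate boxes,
    each given as (min_lon, min_lat, max_lon, max_lat)."""
    min_lon = min(b[0] for b in boxes)
    min_lat = min(b[1] for b in boxes)
    max_lon = max(b[2] for b in boxes)
    max_lat = max(b[3] for b in boxes)
    return (max_lon - min_lon) * (max_lat - min_lat)

def disambiguate(candidates):
    """candidates: one list of candidate bounding boxes per toponym;
    returns the combination with the minimum joint spatial extent."""
    return min(product(*candidates), key=bbox_area)
```

Ties (identical bounding boxes) would then be broken by place-type specificity and importance, as described above.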
\subsection{Encoding} The gazetteers' attributes for the extracted toponyms have been used as proxies to capture the type, scale and prominence of the toponyms in questions and answers. Using these proxies, the sequence representations for each question-answer pair are encoded into TSP sequences. \textbf{Type:} The Geonames type schema\footnote{\url{https://www.geonames.org/export/codes.html}\label{note1}} has been used without modification to encode generic place types. This schema contains 667 unique types of geographic features, covering both natural and man-made geographic types. \textbf{Scale:} To capture scale, we have extended the schema from \cite{Richter_et_al_2013}. This schema contains seven levels of granularity: (1) furniture, (2) room, (3) building, (4) street, (5) district, (6) city, and (7) country. We have extended the coarse levels of granularity by adding \emph{county}, \emph{state}, \emph{country}, and \emph{continent}, and removing the \emph{furniture} and \emph{room} levels from the schema. OSM records include an attribute related to the OSM definition of scale (i.e., place\_rank\footnote{\url{https://wiki.openstreetmap.org/wiki/Nominatim/Development\_overview}\label{note2}}), a number between 0 and 30. We convert the extracted gazetteers' records into the appropriate scale level based on a look-up table that maps OSM scale levels into the proposed scale schema (see the supplementary material, Section 1.1). \textbf{Prominence:} To capture prominence, the \emph{importance} attribute in the extracted OSM Nominatim record is used. The OSM importance value is estimated using the Wikipedia importance score \citep{thalhammer2016pagerank} with some minor tweaks\footnote{\url{https://lists.openstreetmap.org/pipermail/geocoding/2013-August/000916.html}}. The value is defined between 0 and 1, and it is designed to be used for ranking search results.
We translate the importance values into seven finite levels of prominence, derived by \emph{natural breaks} classification \citep{Jensk1967IYC} of the frequency spectrum of the values. \subsection{Distribution Analysis and Rule Mining} Distribution analysis and rule mining techniques are used to extract and investigate patterns in the human-generated answers and the relation between the questions and their answers. Distributions of type, scale and prominence sequences are used to compare the questions and answers. To derive patterns in the questions and their answers, association rule mining with the a-priori algorithm \citep{Agrawal:1994} is used. The strength of the extracted rules is evaluated using the standard measures -- i.e., \emph{support}, \emph{confidence}, and \emph{lift}. Support defines how frequently an association rule is observed in the whole dataset, and confidence determines how often the rule is true. Lift is a measure to evaluate the importance of a rule -- i.e., a lift greater than one shows a positive and strong dependency among the elements of the extracted rule. This part of the method is devised to test the first and second hypotheses. \subsection{Prediction} The input for the prediction is an encoded sequence of TWQs, and the output is the generic sequence of their corresponding answers. The problem can then be formulated as sequence prediction over concatenated generic sequences of the questions and their answers, where part of a sequence is known, and the rest is predicted. Table~\ref{tab:seqMethods} shows the sequence prediction methods that are used in this study. We used and extended an open-source toolkit for sequence analysis \citep{SPMF} to implement the prediction methods. These classic methods are divided into probabilistic \citep{cleary1984data,pitkow1999mininglongestrepeatin,padmanabhan1996using} and non-probabilistic categories \citep{ziv1978compression,laird1994discrete,gueniche2013compact,gueniche2015cptplus}.
The probabilistic methods are based on a graph representation of conditional probabilities \citep{cleary1984data} or on the Markov chain's transition probability matrix \citep{pitkow1999mininglongestrepeatin, padmanabhan1996using} of the sequence elements. The non-probabilistic methods compress the sequences in a lossy \citep{ziv1978compression, laird1994discrete} or lossless \citep{gueniche2013compact, gueniche2015cptplus} manner into tree-based \citep{gueniche2013compact, gueniche2015cptplus} or graph-based \citep{laird1994discrete} data structures (for a review of sequence prediction methods, see \cite{review:2020}). The structure of the sequence and the relation of prior elements in the sequence to their succeeding elements are trained into a model. The model is then tested on an unseen part of the data using K-fold cross validation (K=10). We considered two baseline methods to evaluate the performance of the sequence prediction methods: (1) random sequence generation and (2) most frequent pattern. The random generation baseline only utilizes the schema of type, scale and prominence, without any information about the distributions of values in the answers. The most frequent patterns baseline predicts templates of answers using the schema and the distribution of generic references in the answers. The difference between the prediction performances of random generation and the most frequent patterns shows the impact of using the distribution of generic values in generating templates of answers (see Hypothesis 1). The sequence prediction methods additionally consider the relation between the generic values in the questions and those in their answers. Consequently, the improvement in generating the templates compared to the most frequent patterns baseline is related to the association between the generic values of questions and their answers (Hypothesis 2).
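As a concrete illustration of the probabilistic family, a minimal first-order Markov chain predictor, in the spirit of Mark1 but not the SPMF implementation, can be written as:

```python
from collections import Counter, defaultdict

class FirstOrderMarkov:
    """Predicts the next sequence element from transition counts of the
    last observed element (first-order Markov assumption)."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequences):
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.transitions[prev][nxt] += 1

    def predict(self, prefix, k=1):
        """Return the top-k most likely successors of the prefix's last element."""
        counts = self.transitions.get(prefix[-1], Counter())
        return [elem for elem, _ in counts.most_common(k)]

model = FirstOrderMarkov()
model.train([["city", "county", "country"],
             ["city", "state", "country"],
             ["city", "county", "country"]])
print(model.predict(["city"], k=2))  # ['county', 'state']
```

The toy scale-like sequences are illustrative; the study trains such models on the encoded TSP sequences and evaluates them with 10-fold cross validation.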
\begin{table} \centering \caption{\label{tab:seqMethods}Sequence prediction methods} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{lll} \toprule \textbf{Method} & \textbf{Publication} & \textbf{Year}\\ \midrule Lempel-Ziv 1978 (LZ78) & \citep{ziv1978compression}& 1978\\ First-order Markov Chains (Mark1) & \citep{cleary1984data}& 1984\\ Transition Directed Acyclic Graph (TDAG) & \citep{laird1994discrete}& 1994\\ Dependency Graph (DG) & \citep{padmanabhan1996using}& 1996\\ All-k-Order Markov Chains (AKOM) & \citep{pitkow1999mininglongestrepeatin}&1999\\ Compact Prediction Tree (CPT) & \citep{gueniche2013compact}&2013\\ Compact Prediction Tree Plus (CPT+) &\citep{gueniche2015cptplus}&2015\\ \bottomrule \end{tabular} } \end{table} In prediction, each generic form of the questions is used to predict the same generic form of their answers. In addition, we have devised an approach to predict one of the generic forms of an answer using all generic forms (i.e., type, scale and prominence) of its corresponding question. Algorithm~\ref{alg:tsp} shows the process of using all three type/scale/prominence sequences to predict a generic form of the answers in each generic class. Here, each combination of type, scale and prominence values is mapped to a unique code. Using these codes, a new sequence is generated for each question/answer to capture type, scale and prominence together. Next, these sequences are used to predict the generic form of answers. Finally, a reverse mapping is used to decode these sequences into type, scale and prominence sequences.
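The code-mapping step of Algorithm~\ref{alg:tsp} can be sketched as follows; this is a simplified reading of the procedure, using the introductory example's values, not the authors' code:

```python
def encode_tsp(type_seq, scale_seq, prom_seq):
    """Map each unique (type, scale, prominence) combination to an integer
    code and return the coded sequence plus a decoding table."""
    table, codes = {}, []
    for combo in zip(type_seq, scale_seq, prom_seq):
        if combo not in table:
            table[combo] = len(table)  # fresh code per unique combination
        codes.append(table[combo])
    decode = {code: combo for combo, code in table.items()}
    return codes, decode

# Introductory example: answer [River Thames, London] with its TSP values.
codes, decode = encode_tsp(["STM", "ADM2"], [6, 6], [6, 7])
print(codes)             # [0, 1]
print(decode[codes[0]])  # ('STM', 6, 6)
```

A predicted answer code is decoded back into its type/scale/prominence triple through the same table, as in the reverse-mapping step of the algorithm.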
\begin{algorithm}[ht] \small \caption{Training and prediction based on type-scale-prominence together} \label{alg:tsp}
\begin{algorithmic}[1]
\Procedure{$\mathbf{TSP\_Prediction}$}{$type$, $scale$, $prominence$}
\State generate a code for each unique combination of $type$-$scale$-$prominence$ ($TSP$)
\State create encoded sequences based on generated $TSP$ codes
\State train a model to predict $TSP$ in answers based on $TSP$ in the questions
\For{every $question$}
\State given a $question$ ($TSP$); predict the $answer$ ($TSP$)
\State decode the predicted $answer$ ($TSP$) to $answer$ ($type$/$scale$/$prominence$)
\If {multiple predictions are allowed}
\State avoid counting duplicate decoded values for $type$/$scale$/$prominence$
\EndIf
\EndFor
\EndProcedure \Statex
\end{algorithmic} \end{algorithm} \section{Results} \subsection{Extraction and Encoding} The assessment of toponym extraction, finding TWQs, and categorizing the questions into SWQs and DWQs is presented in Table~\ref{tab:extractionResults}. Here, the average precision and recall of the extraction results are calculated using manually annotated data (5\% of TBWQs and their answers). For the task of finding TWQs in the dataset, the \emph{false negatives} (TWQs that have not been extracted) are not investigated; hence, the recall is unknown. As shown in Table~\ref{tab:extractionResults}, 6,274 TWQs and their answers are found in the dataset. The TWQs are approximately 11.1\% of the \emph{location questions} of the dataset. For evaluation, 5\% of the extracted TWQs (314 questions) are investigated, and the precision of extraction is 91.7\% -- i.e., 288 of the 314 extracted questions are TWQs. Using the 288 TWQs, the precision and recall of extracting toponyms and classifying the questions into SWQs and DWQs are presented in Table~\ref{tab:extractionResults}.
\begin{table} \centering \caption{\label{tab:extractionResults}Extraction evaluation} \resizebox{\textwidth}{!}{ \begin{tabular}{lllll} \toprule \textbf{Extraction} & \textbf{\#Extracted}& \textbf{\#Investigated} & \textbf{Precision} & \textbf{Recall} \\ \midrule TWQs&6274&314 (5\%)&91.7\% (288 out of 314)&--\\ \midrule SWQs&3285&121 out of 288&89.4\%&90.2\%\\ DWQs&2989&167 out of 288&92.7\%&92.1\%\\ \midrule Toponyms&22307&1133\footnote{unique toponyms extracted from the sampled questions and answers}&88.6\%&90.8\%\\ \bottomrule \end{tabular} } \end{table} Table~\ref{tab:encodingResults} shows the number of records that are completely encoded for question-answer pairs in type, scale and prominence sequences. Here, if the information for even one place (mentioned either in the question or its answer) is missing, the question and its answer are not used to extract patterns or to test the predictability of generating the generic form of the answer. As shown in the table, the encoding into scale and prominence is not always possible due to the incompleteness of attribute information (i.e., \emph{place\_rank} and \emph{importance}) in OSM Nominatim. \begin{table} \centering \caption{\label{tab:encodingResults}Encoding results} \begin{tabular}{llll} \toprule \textbf{Encoding} & \textbf{\#TWQs}& \textbf{\#SWQs}&\textbf{\#DWQs}\\ \midrule Type sequences &6,274 & 3,285 & 2,989\\ Scale sequences &3,936 & 1,985 & 1,951\\ Prominence sequences &6,051 & 3,098 & 2,953\\ \bottomrule \end{tabular} \end{table} \subsection{Distributions} The distributions of TWQs\footnote{A detailed comparison of SWQs and DWQs is presented in the supplementary material (Section 2)} and their answers by type, scale and prominence are shown in Figures~\ref{fig:tqvsa},~\ref{fig:sqvsa} and~\ref{fig:pqvsa}. Figure~\ref{fig:tqvsa} shows that the diversity of types in the questions is higher than in the answers.
While administrative divisions are more frequent than other generic types in both questions and answers, they are more dominant in the answers. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/type_plot.eps} \caption{Distribution of place types in the questions and in the answers.} \label{fig:tqvsa} \end{figure} Figure~\ref{fig:sqvsa} shows that the scale in the answers is systematically one level coarser than in the questions. In addition, the distribution shows that city-level and state-level scales are frequently observed in the questions, while the answers mostly contain references at the county and country levels of scale. The results further show that the coarsest level of scale (i.e., the continent level) is rarely observed in the answers. This observation suggests that an answer at the continent level would be under-specified in most cases, and therefore uninformative. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/scale_plot.eps} \caption{Distribution of levels of scale in all toponym-based where questions and answers.} \label{fig:sqvsa} \end{figure} The distributions of prominence levels in questions and answers are similar to the distributions by scale (Figure~\ref{fig:pqvsa}). We observe a bi-modal distribution of prominence levels in the content of the questions. The distribution of prominence in the answers, however, shows that higher levels are dominant. In contrast to the distributions by scale, the most prominent level is dominant in the answers. Hence, people tend to refer to well-known places in their answers. Unlike with scale, the highest levels of prominence do not necessarily lead to obvious or irrelevant answers\footnote{A detailed analysis of sequence distributions is available in the supplementary material (Section 3)}.
\begin{figure} \centering \includegraphics[width=\textwidth]{figures/prominence_plot.eps} \caption{Distribution of prominence levels in the questions versus the answers.} \label{fig:pqvsa} \end{figure} \subsection{Extracted Rules} To test Hypotheses 1 and 2 (see Section 1.4), we extract strong rules from the encoded question-answer pairs through association rule mining. The association rules extracted from the answers can be used to describe in detail how answers are constructed (Hypothesis 1). The relationship between the content of the questions and their answers can thus also be further investigated (Hypothesis 2). Tables~\ref{tab:trules}-\ref{tab:prules} show the top five extracted rules (ranked by \emph{frequency}/\emph{support}) for type, scale and prominence, respectively. In the tables, the values starting with \emph{Q-} relate to the contents of the questions and the values starting with \emph{A-} to the contents of the corresponding answers. As shown in the tables, some rules describe the structure of answers (e.g., \{A-ADM1, A-ADM2\} =\textgreater \{A-PCLI\}) while others describe the relationships between questions and answers (e.g., \{Q-ADM2\} =\textgreater \{A-PCLI\}).
\begin{table} \centering \caption{\label{tab:trules}Extracted rules from type sequences} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{llllll} \toprule \textbf{Rank}&\textbf{rule}&\textbf{support}&\textbf{confidence}&\textbf{lift}&\textbf{frequency} \\ \midrule \multicolumn{6}{c}{\textbf{Simple where-questions}}\\ \midrule 1 & \{A-ADM2\} =\textgreater \{A-ADM1\} & 0.15 & 0.52 & 1.28 & 478 \\ 2 & \{Q-ADM1\} =\textgreater \{A-PCLI\} & 0.08 & 0.74 & 1.69 & 259 \\ 3 & \{A-ADM1,A-ADM2\} =\textgreater \{A-PCLI\} & 0.08 & 0.54 & 1.24 & 259 \\ 4 & \{Q-ADM2\} =\textgreater \{A-PCLI\} & 0.06 & 0.52 & 1.21 & 188 \\ 5 & \{Q-PPL,A-ADM2\} =\textgreater \{A-PCLI\} & 0.04 & 0.54 & 1.23 & 112 \\ \midrule \multicolumn{6}{c}{\textbf{Detailed where-questions}}\\ \midrule 1 & \{Q-ADM1\} =\textgreater \{A-ADM2\} & 0.57 & 0.76 & 1.12 & 1701 \\ 2 & \{Q-ADM1\} =\textgreater \{A-PCLI\} & 0.38 & 0.50 & 1.13 & 1126 \\ 3 & \{A-PCLI\} =\textgreater \{A-ADM2\} & 0.35 & 0.79 & 1.17 & 1053 \\ 4 & \{A-ADM2,Q-ADM1\} =\textgreater \{A-PCLI\} & 0.31 & 0.54 & 1.21 & 916 \\ 5 & \{Q-PPL\} =\textgreater \{A-ADM2\} & 0.22 & 0.78 & 1.15 & 656 \\ \bottomrule \end{tabular} } \end{table} \begin{table} \centering \caption{\label{tab:srules}Extracted rules from scale sequences} \resizebox{0.75\textwidth}{!}{ \begin{tabular}{llllll} \toprule \textbf{Rank}&\textbf{Rule}&\textbf{support}&\textbf{confidence}&\textbf{lift}&\textbf{frequency}\\ \midrule \multicolumn{6}{c}{\textbf{Simple where-questions}}\\ \midrule 1 & \{Q-6\} =\textgreater \{A-9\} & 0.21 & 0.55 & 1.01 & 417 \\ 2 & \{Q-6\} =\textgreater \{A-8\} & 0.20 & 0.54 & 1.24 & 404 \\ 3 & \{A-7\} =\textgreater \{A-9\} & 0.16 & 0.56 & 1.73 & 307 \\ 4 & \{Q-6\} =\textgreater \{A-7\} & 0.15 & 0.54 & 1.37 & 295 \\ 5 & \{A-7\} =\textgreater \{A-8\} & 0.15 & 0.54 & 1.25 & 293 \\ \midrule \multicolumn{6}{c}{\textbf{Detailed where-questions}}\\ \midrule 1 & \{Q-8\} =\textgreater \{A-7\} & 0.65 & 0.80 & 1.08 & 1277 \\ 2 & \{Q-6\} =\textgreater \{Q-8\} & 0.49 & 0.81 & 
0.99 & 952 \\ 3 & \{Q-6\} =\textgreater \{A-7\} & 0.48 & 0.80 & 1.07 & 940 \\ 4 & \{A-9\} =\textgreater \{Q-8\} & 0.45 & 0.87 & 1.07 & 887 \\ 5 & \{A-7,Q-6\} =\textgreater \{Q-8\} & 0.42 & 0.88 & 1.07 & 823 \\ \bottomrule \end{tabular} } \end{table} \begin{table} \centering \caption{\label{tab:prules}Extracted rules from prominence sequences} \resizebox{0.75\textwidth}{!}{ \begin{tabular}{llllll} \toprule \textbf{Rank}&\textbf{Rule}&\textbf{support}&\textbf{confidence}&\textbf{lift}&\textbf{frequency}\\ \midrule \multicolumn{6}{c}{\textbf{Simple where-questions}}\\ \midrule 1 & \{A-4\} =\textgreater \{A-7\} & 0.14 & 0.54 & 1.09 & 425 \\ 2 & \{A-5\} =\textgreater \{A-7\} & 0.13 & 0.50 & 1.02 & 417 \\ 3 & \{Q-3\} =\textgreater \{A-7\} & 0.12 & 0.52 & 1.05 & 382 \\ 4 & \{Q-6\} =\textgreater \{A-7\} & 0.08 & 0.58 & 1.18 & 260 \\ 5 & \{Q-4\} =\textgreater \{A-7\} & 0.08 & 0.53 & 1.07 & 250 \\ \midrule \multicolumn{6}{c}{\textbf{Detailed where-questions}}\\ \midrule 1 & \{Q-6\} =\textgreater \{A-4\} & 0.32 & 0.56 & 1.19 & 957 \\ 2 & \{Q-6\} =\textgreater \{A-7\} & 0.30 & 0.51 & 1.06 & 884 \\ 3 & \{A-4\} =\textgreater \{A-7\} & 0.25 & 0.54 & 1.11 & 742 \\ 4 & \{Q-3\} =\textgreater \{Q-6\} & 0.24 & 0.54 & 0.93 & 695 \\ 5 & \{Q-3\} =\textgreater \{A-7\} & 0.22 & 0.51 & 1.04 & 650 \\ \bottomrule \end{tabular} } \end{table} Table~\ref{tab:trules} shows the dominant role of administrative divisions in the human-generated answers. Association rules extracted based on scale (Table~\ref{tab:srules}) show that answers to SWQs refer to coarser (\emph{greater-than}) levels of scale than the questions, while answers to DWQs refer to levels \emph{between} those mentioned in the questions. The top five patterns of answers are mostly constructed with references to the highest level of prominence (\emph{A-7}). This shows the major impact of prominence on human answering behavior for where-questions -- i.e., people refer to prominent places in answering where-questions.
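The \emph{support}, \emph{confidence} and \emph{lift} values reported in these tables follow the standard definitions and can be computed as in the sketch below; the transactions shown are toy examples, not the MS MARCO data.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent => consequent
    over a list of item sets."""
    n = len(transactions)
    n_ante = sum(antecedent <= t for t in transactions)   # antecedent present
    n_cons = sum(consequent <= t for t in transactions)   # consequent present
    n_both = sum((antecedent | consequent) <= t for t in transactions)
    support = n_both / n
    confidence = n_both / n_ante
    lift = confidence / (n_cons / n)
    return support, confidence, lift

# Toy encoded question-answer pairs with Q-/A- prefixes as in the tables.
pairs = [
    {"Q-ADM1", "A-ADM2", "A-PCLI"},
    {"Q-ADM1", "A-PCLI"},
    {"Q-PPL", "A-ADM2"},
    {"Q-ADM1", "A-ADM2"},
]
support, confidence, lift = rule_metrics(pairs, {"Q-ADM1"}, {"A-PCLI"})
print(support, confidence, lift)  # 0.5, ~0.67, ~1.33
```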
Tables~\ref{tab:trules}, \ref{tab:srules} and \ref{tab:prules} show that stronger association rules with higher support are extracted from DWQs in comparison to SWQs. The rules show strong associations between the antecedent and consequent parts of the extracted rules, with lift values greater than one. The results show that stronger rules, with higher confidence and support, are extracted using \textit{scale} in comparison to \textit{type} and \textit{prominence}. The tables only present the extracted rules with the highest frequency and support. These tables show how a small set of generic rules describes a large proportion of the data in the MS MARCO dataset. Sorting the rules by confidence or lift would change their order. For example, the maximum lift (equal to \textit{8.93}) among the extracted rules belongs to \{Q-6, Q-9\} =\textgreater \{A-8\} for detailed where-questions using scale. The frequency of this rule is \textit{43}, and it describes the relevant scale level (between the minimum and maximum levels of the question) for detailed where-questions. The maximum confidence is 0.93, for detailed where-questions encoded by type. This association rule is \{Q-PPLA2, Q-ADM1\} =\textgreater \{A-ADM2\}, with a frequency of \textit{109}. This rule describes that \textit{populated places} in detailed where-questions are mostly localized by referring to the \textit{counties} they belong to. \subsection{Predicting the Generic Form of Answers} We test the predictability of the generic sequence of an answer given the generic sequence of the corresponding question. We investigate different prediction scenarios, including (1) same generic class prediction (e.g., predicting the type sequence of answers using the type sequence of questions), and (2) prediction of one generic class using all generic classes (e.g., predicting the type sequence of answers using the type/scale/prominence sequences of questions, see Algorithm~\ref{alg:tsp}).
We assess the prediction accuracy of the \textit{content} and \textit{content-and-style} of the answers (defined in Section 2.2.1). Referring to the introductory example, if the type sequence of the answer is predicted as [river, city], then it is captured as a correct prediction for both content and content-and-style. The other permutation of this sequence (i.e., [city, river]) is considered a correct prediction of content and an incorrect prediction of content-and-style. Evidently, any other type sequence is an incorrect prediction in both the content and content-and-style scenarios. Each prediction scenario is applied over all questions, SWQs and DWQs to investigate the impacts of \textit{question types} on the prediction accuracy. Each scenario is tested using all seven sequence prediction methods and is compared with the two baseline approaches (i.e., random generation and most frequent patterns). Only the best prediction performances among the seven sequence prediction methods are presented. The \textit{best performance} is the maximum prediction accuracy achieved by one of the methods for a prediction scenario. We also test the prediction accuracy when multiple predictions are allowed -- i.e., \textit{top-k predictions} for $k$ from one to five. In top-k predictions, $k$ unique sequences are predicted for each answer, and if one of the sequences matches the generic form of the answer, then the prediction is successful. Table~\ref{tab:tp_bests} shows the best performances in predicting type sequences of answers. The prediction accuracy based on TSP sequences is noticeably higher than that of predictions using only type sequences. This shows a complementary role of scale and prominence in predicting the type sequence of the answers. Contrasting DWQs and SWQs shows that the extra details in DWQs are useful for the prediction of the generic form of answers.
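The two match criteria can be expressed compactly; this is our reading of the evaluation described above, not the actual evaluation script:

```python
from collections import Counter

def content_match(predicted, truth):
    """Content is correct if both sequences contain the same elements,
    regardless of order (multiset equality)."""
    return Counter(predicted) == Counter(truth)

def content_and_style_match(predicted, truth):
    """Content-and-style additionally requires the same order."""
    return list(predicted) == list(truth)

# Introductory example: the answer's type sequence is [river, city].
truth = ["river", "city"]
print(content_match(["city", "river"], truth))            # True
print(content_and_style_match(["city", "river"], truth))  # False
print(content_and_style_match(["river", "city"], truth))  # True
```

Under top-k evaluation, a prediction counts as successful if any of the $k$ predicted sequences satisfies the respective criterion.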
In addition, we observe that the subjectivity in the style of answers and the flexibility of language to convey information lead to noticeably lower accuracy in the prediction of content-and-style of answers in comparison to the prediction of content. This observation reflects the flexibility of natural language, in which the same meaning can be presented in different ways. Finally, the number of predictions ($k$ in the table) shows that the accuracy dramatically increases in the case of multiple predictions. \begin{table} \centering \caption{\label{tab:tp_bests}Prediction accuracy for type sequences} \resizebox{0.85\textwidth}{!}{ \begin{tabular}{lcccc} \toprule \textbf{\#Predictions (k)} & \multicolumn{2}{c}{\textbf{Content}} & \multicolumn{2}{c}{\textbf{Content and Style}} \\ \midrule & Type $\rightarrow$ Type & TSP $\rightarrow$ Type & Type $\rightarrow$ Type & TSP $\rightarrow$ Type \\ \midrule \multicolumn{5}{c}{\textbf{All questions}} \\ \midrule 1 & 45.2 & 55.7 & 29.0 & 40.7 \\ 2 & 68.9 & 77.1 & 44.6 & 60.5 \\ 3 & 80.2 & 83.3 & 57.8 & 73.3 \\ 4 & 83.6 & 84.7 & 64.0 & 76.1 \\ 5 & 84.4 & 85.5 & 68.3 & 77.4 \\ \midrule \multicolumn{5}{c}{\textbf{Simple where-questions}} \\ \midrule 1 & 39.5 & 47.5 & 14.2 & 27.4 \\ 2 & 60.8 & 69.4 & 32.7 & 48.5 \\ 3 & 73.2 & 75.8 & 48.2 & 63.4 \\ 4 & 77.2 & 77.5 & 58.1 & 66.2 \\ 5 & 78.5 & 78.2 & 63.1 & 67.0 \\ \midrule \multicolumn{5}{c}{\textbf{Detailed where-questions}} \\ \midrule 1 & 59.1 & 67.3 & 47.1 & 59.6 \\ 2 & 80.4 & 88.7 & 61.3 & 76.3 \\ 3 & 84.0 & 91.2 & 65.9 & 84.4 \\ 4 & 88.0 & 91.3 & 73.6 & 86.4 \\ 5 & 88.5 & 92.1 & 75.6 & 87.1 \\ \bottomrule \end{tabular} } \end{table} Tables~\ref{tab:sp_bests} and~\ref{tab:pp_bests} show that, compared to type sequence prediction, the TSP sequences contribute less effectively to predicting the prominence and scale sequences -- i.e., they only slightly improve the prediction accuracy.
When considering multiple predictions, TSP sequences lead to worse results than prominence sequences or scale sequences alone. This can be explained by overfitting to specific patterns in the training dataset. Here, overfitting is observed because the schema of types is more than 20 times larger than the scale and prominence schemas. Hence, using type in prediction of scale or prominence leads to very detailed patterns that are not generalizable enough and decrease the prediction accuracy on unseen data. Finally, scale is the most predictable, and prominence is the least predictable generic class. Similar to the observations based on type prediction performances, DWQs are more predictable than SWQs based on scale and prominence. \begin{table} \centering \caption{\label{tab:sp_bests}Prediction accuracy for scale sequences} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{lllll} \toprule \textbf{\#Predictions (k)} & \multicolumn{2}{c}{\textbf{Content}} & \multicolumn{2}{c}{\textbf{Content and Style}} \\ \midrule & Scale $\rightarrow$ Scale & TSP $\rightarrow$ Scale & Scale $\rightarrow$ Scale& TSP $\rightarrow$ Scale\\ \midrule \multicolumn{5}{c}{\textbf{All questions}} \\ \midrule 1 & 55.0 & 56.7 & 38.2 & 42.2 \\ 2 & 79.4 & 79.2 & 61.0 & 62.8 \\ 3 & 91.6 & 86.1 & 79.0 & 76.0 \\ 4 & 96.3 & 88.7 & 92.0 & 81.9 \\ 5 & 98.0 & 89.3 & 96.0 & 83.5 \\ \midrule \multicolumn{5}{c}{\textbf{Simple where-questions}} \\ \midrule 1 & 48.5 & 49.5 & 20.4 & 28.6 \\ 2 & 79.6 & 71.8 & 49.1 & 49.8 \\ 3 & 89.9 & 78.3 & 71.9 & 67.0 \\ 4 & 95.6 & 81.8 & 90.3 & 74.0 \\ 5 & 97.5 & 82.6 & 94.9 & 75.5\\ \midrule \multicolumn{5}{c}{\textbf{Detailed where-questions}} \\ \midrule 1 & 69.6 & 68.2 & 59.8 & 60.6 \\ 2 & 88.4 & 89.6 & 78.4 & 77.3 \\ 3 & 95.8 & 93.3 & 88.6 & 87.1 \\ 4 & 97.5 & 95.2 & 94.8 & 92.1 \\ 5 & 98.6 & 95.2 & 97.0 & 92.7 \\ \bottomrule \end{tabular} } \end{table} \begin{table} \centering \caption{\label{tab:pp_bests}Prediction accuracy of prominence sequences} 
\resizebox{\textwidth}{!}{ \begin{tabular}{lllll} \toprule \textbf{\#Predictions (k)} & \multicolumn{2}{c}{\textbf{Content}} & \multicolumn{2}{c}{\textbf{Content and Style}} \\ \midrule & Prominence $\rightarrow$ Prominence & TSP $\rightarrow$ Prominence & Prominence $\rightarrow$ Prominence & TSP $\rightarrow$ Prominence \\ \midrule \multicolumn{5}{c}{\textbf{All questions}} \\ \midrule 1 & 50.8 & 53.0 & 19.9 & 30.7 \\ 2 & 74.1 & 73.4 & 39.2 & 49.1 \\ 3 & 85.0 & 81.9 & 61.6 & 66.4 \\ 4 & 92.1 & 86.7 & 79.2 & 77.1 \\ 5 & 96.1 & 88.6 & 89.2 & 81.8 \\ \midrule \multicolumn{5}{c}{\textbf{Simple where-questions}} \\ \midrule 1 & 45.4 & 45.6 & 14.3 & 19.5 \\ 2 & 75.4 & 69.4 & 34.9 & 39.1 \\ 3 & 84.7 & 77.0 & 54.5 & 56.9 \\ 4 & 91.3 & 80.5 & 73.7 & 68.2 \\ 5 & 95.6 & 81.9 & 87.9 & 72.7 \\ \midrule \multicolumn{5}{c}{\textbf{Detailed where-questions}} \\ \midrule 1 & 53.3 & 58.2 & 26.9 & 43.8 \\ 2 & 75.1 & 80.0 & 50.9 & 60.4 \\ 3 & 86.0 & 88.9 & 70.9 & 78.4 \\ 4 & 93.1 & 93.0 & 82.5 & 87.0 \\ 5 & 96.8 & 95.4 & 91.9 & 92.6 \\ \bottomrule \end{tabular} } \end{table} Table~\ref{tab:seqPredBaselineComparision} shows the improvement in accuracy of the best prediction performances compared to the two baselines -- i.e., the random generator and the most frequent pattern(s). The minimum improvement is +18.3\%, in the prediction of type sequences of answers using type sequences of questions in comparison to the most frequent pattern(s). This observation shows that strong patterns exist in the distributions of answers, and consequently the baseline method performs well in predicting type sequences of answers. The strongest improvement is +61.6\%, when comparing the best predictive performance of type sequences using type/scale/prominence sequences together against the random baseline. This is because of the large number of distinct types in the type schema, which leads to false predictions for the random baseline.
The accuracy improvements illustrate the strong relationship between the generic content of questions and the generic content of their answers. \begin{table} \centering \caption{\label{tab:seqPredBaselineComparision}Accuracy improvement using sequence prediction compared to the baselines} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lll} \toprule \textbf{Prediction Scenario} & \textbf{Random} & \textbf{Most Frequent Pattern(s)} \\ \midrule Type $\rightarrow$ Type & +48.9\% & +18.3\%\\ Scale $\rightarrow$ Scale & +58.1\% & +27.6\%\\ Prominence $\rightarrow$ Prominence & +39.2\% & +30.4\%\\ \midrule TSP $\rightarrow$ Type & +61.6\% & +31.0\%\\ TSP $\rightarrow$ Scale & +54.1\% & +23.6\%\\ TSP $\rightarrow$ Prominence & +42.3\% & +33.5\%\\ \midrule Overall & +50.7\% & +27.4\%\\ \bottomrule \end{tabular} } \end{table} To compare the sequence prediction methods, we use the difference between the prediction accuracy of each method and the best performance achieved by all methods for each prediction scenario. Table~\ref{tab:seqMethodsRMSE} shows the root mean square error (RMSE) for each sequence prediction method. The RMSE shows how well a method performs in comparison to the other methods: if the RMSE of a method is lower than the others', its prediction accuracy is higher. The prediction scenarios in Table~\ref{tab:seqMethodsRMSE} are simplified groups of the actual predictions. For example, the scale prediction scenario covers predicting scale sequences of answers using (1) the scale sequences of questions or (2) the type/scale/prominence sequences of questions. As shown in Table~\ref{tab:seqMethodsRMSE}, in all scenarios the \textbf{CPT} method performs \textit{best} and \textbf{TDAG} performs \textit{worst} based on the RMSE values. The results suggest that \textbf{CPT} is the best method to construct predictive models for the generic form of answers.
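The per-method RMSE against the best performance can be reproduced as in this sketch; the accuracies are illustrative placeholders, not the study's numbers:

```python
from math import sqrt

def rmse_to_best(accuracy_by_method):
    """For each method, the RMSE of its gap to the best accuracy achieved
    by any method in each prediction scenario."""
    bests = [max(column) for column in zip(*accuracy_by_method.values())]
    return {
        method: sqrt(sum((b - a) ** 2 for a, b in zip(accs, bests)) / len(bests))
        for method, accs in accuracy_by_method.items()
    }

# Placeholder accuracies for two scenarios (columns) per method.
scores = {"CPT": [55.0, 48.5], "DG": [50.0, 45.0], "TDAG": [40.0, 30.0]}
rmse = rmse_to_best(scores)
print(rmse["CPT"])  # 0.0 -- CPT is best in every scenario here
```

A method's RMSE is zero exactly when it achieves the best accuracy in every scenario of the group.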
\begin{table} \centering \caption{\label{tab:seqMethodsRMSE}RMSE of sequence prediction methods} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccc} \toprule \textbf{Prediction Scenario} & \textbf{LZ78} & \textbf{Mark1} & \textbf{TDAG} & \textbf{DG} & \textbf{AKOM} & \textbf{CPT} & \textbf{CPT+} \\ \midrule Type & 7.4\% & 15.2\% & \cellcolor{red!25}\textbf{21.8\%} & 13.4\% & 17.3\% & \cellcolor{green!25}\textbf{7.1\%} & 12.9\% \\ Scale & 9.8\% & 12.5\% & \cellcolor{red!25}\textbf{17.9\%} & 10.6\% & 14.3\% & \cellcolor{green!25}\textbf{5.7\%} & 11.8\% \\ Prominence & 8.7\% & 13.3\% & \cellcolor{red!25}\textbf{19.2\%} & 9.9\% & 15.2\% & \cellcolor{green!25}\textbf{4.9\%} & 11.5\% \\ \midrule Content & 8.9\% & 15.5\% & \cellcolor{red!25}\textbf{22.7\%} & 9.1\% & 17.4\% & \cellcolor{green!25}\textbf{1.9\%} & 10.2\% \\ Content and Style & 8.6\% & 11.7\% & \cellcolor{red!25}\textbf{16.1\%} & 13.2\% & 13.7\% & \cellcolor{green!25}\textbf{8.2\%} & 13.8\% \\ \bottomrule \end{tabular} } \end{table} \section{Demonstration: From Generic to Specific} Translating the generic encoding of an answer to its specific form (e.g., a type sequence to a toponym sequence) is the last phase in the proposed approach. Our approach to the generic-to-specific translation problem is grounded in the following assumption: \emph{places mentioned in the questions have relationships to places referred to in their answers, and these relations can be found in a knowledge base}. In addition, the specific form of the questions and the generic form of the answers are available through encoding and prediction, respectively. Based on this assumption and the available information, the specific form of the answer can be derived using a SPARQL query template (Query~\ref{lst:g_sparql}). While the \emph{structure} of a suitable knowledge base for this purpose has been studied before by \cite{Chen2018}, no such knowledge base is yet available with the definitions of type, scale and prominence as used in this study.
Hence, the translation is only demonstrated here using the introductory example\footnote{More examples are provided in the supplementary material (Section 4)}. We have used DBPedia and Geonames as sources to demonstrate how SPARQL queries can be used to find the specific forms of answers. Considering the information stored in DBPedia and Geonames, this demonstration is limited to the type sequences of the answers, because prominence and scale are not available in the place ontologies of these knowledge bases. Moreover, the type schema used in DBPedia is different from the Geonames type schema; consequently, in the following example, the mapping to the DBPedia type schema is done manually. \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL template}, label={lst:g_sparql}, basicstyle=\scriptsize \ttfamily,frame=single]
PREFIX [KNOWLEDGE BASE]
SELECT distinct ?question ?answer WHERE {
  VALUES ?question [SPECIFIC] .
  ?answer a [GENERIC] .
  {?question ?r ?answer} UNION {?answer ?r ?question} .
}
\end{lstlisting} \end{minipage} Referring to the introductory example, the where-question and its answer are modelled as follows: \begin{itemize} \item specific representation (question): [Putney Bridge]; \item TSP encoding (question): type sequence [BDG], scale sequence [4], prominence sequence [3]; \item TSP encoding (answer): type sequence [STM, ADM2], scale sequence [6, 6], prominence sequence [6, 7]; \item specific representation (answer): [River Thames, London] \end{itemize} The SPARQL queries for finding the specific forms of answers are presented in Queries \ref{lst:sparql3} and \ref{lst:sparql3_g} using the DBPedia and Geonames ontologies. The results of these queries are shown in Table \ref{tab:eg3}. Using DBPedia, the generic forms are correctly translated into River Thames and London. However, the generic-to-specific translation using Geonames is only partially successful. In Geonames, places are conceptualized as points, and only containment relations are supported.
This example shows that a point-based conceptualization of places is not sufficient for generic-to-specific translation, and that more diverse support of spatial relationships can be useful to find the correct specific forms. \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of the example (DBPedia)}, label={lst:sparql3}, basicstyle=\scriptsize \ttfamily,frame=single]
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT distinct ?q1 ?a1 ?a2 WHERE {
  VALUES ?q1 {<http://dbpedia.org/resource/Putney_Bridge>}
  ?a1 a dbo:PopulatedPlace .
  {?a1 ?r1 ?q1} UNION {?q1 ?r1 ?a1} .
  ?a2 a dbo:River .
  {?a2 ?r2 ?q1} UNION {?q1 ?r2 ?a2} .
}
\end{lstlisting} \end{minipage} \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of the example (Geonames)}, label={lst:sparql3_g}, basicstyle=\scriptsize \ttfamily,frame=single]
PREFIX gn: <http://www.geonames.org/ontology#>
SELECT distinct ?q1 ?a1 ?a2 WHERE {
  VALUES ?q1 {<http://sws.geonames.org/6619925/>}
  ?a1 gn:featureCode gn:A.ADM2 .
  {?a1 ?r1 ?q1} UNION {?q1 ?r1 ?a1} .
  ?a2 gn:featureCode gn:H.STM .
  {?a2 ?r2 ?q1} UNION {?q1 ?r2 ?a2} .
}
\end{lstlisting} \end{minipage} \begin{table}\centering \caption{\label{tab:eg3}SPARQL results to find the specific form of the answer} \begin{tabular}{llll}\toprule \textbf{Knowledge Base} & \textbf{Q1} & \textbf{A1} & \textbf{A2} \\ \midrule DBPedia & Putney Bridge & \cellcolor{green!25}\textbf{London} & \cellcolor{green!25}\textbf{River Thames} \\ \midrule Geonames & Putney Bridge & \cellcolor{green!25}\textbf{London} & \cellcolor{red!25}\textbf{--} \\ \bottomrule \end{tabular} \end{table} \section{Discussion} The results of the proposed method show how generic information can be used to characterize and imitate human answering behavior to generate templates for answering the questions.
While the results are limited to the human-search engine interaction, the proposed methodology (specific-generic translation) is flexibly defined to be applicable to other QA scenarios as well. We have used type, scale and prominence as generic classes to investigate the MS~MARCO dataset. We have compared their potential in describing human answering behavior and their performance in predicting the generic forms of the answers. As a result, two major observations are reported. First, while strong patterns for each generic class have been observed, we find that \textit{scale} is the most predictive class. This is because where-questions are a specific subset of spatial questions, and scale directly captures the inherent spatial aspect of places. Meanwhile, in the notions of type and prominence, other aspects of places contribute as well -- e.g., functional and physical aspects. In addition, scale is a generic class that captures hierarchical relationships between places, and previous studies show that these relationships are the basis for answering where-questions \citep{shanon1983answers}. Moreover, we have observed that type performs better than prominence in both characterizing and predicting human-generated answers. This observation is highly influenced by the proxies used to capture type and prominence. Second, when comparing SWQs and DWQs, our investigation shows that the generic templates for answering DWQs can be generated more accurately than those for SWQs. We find stronger rules and patterns in the answers to DWQs than in the answers to SWQs. This is because DWQs contain richer details, which help narrow down the list of possible relevant answers. To illustrate this point, two examples are provided: (1) \emph{Where in California is Beverly Hills?} and (2) \emph{Where is Beverly Hills?} In the first question, the list of possible relevant answers is narrowed down to \emph{Los Angeles County} because the inquirer already knows it is in \emph{California}.
For the latter, respondents are free to subjectively guess the state of the inquirer's geographic knowledge and provide answers such as \emph{Los Angeles County}, \emph{California}, and \emph{United States}. \textbf{Theoretical limitations:} The specific-generic approach is devised in a flexible manner to be usable in different GeoQA scenarios. However, utilizing the approach requires careful design (e.g., selecting an appropriate list of generic classes) to fit a particular scenario. The proposed TSP encoding is limited to the QA scenario of general Web search and may not be suitable for other QA scenarios such as human interaction with autonomous cars. In short, the theoretical limitations of this study are: \begin{enumerate} \item The generic-to-specific translation approach is only focused on where-questions, and other types of geographic questions are neglected. \item The proposed approach is focused only on the questions and their relationship with the answers, where no other contextual information about inquirers is available. \item The approach is designed with an exclusive focus on toponyms, while qualitative spatial relationships have an important role in answering where-questions. \item The additional impacts of qualitative spatial relationships (e.g., \emph{in southern part of}) as modifiers of scale are neglected in the TSP encoding. \end{enumerate} \textbf{Results limitations:} There are some limitations to the implementation presented in this study: \begin{enumerate} \item The biases of the MS MARCO dataset directly influence our results. The data are extracted from the Microsoft Bing search engine, and hence the results are necessarily biased towards the questions asked by users of this search engine. In addition, the sampling approach used when extracting MS MARCO questions from the MS Bing query logs may have a direct and unquantifiable impact on the generality of the results.
\item The results are influenced by the geographic biases and incompleteness of data in Geonames and OSM Nominatim. The bias and incompleteness of gazetteers are well-documented by \cite{ACHESON2017309}. \item The bias in the proxies that have been used to capture the TSP encoding also has an impact on the results. \end{enumerate} Despite these limitations, the identified patterns align well with everyday experience and provide a grounding for answering where-questions. \section{Conclusions} Generating responses with a similar quality to human-generated answers is a challenge for current search engines and QA systems. In particular, where-questions are hard to answer because the responses can sometimes be either vague or obvious to the inquirers. To avoid generating ambiguous or obvious responses or retrieving unnecessary information as a part of the answers, a proper set of anchor places must be identified to localize the place in question. The assumption that answers to where-questions can be found completely, without any further modification, inside a textual document or as a node or its properties in a knowledge base may not hold in general. Consequently, we introduced here an approach to generate templates to answer where-questions based on relevant pieces of information. The approach is based on the automatic extraction of patterns of generic geographic forms from human-generated QA. These patterns are captured in predictive models and are used to generate templates of answers similar to human-generated responses. Three generic classes (i.e., type, scale and prominence) are used to investigate the properties of the anchor places in human-generated answers. We have used questions and answers from MS MARCO v2.1, an extensive dataset constructed from questions submitted to a general-purpose search engine. Using distribution analysis and rule mining techniques, we have identified the characteristics and recurrent patterns in the questions and their answers (Hypotheses 1 and 2).
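The rule mining mentioned above can be illustrated with a minimal support/confidence computation over pairs of generic question and answer forms. The pairs below are invented for illustration only; the actual mining in the study is performed over the MS MARCO encodings.

```python
# Toy corpus of (question scale-sequence, answer scale-sequence) pairs.
# The data here are invented for illustration only.
pairs = [
    (("4",), ("6", "7")),
    (("4",), ("6", "7")),
    (("4",), ("7",)),
    (("6", "4"), ("5", "7")),
]

def rule_stats(antecedent, consequent, pairs):
    """Support and confidence of the rule: antecedent (question) -> consequent (answer)."""
    n = len(pairs)
    n_ante = sum(1 for q, _ in pairs if q == antecedent)
    n_both = sum(1 for q, a in pairs if q == antecedent and a == consequent)
    support = n_both / n
    confidence = n_both / n_ante if n_ante else 0.0
    return support, confidence

# e.g., "question at scale 4 -> answer at scales 6, 7"
support, confidence = rule_stats(("4",), ("6", "7"), pairs)
```

Rules with high support and confidence correspond to the strong answering patterns reported in the study.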
We have then applied sequence prediction methods to generate the generic forms for answers based on the generic forms of the corresponding questions (Hypothesis 3). We have also briefly sketched an approach for how such generic forms may help with the generation of appropriate answers, based on the information available in knowledge bases. The results show that the prediction of answer structures based on \textit{scale} is more precise, compared to predictions relying on \textit{type} and \textit{prominence}. The rules extracted based on scale have higher support and confidence than the rules extracted from type or prominence. We also observe how the type of questions (i.e., SWQs vs. DWQs) influences the strength of the extracted rules and leads to noticeable differences in prediction performance. Finally, we compared different sequence prediction methods and find that CPT \citep{gueniche2013compact} is the best-performing approach in all scenarios. However, the results of this study are limited to human interaction with a general-purpose search engine. Consequently, an important future direction of this research is to investigate other corpora of QA related to different scenarios -- e.g., human-human dialogue. We have also observed that the neglect of qualitative spatial relationships in our encoding and prediction mechanism may present a major theoretical shortcoming of the proposed specific-generic translation. Consequently, developing a more sophisticated encoding is necessary to extract a deeper understanding of human answering behavior for where-questions. Developing an automatic approach to decode generic forms of answers into specific representations (i.e., toponyms) is a necessary step to complete the process of the specific-generic translation approach. Available information in documents or knowledge bases can be used to derive the specific representations.
Another important future direction is to investigate how the proposed approach can be combined with current personalization methods, in order to adapt answers to specific inquirers and their context. Finally, the investigation of other types of where-questions (i.e., where-questions with generic references) and their human-generated answers using specific-generic translation remains future work. \section{Data and Codes Availability Statement} This study makes use of a third-party data source, MS MARCO v2.1 \citep{Nguyen2016ms}. The dataset is freely available under a proprietary agreement for non-commercial use\footnote{\url{http://www.msmarco.org/dataset.aspx}}. The computational workflow of this publication is implemented in Java and R. The implementation is available under the MIT License\footnote{\url{https://opensource.org/licenses/MIT}} and accessible on GitHub: \url{https://github.com/hamzeiehsan/Template-for-answering-where-questions}. \section*{Acknowledgments} The support by the Australian Research Council grant DP170100109 is acknowledged. We also thank the anonymous reviewers for their helpful comments that improved the quality of this paper. \bibliographystyle{apacite} \section{Implementation} \subsection{Scale Encoding} In scale encoding, the scale values of OpenStreetMap (OSM) are mapped into the proposed scale schema. The mapping is shown in Table \ref{tab:scaleMapping}.
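The OSM-to-schema mapping of Table \ref{tab:scaleMapping} can be written directly as a lookup function. The following Python sketch mirrors the table's boundary values; the function name is ours.

```python
def osm_to_scale_level(osm_value):
    """Map an OSM scale value to the proposed 8-level scale schema
    (level 1 = buildings and houses, ..., level 8 = oceans/continents)."""
    # (lower bound inclusive, upper bound exclusive, proposed level),
    # following Table tab:scaleMapping
    bands = [
        (27, float("inf"), 1),  # buildings and houses (value >= 27)
        (22, 27, 2),            # airports, roads and streets
        (18, 22, 3),            # suburbs and districts
        (16, 18, 4),            # cities, islands and villages
        (12, 16, 5),            # counties
        (8, 12, 6),             # states and regions
        (4, 8, 7),              # countries
        (float("-inf"), 4, 8),  # oceans, seas and continents (value < 4)
    ]
    for low, high, level in bands:
        if low <= osm_value < high:
            return level
    raise ValueError(f"unexpected OSM scale value: {osm_value}")
```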
\begin{table} \centering \caption{\label{tab:scaleMapping}Scale mapping from OSM values to proposed schema} \begin{tabular}{lll}\toprule Scale level (proposed schema) & OSM scale levels & Description\\ \midrule 1& value $\geq$ 27 & buildings and houses\\ 2& 22 $\leq$ value $<$ 27 & airports, roads and streets\\ 3& 18 $\leq$ value $<$ 22 & suburbs and districts\\ 4& 16 $\leq$ value $<$ 18 & cities, islands and villages\\ 5& 12 $\leq$ value $<$ 16 & counties\\ 6& 8 $\leq$ value $<$ 12 & states and regions\\ 7& 4 $\leq$ value $<$ 8 & countries\\ 8& value $<$ 4 & oceans, seas and continents\\ \bottomrule \end{tabular} \end{table} \subsection{Prominence Encoding} To derive prominence levels from OSM importance values, we have used `natural breaks' classification. First, all the importance values are collected; then, based on the histogram, the natural-breaks method calculates the boundaries of each prominence level. Figure \ref{fig:prominenceLevels} shows the histogram and the derived boundaries for the prominence levels. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{imgs/importance_to_prominence.png} \caption{Histogram of importance values and boundaries of prominence levels.} \label{fig:prominenceLevels} \end{figure} \section{Comparing Simple Where-Questions and Detailed Where-Questions} Figure~\ref{fig:tswqvsdwq} shows the distributions of place types in SWQs and DWQs. In DWQs, administrative divisions are almost twice as frequent as in SWQs. The extra detail (additional toponyms) included in the content of DWQs predominantly refers to administrative divisions. For example, \emph{Where in California is Disneyland?} can be viewed as an SWQ, \emph{Where is Disneyland?}, with extra details that show what the inquirer already knows about its coarse location (\emph{in California}). Hence, the differences between the two distributions illustrate where the additional detail is included.
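Returning to the prominence encoding: the natural-breaks classification described above can be approximated with a simple one-dimensional clustering. The sketch below is a k-means-style simplification (production code would use a Fisher-Jenks implementation from a GIS library), and the importance values are invented; the breaks shown are not those of Figure \ref{fig:prominenceLevels}.

```python
# Minimal 1-D clustering sketch of the `natural breaks' idea used for the
# prominence encoding. The data and parameters are illustrative only.
def natural_breaks_1d(values, k, iters=50):
    """Cluster 1-D values into k classes (k >= 2); return k-1 class boundaries."""
    values = sorted(values)
    # initialise centroids evenly across the sorted values
    centroids = [values[int(i * (len(values) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[j].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    centroids.sort()
    # boundaries midway between adjacent centroids
    return [(centroids[i] + centroids[i + 1]) / 2 for i in range(k - 1)]

importance = [0.1, 0.12, 0.15, 0.4, 0.45, 0.8, 0.85, 0.9]  # invented values
breaks = natural_breaks_1d(importance, k=3)
```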
\begin{figure} \centering \includegraphics[width=\textwidth]{imgs/type-swqVsdwq3.png} \caption{Distribution of types in the SWQs versus DWQs.} \label{fig:tswqvsdwq} \end{figure} Answers to SWQs (Figure~\ref{fig:sswqvsdwq}) have a linearly incremental distribution (except for the coarsest level of scale). The figure shows that the answers are generated with toponyms that belong to \textit{coarser levels} of scale compared to the places mentioned in the SWQs. The distribution of scale in the answers to DWQs shows a dominance of references to places of scale \textit{between} the two mean values of the questions' bi-modal distribution. Thus, people tend to generate responses at a level of scale greater than that of the place that is asked for, and lower than the scale of the places mentioned as additional details. This is because places at the same or a coarser level of scale than the places in questions would lead to obvious, and hence irrelevant, answers. The clear difference in answering SWQs and DWQs is shown in Table~\ref{tab:sswqvsdwq}, enabling the comparison of the levels of scale of places identified in the questions and their answers.
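The classification underlying Table \ref{tab:sswqvsdwq} compares each answer toponym's scale level against the question's levels. A minimal sketch follows; the helper function and the example levels are ours, and the exact tabulation in the study may differ in detail.

```python
def compare_levels(question_levels, answer_level):
    """Classify an answer toponym's scale level against the question's levels:
    'lower', 'equal/between' (within the question's min..max), or 'greater'."""
    lo, hi = min(question_levels), max(question_levels)
    if answer_level < lo:
        return "lower"
    if answer_level > hi:
        return "greater"
    return "equal/between"

# SWQ example: question at scale 4, answer toponyms at scales 6 and 7
swq = [compare_levels([4], a) for a in (6, 7)]
# DWQ example: question at scales 4 and 7, answer toponym at scale 5
dwq = compare_levels([4, 7], 5)
```

Aggregating these labels over all question-answer pairs yields the percentages reported in the table (dominantly "greater" for SWQs, dominantly "equal/between" for DWQs).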
\begin{figure} \centering \includegraphics[width=\textwidth]{imgs/fig-7.png} \caption{Distribution of the levels of scale in the SWQs versus DWQs.} \label{fig:sswqvsdwq} \end{figure} \begin{table} \centering \caption{\label{tab:sswqvsdwq}Comparison of levels of scale in the answers compared to their questions} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{llll} \toprule \textbf{Level of scale in answers} & \textbf{Lower than} & \textbf{Equal (Between)} & \textbf{Greater than} \\ \midrule \multicolumn{4}{c}{\textbf{SWQ}}\\ \midrule each value & 17.0\% & 13.1\% & \textbf{69.9\%}\\ min value & 25.8\% & 16.0\% & \textbf{58.2\%}\\ median value & 14.5\% & 14.9\% & \textbf{70.6\%}\\ max value & 10.6\% & 5.9\% & \textbf{83.5\%}\\ \midrule \multicolumn{4}{c}{\textbf{DWQ}}\\ \midrule each value & 7.2\% & \textbf{59.6\%} & 33.2\%\\ min value & 10.5\% & \textbf{77.7\%} & 11.8\%\\ median value & 6.7\% & \textbf{77.3\%} & 16.0\%\\ max value & 3.9\% & 40.8\% & \textbf{55.3\%}\\ \bottomrule \end{tabular} } \end{table} Figure~\ref{fig:pswqvsdwq} shows the distribution of prominence in SWQs, DWQs, and their answers. We observe that people ask about less-known places and answer referring to well-known ones. In DWQs, both less-known and well-known places are frequently observed. The well-known places in DWQs are the details presented in the content of the questions. In DWQs, the inquirer's state of knowledge can be estimated better, because the question contains what they already know and what they want to find out. Similar to the pattern observed based on scale, the prominence levels of places in the answers to DWQs are mostly between the levels of prominence of the toponyms mentioned in their corresponding questions (Table~\ref{tab:pswqvsdwq}).
\begin{figure} \centering \includegraphics[width=\textwidth]{imgs/fig-8.png} \caption{Distribution of prominence levels in the SWQs versus DWQs.} \label{fig:pswqvsdwq} \end{figure} \begin{table} \centering \caption{\label{tab:pswqvsdwq}Comparison of prominence levels in the answers compared to their questions} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{llll} \toprule \textbf{Prominence level in answers} & \textbf{Lower than} & \textbf{Equal (Between)} & \textbf{Greater than} \\ \midrule \multicolumn{4}{c}{\textbf{SWQ}}\\ \midrule each value & 19.8\% & 12.5\% & \textbf{67.7\%}\\ min value & 30.7\% & 14.7\% & \textbf{54.6\%}\\ median value & 19.4\% & 9.5\% & \textbf{71.1\%}\\ max value & 9.7\% & 7.9\% & \textbf{82.4\%}\\ \midrule \multicolumn{4}{c}{\textbf{DWQ}}\\ \midrule each value & 8.7\% & \textbf{68.4\%} & 22.9\% \\ min value & 13.0\% & \textbf{77.9\%} & 9.1\% \\ median value & 6.8\% & \textbf{79.7\%} & 13.5\% \\ max value & 3.5\% & \textbf{55.6\%} & 40.9\%\\ \bottomrule \end{tabular} } \end{table} \section{Sequence Distributions and Frequent Patterns} \subsection{Sequence Distributions} Sequence distributions capture not only the content of sequences, but also the way that the content is generated in the sequence (i.e., \emph{style}). The length of sequences can be further investigated using sequence distributions. Figure~\ref{fig:d_tsp_qvsa} shows the sequence distributions of type, scale and prominence in the questions and answers. Here, we visualize only the first five positions of answer-sequences. While this long-tailed distribution contains answers up to a length of 13, the vast majority (95.84\%) of answers have fewer than five toponyms. Most of the questions have only one or two toponyms, and the answers are mostly short as well (usually with fewer than three toponyms in their content). In Figure~\ref{fig:d_tsp_qvsa}, only the top ten types are visualized, with the rest grouped as \emph{OTHER}.
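The grouping of infrequent types under \emph{OTHER} amounts to relabelling all but the most frequent codes, as in the following sketch. The sequences here are toy data; the cutoff of ten follows the text.

```python
from collections import Counter

def group_rare_types(sequences, top_n=10):
    """Relabel all but the top_n most frequent type codes as 'OTHER'."""
    counts = Counter(code for seq in sequences for code in seq)
    keep = {code for code, _ in counts.most_common(top_n)}
    return [[code if code in keep else "OTHER" for code in seq]
            for seq in sequences]

# toy answer type-sequences (invented for illustration)
seqs = [["ADM1", "PCLI"], ["PCLI", "ADM1"], ["FRM"]]
grouped = group_rare_types(seqs, top_n=2)
```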
\begin{figure} \centering \includegraphics[width=\textwidth]{imgs/d_qVsa.png} \caption{Sequence distributions of the questions and answers} \label{fig:d_tsp_qvsa} \end{figure} Based on the distributions of type, scale and prominence in the questions, the details in DWQs are likely to be found as the second toponym. In Figure~\ref{fig:d_tsp_qvsa}, one can observe that these details usually belong to the type of ADM1\footnote{The codes for place types are described in Section~\ref{appendix:a}.} (first-order administrative divisions -- i.e., states) or PCLI (independent political entities -- i.e., countries). Similar patterns can be observed using scale and prominence -- i.e., the second elements in questions belong to well-known places which are at coarse levels of scale. Figure~\ref{fig:d_tsp_qvsa} shows the strong role of administrative places in the human-generated answers in this corpus. The answers are generated with mid-levels to coarse-levels of scale, and are presented starting with mid-levels followed by coarse levels in most cases. Similar patterns can be observed in terms of prominence; however, places at the lowest and highest prominence levels can also be found in the answers. When comparing the questions and answers based on type, we observe that while the questions ask about a diverse range of place types, the answers are mostly generated using the top ten types (i.e., the less frequent types labelled as \emph{OTHER} are almost twice as frequent in the questions compared to the answers). \subsection{Frequent Patterns} Frequent patterns describe the generalized patterns in human answering behavior. Stronger patterns of a generic class (e.g., scale) show the usefulness of the generic information in describing the human-generated answers. Figures~\ref{fig:t_sp}, \ref{fig:s_sp} and \ref{fig:p_sp} illustrate the top ten patterns based on type, scale and prominence, respectively. These patterns are shown for all answers, answers to SWQs, and answers to DWQs.
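The coverage of answers by the top patterns can be computed as follows. This sketch counts whole sequences as patterns for simplicity (the mining in the study may use subsequence patterns), and the toy data are invented.

```python
from collections import Counter

def top_patterns(sequences, n=10):
    """Return the n most frequent whole-sequence patterns and the share of
    answers they cover."""
    counts = Counter(tuple(seq) for seq in sequences)
    top = counts.most_common(n)
    coverage = sum(c for _, c in top) / len(sequences)
    return top, coverage

# toy scale-sequences of answers (invented for illustration)
answers = [["6", "7"]] * 5 + [["7"]] * 3 + [["5", "6", "7"]] * 2
top, coverage = top_patterns(answers, n=2)
```

High coverage by few patterns, as reported for scale, indicates that the generic class describes human-generated answers well.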
Figure~\ref{fig:t_sp} shows that the top ten patterns in the answers are mostly generated with first- and second-order administrative divisions and independent political entities. In the generic class \emph{type}, almost 60\% of all answers are described through only ten sequence patterns. Ten patterns also cover 70\% and 50\% of the answers to DWQs and SWQs, respectively. The difference shows that answers to DWQs are more describable based on place type than the responses to SWQs. This is because in DWQs, some details are provided and people can therefore infer the inquirers' information needs better. In SWQs, the lack of context leads to more ambiguity and likely requires more subjective judgments in responses. \begin{figure} \centering \includegraphics[width=\textwidth]{imgs/type-distr.png} \caption{Top ten frequent patterns in type-sequences of the answers} \label{fig:t_sp} \end{figure} Figure~\ref{fig:s_sp} illustrates the top ten patterns based on scale. The coarse levels of scale constitute most of the frequent patterns. The share of answers covered by these patterns shows that scale is an important piece of generic information that characterizes human-generated answers well. The comparison of the answers to DWQs and SWQs shows results similar to those for types. More than 80\% of the answers to DWQs can be described using ten patterns based on scale. The style of these patterns shows that the answers are hierarchically presented (starting from finer levels followed by coarser levels of scale). \begin{figure} \centering \includegraphics[width=\textwidth]{imgs/scale-distr.png} \caption{Top ten frequent patterns in scale-sequences of the answers} \label{fig:s_sp} \end{figure} Figure~\ref{fig:p_sp} shows the ten most frequent patterns based on prominence levels. Most of the patterns are constructed with high levels of prominence. Similar to scale, a strong pattern in the style of answers can be observed -- i.e., starting with less-known places followed by well-known ones.
The patterns based on prominence have, however, less support in the data compared to the patterns derived from scale or type sequences. This observation can be explained by prominence being more tightly related to the specific context of the questions, compared to scale or type. For example, finding a highly prominent place reference may not always be possible, as prominent places are not uniformly distributed in the world. \begin{figure} \centering \includegraphics[width=\textwidth]{imgs/prominence-distr.png} \caption{Top ten frequent patterns in prominence-sequences of the answers} \label{fig:p_sp} \end{figure} \section{Demonstration Examples} In this section, four examples are provided to demonstrate how generic forms of answers (e.g., type-sequences) can be translated into their specific forms (toponym-sequences). \textbf{Example 1:} \emph{Where is Nagasaki?} \emph{In Japan}. The specific form and TSP encoding of this question-answer pair are shown below: \begin{itemize} \item specific representation (question): [Nagasaki]; \item TSP encoding (question): type-sequence [ADM1], scale-sequence [6], prominence-sequence [4]; \item TSP encoding (answer): type-sequence [PCLI], scale-sequence [7], prominence-sequence [7]; \item specific representation (answer): [Japan] \end{itemize} Queries \ref{lst:sparql1} and \ref{lst:sparql1_g} show the SPARQL queries to derive the specific form of the answer using the information available in DBPedia and Geonames, respectively. The result of both queries is the uniform resource identifier (URI) of Japan, which is the correct specific representation of the answer. The query results are shown in Table \ref{tab:eg1}. \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 1 (DBPedia)}, label={lst:sparql1}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX dbo: <http://dbpedia.org/ontology/> SELECT distinct ?q1 ?a1 WHERE { VALUES ?q1 {<http://dbpedia.org/resource/Nagasaki>} ?a1 a dbo:Country .
?q1 ?r ?a1 } \end{lstlisting} \end{minipage} \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 1 (Geonames)}, label={lst:sparql1_g}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX gn: <http://www.geonames.org/ontology#> SELECT distinct ?q1 ?a1 WHERE { VALUES ?q1 {<http://sws.geonames.org/1856156/>} . ?a1 gn:featureCode gn:A.PCLI . ?q1 ?r ?a1 } \end{lstlisting} \end{minipage} \begin{table}\centering \caption{\label{tab:eg1}SPARQL results to Example 1} \begin{tabular}{lll}\toprule \textbf{Knowledge Base} & \textbf{Q1} & \textbf{A1} \\ \midrule DBPedia & Nagasaki & \cellcolor{green!25}\textbf{Japan} \\ \midrule Geonames & Nagasaki & \cellcolor{green!25}\textbf{Japan} \\ \bottomrule \end{tabular} \end{table} \textbf{Example 2:} \emph{Where in Illinois is Cahokia?} \emph{In St. Clair County, Illinois, United States}: \begin{itemize} \item specific representation (question): [Illinois, Cahokia]; \item TSP encoding (question): type-sequence [ADM1, PPL], scale-sequence [6, 4], prominence-sequence [6, 3]; \item TSP encoding (answer): type-sequence [ADM2, PCLI], scale-sequence [5, 7], prominence-sequence [3, 7]; \item specific representation (answer): [St. Clair County, United States] \end{itemize} Based on the SPARQL template, two queries (Queries~\ref{lst:sparql2} and~\ref{lst:sparql2_g}) are used to translate the generic form of the answer to a specific form using DBPedia and Geonames. The results of the queries are shown in Table~\ref{tab:eg2}. Using DBPedia, the correct answer is among the three retrieved responses \{[Collinsville, United States], [Illinois, United States], [St. Clair County, United States]\}. The second response in the results can be easily filtered due to its repetitive content (i.e., Illinois) given the content of the question. However, the first one, which is not mentioned in the human-generated answer, cannot be excluded using only the type-sequence of the answer.
Consequently, by considering the predicted scale-sequence of the answer, the correct response [St. Clair County, United States] can be derived (Table~\ref{tab:eg2}). \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 2 (DBPedia)}, label={lst:sparql2}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX dbo: <http://dbpedia.org/ontology/> SELECT distinct ?q1 ?q2 ?a1 ?a2 WHERE { VALUES ?q1 {<http://dbpedia.org/resource/Cahokia>} VALUES ?q2 {<http://dbpedia.org/resource/Illinois>} ?a1 a dbo:PopulatedPlace . ?q1 ?r ?a1 . ?a1 ?r2 ?q2 . ?a2 a dbo:Country . ?q2 ?r3 ?a2 . } \end{lstlisting} \end{minipage} \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 2 (Geonames)}, label={lst:sparql2_g}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX gn: <http://www.geonames.org/ontology#> SELECT distinct ?q1 ?q2 ?a1 ?a2 WHERE { VALUES ?q1 {<http://sws.geonames.org/4234969/>} . VALUES ?q2 {<http://sws.geonames.org/4896861/>} . ?a1 gn:featureCode gn:A.ADM2 . ?q1 ?r ?a1 . ?a1 ?r2 ?q2 . ?a2 gn:featureCode gn:A.PCLI . ?q2 ?r3 ?a2 . } \end{lstlisting} \end{minipage} \begin{table}\centering \caption{\label{tab:eg2}SPARQL results to Example 2} \resizebox{0.7\textwidth}{!}{ \begin{tabular}{lllll} \toprule \textbf{Knowledge Base} & \textbf{Q1} & \textbf{Q2} & \textbf{A1} & \textbf{A2} \\ \midrule DBPedia & Cahokia & Illinois & Collinsville & \cellcolor{green!25}\textbf{United States} \\ & & & Illinois & \cellcolor{green!25}\textbf{United States} \\ & & & \cellcolor{green!25}\textbf{St. Clair County} & \cellcolor{green!25}\textbf{United States} \\\midrule Geonames & Cahokia & Illinois & \cellcolor{green!25}\textbf{St.
Clair County} & \cellcolor{green!25}\textbf{United States}\\ \bottomrule \end{tabular} } \end{table} \textbf{Example 3:} \emph{Where is the Danube River, Europe?} \emph{It originates in Germany's Black Forest, and flows in a southeasterly direction through central and eastern Europe to the Black Sea}: \begin{itemize} \item specific representation (question): [Danube River, Europe]; \item TSP encoding (question): type-sequence [STM, CONT], scale-sequence [4, 8], prominence-sequence [6, 7]; \item TSP encoding (answer): type-sequence [PCLI, MTS, SEA], scale-sequence [7, 3, 7], prominence-sequence [7, 4, 5]; \item specific representation (answer): [Germany, Black Forest, Black Sea] \end{itemize} The SPARQL queries for finding the specific forms of the answers of Example 3 are presented in Queries \ref{lst:sparql3} and \ref{lst:sparql3_g}. The results of these queries are shown in Table \ref{tab:eg3}. Using DBPedia, the results for the mountain range and the sea that are related to the Danube are the Black Forest and the Black Sea, while the country is not unique and ten countries (including the right one, Germany, based on the human-generated answer) are found in relation to the river. Here, we have only used the type-sequence of the answer; using prominence, we could limit the country list to Germany (the only country in Level 7 of prominence in the results). Interestingly, the results using Geonames are incorrect. In Geonames, the river is stored as a coordinate point, which belongs to one country. Moreover, the relationships of the Black Forest (\emph{originates}) and the Black Sea (\emph{flows to}) to the river are not stored in Geonames due to its limited list of supported spatial relations -- i.e., Geonames supports only containment.
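The prominence-based filtering suggested above (limiting the candidate countries to the only one at prominence level 7) can be sketched as follows. The candidate prominence levels below are invented for illustration; only the level-7 candidate corresponds to the human-generated answer.

```python
def filter_by_prominence(candidates, predicted_level):
    """Keep only candidate toponyms whose prominence level matches the
    predicted generic form of the answer."""
    return [name for name, level in candidates if level == predicted_level]

# (name, prominence level) pairs for some countries returned by the DBPedia
# query; the levels are invented for this illustration
countries = [("Moldova", 4), ("Ukraine", 6), ("Germany", 7), ("Romania", 5)]
result = filter_by_prominence(countries, predicted_level=7)
```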
\begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 3 (DBPedia)}, label={lst:sparql3}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX dbo: <http://dbpedia.org/ontology/> SELECT distinct ?q1 ?a1 ?a2 ?a3 WHERE { VALUES ?q1 {<http://dbpedia.org/resource/Danube>} ?a1 a dbo:Country . ?q1 ?r1 ?a1 . ?a2 a dbo:MountainRange . ?q1 ?r2 ?a2 . ?a3 a dbo:Sea . ?a3 ?r3 ?q1 } \end{lstlisting} \end{minipage} \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 3 (Geonames)}, label={lst:sparql3_g}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX gn: <http://www.geonames.org/ontology#> SELECT distinct ?q1 ?a1 ?a2 ?a3 WHERE { VALUES ?q1 {<http://sws.geonames.org/791630/>} ?a1 gn:featureCode gn:A.PCLI . ?q1 ?r1 ?a1 . ?a2 gn:featureCode gn:T.MTS . ?q1 ?r2 ?a2 . ?a3 gn:featureCode gn:H.SEA . ?q1 ?r3 ?a3 } \end{lstlisting} \end{minipage} \begin{table}\centering \caption{\label{tab:eg3}SPARQL results to Example 3} \begin{tabular}{lllll}\toprule \textbf{Knowledge Base} & \textbf{Q1} & \textbf{A1} & \textbf{A2} & \textbf{A3} \\ \midrule DBPedia & Danube & Moldova & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Ukraine & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & \cellcolor{green!25}\textbf{Germany} & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Romania & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Croatia & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Serbia & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Austria & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Bulgaria & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Hungary & 
\cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ & & Slovakia & \cellcolor{green!25}\textbf{Black Forest} & \cellcolor{green!25}\textbf{Black Sea} \\ \midrule Geonames & Danube River & Romania & \cellcolor{red!25}\textbf{--} & \cellcolor{red!25}\textbf{--} \\ \bottomrule \end{tabular} \end{table} \textbf{Example 4:} \emph{Where is Golden Gate Bridge?} \emph{It is located between San Francisco and Marin County, in the U.S. state of California}: \begin{itemize} \item specific representation (question): [Golden Gate Bridge]; \item TSP encoding (question): type-sequence [BDG], scale-sequence [5], prominence-sequence [4]; \item TSP encoding (answer): type-sequence [ADM2, ADM2, ADM1], scale-sequence [5, 5, 7], prominence-sequence [4, 4, 7]; \item specific representation (answer): [San Francisco, Marin County, U.S. state of California] \end{itemize} \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 4 (DBPedia)}, label={lst:sparql4}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX dbo: <http://dbpedia.org/ontology/> SELECT distinct ?q1 ?a1 ?a2 ?a3 WHERE { VALUES ?q1 {<http://dbpedia.org/resource/Golden_Gate_Bridge>} ?a1 a dbo:PopulatedPlace . {?a1 ?r1 ?q1} UNION {?q1 ?r1 ?a1} . ?a2 a dbo:PopulatedPlace . {?a2 ?r2 ?q1} UNION {?q1 ?r2 ?a2} . ?a3 a dbo:PopulatedPlace . {?a3 ?r3 ?q1} UNION {?q1 ?r3 ?a3} . } \end{lstlisting} \end{minipage} \begin{minipage}{0.9\linewidth} \begin{lstlisting}[captionpos=b, caption={SPARQL query of Example 4 (Geonames)}, label={lst:sparql4_g}, basicstyle=\scriptsize \ttfamily,frame=single] PREFIX gn: <http://www.geonames.org/ontology#> SELECT distinct ?q1 ?a1 ?a2 ?a3 WHERE { VALUES ?q1 {<http://sws.geonames.org/5352844/>} ?a1 gn:featureCode gn:A.ADM2 . {?a1 ?r1 ?q1} UNION {?q1 ?r1 ?a1} . ?a2 gn:featureCode gn:A.ADM2 . {?a2 ?r2 ?q1} UNION {?q1 ?r2 ?a2} . ?a3 gn:featureCode gn:A.ADM1 . {?a3 ?r3 ?q1} UNION {?q1 ?r3 ?a3} .
} \end{lstlisting} \end{minipage} \begin{table}\centering \caption{\label{tab:eg4}SPARQL results to Example 4} \resizebox{0.7\textwidth}{!}{ \begin{tabular}{lllll} \toprule \textbf{Knowledge Base} & \textbf{Q1} & \textbf{A1} & \textbf{A2} & \textbf{A3} \\ \midrule DBPedia & Golden Gate Bridge & \cellcolor{green!25}\textbf{San Francisco} & \cellcolor{green!25}\textbf{Marin County} & \cellcolor{green!25}\textbf{California} \\ \midrule Geonames & Golden Gate Bridge & \cellcolor{green!25}\textbf{San Francisco} & \cellcolor{red!25}\textbf{--} & \cellcolor{green!25}\textbf{California}\\ \bottomrule \end{tabular} } \end{table} \section{Feature Codes} \label{appendix:a} Table~\ref{tab:feature_codes} shows the types that are mentioned in this paper. The complete list can be found on the Geonames website. \begin{table}[h!] \caption{\label{tab:feature_codes}Feature codes used in the paper (extracted from Geonames documentation)} \begin{tabular}{lll} \toprule \textbf{Code} & \textbf{Description} & \textbf{Example}\\ \midrule ADM1 & first-order administrative division (states and provinces) & Oklahoma \\ ADM2 & second-order administrative division (counties) & Brevard County \\ ADM3 & third-order administrative division (cities) & City of Alhambra \\ ADM4 & fourth-order administrative division (towns) & Newburgh \\ AREA & a part of land without homogeneous character/boundaries & Theresienwiese \\ BDG & a bridge & Putney Bridge \\ FRM & a part of land dedicated to agricultural purposes & Branksome \\ HTL & hotels & The Carriage House \\ MT & mountains & Eagles Nest \\ PCLI & independent political entity & Paraguay \\ PPL & diverse types of populated places (e.g., cities and villages) & El Granada \\ PPLA2 & seat of a second-order administrative division & Lake City\\ PRK & parks and recreational places & Franklin Square Park \\ RGN & an area with particular cultural character & Central Africa \\ SCH & schools and universities & Stuyvesant High School \\ STM & streams &
Withlacoochee River \\ \bottomrule \end{tabular} \end{table} \end{document}
\section{Introduction and main results.} The main object of study of this note is a class of stable-like processes associated with the following operator \begin{equation}\label{ope:L} \mathcal{L} f(x)=\int[ f(x+h)-f(x)-1_{(|h|\leq 1)}h\cdot {\nabla} f(x)]\frac{n(x,h)}{|h|^{d+\alpha}}\,\d h, \end{equation} where, when $\alpha\in (0,1)$, the term $1_{(|h|\leq 1)}h\cdot {\nabla} f(x)$ is not present. When $\alpha \in [1,2)$, this term is needed for the integral to converge. We discard it when $\alpha\in (0,1)$ because otherwise the process would be dominated by the drift. Here and throughout this paper, we associate a process with a given operator through the martingale problem of Stroock and Varadhan \cite{ST}. This class of integral operators has received quite a bit of attention lately. For example, see \cite{BT}, where the uniqueness of the solution to the martingale problem for the above operator was considered. In \cite{BL}, a Harnack inequality for harmonic functions of this operator was proved. For very recent results involving this operator, see \cite{Ba1} and the references therein for a sample of relevant papers. Here the corresponding process is a purely discontinuous one. Discontinuous processes are proving to be very useful in physics and in finance. The book \cite{CT} provides a lot of information about financial modelling using jump processes. It is therefore important to study properties of non-local operators like the ones defined by \eqref{ope:L}. We will need some notation before we can state our main results. The ball of radius $R$ and center $x$ will be denoted by $B(x, R)$, and the stopping time $\tau_D$ is defined by \begin{equation*} \tau_D=\inf \{t>0; X_t\notin D \}. \end{equation*} We will also make the following assumptions: \begin{assumption}\label{ass1} \begin{enumerate}[(a)] \item There exists a positive constant $\kappa$ such that \begin{equation*} n(x,h)\geq \kappa \qquad \forall \, x,\,h \in \mathbf{R}^d. 
\end{equation*} \item There exist positive constants $K$ and $\beta \in (0,1)$ such that \begin{equation*} |n(x,h)-1|\leq K(1\wedge|h|^\beta) \qquad \forall \, x,\,h \in \mathbf{R}^d. \end{equation*} \end{enumerate} \end{assumption} The following is the first main result of the paper. \begin{theorem}\label{Theorem 1} Let $x_0\in \mathbf{R}^d$ and $R\in (0,\frac{1}{2}]$. Suppose that $(\P^x, X_t)$ solves the martingale problem associated with $\mathcal{L}$ started at $x\in B(x_0, R)$. Then under Assumptions \ref{ass1}, there exists a positive number $p$ such that for any measurable function $f\in L^p(\mathbf{R}^d)$, the following holds \begin{equation}\label{ineq:main} \left|\E^x\int_0^{t\wedge \tau_{B(x_0, R)}}f(X_s)\,\d s \right|\leq c(R)\|f\|_{L^p(B(x_0, R))}, \end{equation} where $c(R)\rightarrow 0$ as $R\rightarrow 0$ and $p$ satisfies $d/p\leq \min\{\alpha, \beta\}$. \end{theorem} The above inequality was first proved by Krylov in \cite{K} for a $d$-dimensional diffusion process. Kurenok proved a variant of this inequality for stable processes with index $\alpha \in(1,2)$. See \cite{Ku} for more information. In \cite{LP}, Lepeltier and Marchal derived such an inequality for diffusions with jumps. They considered processes associated with an integro-differential operator which is the sum of a uniformly elliptic operator and a non-local part. Their proof is an adaptation of Krylov's original proof. Here our operator is a purely non-local one, so we need to adopt a different strategy; we use a perturbation method where we compare the operator $\mathcal{L}$ with that of the generator of a stable process. This inequality is very useful in the study of stochastic differential equations and their applications to control theory, filtering problems and so on. Here, as an application of this inequality, we obtain the existence of a solution to the martingale problem associated with the operator $\mathcal{L}$. 
The novelty of this result is that no continuity of the jump kernel is required. \begin{theorem}\label{Theorem 2} Suppose Assumptions \ref{ass1} hold. Then for every $x\in \mathbf{R}^d$, there exists a solution to the martingale problem for $\mathcal{L}$ starting at $x$. \end{theorem} We end this introduction with some remarks concerning Assumptions \ref{ass1}. That $n(x,h)$ is bounded below and above is the analog of strict ellipticity of an elliptic operator in non-divergence form. For small $|h|$, Assumption \ref{ass1} says that our jump kernel is close to that of a stable process. Such a condition seems to be required for the inequality \eqref{ineq:main} to hold. In fact, in \cite{Ba3} Bass has constructed a process which spends a positive amount of time in a set of measure zero. This violates inequality \eqref{ineq:main}, and a close examination of the process constructed in \cite{Ba3} reveals that its jump kernel does not satisfy the second part of Assumption \ref{ass1}. The paper is structured as follows. In Section 2, we derive some estimates which will be needed in the proofs of Theorems \ref{Theorem 1} and \ref{Theorem 2}, which we prove in Sections 3 and 4, respectively. \section{Some estimates.} As mentioned in the introduction, we will use a perturbation method to prove our result. To this end, we define the following operator \begin{equation}\label{s-operator} \mathcal{L}_0 f(x)=\int[ f(x+h)-f(x)-1_{(|h|\leq 1)}h\cdot {\nabla} f(x)]\frac{1}{|h|^{d+\alpha}}\, \d h, \end{equation} where the term $1_{(|h|\leq 1)}h\cdot {\nabla} f(x)$ is not present when $\alpha \in (0,1)$. We now let $p_t(x,y)$ denote the transition density function of the process associated with the operator $\mathcal{L}_0$ and define \begin{equation}\label{resolv} r^\lambda(x)=\int_0^\infty e^{-\lambda t}p_{t}(0,x)\,\d t, \end{equation} where $\lambda$ is a strictly positive constant. We will need upper bounds on $r^\lambda$ and its derivatives. 
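For orientation, we record the standard fact that the comparison operator $\mathcal{L}_0$ of \eqref{s-operator} is, up to a positive constant $c_{d,\alpha}$ depending only on $d$ and $\alpha$, the fractional Laplacian. Indeed, by symmetry of the kernel $|h|^{-d-\alpha}$, for Schwartz functions $f$ one has, on the Fourier side,

```latex
\begin{equation*}
\widehat{\mathcal{L}_0 f}(\xi)=-c_{d,\alpha}\,|\xi|^{\alpha}\,\widehat{f}(\xi),
\qquad \text{that is,}\qquad
\mathcal{L}_0=-c_{d,\alpha}(-\Delta)^{\alpha/2},
\end{equation*}
```

so the process associated with $\mathcal{L}_0$ is the rotationally symmetric $\alpha$-stable process, and $p_t(x,y)$ above is its transition density.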
But first let us make the following definitions, which will make the subsequent proofs easier to read. \begin{eqnarray*} F_x(y,h)&:=&|r^\lambda(x+h-y)-r^\lambda(x-y)-1_{(|h|\leq 1)}h\cdot {\nabla} r^\lambda(x-y)|,\\ G_x(y,h)&:=&\frac{|n(x,h)-1||f(y)|}{|h|^{d+\alpha}},\\ I_{\alpha,x}(f)&:=&\int_{|x-y|\leq 1}\frac{|f(y)|}{|x-y|^{d-\alpha}}\,\d y,\\ J_{\alpha,x}(f)&:=&\int_{|x-y|>1}\frac{|f(y)|}{|x-y|^{d+\alpha}}\,\d y, \end{eqnarray*} where $f$ is a measurable function. In the above, the term $1_{(|h|\leq 1)}h\cdot{\nabla} r^\lambda(x-y)$ is not present whenever $\alpha \in(0,1)$. \begin{proposition}\label{prop:Tang} There exists a positive constant $c\in (0,\infty)$ such that the following hold: \begin{enumerate}[(a)] \item $r^\lambda(x)\leq c(\frac{1}{\lambda}|x|^{-2\alpha}\wedge 1)|x|^{-d+\alpha}$, \item $\sum_i\left|\frac{\partial r^\lambda(x)}{\partial x_i}\right|\leq c(\frac{1}{\lambda}|x|^{-2\alpha}\wedge 1)|x|^{-d+\alpha-1}$, \item $\sum_{i,j}\left|\frac{\partial^2 r^\lambda(x)}{\partial x_i\partial x_j}\right|\leq c(\frac{1}{\lambda}|x|^{-2\alpha}\wedge 1)|x|^{-d+\alpha-2}$. \end{enumerate} \end{proposition} We omit the proof of the above proposition since it can be found in \cite{T}. The following estimates will be important for our perturbation argument to work. \begin{proposition}\label{prop:diff} There exists a positive constant $c(\alpha, \beta, d)$ such that \begin{enumerate}[(a)] \item $\begin{displaystyle}\int_{|x-y|\leq 1}\int F_x(y,h)G_x(y,h)\,\d h\,\d y\leq c(I_{\beta,x}(f)+I_{\alpha, x}(f)),\end{displaystyle}$ \item $\begin{displaystyle}\int_{|x-y|>1}\int F_x(y,h)G_x(y,h)\,\d h\,\d y\leq cJ_{2\alpha,x}(f).\end{displaystyle}$ \end{enumerate} \end{proposition} \begin{proof} We prove only the first part of the proposition and focus on the case $\alpha \in [1,2)$. The remaining part of the proof is similar. We begin by splitting the integral as follows. 
\begin{equation*} \begin{split} \int_{|x-y|\leq 1}&\int F_x(y,h)G_x(y,h)\,\d h\,\d y\\ &=\int_{|x-y|\leq 1}\int_AF_x(y,h)G_x(y,h)\d h\,\d y\\ &+\int_{|x-y|\leq 1}\int_BF_x(y,h)G_x(y,h)\d h\,\d y\\ &+\int_{|x-y|\leq 1}\int_CF_x(y,h)G_x(y,h)\d h\,\d y\\ &=I_1+I_2+I_3, \end{split} \end{equation*} where \begin{eqnarray*} A&:=&\{h\in \mathbf{R}^d; |h|\leq |x-y|/2\}\\ B&:=&\{h\in \mathbf{R}^d;|x-y|/2< |h|\leq 3|x-y|/2\}\\ C&:=&\{h\in \mathbf{R}^d; |h|> 3|x-y|/2\}. \end{eqnarray*} We consider $I_1$ first and use Proposition \ref{prop:Tang} and Assumption \ref{ass1}(b) to write \begin{eqnarray*} I_1&\leq&c_1\int_{|x-y|\leq 1}\int_A \Big(\sup_{z\in B(x-y,\,|x-y|/2)}\sum_{i,j}\Big|\frac{\partial^2 r^\lambda(z)}{\partial z_i\partial z_j}\Big|\Big)\frac{|f(y)|}{|h|^{d+\alpha-\beta-2}}\,\d h\, \d y\\ &\leq&c_2\int_{|x-y|\leq 1}\frac{|f(y)|}{|x-y|^{d-\beta}}\,\d y\\ &=&c_2I_{\beta,x}(f). \end{eqnarray*} To deal with $I_2$, we consider the corresponding region of integration and use Proposition \ref{prop:Tang} to write \begin{eqnarray*} F_x(y,h)&\leq&{c_3}[\frac{1}{|x+h-y|^{d-\alpha}}+\frac{1}{|x-y|^{d-\alpha}}+\frac{|h|}{|x-y|^{d-\alpha+1}}]\\ &\leq&c_4[\frac{1}{|x-y|^{d-\alpha}}+\frac{1}{|x+h-y|^{d-\alpha}}]. \end{eqnarray*} We now use Assumption \ref{ass1}(b) together with the above to obtain \begin{equation*} \begin{split} \int_{|x-y|\leq 1}&\int_B F_x(y,h)G_x(y,h)\,\d h\,\d y\\ &\leq c_5[\int_{|x-y|\leq 1}\int_B\frac{|f(y)|}{|x-y|^{d-\alpha}}\frac{1}{|h|^{d+\alpha-\beta}}\,\d h\,\d y\\ &+\int_{|x-y|\leq 1}\int_B\frac{|f(y)|}{|x+h-y|^{d-\alpha}}\frac{1}{|x-y|^{d+\alpha-\beta}}\,\d h\,\d y]\\ &\leq c_6\int_{|x-y|\leq 1}\frac{|f(y)|}{|x-y|^{d-\beta}}\,\d y\\ &=c_6 I_{\beta,x}(f). \end{split} \end{equation*} Finally, we look at $I_3$ and its corresponding region of integration to obtain \begin{eqnarray*} F_x(y,h)&\leq&c_7[\frac{1}{|x+h-y|^{d-\alpha}}+\frac{1}{|x-y|^{d-\alpha}}+\frac{|h|1_{(|h|\leq 1)}}{|x-y|^{d-\alpha+1}}]\\ &\leq&c_8[\frac{1}{|x-y|^{d-\alpha}}+\frac{|h|1_{(|h|\leq 1)}}{|x-y|^{d-\alpha+1}}]. 
\end{eqnarray*} The above together with the second part of Assumption \ref{ass1} yields \begin{eqnarray*} I_3&\leq&c_9\int_{|x-y|\leq1}\int_{3|x-y|/2}^1\frac{|f(y)|}{|x-y|^{d-\alpha}}\frac{1}{|h|^{d+\alpha-\beta}}\,\d h\,\d y\\ &+&c_{10}\int_{|x-y|\leq1}\int_{3|x-y|/2}^1\frac{|f(y)|}{|x-y|^{d-\alpha+1}}\frac{1}{|h|^{d+\alpha-\beta-1}}\,\d h\,\d y\\ &+&c_{11}\int_{|x-y|\leq 1}\int_1^\infty\frac{|f(y)|}{|x-y|^{d-\alpha}}\frac{1}{|h|^{d+\alpha}}\d h\, \d y\\ &\leq&c_{12}[I_{\alpha,x}(f)+I_{\beta,x}(f)]. \end{eqnarray*} We now combine the above inequalities to obtain the result. \end{proof} The first part of the following proposition is the crucial estimate for the proof of Theorem \ref{Theorem 1}. \begin{proposition}\label{prop:main} Let \begin{equation*} u(x)=\int_{\mathbf{R}^d}r^\lambda(x-y) f(y)\,\d y, \end{equation*} where $f$ is a measurable function. Then, there exists a positive constant $c(d,\alpha,\beta)$ such that \begin{enumerate}[(a)] \item $|\mathcal{L} u(x)-\mathcal{L}_0 u(x)|\leq c\left[I_{\alpha,x}(f)+I_{\beta, x}(f)+J_{2\alpha,x}(f) \right],$ \item $|u(x)|\leq c\left[I_{\alpha, x}(f)+J_{\alpha,x}(f)\right]$. \end{enumerate} \end{proposition} \begin{proof} We prove part (a) first. Note that \begin{eqnarray*} |\mathcal{L} u(x)-\mathcal{L}_0 u(x)|&\leq&\iint F_x(y,h)G_x(y,h)\,\d h\,\d y\\ &=&\int_{|x-y|\leq 1}\int F_x(y,h)G_x(y,h)\,\d h\,\d y\\ &+&\int_{|x-y|>1}\int F_x(y,h)G_x(y,h)\,\d h\,\d y. \end{eqnarray*} Part (a) of the proposition is readily obtained once we apply Proposition \ref{prop:diff}. For part (b), we use Proposition \ref{prop:Tang} to write \begin{eqnarray*} |u(x)|&\leq& c_1\int_{|x-y|\leq 1}\frac{|f(y)|}{|x-y|^{d-\alpha}}\,\d y+c_2\int_{|x-y|> 1}\frac{|f(y)|}{|x-y|^{d+\alpha}}\,\d y\\ &=&c_1I_{\alpha, x}(f)+c_2J_{\alpha, x}(f). 
\end{eqnarray*} \end{proof} \section{Proof of Theorem \ref{Theorem 1}.} \longproof{of Theorem \ref{Theorem 1}} We begin by setting $u(x)=\int_{\mathbf{R}^d}r^\lambda (x-y)g(y)\,\d y$, where $g$ is a smooth function so that $u$ satisfies \begin{equation}\label{poisson} \mathcal{L}_0 u(x)-\lambda u(x)=-g(x)\qquad x\in \mathbf{R}^d. \end{equation} Since $(\P^x, X_t)$ solves the martingale problem associated with $\mathcal{L}$ started at $x$, we have \begin{equation*} u(X_t)-u(X_0)-\int_0^t\mathcal{L} u(X_s)\, \d s=\text{martingale}. \end{equation*} Multiplying the above by $\lambda e^{-\lambda t}$ and integrating from $t=0$ to $t=\infty$, we obtain \begin{equation*} -u(X_0)-\int_0^\infty e^{-\lambda s}(\mathcal{L} u-\lambda u)(X_s)\d s=\text{martingale}, \end{equation*} which in turn yields \begin{equation*} -u(x)=\E^x\left[\int_0^\infty e^{-\lambda s}(\mathcal{L} u-\lambda u)(X_s)\d s\right]. \end{equation*} We now use \eqref{poisson} to write the right hand side of the above equality as follows \begin{eqnarray*} -u(x)&=&\E^x\left[\int_0^\infty e^{-\lambda s}(\mathcal{L} u-\mathcal{L}_0u)(X_s)\d s\right]+\E^x\left[\int_0^\infty e^{-\lambda s}(\mathcal{L}_0u-\lambda u)(X_s)\d s\right]\\ &=&\E^x\left[\int_0^\infty e^{-\lambda s}(\mathcal{L} u-\mathcal{L}_0u)(X_s)\d s\right]-\E^x\left[\int_0^\infty e^{-\lambda s}g(X_s)\d s\right]. \end{eqnarray*} Rearranging the above two equalities and using Proposition \ref{prop:main}, we obtain \begin{equation*} \left|\E^x\left[\int_0^\infty e^{-\lambda s}g(X_s)\d s\right]\right|\leq c_1(I_{\alpha,x}(g)+I_{\beta,x}(g)+J_{\alpha,x}(g)+J_{2\alpha,x}(g)). \end{equation*} After noting that the above inequality holds for $L^p$-functions as well, we choose $g(X_s)=f(X_s)1_{(s\leq \tau_{B(x_0,R)})}$ where $f$ is an $L^p$-function. For this choice of $g$, $J_\gamma(g)=0$ for all $\gamma>0$. 
Moreover, by using H\"older's inequality and the fact that $\frac{d}{p}\leq \alpha$, we obtain \begin{eqnarray*} I_{\alpha,x}(f)&\leq&c_2\int_{B(x_0,R)}\frac{|f(y)|}{|x-y|^{d-\alpha}}\, \d y\\ &\leq&c_3R^{\alpha-d/p}\|f\|_{L^p(B(x_0,R))}. \end{eqnarray*} Combining the above estimates, we obtain \begin{equation*} \left| \E^x\int_0^{t\wedge \tau_{B(x_0, R)}}f(X_s)\,\d s \right |\leq c(R)\|f\|_{L^p(B(x_0, R))}, \end{equation*} where the constant $c(R)$ depends on $\alpha,\,\beta,\,\kappa$ and $K$ as well. \qed \section{Proof of Theorem \ref{Theorem 2}.} We begin this section with the following construction due to Meyer. Suppose that we have two jump kernels $n_0(x,h)$ and $n(x,h)$ with $n_0(x,h)\leq n(x,h)$ and such that for all $x\in \mathbf{R}^d$, \[N(x)=\int_{\mathbf{R}^d}(n(x,h)-n_0(x,h)) \d h \leq c.\] Let $\mathcal{L}'$ and $\mathcal{L}_0$ be the operators corresponding to the kernels $n(x,h)$ and $n_0(x,h)$ respectively. If $\overline X^0_t$ is the process corresponding to the operator $\mathcal{L}_0$, then we can construct a process $\overline X_t$ corresponding to the operator $\mathcal{L}'$ as follows. Let $S_1$ be an exponential random variable of parameter 1 independent of $\overline X_t$, let $C_t=\int_0^tN(\overline X_s)\d s$, and let $U_1$ be the first time that $C_t$ exceeds $S_1$. At the time $U_1$, we introduce a jump from $\overline X_{U_1-}$ to $y$, where $y$ is chosen at random according to the following distribution: \[ \frac{n(\overline X_{U_1-},h)-n_0(\overline X_{U_1-},h)}{N(\overline X_{U_1-})}\d h.\] This procedure is then repeated using an independent exponential variable $S_2$. Since $N(x)$ is bounded, only finitely many jumps are introduced in any finite time interval. In \cite{Me}, it is proved that the new process corresponds to the operator $\mathcal{L}'$. We will also need the following in the proof of Theorem \ref{Theorem 2}. 
\begin{proposition}\label{prop:tech} Suppose for each $k$ that $\P_k(X_0=x)=1$ and that for every $f\in C_b^2$ there exists $c_f$ (depending only on $\|f\|$ and $\|f''\|$) such that $f(X_t)-f(X_0)-c_ft$ is a $\P_k$-supermartingale. Then \begin{enumerate}[(a)] \item the sequence $\P_k$ is tight on the space of c\`adl\`ag functions, \item $\P_k(\tau_{B(x,R)}\leq t)\leq c_1t/R^2$, where $c_1$ is independent of $x$. \end{enumerate} \end{proposition} \begin{proof} See Propositions 3.1 and 3.2 of \cite{Ba2}. \end{proof} \longproof{of Theorem \ref{Theorem 2}}. Define the jump kernel $n_k(x,h)$ as follows \[n_k(x,h)=\left \{\begin{array}{lll} \ \frac{n(x,h)}{|h|^{d+\alpha}} & {\rm for} & |h|>\frac{1}{k}\\ \\ \frac{1}{|h|^{d+\alpha}} & {\rm for} & |h|\leq\frac{1}{k}. \end{array}\right.\] Let $\mathcal{L}_k$ be the operator corresponding to $n_k(x,h)$. For each $k$, there exists a solution $\P^x_k$ to the martingale problem associated with the operator $\mathcal{L}_k$. The existence of the solution $\P^x_k$ can be justified by the fact that for small $h$, the jump kernel of $\mathcal{L}_k$ is that of a symmetric stable process of index $\alpha$. Using Meyer's construction, we can add the big jumps (i.e., $|h|>\frac{1}{k}$) and get a process corresponding to $\mathcal{L}_k$. It follows from our assumptions and Proposition $\ref{prop:tech}$(a) that $\P_k^x$ is tight on the space of c\`adl\`ag functions. Relabeling if necessary, let $\P^x_k$ be a subsequence which converges to a probability measure, $\P^x$. We need to show that $\P^x$ is a solution to the martingale problem associated with $\mathcal{L}$. 
It suffices to show that \begin{equation}\label{A1} \E^x_k\left[ Y[\int_0^t\mathcal{L}_kf(X_s)\d s] \right]\rightarrow \E^x\left[ Y[\int_0^t\mathcal{L} f(X_s)\d s]\right] \hskip5mm {\rm as} \hskip 5mm k\rightarrow \infty, \end{equation} where $Y=\prod_{i=1}^{m}g_i(X_{r_i}), \hskip3mm r_1\leq r_2\leq\cdots\leq r_m\leq t$, and the $g_i$'s are continuous functions bounded by 1. Using Taylor's formula and Assumption \ref{ass1}(b), we obtain \begin{eqnarray}\label{A2} |\mathcal{L} f(x)-\mathcal{L}_kf(x)|&=&\left|\int_{|h|\leq \frac{1}{k}}[f(x+h)-f(x)-1_{(|h|\leq 1)}h\cdot {\nabla} f(x)]\frac{[n(x,h)-1]}{|h|^{d+\alpha}}\d h\right|\nonumber\\ &\leq&c_1\left( \frac{1}{k}\right)^{2-\alpha+\beta}, \end{eqnarray} where $c_1$ depends on the function $f$. To show (\ref{A1}), we write \begin{eqnarray}\label{A3} I^k(x)&=&\left|\E^x_k\left[ Y[\int_0^t\mathcal{L}_kf(X_s)\d s] \right]-\E^x\left[ Y[\int_0^t\mathcal{L} f(X_s)\d s] \right]\right|\nonumber\\ &\leq&\left|\E^x_k\left[ Y[\int_0^t(\mathcal{L}_k-\mathcal{L})f(X_s)\d s]\right]\right|\nonumber\\ &+&\left|\E^x_k\left[ Y[\int_0^t\mathcal{L} f(X_s)\d s]\right]-\E^x\left[ Y[\int_0^t\mathcal{L} f(X_s)\d s]\right]\right|\nonumber\\ &=&I_1+I_2. \end{eqnarray} Using Theorem \ref{Theorem 1}, we can bound $I_1$ as follows, \begin{eqnarray*} I_1&\leq&\left|\E^x_k\left[ |Y|\int_0^{t\wedge \tau_{B(x,R)}}(\mathcal{L}_k-\mathcal{L})f(X_s)\d s\right]\right|\\ &+&\left|\int_{\tau_{B(x,R)}\leq t}|Y| \int_0^t|(\mathcal{L}_k-\mathcal{L})f(X_s)|\d s \d \P^x\right|\\ &\leq&c_2\|\mathcal{L} f(x)-\mathcal{L}_kf(x)\|_{L^p({B(x,R)})}\\ &+&c_3\P^x(\tau_{B(x,R)}\leq t)\|\mathcal{L} f(x)-\mathcal{L}_kf(x)\|_{L^\infty}. \end{eqnarray*} Inequality (\ref{A2}) and Proposition \ref{prop:tech}(b) yield $\begin{displaystyle}I_1\leq c_4\left(\frac{1}{k}\right)^{2-\alpha+\beta}\end{displaystyle}$. So for large enough $k$, we have $I_1\leq \epsilon/2$. Finally, the weak convergence of $\P^x_k$ yields $I_2\leq \epsilon/2$ for large $k$. Hence the theorem is proved. 
\qed \section*{Acknowledgements} The author would like to thank Richard Bass for suggesting this problem. \begin{small}
\section{Introduction} Network tomography was proposed in \cite{YV96} to obtain network characteristics without modifying the network infrastructure, where the author suggests using end-to-end measurement and statistical inference together to estimate the characteristics instead of measuring them directly. End-to-end measurement can be divided into two classes, passive and active, depending on whether probe packets are sent from sources to receivers. Without probing, passive methods depend on the data collected from log files to estimate network characteristics. However, the data collected in log files are either unrelated or only weakly related, which makes inference hard, if not impossible. In contrast, the active approach attaches a number of sources to some end nodes of a network, which send probe packets to the receivers attached to the other side of the network, where the paths from the sources to the receivers cover the links of interest. Since the probes are multicast to the receivers, the observations obtained by the receivers are strongly correlated. Statistical inference is then applied to the data collected by the receivers to estimate the network characteristics, such as link-level loss rates \cite{Duffield2002}, delay distributions \cite{LY03}, \cite{TCN03}, \cite{PDHT02}, \cite{SH03}, \cite{LGN06}, and loss patterns \cite{ADV07}. In this paper, our focus is on using the active approach to estimate the loss rate of a path/link. Loss rate estimation is also called loss tomography in the literature, where the main focus is on searching for efficient maximum likelihood estimators that can avoid the use of iterative procedures to approximate the MLE. To achieve this, a deep analysis of the likelihood function and a comprehensive study of the likelihood equations obtained from it are essential. Unfortunately, only a few analytical results have been presented in the literature; \cite{CDHT99} and \cite{Zhu06} are two of the few. 
Both papers show that when the Bernoulli loss model is assumed for the loss process of a link and independent and identically distributed ({\it i.i.d.}) probing is used in end-to-end measurement, the maximum likelihood equation for the pass/loss rate of a path/link takes a polynomial form. The difference between them is that \cite{CDHT99} is for the pass rate of a path connecting the source to an internal node, while \cite{Zhu06} is for the loss rate of a link connecting two nodes that form a parent and child pair. We call them the path-based estimator and the link-based estimator, respectively. Both estimators target the tree topology, and their advantages and disadvantages are presented in \cite{Zhu09}. Apart from agreeing on the polynomial form, both report that the degree of the polynomial is one less than the number of descendants connected to the path/link being estimated. How to solve a high-degree polynomial then becomes the critical issue, since by Galois theory there is no analytical solution for a general polynomial of degree 5 or higher. Unfortunately, there was little progress in this regard until \cite{ZD09}, where a connection between observations and the degree of the polynomial was established that provides the theoretical foundation for reducing the degree of the polynomial obtained from the likelihood equation. Prior to \cite{ZD09}, the authors of \cite{DHPT06} introduced an explicit estimator built on the law of large numbers. The estimator has been proved to be consistent and has the same asymptotic variance as that of an MLE to first order. When $n < \infty$, however, the estimate obtained by the estimator can be very different from the MLE. 
Considering the cost of probing and the dynamic nature of network traffic, we argue here that despite the importance of large sample theory in statistics, it is unwise in practice to use an estimator that is purely based on the law of large numbers, because the accuracy of such an estimator depends on a large number of samples that can take a long time to collect and cost considerable resources. The question then becomes whether there is an explicit maximum likelihood estimator for multicast loss tomography. If so, what is it, and how different are the estimates obtained by the explicit MLE and by the estimator presented in \cite{DHPT06} when $n < \infty$? These two issues are addressed in this paper. Firstly, we present an explicit MLE for the tree topology under the Bernoulli model that has a computational complexity similar to that of the estimator presented in \cite{DHPT06}. Secondly, a comparison between the two estimators is presented, which shows that the newly proposed estimator is better than the previous one when $n < \infty$. The new estimator is also better than the previous one in terms of the rate of convergence when $n\rightarrow \infty$, since the MLE is asymptotically efficient and the best asymptotically normal estimator. By expanding both the statistical model used in the likelihood equation and the observations obtained from receivers, we found that the accuracy of an estimator is related to how many of the correlated observations it takes into account, while the efficiency of an estimator is inversely related to the degree of the likelihood equation, which is proportional to the number of correlations. How to keep the accuracy without losing efficiency then becomes the key issue, which has been under investigation for some time. As a result, the connection between the degree of the polynomial and the observations obtained by receivers is established, which sets up the foundation for an explicit maximum likelihood estimator. 
Meanwhile, the exact cause of the larger variance of the explicit estimator presented in \cite{DHPT06} is identified in the paper. The rest of the paper is organized as follows. In Section 2, we introduce the notation used in the paper. In addition to the notation, the set of sufficient statistics used in this paper is introduced in that section. We then present the explicit MLE in Section 3, which unveils the connection between the sufficient statistics and the likelihood model used to describe the loss process of a path/link. Section 4 is devoted to comparing and analyzing the explicit MLE against the estimator presented in \cite{DHPT06}. The last section is devoted to concluding remarks. \section{Problem Formulation} \subsection{Notation}\label{treenotation} In order to obtain correlated observations at the receivers, multicast is used to send probes to the receivers, where the multicast tree or subtree used to connect the source to the receivers differs slightly from an ordinary one in that its root has only a single child. Let $T=(V, E)$ denote the multicast tree, where $V=\{v_0, v_1, ... v_m\}$ is a set of nodes representing routers and switches of a network, and $E=\{e_1,..., e_m\}$ is a set of directed links, where link $i$ connects node $f(i)$ to node $i$ and $f(i)$ is the parent of node $i$. To distinguish one link from another, each link is assigned a unique number from 1 to $m$; similarly, each node has a unique number from 0 to $m$. The numbers are assigned to the nodes in increasing order along the tree structure, from top to bottom and left to right. The source is attached to node 0 to send probes to the receivers attached to the leaf nodes of $T$. $R$ is used to denote the set of all receivers. Let $A=\{A_1,..., A_m\}$ be an $m$-element vector, where $A_i, i \in \{1,\dots,m\}$, is the pass rate of the path connecting node 0 to node $i$. 
In addition, each non-leaf node has a number of children, where $d_i$ denotes the set of children of node $i$ and $|d_i|$ denotes the number of children of node $i$. Note that a multicast subtree is different from an ordinary subtree: multicast subtree $i$, $T(i)$, is rooted at node $f(i)$ and has link $i$ as its root link. The group of receivers attached to $T(i)$ is denoted by $R(i)$. If $n$ probes are dispatched from the source, each probe $i=1,...., n$ gives rise to an independent realization $X^{(i)}$ of the probe process $X$, with $X_k^i=1, k\in R$, if probe $i$ reaches receiver $k$; otherwise $X_k^i=0$. The observations $\Omega=(X^{(i)})_{i \in \{1,..,n\}}$ comprise the data for inference. Given observation $X^{(j)}$, let \begin{equation} Y_i^{j}=\bigvee_{k \in R(i)} X_k^j, \mbox{\hspace{1cm}} j \in \{1, .., n\}.\label{projection0}\end{equation} \noindent be the observation obtained by $R(i)$ for probe $j$. If $Y_i^j=1$, probe $j$ reaches at least one receiver attached to $T(i)$, which also implies that the probe reaches node $i$. Then, \[ n_i(1)=\sum_{j=1}^n Y_i^j, \] \noindent is the number of probes confirmed from observations to reach node $i$. The $n_i(1), i \in V\setminus 0$, have been proved to be a set of minimal sufficient statistics in \cite{ZD09}. In addition to $n_i(1), i \in V\setminus 0$, we also introduce another set of numbers for each node $k$: $n_{ij}(1) = \sum_{u=1}^n (Y_i^u \wedge Y_j^u), i, j \in d_k$, is the number of probes confirmed from observations to reach at least one receiver of $R(i)$ and one of $R(j)$ simultaneously; $n_{ijl}(1)=\sum_{u=1}^n (Y_i^u \wedge Y_j^u\wedge Y_l^u), i,j,l \in d_k$, is the number of probes confirmed from observations to simultaneously reach at least one receiver in each of $R(i)$, $R(j)$ and $R(l)$; $\dots$; and $n_{G}(1) = \sum_{u=1}^n (\bigwedge_{j \in G} Y_j^u)$, $G \subseteq d_k$, is the number of probes observed by at least one receiver in each of the subtrees in $G$ rooted at node $k$. 
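The counts just introduced are linked to $n_k(1)$ by inclusion--exclusion over the children of node $k$. The following sketch (our own illustration with hypothetical data; it is not part of any cited estimator) checks on a toy example that $n_k(1)$ can be recovered from the subset counts $n_G(1)$:

```python
from itertools import combinations

def n_from_alternative(Y):
    """Recover n_k(1) for a node k from the subset counts n_G(1).

    Y is a list of per-probe tuples; Y[u][j] = 1 iff probe u was observed by
    at least one receiver in the subtree of the j-th child of node k.
    """
    children = range(len(Y[0]))
    total = 0
    for size in range(1, len(Y[0]) + 1):
        for G in combinations(children, size):
            # n_G(1): probes seen simultaneously in every subtree of G
            n_G = sum(all(y[j] for j in G) for y in Y)
            total += (-1) ** (size - 1) * n_G
    return total

# Three children, five probes (hypothetical data).
Y = [(1, 0, 0), (1, 1, 0), (0, 0, 0), (1, 1, 1), (0, 0, 1)]
direct = sum(any(y) for y in Y)          # n_k(1) counted directly
print(direct, n_from_alternative(Y))     # the two counts agree
```

The equality of the two counts is exactly the map $\Gamma$ used below to argue sufficiency of the alternative statistics.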
We did not realize that this set of numbers is a set of sufficient statistics until recently; we therefore name it the alternative set of sufficient statistics. The following theorem confirms this: \begin{theorem} \label{alternative set} The alternative set of statistics defined above is a set of sufficient statistics. \end{theorem} \begin{IEEEproof} As stated, $n_i(1), i \in V\setminus 0$ has been proved to be a set of minimal sufficient statistics. Then, if there is a function $\Gamma$ that can map the alternative set to $n_i(1), i \in V\setminus 0$, the alternative set is a set of sufficient statistics. The function $\Gamma$ is as follows: \begin{eqnarray} n_i(1)&=&\sum_{j \in d_i} n_j(1)-\sum_{\substack{ j<k\\ j, k \in d_i}} n_{jk}(1) +\cdots + \nonumber \\ &&(-1)^{|d_i|-1} n_{d_i}(1), \mbox{ } i \in V\setminus (0 \cup R) \nonumber \\ n_i(1)&=&\sum_{j=1}^n Y_i^j, \mbox{ } i \in R \end{eqnarray} The function is applied recursively, bottom up along the tree topology. \end{IEEEproof} \section{The Explicit Maximum Likelihood Estimator} \subsection{Explicit Estimator} Among the few studies providing analytical results, Multicast Inference of Network Characteristics (MINC) is the most influential one; it covers almost all areas of network tomography, including link-level loss, link delay and topology tomography. In loss tomography, it uses a Bernoulli model to describe the losses occurring on a link. Using this model, the authors of \cite{CDHT99} derive an MLE for the pass rate of a path connecting the source to an internal node. The MLE is expressed as a set of polynomials, one for each path \cite{CDHT99}, \cite{CDMT99}, \cite{CDMT99a}. Once the pass rates to two nodes that form a parent and child pair are known, the loss rate of the link connecting the two nodes can be calculated as $1-A_i/A_{f(i)}$. 
Considering the complexity of using numerical methods to solve polynomials of degree higher than 5, the authors of \cite{DHPT06} propose an explicit estimator on the basis of the law of large numbers, where the authors define $Z_k^{(i)}= \min_{j \in d_k} Y_j^{(i)}$ and $B_k=P(Z_k=1)$. The key result of \cite{DHPT06} is the following theorem. \begin{theorem} \begin{enumerate} \item For $k \in V\setminus R$, \begin{equation} A_k=\Phi(B_k, \gamma):=\Big(\dfrac{\prod_{j \in d_k}\gamma_j}{B_k}\Big)^{1/(|d_k|-1)} \label{explicit} \end{equation} \item Define $\breve{A}_k=\hat \gamma_k$ for $k \in R,$ and $\breve{A}_k=\Phi(\hat B_k, \hat\gamma)$ otherwise. Then $\breve{A}_k$ is a consistent estimator of $A_k$, and hence $\breve{A}_k/\breve{A}_{f(k)}$ is a consistent estimator of $A_k/A_{f(k)}$, the pass rate of link $k$. \end{enumerate} \end{theorem} \noindent where $\hat B_k$, the empirical probability of $B_k$, is equal to $n^{-1}\sum_{i=1}^n Z_k^{(i)}$. Note that the consistency proved there only ensures that as $n\rightarrow \infty$, $\breve{A}_k$ almost surely approaches $A_k$, the true pass rate of the path from the source to node $k$. This property is the basic requirement for an estimator; if an estimator cannot ensure consistency, it should not be called an estimator. The main concern with the explicit estimator is therefore its accuracy in comparison with the MLE when $n<\infty$, since it uses only part of the available information. \subsection{Insight of the MLE} A minimum variance unbiased estimator (MVUE) is normally regarded as a good estimator in statistics. A maximum likelihood estimator is an MVUE if it meets some simple regularity conditions. Unfortunately, the estimator proposed in \cite{DHPT06} is not an MLE. Apart from that, there are a number of concerns with the applicability of the estimator in practice, because the accuracy of the estimate requires $n\rightarrow \infty$. 
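Part 1 of the theorem can be checked directly on exact population probabilities. In the sketch below (our own illustration; the rates $A_k$ and $\alpha_j$ and the conditional-independence structure are assumptions used only to generate exact values of $\gamma_j$ and $B_k$), $\Phi$ recovers $A_k$ exactly:

```python
import math

def phi(B_k, gammas):
    """Explicit estimator of A_k: (prod_j gamma_j / B_k)^(1/(|d_k|-1))."""
    return (math.prod(gammas) / B_k) ** (1.0 / (len(gammas) - 1))

# Hypothetical population values: the probe reaches node k with rate A_k = 0.9,
# and subtree j then observes it independently with rate alpha_j.
A_k, alphas = 0.9, (0.8, 0.7, 0.6)
gammas = [A_k * a for a in alphas]        # gamma_j = A_k * alpha_j
B_k = A_k * math.prod(alphas)             # P(all three subtrees observe the probe)
print(phi(B_k, gammas))                   # recovers A_k = 0.9 up to rounding
```

In practice $\gamma_j$ and $B_k$ are replaced by their empirical counterparts $\hat\gamma_j$ and $\hat B_k$, which is where the finite-sample inaccuracy discussed in the text enters.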
Then, the scalability of the estimator must be addressed, where the time and resources spent on measurement, the time spent on processing the collected data, and the stationary period of network traffic must be considered in practice. To remedy the scalability of the explicit estimator, we search for an explicit maximum likelihood estimator and take a close look at the maximum likelihood estimator proposed in \cite{CDHT99}, which is as follows: \begin{eqnarray} H(A_k, k) = 1-\dfrac{\gamma_k}{A_k} - \prod_{j \in d_k}(1-\dfrac{\gamma_j}{A_k})=0 \label{treepoly} \end{eqnarray} \noindent where $\gamma_i, i \in V\setminus 0$ is the pass rate of the multicast tree with its root link connecting node $0$ to node $i$. Rewriting (\ref{treepoly}) as \begin{eqnarray} 1-\dfrac{\gamma_k}{A_k}=\prod_{j \in d_k}(1-\dfrac{\gamma_j}{A_k}), \label{treepoly1} \end{eqnarray} we find two interesting features of the polynomial: one is the expandable feature, the other is the merge-able feature. The former allows us to expand both sides of (\ref{treepoly1}) into a number of terms on each side that correspond to each other. The latter, on the other hand, allows us to merge a number of terms located on the right-hand side (RHS), within the product, into a single term like the one located on the left-hand side (LHS) of (\ref{treepoly1}). The advantage of the merge-able feature will be detailed in the next subsection. We now turn our attention to the expandable feature to unveil the internal correlations embedded in (\ref{treepoly1}). Expanding the RHS of (\ref{treepoly1}) and multiplying both sides by $A_k$, we have \begin{eqnarray} \gamma_k&=&\sum_{j \in d_k}\gamma_j - \sum_{\substack{ j<l\\ j, l \in d_k}}\dfrac{\gamma_j\gamma_l}{A_k} + \cdots \nonumber \\ && + (-1)^{|d_k|-1}\dfrac{\prod_{j \in d_k}\gamma_j}{A_k^{|d_k|-1}}.
\label{mle} \end{eqnarray} Using the empirical probability $\hat \gamma_j = \dfrac{n_j(1)}{n}$ to replace $\gamma_j$ in (\ref{mle}), we have a degree-$(|d_k|-1)$ polynomial in $A_k$. Solving the polynomial, the MLE of path $k$, $\hat A_k$, is obtained. Note that $\hat\gamma_k$ can be replaced by the alternative sufficient statistics; the LHS of (\ref{mle}) then becomes \begin{multline} \dfrac{1}{n}\big(\sum_{j \in d_k} n_j(1)-\sum_{\substack{ j<l\\ j, l \in d_k}} n_{jl}(1) + \cdots + (-1)^{|d_k|-1} n_{d_k}(1) \big ) \label{numberside} \end{multline} Comparing the RHS of (\ref{mle}) with (\ref{numberside}), one is able to find the correspondences between the terms. Each term of (\ref{mle}) represents a type of correlation in the model among the members of the term, while each term of (\ref{numberside}) is the statistic, or evidence, obtained from an experiment for the corresponding term of (\ref{mle}). Except for the first terms of (\ref{mle}) and (\ref{numberside}), which are exactly equal to each other, the other pairs can differ from each other when $n<\infty$. Taking the first terms of (\ref{mle}) and (\ref{numberside}) out, we have \begin{eqnarray} &&\sum_{\substack{ j<l\\ j, l \in d_k}}\dfrac{\gamma_j\gamma_l}{A_k} - \cdots + (-1)^{|d_k|}\dfrac{\prod_{j \in d_k}\gamma_j}{A_k^{|d_k|-1}} \nonumber \\ &=&\dfrac{1}{n}(\sum_{\substack{ j<l\\ j, l \in d_k}} n_{jl}(1) - \cdots + (-1)^{|d_k|} n_{d_k}(1)) \label{mle2num} \end{eqnarray} Statistical inference aims to estimate $\hat A_k$ from (\ref{mle2num}), i.e. matching the model presented on the LHS to the statistics presented on the RHS. This equation also shows that in order to have the MLE of a path, one must consider all available information embedded in the observations of $R(k)$, in particular the correlations between the descendants. Without correlations and/or the corresponding statistics, inference is impossible.
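For concreteness, once the empirical probabilities are substituted, the likelihood equation (\ref{treepoly1}) can be solved numerically. The following bisection sketch is our illustration, not a method prescribed in \cite{CDHT99}; it assumes the root lies in $(\gamma_k, 1]$, which holds since $A_k \ge \gamma_k$:

```python
def mle_pass_rate(gamma_k, gamma_children, tol=1e-12):
    """Solve 1 - gamma_k/A = prod_j (1 - gamma_j/A) for the unique root
    A in (gamma_k, 1] by bisection; a simple stand-in for numerically
    solving the degree-(|d_k|-1) polynomial."""
    def f(A):
        prod = 1.0
        for g in gamma_children:
            prod *= 1.0 - g / A
        return (1.0 - gamma_k / A) - prod

    lo, hi = gamma_k + 1e-12, 1.0   # f(lo) < 0 and f(hi) >= 0 bracket the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a binary node the root agrees with the closed form $\hat\gamma_1\hat\gamma_2/(\hat\gamma_1+\hat\gamma_2-\hat\gamma_k)$.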
This corresponds to the data consistency problem raised in \cite{CDHT99} and \cite{Zhu09}. If we only match a part of (\ref{mle}) to the corresponding part of (\ref{numberside}) when $n<\infty$, the estimate obtained would not be the MLE unless the ignored correlations are negligible. Then, the explicit estimator proposed in \cite{DHPT06} is not an MLE since it only pairs the last term of (\ref{mle}) with the last term of (\ref{numberside}). \subsection{The Explicit Maximum Likelihood Estimator} As stated, the degree of the polynomial is proportional to the number of descendants connected to the path being estimated, and the estimation relies on the observed correlations between the descendants to estimate the unknown characteristic. Under the i.i.d. assumption, the likelihood function takes a product form, as in (\ref{treepoly1}). Unfortunately, the previously stated merge-able feature has not been given enough attention, although (\ref{treepoly1}) clearly expresses that the loss rate of subtree $k$ is equal to the product of the loss rates of the sub-multicast trees rooted at node $k$. In probabilistic terms, (\ref{treepoly1}) states that the loss rate of subtree $k$ depends on a number of independent events, one per sub-multicast tree rooted at node $k$. More importantly, this implies that those independent events can be merged into a single event, i.e. the LHS of (\ref{treepoly1}). With this in mind, whether the degree of (\ref{treepoly}) can be reduced depends on whether we are able to obtain the empirical pass rate of the tree that consists of the path from the source to node $k$ plus {\it some} of the multicast subtrees rooted at node $k$. Let $\gamma_{k_g}$ denote this pass rate, where $k$ is the end node of the path being estimated and $g$ denotes the group of subtrees being merged. Based on the alternative sufficient statistics of $d_k$, $\hat\gamma_{k_g}$, the empirical probability of $\gamma_{k_g}$, can be computed \cite{ZD09}.
Then, the degree of (\ref{treepoly1}) can be reduced to 1, which can be solved easily. Further, we have the following theorem to calculate the pass rate of a path explicitly for the tree topology. \begin{theorem} \label{explicit MLE} For the tree topology that uses the Bernoulli model to describe the loss process of a link, there is an explicit ML estimator of the pass rate of the path connecting the source to node $k, k \in V\setminus (0\cup R)$, which is as follows: \begin{equation} \hat A_k=\dfrac{\hat\gamma_{k1}\hat\gamma_{k2}}{\hat\gamma_{k1}+\hat\gamma_{k2}-\hat\gamma_k} \end{equation} where $\hat\gamma_k=\dfrac{n_k(1)}{n}$, $\hat\gamma_{k1}=\dfrac{n_{k1}(1)}{n}$, and $\hat\gamma_{k2}=\dfrac{n_{k2}(1)}{n}$. $n_{k1}(1)$ and $n_{k2}(1)$ are the numbers of probes confirmed from observations to reach at least one receiver attached to merged subtrees 1 and 2, respectively. \end{theorem} \begin{IEEEproof} Since node $k$ is not a leaf node, the subtrees rooted at the node can be divided into two exclusive groups: $d_{k1}$ and $d_{k2}$, where $d_{k1} \cup d_{k2}=d_k$ and $d_{k1} \cap d_{k2}=\emptyset$. The statistics of the merged subtrees can be computed by using $d_{k1}$ or $d_{k2}$ to replace $d_k$ in (\ref{numberside}). Then, we have \begin{eqnarray} 1-\dfrac{\hat\gamma_k}{A_k}&=&\prod_{j \in d_k}(1-\dfrac{\hat\gamma_j}{A_k}) \nonumber \\ &=& \prod_{j \in d_{k1}}(1-\dfrac{\hat\gamma_j}{A_k} )\prod_{j \in d_{k2}}(1-\dfrac{\hat\gamma_j}{A_k}) \nonumber \\ &=& (1-\dfrac{\hat\gamma_{k1}}{A_k})(1-\dfrac{\hat\gamma_{k2}}{A_k}) \label{betalinktogamma} \end{eqnarray} Solving (\ref{betalinktogamma}), we have \[ A_k=\dfrac{\hat\gamma_{k1}\hat\gamma_{k2}}{\hat\gamma_{k1}+\hat\gamma_{k2}-\hat\gamma_k}. \] \end{IEEEproof} The theorem shows that using (\ref{numberside}) to merge the alternative statistics of multiple multicast subtrees rooted at the same node does not affect the estimation of the pass rate of the path that ends at the node.
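A minimal sketch of the two-group formula in Theorem \ref{explicit MLE}, assuming a 0/1 matrix with one row per probe and one column per child subtree (the helper names are illustrative):

```python
def explicit_mle(Y, group1, group2):
    """Closed-form estimate of the merging theorem above: split the
    child subtrees of node k into two exclusive groups and apply
        A_k = g1 * g2 / (g1 + g2 - g),
    where g1, g2 and g are the empirical probabilities that a probe is
    observed in group 1, in group 2, and in either of them, respectively.
    Y: n x |d_k| 0/1 matrix; groups are lists of column indices."""
    n = len(Y)
    seen = lambda row, grp: any(row[j] for j in grp)
    g1 = sum(seen(r, group1) for r in Y) / n          # hat{gamma}_{k1}
    g2 = sum(seen(r, group2) for r in Y) / n          # hat{gamma}_{k2}
    g = sum(seen(r, group1 + group2) for r in Y) / n  # hat{gamma}_k
    return g1 * g2 / (g1 + g2 - g)
```

For a binary node the two groups are single subtrees and the formula is exactly the binary-tree MLE.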
\section{Comparison of Estimators} In this section, we tackle the second task set at the beginning of the paper, i.e. comparing the explicit MLE against the explicit estimator presented in \cite{DHPT06} for $n <\infty$. Two scenarios, with 2 and 3 descendants connected to the path being estimated, are considered to illustrate that the estimate obtained from the explicit estimator proposed in \cite{DHPT06} drifts away from the MLE as the number of descendants increases. To make its variance approach that of the MLE to the first order, the explicit estimator requires more probes to be sent to the receivers. We will use $H(A_i,i)$ as a reference in the following comparison to measure the accuracy of an estimator against its MLE counterpart. \subsection{Binary Tree} \begin{figure} \centerline{\psfig{figure=twonode.eps,height=3.0cm,width=3cm}} \caption{A Binary Tree} \label{binary} \end{figure} For a tree with binary descendants as in Figure \ref{binary}, the pass rate of link/path $i$ estimated by $H(A_i,i)$ is equal to \begin{eqnarray} \hat A_i &=& \dfrac{\hat\gamma_2\hat\gamma_1}{\hat\gamma_2+\hat\gamma_1-\hat\gamma_i} \end{eqnarray} \noindent Using the explicit estimator of \cite{DHPT06}, we have \begin{eqnarray} \breve{A}_i&=&\dfrac{\hat\gamma_2\hat\gamma_1}{\dfrac{n_{12}(1)}{n}} \nonumber \\ &=&\dfrac{\hat\gamma_2\hat\gamma_1}{\dfrac{n_2(1)+n_1(1)-n_i(1)}{n}} \nonumber \\ &=& \dfrac{\hat\gamma_2\hat\gamma_1}{\hat\gamma_2+\hat\gamma_1-\hat\gamma_i} \label{2node} \end{eqnarray} \noindent This is the same as the MLE, because there is only one type of correlation between the model and the observations, which is considered by the estimator. Thus, $\breve{A}_i=\hat A_i$.
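The binary-tree equality (\ref{2node}) can be checked on toy counts; the numbers below are arbitrary choices of ours and only need to satisfy the inclusion-exclusion identity $n_i(1)=n_1(1)+n_2(1)-n_{12}(1)$:

```python
# Toy counts from a hypothetical binary-tree experiment.
n = 10
n1, n2 = 8, 7        # probes observed in subtree 1 / subtree 2
n12 = 6              # probes observed in both subtrees
ni = n1 + n2 - n12   # probes observed in at least one subtree

g1, g2, gi = n1 / n, n2 / n, ni / n
A_mle = g1 * g2 / (g1 + g2 - gi)   # MLE / merged-form estimate
A_expl = g1 * g2 / (n12 / n)       # explicit estimator of [DHPT06]
# The two denominators are the same number, so the estimates coincide.
```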
Based on Theorem \ref{explicit MLE}, the estimate obtained by the estimator proposed in this paper is also the same as above, which can be written as: \begin{eqnarray} \hat A_i &=& \dfrac{n_1(1)n_2(1)}{n\cdot (n_1(1)+n_2(1)-n_i(1))} \nonumber \\ &=& \dfrac{\dfrac{n_1(1)n_2(1)}{{n^2}}}{\dfrac{n_1(1)+n_2(1)-n_i(1)}{n}}. \end{eqnarray} Thus, for the binary tree, the three estimators produce the same result. \subsection{Tertiary Tree} Let $i$ have three descendants 1, 2, and 3. Based on $H(A_i, i)$, we have \begin{eqnarray} && \hat A_i^2(\hat\gamma_1+\hat\gamma_2+\hat\gamma_3 - \hat\gamma_i)-\nonumber \\ && \hat A_i(\hat\gamma_1\hat\gamma_2+\hat\gamma_1\hat\gamma_3+\hat\gamma_2\hat\gamma_3) +\hat\gamma_1\hat\gamma_2\hat\gamma_3=0. \nonumber \\\label{3nodemle} \end{eqnarray} \noindent Solving the quadratic equation, we have the MLE $\hat A_i$. Based on (\ref{mle2num}), the model and observations are connected by \begin{multline} \sum_{\substack{ j<k\\ j, k \in \{1,2,3\}}} \dfrac{n_{jk}(1)}{n} - \dfrac{n_{123} (1)}{n}= \\ \sum_{\substack{ j<k \\ j, k \in \{1,2,3\}}}\dfrac{\gamma_j\gamma_k}{\hat A_i} - \dfrac{\gamma_1\gamma_2\gamma_3}{\hat A_i^2} \label{3children} \end{multline} It is easy to prove that (\ref{3children}) equals (\ref{3nodemle}). Using Theorem \ref{explicit MLE}, we have the MLE directly, \begin{eqnarray} \hat A_i = \dfrac{(n_1(1)+n_2(1)-n_{12}(1))\cdot n_3(1)}{n\cdot(n_{13}(1)+n_{23}(1)-n_{123}(1))}. \label{3mle} \end{eqnarray} \noindent Intuitively, one can notice that (\ref{3mle}) considers all correlations among the 3 descendants. To prove that this equals (\ref{3nodemle}), we write the RHS of the above as \begin{equation} \dfrac{\dfrac{(n_1(1)+n_2(1)-n_{12}(1))}{n}\cdot \dfrac{ n_3(1)}{n}}{\dfrac{(n_{13}(1)+n_{23}(1)-n_{123}(1))}{n}}. \label{3equality} \end{equation} The denominator of the above is obtained from \begin{eqnarray} &&\dfrac{n_1(1)+n_2(1)-n_{12}(1)}{n}+\dfrac{n_3(1)}{n} - \gamma_i.
\label{denominator} \end{eqnarray} According to (\ref{mle}) and (\ref{numberside}), \[ \gamma_1+\gamma_2-\dfrac{\gamma_1\gamma_2}{\hat A_i}=\dfrac{n_1(1)+n_2(1)-n_{12}(1)}{n} \] Using the above in (\ref{denominator}), the denominator turns into \begin{eqnarray} (\gamma_1+\gamma_2+\gamma_3-\gamma_i)-\dfrac{\gamma_1\gamma_2}{\hat A_i}. \label{deno} \end{eqnarray} Similarly, the numerator of (\ref{3equality}) is equal to \begin{eqnarray} (\gamma_1+\gamma_2-\dfrac{\gamma_1\gamma_2}{\hat A_i})\cdot\gamma_3 = \gamma_1\gamma_3+\gamma_2\gamma_3-\dfrac{n_{12}(1)}{n}\gamma_3. \nonumber \\ \label{nomi} \end{eqnarray} Using (\ref{deno}) and (\ref{nomi}) to replace the denominator and numerator of (\ref{3mle}), we have \begin{eqnarray} &&\hat A_i\cdot(\gamma_1+\gamma_2+\gamma_3-\gamma_i)-\hat A_i\cdot\dfrac{\gamma_1\gamma_2}{\hat A_i} \nonumber \\ &=&\gamma_1\gamma_3+\gamma_2\gamma_3-\dfrac{\gamma_1\gamma_2}{\hat A_i}\gamma_3 \end{eqnarray} Moving every term to the LHS and multiplying by $\hat A_i$, we have (\ref{3nodemle}). Because of the symmetric nature of the 3 descendants, we can also merge descendants 2 and 3 first, or descendants 1 and 3 first, which leads to \[ \hat A_i = \dfrac{(n_2(1)+n_3(1)-n_{23}(1))\cdot n_1(1)}{n\cdot(n_{13}(1)+n_{12}(1)-n_{123}(1))} \] \noindent and \[ \hat A_i = \dfrac{(n_1(1)+n_3(1)-n_{13}(1))\cdot n_2(1)}{n\cdot(n_{12}(1)+n_{23}(1)-n_{123}(1))} \] respectively. In contrast, the explicit estimator presented in \cite{DHPT06} produces the estimate \begin{equation} \breve{A_i}=\Big(\frac{\gamma_1\gamma_2\gamma_3}{\dfrac{n_{123}(1)}{n}}\Big)^{\frac{1}{2}}.\label{expli} \end{equation} \noindent Comparing (\ref{expli}) with (\ref{3mle}), a direct impression is that (\ref{expli}) fails to match the pairwise correlations, i.e. those between descendants 1 and 2, descendants 1 and 3, and descendants 2 and 3.
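To see how the two tertiary-tree estimators behave on data, the following sketch simulates Bernoulli losses on a two-level tree (the true path pass rate, the link rates and the sample size are arbitrary choices of ours) and evaluates both (\ref{3mle}) and (\ref{expli}); both should approach the true pass rate for large $n$:

```python
import random

def simulate_tree(n, A, betas, rng):
    """n probes over a two-level tree: the path to node i passes with
    rate A; child link j then passes with rate betas[j] (independent
    Bernoulli losses).  Returns the 0/1 observation matrix Y."""
    Y = []
    for _ in range(n):
        path = rng.random() < A
        Y.append([1 if (path and rng.random() < b) else 0 for b in betas])
    return Y

def reach_counts(Y):
    """Joint reach counts n_S(1) for three child subtrees."""
    n1 = sum(r[0] for r in Y); n2 = sum(r[1] for r in Y); n3 = sum(r[2] for r in Y)
    n12 = sum(r[0] & r[1] for r in Y); n13 = sum(r[0] & r[2] for r in Y)
    n23 = sum(r[1] & r[2] for r in Y); n123 = sum(r[0] & r[1] & r[2] for r in Y)
    return n1, n2, n3, n12, n13, n23, n123

rng = random.Random(7)
n, A_true, betas = 200000, 0.9, (0.8, 0.7, 0.9)
n1, n2, n3, n12, n13, n23, n123 = reach_counts(simulate_tree(n, A_true, betas, rng))

A_mle = (n1 + n2 - n12) * n3 / (n * (n13 + n23 - n123))  # merged MLE
g1, g2, g3 = n1 / n, n2 / n, n3 / n
A_expl = (g1 * g2 * g3 / (n123 / n)) ** 0.5              # explicit estimator
```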
If we assume (\ref{expli}) is a solution of a quadratic equation, the equation should be as follows: \begin{multline} \dfrac{n_{123}(1)}{n}\breve{A_i}^2 - 2\cdot\big(\dfrac{n_{123}(1)\gamma_1\gamma_2\gamma_3}{n}\big)^{\frac{1}{2}}\breve{A_i}+\gamma_1\gamma_2\gamma_3=0 \label{3expli} \end{multline} which has a double root. Given the double-root assumption, (\ref{3expli}) is certainly not the polynomial that leads to the MLE, since it contradicts Lemma 1 of \cite{CDHT99} and \cite{ZD09}, which states that there is one and only one root in $(0,1)$ for the maximum likelihood equation. Then, the estimator proposed in \cite{DHPT06} can be regarded as an estimator based on the method of moments, which will be discussed in the next subsection. \subsection{Analysis} The fundamental principle of maximum likelihood estimation is unveiled clearly by (\ref{mle2num}), where the LHS of (\ref{mle2num}) is the statistical distribution of the loss process (called the model previously) and the RHS is the statistics obtained from observations. There is a one-to-one correspondence between the terms across the equal sign. The maximum likelihood estimator aims to solve the equation to find the $A_k$ that fits the statistical model, while the explicit estimator proposed previously attempts to use only the last terms on both sides. When $n < \infty$, its estimate can differ from the MLE. In fact, if only asymptotic accuracy is of concern, each corresponding pair of terms can be used to form an explicit estimator like the one presented in \cite{DHPT06}. All of these explicit estimators can be proved consistent, as their predecessor was.
Then, Theorem 1 can be extended as follows. \begin{theorem} \label{set of estimators} Each of the corresponding pairs of (\ref{mle2num}) forms an explicit estimator that is consistent, like the one proposed in \cite{DHPT06}, and has the form \begin{eqnarray} \Phi_i(n_{w}(1), w, \gamma)=A_i= \dfrac{1}{C_{|w|}^{|d_i|}}\sum_{ j \in w}\Big (\dfrac{\prod_{k\in j}\gamma_k}{\frac{n_{w}(1)}{n}}\Big)^{\frac{1}{|w|-1}} \end{eqnarray} where $w$ corresponds to one of the pairs across the equal sign of (\ref{mle2num}), $|w| < |d_i|$ denotes the number of members in the term, and $n_{w}(1)$ corresponds to the statistic denoting the number of probes reaching the receivers attached to the descendants of $w$. \end{theorem} \begin{IEEEproof} The proof follows that of Theorem 2 in \cite{DHPT06}. \end{IEEEproof} Clearly, when $n<\infty$, the estimates obtained by the explicit estimators are not the MLE. Note that each term on the RHS of (\ref{mle2num}) is not a sufficient statistic by itself but only a part of the sufficient statistics defined in Theorem \ref{alternative set}. Combining a number of the estimators defined above can improve the accuracy of the estimate; when all terms are combined, we have the MLE. Despite having an explicit maximum likelihood estimator, we still carry out the following analysis to find out why partial matching cannot yield the MLE. Firstly, we compare (\ref{mle2num}) with the explicit estimator proposed in \cite{DHPT06} from the polynomial perspective. The former is a degree-$(|d_k|-1)$ polynomial that has a unique solution in $(0,1)$, while the latter can be considered a polynomial that has a multiple root in $(0,1)$. In other words, the explicit one assumes that the sums of the first $|d_k|-2$ terms on the two sides of the equal sign are equal to each other, and likewise for the last terms. Instead of using the sum of the first $|d_k|-2$ terms to estimate $A_k$, the explicit estimator uses the last one to avoid solving a degree-$(|d_k|-2)$ polynomial.
Using this approach to estimate $A_k$, an error is inevitable, and the amount of error depends on the number of descendants. This is because if a node has more descendants, we are able to obtain more information about the path connecting the source to the node from observations, and the explicit estimator ignores more of it. The explicit estimator proposed in \cite{DHPT06} can also be viewed as a method of moments. If so, its estimates are normally superseded by maximum likelihood estimates, because maximum likelihood estimators have a higher probability of being close to the quantities to be estimated. Also, the estimates obtained by the method of moments are not necessarily based on sufficient statistics, i.e., they sometimes fail to take into account all or a large part of the relevant information in the sample. Therefore, the accuracy of such an estimator depends on a large sample. As stated, $n_{d_k}(1)$ itself is not a sufficient statistic for $A_k$; using $n_{d_k}(1)$ alone to estimate $A_k$ fails to consider all the other correlations between descendants, so, as stated, an error is inevitable. Let $n$ be the sample size and $\delta$ the estimation error; their relation is expressed as \[ \delta \propto \dfrac{1}{\sqrt{n}}. \] Based on this formula, the explicit estimator relies on sending an infinite number of probes to reduce the error. Even so, the effect of ignoring the other correlations remains, which means the variance of the explicit estimator can only approximate that of the maximum likelihood estimator to the first order. \subsection{Computational Complexity} The estimator proposed in this paper is the maximum likelihood one, with a computational complexity similar to that of the explicit estimator presented in \cite{DHPT06}. To determine $\hat A_k$ or $\breve{A_k}$, both need to calculate the empirical probabilities $\hat \gamma_i, i \in V\setminus 0$. Both estimators need to compute $Y_j^i$ and $n_j(1)$, which takes $O(n\cdot(|V|-1))$ operations in total.
In addition, the estimator proposed here needs to merge the descendants into two groups for those nodes that have more than 2 descendants. This requires computing $n_{jk}(1), ..., n_{d_k/2}(1)$ for node $k$, which takes $2\times (2^{\frac{|d_k|}{2}}-\dfrac{|d_k|}{2}-1)$ operations. The previous explicit estimator, on the other hand, needs to calculate $\hat B_k$, which requires computing $Z_k^i$ for each node and takes fewer operations than the MLE does. However, the explicit one needs to perform an $n$-th root operation for each node that has more than two children, while the MLE only needs a simple arithmetic operation to estimate $A_k$. Therefore, in terms of operations, the two are similar to each other. \section{Conclusion} In this paper, an explicit MLE is proposed that is built on the unique features of the likelihood equation and the set of alternative sufficient statistics introduced in this paper. The two features of the likelihood equation, i.e. being expandable and merge-able, can be considered micro and macro views of the likelihood equation. The macro view makes merging possible, while the micro view unveils the fundamentals of the explicit estimator proposed previously and the internal correlations between the model and observations. Based on the macro view, a closed-form MLE is proposed and presented in this paper, which is among the simplest ever presented in the literature. Applying the micro view to (\ref{treepoly1}), we establish the correlations between descendants, as well as the correspondence between the statistical model and the statistics obtained from the leaf nodes of the descendants. This correspondence further unveils the connection between the observations and the degree of the likelihood polynomial. As a result, the explicit MLE is proposed for the tree topology.
In addition to proposing the explicit estimator, we compare the estimator proposed in this paper with the explicit estimator proposed previously, showing that when $n <\infty$, the MLE is substantially more accurate than the explicit estimator.
\section{Introduction} Zero anaphora is a discourse phenomenon, where pronouns can be omitted when they are pragmatically or grammatically inferable from intra- and inter-sentential context~\cite{li1979third}. However, translating such implicit information (i.e. zero pronoun, ZP) poses various difficulties for machine translation (MT) in terms of completeness and correctness. Although neural models are getting better at learning representations, it is still difficult for a general model to implicitly learn complex ZPs. In fact, ZP prediction and translation need not only to understand the semantics or intentions of a single sentence, but also to utilize its discourse-level context. The two technologies, ZP resolution and MT, have seen vast progress over the last decades, but they have been developed very much in isolation. Early studies~\cite{chung2010effects,Nagard:2010:ACL,xiang2013enlisting} fed MT systems with the results of ZP prediction models, which are trained on small-scale and non-homologous data compared to MT models. To narrow the data-level gap,~\newcite{Wang:2016:NAACL} proposed an automatic method to annotate ZPs by utilizing the parallel corpus of MT. The homologous data for both ZP prediction and translation leads to significant improvements in translation performance for both statistical MT~\cite{Wang:2016:NAACL} and neural MT models~\cite{Wang:2018:AAAI}. However, such approaches still require external ZP prediction models, which have a low accuracy of 66\%. The numerous ZP prediction errors are propagated to translation models, which leads to new translation problems. In addition, relying on external ZP prediction models in decoding makes these approaches unwieldy in practice, due to the additional computation cost and pipeline complexity. In this work, we try to further bridge the model-level gap by jointly modeling ZP prediction and translation.
Joint learning has proven highly effective in alleviating the error propagation problem, as in joint parsing and translation~\cite{Liu:2010:COLING}, as well as joint tokenization and translation~\cite{Xiao:2010:COLING}. Similarly, we expect that ZP prediction and translation could interact with each other: prediction offers more ZP information beyond the 1-best result to translation, and translation helps prediction resolve ambiguity. Specifically, we first cast ZP prediction as a sequence labeling task with a neural model, which is trained jointly with a standard neural machine translation (NMT) model in an end-to-end manner. We leverage the auto-annotated ZPs to supervise the learning of the ZP prediction component, which removes the reliance on external ZP knowledge in the decoding phase. In addition, previous studies revealed that discourse-level information can better tackle ZP resolution, because around 23\% of ZPs appear two or more sentences away from their antecedents~\cite{zhao2007identification,chen2013chinese}. Inspired by these findings, we exploit inter-sentential context to further improve ZP prediction and thus translation. Concretely, we employ hierarchical neural networks~\cite{Sordoni2015A,Wang:2017:EMNLP} to summarize the context of previous sentences in a text, which is integrated into the joint model for ZP prediction. We validate the proposed approach on the widely-used data for ZP translation~\cite{Wang:2018:AAAI}, which consist of 2.15M Chinese--English sentence pairs. Experimental results show that the joint model indeed improves performances on both ZP prediction and translation. Incorporating discourse-level context further improves performances, and outperforms the external ZP prediction model~\cite{Wang:2018:AAAI} by +2.29 BLEU points in translation and +11\% in prediction accuracy.
Experimental results on a further Japanese--English translation task show that our model consistently outperforms both the baseline and the external ZP prediction model, demonstrating the universality of the proposed approach. The key contributions of this paper are: \begin{enumerate} \item We propose a single model to jointly learn ZP prediction and translation, which improves performances on both tasks by allowing the two components to interact with each other. \item Our study demonstrates the effectiveness of discourse-level context for ZP prediction. \item Based on our manually-annotated testset, we conduct extensive analyses to assess ZP prediction and translation. \end{enumerate} \section{Background} \label{sec:2} \subsection{Zero Pronoun} \label{sec:2.1} \begin{CJK}{UTF8}{gbsn} \begin{table}[t] \renewcommand\arraystretch{1.1} \centering \begin{tabular}{R{0.55cm}|L{6.3cm}} \hline Inp. & 等 我 搬进来,{\bf \color{red}(我)} 能 买 台 电视 吗?\\ Ref. & Can {\bf \color{red}I} get a TV when I move in? \\ Out. & When I move in {\bf \color{blue}to buy} a TV.\\ \hline Inp. & 这块 \underline{\color{brown}蛋糕} 很 美味!你 烤 的 {\bf \color{red}(它)} 吗?\\ Ref. & The cake is very tasty! Did you bake {\bf \color{red} it}? \\ Out. & The cake is delicious! {\bf \color{blue}Are you baked}? \\ \hline \end{tabular} \caption{Examples of ZPs and translations where words in brackets are ZPs that are invisible in decoding and underlined words are antecedents of anaphoric ZPs. This leads to problems for NMT in respect of completeness (first case) and correctness (second case). ``Inp.'' and ``Ref.'' indicate Chinese input and English translation, respectively. ``Out.'' represents the output of a NMT model.} \label{tab-zpexample} \end{table} \end{CJK} In pro-drop languages such as Chinese and Japanese, ZPs occur much more frequently compared to non-pro-drop languages such as English~\cite{zhao2007identification}. 
\begin{CJK}{UTF8}{gbsn} As seen in Table~\ref{tab-zpexample}, the subject pronoun (``我'') and the object pronoun (``它'') are omitted in the Chinese sentences (``Inp.'') while these pronouns are all compulsory in their English translations (``Ref.''). This is not a problem for human beings, since we can easily recall these missing pronouns from the context. Taking the second sentence as an example, the pronoun ``它'' is an anaphoric ZP that refers to the antecedent (``蛋糕'') in the previous sentence, while the non-anaphoric pronoun ``我'' can still be inferred from the whole sentence. The first example also indicates the necessity of intra-sentential information for ZP prediction. \end{CJK} However, ZP poses a significant challenge for translation models from pro-drop to non-pro-drop languages, where ZPs are normally omitted on the source side but should be generated overtly on the target side. As shown in Table~\ref{tab-zpexample}, even a strong NMT model fails to recall the implicit information, which leads to problems of {\em incompleteness} and {\em incorrectness}. The first case is translated into ``When I move in to buy a TV'', which makes the output miss the subject element ({\em incompleteness}). The second case is translated into ``Are you baked?'', while the correct translation should be ``Did you bake it?'' ({\em incorrectness}). \subsection{Bridging Data Gap Between ZP Prediction and Translation} Recent efforts have explored ways to bridge the gap between ZP prediction and translation~\cite{Wang:2016:NAACL,Wang:2018:AAAI,Wang:2018:EMNLP} by training both models on homologous data. The pipeline involves two phases, as described below. \paragraph{Translation-Oriented ZP Prediction} \begin{CJK}{UTF8}{gbsn} Its goal is to recall the ZPs in the source sentence (i.e. pro-drop language) with the information of the target sentence (i.e. non-pro-drop language) in a parallel corpus. Taking the second case (assuming that Inp. and Ref.
are a sentence pair in a parallel corpus) in Table~\ref{tab-zpexample} for instance, the ZP ``它 (\textit{it})'' is dropped on the Chinese side while its equivalent ``it'' exists on the English side. It is possible to identify the ZP position (between ``的'' and ``吗'') using alignment information, and then recover the ZP word ``它'' with a language model (scoring all possible pronoun candidates and selecting the one with the lowest perplexity). ~\newcite{Wang:2016:NAACL} proposed a novel approach to automatically annotate ZPs using alignment information from bilingual data, and the auto-annotation accuracy can reach above 90\%. Thus, a large number of ZP-annotated sentences were available to train an external ZP prediction model, which was further used to annotate source sentences in test sets during the decoding phase. They integrated the ZP predictor into SMT and showed promising results on both Chinese--English and Japanese--English data. However, their neural-based ZP prediction model still produces a low accuracy on predicting ZPs, 66\% in F1 score. This is a key problem for the pipeline framework, since numerous errors would be propagated to the subsequent translation process.
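A highly simplified sketch of the alignment-based annotation idea follows; the candidate list, the alignment format and the `lm_score` interface are placeholders of ours, not the exact procedure of Wang et al. (2016):

```python
TARGET_PRONOUNS = {"i", "you", "he", "she", "it", "we", "they"}

def annotate_zp(src_tokens, tgt_tokens, align, candidates, lm_score):
    """Sketch of alignment-based ZP auto-annotation: a target-side
    pronoun left unaligned by `align` (a set of (src_i, tgt_j) pairs)
    signals a dropped source pronoun; the insertion point is taken from
    the alignment of the following target words, and the pronoun itself
    is chosen by a language model (higher `lm_score` is better)."""
    aligned_tgt = {j for _, j in align}
    out = list(src_tokens)
    for j, word in enumerate(tgt_tokens):
        if word.lower() in TARGET_PRONOUNS and j not in aligned_tgt:
            later = [i for i, jj in align if jj > j]
            pos = min(later) if later else len(out)
            best = max(candidates, key=lambda c: lm_score(out[:pos] + [c] + out[pos:]))
            out = out[:pos] + ["(" + best + ")"] + out[pos:]
    return out
```

A real implementation would also re-index the alignment after each insertion and use actual language-model perplexities.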
\end{CJK} \iffalse \begin{CJK}{UTF8}{gbsn} \begin{table}[t] \renewcommand\arraystretch{1.1} \centering \begin{tabular}{l|c|l} \bf Prediction & \bf F1-score & \bf Example \\\hline ZP Position & 88\% & 你 烤 的 {\color{red} \#ZP\#} 吗 ?\\ ZP Word & 66\% & 你 烤 的 {\bf \color{red} (它)} 吗 ?\\ \end{tabular} \caption{Evaluation of external models on predicting the exact words of ZPs (``ZP Words'') and the positions (``ZP Positions'').} \label{tab:accuracy} \end{table} \end{CJK} \fi \paragraph{Translation with ZP-Annotated Data} An intuitive way to exploit the annotated data is to train a standard NMT model on the annotated parallel corpus, which decodes the input sentence annotated by the external ZP prediction model.~\newcite{Wang:2018:AAAI} leveraged the encoder-decoder-reconstructor framework~\cite{Tu:2017:AAAI} for this task, which reconstructs the intermediate representations of NMT model back to the ZP-annotated input. The auxiliary loss on ZP reconstruction can guide the intermediate representations to learn critical information relevant to ZPs. However, their best model still needs external ZP prediction at decoding time. In response to this problem, \newcite{Wang:2018:EMNLP} leveraged the prediction results of the ZP positions, which have relatively higher accuracy (e.g. 88\%). Accordingly, they jointly learn the partial ZP prediction (\emph{i.e.,}\xspace predict the ZP word given the externally annotated ZP position) and ZP translation. In this work, we follow this direction with the encoder-decoder-reconstructor framework, and show our approach outperforms both strategies of using externally annotated data. \section{Approach} \label{sec:approach} In this study, we propose a joint model to learn ZP prediction and translation, which can be further improved by leveraging discourse-level context. 
\begin{itemize} \item {\em Joint ZP Prediction and Translation} (Section~\ref{sec:3.1}) We cast ZP prediction as a sequence labeling problem, which can be trained together with ZP translation model in an end-to-end manner. This releases the reliance on external ZP prediction models (e.g. 66\% or 88\% accuracy), since no ZP-annotated sentence is required any more in decoding. Instead, only the high-quality annotated bilingual data (e.g. 93\% accuracy) are needed. \item {\em Discourse-Aware ZP Prediction} (Section~\ref{sec:3.2}) We further improve ZP prediction and thus its translation with discourse-level context, which is summarized by hierarchical neural networks. The contextual representation is integrated into the reconstructor, based on which ZP prediction is conducted. \end{itemize} \subsection{Joint ZP Prediction and Translation} \label{sec:3.1} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/joint.pdf} \caption{Architecture of the joint ZP prediction and translation model, in which ZP prediction is casted as a sequence labelling problem.} \label{fig-architecture} \end{figure} \iffalse \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/architecture.pdf} \caption{Architecture of the ZP prediction and translation model with discourse-level context. The ZP prediction task is casted as a sequence labelling problem, and no externally annotated input is required for reconstruction. The discourse-level context is summarized by hierarchical neural networks, and contextual representation can be integrated into the ZP translation model with different strategies.} \label{fig-architecture} \end{figure*} \fi Figure~\ref{fig-architecture} illustrates the architecture of the joint model, which consists of two main components. The ZP translation component is a standard encoder-decoder NMT model, while an additional reconstructor is introduced for ZP prediction. 
To guarantee that the reconstructor states contain enough information for ZP prediction, the reconstructor reads both the encoder and decoder states, and the reconstruction score is computed by \begin{eqnarray} R({\bf \hat{x}}|{\bf h}^{enc}, {\bf h}^{dec}) = \prod_{t=1}^{T} g_r({\hat{x}}_{t-1}, {\bf h}^{rec}_t, \hat{\bf c}^{enc}_t, \hat{\bf c}^{dec}_t) \nonumber \label{eqn:rec} \end{eqnarray} where ${\bf h}^{rec}_t$ is the hidden state in the reconstructor: \begin{eqnarray} {\bf h}^{rec}_t &=& f_r(\hat{x}_{t-1}, {\bf h}^{rec}_{t-1}, \hat{\bf c}^{enc}_t, \hat{\bf c}^{dec}_t) \end{eqnarray} Here $g_r(\cdot)$ and $f_r(\cdot)$ are the softmax and activation functions for the reconstructor, respectively. The context vectors $\hat{\bf c}^{enc}_t$ and $\hat{\bf c}^{dec}_t$ are the weighted sums of ${\bf h}^{enc}$ and ${\bf h}^{dec}$, and the weights are calculated by two interactive attention models: \begin{eqnarray} \hat{\alpha}^{enc} &=& \textsc{Att}_{enc}(\hat{x}_{t-1}, {\bf h}^{rec}_{t-1}, {\bf h}^{enc}) \\ \hat{\alpha}^{dec} &=& \textsc{Att}_{dec}(\hat{x}_{t-1}, {\bf h}^{rec}_{t-1}, {\bf h}^{dec}, \hat{\bf c}^{enc}_t) \end{eqnarray} The interaction between the two attention models leads to a better exploitation of the encoder and decoder representations \cite{Wang:2018:EMNLP}. \begin{CJK}{UTF8}{gbsn} \paragraph{ZP Prediction as Sequence Labelling} We cast ZP prediction as a sequence labelling task, where each word is labelled according to whether a pronoun is missing before it. Given the input ${\bf x}=\{{x}_1, {x}_2, \dots, {x}_T\}$ with the last word $x_T$ being the end-of-sentence tag ``$\langle$eos$\rangle$'',\footnote{We introduce ``$\langle$eos$\rangle$'' to cover the case that a pronoun is missing at the end of a sentence.} the output to be labelled is a sequence of labels ${\bf zp} = \{{zp}_1, {zp}_2, \dots, {zp}_T\}$ with ${zp}_t \in \{N\} \cup \mathbb{V}_{zp}$.
Among the label set, ``$N$'' denotes no ZP, and $\mathbb{V}_{zp}$ is the vocabulary of pronouns.\footnote{We employ the pronoun vocabulary used in~\newcite{Wang:2016:NAACL}, which contains 30 distinct Chinese pronouns.} Taking Figure~\ref{fig-architecture} as an example, the label sequence ``N N N 它 N N'' indicates that the pronoun ``它'' is missing before the fourth word ``吗'' in the source sentence ``你 烤 的 吗?''. More specifically, we model the probability of generating the label sequence $\bf zp$ as: \end{CJK} \begin{equation}\label{eqn:label} \begin{split} P({\bf zp}|{\bf h}^{rec}) = \prod_{t=1}^{T}P({zp}_{t}|{\bf h}^{rec}_{t}) \\ = \prod_{t=1}^{T} g_l(zp_t, {\bf h}^{rec}_t) \end{split} \end{equation} where $g_l(\cdot)$ is the softmax function for the ZP labeler. As seen, we integrate the ZP prediction component into the ZP translation model. There is no reliance on external ZP prediction models in the decoding phase. \paragraph{Training and Testing} The newly introduced prediction component is trained together with the encoder-decoder-reconstructor: \begin{equation} \begin{split} J(\theta, \gamma) = \argmax_{\theta, \gamma} \bigg\{ \underbrace{\log L({\bf y}|{\bf x}; \theta)}_\text{\normalsize \em likelihood} \\ + \underbrace{\log R({\bf x} | {\bf h}^{enc}, {\bf h}^{dec}; \theta)}_\text{\normalsize \em reconstruction} \\ + \underbrace{\log P({\bf zp} | {\bf h}^{rec}; \theta, \gamma)}_\text{\normalsize \em ZP labeling} \bigg\} \end{split} \end{equation} where $\{\theta, \gamma\}$ are respectively the parameters associated with the encoder-decoder-reconstructor and the ZP prediction component. The auxiliary prediction loss $P(\cdot)$ guides the hidden states of both the encoder-decoder and the reconstructor to embed the ZPs in the source sentence. Although the calculation of the labeling loss relies on explicitly annotated labels, it is only used in training to guide the parameters to learn ZP-enhanced representations.
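Because the labeling probability factorizes over time steps, its logarithm is a sum of per-step log-softmax terms. The following is a minimal NumPy sketch of this computation, not the trained model: the projection matrix `W`, standing in for the learned labeler $g_l$, is a hypothetical stand-in.

```python
import numpy as np

def zp_label_log_prob(h_rec, W, labels):
    """Log-probability of a ZP label sequence, log P(zp | h^rec).

    h_rec  : (T, d) reconstructor hidden states
    W      : (d, L) hypothetical projection onto the label set {N} + pronoun vocab
    labels : (T,) integer label ids, one per source position
    """
    logits = h_rec @ W                                   # (T, L)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the product over time steps becomes a sum of per-step log-probabilities
    return log_softmax[np.arange(len(labels)), labels].sum()
```

In training, the negative of this quantity would serve as the ZP labeling term of the joint objective.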
Benefiting from the implicit integration of ZP information, we release the reliance on external ZP prediction models in testing. \subsection{Discourse-Aware ZP Prediction} \label{sec:3.2} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/discourse.pdf} \caption{Architecture of the hierarchical neural encoder. ${\bf x}^{-K}, \dots, {\bf x}^{-1}$ are the $K$ previous sentences before the current source sentence \begin{CJK}{UTF8}{gbsn}``你 烤 的 吗 ?''\end{CJK} in a text. } \label{fig-cross-sent} \end{figure} Discourse information has proven useful for predicting antecedents, which may occur in previous sentences~\cite{zhao2007identification,chen2013chinese}. Therefore, we further improve ZP prediction with discourse-level context, which is learned together with the joint model. \paragraph{Encoding Discourse-Level Context} Hierarchical structure networks are usually used for modelling discourse context in various natural language processing tasks such as query suggestion~\cite{Sordoni2015A}, dialogue modeling~\cite{Serban:2016} and MT~\cite{Wang:2017:EMNLP}. Therefore, we employ a hierarchical encoder~\cite{Wang:2017:EMNLP} to encode discourse-level context for NMT. More specifically, we use the previous $K$ source sentences ${\bf X} = \{{\bf x}^{-K}, \dots, {\bf x}^{-1}\}$ as the discourse information, which is summarized with a two-layer hierarchical encoder, as shown in Figure~\ref{fig-cross-sent}.
For each sentence ${\bf x}^{-k}$, we employ a {\em word-level encoder} to summarize the representation of the whole sentence: \begin{equation} {\bf h}^{-k} = \textsc{Encoder}_{word}({\bf x}^{-k}) \end{equation} After obtaining all sentence-level representations ${\bf H}^{X}=\{{\bf h}^{-K}, \dots, {\bf h}^{-1}\}$, we feed them into a {\em sentence-level encoder} to produce a vector that represents the discourse-level context: \begin{equation}\label{eqn:c} {\bf C} = \textsc{Encoder}_{sentence}({\bf H}^{X}) \end{equation} Here the summary ${\bf C}$ captures not only the dependencies between words, but also the relations between sentences. Following~\newcite{Voita:2018:ACL}, we share the parameters of the word-level encoder $\textsc{Encoder}_{word}$ with the encoder component in the standard NMT model. Note that $\textsc{Encoder}_{word}$ and $\textsc{Encoder}_{sentence}$ can be implemented as arbitrary networks, such as recurrent networks~\cite{cho2014learning}, convolutional networks~\cite{Gehring:2017:ICML}, or self-attention networks~\cite{Vaswani:2017:NIPS}. In this study, we use recurrent networks to implement both encoders. \paragraph{Integrating Discourse into ZP Prediction} We directly feed the discourse-level context to the reconstructor to improve ZP prediction. Specifically, we combine the context vector and the reconstructor state: \begin{equation} \widehat{\bf h}_t^{rec} = f_c ({\bf h}_t^{rec}, {\bf C}) \end{equation} Here $f_c(\cdot)$ is a function for combining the reconstructor states and the context vector, which is a simple concatenation (\textsc{Concat}) in this work. The revised reconstructor state $\widehat{\bf h}_t^{rec}$ is then used in Equations~(\ref{eqn:rec}) and~(\ref{eqn:label}).
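The two-level summarization above can be sketched with simple recurrences. This is an illustrative NumPy sketch only, under stated assumptions: `W_word` and `W_sent` are hypothetical recurrence matrices standing in for the learned word-level and sentence-level recurrent encoders.

```python
import numpy as np

def hierarchical_context(sentences, W_word, W_sent):
    """Summarize K previous sentences into one context vector C.

    sentences : list of (T_k, d) word-embedding matrices for x^{-K} .. x^{-1}
    W_word    : (h, h + d) hypothetical word-level recurrence matrix
    W_sent    : (h, 2h)    hypothetical sentence-level recurrence matrix
    """
    sent_reprs = []
    for X in sentences:
        h = np.zeros(W_word.shape[0])
        for x_t in X:                       # word-level recurrence over one sentence
            h = np.tanh(W_word @ np.concatenate([h, x_t]))
        sent_reprs.append(h)                # final state summarizes the sentence
    C = np.zeros(W_sent.shape[0])
    for h_k in sent_reprs:                  # sentence-level recurrence over summaries
        C = np.tanh(W_sent @ np.concatenate([C, h_k]))
    return C
```

The final state `C` thus depends on every word of every preceding sentence, mirroring how the context vector is consumed by the reconstructor.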
\section{Experiments} \begin{table*}[t] \centering \renewcommand\arraystretch{1.1} \begin{tabular}{c||l||r|l||c|c|c} \multirow{2}{*}{\bf \#} & \multirow{2}{*}{\bf Model} & \multicolumn{2}{c||}{\bf Translation} & \multicolumn{3}{c}{\bf Prediction} \\ \cline{3-7} & & \em \#Params & \em BLEU & \em P & \em R & \em F1\\ \hline\hline 1 & Baseline & 86.7M & 31.80 & n/a & n/a & n/a\\ \hline \multicolumn{7}{c}{\em External ZP Prediction~\cite{Wang:2018:AAAI}} \\ \hline 2 & ~~~+ ZP-Annotated Data & +0M & 32.67 & \multirow{2}{*}{0.67} & \multirow{2}{*}{0.65} & \multirow{2}{*}{0.66}\\ 3 & ~~~~~~~~+ Reconstruction & +73.8M & 35.08 & & &\\ \hline \multicolumn{7}{c}{\em This Work: Joint ZP Prediction and Translation} \\ \hline 4 & Joint Model & +35.6M & 36.04$^\dag$ & 0.72 & 0.68 & 0.70\\ 5 & ~~~+ {Discourse-Level Context} & +56.6M & \bf 37.11$^\dag$ & 0.76 & 0.77 & \bf 0.77\\ \end{tabular} \caption{\label{tab:2} Evaluation of ZP translation and prediction on the Chinese--English data. ``\#Params'' represents the number of parameters used in different models. ``$\dag$'' indicates a statistically significant difference ($p < 0.01$) from the best external ZP prediction model in translation performance. As seen, the proposed joint models improve performance in both ZP translation and prediction over the external ZP prediction models. } \label{tab-results} \end{table*} \subsection{Setup} We conducted translation experiments on both Chinese$\Rightarrow$English and Japanese$\Rightarrow$English translation tasks, since Chinese and Japanese are pro-drop languages while English is not. For the Chinese$\Rightarrow$English translation task, we used the data with auto-annotated ZPs~\cite{Wang:2018:AAAI}.\footnote{\url{https://github.com/longyuewangdcu/tvsub}.} The training, validation, and test sets contain 2.15M, 1.09K, and 1.15K sentence pairs, respectively. In the training data, 27\% of Chinese pronouns are ZPs, which poses difficulties for NMT models.
For the Japanese$\Rightarrow$English translation task, we respectively selected 1.03M, 1.02K, and 1.02K sentence pairs from Opensubtitle2016\footnote{\url{http://www.opensubtitles.org}.} as training, validation, and test sets~\cite{tiedemann2012parallel}. We used case-insensitive 4-gram NIST BLEU \citep{Papineni:2002} as the evaluation metric, and the {\em sign-test} \citep{Collins05} to test for statistical significance. To make a fair comparison with~\newcite{Wang:2018:AAAI}, we also implemented our approach on top of the RNN-based NMT model, which incorporates dropout \cite{hinton2012improving} on the output layer and improves the attention model by feeding the most recently generated word. For training the models, we limited the source and target vocabularies to the most frequent 30K words for Chinese$\Rightarrow$English and 20K for Japanese$\Rightarrow$English. Each model was trained on sentences of length up to a maximum of 20 words with early stopping. Mini-batches were shuffled during processing with a mini-batch size of 80. The dimension of word embedding was 620 and the hidden layer size was 1,000. We trained for 20 epochs using Adadelta~\cite{zeiler2012adadelta}, and selected the model that yielded the best performance on the validation sets. For the proposed models, the hidden layer sizes of the hierarchical model and the reconstruction model are 1,000 and 2,000, respectively. We modeled the previous three sentences as discourse-level context.\footnote{We followed \newcite{Wang:2017:EMNLP} and \newcite{Tu:2018:TACL} to use 3 previous sentences as discourse context.} \subsection{Results on Chinese$\Rightarrow$English Task} \label{sec:4.2} Table~\ref{tab-results} lists the performance of ZP translation and prediction on the Chinese$\Rightarrow$English data. The baseline (Row 1) is trained on the standard NMT model using the original parallel data ({$\bf x$}, {$\bf y$}).
In addition, we implemented two comparative models (Rows 2-3), which differ with respect to the training data used. The ``+ ZP-Annotated Data'' model was still trained as a standard NMT model but using new training instances ({$\bf \hat{x}$}, {$\bf y$}) whose source-side sentences are auto-annotated with ZPs. The ``+ Reconstruction'' is the best model reported in~\newcite{Wang:2018:AAAI}, which employs two reconstructors to reconstruct $\bf \hat{x}$ from the hidden representations of the encoder and decoder. At decoding time, ZPs cannot be annotated by the alignment method since target sentences are not available. Thus, source sentences are annotated by an external ZP prediction model, which is trained on the monolingual training instances $\bf \hat{x}$. Finally, we evaluated the two proposed models (Rows 4-5), which are introduced in Sections~\ref{sec:3.1} and \ref{sec:3.2}, respectively. \paragraph{Translation Quality} Benefiting from the explicitly annotated ZPs in the source language, the ``+ ZP-Annotated Data'' model (Row 2) outperforms the baseline system built on the original data where the pronouns are missing (\emph{i.e.,}\xspace +0.87 BLEU point). This illustrates that explicitly recalling the translation of ZPs at training time helps produce better translations. Furthermore, the ``+ Reconstruction'' approach (Row 3) outperforms the baseline and ``+ ZP-Annotated Data'' models by +3.28 and +2.41 BLEU points, respectively, which indicates that explicitly handling ZPs with a reconstruction model can better address ZP problems. The proposed models consistently outperform the other models in all cases, demonstrating the superiority of the joint learning of ZP prediction and translation. Specifically, the ``Joint Model'' (Row 4) significantly improves translation performance by +4.24 BLEU points over the baseline model. In addition, this joint approach also outperforms the two comparative models ``+ ZP-Annotated Data'' and ``+ Reconstruction'' by +3.37 and +0.96 BLEU points, respectively.
We attribute the improvement over external ZP prediction to: 1) releasing the reliance on external ZP prediction models, which greatly alleviates error propagation problems; and 2) the joint learning of ZP prediction and translation, which is able to guide the related parameters to learn better latent representations. Furthermore, introducing discourse-level context (Row 5) accumulatively improves translation performance, and significantly outperforms the joint model by +1.07 BLEU points. More parameters may capture more information, at the cost of posing difficulties to training. ~\newcite{Wang:2018:AAAI} leverage two separate reconstructors with hidden state sizes of 2,000 and 1,000, respectively. Accordingly, their models introduce a large number of parameters. In contrast, we set the hidden size of the reconstructor to be 1,000, which greatly reduces the number of newly introduced parameters (+35.6M vs. +73.8M). Modeling discourse-level context further introduces +21M new parameters, which is reasonable compared with previous work. Our best model variation outperforms that of external ZP prediction by over 2 BLEU points with fewer parameters (143.3M vs. 160.5M), showing that the improvements are attributable to the stronger modeling capacity rather than more parameters. \begin{table}[t] \centering \renewcommand\arraystretch{1.1} \begin{tabular}{l|c|c} \bf Model & \bf BLEU & \bf $\bigtriangleup$ \\ \hline Baseline & 19.94 & --\\ External ZP Prediction & 20.86 & +0.92 \\ \hline Joint Model & 21.39 & +1.45 \\ ~~+ Discourse-Level Context & {\textbf{22.00}} & +2.06 \\ \end{tabular} \caption{Translation quality on Japanese--English data. As seen, the proposed models also significantly improve translation performance, which shares the same trend with that on Chinese--English translation.} \label{tab-results-jaen} \end{table} \paragraph{ZP Prediction Accuracy} The joint model improves prediction accuracy as expected, which we attribute to the leveraging of useful translation information.
Incorporating the discourse-level context further improves ZP prediction, and the best performance is 11\% higher than that of the external ZP prediction model. These results confirm our claim that joint learning of ZP prediction and translation can benefit both components by allowing them to interact with each other. \subsection{Results on Japanese$\Rightarrow$English Task} Table \ref{tab-results-jaen} lists the results. We compare our models with the best external ZP prediction approach. As seen, our models also significantly improve translation performance, demonstrating the effectiveness and universality of the proposed approach. The improvement on Japanese$\Rightarrow$English translation is lower than that on Chinese$\Rightarrow$English, showing that ZP prediction and translation are more challenging for Japanese. The reason may be twofold: 1) the Japanese language has a larger number of pronoun variations borrowed from archaism, which leads to more difficulties in learning ZPs; 2) Japanese has a subject-object-verb (SOV) structure while English has a subject-verb-object (SVO) structure, which poses difficulties for ZP annotation via the alignment method. \subsection{Analysis} \label{sec:4.3} We conducted extensive analyses on Chinese$\Rightarrow$English data to better understand our models in terms of the effect of external ZP annotation and different types of ZP errors. \begin{table}[t] \centering \renewcommand\arraystretch{1.1} \begin{tabular}{l|c|c|c} \multirow{2}{*}{\bf Model} & \multicolumn{3}{c}{\bf ZP-Annotated Input} \\ \cline{2-4} & \checkmark & \texttimes & $\bigtriangledown$ \\ \hline \hline Baseline & \multicolumn{2}{c|}{31.80} & -- \\ \hdashline External ZP Predict. & 35.08 & 34.02 & -1.06\\ \hline Joint Model & 36.04 & 35.93 & -0.11\\ ~~~+ Discourse & \bf 37.11 & \bf 36.51 & -0.60\\ \end{tabular} \caption{\label{tab:rec} Translation results when no ZP-annotated input is used in decoding by {\em removing the reconstructor component}.
``$\bigtriangledown$'' denotes the performance gap between using the annotated input (``\checkmark'') and not (``\texttimes'').} \end{table} \paragraph{Reliance on Externally ZP-Annotated Input} Some researchers may argue that previous approaches~\cite{Wang:2018:AAAI} are also able to release the reliance on externally annotated input by removing the reconstructor component. Table~\ref{tab:rec} lists the results. Without ZP-annotated input in decoding, all approaches can still outperform the baseline model, by benefiting from better intermediate representations that contain the necessary ZP information. Compared with the reconstruction-based models, however, removing the reconstruction components leads to a decrease in translation quality. As seen, the BLEU score of the best ``External ZP Prediction'' model drops dramatically by 1.06 points, showing that this approach is heavily dependent on the results of external ZP annotations. The performances of the proposed models decrease by only 0.1$\sim$0.6 BLEU point. This indicates that our models are compatible with the standard encoder-decoder-reconstructor framework, and thus enjoy an additional benefit of re-scoring translation hypotheses in testing with reconstruction scores. All the results together prove the superiority of the proposed unified framework for ZP translation. \begin{table}[t] \centering \renewcommand\arraystretch{1.1} \begin{tabular}{l|c|c} \bf Model & \bf BLEU & \bf $\bigtriangleup$ \\ \hline Baseline & 31.80 & --\\ ~~~+ Discourse$\Rightarrow$Decoder & 32.34 & +0.54\\ \hline Baseline + ZP-Anno. & 32.67 & \\ ~~~+ Discourse$\Rightarrow$Decoder & 32.55 & -0.12\\ \hline Joint Model & 36.04 & --\\ ~~~+ Discourse$\Rightarrow$Decoder & 34.66 & -1.38\\ \end{tabular} \caption{Translation results when transforming the contextual representation to the decoder of different models.
Incorporating discourse-level context does not always lead to an improvement in translation performance.} \label{tab-results-discourse} \end{table} \paragraph{Effect of Discourse-Level Context} Recent studies revealed that inter-sentential context can implicitly help to tackle anaphora resolution in NMT architectures~\cite{jean2017neural,bawden2018evaluating,Voita:2018:ACL}. Some may argue that document-level architectures are strong enough to alleviate ZP problems for NMT. To answer this concern, we compared with ``+ Discourse$\Rightarrow$Decoder'' models, which transform the contextual representation to the decoder part of different models. In this way, the discourse-level context can benefit both the generation of the translation and ZP prediction. As shown in Table~\ref{tab-results-discourse}, directly incorporating inter-sentential context into the standard NMT model (one of the document-level NMT architectures) improves translation quality by +0.54 BLEU point over the baseline. However, this integration mechanism does not work well in the ``Baseline + ZP-Annotation'' and our ``Joint'' models, which decrease by 0.12 and 1.38 BLEU points, respectively. One potential problem with this strategy is that the propagation path is longer: ${\bf C} \rightarrow {\bf h}^{dec} \rightarrow {\bf h}^{rec} \rightarrow {\bf zp}$, which may suffer from a vanishing effect. This also confirms our hypothesis that discourse-level context benefits ZP prediction more than ZP translation. Therefore, we incorporate the discourse-level context into the reconstructor instead of the decoder. \begin{table}[t] \renewcommand\arraystretch{1.1} \centering \begin{tabular}{l|c|ccc|c} {\bf Model} & \bf Error & \bf Sub. & \bf Obj. & \bf Dum.
& \bf All\\ \hline \textsc{Base.} & Total & 112 & 41 & 45 & 198 \\ \hline \multirow{3}{*}{\textsc{Exte.}} & Fixed & 50 & 34 & 33 & 117 \\ & New & 11 & 14 & 7 & 32 \\ & Total & 73 & 21 & 19 & 113\\ \hline \multirow{3}{*}{\textsc{Join.}} & Fixed & 61 & 35 & 37 & 133 \\ & New & 8 & 11 & 7 & 26 \\ & Total & 59 & 17 & 15 & 91\\ \hline \multirow{3}{*}{\textsc{~~+Dis.}} & Fixed & 70 & 39 & 38 & 147 \\ & New & 7 & 9 & 7 & 23 \\ & Total & \bf 49 & \bf 11 & \bf 14 & \bf 74 \\ \end{tabular} \caption{\label{tab:mannual} Translation error statistics. The ZP types ``Sub.'', ``Obj.'' and ``Dum.'' denote errors caused by subjective, objective and dummy pronouns, respectively. The models ``Base.'', ``Exte.'', ``Join.'' and ``+Dis.'' denote the ``Baseline'', ``+ Reconstruction'', ``Joint Model'' and ``+ Discourse-Level Context'' models. {\bf Bold} numbers denote the fewest errors in each category.} \end{table} \paragraph{Manual Evaluation on Translation Errors} We finally investigate how the proposed approaches improve the translation through human evaluation. We randomly select 500 sentences from the test set. As shown in Table~\ref{tab:mannual}, we count how many translation errors caused by different types of ZPs (\emph{i.e.,}\xspace ``Subjective'', ``Objective'' and ``Dummy''\footnote{In pro-drop languages, a dummy pronoun is used to fulfill the syntactic requirements without providing explicit meaning (e.g. ``it'').}) are fixed (``Fixed'') and newly generated (``New'') by different models. All the models can fix different amounts of ZP errors in terms of completeness and correctness, which is consistent with the translation results reported in Table~\ref{tab-results}. This confirms that our improvement in terms of BLEU scores indeed comes from alleviating translation errors caused by ZPs. Among them, the proposed model ``\textsc{+Dis.}'' performs best, fixing 74\% of the ZP errors while introducing only 12\% new errors.
In addition, we found that subjective ZPs are more difficult to predict and translate since they usually occur in imperative sentences, and ZP prediction needs to understand the intention of speakers. The ``\textsc{Exte.}'' model fixes only 45\% of subjective ZP errors and introduces 10\% new errors by predicting wrong ZPs. However, the proposed joint model works better, fixing 54\% of errors while introducing only 7\% new errors. Predicting objective ZPs requires inter-sentential context; thus, our ``\textsc{+Dis.}'' model is able to fix more objective ZP errors (95\% vs. 82\%) while introducing fewer new errors (22\% vs. 34\%) than ``\textsc{Exte.}''. \begin{CJK}{UTF8}{gbsn} \begin{table}[t] \renewcommand\arraystretch{1.1} \centering \begin{tabular}{r|l} \hline \multicolumn{2}{c}{\bf Fixed Error} \\ \hline \textsc{Pre.} & 等 我 搬进 来, 能 买 台 电视 吗?\\ \textsc{Inp.} & 当然 可以, 乔伊 不让 {(你)} 买 {(它)}?\\ \textsc{Ref.} & Sure. Joey wouldn't let you buy it?\\ \textsc{Exte.} & Of course. Sure, Joey won't get {\em \color{blue}it}?\\ \textsc{Join.} & Sure. Joey won't let {\em \color{blue}us} buy {\em \color{blue}one}?\\ \textsc{+Dis.} & Sure. Joey wouldn't let {\bf \color{red}you} buy {\bf \color{red}it}?\\ \hline \multicolumn{2}{c}{\bf Non-Fixed Error} \\ \hline \textsc{Pre.} & 我 和 露西 只是 要 搬 到 对门。\\ \textsc{Inp.} & 我们 一 分手 {(我)} 就 搬 回去。\\ \textsc{Ref.} & Once we broke up, I'll move back.\\ \textsc{Exte.} & Once we broke up, {\em \color{blue}she}'ll move back.\\ \textsc{Join.} & Once we broke up, {\em \color{blue}we} moved back.\\ \textsc{+Dis.} & Once we broke up, {\em \color{blue}we}'ll move back.\\ \hline \end{tabular} \caption{\label{fig-example} Example translations where pronouns in brackets are dropped in the original inputs (``\textsc{Inp.}'') but labeled by humans according to the references (``\textsc{Ref.}'') and the previous sentence (``\textsc{Pre.}''). We italicize some {\em \color{blue} mis-translated} errors and highlight the {\bf \color{red} correct} ones in bold.
} \end{table} \end{CJK} \paragraph{Case Study} \begin{CJK}{UTF8}{gbsn} Table~\ref{fig-example} shows two typical examples in which pronouns are mistakenly translated by the strong baseline (``External ZP Prediction'') model~\cite{Wang:2018:AAAI}: one error is fixed by our model while the other is not. In the ``Fixed Error'' case, the dropped word ``它 (\textit{it})'' is an anaphoric ZP whose antecedent is the noun ``电视 (\textit{television})'' in the previous sentence, while the dropped word ``你 (\textit{you})'' is a non-anaphoric ZP that depends upon the speaker or listener. As seen, our ``\textsc{Join.}'' model performs better than the ``\textsc{Exte.}'' model because the two ZP positions are syntactically recalled in the target side, showing that the joint approach has a better capability of utilizing intra-sentential information for identifying ZPs. Besides, our ``\textsc{+Dis.}'' model can semantically fix the error by predicting the correct ZP words, demonstrating that inter-sentential context can aid in recovering such complex ZPs. However, as shown in the ``Non-Fixed Error'' case, there are still some ZPs that cannot be precisely predicted due to misunderstanding the intentions of utterances. Thus, exploiting dialogue focus for ZP translation is our future work~\cite{rao2015dialogue}. \end{CJK} \section{Related Work} \paragraph{ZP Prediction and Translation} ZP resolution is a challenging task which requires lexical, syntactic, and discourse knowledge. Previous studies have been conducted to improve the performance of ZP resolution for different pro-drop languages~\cite{kong2010tree,chen2013chinese,park2015zero,yin2017chinese}. However, directly using the results of external ZP resolution systems for the translation task shows limited improvements~\cite{chung2010effects,Nagard:2010:ACL,Taira:2012:SSSST,xiang2013enlisting}, since such external systems are trained on small-scale data that is non-homologous to MT.
To overcome the data-level gap, \newcite{Wang:2016:NAACL} proposed an automatic approach of ZP annotation by utilizing the alignment matrix from large parallel data. By using the translation-oriented ZP corpus, they exploited different approaches to alleviate ZP problems for translation models~\cite{Wang:2016:NAACL,Wang:2018:AAAI,Wang:2018:EMNLP}. Note that \newcite{Wang:2018:EMNLP} also explored addressing the problem of error propagation by jointly predicting ZP words given ZP position information. However, this method still relies on an external model that predicts ZP positions at decoding time. Instead, this work proposes a unified model without any additional ZP annotations in decoding, thus releasing the reliance on external ZP prediction in practice. \paragraph{Discourse-Aware NMT} In recent years, context-aware architectures have been well studied for NMT~\cite{Wang:2017:EMNLP,jean2017does,Tu:2018:TACL}. \newcite{Wang:2017:EMNLP} proposed hierarchical recurrent neural networks to summarize inter-sentential context from previous sentences and then integrate it into a standard NMT model with different strategies. \newcite{jean2017does} introduced an additional encoder and attention mechanism to encode and select part of the previous source sentence for generating each target word. Besides, \newcite{Tu:2018:TACL} proposed to augment NMT models with a cache-like memory network, which stores the translation history in terms of bilingual hidden representations at decoding steps of previous sentences. They also evaluated the above three models on different domains of data, showing that the hierarchical encoder performs comparably with the multi-attention model. More recently, some researchers began to investigate the effects of context-aware NMT on cross-lingual pronoun prediction~\cite{jean2017neural,bawden2018evaluating,Voita:2018:ACL}. They mainly exploited general anaphora in non-pro-drop languages such as English$\Rightarrow$Russian.
\section{Conclusion} \label{sec:6} In this work, we proposed a unified model to jointly predict and translate ZPs by leveraging multi-task learning. We also employed hierarchical neural networks to exploit discourse-level information for better ZP prediction. Experimental results on both Chinese$\Rightarrow$English and Japanese$\Rightarrow$English data show that the two proposed approaches accumulatively improve both the translation performance and the ZP prediction accuracy. Our models also outperform the existing ZP translation models in previous work, and achieve a new state of the art on the widely-used subtitle corpus. Manual evaluation confirms that the performance improvement comes from the alleviation of translation errors, which are mainly caused by subjective, objective, as well as discourse-related ZPs. There are two potential extensions to our work. First, we will evaluate our method on other implication phenomena (also called unaligned words~\cite{takeno2017controlling}) such as tenses and article words for NMT. Second, we will investigate the impact of different context-aware models on ZP translation, including multi-attention~\cite{jean2017neural} and the context-aware Transformer~\cite{Voita:2018:ACL}.
\section{Introduction} In a turbulent flow, various processes like the energy cascade, intermittency, and fluid element deformation are strongly related to the small-scale velocity gradient field. Various studies based on experiments, direct numerical simulations, and simple dynamical models have been performed to understand the dynamics of the velocity gradient tensor \cite[]{luthi2005lagrangian, ashurst1987alignment, vieillefosse1982local,cantwell1992exact}. In continuation of these works, several other studies have been reported as well \cite[]{ashurst1987alignment, ashurst1987pressure, girimaji1990diffusion, ohkitani1993eigenvalue, pumir1994numerical, girimaji1995modified, o2005relationship, chevillard2006lagrangian, da2008invariants, chevillard2011lagrangian, soria1994study, pirozzoli2004direct_b, suman2012velocity, wang2012flow, vaghefi2015local, danish2016influence, parashar2017, parasharJFM, parasharIJHFF}. The pressure-Hessian and the viscous tensor are the two important processes governing the evolution of the velocity gradient tensor. These processes are inherently non-local in nature and are unclosed from a mathematical viewpoint. \citet{chevillard2008} developed the recent fluid deformation closure model (RFDM) for modelling the viscous tensor and the pressure-Hessian. Although the RFD model robustly captures various one-time statistics of the viscous tensor, it has various inherent limitations in predicting the pressure-Hessian tensor (discussed in section \ref{s:rfdm}). Hence, in this paper, we focus on modelling the pressure-Hessian tensor using velocity gradient information. Recently, an improved model, the recent fluid deformation of Gaussian fields (RFDG) model, has been proposed by \citet{johnson2016}. It is an improvement over the RFD model in terms of predicting various one-time statistics of the velocity gradient tensor. However, the authors \cite[]{johnson2016} did not focus on any relevant statistics of the pressure-Hessian tensor.
For this reason, in this work, all our comparisons will be made against the RFD model of \citet{chevillard2008}. In the recent past, machine learning has gained popularity in the turbulence research community. The earliest contribution in the field of machine-learning-aided turbulence research was made by \citet{duraisamy2014}, where the authors developed an intermittency-transport-based model for bypass transition using machine learning and inverse modelling. Since then, a large number of researchers have tried to model various turbulence processes using machine learning models \cite[]{duraisamy2015, brendan2015, brendan2015b, parish2016,zhang2015,jack2017,duraisamy2019}. \citet{ling2016} employed a deep neural network to directly model the Reynolds stress anisotropy tensor using the strain-rate and rotation-rate tensors. In doing so, they developed a novel tensor basis neural network (TBNN), which can be employed to map a given tensor from known input tensors. The TBNN has been shown to achieve superior performance by embedding tensor invariance properties in the network itself. Later, \citet{fang2018} used the TBNN for turbulent channel flow and compared their results against standard turbulence models. \citet{sotgiu2018} developed a new framework in conjunction with the TBNN for predicting turbulent heat fluxes. Further, \citet{geneva2019} developed a Bayesian tensor basis neural network for predicting the Reynolds stress anisotropy tensor. As mentioned earlier, the recent fluid deformation closure model (RFDM) \cite[]{chevillard2008} is considered to be the state-of-the-art model for pressure-Hessian calculation. However, the pressure-Hessian predicted by the RFD model shows nonphysical alignment tendencies with the strain-rate tensor (explained in section \ref{s:rfdm}). Any further improvement in the existing model may require a deeper understanding of the complex relationship between the pressure-Hessian and velocity gradients.
For this task, we employ deep learning, which can potentially decipher any functional relationship that exists between the quantities of interest. The tensor basis neural network (TBNN) developed by \citet{ling2016} has already been shown to map tensorial quantities robustly. In this work, we use high-resolution incompressible isotropic turbulence data from the Johns Hopkins University turbulence database, JHTD \cite[]{JHUTD_1, JHUTD_2} (\url{http://turbulence.pha.jhu.edu}) to train a neural network model inspired by the TBNN. Further, we show that appropriate normalization of the input data and a few modifications to the network lead to significant improvements in the alignment characteristics of the predicted output. The predictions made by the TBNN are compared against two different isotropic turbulence datasets that were not used for training the network$-$(i) Taylor Reynolds number of 433, JHTD \cite[]{JHUTD_1, JHUTD_2} and (ii) isotropic turbulence at a Taylor Reynolds number of 315 (UP Madrid database, \url{https://torroja.dmt.upm.es/turbdata/Isotropic}) \cite[]{cardesa2017}. To demonstrate the generality of the predicted solution in terms of alignment statistics for other types of flows, we also test the trained model on channel flow data at a friction Reynolds number of 1000 (UT Austin and JHU turbulence database) \cite[]{JHUTD_3}. Further evaluation of the neural network output helps us retrieve ten unique coefficients of the tensor bases of the strain-rate and rotation-rate tensors, a linear combination of which can be used to predict the pressure-Hessian tensor robustly. This paper is organized into six sections. In section \ref{s:goveq} we present the governing equations. In section \ref{s:rfdm}, we explain the limitations of the RFD model. In section \ref{s:NN}, we present the details of the tensor basis neural network architecture employed for this study. The analysis of the predicted solution from the TBNN is also presented in section \ref{s:NN}.
Further, in section \ref{s:NN_mod}, we explain the modifications incorporated in the TBNN and compare its results against the state-of-the-art RFD model. Section \ref{s:summary} concludes the paper with a brief summary. \section{Governing Equations} \label{s:goveq} The governing equations of an incompressible flow field comprise the continuity equation, the momentum equation and the equation of state of a perfect gas: \begin{align} \frac{\partial{V_k}}{\partial{x_k}}&=0; \label{eq:mass_con}\\ \frac{\partial{V_i}}{\partial{t}}+V_k\frac{\partial{V_i}}{\partial{x_k}}&=-\frac{1}{\rho}\frac{\partial{p}}{\partial{x_i}}+\frac{\mu}{\rho}\frac{\partial^2 V_i}{\partial{x_k}\partial{x_k}}; \label{eq:moment_con}\\ p&=\rho RT \label{eq:state} \end{align} where $V_i$ and $x_i$ represent the velocity and position, respectively. Density, pressure and temperature are represented by $\rho$, $p$ and $T$, while $R$ denotes the gas constant. The velocity gradient tensor is defined as: $$A_{ij} \equiv \frac{\partial{V_i}}{\partial{x_j}}.$$ Taking the gradient of the momentum equation (\ref{eq:moment_con}), the exact evolution equation of $A_{ij}$ can be derived: \begin{align} \frac{DA_{ij}}{Dt}=-A_{ik}A_{kj} - \underbrace{\frac{\partial^2p}{\partial{x_i}\partial{x_j}}}_{\mathcal{P}_{ij}} + \underbrace{\nu\frac{\partial^2A_{ij}}{\partial{x_k}\partial{x_k}}}_{\Upsilon_{ij}} \label{eq:exact_evolA} \end{align} where $\boldsymbol{\mathcal{P}}$ and $\boldsymbol{\Upsilon}$ represent the pressure-Hessian and the viscous Laplacian governing the evolution of the velocity gradient tensor. The rate of change of $A_{ij}$ following a fluid particle is represented using the substantial derivative: $D / Dt \left(\equiv \partial / \partial{t} + V_{k} \partial / \partial{x_k} \right)$. \section{Limitations of the RFD model for pressure-Hessian calculation} \label{s:rfdm} The state-of-the-art model for pressure-Hessian calculation is the recent fluid deformation closure model (RFDM) developed by \citet{chevillard2008}.
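As a quick numerical illustration of these definitions, the velocity gradient tensor can be decomposed into its strain-rate and rotation-rate parts; for an incompressible flow it is trace-free. A minimal numpy sketch (the sample values of $A$ are arbitrary, not DNS data):

```python
import numpy as np

# Arbitrary sample velocity gradient tensor A_ij = dV_i/dx_j;
# the trace is removed to mimic incompressibility (continuity equation).
A = np.array([[0.4, -1.1, 0.3],
              [0.7,  0.2, -0.5],
              [-0.2, 0.9, -0.6]])
A -= np.trace(A) / 3.0 * np.eye(3)   # enforce A_kk = 0

S = 0.5 * (A + A.T)                  # strain-rate tensor (symmetric part)
R = 0.5 * (A - A.T)                  # rotation-rate tensor (antisymmetric part)

# The closed local term in the evolution equation: -A_ik A_kj
self_stretching = -A @ A
```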
The RFD pressure-Hessian ($\boldsymbol{\mathcal{P}^{RFD}}$) is expressed as: \begin{equation} \boldsymbol{\mathcal{P}^{RFD}} = -\frac{\{\boldsymbol{A}^2\}}{\{\boldsymbol{C_{\tau _k}^{-1}}\}} \boldsymbol{C_{\tau_k}^{-1}}, \end{equation} where $\boldsymbol{C_{\tau_k}}$ is the right Cauchy-Green tensor modelled as $\boldsymbol{C_{\tau_k}}=e^{\tau_k \boldsymbol{A}}e^{\tau_k \boldsymbol{A^T}}$, and the symbol $\{\}$ represents the trace of the tensor. The pressure-Hessian predicted by the RFD model has some inherent inconsistencies as compared to the actual pressure-Hessian obtained from DNS. These limitations are listed below: \begin{enumerate} \item \textit{$\boldsymbol{\mathcal{P}^{RFD}}$ is always definite (either positive- or negative-definite):} It is evident that $\boldsymbol{C_{\tau_k}}$ is a positive-definite matrix, as it is the product of a real invertible matrix ($e^{\tau_k \boldsymbol{A}}$) and its transpose. Since the inverse of a positive-definite matrix is also positive-definite, $\boldsymbol{C_{\tau_k}^{-1}}$ is always positive-definite, and $\boldsymbol{\mathcal{P}^{RFD}}$ is guaranteed to be either positive-definite or negative-definite, depending on the sign of $-\frac{\{\boldsymbol{A}^2\}}{\{\boldsymbol{C_{\tau _k}^{-1}}\}}$. Therefore, the eigenvalues of $\boldsymbol{\mathcal{P}^{RFD}}$ are either all negative or all positive. This behavior is nonphysical, since the governing equations do not impose any such restriction on $\boldsymbol{\mathcal{P}}$. $\boldsymbol{\mathcal{P}}$ is real symmetric by nature and, in practice, has at least one positive and one negative eigenvalue most of the time.
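The definiteness argument above is easy to verify numerically. The sketch below (with $\tau_k$ set to 1 and an arbitrary trace-free $A$, purely for illustration) builds $\boldsymbol{C_{\tau_k}}$ through a truncated Taylor-series matrix exponential and checks that all eigenvalues of the modelled pressure-Hessian share one sign:

```python
import numpy as np

def expm_taylor(M, n_terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for small M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, n_terms):
        term = term @ M / k
        out = out + term
    return out

tau_k = 1.0                                    # Kolmogorov time scale, set to 1 here
A = np.array([[0.4, -1.1, 0.3],
              [0.7,  0.2, -0.5],
              [-0.2, 0.9, -0.6]])
A -= np.trace(A) / 3.0 * np.eye(3)             # trace-free (incompressible)

E = expm_taylor(tau_k * A)
C = E @ E.T                                    # right Cauchy-Green tensor: SPD
C_inv = np.linalg.inv(C)

# RFD closure: P = -{A^2}/{C^-1} * C^-1, so P inherits definiteness from C^-1
P_rfd = -np.trace(A @ A) / np.trace(C_inv) * C_inv
eigvals = np.linalg.eigvalsh(P_rfd)            # all of one sign: P_rfd is definite
```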
\begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_rfdm.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_rfdm.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_rfdm.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_dns.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_dns.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_dns.eps}} \caption{Alignment of $\boldsymbol{\mathcal{P^{RFD}}}$-eigenvectors ($\boldsymbol{p_i}$) with $\boldsymbol{S}$-eigenvectors ($\boldsymbol{s_i}$). Here, $i$ (= $\alpha$, $\beta$ or $\gamma$) denotes the three eigenvectors corresponding to the three eigenvalues $\alpha>\beta>\gamma$.} \label{fig:s_dot_p_rfdm} \end{figure} \item \textit{In strain-dominated regions, the eigenvectors of $\boldsymbol{\mathcal{P}^{RFD}}$ coincide with the strain-rate eigenvectors:} \par As discussed above, $\boldsymbol{\mathcal{P}^{RFD}}$ is always either negative-definite or positive-definite. Further, it is evident that if $\boldsymbol{A}$ is close to being symmetric (strain-dominant, $\boldsymbol{A}\approx\boldsymbol{S}$), the eigenvectors of $\boldsymbol{\mathcal{P}^{RFD}}$ will be approximately parallel or perpendicular to the eigenvectors of $\boldsymbol{S}$ itself. Hence, in strain-dominated regions $\boldsymbol{\mathcal{P}^{RFD}}$ is expected to show biased alignment towards the strain-rate eigenvectors, which is nonphysical. In order to verify this claim, we show the alignment of the eigenvectors of $\boldsymbol{\mathcal{P}}$ and $\boldsymbol{\mathcal{P}^{RFD}}$ with the strain-rate eigenvectors in Figure \ref{fig:s_dot_p_rfdm}. In Figure \ref{fig:s_dot_p_rfdm}(a,b,c) we show the PDF (probability density function) of the alignment of the eigenvectors of $\boldsymbol{\mathcal{P}^{RFD}}$ with the strain-rate eigenvectors, and in Figure \ref{fig:s_dot_p_rfdm}(d,e,f) we show the alignment of $\boldsymbol{\mathcal{P}}$-eigenvectors with $\boldsymbol{S}$-eigenvectors for comparison.
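The alignment PDFs in Figure \ref{fig:s_dot_p_rfdm} are built from cosines of the angles between eigenvector pairs. A minimal numpy sketch for a single fluid particle (the matrix entries are illustrative, not DNS values):

```python
import numpy as np

S = np.array([[ 0.5,  0.2, -0.1],
              [ 0.2, -0.3,  0.4],
              [-0.1,  0.4, -0.2]])   # symmetric strain-rate sample
P = np.array([[ 1.0, -0.4,  0.2],
              [-0.4,  0.6,  0.1],
              [ 0.2,  0.1, -1.6]])   # symmetric pressure-Hessian sample

# eigh returns eigenvalues in ascending order; flip columns so the
# first eigenvector corresponds to alpha (largest eigenvalue)
sw, sv = np.linalg.eigh(S)
pw, pv = np.linalg.eigh(P)
sv, pv = sv[:, ::-1], pv[:, ::-1]

# |cos| of the angle between each (p_i, s_j) eigenvector pair;
# histogramming these over many particles gives the alignment PDFs
cosines = np.abs(pv.T @ sv)
```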
It can be observed that for a large percentage of particles the $\boldsymbol{\mathcal{P^{RFD}}}$-eigenvectors are either parallel or perpendicular to the $\boldsymbol{S}$-eigenvectors (Figure \ref{fig:s_dot_p_rfdm}(a,b,c)). On the other hand, the eigenvectors of $\boldsymbol{\mathcal{P}}$ obtained from DNS show no such alignment tendencies. \end{enumerate} \section{Using neural networks to model the pressure-Hessian} \label{s:NN} It is evident from the discussion in the previous section that the functional relationship between the pressure-Hessian and the local velocity gradient tensor, if any, is far too complex to be addressed by simple algebraic models. In general, the evolution of the pressure-Hessian of individual fluid particles is expected to be governed by a large spectrum of flow quantities, their higher derivatives and their evolutionary history as well. Nevertheless, in this work, we intend to explore the maximum potential of local velocity gradients to describe the pressure-Hessian accurately. For this purpose, we employ deep neural networks. Given a sufficiently large network and sufficient training data, neural networks can potentially decipher the functional relationship (if any) existing between the quantities of interest. With this motivation, we resort to neural networks to provide a better mapping between the pressure-Hessian and the velocity-gradient tensor. \subsection{Neural network architecture} In this work, we employ the tensor basis neural network (TBNN) developed by \citet{ling2016}. This architecture has been shown to be robust for mapping tensors. The TBNN increases the representation power of the neural network by embedding knowledge of the tensor bases ($\boldsymbol{T}^{i}$) and invariants ($\lambda^{i}$) in the network itself.
The TBNN builds on the Cayley-Hamilton theorem, which implies that any function derived from a given set of tensors alone can be expressed as a linear combination of the integrity basis \cite[]{spencer1958} of those tensors. The predictions made by the TBNN are essentially a linear combination of the integrity bases ($\boldsymbol{T}^{i}$) of the input tensors. Hence, the TBNN explores the full spectrum of all the mappings that any input tensor can offer, by enforcing the output of the network to be a linear combination of its integrity bases. Further, the TBNN has embedded rotational invariance, which ensures that the predictions made by the network are independent of the orientation of the coordinate system. If the input tensors are expressed in a rotated coordinate system, the predicted output will also get rotated accordingly. Hence, the TBNN predicts the same output tensor irrespective of the orientation of the coordinate system. Figure \ref{fig:tbnn} presents a brief overview of the TBNN. \begin{figure}[bt] \centering \includegraphics[width=17cm]{TBNN.pdf} \caption{Schematic of the TBNN network. $W^{(i)}$ and $b^{(i)}$ are the weight matrix and the bias vector of the $i^{th}$ layer. Both $W^{(i)}$ and $b^{(i)}$ are the learnable parameters of the neural network, which are optimized using the RMSprop optimizer \cite{RMSprop}.} \label{fig:tbnn} \end{figure} For an incompressible flow field, \citet{pope1975} derived the ten trace-free integrity bases ($\boldsymbol{T}^i$) and five independent invariants ($\lambda^i$) of the strain-rate ($\boldsymbol{S}$) and rotation-rate ($\boldsymbol{R}$) tensors.
These tensor bases and invariants are listed below: \begin{align} \boldsymbol{T}^1 &= \boldsymbol{S}, \ &\boldsymbol{T}^2 &= \boldsymbol{SR}-\boldsymbol{RS}, \nonumber\\ \boldsymbol{T}^3 &= \boldsymbol{S}^2-\frac{1}{3}\boldsymbol{I}\{\boldsymbol{S}^2\}, \ &\boldsymbol{T}^4 &= \boldsymbol{R}^2-\frac{1}{3}\boldsymbol{I}\{\boldsymbol{R}^2\}, \nonumber\\ \boldsymbol{T}^5 &= \boldsymbol{RS}^2-\boldsymbol{S}^2\boldsymbol{R}, \ &\boldsymbol{T}^6 &= \boldsymbol{R}^2\boldsymbol{S}+\boldsymbol{SR}^2-\frac{2}{3}\boldsymbol{I}\{\boldsymbol{SR}^2\}, \nonumber\\ \boldsymbol{T}^7 &= \boldsymbol{RSR}^2-\boldsymbol{R}^2\boldsymbol{SR}, \ &\boldsymbol{T}^8 &= \boldsymbol{SRS}^2-\boldsymbol{S}^2\boldsymbol{RS}, \nonumber\\ \boldsymbol{T}^9 &= \boldsymbol{R}^2\boldsymbol{S}^2+\boldsymbol{S}^2\boldsymbol{R}^2-\frac{2}{3}\boldsymbol{I}\{\boldsymbol{S}^2\boldsymbol{R}^2\}, \ &\boldsymbol{T}^{10} &= \boldsymbol{RS}^2\boldsymbol{R}^2-\boldsymbol{R}^2\boldsymbol{S}^2\boldsymbol{R}; \label{eq:basis} \end{align} \begin{align} \lambda^1 = \{\boldsymbol{S}^2\}, \hspace{0.65cm} \lambda^2 = \{\boldsymbol{R}^2\}, \hspace{0.65cm} \lambda^3 = \{\boldsymbol{S}^3\}, \hspace{0.65cm} \lambda^4 = \{\boldsymbol{R}^2\boldsymbol{S}\}, \hspace{0.65cm} \lambda^5 = \{\boldsymbol{R}^2\boldsymbol{S}^2\}. \label{eq:invariants} \end{align} The symbol $\{\}$ represents the trace of the tensor. A linear combination of these ten tensor bases ($\boldsymbol{T}^i$) can represent any trace-free tensor that is directly derived from $\boldsymbol{S}$ and $\boldsymbol{R}$. Since the exact expression for the trace of the pressure-Hessian is already known: \begin{equation} \{\boldsymbol{\mathcal{P}}\} = -A_{ik}A_{ki}, \end{equation} these trace-free integrity bases ($\boldsymbol{T}^i$) can be readily used to model the trace-free part of the pressure-Hessian using the TBNN. We use the symbol $\boldsymbol{\mathcal{P}_{tf}}$ to denote the trace-free part of $\boldsymbol{\mathcal{P}}$.
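Equations \ref{eq:basis} and \ref{eq:invariants} translate directly into code. The numpy sketch below computes the ten bases and five invariants for an arbitrary trace-free velocity gradient and checks that every basis tensor is symmetric and trace-free:

```python
import numpy as np

def tensor_bases_and_invariants(S, R):
    """Pope's ten trace-free bases T^i and five invariants lambda^i of (S, R)."""
    I = np.eye(3)
    tr = np.trace
    S2, R2 = S @ S, R @ R
    T = [S,
         S @ R - R @ S,
         S2 - I * tr(S2) / 3.0,
         R2 - I * tr(R2) / 3.0,
         R @ S2 - S2 @ R,
         R2 @ S + S @ R2 - 2.0 / 3.0 * I * tr(S @ R2),
         R @ S @ R2 - R2 @ S @ R,
         S @ R @ S2 - S2 @ R @ S,
         R2 @ S2 + S2 @ R2 - 2.0 / 3.0 * I * tr(S2 @ R2),
         R @ S2 @ R2 - R2 @ S2 @ R]
    lam = [tr(S2), tr(R2), tr(S @ S2), tr(R2 @ S), tr(R2 @ S2)]
    return T, lam

# Arbitrary trace-free sample gradient, split into S and R
A = np.array([[0.4, -1.1, 0.3],
              [0.7,  0.2, -0.5],
              [-0.2, 0.9, -0.6]])
A -= np.trace(A) / 3.0 * np.eye(3)
S, R = 0.5 * (A + A.T), 0.5 * (A - A.T)
T, lam = tensor_bases_and_invariants(S, R)
```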
To find the relevant mapping between the velocity gradient tensor and $\boldsymbol{\mathcal{P}_{tf}}$, the ten coefficients ($C^i$) corresponding to the ten integrity bases ($\boldsymbol{T}^i$) need to be modelled. The five invariants ($\lambda^i$) of $\boldsymbol{S}$ and $\boldsymbol{R}$ form the primary input of the TBNN. The output of the last layer of the network yields the ten coefficients $C^i$. A secondary input containing the ten tensor bases $\boldsymbol{T}^i$ (called the tensor layer) is fed to the last layer of the network. Finally, a dot product between the coefficient layer and the tensor layer forms the final output of the network, which can be expressed as: \begin{equation} \boldsymbol{\mathcal{P}_{tf}^{TBNN}} = \sum_{i=1}^{10} C^i \boldsymbol{T}^i. \end{equation} The cost function of the network can be expressed as: \begin{equation} J = \frac{1}{2m}\sum_{j=1}^m \left[\left|\left|\left(\boldsymbol{\mathcal{P}^{TBNN}_{tf}} - \boldsymbol{\mathcal{P}_{tf}}\right)_j\right|\right|_{F}\right]^2, \end{equation} where $m$ is the number of training examples used to train the TBNN and the symbol $|| \ ||_F$ represents the Frobenius norm. \subsection{Training of the neural network} The tensor basis neural network (TBNN) model is trained using data from an isotropic incompressible flow field at a Reynolds number of 433. This data is taken from the Johns Hopkins University turbulence database \cite[]{JHUTD_1, JHUTD_2} available online at \url{http://turbulence.pha.jhu.edu/}. The open-source library Keras \cite[]{chollet2015keras} with the TensorFlow backend is used for training the TBNN model. The velocity gradient tensor and pressure-Hessian information are extracted from the database at a particular time instant. A total of 262,144 unique data-points are extracted from the flow field.
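A stripped-down numpy version of this forward pass and cost may clarify the architecture; the layer sizes, weights, basis tensors and target below are random placeholders, not the trained network. The invariants pass through ReLU hidden layers to produce the ten coefficients, which are then contracted with the tensor layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def tbnn_forward(lam, T, weights, biases):
    """Minimal TBNN forward pass: invariants -> hidden layers -> ten
    coefficients C^i, then P_tf = sum_i C^i T^i (dot with the tensor layer)."""
    h = np.asarray(lam)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)        # ReLU hidden layers
    C = weights[-1] @ h + biases[-1]          # ten basis coefficients
    return np.einsum('i,ijk->jk', C, T)

# Placeholder network (sizes illustrative: 5 invariants in, 10 coefficients out)
sizes = [5, 16, 16, 10]
weights = [rng.normal(0, 0.3, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

# Random symmetric, trace-free stand-ins for the ten basis tensors
M = rng.normal(size=(10, 3, 3))
T = 0.5 * (M + np.swapaxes(M, 1, 2))
T -= np.trace(T, axis1=1, axis2=2)[:, None, None] / 3.0 * np.eye(3)

lam = rng.normal(size=5)
P_tf = tbnn_forward(lam, T, weights, biases)

# Frobenius-norm cost against a (here random) target tensor
target = rng.normal(size=(3, 3))
J = 0.5 * np.linalg.norm(P_tf - target) ** 2
```

Because the basis tensors are symmetric and trace-free, the predicted output inherits both properties automatically.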
Out of these 262,144 data points, 236,544 points are used for training the network, while the remaining 25,600 data-points are reserved for cross-validation of the predicted solution. The training data is randomly distributed into 924 mini-batches of 256 data-points each at the beginning of every epoch. One epoch is one complete pass through the training dataset over all mini-batches; hence, one epoch accounts for 924 iterations of the training cycle. The velocity gradient tensor was non-dimensionalized with the mean value of the Frobenius norm of the whole sample of 262,144 data-points. No further normalization was used for the derived tensor bases ($\boldsymbol{T}^i$) and invariants ($\lambda^i$). \begin{figure}[bt] \centering \includegraphics[width=8cm]{cost.eps} \caption{Decay of cost function during training for TBNN. Mini-batch size=256, 1 epoch = 924 iterations of the optimizer.} \label{fig:cost} \end{figure} A deep network with 11 hidden layers and 50, 150, 150, 150, 150, 300, 300, 150, 150, 150 and 100 neurons in the consecutive hidden layers was found to yield the best performance of all the combinations that were tested. We use the Glorot normal initialization \cite[]{glorot2010} for the weight matrices and the ReLU (rectified linear unit) activation function for the hidden layers. The RMSprop optimizer \cite[]{RMSprop}, with a learning rate of $1.0e$-$6$, was used to train the network. The training was stopped when the value of the cost function $J$ became stagnant. The minimum values of the training cost and cross-validation cost recorded while training were $4.1e$-$4$ and $5.4e$-$4$ respectively. In Figure \ref{fig:cost}, we show the training and cross-validation cost as a function of the number of training epochs. The cross-validation cost did not show any significant rise during the training process. A low dropout rate of 10$\%$ was used to facilitate ensemble learning in the network.
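The two preprocessing steps described above, global non-dimensionalization by the mean Frobenius norm and random mini-batching, can be sketched as follows (the array of gradients is a random stand-in for the DNS sample, and the sample size is reduced for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the sampled velocity gradient tensors, shape (N, 3, 3)
A = rng.normal(size=(1024, 3, 3))

# Non-dimensionalize by the mean Frobenius norm of the whole sample
frob = np.linalg.norm(A, axis=(1, 2))
A_hat = A / frob.mean()

# Random mini-batches of 256 points, reshuffled at the start of every epoch
batch_size = 256
idx = rng.permutation(len(A_hat))
batches = [A_hat[idx[i:i + batch_size]] for i in range(0, len(idx), batch_size)]
```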
There was no gain in model performance with a further increase in data size or network depth. \subsection{Testing of the trained network} \begin{figure}[bt] \centering \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_tbnn.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_tbnn.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_tbnn.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_dns.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_dns.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_dns.eps}} \caption{Alignment of $\boldsymbol{\mathcal{P^{TBNN}}}$-eigenvectors ($\boldsymbol{p_i}$) with $\boldsymbol{S}$-eigenvectors ($\boldsymbol{s_i}$). Here, $i$ (= $\alpha$, $\beta$ or $\gamma$) denotes the three eigenvectors corresponding to the three eigenvalues $\alpha>\beta>\gamma$. (JHTD isotropic turbulence testing dataset, Reynolds number 433 \cite[]{JHUTD_1, JHUTD_2})} \label{fig:s_dot_p_tbnn} \end{figure} The primary testing of the trained TBNN model was performed on a separate testing dataset (distinct from the training and validation data) of isotropic turbulence (JHTD \cite[]{JHUTD_1, JHUTD_2}). The relative Frobenius-norm error of the pressure-Hessian obtained from the trained model on the testing dataset was found to be $0.6491$. On the same dataset, an error of $0.7764$ was obtained with the RFD model. Hence, in terms of element-wise mean squared error, the accuracy of the trained TBNN model is comparable to that of the existing RFD model. However, element-wise comparison alone is not a sufficient metric for comparing tensorial quantities. We have already seen (figure \ref{fig:s_dot_p_rfdm}) that the RFD model fails to capture the alignment statistics with the strain-rate tensor.
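The text does not spell out the exact error formula; one plausible reading of the relative Frobenius-norm error, sketched below with synthetic tensors, is the sample average of $||\boldsymbol{\mathcal{P}}_{pred}-\boldsymbol{\mathcal{P}}_{true}||_F \, / \, ||\boldsymbol{\mathcal{P}}_{true}||_F$:

```python
import numpy as np

def relative_frobenius_error(P_pred, P_true):
    """Sample-averaged ||P_pred - P_true||_F / ||P_true||_F over (N, 3, 3)
    arrays. (An assumed definition; the paper does not state the formula.)"""
    num = np.linalg.norm(P_pred - P_true, axis=(1, 2))
    den = np.linalg.norm(P_true, axis=(1, 2))
    return float(np.mean(num / den))

# Synthetic check: a uniform 10% overshoot gives a relative error of 0.1
P_true = np.tile(np.eye(3), (4, 1, 1))
P_pred = 1.1 * P_true
err = relative_frobenius_error(P_pred, P_true)   # ~0.1
```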
In figure \ref{fig:s_dot_p_tbnn}, we present the alignment of the pressure-Hessian eigenvectors predicted by the TBNN (figure \ref{fig:s_dot_p_tbnn}(a,b,c)) with the strain-rate eigenvectors, compared against that obtained from DNS (figure \ref{fig:s_dot_p_tbnn}(d,e,f)). We observe that although the alignment statistics (figure \ref{fig:s_dot_p_tbnn}(a,b,c)) have improved compared to the RFD model results (figure \ref{fig:s_dot_p_rfdm}(a,b,c)), the obtained statistics are still far from those obtained from DNS. \section{Modified neural network architecture} \label{s:NN_mod} \begin{figure}[bt] \centering \includegraphics[width=8cm]{cost_new.eps} \caption{Decay of cost function during training for the modified TBNN. Mini-batch size=256, 1 epoch = 924 iterations of the optimizer.} \label{fig:cost_mod} \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_tbnn_mod.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_tbnn_mod.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_tbnn_mod.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_dns.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_dns.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_dns.eps}} \caption{Alignment of $\boldsymbol{\mathcal{P^{TBNN}}}$-eigenvectors ($\boldsymbol{p_i}$) obtained from modified TBNN with $\boldsymbol{S}$-eigenvectors ($\boldsymbol{s_i}$). Here, $i$ (= $\alpha$, $\beta$ or $\gamma$) denotes the three eigenvectors corresponding to the three eigenvalues $\alpha>\beta>\gamma$. (JHTD isotropic turbulence testing dataset \cite[]{JHUTD_1, JHUTD_2}, Reynolds number 433)}
\label{fig:s_dot_p_tbnn_mod} \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_tbnn_madrid.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_tbnn_madrid.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_tbnn_madrid.eps}} \caption{Alignment of $\boldsymbol{\mathcal{P^{TBNN}}}$-eigenvectors ($\boldsymbol{p_i}$) obtained from modified TBNN with $\boldsymbol{S}$-eigenvectors ($\boldsymbol{s_i}$). Here, $i$ (= $\alpha$, $\beta$ or $\gamma$) denotes the three eigenvectors corresponding to the three eigenvalues $\alpha>\beta>\gamma$. (UP Madrid isotropic turbulence testing dataset \cite{cardesa2017}, Reynolds number 315)} \label{fig:s_dot_p_tbnn_mod_madrid} \end{figure} We have observed that the TBNN is unable to capture the alignment statistics of the pressure-Hessian tensor. This implies that assuming the pressure-Hessian to lie in the span of the tensor bases of the strain-rate and rotation-rate tensors is not an appropriate modelling assumption. Constraining the network to obey tensor invariance properties restricts us to using only global normalization of the input tensors. However, the velocity gradients in a turbulent flow field are known to be highly intermittent. Hence, global normalization of the input tensors might not be an effective strategy for such highly intermittent quantities. The learning of important feature mappings by a neural network relies heavily on effective normalization strategies. At this juncture, we performed several experiments on the TBNN, choosing various normalization strategies that allow the TBNN to deviate from its tensor invariance characteristics. We found through trial and error that normalizing the tensor basis such that all its elements are scaled between [0, 1] yields a substantial improvement in the network output.
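This element-wise $[0,1]$ rescaling of the basis tensors can be sketched in a few lines of numpy (the sample size and basis values below are random placeholders for the training data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in tensor-basis samples: N data points, 10 bases, each 3x3
T = rng.normal(size=(500, 10, 3, 3))

# Element-wise extrema over the sample: the F and G matrices of the text
F = T.max(axis=0)    # shape (10, 3, 3)
G = T.min(axis=0)

# Hadamard (element-wise) division scales every entry into [0, 1]
T_scaled = (T - G) / (F - G)
```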
Two matrices $\boldsymbol{F}^{(i)}$ and $\boldsymbol{G}^{(i)}$ are used to scale the tensor basis: \begin{equation} {F^{(i)}}_{pq} = \max \left({T^{(i)}}_{pq} \right) \end{equation} \begin{equation} {G^{(i)}}_{pq} = \min \left({T^{(i)}}_{pq} \right) \end{equation} where the extrema are taken element-wise over the training sample. Using $\boldsymbol{F}^{(i)}$ and $\boldsymbol{G}^{(i)}$, the tensor basis can be appropriately scaled using the following relationship: \begin{equation} \boldsymbol{T}^{(i)'} = (\boldsymbol{T}^{(i)} - \boldsymbol{G}^{(i)}) \oslash (\boldsymbol{F}^{(i)} - \boldsymbol{G}^{(i)}), \end{equation} where the symbol $\oslash$ represents Hadamard (element-wise) division between the two tensors. With this normalization, the network loses most of the properties of the original TBNN. However, it leads to significant improvements in the alignment statistics of the predicted output. We employ the modified network with the same settings (viz. the number of hidden layers, neurons per layer, activation function, learning rate, etc.) as used for the original TBNN. In figure \ref{fig:cost_mod}, we show the learning curve obtained while training the modified TBNN. We use an early stopping criterion while training, stopping at the point when the validation-loss curve becomes almost flat (no further decline with increasing epochs). \subsection{Testing modified TBNN for isotropic turbulence flow} \label{ss:tbnn_iso} In figure \ref{fig:s_dot_p_tbnn_mod}, we show the alignment statistics obtained with the modified TBNN on the isotropic turbulence testing dataset (JHTD \cite[]{JHUTD_1, JHUTD_2}). We observe that the modified TBNN predictions (figure \ref{fig:s_dot_p_tbnn_mod}(a,b,c)) demonstrate excellent alignment statistics compared to those obtained from DNS (figure \ref{fig:s_dot_p_tbnn_mod}(d,e,f)). Although this testing dataset is extracted at grid locations different from the training dataset, it still has the same Reynolds number as the training dataset (433).
To make a better judgement of the generalization of the learnt pressure-Hessian mapping, we scrutinize the performance of the trained model on an isotropic turbulence dataset at a different Reynolds number of 315. This dataset is extracted from the UP Madrid turbulence database \cite[]{cardesa2017}. In figure \ref{fig:s_dot_p_tbnn_mod_madrid}, we plot the alignment statistics obtained from the learnt modified TBNN model for this dataset (at a Reynolds number of 315). We find that similar statistics are retrieved at a Reynolds number of 315 as well (Figure \ref{fig:s_dot_p_tbnn_mod_madrid}). Hence, we can conclude that the trained modified TBNN has learnt key physical features that generalize to isotropic turbulent flows independent of the Reynolds number. \subsection{Testing modified TBNN for turbulent channel flow} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_tbnn_channel.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_tbnn_channel.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_tbnn_channel.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_1_dns_channel.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_2_dns_channel.eps}} \subfigure[]{ \includegraphics[width=4.5cm]{s_dot_p_3_dns_channel.eps}} \caption{Alignment of $\boldsymbol{\mathcal{P^{TBNN}}}$-eigenvectors ($\boldsymbol{p_i}$) with $\boldsymbol{S}$-eigenvectors ($\boldsymbol{s_i}$). Here, $i$ (= $\alpha$, $\beta$ or $\gamma$) denotes the three eigenvectors corresponding to the three eigenvalues $\alpha>\beta>\gamma$. (UT Austin and JHTD channel flow dataset \cite[]{JHUTD_3}, friction Reynolds number of 1000).} \label{fig:s_dot_p_channel} \end{figure} The modified TBNN was trained using an isotropic turbulent flow dataset.
We saw in the previous section \ref{ss:tbnn_iso} that the network can learn key features of isotropic turbulent flows, which lead to accurate predictions of the pressure-Hessian, especially in terms of the alignment statistics with the strain-rate eigenvectors. We now go a step further and scrutinize the trained model for a different type of flow, viz. channel flow, to which the network was not exposed during training. The presence of solid walls in a channel flow leads to the generation of boundary layers near the walls. The pressure and velocity profiles in a boundary layer are very different from those observed in isotropic flow, which has no solid walls. Hence, we cannot expect our trained model to predict the pressure-Hessian for turbulent channel flow accurately. In fact, when we pass the velocity gradient information through the trained network, a very large relative Frobenius-norm error of 2.1838 is obtained on the predicted solution. However, the predicted output of the modified TBNN still retrieves accurate alignment statistics with the strain-rate eigenvectors, as shown in figure \ref{fig:s_dot_p_channel}. Hence, there does exist a relevant mapping between the pressure-Hessian and velocity gradients that can ensure correct alignment with the strain-rate eigenvectors. The network has been able to learn this key physical mapping, which is possibly independent of the type of flow and its Reynolds number (at least for isotropic and channel turbulent flows). As discussed in section \ref{s:NN}, the evolution of the pressure-Hessian is expected to be governed by a large spectrum of flow quantities, their derivatives and their evolution history. However, the major focus of this work has been to explore the maximum potential of local velocity gradients to describe the pressure-Hessian. We report that using only the local velocity gradient tensor we can model the pressure-Hessian such that it at least aligns with the strain-rate eigenvectors appropriately.
\subsection{Predicted coefficients by the modified network} In Figure \ref{fig:coeffs}, we show the scatter plot of the coefficients predicted by the modified TBNN. We observe that each of these ten coefficients ($C^{(i)}$) has negligible variance. The overall distribution can effectively be replaced by the mean value of each coefficient's distribution. Further, we find that by using the mean values of the coefficients, we retrieve the same statistics as obtained by passing the velocity gradient information through the modified TBNN. \begin{figure}[t] \centering \subfigure{ \includegraphics[width=3.8cm]{coeff_1.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_2.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_3.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_4.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_5.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_6.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_7.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_8.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_9.eps}} \subfigure{ \includegraphics[width=3.8cm]{coeff_10.eps}} \caption{Scatter plot of the ten coefficients predicted by the modified TBNN.} \label{fig:coeffs} \end{figure} With this revelation, it is no longer required to use the trained network for pressure-Hessian estimation. Rather, we can use a very simple procedure for pressure-Hessian prediction: \begin{enumerate} \item Non-dimensionalize the velocity gradient tensor using the mean value of the Frobenius norm of the whole sample. \item Calculate the ten tensor bases and five independent invariants of the strain-rate and rotation-rate tensors using equations \ref{eq:basis} and \ref{eq:invariants}. \item Normalize the tensor bases using the scaling matrices used for the trained network (details in Appendix \ref{s:appendix}).
\item Take a linear combination of the tensor bases using the mean values of the coefficients obtained from the trained network. This yields the modelled normalized pressure-Hessian (refer to Appendix \ref{s:appendix}). \item Scale the predicted pressure-Hessian back to its original dimensional form, using the same scaling matrices that were used while training the network. \item Enforce the predicted solution to have the desired trace (since the trace of $\mathcal{\boldsymbol{P}}$ is the same as the trace of $\boldsymbol{-A}^2$). \end{enumerate} The complete details of the step-by-step process for calculating the modelled pressure-Hessian tensor are presented in Appendix \ref{s:appendix}. \section{Conclusions} \label{s:summary} In this work, we first scrutinize the state-of-the-art RFD model \cite[]{chevillard2008} for pressure-Hessian prediction in terms of its alignment statistics with the strain-rate eigenvectors. We report that the eigenvectors of the pressure-Hessian obtained from the RFD model are mostly either parallel or perpendicular to the strain-rate eigenvectors. To decipher a better functional mapping between the pressure-Hessian and velocity gradients, we employ a tensor basis neural network (TBNN) architecture \cite[]{ling2016}. The neural network is trained on high-resolution isotropic turbulence data at a Reynolds number of 433. With the help of the TBNN, the pressure-Hessian tensor is modelled in terms of the trace-free, symmetric tensor bases of the strain-rate and rotation-rate tensors. We report that the accuracy of the pressure-Hessian predicted by the TBNN is comparable to that obtained from the state-of-the-art RFD model. However, only a marginal improvement in the alignment statistics of the TBNN output is observed.
Further, we report that by scaling the tensor bases of the strain-rate and rotation-rate tensors such that each element of the basis lies in $[0, 1]$, the predicted output of the neural network yields excellent alignment statistics with the strain-rate tensor for isotropic turbulent flows at different Reynolds numbers. Further, we test the trained model on a turbulent channel flow dataset, to which the network was not exposed during training. We find that although there is significant error in the element-wise comparison, the statistics of alignment with the strain-rate eigenvectors are in good agreement with DNS results. With this finding, we conclude that there does exist a relevant physical mapping between the pressure-Hessian and velocity gradients which enforces their eigenvectors to align appropriately with each other. This mapping is found to be independent of the type of flow and its Reynolds number (at least for isotropic turbulence and channel flow). The modified TBNN has been able to learn this key mapping by appropriately normalizing the tensor bases of the strain-rate and rotation-rate tensors. Finally, we find that the distributions of the tensor-basis coefficients obtained from the neural network have negligible variance. With this revelation, we have been able to identify ten unique coefficients of the tensor bases, a linear combination over which can be used to model the pressure-Hessian tensor directly.
\section{Introduction} The current work is concerned with developing an effective and universal approach to treating and extending three identities, each of which plays a central role in constraining classes of non-linear geometric PDE problems. Of these, the most easily stated arises in the problem of conformally prescribing scalar curvature; that is, of determining, on a fixed conformal structure $(M^n,c)$, which functions may be the scalar curvature $\operatorname{Sc}^g$ for some $g\in c$. This problem is especially interesting on the sphere, where it is known as the Nirenberg problem. While there are obvious constraints arising from the Gauss-Bonnet theorem, from the seminal work \cite{KW74} of Kazdan-Warner it follows that there are positive functions on the 2-sphere $S^2$ that are not the curvatures of metrics that are pointwise conformal to the standard metric. A similar result was found in higher dimensions \cite{KW75}, and in all cases the results are a consequence of an identity satisfied by the first spherical harmonics. A well-known formulation and extension of these results is due to Bourguignon and Ezin \cite{BoE}, and is based around their identity: For any conformal (Killing) vector field $X$, of a closed Riemannian $n$-manifold $(M,g)$, the scalar curvature satisfies \begin{equation}\label{BoEr} \int_M {\mathcal L}_X \operatorname{Sc}^g~dv_g=0, \end{equation} where ${\mathcal L}_X$ denotes the Lie derivative. Earlier, Pohozaev described an identity which applies to, for example, star-shaped manifolds $M$ with smooth boundary $\partial M$ in Euclidean space \cite{poh65}. It was used to establish non-existence results for a class of semi-linear variants of eigenvalue boundary problems. These take the form $\Delta u+\lambda f(u)=0$ with $u|_{\partial M}=0$. Here $f$ is a non-linear function that satisfies $f(0)=0$.
The Pohozaev identity states \begin{equation}\label{poh} \lambda n \int_M F(u) + \frac{2-n}{2} \lambda\int_M f(u)u =\frac{1}{2}\int_{\partial M} (x\cdot \nu)(\nabla_\nu u)^2; \end{equation} $x$ is the Euler vector field, $\nu$ is the outward unit normal, $\nabla_\nu$ the directional derivative along $\nu$, and $F(u)=\int_0^u f(t)dt$. Remarkably, the identities \nn{BoEr} and \nn{poh} are related. More precisely there is an identity due to Schoen \cite[Proposition 1.4]{schoen} which, at least for $n\geq 3$, includes both as special cases: For any conformal vector field $X$ on a Riemannian $n$-manifold $(M,g)$ with smooth boundary $\partial M$, the following identity holds \begin{equation}\label{sch} \int_M {\mathcal L}_X \operatorname{Sc} ~dv_g = \frac{2n}{n-2}\int_{\partial M}\left(\operatorname{Ric}-\frac{1}{n}\operatorname{Sc}\cdot g \right)(X,\nu)d\sigma_g ; \end{equation} here $\nu$ is the outward normal, and $\operatorname{Ric}$ denotes the Ricci curvature. This was proved using the Bianchi identities, and used as a balancing condition for approximate solutions to a PDE problem linked to the Yamabe equation. Since $\partial M$ may be empty it is clear that \nn{sch} extends \nn{BoEr} for the cases $n\geq 3$. In Section \ref{pohS} we describe, for the reader's convenience, how to recover \nn{poh} from \nn{sch}. The three identities have had a major impact in non-linear and geometric analysis, and are still used extensively in the current literature. This has motivated the development of analogous and related identities: For Kazdan-Warner type identities recovering or generalising \nn{BoEr} see for example \cite{AHA}, \cite{Bo86}, \cite{BrO91}, \cite{CY}, \cite{delRob}, \cite{GuoHL}, \cite{V2}; for the Pohozaev identity \nn{poh} see \cite{PRS}, \cite{PS}, \cite{Wag}; and for Schoen's identity (\ref{sch}) (which is sometimes also referred to as a ``Pohozaev identity'') \cite{ezin}, \cite{Gursky}.
This is by no means a complete list. Many of the works in the area treat specific curvature quantities, and are motivated by particular geometric problems. Exceptions include \cite{BrO91}, which gives an analogue of \nn{BoEr} for all heat invariants corresponding to a conformally covariant operator. Most notably, by an elegant and powerful argument, Bourguignon describes in \cite{Bo86} a very general framework for extending the ``Kazdan-Warner identity'' \nn{BoEr}; this is further developed and applied by Delanoe and Robert in \cite{delRob}. The identities and works mentioned suggest the following problems: For what scalar invariants $V=V(g)$ (replacing/generalising $\operatorname{Sc}^g$) do we expect an analogue of the classical Kazdan-Warner identity \nn{BoEr}? Any such identity gives an immediate constraint for conformal curvature prescription on the sphere. Similarly, for what scalar invariants $V=V(g)$ do we expect an analogue of \nn{sch}? Note that this identity gives a non-trivial constraint in a vastly wider range of geometric structures, so any extension has great potential for application. The third main problem is to precisely relate the two types of identity. For example if, in some general situation of closed manifolds, $V$ satisfies an analogue of the Kazdan-Warner identity then do we expect it to also satisfy the Schoen identity \nn{sch} on manifolds with boundary? That there should be some subtlety here is clear from the factor of $\frac{1}{n-2}$ in \nn{sch}; Schoen's construction apparently does not recover \nn{BoEr} in dimension 2. In the current work we obtain essentially complete answers to the questions posed by showing that a closely related set of general principles underlies the Kazdan-Warner and Pohozaev-Schoen type identities. (Here we restrict to the case where $X$ is a conformal vector field. There are clearly extensions to related settings, but this will be taken up elsewhere.)
The principles involved are strongly related to the notions of symmetry and conservation that date back to the work of D.\ Hilbert and E.\ Noether, and indeed this is our starting point in Section \ref{HN}. Overall we obtain very general extensions of the Kazdan-Warner and Pohozaev-Schoen identities. Concerning the former, the main results are Theorem \ref{ukw}, Corollary \ref{aa}, Theorem \ref{nocvt}, and Theorem \ref{bway}. The first three of these show that an identity of the type \nn{BoEr} is available for any natural scalar invariant which is conformally variational (as defined in Section \ref{cvar}) for suitable functionals of increasing generality; in each case the result is a direct consequence of symmetry invariance, or in other words of a gauge invariance, in the action functional concerned. The last, Theorem \ref{bway}, extends these results to show that in fact {\em any} conformally variational natural scalar satisfies such an identity. In this case the argument (cf.\ \cite{Bo86,delRob}) is less direct and now uses the invariance of a 1-form on the space of metrics (in a conformal class), combined with the Lelong-Ferrand-Obata theory \cite{L-F,O}. The last mentioned approach appears to be necessary for a class of critical cases, but it misses the connection with the Schoen type identities \nn{sch}. On the other hand the very simple argument behind Theorem \ref{ukw} involves specialising total metric variations, and so is linked to locally conserved 2-tensors (as explained in Section \ref{cvar}). Through this its proof is intimately connected to Theorem \ref{maini}, which extends \nn{sch} to an identity that holds for the trace and trace-free parts of any locally conserved 2-tensor. This is a very large class of invariants that need not be natural (see e.g.\ Section \ref{se}).
It is precisely the difference between Theorem \ref{ukw} (or Theorem \ref{maini}) and Theorem \ref{bway} that is behind the $\frac{1}{n-2}$ factor mentioned earlier, and the generalisation of this phenomenon. See also Corollary \ref{ci} and the discussion below. In Section \ref{consS} we show that the generalised Schoen identity of Theorem \ref{maini} is a precise complement to the usual conservation theory extant in the Physics literature. The main results mentioned, and their proofs, appear to unify, simplify, and considerably extend most of the existing related results in the literature; see Section \ref{exs} where we show a number of new results, as well as the simplification and unification of a number of recent particular results in the literature. Specific examples treated include Gauss-Bonnet curvatures, Q-curvatures, renormalised volume coefficients, and the mean curvature of a conformal immersion. Although we do not directly discuss extensions of the Pohozaev identity \nn{poh}, it is clear that such can be obtained from Theorem \ref{maini} by, for example, an analogue of the treatment in Section \ref{pohS}. ARG would like to thank Alice Chang, Paul Yang, Matt Gursky and Fr\'{e}d\'{e}ric Rochon for useful discussions at the meeting ``Geometric and Nonlinear Partial Differential Equations'' held at Mission Beach Resort, Queensland, 2010. A draft of this work was presented there. Discussions with Robin Graham and Andreas Juhl at the Tambara Institute of Mathematics workshop ``Parabolic Geometries and Related Topics I'', November 2010, are also much appreciated. In particular Graham provided an answer to a question posed in the first draft, see Theorem \ref{HET}. \section{The Hilbert-Noether identities for gradients} \label{HN} Until further notice we shall suppose we work on a closed (compact without boundary) oriented connected manifold $M$, of dimension $n\geq 2$ and usually equipped with a Riemannian metric $g$.
However we also consider the space ${\mathcal M}$ of such metrics on $M$, equipped with the compact-open $C^\infty$ topology. For simplicity all structures and sections throughout shall be considered smooth ($C^\infty$). A real valued functional ${\mathcal S}$ on ${\mathcal M}$ is called a {\em Riemannian functional} if it is diffeomorphism invariant in the sense that it satisfies \begin{equation}\label{Rinv} {\mathcal S}(\phi^* g)= {\mathcal S}(g), \end{equation} for all $g\in {\mathcal M}$, and for all diffeomorphisms $\phi:M\to M$. A {\em natural scalar (Riemannian) invariant} (see e.g.\ \cite{Stredder}) is a scalar valued function which is given by a universal expression, polynomial in the finite jets of the metric and its inverse, and which has the property that for any diffeomorphism $\phi:M\to M$ we have \begin{equation}\label{natural} \phi^* L(g)= L(\phi^* g). \end{equation} An important class of Riemannian functionals, and our main (though certainly not exclusive) focus here, arise from the integral of such Lagrangians: that is $g\mapsto {\mathcal S} (g)$ where \begin{equation}\label{action} {\mathcal S}(g)= \int_M L(g) dv_g \end{equation} in which $dv_g$ is the metric measure. \begin{remark}\label{natc} One may construct natural invariants in an obvious way by complete contractions, using the metric, its inverse, and the volume form, of expressions polynomial in the Riemann curvature and its Levi-Civita covariant derivatives. In fact all natural invariants arise this way, as follows by a well known argument using Weyl's classical invariant theory and Riemann normal coordinates, see e.g.\ \cite{ABP}. \end{remark} \subsection{Total metric variations}\label{total} The tangent space to ${\mathcal M}$ is naturally identified with the (smooth) section space of $S^2M$.
A differentiable Riemannian functional is said to have a {\em gradient} $B(g)$ at $g$, if $B(g)$ is a smooth section of $S^2M$ and, for all $h\in \Gamma (S^2M)$, \begin{equation}\label{core} {\mathcal S}'(g)(h)=\int_M (h,B(g)) dv_g \end{equation} where $(\cdot\,,\cdot)$ denotes the local pairing of tensors given by metric contraction. In an abstract index notation we shall write $B_{ab}$ for $B(g)$. If we specialise now to $h$ arising from the pullback along a diffeomorphism generated by a vector field $X$, then $h={\mathcal L}_X g$ and we have \begin{equation}\label{main} 0=\int_M ({\mathcal L}_X g ,B (g)) dv_g, \end{equation} from the diffeomorphism invariance of the Riemannian functional ${\mathcal S}$. In terms of the Levi-Civita connection $\nabla$ (for $g$), we have $({\mathcal L}_X g)_{ab}= 2\nabla_{(a}X_{b)}$. Thus integrating by parts in \nn{main} we see that $$ 0=\int_M X^b\nabla^aB_{ab} dv_g. $$ Since $X^b$ is arbitrary we conclude \begin{equation}\label{div} \nabla^aB_{ab} =0, \end{equation} and we shall say that $B$ is {\em locally conserved}. This is a standard identity for the gradient of a Riemannian functional, and is attributed to Hilbert \cite{Besse,Hil}. Identities derived from symmetries or ``gauge invariance'', such as this, are often called Noether identities in the literature. Now we consider the case where ${\mathcal S}(g)$ is given by a natural Lagrangian, as in \nn{action}. It follows from the result mentioned in Remark \ref{natc}, and integration by parts, that each directional derivative of ${\mathcal S}(g)$ is of the form \nn{core} where $B(g)$ is a natural (tensor-valued) invariant. From this in turn we conclude that ${\mathcal S}(g)$ is differentiable and so the above discussion applies immediately; in particular the natural tensor $B=B(g)$ satisfies \nn{div}.
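To fix ideas we record the prototype, via a standard computation not carried out in detail here (and with signs depending on curvature conventions): for the Einstein-Hilbert Lagrangian $L(g)=\operatorname{Sc}^g$ one has $$ {\mathcal S}'(g)(h)=\int_M \Big(h,\, \tfrac{1}{2}\operatorname{Sc}\, g - \operatorname{Ric}\Big)\, dv_g, $$ so that the gradient $B_{ab}=\tfrac{1}{2}\operatorname{Sc}\, g_{ab}-\operatorname{Ric}_{ab}$ is minus the Einstein tensor. In this case \nn{div} is exactly the contracted second Bianchi identity $\nabla^a\big(\operatorname{Ric}_{ab}-\tfrac{1}{2}\operatorname{Sc}\, g_{ab}\big)=0$, and so the Hilbert-Noether argument may be viewed as a far-reaching generalisation of that identity.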
\begin{remark} \label{others} Although the argument above has used a compact Riemannian setting, as an aside here we note the following: since $B$ is given by a universal expression in terms of the Riemannian curvature and its covariant derivatives, it follows that the local result \nn{div} holds on any manifold and in any signature. \end{remark} To see how non-trivial results may arise from the diffeomorphism invariance of an action it is useful to understand, via an infinitesimal argument, how the gradient is generated in the case of a natural Lagrangian function $L=L(g)$. Consider a curve of metrics $g^t$ through $g=g^0$. Calculating the derivative of \nn{action} at $t=0$ involves computing the linearisation of $L(g)$ (at $g$), $$ L'(h):=\dfrac{d}{dt}\Big|_{t=0}L (g^t), $$ and also the contribution from the measure: $$ \dfrac{d}{dt}\Big|_{t=0} dv_{g^t}= \frac{1}{2}g^{ab}h_{ab} dv_g . $$ Putting these together we have \begin{equation}\label{precore} \dfrac{d}{dt}\Big|_{t=0}{\mathcal S}(g^t)= \int_M \big(L'(h)+ \frac{1}{2} L(g) g^{ab}h_{ab} \big)dv_g . \end{equation} However for $h$ arising from an infinitesimal diffeomorphism we have, as mentioned, $h={\mathcal L}_X g$. Thus $\frac{1}{2}g^{ab}h_{ab}=\nabla_a X^a =\div X$. On the other hand the infinitesimal version of the naturality condition \nn{natural} is \begin{equation}\label{inf} L'( {\mathcal L}_X g)= {\mathcal L}_X L (g) \end{equation} and so for $h={\mathcal L}_X g$ we have $(L'(h)+ \frac{1}{2} L(g) g^{ab}h_{ab})= \div (L(g) X)$ whence the right hand side of \nn{precore} is zero. The non-trivial identity \nn{core} arises by calculating in another order. We first integrate \nn{precore} by parts to yield \nn{core}, and then proceed as argued earlier. So the information contained in the difference between the two ways of calculating arises entirely from \nn{inf}. 
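As a trivial but instructive check of the mechanism just described (immediate from the definitions, and not needed in the sequel), consider the volume functional, given by the constant Lagrangian $L(g)=1$. There $L'(h)=0$, so by \nn{precore} $$ \dfrac{d}{dt}\Big|_{t=0}{\mathcal S}(g^t)=\frac{1}{2}\int_M g^{ab}h_{ab}\, dv_g , $$ whence the gradient is $B_{ab}=\frac{1}{2}g_{ab}$, and \nn{div} holds trivially since the metric is parallel. In this case all of the information resides in the measure term of \nn{precore}.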
\subsection{Generalised energy-momentum tensors}\label{se} The local conservation of natural gradients is a unifying feature in the discussion which follows. In fact, as we shall see, a broader class of gradients satisfy \nn{div}. Suppose that rather than restrict to $L$ being a natural scalar invariant of $(M,g)$, we allow $L$ as follows. We assume $L$ is a scalar valued function which is given by a universal expression, polynomial in the finite jets of the metric and its inverse, and also in the finite jets of a collection of other fields that we shall collectively denote $\Psi$ (and regard as a single field). So we may write $L=L(g,\Psi)$. The fields that make up $\Psi$ may be tensor fields, but could also include for example connections. We shall not be concerned with the details; it is rather naturality in this context that is important. We shall insist that $L$ satisfies \begin{equation}\label{natural2} \phi^* L(g, \Psi)= L(\phi^* g, \phi^*\Psi) \end{equation} for any diffeomorphism $\phi:M\to M$. So certainly we require that the nature of the fields $\Psi $ is such that their pullback under diffeomorphism makes sense, but this is a very weak restriction. We shall call such $L(g,\Psi)$ {\em coupled scalar invariants}. Now we assume that $$ {\mathcal S}(g,\Psi):=\int_M L (g,\Psi) dv_g $$ is separately Fr\'echet differentiable with respect to $g$ and $\Psi$, and that there are respective partial gradients $B(g,\Psi)$, $E(g,\Psi)$, satisfying $$ (D_1{\mathcal S}(g,\Psi))(h)=\int_M (h,B(g,\Psi)) dv_g $$ and $$ (D_2{\mathcal S}(g,\Psi))(\psi)=\int_M \langle \psi,E(g,\Psi) \rangle dv_g $$ where $\psi$ is in the formal tangent space at $\Psi$ to the field (system) $\Psi$ and $\langle \cdot\,, \cdot \rangle$ is the pointwise dual pairing that arises naturally in the problem. (In the other display the notation is as in \nn{core}.) We shall refer to $B(g,\Psi)$ as the metric gradient.
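By way of a minimal illustration (a standard example, sketched here under the conventions of \nn{core}, with normalisations left implicit), take $\Psi$ to be a single function and $L(g,\Psi)=|d\Psi|^2_g$, the Dirichlet energy density. Then $E(g,\Psi)$ is a multiple of $\Delta\Psi$, and a short computation gives the metric gradient $$ B_{ab}(g,\Psi)=\frac{1}{2}|d\Psi|^2_g\, g_{ab}-\nabla_a\Psi\nabla_b\Psi . $$ Directly, using the symmetry of the Hessian, $$ \nabla^a B_{ab}=\nabla^a\Psi\nabla_b\nabla_a\Psi-(\Delta\Psi)\nabla_b\Psi-\nabla^a\Psi\nabla_a\nabla_b\Psi=-(\Delta\Psi)\nabla_b\Psi , $$ which vanishes precisely when $\Delta\Psi=0$, illustrating the general phenomenon recorded in Theorem \ref{gg} below.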
The equation $E(g,\Psi)=0$ is a generalised Euler-Lagrange system. We have the following result. \begin{theorem}\label{gg} On $(M,g)$, let $\Psi_0$ be a solution of the generalised Euler-Lagrange system $$ E(g,\Psi)=0. $$ Then the metric gradient $B(g,\Psi_0)$ is locally conserved, that is \begin{equation}\label{div2} \nabla^a B_{ab}(g,\Psi_0)=0. \end{equation} \end{theorem} \begin{proof} From \nn{natural2} it follows that ${\mathcal S}(g,\Psi)$ is diffeomorphism invariant. Thus, differentiating ${\mathcal S}(g,\Psi)$ along the pullback of an infinitesimal diffeomorphism generated by a vector field $X$, and using the chain and product rule under the integral, we have a generalisation of \nn{main}, viz.\ $$ 0=\int_M \Big( ({\mathcal L}_X g , B (g,\Psi)) + \langle {\mathcal L}_X \Psi , E(g,\Psi) \rangle \Big) dv_g , $$ where the derivative of \nn{natural2} is used. Thus if we calculate along $\Psi_0$ satisfying $E(g,\Psi_0) =0$, then this reduces to $0=\int_M ({\mathcal L}_X g , B (g,\Psi_0)) dv_g$ and we argue as below \nn{main} to conclude \nn{div2}. \end{proof} \begin{remark} The argument above is a minor variant of that in \cite{HE}, which treats the case that $L$ depends on at most first covariant derivatives of $\Psi$. In that setting $E(g,\Psi_0)=0$ gives the standard Euler-Lagrange equations of continuum mechanics and they term $B(g,\Psi_0)$ an ``energy-momentum tensor''. In certain contexts the same $B(g,\Psi_0)$ is sometimes termed a stress-energy tensor \cite{B,BE}. \end{remark} \subsection{Conformal variations}\label{cvar} On a manifold $M$, a natural scalar invariant $V$ is said to be {\em conformally variational} within a conformal class of metrics ${\mathcal C}=\{\widehat{g}=e^{2\Upsilon}g\mid\Upsilon\in C^\infty(M)\}$ if there is a functional ${\mathcal S}(g)$ on ${\mathcal C}$ with \begin{equation}\label{Fbul} {\mathcal S}^\bullet(g)(\omega)=2 \int_M\omega V \,dv_g\,,\qquad\mbox{all }\omega\in C^\infty(M).
\end{equation} As above $dv_g$ is the Riemannian measure, and here \begin{equation}\label{bul} {\mathcal S}^\bullet(g)(\omega):=\dfrac{d}{d s}\Big|_{s=0}{\mathcal S}(e^{2 s \omega}g). \end{equation} In \nn{bul}, the curve of metrics $e^{2 s \omega}g$ may be replaced by any curve with the same initial tangent $g^\bullet=2\omega g$. The property of being variational can depend both on $L$, and on the conformal class ${\mathcal C}$. We first consider two important cases, the first of which is as follows. \begin{definition}\label{cv} We shall say that $V$, a natural scalar invariant, is \underline{naturally} conformally variational if it arises as in \nn{Fbul} above from a Riemannian functional ${\mathcal S}$ that admits a gradient (as in \nn{core}) for any $g\in {\mathcal C}$. \end{definition} Suppose now ${\mathcal S}$ is as in Definition \ref{cv} and we calculate \nn{bul} via a specialisation of the total metric variation computation \nn{core}. It follows that \begin{equation}\label{confv} {\mathcal S}^\bullet(g)(\omega)=2\int_M (\omega g, B) \,dv_g \end{equation} whence, in particular, $V=g^{ab}B_{ab}$. We summarise this observation. \begin{lemma}\label{Vori} If ${\mathcal S}$ is a Riemannian functional with gradient $B_{ab}$ at $g$, then the function $V$ in \nn{Fbul} is given by $g^{ab}B_{ab}$. \end{lemma} Recall that for total metric variations the key integral relation underlying the Hilbert-Noether identity is \nn{main}. Comparing this with \nn{confv} we see that, in the restricted setting of conformal variations, \nn{main} still yields constraints provided ${\mathcal L}_Xg=2\omega g$. But this exactly means that $X$ is a conformal vector field and $\omega=\frac{1}{n}\div X$. Then \nn{main} states $$ 0= \int_M (\div X)V dv_g. $$ So, integrating by parts, we have the following.
\begin{theorem}\label{ukw} If $V$ is naturally conformally variational, then for any conformal vector field $X^a$ on a closed Riemannian manifold $(M,g)$, we have \begin{equation}\label{confN} 0= \int_M (\div X) V dv_g= - \int_M ({\mathcal L}_X V) dv_g. \end{equation} \end{theorem} One might suppose that Definition \ref{cv}, as used in Theorem \ref{ukw}, is restrictive. In fact in most cases it is not. To make this precise we need a further definition. A natural invariant $L$ (possibly tensor valued) is said to have {\em weight} $\ell$ if uniform dilation of the metric has the effect $L[A^2g]=A^{\ell}L[g]$ for all $0<A\in{\mathbb{R}}$. For example, the scalar curvature has weight $-2$. It is not essentially restrictive to consider only invariants of a well defined weight, since it is easily shown that any natural scalar invariant is a sum of such. The key to the claim that began this paragraph is the following result. \begin{proposition} \cite{BrGovar} \label{self} If~ $V$, of weight $\ell\neq -n$, is a conformally variational local scalar invariant on a closed Riemannian conformal $n$-manifold $(M,{\mathcal C})$, then \begin{equation}\label{selfe} {\mathcal S}(g):=(n+\ell)^{-1}\int_M V dv_g \end{equation} is a Riemannian functional for $V$ in ${\mathcal C}$; that is \nn{Fbul} holds. \end{proposition} Now by the discussion of natural Lagrangians in Section \ref{total}, it follows that \nn{core} holds for ${\mathcal S}$ as in \nn{selfe}, and so ${\mathcal S}$ satisfies Definition \ref{cv}. Thus we have the following. \begin{corollary}\label{aa} On a closed Riemannian $n$-manifold a natural scalar invariant ~$V$, of weight $\ell\neq -n$, is conformally variational if and only if it is naturally conformally variational. \end{corollary} The scalar curvature is well known to be conformally variational and so Theorem \ref{ukw} certainly extends the results of Bourguignon-Ezin \cite{BoE} for the scalar curvature in dimensions $n\geq 3$. 
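To make the constructions above concrete (a standard instance, recorded only for orientation): for $V=\operatorname{Sc}$, of weight $\ell=-2$, Proposition \ref{self} produces the normalised total scalar curvature functional $$ {\mathcal S}(g)=\frac{1}{n-2}\int_M \operatorname{Sc}\, dv_g , \qquad n\geq 3, $$ a Yamabe-type functional. On the round conformal sphere $(S^n,c)$ the resulting identity \nn{confN} has real force: taking $X$ to be the conformal gradient field of a first spherical harmonic $x_j$, it shows for instance that no non-constant monotone function of $x_j$ can be the scalar curvature of any metric in $c$, since for such a candidate curvature ${\mathcal L}_X \operatorname{Sc}=X(\operatorname{Sc})$ has a fixed sign and a non-zero integral. This is the classical Kazdan-Warner obstruction recalled in the introduction.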
In fact conformally variational invariants are not at all rare, and so the extension is vast; we shall take up this point in Section \ref{exs}. Next we show that a slight variant of the above also recovers and extends the identity from \cite{BoE,KW74} for the Gauss curvature in dimension 2. Above we saw that it is insightful to work with the gradient $B$ when it is available. That observation will also be critical in the next section. However the existence of a total metric variation gradient, as in \nn{core}, is not necessary to see a Kazdan-Warner type identity arise from gauge invariance. \begin{definition}\label{nocv} We shall say that $V$, a natural scalar invariant, is \underline{normally} conformally variational if it arises via \nn{Fbul} with ${\mathcal S}$ a Riemannian functional. \end{definition} Note that this is a strictly broader class of invariants than above: if $V$ is naturally conformally variational then it is normally conformally variational. \begin{theorem}\label{nocvt} The identity \nn{confN} holds if we assume only that $V$ is normally conformally variational (with also the other conditions of Theorem \ref{ukw} imposed). \end{theorem} \begin{proof} We follow the idea of Section \ref{total}, but restrict at the outset to the case that $X$ is a conformal vector field. Again from the diffeomorphism invariance of the Riemannian functional we have $0={\mathcal S}'(g)(h)$ where $h={\mathcal L}_X g$. But $h$ is a conformal variation: $h=\frac{2}{n}(\div X) g$. So ${\mathcal S}'(g)(h)={\mathcal S}^\bullet(g)( \frac{1}{n}\div X)$ and since, by assumption, $V$ and ${\mathcal S}$ are related by \nn{Fbul} the result follows.
\end{proof} \begin{example}\label{poly} On a closed Riemannian 2-manifold $(M,g)$ if we take ${\mathcal S}(g):=\det \Delta_{g}/ A(g)$, where $A(g)$ is the total area and $\det \Delta_{g}$ is the functional determinant of the Laplace-Beltrami operator, then there is the Polyakov formula \cite{OPS,P} for conformal variation $$ {\mathcal S}^\bullet(g)(\omega)=c\cdot \int_M \omega Q dv_g $$ where $Q$ is the Gauss curvature and $c\neq 0$ is a constant. ${\mathcal S}(g)$ is a Riemannian functional and so we conclude from Theorem \ref{nocvt} that for any conformal vector field $X$ on $M$ we have $\int_M {\mathcal L}_X Q~dv_g =0$. \end{example} \begin{remark} In view of the derivations in Theorems \ref{ukw} and \ref{nocvt} it is clear that the Kazdan-Warner identities are related to Noether-Hilbert principles. Note here we do not expect an analogue of \nn{div}: The result here is necessarily global, since the common ground between \nn{main} and \nn{confv} involves conformal vector fields, which are global objects. \end{remark} From the proof of Theorem \ref{nocvt} it is evident that we may obtain an identity at a particular $g_1\in {\mathcal C}$ without the full force of \nn{Rinv}. Indeed we simply need ${\mathcal S}'(g_1)(h)=0$ where $h$ is ${\mathcal L}_X g_1$ and $X$ a conformal vector field. If $V(g)$ is a conformally variational natural invariant this is achieved by the functional ${\mathcal S}(g)=\int_M \omega V(g) dv_g$ on ${\mathcal C}$, where $g=e^{2\omega}g_1$, $\omega \in C^\infty (M)$. This follows from the following argument, which is a trivial adaptation of a result from \cite{Bo86,delRob}. \begin{theorem}\label{bway} Suppose that $X$ is a conformal vector field on a closed Riemannian conformal manifold $(M,{\mathcal C})$, and that $V=V(g)$ is a conformally variational natural scalar invariant. Then $$ \int_M (\div X) V(g) ~dv_g $$ is independent of the choice of metric $g\in {\mathcal C}$, and hence is zero.
\end{theorem} \begin{proof} Fix any metric $g_0\in {\mathcal C}$. If $V$ is conformally variational then the linearisation of the map $\omega \mapsto V(e^{2\omega} g_0)$, $\omega\in C^\infty(M)$, is formally self-adjoint (see e.g.\ \cite{BrGovar}). Identifying $C^\infty (M)$ with the tangent space to ${\mathcal C}$, it follows that the 1-form on ${\mathcal C}$ $$ C^\infty (M)\ni \omega \mapsto \int_M \omega V(g)~ dv_g $$ is closed \cite{Bo86,BrGovar}. Now suppose that $\tilde{X}$ is the vector field on ${\mathcal C}$ induced by the conformal vector field $X$ on $M$. From the diffeomorphism invariance of this 1-form it is annihilated by ${\mathcal L}_{\tilde{X}}$. Then using the Cartan formula ${\mathcal L}_{\tilde{X}}= d\iota_{\tilde{X}} +\iota_{\tilde{X}} d$, and the identification of $\tilde{X}$ with $\frac{1}{n}\div X$, it follows that $ \int_M (\div X)V(g) ~dv_g $ is constant on ${\mathcal C}$ as claimed. It follows that if there is a metric $g_0\in {\mathcal C}$ such that $V(g_0)$ is constant then for any metric $g\in {\mathcal C}$ we have $$ \int_M {\mathcal L}_X V ~dv_g=0. $$ In particular this holds on the sphere $S^n$ with its standard conformal structure. However by the Lelong-Ferrand-Obata theorem \cite{L-F,O} if $M$ is any other conformal manifold then $X$ is necessarily a Killing vector field for some metric in ${\mathcal C}$. Computing at that metric we have ${\mathcal L}_X V =\div (V X)$ and so $\int_M {\mathcal L}_X V ~dv_g=0$; by the constancy established above, the same conclusion then holds for every $g\in{\mathcal C}$. \end{proof} \begin{remark}\label{215} While this Theorem gives the strongest result, it uses a less direct argument than that of Theorems \ref{ukw} and \ref{nocvt}, and this argument partly loses contact with the Hilbert-Noether principles, and in most cases is not necessary (as follows from Corollary \ref{aa}). Most importantly, as we shall see below, the proof of Theorem \ref{ukw} naturally suggests, and links it to, a generalisation of the Schoen identity.
On the other hand for natural invariants of weight $-n$ we expect to need stronger arguments: for example if $V$ is a conformal covariant of weight $-n$, then $V dv_g $ is a conformally invariant $n$-form and so $\int_M V dv_g$ is conformally invariant. It is easily seen that such a matching of weights between $dv_g$ and $V$ causes a breakdown in the argument of Theorem \ref{ukw}. The result in \cite{delRob} corresponding to Theorem \ref{bway} uses that the linearisation of $\omega \mapsto V( e^{2\omega}g)$ is formally self-adjoint, without any explicit mention that $V$ is variational. But for a natural scalar invariant this self-adjointness condition is equivalent to it being conformally variational, as follows from a trivial variant of \cite[Lemma 2(ii)]{BrGovar}. \end{remark} \medskip \section{Manifolds with boundary and conservation} \label{bc} Let $M$ be a Riemannian manifold with boundary $\partial M$. To avoid unnecessary restriction we allow here the possibility that $\partial M$ is the empty set. In this setting, and using a different approach to the above, we derive a result that strictly generalises Theorem \ref{ukw} and the Schoen identity \nn{sch}. \subsection{A generalisation of the Schoen identity}\label{subsch} On $M$, let $B$ be a symmetric 2-tensor with compact support, and $X$ any tangent vector field. Then by the Gauss formula for Stokes' Theorem, $$ \int_M \nabla^a(B_{ab}X^b) dv_g=\int_{\partial M} B_{ab} X^a \nu^b d\sigma_g, $$ where $\nu$ and $d\sigma_g$ are, respectively, the outward unit normal and the induced metric measure along $\partial M$. Now if the tensor $B$ is locally conserved, meaning that $ \nabla^a B_{ab}=0 $, then \begin{equation}\label{key1} 2 \nabla^a(B_{ab}X^b)=2 B_{ab}\nabla^a X^b =(B,{\mathcal L}_X g). \end{equation} In particular if $X$ is a conformal vector field then $$ \nabla^a(B_{ab}X^b) =\frac{1}{n} V \div X $$ where $V$ is the metric trace of $B$, i.e.\ $V:=g^{ab}B_{ab}$, and $\div X =\nabla_a X^a$.
So \begin{equation}\label{prelim} n\int_{\partial M} B_{ab} X^a \nu^b d\sigma_g= \int_M V (\div X ) dv_g~. \end{equation} A related identity arises from the (metric) trace-free part of $B_{ab}$, that is $$ \stack{o}{B}_{ab}:= B_{ab}-\frac{1}{n}g_{ab}V. $$ If $X$ is a conformal Killing vector field then $$ \nabla^a(\stack{o}{B}_{ab} X^b)= (\nabla^a\stack{o}{B}_{ab})X^b + \stack{o}{B}_{ab}\nabla^a X^b. $$ But then $\stack{o}{B}_{ab}\nabla^a X^b=0$, since $\stack{o}{B}_{ab}$ is symmetric trace-free, while $\frac{1}{2}(\nabla^a X^b+\nabla^b X^a)= \frac{1}{n}g^{ab}\div X$. For the other term observe that $$ X^b\nabla^a\stack{o}{B}_{ab}= X^b(\nabla^aB_{ab}-\frac{1}{n}\nabla_b V)=-\frac{1}{n}X^b\nabla_b V. $$ Thus $$ \begin{array}{rl} \int_M {\mathcal L}_X V~ dv_g & = - n\int_{M} \nabla^a(\stack{o}{B}_{ab} X^b) dv_g\\ &= - n\int_{\partial M} \stack{o}{B}_{ab} X^a \nu^b d\sigma_g. \end{array} $$ \smallskip \noindent Recalling also \nn{div}, we summarise as follows. \begin{theorem}\label{maini} On an oriented Riemannian manifold $M$ with boundary $\partial M$ the following holds. If $B$ is a locally conserved symmetric 2-tensor, of compact support, and $X$ is a conformal vector field, then \begin{equation}\label{b} \int_M {\mathcal L}_X V ~ dv_g= - n \int_{\partial M} \stack{o}{B}_{ab} X^a \nu^b d\sigma_g, \end{equation} where $V$ is the metric trace of $B$, i.e.\ $V=g^{ab}B_{ab}$. In particular this holds for any gradient tensor or generalised energy-momentum tensor $B$ that has compact support. \end{theorem} In particular the above applies when $\partial M=\emptyset$. Thus we have another Kazdan-Warner type result. For emphasis we state this specialisation. \begin{corollary}\label{kwa} On a Riemannian manifold $M$, without boundary, let $V$ be the metric trace of a compactly supported and locally conserved symmetric 2-tensor $B$. Then for any conformal vector field $X$ we have $$ \int_M {\mathcal L}_X V dv_g=0.
$$ \end{corollary} Using Proposition \ref{self} and the result \nn{confv}, Theorem \ref{maini} also gives the following result. \begin{corollary} \label{ci} Suppose that the natural scalar invariant $V$ is naturally conformally variational on a compact $n$-manifold with boundary. Then $V=g^{ab}B_{ab}$ where $B_{ab}$ is a natural gradient of some Riemannian functional, and the relation \nn{b} holds for any conformal vector field $X$. If $V$ has a well-defined weight $\ell\neq -n$, then $B$ is the gradient of the functional $$ {\mathcal S} (g) =\frac{1}{n+\ell}\int_M V dv_g. $$ \end{corollary} \medskip A special case of Theorem \ref{maini} arises when $B_{ab}$ is (a non-zero multiple of) the {\em Einstein tensor} $$ B_{ab}:=P_{ab}-g_{ab}J, $$ which is the gradient arising from the Einstein-Hilbert action; here $n\geq 3$ and we assume compact support. So then $V=(1-n)J$. Here $P_{ab}$ is the Schouten tensor and $J=g^{ab}P_{ab}$; in terms of the Ricci and scalar curvatures, this is characterised by $$ \operatorname{Ric}_{ab}=(n-2)P_{ab}+ J g_{ab}, $$ whence $\operatorname{Sc} = 2(n-1)J$. So then ($2\times$) \nn{b} states $$ 2(1-n) \int_M {\mathcal L}_X J ~ dv_g =- 2n\int_{\partial M} P_{(ab)_0}X^a \nu^b d\sigma_g, $$ where $(\cdots)_0$ indicates the trace-free symmetric part. In other terms we obtain $$ \int_M {\mathcal L}_X \operatorname{Sc} dv_g = \frac{2n}{n-2}\int_{\partial M} \operatorname{Ric}_{(ab)_0}X^a \nu^b d\sigma_g, $$ as a special case of Theorem \ref{maini}. This is precisely the Schoen identity \nn{sch} from the introduction. \begin{remark} The identity \nn{prelim} is widely used in the literature, see e.g.\ \cite{B,BE,BR,ezin,OP} and references therein. \end{remark} \begin{remark} Theorem \ref{maini} produces a Schoen-type identity for every locally conserved symmetric 2-tensor, and thus in particular for every natural gradient, or generalised energy-momentum tensor.
The surprising aspect of the Theorem is that it provides a rather subtle global relation between the trace and trace-free parts of a locally conserved 2-tensor. \end{remark} \subsection{Conserved quantities}\label{consS} A Killing vector field $X$ is of course also a conformal Killing vector field. However Theorem \ref{ukw} and Corollary \ref{kwa} are vacuous for such $X$: if $X$ is a Killing vector then for any function $f$ on a closed Riemannian manifold $M$ we have $\int_M {\mathcal L}_X f~dv_g=0$, since $\div X=0$ and so ${\mathcal L}_X f= \div (f X)$. In both Theorem \ref{ukw} and Corollary \ref{kwa} the function $V$ is the trace of a locally conserved symmetric 2-tensor $B$. Thus these results are also obviously vacuous if in fact $B$ is trace-free, so $B=\stack{o}{B}$, even if $X$ is not Killing. It is natural to ask about the meaning of the corresponding Pohozaev-Schoen type identities in these degenerate cases. This brings us to the following result, at least part of which is well known in the physics literature (see e.g.\ \cite{HE}). \begin{proposition}\label{std} Suppose that $X$ is a (conformal) Killing vector field on a Riemannian manifold, and $B$ is a locally conserved (metric trace-free) symmetric 2-tensor. Then the corresponding current $ J_a:=B_{ab}X^b $ is locally conserved, that is \begin{equation}\label{conse} \div J=0. \end{equation} \end{proposition} \begin{proof} If $B$ is a symmetric 2-tensor that satisfies $\nabla^aB_{ab}=0$ then for any vector field $X^b$, and setting $ J_a:=B_{ab}X^b $, it follows immediately from \nn{key1} that $\nabla_a J^a$ is zero if and only if ${\mathcal L}_X g$ is pointwise orthogonal to $B$. Thus in particular if ${\mathcal L}_X g=0$ this holds. It also holds if instead ${\mathcal L}_X g=\frac{2}{n}(\div X) g$, provided $B$ is trace-free.
\end{proof} For either of the cases in the Proposition, it is easily seen that the Pohozaev-Schoen type identity of Theorem \ref{maini} is equivalent to the usual flux conservation law for conserved currents. On the other hand if $B$ is locally conserved but not necessarily trace-free then from \nn{key1} we have, in our current notation, $\div J= V \div X /n$, for a conformal Killing vector field $X$. Then on the left-hand-side of \nn{prelim} $\int_{\partial M}B_{ab}X^a\nu^b~d\sigma_g= \int_{\partial M} J_a\nu^a ~d\sigma_g $ is a measure of flux reflecting conservation failure. Thus we see that the identity of Theorem \ref{maini} is exactly a complement of the usual conservation law for conserved currents. To underscore this point we note here that the Proposition above provides a route to proliferating conserved quantities on geometries with symmetry. \begin{theorem} \label{cons} Each natural scalar invariant $L$ determines a corresponding natural gradient \begin{equation}\label{gradi} B^L_{ab}, \end{equation} and so the following: \begin{itemize} \item On any Riemannian manifold with a Killing vector field $X$ one obtains a corresponding canonical and locally conserved current $J^L_a$, (i.e. $J^L$ satisfies \nn{conse}). \item If $L$ has the property that, on closed manifolds, ${\mathcal S}(g)=\int_M L dv_g$ is conformally invariant, then $B^L_{ab}$ is conformally covariant and trace-free. It follows that on any Riemannian manifold with a conformal Killing vector field $X$ the corresponding canonical and locally conserved current $J^L_a$ is conformally covariant. In this case the local conservation equation \nn{conse} is conformally invariant. \item In either case $L$ determines a non-local invariant $$ I^L_{\Sigma}:=\int_{\Sigma} J^L_a d\sigma^a, $$ for each hypersurface $\Sigma$, with the property that $I^L_{\Sigma_1}=I^L_{\Sigma_2}$ if $\Sigma_1$ and $\Sigma_2$ are homologous hypersurfaces sharing the same boundary. 
If ${\mathcal S}(g)=\int_M L dv_g$ is conformally invariant then $I^L_\Sigma$ is conformally invariant. \end{itemize} \end{theorem} \begin{proof} We observed in section \ref{total} that on closed manifolds the action determined by $L$, viz.\ $S(g)=\int_M L ~dv_g$ has a corresponding natural gradient $B^L_{ab}$, and this is locally conserved, cf.\ \nn{div}. Then, as a natural tensor, $B^L_{ab}$ is given by a universal formula in terms of partial (metric or volume form) contractions of Levi-Civita covariant derivatives of the Riemannian curvature. We now take this universal formula as defining the symmetric and locally conserved tensor $B^L_{ab}$. Thus the first result then follows from Proposition \ref{std} with \begin{equation}\label{jl} J^L_a:= B^L_{ab}X^b. \end{equation} Now set $V:=g^{ab}B_{ab}^L$. If ${\mathcal S}(g)$ is conformally invariant (on closed manifolds) then \nn{core} must be zero when, for example, $g^t=e^{2t\omega}g$, $\omega\in C^\infty (M)$. But in this case $h=2\omega g$, so $$ \int_M \omega V ~dv_g=0. $$ This must hold for arbitrary $\omega\in C^\infty (M)$, and so $V=0$, i.e.\ $B^L$ is metric trace-free. Again this must also be true of the universal formula for $B^L$. Thus the claim that $J^L$ (as in \nn{jl} with $X$ now conformal Killing) is conserved, as stated in the second point of the Theorem, also follows from Proposition \ref{std}. An easy argument involving second variations of ${\mathcal S}(g)$, that mix conformal and total metric variations, then shows that $B^L$ is necessarily conformally invariant (see e.g.\ \cite{TomSrni} where also the notion of conformal invariance, as used here, is discussed). Finally, for the second point, if $L$ has a well-defined weight (and any $L$ is a sum of such), then ${\mathcal S}(g)$ being conformally invariant and non-trivial implies this weight is $-n$.
Since any natural scalar is a sum of invariants each of which has a well-defined weight, it follows that we may assume without loss of generality that $L$ has weight $-n$. It follows that $B^L$, and hence also $J^L$, has weight $2-n$ and in fact they are then conformally covariant of weight $2-n$. In this case it is well known (and easily verified) that the equation \nn{conse} is conformally invariant. The third point is then immediate from the divergence theorem, save for the comment about conformal invariance. But the latter is an easy consequence of the weight of $J^L$ and its conformal covariance. \end{proof} \begin{remark} If $L$ is a coupled scalar invariant, in the sense of Section \ref{se}, then we may replace $B^L$, in Theorem \ref{cons}, by the corresponding generalised energy-momentum tensor. \end{remark} \subsection{Other signatures} For simplicity of exposition in the above we have restricted to Riemannian signature. In fact all results above in Section \ref{bc} extend as stated to pseudo-Riemannian manifolds of any signature with the following restrictions and minor adjustments: the boundary conormal $\nu_a$ is nowhere null; it is normalised so that $g_{ab}\nu^a\nu^b=\pm 1$; and it satisfies that at any point of the boundary $\nu_aX^a$ is positive if $X^a$ is an outward pointing tangent vector. The restriction that $\nu_a$ be nowhere null can be removed if statements are adjusted appropriately. We leave this to the reader. \section{Examples and Applications} \label{exs} Theorems \ref{ukw} and \ref{maini} are already very general. For example begin with {\em any} natural scalar invariant $L$. Since natural scalar invariants are easily written down using Weyl's classical invariant theorem \cite{ABP}, we may readily proliferate examples. Then generically the total metric variation of ${\mathcal S}(g):= \int_M L(g) dv_g$ will yield a corresponding non-trivial Euler-Lagrange tensor $B^L$ via \nn{core}.
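The simplest illustration is $L=\operatorname{Sc}$: then ${\mathcal S}(g)$ is the Einstein-Hilbert action, $B^L$ is (a non-zero multiple of) the Einstein tensor, and its local conservation is exactly the contracted Bianchi identity $$ \nabla^a\Big(\operatorname{Ric}_{ab}-\frac{1}{2}\operatorname{Sc}\, g_{ab}\Big)=0 , $$ cf.\ the treatment of the Einstein tensor in Section \ref{subsch}.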
Exceptions are those natural scalars $L$ whose integral is a topological (or smooth structure) invariant, such as the Pfaffian in even dimensions. In any case of $B^L\neq 0$ Theorem \ref{maini} is non-trivial. If the integral of $L$ is conformally invariant (so the weight of $L$ is $-n$) then $B^L$ is trace-free (by Theorem \ref{cons}), and the left-hand-side of \nn{b} vanishes in Theorem \ref{maini}; the latter nevertheless yields a non-trivial constraint as discussed in Section \ref{consS}. Otherwise, by \nn{confv} we see that $V^L:=\operatorname{tr}^g(B^L)$ is a conformally variational natural scalar and Theorem \ref{ukw} and Theorem \ref{maini} apply non-trivially. Suppose that $V^L$ has a well-defined weight $\ell\neq -n$ (as follows if $L$ does). Then the map $$ L\mapsto (n+\ell)^{-1} V^L $$ may be regarded as a projection to the {\em conformally variational part} of $L$, as follows from Proposition \ref{self}. Ignoring possible deeper applications, this at least shows that conformally variational scalar invariants are, in a suitable sense, extremely common. We discuss some cases below. Note that if $V$ is a weight $\ell\neq -n$ scalar invariant, then the fact that it is conformally variational immediately implies that it has some properties analogous to those of the scalar curvature. In particular we have the following. Let us write $B^V_{ab}$ for the gradient of the functional ${\mathcal S}^V(g):= (n+\ell)^{-1}\int_M V dv_g$. Then $V=g^{ab}B^V_{ab}$ and, denoting by $\stack{o}{B}^V$ the trace-free part of $B^V$, we have this observation: \begin{proposition}\label{ccurv} If $\stack{o}{B}^V_{ab}=0$ then $V=$~constant. \end{proposition} \begin{proof} This is an immediate consequence of $\nabla^a B^V_{ab}=0$: if $\stack{o}{B}^V_{ab}=0$ then $B^V_{ab}=\frac{1}{n}Vg_{ab}$, and so $0=\nabla^a B^V_{ab}=\frac{1}{n}\nabla_b V$. \end{proof} \noindent The point is that in the case of $V$ being the scalar curvature $\stack{o}{B}^V_{ab}=0$ expresses the Einstein equations.
In that setting the result in the Proposition is often viewed as a consequence of the Bianchi identities, but we see here that it can be seen to arise from the fact that the Einstein tensor is locally conserved (and so the proposition may be extended in an obvious way). The discussion here is still unnecessarily restrictive. Further examples arise from more general Riemannian functionals (e.g.\ Example \ref{poly}), the use of generalised energy-momentum tensors and so forth. We conclude this section with some special cases. \subsection{Local conformal invariants} If $V(g)$ is a natural (scalar) conformal invariant of weight $\ell$, meaning that $V(e^{2\omega}g)=e^{\ell \omega} V(g)$, then if $\ell\neq -n$ it is easily verified that $V(g)$ is naturally conformally variational, with \nn{selfe} giving a functional. For example if $W$ denotes the Weyl curvature then $|W|^2$ is a weight $-4$ conformal invariant, and so is conformally variational in dimensions greater than 4. \subsection{Q-curvatures} On Riemannian $n$-manifolds, there is an important class of natural scalar curvature quantities $Q_m$, parametrised by positive even integers $m$ with $m\notin\{n,n+2,n+4,\ldots\}$, which are sometimes termed {\em subcritical Q-curvatures} \cite{Tomsharp}. In a conformal sense these generalise the scalar curvature: $Q_2$ is the scalar curvature ($n\geq 3$) and if $\widehat{g}=e^{2\omega}g$, $\omega\in C^\infty(M)$, then \begin{equation}\label{Qt} Q^{\widehat{g}}_m = u^{\frac{n+m}{m-n}}\left(\delta S^g_md + Q^g_m\right)u, \end{equation} where $u=e^{\frac{n-m}{2}\omega}$, $\delta=-\div$ is the formal adjoint of the exterior derivative $d$ and $S^g_m$ is an appropriate operator. The differential operator $P_m: = \delta S^g_md + Q^g_m $ is conformally invariant, and is ($\frac{2}{n-m}\times$) the GJMS operator \cite{GJMS} with leading term the Laplacian power $\Delta^{m/2}$.
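For orientation we note the case $m=2$: with $u=e^{\frac{n-2}{2}\omega}$, so that $\widehat{g}=u^{\frac{4}{n-2}}g$, the display \nn{Qt} recovers, up to the normalisation of $Q_2$, the classical transformation law of the scalar curvature $$ \operatorname{Sc}^{\widehat{g}} = u^{\frac{n+2}{2-n}}\Big(\frac{4(n-1)}{n-2}\,\delta d + \operatorname{Sc}^g\Big)u , $$ which is the Yamabe equation when $\operatorname{Sc}^{\widehat{g}}$ is prescribed to be constant.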
Thus \nn{Qt} is a higher order analogue of the Yamabe equation (which controls scalar curvature prescription). Considering now a curve $\widehat{g}=e^{2 s \omega}g$, and differentiating at $s=0$, we find $$ (Q_m)^\bullet= -m Q_m \omega + \frac{n-m}{2} \delta S^g_md \omega. $$ It follows easily that $Q_m$ is naturally conformally variational and arises from an action as given in Proposition \ref{self} (with $\ell=-m$). Thus on closed manifolds the $Q_m$ are constrained by \nn{confN} of Theorem \ref{ukw}. \medskip The critical Q-curvature $Q_n$ is a weight $-n$ Riemannian invariant on even $n$-manifolds, and is conformally variational \cite{Tomsharp,BrO91}, although not known to be naturally so. Thus it satisfies the Kazdan-Warner type identity of Theorem \ref{bway} (and cf.\ \cite{delRob} who first proved this and also the subcritical cases). In dimension 2 the critical $Q$-curvature is the Gauss curvature and so is also covered by Theorem \ref{nocvt}. It seems likely that the higher dimensional critical Q-curvatures could also be treated this way, but we shall not take that up here. An easy proof using conformal diffeomorphism invariance also follows from Theorem 7.1 of \cite{BrGoPont}. In summary: there are Q-curvatures $Q_m$ for even integers $m\notin\{n+2,n+4,\ldots\}$ and the following holds. \begin{proposition}\label{Qcase} For any conformal vector field $X$ on a closed $(M^n,g)$, $n\geq 2$, we have $ \int_M {\mathcal L}_X Q_{m} ~ dv_g=0. $ \end{proposition} In dimensions $n\geq 2$, $Q_2$ is a non-zero multiple of the scalar curvature. Explicit formulae for the Q-curvatures $Q_4$, $Q_6$, and $Q_8$, as well as an algorithm for generating the higher $Q_m$, may be found in \cite{GoPet}. An alternative algorithm may be found in \cite{GrH}. A recursive approach for the Q-curvature is developed in \cite{JQ}.
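For completeness we indicate why the variation formula above exhibits $Q_m$ as conformally variational on closed manifolds: since $(dv_g)^\bullet=n\omega\, dv_g$ along $\widehat{g}=e^{2s\omega}g$, we have $$ \frac{d}{ds}\Big|_{s=0}\int_M Q_m\, dv_g=\int_M \big((Q_m)^\bullet+n\omega Q_m\big)dv_g=(n-m)\int_M \omega\, Q_m\, dv_g+\frac{n-m}{2}\int_M (\delta S^g_m d\omega)\, dv_g , $$ and the final integral vanishes because $\delta$ is the formal adjoint of $d$ (pair against the constant function $1$). Thus the functional $(n-m)^{-1}\int_M Q_m\, dv_g$ has conformal variation $\int_M \omega\, Q_m\, dv_g$, for $m\neq n$.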
\subsection{Higher Einstein tensors}\label{hein} Throughout the following we work on a manifold of dimension $n\geq 3$ and take $m\in 2\mathbb{Z}_{>0}$, with $m\notin\{n+2,n+4,\ldots\}$. With $Q_m$ as above, we define a class of natural tensors. \begin{definition}\label{heind} Let $E^{(m)}$ be the symmetric natural 2-tensor defined by \nn{core} (i.e.\ $E^{(m)}:=B$) where $$ {\mathcal S}(g):= (n-m)^{-1}\int_M Q_m^g dv_g, \quad \mbox{if}\quad m\neq n , $$ and~ ${\mathcal S}(g):= \int_M Q_m^g dv_g$, if $m=n$. Then we shall call $E^{(m)}$ a \underline{higher Einstein tensor}. \end{definition} The term ``higher Einstein'' is partly suggested by \nn{Qt} and the following:\\ \begin{itemize} \item For $m=2$, and $n\geq 3$, $E^{(m)}$ is the usual Einstein tensor (up to a non-zero constant). \item Since each $E^{(m)}$ arises as a total metric variation we have $$ \nabla^a E^{(m)}_{ab}=0, $$ as a special case of \nn{div}. \item Proposition \ref{ccurv} holds with $V= Q_m$ and $B^V=E^{(m)}$, with $m\neq n$. \item $Q_m =g^{ab} E^{(m)}_{ab} $, for $m\neq n$, and on Einstein manifolds $Q_m$ is constant \cite{FGambnew,Gopowers}. \end{itemize} \begin{remark} In the case of even manifolds $M^n$ and $m=n$, $E^{(m)}$ is the Fefferman-Graham obstruction tensor of \cite{FGast}, see \cite{GrH}. (In dimension $n=4$ this is the well-known Bach tensor.) Thus in this case $E^{(m)}$ is trace-free and conformally invariant. \end{remark} The following is a special case of Theorem \ref{maini}. \begin{proposition}\label{pSE} Let $X$ be a conformal vector field on a compact manifold $M$ with boundary $\partial M$. Then \begin{equation}\label{Qsch} \int_M {\mathcal L}_X Q_m ~ dv_g= - n\int_{\partial M} \stack{o}{E}^{(m)}_{ab} X^a \nu^b d\sigma_g, \end{equation} for $m\neq n$, and where $ \stack{o}{E}^{(m)}_{ab}$ is the trace-free part of $E^{(m)}_{ab}$.
\end{proposition} \noindent Thus on even manifolds we may view the $E^{(m)}$ as ``interpolating'' between the usual Einstein tensor and the Fefferman-Graham obstruction tensor. The latter vanishes on Einstein manifolds \cite{FGast,GoPetOb,GrH}. These observations suggest an interesting problem: \begin{quote} \noindent{\bf Question:} Do the $\stack{o}{E}^{(m)}_{ab}$ vanish on Einstein manifolds? \end{quote} Since we posed this it has been observed by Graham that there is a simple argument confirming that the answer is yes. So the higher Einstein tensors provide a strict weakening of the Einstein condition. \begin{theorem}\label{HET} If $(M,g)$ is Einstein then $\stack{o}{E}^{(m)}_{ab}=0$ for all $m\in 2\mathbb{Z}_{>0}$, with $m\notin\{n+2,n+4,\ldots\}$. \end{theorem} \begin{proof} \cite{Grpriv} On any Riemannian manifold, the Q-curvatures may be given by formulae, the terms of which are simply complete metric contractions of covariant derivatives of the Ricci curvature, see Proposition 3.5 of \cite{FGambnew}, and the subsequent discussion there. On the other hand in (3.20) of the same source it is observed that a metric variation $h=d g^t/dt|_{t=0}$ induces a variation of the Ricci curvature which may be expressed purely in terms of covariant derivatives of $h$. Specifically: $$ \frac{d}{dt}\Big|_{t=0} \operatorname{Ric}_{ij}(g^t)= \frac{1}{2}(\nabla^k \nabla_j h_{ik} + \nabla^k \nabla_i h_{jk} - \nabla^k \nabla_k h_{ij} - \nabla_i \nabla_j h^k{}_{k}). $$ The induced variation of the Levi-Civita connection takes a similar form $$ \frac{1}{2}g^{k\ell}(\nabla_j h_{i\ell}+\nabla_i h_{j\ell}-\nabla_\ell h_{ij}). $$ Putting these things together it follows easily that, on any Riemannian manifold, there is a formula for the $E^{(m)}_{ab}$ which is a linear combination of terms, each of which is a partial metric contraction of covariant derivatives of the Ricci curvature. Again no other curvature is involved in the formula.
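Indeed, on an Einstein manifold $$ \operatorname{Ric}_{ab}=\frac{\operatorname{Sc}}{n}\, g_{ab} \quad\mbox{with}\quad \nabla_c\operatorname{Ric}_{ab}=0, $$ since the scalar curvature is then constant (for $n\geq 3$, by the contracted Bianchi identity).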
It follows easily that on an Einstein manifold $E^{(m)}_{ab}$ is simply a constant multiple of the metric. \end{proof} \begin{remark} Note that the constancy of the Q-curvatures on Einstein manifolds (mentioned earlier) is seen to be consistent with Theorem \ref{HET}, by dint of Proposition \ref{ccurv}, at least for $m\neq n$. (Of course to establish the result that the Q-curvatures are constant in this setting, including now the critical case, one would more easily use the first line of the proof of Theorem \ref{HET}. The fact that one can argue this way was pointed out for the GJMS operators in the paragraph after Proposition 7.9 of \cite{FGambnew}.) There is an analogue of Theorem \ref{HET}, and its proof, for the gradient (as in \nn{core}) of any natural scalar field arising as the restriction of a natural scalar on the Fefferman-Graham ambient manifold. In particular this applies to the $\stack{o}{B}^{(k)}_{ab}(g)$ arising from the renormalised volume coefficients, as discussed in Proposition \ref{vkKW} below. These also vanish on Einstein manifolds. \end{remark} \begin{remark} In \cite{Gursky} Gursky makes several interesting remarks concerning the gradients $E^{(m)}_{ab}$. These are related to some of the ideas of Section \ref{subsch}. Surprisingly he is also able to define an analogue of $\stack{o}{E}^{(4)}_{ab}$ for conformally flat 4-manifolds, and (in this setting) this yields an identity of the form \nn{Qsch} for the critical Q-curvature. It would be interesting to investigate whether his tensor can be derived from a symmetry principle. \end{remark} \subsection{Renormalised volume coefficients} \label{rvc} Beginning with a manifold $(M^n,g)$ $n\geq 3$, these natural scalar invariants $v_{k}$ arise (see e.g. 
\cite{Grsrni}) in the problem of \cite{FGast} of finding a 1-parameter family $h_r$ of metrics, with $h_0=g$ and so that $$ g_+:=\frac{dr^2+h_r}{r^2} $$ is an asymptotic solution to $\operatorname{Ric}^{g_+}=-n g_+$ along $r=0$ in $M_+:=M\times (0,\epsilon)$. The renormalised volume coefficients $v_k$ are defined by a volume form expansion $$ \left( \frac{\det g_\rho}{ \det g_0} \right)^{1/2} \sim 1 + \sum_{k=1}^\infty v_k\rho^k , $$ in the new variable $\rho=-\frac{1}{2}r^2$ with $g_\rho:= h_r$. In odd dimension $n$ this determines $v_k$ for $k\in \mathbb{Z}_{\geq 1}$, but in even dimensions the mentioned formal problem is obstructed at finite order and so the $v_k$ are in general defined for $k\in \{1,\cdots ,\frac{n}{2}\}$ (but are defined for $k\in \mathbb{Z}_{\geq 1}$ in certain special cases, for example if $g$ is Einstein or locally conformally flat). Chang and Fang considered the $v_k(g)$ for the Yamabe type problem of conformally prescribing constant $v_k(g)$ \cite{CF}. They showed that for $n\neq 2k$ the equation $v_k(g)=$constant is the Euler-Lagrange equation for the functional $\int_M v_k(g) dv_g$, under conformal variations satisfying the volume constraint $\int_M dv_g=1$. This also follows from \cite[Theorem 1.5]{Gr} where Graham has shown that for $k\in \mathbb{Z}$, with $2k\leq n$ if $n$ even, the infinitesimal conformal variation of the $v_k$ takes the form \begin{equation}\label{gv} \frac{d}{dt} v_k(e^{2t\omega}g)|_{t=0} = -2k \omega v_k +\nabla_a(L^{ab}_{(k)}\nabla_b \omega), \end{equation} with $L^{ab}_{(k)}$ a symmetric tensor (in fact more detail is given in \cite{Gr}). It follows that, for $n\neq 2k$, the $v_{k}$ are naturally conformally variational. Thus from Theorems \ref{ukw}, \ref{maini} and Corollary \ref{ci} we have immediately the following. \begin{proposition}\label{vkKW} Let $k\in \mathbb{Z}$, with $2k<n$ if $n$ even. The $v_k$ satisfy Theorem \ref{ukw}.
Moreover if we write $B^{(k)}_{ab}(g)$ for the gradient determined by \nn{core} with ${\mathcal S}(g):=(n-2k)^{-1}\int_M v_k(g)~ dv_g$ then on any compact manifold $N$, of dimension $n\neq 2k$, with boundary $\partial N$, and with $X$ a conformal vector field, we have $$ \int_N {\mathcal L}_X v_k ~ dv_g= - n\int_{\partial N} \stack{o}{B}^{(k)}_{ab} X^a \nu^b d\sigma_g, $$ where $ \stack{o}{B}^{(k)}_{ab}$ is the trace-free part of $B^{(k)}_{ab}$. \end{proposition} For the $v_k$, with $2k<n$ if $n$ even, this result specialises to Kazdan-Warner type identities via Corollary \ref{kwa}. Note that the differential operator on the right-hand-side of \nn{gv} is formally self-adjoint, so from \cite[Lemma 2(ii)]{BrGovar} (see Remark \ref{215}) this shows that the $v_k$ are conformally variational, {\em including} $v_{n/2}$ for $n$ even. Thus from Theorem \ref{bway}, or equivalently \cite[Theorem 2.1]{delRob}, we extend the above result as follows. \begin{proposition}\label{vnKW} Let $n=2k$. Then for any conformal vector field $X$ on a closed $(M^n,g)$ we have $$ \int_M {\mathcal L}_X v_{k}~ dv_g=0. $$ \end{proposition} \begin{remark} The Kazdan-Warner type identities for the $v_k$ are first due to \cite{GuoHL}. They use \nn{gv} and a specific calculation that follows the ideas of \cite{BoE}. Our point is that, since \nn{gv} shows that the $v_k$ are conformally variational, the results can also be deduced immediately from the general principles. Importantly, using also the stronger fact that for $2k\neq n$ the $v_{k}$ are naturally conformally variational, we also obtain the generalised Schoen-type identity of Proposition \ref{vkKW}. For $k=1,2$, or when $g$ is locally conformally flat, the $v_k$ agree with the elementary symmetric functions $\sigma_k(g^{-1}P)$ of the Schouten tensor $P$, see \cite{CF,Gr}. So as noted in \cite{GuoHL} the Kazdan-Warner type identities for $v_k$ include also the similar results of Viaclovsky for the $\sigma_k(g^{-1}P)$, \cite{V2}.
\end{remark} \subsection{Gauss-Bonnet invariants and Einstein-Lovelock Tensors} For $k\in \mathbb{Z}_{\geq 1}$ with $k\leq [n/2]$, the $2k$-Gauss-Bonnet curvature $S^{(2k)}$ is the complete contraction of the $k^{\rm th}$ tensor power of the Riemann curvature by the generalised Kronecker tensor, and has the property that in dimension $2k$ it is exactly the Pfaffian, i.e.\ the Chern-Gauss-Bonnet integrand (at least up to a nonzero constant). On $(M,g)$ closed and Riemannian, with ${\mathcal S}^{(2k)}(g):= 2 \int_M S^{(2k)}(g) ~dv_g$ the gradient $G^{(2k)}_{ab}=B_{ab}$ (in the sense of \nn{core}) is called the Einstein-Lovelock tensor \cite{Love25,Labbi} if $2k\neq n$. (If $2k= n $, then ${\mathcal S}^{(2k)}(g)$ is a multiple of the Euler characteristic.) Thus $G^{(2k)}_{ab}$ is locally conserved, $\nabla^a G^{(2k)}_{ab}=0$; for $2k\neq n$, $S^{(2k)}$ is naturally conformally variational; and as a special case of Theorem \ref{maini} we have the following. \begin{proposition}\label{gb} Let $X$ be a conformal vector field on a compact manifold $M$ with boundary $\partial M$. Then for $2k\neq n $ $$ \int_M {\mathcal L}_XS^{(2k)} ~ dv_g= - \frac{n}{2(n-2k)}\int_{\partial M} \stack{o}{G}^{(2k)}_{ab} X^a \nu^b~ d\sigma_g, $$ where $ \stack{o}{G}^{(2k)}_{ab}$ is the trace-free part of $G^{(2k)}_{ab}$. In particular on closed manifolds $\int_M {\mathcal L}_XS^{(2k)} ~ dv_g=0$. \end{proposition} \begin{remark} The last conclusion giving a Kazdan-Warner type identity is also given by \cite{GuoHL} using a direct calculation. Moreover they show that this also holds in the case $2k=n$. In fact it is easily verified that for $k$ such that the $S^{(2k)}$ are defined, the linearisation of the map $\omega \mapsto S^{(2k)}(e^{2\omega} g_0)$, $\omega\in C^\infty(M)$, is formally self-adjoint. So that result may also be obtained from Theorem \ref{bway}, or equivalently \cite[Theorem 2.1]{delRob}.
\end{remark} \subsection{Mean curvature of Euclidean hypersurfaces} Let $(M,g)$ be a codimension one submanifold of Euclidean space $ \mathbb{E}^{n+1}$, with $g_{ab}$ the pullback metric (i.e. the first fundamental form). We write $\nabla_a$ to denote the Levi-Civita connection of $g$. Let us write $II_{ab}$ for the second fundamental form on $M$ induced by the embedding. Then, since $\mathbb{E}^{n+1}$ is flat, $II_{ab}$ satisfies the contracted Codazzi equation (see e.g.\ \cite{HE}) \begin{equation}\label{ceqn} \nabla^a II_{ab}- n \nabla_b H =0, \end{equation} where, as usual, $g^{ab}$ is the inverse to $g_{bc}$ and $H:= \frac{1}{n} g^{cd}II_{cd} $ is the mean curvature of the embedding. Thus the symmetric 2-tensor $$ B_{ab}:= II_{ab}-n g_{ab} H $$ is locally conserved everywhere on $M$: $\nabla^a B_{ab}=0$. It follows immediately that Theorem \ref{maini} gives a Pohozaev-Schoen type identity on $M$ (which we may take to have a boundary) with $V= n(1-n)H$. In particular as a special case of Corollary \ref{kwa} we recover the following result. \begin{theorem} \label{AHA} Let $\i : S^n\to \mathbb{E}^{n+1}$ be a conformal immersion with mean curvature $H$. Then for any conformal vector field $X$ on $S^n$ we have $$ \int_{S^n} {\mathcal L}_X H dv_g=0, $$ where we view $H$ as a function on $S^n$, and $dv_g$ is the pullback by $\i$ of the first fundamental form measure. \end{theorem} This Theorem is first due to Ammann et al.\ \cite{AHA}. There it is established using the fact that, on $ \mathbb{E}^{n+1} $, the restriction of a parallel spinor to $M$ satisfies a certain semilinear variant of the Dirac equation. They show that any spinor satisfying such an equation satisfies a Pohozaev-Schoen type identity. The argument above provides a direct alternative argument for the Kazdan-Warner type result in Theorem \ref{AHA}; in particular it avoids the use of spinor fields. 
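For completeness we record the two computations used above: by the contracted Codazzi equation \nn{ceqn}, $$ \nabla^a B_{ab}=\nabla^a II_{ab}-n\nabla_b H=0, \qquad g^{ab}B_{ab}=nH-n^2H=n(1-n)H , $$ the second display giving the stated value $V=n(1-n)H$.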
The Pohozaev-Schoen identity of \cite{AHA} is interesting, and it would be worthwhile to investigate whether or not it is a special case of \nn{b}. \subsection{The Pohozaev identity}\label{pohS} That the classical Pohozaev identity of \cite{poh65}, $$ \lambda n \int_M F(u) + \frac{2-n}{2} \lambda\int_M f(u)u =\frac{1}{2}\int_{\partial M} (x\hbox to 2.5pt{\hss$\ccdot$\hss} \nu)(\nabla_\nu u)^2, $$ follows from the identity of Schoen is stated in \cite{schoen}. We have not been able to find the argument written anywhere so, for the convenience of the reader, and since it is an idea that generalises, we shall give here the derivation. For any conformal vector field $X$ on a Riemannian $n$-manifold $(M,g)$ with smooth boundary $\partial M$, the following identity holds $$ \int_M {\mathcal L}_X \operatorname{Sc} ~dv_g = \frac{2n}{n-2}\int_{\partial M}\left(\operatorname{Ric}-\frac{1}{n}\operatorname{Sc}\hbox to 2.5pt{\hss$\ccdot$\hss} g \right)(X,\nu)d\sigma_g ; $$ here $\nu$ is the outward normal, and $\operatorname{Ric}$ denotes the Ricci curvature. We start by taking $M \subset {\mathbb{R}}^n$ with metric $g = u^{4/(n-2)} g_0$, $g_0$ the Euclidean metric (with $dvol_{n-1}^0$ on $\partial M$ and $dvol_n^0$ on $M$ resp.).
Then with $p = \frac{n+2}{n-2}$ we have that $$\operatorname{Sc} = -\frac{4(n-1)}{n-2} u^{-p} \Delta u$$ ($\Delta$ the Euclidean Laplacian) and we take the (Euler) conformal vector field $X = x_i \frac{\partial}{\partial x_i}$ (summation convention) to get $$ {\mathcal L}_X \operatorname{Sc} = -\frac{4(n-1)}{n-2} \left( x_i(-p)u^{-p-1}\frac{\partial u}{\partial x_i} \Delta u + u^{-p} x_i \frac{\partial}{\partial x_i} \Delta u \right)$$ and so with the relevant volumes, noting that $dvol_n = u^{p+1} dvol_n^0$ and $dvol_{n-1} = u^{2(n-1)/(n-2)} dvol_{n-1}^0$, $$ {\mathcal L}_X \operatorname{Sc} dvol_n = -\frac{4(n-1)}{n-2} \left( x_i(-p)\frac{\partial u}{\partial x_i} \Delta u + u x_i \frac{\partial}{\partial x_i} \Delta u \right) dvol_n^0$$ (from now on the volume forms will be understood as the Euclidean ones, and omitted). In a similar way we can find the boundary term, using $u^{p-1} = e^{2f}$ and $$\operatorname{Ric} = (2-n)[\nabla df - df \otimes df] + [\Delta f - (n-2)|df|^2]g_0$$ which means that $$\operatorname{Ric}_{ij} = (2-n)[\frac{\partial^2f}{\partial x_i \partial x_j} - \frac{\partial f}{ \partial x_i} \frac{\partial f}{\partial x_j}] + [\frac{\partial^2f}{\partial x_k^2} - (n-2)\frac{\partial f}{\partial x_k} \frac{\partial f}{ \partial x_k}]\delta_{ij}$$ and similarly in terms of $u$ and its derivatives. Now we first use the relation between the unit normals $\nu = u^{-2/(n-2)} \nu^0$, and then consider the identity in cases where $u$ is very small along $\partial M$; finally taking the limiting case that $u = 0$ on $\partial M$. 
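Explicitly, since $u^{p-1}=e^{2f}$ we have $f=\frac{p-1}{2}\log u$, and hence $$ \frac{\partial f}{\partial x_i}=\frac{p-1}{2}\,\frac{1}{u}\frac{\partial u}{\partial x_i}, \qquad \frac{\partial^2 f}{\partial x_i\partial x_j}=\frac{p-1}{2}\Big(\frac{1}{u}\frac{\partial^2 u}{\partial x_i\partial x_j}-\frac{1}{u^2}\frac{\partial u}{\partial x_i}\frac{\partial u}{\partial x_j}\Big); $$ these are the substitutions behind the boundary integrand in the next display.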
We find that Schoen's identity then simplifies and yields the following relation: $$-\frac{4(n-1)}{n-2} \int_M (-p)x_i \frac{\partial u}{\partial x_i} \Delta u + u x_i \frac{\partial}{\partial x_i} \Delta u = $$ $$ \frac{2n}{n-2}\int_{\partial M} \Bigl( (2-n)[-\frac{p-1}{2}\frac{\partial u}{\partial x_i} \frac{\partial u}{\partial x_j} - (\frac{p-1}{2})^2 \frac{\partial u}{\partial x_i} \frac{\partial u}{\partial x_j}] + $$ $$[-\frac{p-1}{2} \frac{\partial u}{\partial x_k} \frac{\partial u}{\partial x_k} - (n-2) (\frac{p-1}{2})^2 \frac{\partial u}{\partial x_k} \frac{\partial u}{\partial x_k}]\delta_{ij}\Bigr) \nu_i^0 x_j$$ where, as before, $p-1 = \frac{4}{n-2}$. Using the fact that $\nabla u$ is normal to the boundary, we obtain $$4(n-1)\frac{n+2}{n-2} \int_M x_i \frac{\partial u}{\partial x_i} \Delta u -4(n-1) \int_M u x_i \frac{\partial}{\partial x_i} \Delta u = 2n\frac{2n-2}{n-2} \int_{\partial M} u_{\nu}^2 (\nu^0 \cdot x).$$ Here we have used, e.g., $$\frac{\partial u}{\partial x_i} \frac{\partial u}{\partial x_j} \nu_i^0 x_j = u_{\nu}^2 (\nu^0 \cdot x).$$ Now the second integral over $M$ may be integrated by parts (with no boundary term, since $u = 0$ on $\partial M$) to get the new integrand $$ -x_i \frac{\partial u}{\partial x_j} \frac{\partial^2 u}{\partial x_i \partial x_j} -u \Delta u,$$ so, after another integration by parts in the first term above (this time in the $x_i$ variable), we arrive at $$4(n-1)\frac{n+2}{n-2} \int_M x_i \frac{\partial u}{\partial x_i} \Delta u -4(n-1) \frac{n+2}{2} \int_M |\nabla u|^2 = 2\frac{n+2}{n-2} (n-1) \int_{\partial M} u_{\nu}^2 (\nu^0 \cdot x).$$ With (from the assumptions on $u$ in the Pohozaev identity) $$\int_M |\nabla u|^2 = \lambda \int_M u f(u), \qquad \int_M x_i \frac{\partial u}{\partial x_i}\Delta u = n \lambda \int_M F(u),$$ we finally get $$2 n \lambda \int_M F(u) - (n-2) \lambda \int_M u f(u) = \int_{\partial M} u_{\nu}^2 (\nu^0 \cdot x),$$ which is the classical identity we wanted.
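As a brief, standard illustration of how this identity is used (added for context; this is the classical Pohozaev nonexistence argument, with the hypotheses stated loosely): for the critical nonlinearity $f(u) = u^{\frac{n+2}{n-2}}$ we have $F(u) = \frac{n-2}{2n}\, u^{\frac{2n}{n-2}}$, so that $$2 n \lambda \int_M F(u) - (n-2) \lambda \int_M u f(u) = \lambda \int_M \bigl( 2n \cdot \tfrac{n-2}{2n} - (n-2) \bigr) u^{\frac{2n}{n-2}} = 0,$$ and the identity forces $\int_{\partial M} u_{\nu}^2 \, (\nu^0 \cdot x) = 0$. If $M$ is star-shaped about the origin, then $\nu^0 \cdot x > 0$ on $\partial M$, so $u_{\nu} \equiv 0$ on $\partial M$, and one concludes that $u \equiv 0$: there is no nontrivial solution at the critical exponent on a star-shaped domain.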
\section{Introduction} In this paper, the success probabilities for two prominent factorization methods, {\em viz.}, the general integer sieve method and the elliptic curve method, are presented. The estimates are specialized for the elliptic curve factorization algorithm. The random variables studied are (1) the number generated by raising a fixed, randomly chosen base to various random integer exponents, for the general integer sieve method, and (2) the group orders of the elliptic curve groups, with restriction taken $\mod \fieldChar$, for each (as yet unknown) prime factor $\fieldChar$ of the integer modulus to be factored. The common assumption underlying our estimates is that the events that various distinct smaller prime numbers divide any particular realization (sample) of the random variable are mutually independent. Under this independence assumption, the probabilities of success are shown to be fairly optimistic. The general integer sieve needs the random base point to be a group generator (primitive in this sense), which may be difficult to ensure. The merits of the elliptic curve method are highlighted, with a caution concerning the widths of the intervals of the possible group orders. Nevertheless, the estimated probabilities of success do not depend too heavily on this fact, as they are applicable to random samples from any interval of considerable width, for asymptotic analysis. \section{\label{Sec-Formulation}Estimation of Success Probabilities} Let $\Integers$ be the ring of integers, and $\PositiveIntegers$ be the set of positive integers. Let $N$ be a very large positive integer to be factored, and let $\basel{\Integers}{N}$ be the ring of integers with arithmetic operations taken $\mod N$. Let $L_{\min}, L_{\max} \in \Integers$ be such that $L_{\min} < L_{\max}$ and $L_{\max}-L_{\min}$ is very large.
The consecutive prime numbers are listed in ascending order as follows: $2 = \basel{\primeInt}{1}, \, 3 = \basel{\primeInt}{2}, \, 5 = \basel{\primeInt}{3}, \ldots$, so that $\basel{\primeInt}{i}$ is the $i$-th prime number, for $i \in \PositiveIntegers$. Let $k$ be a small positive integer, but still large enough that the asymptotic estimates hold good, and let $n$ be the largest positive integer such that $\basel{\primeInt}{n} < \max \{ |L_{\min}|, \, |L_{\max}| \}$. Let $X$ be a random variable taking integer values in the interval ${\mathcal I} = \bgls L_{\min}\, ,~ L_{\max}\bgrs$, with uniform probability distribution. \begin{proposition} \label{prop-01} In the notation just discussed, the probability $\basel{\pi}{X}(z)$ of the event that a sample of the random variable $X$ is divisible by a positive integer $z \geq 2$ is approximately $\frac{1}{z}$; more precisely, the following bounds hold good: \begin{equation} \frac{1}{z} - \frac{1}{L_{\max}-L_{\min}} ~~ \leq ~~ \basel{\pi}{X}(z) ~~ \leq ~~ \frac{1}{z} + \frac{1}{L_{\max}-L_{\min}} \label{Bounds-on-pi-X-z} \end{equation} \end{proposition} \proof For every positive integer $ z \geq 2$, the number of integer multiples of $z$ in ${\mathcal I}$ is between $\bglb \frac{L_{\max}-L_{\min}}{z}-1 \bgrb$ and $\bglb \frac{L_{\max}-L_{\min}}{z}+1 \bgrb$. Thus, the probability that a random sample of $X$ is divisible by $z$ is between $\frac{1}{z} - \frac{1}{L_{\max}-L_{\min}}$ and $\frac{1}{z} + \frac{1}{L_{\max}-L_{\min}}$, which justifies the claimed bounds, with appropriate choices of $z$. \qed\\ The joint consideration of the divergence of $ \sum_{i} \frac{1}{\basel{\primeInt}{i}}$ and the convergence of $ \sum_{i} \frac{1}{\basel{\primeInt^{2}}{i}}$ necessitates taking product spaces. Moreover, the estimates are presented only for the elliptic curve factorization algorithm.
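The counting argument in the proof of Proposition \ref{prop-01} can be sanity-checked numerically; the following sketch (illustrative only, with arbitrary sample parameters, and not part of the paper) verifies the stated bounds on a few intervals:

```python
def divisibility_probability(l_min, l_max, z):
    """Exact fraction of the integers in [l_min, l_max] that are divisible by z."""
    count = l_max // z - (l_min - 1) // z   # floor division handles negative endpoints
    return count / (l_max - l_min + 1)

# The deviation from 1/z never exceeds 1/(L_max - L_min), as in the proposition.
for l_min, l_max, z in [(-1000, 1000, 7), (12345, 987654, 101), (0, 10**6, 997)]:
    p = divisibility_probability(l_min, l_max, z)
    assert abs(p - 1.0 / z) <= 1.0 / (l_max - l_min)
```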
\subsection{\label{Sec-Elliptic-Curve}Success of Elliptic Curve Factorization} Let $r = \left \lceil \frac{\log (N)}{\log (\basel{\primeInt}{k})} \right \rceil$, where the choice of $k$, the number of smaller prime factors to be used, is assumed to be considerably larger than $2$, such as about $1000$. Actually, $k$ can run into tens of thousands, for practical purposes, constrained by the condition that $\basel{\primeInt^{r}}{k} \geq N$. If $\basel{\primeInt}{k}$ is too small, then $r$ can be so large that the estimated failure probabilities may become irrelevant. Let $\basel{\mathcal C}{l}\bglb \basel{\Integers}{N} \bgrb$ be elliptic curves, defined over $\basel{\Integers}{N}$, for $1 \leq l \leq r$. Let $\fieldChar$ be a large but unknown prime integer factor of $N$, such that $\fieldChar \leq \sqrt{N}$, and let $\basel{\mathcal C}{l}\bglb\basel{\Integers}{\fieldChar}\bgrb$ be the corresponding elliptic curves restricted to $\basel{\Integers}{\fieldChar}$, for $1 \leq l \leq r$. The group order of $\basel{\mathcal C}{l}\bglb\basel{\Integers}{\fieldChar}\bgrb$ is $\fieldChar+1-\basel{a}{l}$, where $-2\sqrt{\fieldChar} \leq \basel{a}{l} \leq 2\sqrt{\fieldChar}$, by the Hasse-Weil bounds for elliptic curve group orders. The probability distribution of the group order $\fieldChar+1-t$ of ${\mathcal C}\bglb\basel{\Integers}{\fieldChar}\bgrb$, as obtained by taking the $\mod \fieldChar$ restriction of a randomly generated elliptic curve ${\mathcal C}\bglb \basel{\Integers}{N} \bgrb$, is assumed to be uniform over the interval ${\mathcal I} = [(\sqrt{\fieldChar}-1)^2\, ,~ (\sqrt{\fieldChar}+1)^2]$. \begin{proposition} \label{prop-02} Let $\basel{\mathcal C }{l}\bglb \basel{\Integers}{N} \bgrb$, for $1 \leq l \leq r+2$, be any $(r+2)$ independent samples of the elliptic curves, and let $\fieldChar$ be a fixed (though as yet unknown) prime factor of $N$, such that $\fieldChar \leq \sqrt{N}$.
Let ${\mathcal E}_{k+1}$ be the random event that each of the $(r+2)$ group orders $\fieldChar + 1 - \basel{a}{l}$ of the elliptic curves $\basel{\mathcal C }{l}\bglb \basel{\Integers}{\fieldChar} \bgrb$, for $1 \leq l \leq r+2$, is divisible by a prime factor at least as large as $\basel{\primeInt}{k+1}$, where the prime number $\fieldChar$ is assumed to be such that $\fieldChar \, \vert \, N$ and $\fieldChar \geq \basel{\primeInt}{k+1}$. Then, $Pr\bglb {\mathcal E}_{k+1}\bgrb \leq \frac{(r+2)(r+1)+8}{2 \times 4 \times (\basel{\primeInt}{k+1}-1)}$ $~+~$ ${\mathcal O}\bglb \frac{(r+2)(r+1)}{8}\times \frac{ \log (~ \log (\fieldChar) ~ )}{\sqrt{\fieldChar}}\bgrb$. Further, if the approximation $\basel{\primeInt}{i} \approx i \log (i)$, for sufficiently large positive integers $i$, is permitted, then $Pr\bglb {\mathcal E}_{k+1}\bgrb \leq \frac{(r+2)(r+1)+8}{2 \times 4 \times k \times (\log (k+1))^{2}}$ $~+~$ ${\mathcal O}\bglb \frac{(r+2)(r+1)}{8}\times \frac{ \log (~ \log (\fieldChar) ~ )}{\sqrt{\fieldChar}}\bgrb$. \end{proposition} \proof Before proceeding with the proof, a justification for the validity of the approximation in the last part is as follows: by the prime number theorem, $i \approx \frac{\basel{\primeInt}{i}}{\log(\basel{\primeInt}{i})} < \frac{\basel{\primeInt}{i}}{\log(i)}$, and $ \basel{\primeInt}{i} $ is likely to be larger than $i \log (i)$. It may also be noticed that $\frac{(r+2)(r+1)+8}{8k (\log (k+1))^{2}} \approx \frac{(r+2)(r+1)+8}{8\basel{\primeInt}{k} (\log (k+1))}$.
The random event ${\mathcal E}_{k+1}$ in the statement is broken up into the following two parts: ${\mathcal E}_{k+1} \subseteq E_{k+1, \, 1} \cup E_{k+1, \, 2}$, where \begin{enumerate} \item $E_{k+1, \, 1}$ is the event that there are distinct prime numbers $\basel{\primeInt}{\basel{i}{l}} \geq \basel{\primeInt}{k+1}$, for $1 \leq l \leq r+2$, such that $\basel{\primeInt}{\basel{i}{l}} \, \mid \, (\fieldChar+1-\basel{a}{l})$ and $\basel{\primeInt}{\basel{i}{l}} \, \nmid \, (\fieldChar+1-\basel{a}{l'})$, for $l' \neq l$ and $1 \leq l,\, l' \leq r+2$, and \item $E_{k+1, \, 2}$ is the event that there is a prime number $\basel{\primeInt}{i} \geq \basel{\primeInt}{k+1}$, such that $\basel{\primeInt}{i} \, \mid \, (\fieldChar+1-\basel{a}{l})$ and $\basel{\primeInt}{i} \, \mid \, (\fieldChar+1-\basel{a}{l'})$, for two indexes $l$ and $l'$, $l' \neq l$, where $1 \leq l,\, l' \leq r+2$. \end{enumerate} The two events listed above are not mutually exclusive, but an upper bound for the sum of their probabilities is found, as an estimate for the upper bound of the probability of the event in the statement. \paragraph{Part (1). ~~}For the event $E_{k+1,\, 1}$, it is observed that, from the simultaneous congruence relations $\fieldChar +1 \equiv \basel{a}{l} \mod \basel{\primeInt}{\basel{i}{l}}$, for $1 \leq l \leq r+2$, the fixed number $\fieldChar+1$ can be recovered by the Chinese remainder theorem. The mapping $\basel{a}{l} \mapsto \basel{a}{l} \mod \basel{\primeInt}{\basel{i}{l}}$, for $1 \leq l \leq r+2$, induces the homomorphism $(\basel{a}{1}, \, \cdots, \, \basel{a}{r+2}) \mapsto (\basel{a}{1} \mod \basel{\primeInt}{\basel{i}{1}}, \, \cdots, \, \basel{a}{r+2} \mod \basel{\primeInt}{\basel{i}{r+2}})$, which preserves the algebraic structure.
In the proof, it is assumed that the probability distributions remain uniform under the mapping $\basel{a}{l} \mapsto \basel{a}{l} \mod \basel{\primeInt}{\basel{i}{l}}$, for $1 \leq l \leq r+2$, with restriction on the domain of possible values of $ (\basel{a}{1} \mod \basel{\primeInt}{\basel{i}{1}}, \, \cdots, \, \basel{a}{r+2} \mod \basel{\primeInt}{\basel{i}{r+2}})$. By the mutual independence of $\basel{a}{l}$, for $1 \leq l \leq r+2$, there are at least $4^{r+2} \prod_{l = 1}^{r+2} \sqrt{\basel{\primeInt}{\basel{i}{l}}}$ many possibilities, in all, for the set of possible realizations $ (\basel{a}{1} \mod \basel{\primeInt}{\basel{i}{1}}, \, \cdots, \, \basel{a}{r+2} \mod \basel{\primeInt}{\basel{i}{r+2}})$, after taking into account the restriction that $|a_{l}| \leq 2\sqrt{\fieldChar}$. The fixed number $\fieldChar+1$ must belong to the set of positive integers that can be reconstructed by any realization of $ (\basel{a}{1} \mod \basel{\primeInt}{\basel{i}{1}}, \, \cdots, \, \basel{a}{r+2} \mod \basel{\primeInt}{\basel{i}{r+2}})$, with $\fieldChar$ constrained to be a prime number. Now, the number of possibilities for the realizations for $ (\basel{a}{1} \mod \basel{\primeInt}{\basel{i}{1}}, \, \cdots, \, \basel{a}{r+2} \mod \basel{\primeInt}{\basel{i}{r+2}})$, that could result in the reconstruction of $\fieldChar+1$, with $\fieldChar$ restricted to be a prime number at most $\sqrt{N}$ (or of bit size at most $\frac{\log_{2}(N)}{2}$), is smaller than $\prod_{l = 1}^{r} \sqrt{\basel{\primeInt}{\basel{i}{l}}}$, because $\bglb\sqrt{\basel{\primeInt}{k}}\bgrb^{r} \geq \sqrt{N} > \frac{\fieldChar+1}{2}$. Thus, $Pr\bglb E_{k+1,\, 1} \bgrb$ $ \leq $ $\frac{1}{\sqrt{\basel{\primeInt}{\basel{i}{r+1}}\basel{\primeInt}{\basel{i}{r+2}}}}$ $\leq \frac{1}{\basel{\primeInt}{k+1}}$. A justification for this approach is given in a separate paragraph following the proof of the second part. \paragraph{Part (2). 
~~} For the event $E_{k+1,\, 2}$, a slightly weaker proof is given in this paragraph, and a more accurate proof is given in the correction part below. The event that there is a prime number $\basel{\primeInt}{i} \geq \basel{\primeInt}{k+1}$, such that $\basel{\primeInt}{i}$ divides the group orders of both $\basel{\mathcal C}{l}\bglb \basel{\Integers}{N}\bgrb$ and $\basel{\mathcal C}{l'}\bglb \basel{\Integers}{N}\bgrb$, for some $l$ and $l'$, $l \neq l'$ and $1 \leq l, \, l' \leq r+2$, occurs with probability at most $\frac{(r+2)(r+1)}{2 \basel{\primeInt^{2}}{i}}$, for any $i$, where $i \geq k+1$. This probability also accounts for the possibility that $\basel{\primeInt}{i} \, \vert \, \fieldChar+1-\basel{a}{l}$ and $\basel{\primeInt}{i} \, \vert \, \fieldChar+1-\basel{a}{l'}$, in case $\basel{a}{l} = \basel{a}{l'}$, but $l \neq l'$, where $1 \leq l, \, l' \leq r+2$, for some prime number $\fieldChar \, \mid \, N$ with $\fieldChar \geq \basel{\primeInt}{k+1}$. However, there are at least four possibilities for $\basel{\primeInt}{i}$ dividing the components of the pairs $(\fieldChar+1-\basel{a}{l} \, , \, \fieldChar+1-\basel{a}{l'})$, $(\fieldChar'+1-\basel{a'}{l} \, , \, \fieldChar+1-\basel{a}{l'})$, $(\fieldChar+1-\basel{a}{l} \, , \, \fieldChar'+1-\basel{a'}{l'})$ and $(\fieldChar'+1-\basel{a'}{l} \, , \, \fieldChar'+1-\basel{a'}{l'})$, for two distinct prime factors $\fieldChar$ and $\fieldChar'$ of the composite number $N$, of which only one possibility is taken into account, for a fixed $\fieldChar$. Thus, a multiplier of at most the fraction $\frac{1}{4}$ must be applied. Now, $\sum_{i \geq k+1} \frac{1}{\basel{\primeInt^{2}}{i}} < \sum_{i \geq k+1} \bgls \frac{1}{\basel{\primeInt}{i}-1}-\frac{1}{\basel{\primeInt}{i}} \bgrs < \frac{1}{\basel{\primeInt}{k+1}-1}$. The result follows by adding this to the probability bound in the first part.
If the approximation $\basel{\primeInt}{i} \approx i \log (i)$ is permitted, the probability bound in the second part is obtained as follows: $\sum_{i \geq k+1} \frac{1}{\basel{\primeInt^{2}}{i}} \approx $ $\sum_{i \geq k+1} \frac{1}{i^{2} (\log (i))^{2}} < $ $\frac{1}{(\log (k+1))^{2}}\sum_{i \geq k+1} \frac{1}{i^{2}} < $ $\frac{1}{(\log (k+1))^{2}}\sum_{i \geq k+1} \bgls \frac{1}{i-1} - \frac{1}{i} \bgrs $ $ = \frac{1}{k(\log (k+1))^{2}}$. \qed \\ In the following, a justification for the upper bound for $Pr\bglb E_{k+1, \, 1} \bgrb$ and a small correction to the upper bound for $Pr\bglb E_{k+1,\, 2} \bgrb$, assuming that $N$ is a random integer modulus of a prescribed bit size, are given. \paragraph{Justification for the Upper Bound for $Pr\bglb E_{k+1,\, 1} \bgrb$.~~} Conditional and joint probabilities over the possible random integer modulus $N$, of bit size equal to a prescribed parameter $\lceil \basel{\log}{2}(N)\rceil$, for independent realizations of the tuples $(\basel{a}{1}, \, \ldots, \, \basel{a}{r+2})$, with appropriate restrictions on the domains of possible values, are taken into consideration. Let the sequences $(\basel{i}{1}, \, \ldots, \basel{i}{r+2})$, for $\basel{i}{l} \neq \basel{i}{l'}$ and $k+1 \leq \basel{i}{l},\, \basel{i}{l'} \leq n$, where $1 \leq l, \, l' \leq r+2$, $l \neq l'$ and $n$ is the largest positive integer such that $\basel{\primeInt}{n} \leq (N^{\frac{1}{4}}+1)^{2}$, be enumerated in some particular total order, denoted by $\prec$. Let $X_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})}$ be the event that the group order of $\basel{\mathcal C }{l}\bglb \basel{\Integers}{N} \bgrb$ is divisible by $\basel{\primeInt}{\basel{i}{l}}$, for $1 \leq l \leq r+2$, over all possible integer moduli of bit size $\lceil \basel{\log}{2}(N)\rceil$, excluding the events $X_{(\basel{j}{1}, \, \ldots, \basel{j}{r+2})}$, for $(\basel{j}{1}, \, \ldots, \basel{j}{r+2}) \prec (\basel{i}{1}, \, \ldots, \basel{i}{r+2})$, if any.
Now \begin{small} \begin{eqnarray*} && \ltab \ltab Pr\bglb E_{k+1,\, 1} \bgrb ~~ \leq ~~ \sum_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})} \bgls \tab \tab Pr\bglb X_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})} \bgrb ~ \times \\ && \tab \tab \tab \tab \tab \tab Pr\bglb \tab \textrm {the event that~} \fieldChar \textrm{~ is a large prime number} \\ && \tab \tab \tab \tab \tab \tab \tab \tab \textrm{~~ of bit size at most~} \frac{\log_{2}(N)}{2} , \textrm{~such that,}\\ && \tab \tab \tab \tab \tab \tab \tab \tab \textrm{~~ for every~} l, ~~ \basel{\primeInt}{\basel{i}{l}} \, \mid \, \fieldChar + 1 - \basel{a}{l} , \textrm{~~and} \\ && \tab \tab \tab \tab \tab \tab \tab \textrm{~~ for some~} l', ~~ \basel{\primeInt}{\basel{j}{l'}} \, \nmid \, \fieldChar + 1 - \basel{a}{l'} , \textrm{~ whenever~} \\ && \tab \tab \tab \tab \tab \tab \tab \tab \tab \tab (\basel{j}{1}, \, \ldots, \basel{j}{r+2})\prec (\basel{i}{1}, \, \ldots, \basel{i}{r+2})\,, \\ && \tab \tab \tab \tab \tab \tab \tab \tab \tab \textrm{~where~} 1 \leq l,\, l' \leq r+2 \tab \bgrb \tab \tab \bgrs\\ && \tab \tab \leq ~~ \sum_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})} \bgls \tab \tab Pr\bglb X_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})} \bgrb ~ \times \\ && \tab \tab \tab \tab \tab \tab Pr\bglb \tab \textrm {the event that~} \fieldChar \textrm{~ is a large prime number} \\ && \tab \tab \tab \tab \tab \tab \tab \tab \textrm{~~ of bit size at most~} \frac{\log_{2}(N)}{2} , \textrm{~such that,}\\ && \tab \tab \tab \tab \tab \tab \tab \tab \textrm{~~ for every~} l, ~~ \basel{\primeInt}{\basel{i}{l}} \, \mid \, \fieldChar + 1 - \basel{a}{l} , \\ && \tab \tab \tab \tab \tab \tab \tab \tab \tab \textrm{~where~} 1 \leq l \leq r+2 \tab \bgrb \tab \tab \bgrs\\ && \tab \tab \leq ~~ \sum_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})} Pr\bglb X_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})} \bgrb ~ \times ~ \frac{1}{\basel{\primeInt}{k+1}} ~~~~ \leq ~~~~ \frac{1}{\basel{\primeInt}{k+1}} \end{eqnarray*} \end{small} 
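The telescoping tail estimate $\sum_{i \geq k+1} \frac{1}{\basel{\primeInt^{2}}{i}} < \frac{1}{\basel{\primeInt}{k+1}-1}$ used in Part (2) can also be sanity-checked numerically; the following sketch (illustrative only, not part of the argument; it uses a truncated prime table, so the computed tails slightly underestimate the true ones) confirms the bound for a few values of $k$:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(100_000)  # truncated table: computed tails are lower bounds

for k in (10, 100, 1000):
    p_next = primes[k]          # p_{k+1} in the paper's 1-based indexing
    tail = sum(1.0 / p ** 2 for p in primes[k:])
    assert tail < 1.0 / (p_next - 1), k
```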
\paragraph{Small Correction of Upper Bound for $Pr\bglb E_{k+1,\, 2} \bgrb$. ~~} Taking the upper estimate $ \frac{1}{\basel{\primeInt}{i}}+\frac{1}{4\sqrt{\fieldChar}}$ in place of $\frac{1}{\basel{\primeInt}{i}}$, for $k+1 \leq i \leq n$, the following is obtained: \begin{small} \begin{eqnarray*} && \ltab Pr\bglb E_{k+1, 2} \bgrb ~~ \leq ~~ \sum_{i = k+1}^{n} \left ( \frac{1}{\basel{\primeInt}{i}}+\frac{1}{4\sqrt{\fieldChar}}\right )^{2} ~~ = ~~ \sum_{i = k+1}^{n} \left ( \frac{1}{\basel{\primeInt^{2}}{i}} ~ + ~ \frac{1}{2 \basel{\primeInt}{i} \sqrt{\fieldChar}} ~ + ~\frac{1}{16\fieldChar}\right ) \end{eqnarray*} \end{small} \lspace where $n$ is constrained to be the largest positive integer such that $\basel{\primeInt}{n}$ may possibly divide both $\fieldChar+1-a$ and $\fieldChar+1-a'$, for some $-2\sqrt{\fieldChar} \leq a,\, a' \leq 2\sqrt{\fieldChar}$. Since $\gcd(\fieldChar+1-a\, ,\, \, \fieldChar+1-a')$ must divide $|a-a'| \leq 4\sqrt{\fieldChar}$, it may be assumed that $n \leq \frac{4\sqrt{\fieldChar}}{\log (4\sqrt{\fieldChar})}$, when $a \neq a'$. The terms accrued from \begin{small} \begin{enumerate} \item the sum $\frac{1}{\sqrt{\fieldChar}}\sum_{i = k+1}^{n} \frac{1}{\basel{\primeInt}{i}}$, which can be replaced with $\frac{\log \bglb \log (\basel{\primeInt}{n}) \bgrb}{\sqrt{\fieldChar}} \approx \frac{\log \bglb 2\log (\sqrt{\fieldChar}+1) \bgrb}{\sqrt{\fieldChar}}~$; \item the event that $a = a'$, which is $\frac{1}{4\sqrt{\fieldChar}}$, for independent samples $a$ and $a'$, assuming values from the interval $[-2\sqrt{\fieldChar}\, ,\, \, 2\sqrt{\fieldChar}]$ ; and \item the sum $\sum_{i = k+1}^{n} \frac{1}{\fieldChar}$, which can be replaced with $ \frac{(4\sqrt{\fieldChar})}{ \fieldChar\log (4\sqrt{\fieldChar}) }$ $ = $ $ \frac{4}{\sqrt{\fieldChar} \log (4\sqrt{\fieldChar}) }$ \end{enumerate} \end{small} are insignificant for large $\fieldChar$.
In the statement of the proposition, the effect of the correction terms is reflected in the addend ${\mathcal O}\bglb \frac{(r+2)(r+1)}{8}\times \frac{ \log (~ \log (\fieldChar) ~ )}{\sqrt{\fieldChar}}\bgrb$. \\ The methods for the justification and correction terms are similar to {\em{a priori}} and {\em{a posteriori}} estimation of the probabilities. To be more explicit, the probability of a random prime $\fieldChar$ being a factor of the random modulus $N$, where $N$ satisfies the requirements specified by $X_{(\basel{i}{1}, \, \ldots, \basel{i}{r+2})}$ and has a fixed, specified bit size $\lceil \basel{\log}{2}(N) \rceil$, assuming uniform likelihood among all such prime numbers that may arise, is estimated and shown to be bounded above by $\frac{1}{\basel{\primeInt}{k+1}}$. If we were to take $\frac{1}{\fieldChar}$ for the probability distribution of this event, we would, actually, get an even smaller upper bound for $Pr\bglb E_{k+1,\, 1} \bgrb$. This indirect approach is necessitated by the difficulties arising out of the need to deal with the principle of inclusion-and-exclusion in the estimation of the probability of a union of events, from the probabilities of independent individual atomic events. For instance, if $Pr\bglb {\mathcal E}_{k+1} \bgrb$ is replaced with something like $\frac{\sum_{i = k+1}^{n} \frac{1}{\basel{\primeInt}{i}}}{\sum_{i = 1}^{n} \frac{1}{\basel{\primeInt}{i}}}$, for some large enough $n$, the resulting failure probability may become totally unrealistic. If the hyperelliptic curve method can be adapted for factorization, the success probability may hopefully improve. \section{\label{Sec-General-Comparison}Comparison with General Integer Sieve Factorization} Let $N$ be a large composite positive integer, and $g \in \basel{\Integers^{*}}{N}$, where $\basel{\Integers^{*}}{N}$ is the group of invertible elements $\mod N$, with respect to the multiplication $\mod N$.
For a randomly chosen $t \in \basel{\Integers}{N}$, estimates for the probability of the event that every prime factor of $g^{t} \mod N$ is at most $\basel{\primeInt}{k}$ remain elusive. The operational theory of the general integer sieve method is described below. Let $d_{j}$ be the discrete logarithm of $\basel{\primeInt}{j}$ to the base $g$, assuming that $\basel{\primeInt}{j}$ belongs to the cyclic subgroup generated by $g$, for $1 \leq j \leq k$. After collecting a sufficient number of samples, a system of linear equations of the form $\sum_{j = 1}^{k} \basel{\nu}{i, \, j} d_{j} \equiv \basel{t}{i} \mod \phi(N)$ is formed, for $1 \leq i \leq k$, where $\phi(N)$ is the Euler function of $N$, which is the group order of $ \basel{\Integers^{*}}{N}$. Any such relation arises as a result of the factorization $g^{\basel{t}{i}} = \prod_{j = 1}^{k} \basel{\primeInt^{\basel{\nu}{i, j}}}{j}$, for some random samples $\basel{t}{i}$, for $1 \leq i \leq k$. From every new relation $\sum_{j = 1}^{k} \basel{\nu}{k+l, \, j} d_{j} \equiv \basel{t}{k+l} \mod \phi(N)$, a vector, consisting of integers $\basel{\tau}{k+l, \, i}$, $1 \leq i \leq k$, as components, may hopefully be found, such that $\sum_{i = 1}^{k} \basel{\tau}{k+l, \, i}\basel{\nu}{i, \, j} \equiv 0 \mod \phi(N)$, for $l = 1, 2, 3, \ldots$. Some of the relations may be redundant, leading to trivial relations. In fact, if two linearly independent relations $\sum_{j = 1}^{k} \basel{\nu}{i, \, j} d_{j} \equiv \basel{t}{i} \mod \phi(N)$, for $i = 1$ and $2$, are obtained, then a linear relation of the form $\sum_{j = 1}^{k} \basel{c}{j} d_{j} \equiv 0 \mod \phi(N)$, for some integers $\basel{c}{j}$, $1 \leq j \leq k$, not all $0$, can be found. In addition, if $\rho \, \mid \, \basel{c}{j}$, $1 \leq j \leq k$, for some integer $\rho \geq 2$, then a relation of the form $h^{\rho} = 1 \mod N$, for some $h \in \basel{\Integers^{*}}{N}$, can be found.
Linear relations, like $\sum_{j = 1}^{k} \basel{c}{j} d_{j} \equiv 0 \mod \phi(N)$, are called trivial, if it so happens that $\sum_{j = 1}^{k} \basel{c}{j} d_{j} = 0$, even without applying $\mod \phi(N)$. For the quadratic integer sieve, the $\mod 2$ restriction (which can be interpreted as the situation corresponding to $\rho = 2$) is taken, with a view to improving the efficiency, because if $g^{2t} = 1 \mod N$, for some integer $t$, then, with $h = g^{t}$, $(h-1)$ and $(h+1)$ may yield nontrivial factors of $N$ by $\gcd$. The estimation of the probability of generating a linear relation in $\basel{d}{j}$, for $1 \leq j \leq k$, does not carry over from the elliptic curve method to the general integer sieve, as the term $(\fieldChar+1)$ plays a pivotal role in our estimation of the error probabilities of the elliptic curve factorization method. As for the primitiveness of the chosen base element $g$, it may be observed that the cardinality of $\basel{\Integers^{*}}{N}$ is $\phi(N)$, and among the elements of $\basel{\Integers^{*}}{N}$, there are about $\phi\bglb \phi(N) \bgrb$ elements that can be primitive (group generator) elements. For multiple base elements, the primitiveness constraint may be overcome, but the probability of generating a linear relation is less clearly understood. Subsequently, the merits of the elliptic curve factorization method are described.
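As a toy illustration of the $\gcd$ mechanism for $\rho = 2$ described above (with small, hypothetical numbers; not part of the comparison):

```python
# If h^2 ≡ 1 (mod N) with h ≢ ±1 (mod N), then gcd(h - 1, N) is a
# nontrivial factor of N; a toy instance with hypothetical small numbers.
from math import gcd

N = 91                      # = 7 * 13, a toy composite modulus
h = 27                      # 27^2 = 729 = 8 * 91 + 1, so h^2 ≡ 1 (mod N)
assert h * h % N == 1 and h % N not in (1, N - 1)
factor = gcd(h - 1, N)      # gcd(26, 91) = 13
assert 1 < factor < N and N % factor == 0
print(factor, N // factor)  # prints: 13 7
```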
\paragraph{Merits of Elliptic Curve Factorization} \begin{enumerate} \item the method is a probabilistic polynomial time algorithm, under the assumption of uniform probability of the group orders for a random modulus of given size; \item the space requirement is quite small, compared to the integer sieve method; \item if at least one sample with $\basel{\primeInt}{k}$-smooth group order is realized, then the factorization produces a result; and \item it is not necessary to assume that the initial random point on any selected curve is a group generator. \end{enumerate} However, diligence must be exercised while exponentiating by a prime number $\basel{\primeInt}{i}$, in that the exponentiation may be conducted at most $\frac{\log(N)}{2\log(\basel{\primeInt}{i})}$ times, for every positive integer $i \leq k$. The number of curve samples, which must be taken in parallel for each exponentiation by $\basel{\primeInt}{i}$, $1 \leq i \leq k$, also plays an important role. \section{Conclusion} The probability analysis for elliptic curve factorization is presented. The method is shown to be a probabilistic polynomial time algorithm, under reasonable assumptions on the probability distribution of the group orders that arise when restriction to a fixed (but unknown) smaller prime factor of the modulus integer to be factored is taken. The integer modulus to be factored is treated as a random variable of fixed size, because it is an input to the factorization algorithm. The analysis takes into account the {\em{a priori}} and {\em{a posteriori}} probabilities. The probability of successful factorization is fairly optimistic.
\section{Introduction} Heegaard Floer homology, developed by Ozsv\'ath and Szab\'o \cite{OSz3Manifold} in the early 2000s, has been an extremely effective tool for answering classical questions about 3-manifolds, particularly concerning the genera of embedded surfaces \cite{OSzGenus}. However, surprisingly little is known about the relationship between Heegaard Floer homology and topological properties of Heegaard splittings, even though a Heegaard diagram is an essential ingredient in defining the Heegaard Floer homology of a closed $3$-manifold $Y$. In particular, a Heegaard diagram provides a presentation of the fundamental group of $Y$, and it is natural to ask how this presentation is related to the Heegaard Floer chain complex. In this paper, we shall investigate one such connection. A \emph{left-ordering} on a non-trivial group $G$ is a total order $<$ on the elements of $G$ such that $g < h$ implies $kg < kh$ for any $g,h,k \in G$. A group $G$ is called \emph{left-orderable} if it is nontrivial and admits at least one left-ordering. The question of which $3$-manifolds have left-orderable fundamental group has been of considerable interest and is closely connected to the study of foliations. For instance, if $Y$ admits an $\mathbb{R}$-covered foliation (i.e., a taut foliation such that the leaf-space of the induced foliation on the universal cover $\widetilde{Y}$ is homeomorphic to $\mathbb{R}$), then $\pi_1(Y)$ is left-orderable. Boyer, Rolfsen, and Wiest \cite{BoyerRolfsenWiest} showed that the fundamental group of any irreducible $3$-manifold $Y$ with $b_1(Y)>0$ is left-orderable, reducing the question to that of rational homology spheres. In its simplest form, Heegaard Floer homology associates to a closed, oriented $3$-manifold $Y$ a $\mathbb{Z}/2\mathbb{Z}$--graded, finitely generated abelian group $\HF(Y)$. 
This group is computed as the homology of a free chain complex $\CF(\mathcal{H})$ associated to a Heegaard diagram $\mathcal{H}$ for $Y$; different choices of diagrams for the same manifold yield chain-homotopy-equivalent complexes. The group $\CF(\mathcal{H})$ depends only on the combinatorics of $\mathcal{H}$, but the differential on $\CF(\mathcal{H})$ involves counts of holomorphic curves that rely on auxiliary choices of analytic data. If $Y$ is a rational homology sphere, then the Euler characteristic of $\HF(Y)$ is equal to $\abs{H_1(Y;\mathbb{Z})}$, which implies that the rank of $\HF(Y)$ is at least $\abs{H_1(Y;\mathbb{Z})}$. $Y$ is called an \emph{L-space} if $\HF(Y) \cong \mathbb{Z}^{\abs{H_1(Y;\mathbb{Z})}}$; thus, L-spaces have the simplest possible Heegaard Floer homology. Examples of L-spaces include $S^3$, lens spaces (whence the name), all manifolds with finite fundamental group, and double branched covers of alternating (or, more broadly, \emph{quasi-alternating}) links. Additionally, Ozsv\'ath and Szab\'o \cite{OSzGenus} showed that if $Y$ is an L-space, it does not admit any taut foliation; whether the converse is true is an open question. The following related conjecture, stated formally by Boyer, Gordon, and Watson \cite{BoyerGordonWatson}, has recently been the subject of considerable attention: \begin{conj} \label{conj:Lspace} Let $Y$ be a closed, connected, 3-manifold. Then $\pi_{1}(Y)$ is not left-orderable if and only if $Y$ is an L-space. \end{conj} This conjecture is now known to hold for all geometric, non-hyperbolic 3-manifolds \cite{BoyerGordonWatson}.\footnote{Specifically, work of Boyer, Rolfsen, and Wiest \cite{BoyerRolfsenWiest} and Lisca and Stipsicz \cite{LiscaStipsiczInvariants3} gives the result for Seifert manifolds with base orbifold $S^2$, as was also observed by Peters \cite{PetersLSpaces}. 
The cases of Seifert manifolds with non-orientable base orbifold and of Sol manifolds follow from \cite{BoyerRolfsenWiest} and \cite{BoyerGordonWatson}.} Additionally, Boyer, Gordon, Watson \cite{BoyerGordonWatson} and Greene \cite{GreeneAlternating} have shown that the double branched cover of any non-split alternating link in $S^3$ --- which is generically a hyperbolic $3$-manifold --- has non-left-orderable fundamental group. In this paper, we prove the ``if'' direction of Conjecture \ref{conj:Lspace} for manifolds that are ``L-spaces on the chain level.'' To be precise, we call a 3-manifold $Y$ a \emph{strong L-space} if it admits a Heegaard diagram $\mathcal{H}$ such that $\CF(\mathcal{H}) \cong \mathbb{Z}^{\abs{H_1(Y;\mathbb{Z})}}$. This purely combinatorial condition implies that the differential on $\CF(\mathcal{H})$ vanishes, without any consideration of holomorphic disks. We call such a Heegaard diagram a \emph{strong Heegaard diagram}. By considering the presentation for $\pi_1(Y)$ associated to a strong Heegaard diagram, we prove: \begin{theorem} \label{thm:main} If $Y$ is a strong L-space, then $\pi_1(Y)$ is not left-orderable. \end{theorem} The standard Heegaard diagram for a lens space is easily seen to be a strong diagram. Moreover, Greene \cite{GreeneSpanning} constructed a strong Heegaard diagram for the double branched cover of any alternating link in $S^3$; indeed, Boyer, Gordon, and Watson's proof that the fundamental group of such a manifold is not left-orderable essentially makes use of the group presentation for $\pi_1$ associated to that Heegaard diagram. At present, we do not know of any strong L-space that cannot be realized as the double branched cover of an alternating link; while it seems unlikely that every strong L-space can be realized in this manner, it is unclear what obstructions could be used to prove this claim. 
(Indeed, the question of finding an alternate characterization of alternating links is a famous open problem posed by R. H. Fox.) Nevertheless, our theorem seems like a useful step in the direction of Conjecture \ref{conj:Lspace} in that it relies only on data contained in the Heegaard Floer chain complex. On the other hand, the following theorem, which is well-known but does not appear in the literature, does indicate that being a strong L-space may be a fairly restrictive condition: \begin{theorem} \label{thm:S3} If $Y$ is an integer homology sphere that is a strong L-space, then $Y \cong S^3$. \end{theorem} In particular, there exist integer homology spheres that are L-spaces (e.g., the Poincar\'e homology sphere) but not strong L-spaces. The fact that the condition of being a strong L-space detects $S^3$ suggests that it might be possible to obtain a more explicit characterization or even a complete classification of strong L-spaces. Below, we shall present a graph-theoretic proof of Theorem \ref{thm:S3} due to Josh Greene. In fact, this proof can be extended to classify the finitely many strong L-spaces with $|H_1(Y;\mathbb{Z})|\leq 3$, and it is natural to ask whether, for any $n$, there are finitely many strong L-spaces with $|H_1(Y;\mathbb{Z})| \leq n$. \subsection*{Acknowledgments} The authors are grateful to Josh Greene, Eli Grigsby, Peter Ozsv\'ath, and Liam Watson for helpful conversations, and to the Simons Center for Geometry and Physics, where much of the work in this paper was completed while the authors were visiting in May 2011. \section{Proofs of Theorems \ref{thm:main} and \ref{thm:S3}} To prove Theorem \ref{thm:main}, we will use a simple obstruction to left-orderability that can be applied to group presentations. Let $X$ denote the set of symbols $\{0,+,-,*\}$.
These symbols are meant to represent the possible signs of real numbers: $+$ and $-$ represent positive and negative numbers, respectively, and $*$ represents a number whose sign is not known. As such, we define a commutative, associative multiplication operation on $X$ by the following rules: (1) $0 \cdot \epsilon = \epsilon \cdot 0 = 0$ for any $\epsilon \in X$; (2) $+ \cdot + = - \cdot - = +$; (3) $+ \cdot - = - \cdot + = -$; and (4) $\epsilon \cdot * = * \cdot \epsilon = *$ for $\epsilon \in \{+,-,*\}$. A group presentation $\mathcal{G} = \gen{ x_1,\dots, x_m | r_1, \dots, r_n}$ gives rise to an $m \times n$ matrix $E(\mathcal{G}) = (\epsilon_{i,j})$ with entries in $X$ by the following rule: \begin{equation} \label{eq:epsilonij} \epsilon_{i,j} = \begin{cases} 0 & \text{if neither $x_i$ nor $x_i^{-1}$ occur in $r_j$} \\ + & \text{if $x_i$ appears in $r_j$ but $x_i^{-1}$ does not} \\ - & \text{if $x_i^{-1}$ appears in $r_j$ but $x_i$ does not} \\ * & \text{if both $x_i$ and $x_i^{-1}$ occur in $r_j$}. \\ \end{cases} \end{equation} \begin{lemma} \label{lemma:notLO} Let $\mathcal{G} = \gen{ x_1,\dots, x_m | r_1, \dots, r_n}$ be a group presentation such that for any $d_1, \dots, d_m \in \{0,+,-\}$, not all zero, the matrix $M$ obtained from $E(\mathcal{G})$ by multiplying the $i{}^{\text{th}}$ row by $d_i$ has a nonzero column whose nonzero entries are either all $+$ or all $-$. Then the group $G$ presented by $\mathcal{G}$ is not left-orderable. \end{lemma} \begin{proof} Suppose that $<$ is a left-ordering on $G$, and let $d_i$ be $0$, $+$, or $-$ according to whether $x_i=1$, $x_i>1$, or $x_i<1$ in $G$. Since $G$ is nontrivial, at least one of the $d_i$ is nonzero. If the $j{}^{\text{th}}$ column of $M$ is nonzero and has entries in $\{0,+\}$, the relator $r_j$ is a product of generators $x_i$ that are all nonnegative in $G$, and at least one of which is strictly positive. Thus, $r_j>1$ in $G$, which contradicts the fact that $r_j$ is a relator. 
An analogous argument applies for a nonzero column with entries in $\{0,-\}$. \end{proof} We shall focus on presentations with the same number of generators as relations. For a permutation $\sigma \in S_n$, let $\sign(\sigma) \in \{+,-\}$ denote the sign of $\sigma$ ($+$ if $\sigma$ is even, $-$ if $\sigma$ is odd). The key technical lemma is the following: \begin{lemma} \label{lemma:matrix} Let $\mathcal{G} = \gen{ x_1,\dots, x_n | r_1, \dots, r_n}$ be a group presentation such that $E(\mathcal{G})$ has the following properties: \begin{enumerate} \item There exists at least one permutation $\sigma_0 \in S_n$ such that the entries $\epsilon_{1,\sigma_0(1)}, \dots, \epsilon_{n,\sigma_0(n)}$ are all nonzero. \item For any permutation $\sigma \in S_n$ such that $\epsilon_{1,\sigma(1)}, \dots, \epsilon_{n,\sigma(n)}$ are all nonzero, we have $\epsilon_{1,\sigma(1)}, \dots, \epsilon_{n,\sigma(n)} \in \{+,-\}$. \item For any two permutations $\sigma, \sigma'$ as in (2), we have \[ \sign(\sigma) \cdot \epsilon_{1,\sigma(1)} \cdot \dots \cdot \epsilon_{n,\sigma(n)} = \sign(\sigma') \cdot \epsilon_{1,\sigma'(1)} \cdot \dots \cdot \epsilon_{n,\sigma'(n)}. \] \end{enumerate} Then the group $G$ presented by $\mathcal{G}$ is not left-orderable. \end{lemma} In other words, if we consider the formal determinant \[ \det(E(\mathcal{G})) = \sum_{\sigma \in S_n} \sign(\sigma) \cdot \epsilon_{1,\sigma(1)} \cdot \dots \cdot \epsilon_{n,\sigma(n)}, \] condition (1) says that at least one summand is nonzero, condition (2) says that no nonzero summand contains a $*$, and condition (3) says that every nonzero summand has the same sign. \begin{proof} By reordering the generators and relations, it suffices to assume that $\sigma_0$ from condition (1) is the identity, so that $\epsilon_{i,i} \ne 0$ for $i= 1, \dots, n$, and hence $\epsilon_{i,i} \in \{+,-\}$ by condition (2). We shall show that $E(\mathcal{G})$ satisfies the hypotheses of Lemma \ref{lemma:notLO}. 
Suppose, then, toward a contradiction, that $d_1, \dots, d_n$ are elements of $\{0,+,-\}$, not all zero, such that every nonzero column of the matrix $M$ obtained as in Lemma \ref{lemma:notLO} contains a nonzero off-diagonal entry (perhaps a $*$) that is not equal to the diagonal entry in that column. Denote the $(i,j){}^{\text{th}}$ entry of $M$ by $m_{i,j}$. \begin{figure} \[ \begin{pmatrix} + & 0 & 0 & 0 & \fbox{$-$} \\ 0 & + & \fbox{$-$} & * & 0 \\ \fbox{$-$} & 0 & + & * & 0 \\ 0 & 0 & 0 & \fbox{$+$} & 0 \\ + & \fbox{$-$} & 0 & 0 & + \\ \end{pmatrix} \qquad \qquad \left( \vcenter{ \xymatrix@R=6pt@C=6pt{ + \ar[dd] & & & & - \ar[llll] \\ & + \ar[ddd] & - \ar[l] & {*} & \\ - \ar[rr] & & + \ar[u] & {*} & \\ & & & + & \\ + & - \ar[rrr] & & & + \ar[uuuu] }} \right) \] \caption{Illustration of the proof of Lemma \ref{lemma:matrix}. In the matrix $M$ shown at left, the entries $m_{i, \sigma(i)}$ are highlighted, where $\sigma$ is the permutation constructed in the proof. To find $\sigma$, we start with the $+$ in the upper left corner, travel to a $-$ in the same column, and then travel to the diagonal entry in the same row as this $-$. Repeating this procedure, we eventually obtain a closed loop, as shown at right.} \label{fig:connect} \end{figure} We may inductively construct a sequence of distinct indices $i_1, \dots, i_k \in \{1, \dots, n\}$ such that \begin{enumerate} \item[(A)] $m_{i_j,i_j} \in \{+,-\}$, and \item[(B)]$m_{i_{j+1},i_{j}} \ne 0$ and $m_{i_{j+1},i_j} \ne m_{i_j,i_j}$ \end{enumerate} for each $j=1, \dots, k$, with $j$ taken modulo $k$. This is done by ``connecting the dots'' as in Figure \ref{fig:connect}. Specifically, we begin by choosing any $i_1$ such that $m_{i_1,i_1} \ne 0$. Given $i_j$, our assumption on $M$ states that we can choose $i_{j+1}$ satisfying assumption (B) above; we then have $m_{i_{j+1},i_{j+1}} \ne 0$ since otherwise the whole $i_{j+1}{}^{\text{th}}$ row would have to be zero. 
Repeating this procedure, we eventually obtain an index $i_k$ that is equal to some previously occurring index $i_{k'}$, where $k'+1 <k$. The sequence $i_{k'+1}, \dots, i_k$, relabeled accordingly, then satisfies the assumptions (A) and (B). Define a $k$-cycle $\sigma\in S_n$ by $\sigma(i_j) = i_{j+1}$ for $j=1, \dots, k$ mod $k$, and $\sigma(i') = i'$ for $i' \not\in \{i_1, \dots, i_k\}$. By construction, $\epsilon_{i, \sigma(i)} \ne 0$ for each $i = 1, \dots, n$, so the sequence $(\epsilon_{1, \sigma(1)}, \dots, \epsilon_{n,\sigma(n)})$ contains no $*$s by condition (2). The sequences $(\epsilon_{1, \sigma(1)}, \dots, \epsilon_{n,\sigma(n)})$ and $(\epsilon_{1,1}, \dots, \epsilon_{n,n})$ differ in exactly $k$ entries, and the signature of $\sigma$ is $(-1)^{k-1}$. This implies that \[ \sign(\sigma) \cdot \epsilon_{1, \sigma(1)} \cdot \dots \cdot \epsilon_{n,\sigma(n)} = (-1)^{2k-1} \sign(\operatorname{id}) \cdot \epsilon_{1, 1} \cdot \dots \cdot \epsilon_{n,n}, \] which contradicts condition (3). This completes the proof. \end{proof} Now we will apply Lemma \ref{lemma:matrix} to prove Theorem \ref{thm:main}. We first recall some basic facts about the Heegaard Floer chain complex. A \emph{Heegaard diagram} is a tuple $\mathcal{H} = (\Sigma, \bm\alpha, \bm\beta)$, where $\Sigma$ is a closed, oriented surface of genus $g$, $\bm\alpha = (\alpha_1, \dots, \alpha_g)$ and $\bm\beta = (\beta_1, \dots, \beta_g)$ are each $g$-tuples of pairwise disjoint simple closed curves on $\Sigma$ that are linearly independent in $H_1(\Sigma;\mathbb{Z})$, and each pair of curves $\alpha_i$ and $\beta_j$ intersect transversely. A Heegaard diagram $\mathcal{H}$ determines a closed, oriented $3$-manifold $Y = Y_\mathcal{H}$ with a self-indexing Morse function $f: Y \to [0,3]$ such that $\Sigma = f^{-1}(3/2)$, the $\alpha$ circles are the belt circles of the $1$-handles of $Y$, and the $\beta$ circles are the attaching circles of the $2$-handles. 
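We remark that the hypotheses of Lemma \ref{lemma:matrix} involve only the finitely many entries of $E(\mathcal{G})$ and so can be tested mechanically. The following Python sketch is a minimal illustration of ours (not part of the paper's argument): it encodes $0$, $+$, $-$ as $0$, $+1$, $-1$ and $*$ as \texttt{None}, enumerates the summands of the formal determinant, and checks conditions (1)--(3). The example matrix is $E(\mathcal{G})$ for the presentation $\gen{x, y \mid xy,\, x^{-1}y}$, which presents $\mathbb{Z}/2$.

```python
from itertools import permutations

# Our encoding (not from the paper): 0, '+', '-' as 0, +1, -1; '*' as None.
def mul(a, b):
    """Sign multiplication on {0, +, -, *}; None ('*') absorbs nonzero signs."""
    if a == 0 or b == 0:
        return 0
    if a is None or b is None:
        return None
    return a * b

def parity(perm):
    """Sign of a permutation, given as a tuple of images of 0..n-1."""
    n = len(perm)
    seen, sign = [False] * n, 1
    for i in range(n):
        if not seen[i]:
            j, cycle = i, 0
            while not seen[j]:
                seen[j] = True
                j, cycle = perm[j], cycle + 1
            if cycle % 2 == 0:   # an even-length cycle flips the sign
                sign = -sign
    return sign

def lemma_matrix_applies(E):
    """Check conditions (1)-(3) of the lemma for a square sign matrix E."""
    n = len(E)
    summands = []
    for perm in permutations(range(n)):
        entries = [E[i][perm[i]] for i in range(n)]
        if 0 in entries:
            continue                 # summand vanishes
        if None in entries:
            return False             # a '*' in a nonzero summand: condition (2) fails
        s = parity(perm)
        for e in entries:
            s = mul(s, e)
        summands.append(s)
    # (1): at least one nonzero summand; (3): all nonzero summands agree
    return len(summands) > 0 and len(set(summands)) == 1

# E(G) for <x, y | xy, x^{-1}y>, a presentation of Z/2:
E = [[+1, -1],
     [+1, +1]]
print(lemma_matrix_applies(E))   # True
```

Both nonzero summands of the formal determinant equal $+$, so the lemma applies; this matches the fact that $\mathbb{Z}/2$ has torsion and hence is not left-orderable.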
If we orient the $\alpha$ and $\beta$ circles, the Heegaard diagram determines a group presentation \[ \pi_1(Y) = \gen{a_1, \dots, a_g \mid b_1, \dots, b_g}, \] where the generators $a_1, \dots, a_g$ correspond to the $\alpha$ circles, and $b_j$ is the word obtained as follows: If $p_1, \dots, p_k$ are the intersection points of $\beta_j$ with the $\alpha$ curves, indexed according to the order in which they occur as one traverses $\beta_j$, and $p_\ell \in \alpha_{i_\ell} \cap \beta_j$ for $\ell = 1, \dots, k$, then \begin{equation} \label{eq:relation} b_j = \prod_{\ell = 1}^k a_{i_\ell}^{\eta(p_\ell)}, \end{equation} where $\eta(p_\ell) \in \{\pm 1\}$ is the local intersection number of $\alpha_{i_\ell}$ and $\beta_j$ at $p_\ell$. Let $\Sym^g(\Sigma)$ denote the $g{}^{\text{th}}$ symmetric product of $\Sigma$, and let $\mathbb{T}_\alpha, \mathbb{T}_\beta \subset \Sym^g(\Sigma)$ be the $g$-dimensional tori $\alpha_1 \times \dots \times \alpha_g$ and $\beta_1 \times \dots \times \beta_g$, which intersect transversely in a finite number of points. Assuming $Y$ is a rational homology sphere, $\CF(\mathcal{H})$ is the free abelian group generated by points in $\mathfrak{S}_\mathcal{H} = \mathbb{T}_\alpha \cap \mathbb{T}_\beta$.\footnote{For general $3$-manifolds, we must restrict to a particular class of so-called admissible diagrams.} More explicitly, these are tuples $\mathbf{x} = (x_1, \dots, x_g)$, where $x_i \in \alpha_i \cap \beta_{\sigma(i)}$ for some permutation $\sigma \in S_g$. The differential on $\CF(\mathcal{H})$ counts holomorphic Whitney disks connecting points of $\mathfrak{S}_\mathcal{H}$ (and depends on an additional choice of a basepoint $z \in \Sigma$), but we do not need to describe this in any detail here. Orienting the $\alpha$ and $\beta$ circles determines orientations of $\mathbb{T}_\alpha$ and $\mathbb{T}_\beta$. 
For $\mathbf{x} \in \mathfrak{S}_\mathcal{H}$, let $\eta(\mathbf{x})$ denote the local intersection number of $\mathbb{T}_\alpha$ and $\mathbb{T}_\beta$ at $\mathbf{x}$. If $\mathbf{x} = (x_1, \dots, x_g)$ with $x_i \in \alpha_i \cap \beta_{\sigma(i)}$, we have \begin{equation} \label{eq:grading} \eta(\mathbf{x}) = \sign(\sigma) \prod_{i=1}^g \eta(x_i). \end{equation} These orientations determine a $\mathbb{Z}/2$-valued grading $\gr$ on $\CF(Y)$ by the rule that $(-1)^{\gr(\mathbf{x})} = \eta(\mathbf{x})$; the differential shifts this grading by $1$. If $Y$ is a rational homology sphere, then with respect to this grading, we have $\chi( \CF(\mathcal{H})) = \pm \abs{H_1(Y;\mathbb{Z})}$, and we may choose the orientations such that the sign is positive. (See \cite[Section 5]{OSzProperties} for further details.) The proof of Theorem \ref{thm:main} is completed with the following: \begin{lemma} If $\mathcal{H}$ is a strong Heegaard diagram for a strong L-space $Y$, then the corresponding presentation for $\pi_1(Y)$ satisfies the hypotheses of Lemma \ref{lemma:matrix}. \end{lemma} \begin{proof} If $\rank( \CF(\mathcal{H})) = \chi(\CF(\mathcal{H})) = \abs{H_1(Y;\mathbb{Z})}$, then $\CF(\mathcal{H})$ is supported in a single grading, so $\eta(\mathbf{x}) = 1$ for all $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$. The result then follows quickly from equations \eqref{eq:epsilonij}, \eqref{eq:relation}, and \eqref{eq:grading}. Specifically, since $\mathfrak{S}_\mathcal{H} \ne \emptyset$, there exists $\sigma_0 \in S_g$ such that $\alpha_i \cap \beta_{\sigma_0(i)} \ne \emptyset$ for each $i$, and hence $\epsilon_{i, \sigma_0(i)} \ne 0$. If $\alpha_i$ and $\beta_j$ contain a point $x$ that is part of some $\mathbf{x} \in \mathfrak{S}_\mathcal{H}$, then every other point $x' \in \alpha_i \cap \beta_j$ has $\eta(x') = \eta(x)$, and hence $\epsilon_{i,j} = \eta(\mathbf{x}) \in \{+, -\}$. 
Finally, if $\mathbf{x} = (x_1, \dots, x_g)$ and $\mathbf{x}' = (x'_1, \dots, x'_g)$, with $x_i \in \alpha_i \cap \beta_{\sigma(i)}$ and $x'_i \in \alpha_i \cap \beta_{\sigma'(i)}$, then equation \eqref{eq:grading} and the fact that $\eta(\mathbf{x}) = \eta(\mathbf{x}')$ imply the final hypothesis. \end{proof} Finally, to prove Theorem \ref{thm:S3}, we use a simple graph-theoretic argument. Given a Heegaard diagram $\mathcal{H}$, let $\Gamma_\mathcal{H}$ denote the bipartite graph with vertex sets $\mathcal{A} = \{A_1, \dots, A_g\}$ and $\mathcal{B} = \{B_1, \dots, B_g\}$, with an edge connecting $A_i$ and $B_j$ for each intersection point in $\alpha_i \cap \beta_j$. The set $\mathfrak{S}_\mathcal{H}$ thus corresponds to the set of perfect matchings on $\Gamma_\mathcal{H}$. \begin{lemma} \label{lemma:destab} If $\mathcal{H}$ is a Heegaard diagram of genus $g>1$, and $\Gamma_\mathcal{H}$ contains a leaf (a $1$-valent vertex), then $Y_\mathcal{H}$ admits a Heegaard diagram $\mathcal{H}'$ of genus $g-1$ with a bijection between $\mathfrak{S}_\mathcal{H}$ and $\mathfrak{S}_{\mathcal{H}'}$. \end{lemma} \begin{proof} If $A_i$ is $1$-valent, then the curve $\alpha_i$ intersects one $\beta$ curve, say $\beta_j$, in a single point and is disjoint from the remaining $\beta$ curves. By a sequence of handleslides of the $\alpha$ curves, we may remove any intersections of $\beta_j$ with any $\alpha$ curve other than $\alpha_i$, without introducing or removing any other intersection points. We may then destabilize to obtain $\mathcal{H}'$. Since every element of $\mathfrak{S}_\mathcal{H}$ includes the unique point of $\alpha_i \cap \beta_j$, we have a bijection between $\mathfrak{S}_\mathcal{H}$ and $\mathfrak{S}_{\mathcal{H}'}$. (Indeed, $\Gamma_{\mathcal{H}'}$ is obtained from $\Gamma_\mathcal{H}$ by deleting $A_i$ and $B_j$, which does not change the number of perfect matchings.) The case where $B_i$ is $1$-valent is analogous. 
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:S3}] Let $\mathcal{H}$ be a strong Heegaard diagram for $Y$ whose genus $g$ is minimal among all strong Heegaard diagrams for $Y$. Suppose, toward a contradiction, that $g>1$. By Lemma \ref{lemma:destab}, $\Gamma_\mathcal{H}$ has no leaves. Since $Y$ is an integer homology sphere, $\rank(\CF(\mathcal{H})) = \abs{H_1(Y;\mathbb{Z})} = 1$, so $\Gamma_\mathcal{H}$ has a single perfect matching $\mu$. We direct the edges of $\Gamma_\mathcal{H}$ by the following rule: an edge points from $\mathcal{A}$ to $\mathcal{B}$ if it is included in $\mu$ and from $\mathcal{B}$ to $\mathcal{A}$ otherwise. Thus, every vertex in $\mathcal{A}$ has exactly one outgoing edge, and every vertex in $\mathcal{B}$ has exactly one incoming edge. We claim that $\Gamma_\mathcal{H}$ contains a directed cycle $\sigma$. To see this, let $\gamma$ be a maximal directed path in $\Gamma_\mathcal{H}$ that visits each vertex at most once, and let $v$ be the initial vertex of $\gamma$. If $v \in \mathcal{B}$, then there is a unique directed edge $e$ in $\Gamma_\mathcal{H}$ from some point $w \in \mathcal{A}$ to $v$, and $e$ is not included in $\gamma$. Likewise, if $v \in \mathcal{A}$, then there is an edge $e$ not in $\gamma$ connecting $v$ and some point $w \in \mathcal{B}$ since $v$ is not a leaf, and $e$ is directed from $w$ to $v$ since the only outgoing edge from $v$ is in $\gamma$. In either case, the maximality of $\gamma$ implies that $w \in \gamma$, which means that $\gamma \cup e$ contains a directed cycle $\sigma$. However, $(\mu \smallsetminus \sigma) \cup (\sigma \smallsetminus \mu)$ is then another perfect matching for $\Gamma_\mathcal{H}$, contradicting the uniqueness of $\mu$. Thus $g=1$, and the Heegaard diagram $\mathcal{H}$ is a torus with a single $\alpha$ curve and a single $\beta$ curve intersecting in a single point, which describes the standard genus-1 Heegaard splitting of $S^3$. \end{proof} \bibliographystyle{amsplain}
\section{Introduction}\label{one} Symmetries have not played as big a role in open quantum dynamics as in the complete quantum dynamics of a closed system. One reason may be that structures where we might see symmetries are changed. The Schr\"{o}dinger picture can look quite different for the open quantum dynamics of a subsystem.\cite{Nielsen2000,Breuer2002,AttalH,AttalM,AttalR,Alicki2007,Rivas2011} There may be no Schr\"{o}dinger equation, no wave function or state vector, because the state of the subsystem can be a mixed state described by a density matrix, not a pure state described by a state vector, even when the state of the entire system that contains the subsystem is a pure state described by a state vector and the dynamics of the entire system is described by a Schr\"{o}dinger equation. The state of the subsystem can become more or less pure or mixed as a pure state of the entire system changes in time. The time dependence of the density matrix for the subsystem may be described by a Gorini-Kossakowski-Sudarshan/Lindblad equation \cite{GKS,Lindblad} when necessary assumptions are satisfied or approximations are made. Completely positive maps of density matrices \cite{Kraus1971,Kraus1983} may be used when the density matrix for the entire system is a product of the density matrix for the subsystem and a density matrix for the rest of the entire system and, since that condition changes in time, maps with various different properties \cite{pechukas94,alicki95,pechukas95,stelmachovic01a,jordan04,jordan05,jordan05b,jordan08,Rybar,jordan09,jordan10} also may be brought in, further developing the basic picture \cite{ECGMathewsRau,jordan1961map,jordan1962map} of quantum dynamics as linear maps of density matrices. The open dynamics of a subsystem does not look so different in the Heisenberg picture. 
The operators that represent the physical quantities of the subsystem are changed in time the same as for any other physical quantities of the entire system. The dynamics for the subsystem is seen simply by looking only at changes for the physical quantities of the subsystem. We will take this point of view here to look at symmetries of open quantum dynamics that can be described by unitary symmetry operators. This lets us use a framework that is the same for the symmetries of the dynamics of the entire system. It helps us see which symmetries come in only with the open dynamics for the subsystem, not as symmetries of the dynamics of the entire system. It also helps us see which symmetries depend only on the dynamics and which depend also on correlations, or absence of correlations, between the subsystem and the rest of the entire system or on the state of the rest of the entire system. It gives us exact equations and lets us avoid concerns about approximations used to write equations of motion for density matrices in the Schr\"{o}dinger picture. In the quantum mechanics of an entire closed system, a unitary operator $U$ describes a symmetry for the quantum dynamics generated by a Hamiltonian operator $H$ if $U$ commutes with $H$. In terms of mean values, this means that \begin{equation} \label{mvbasic} \text{Tr} \left[We^{itH}U^{\dagger}QUe^{-itH}\right] = \text{Tr} \left[WU^{\dagger}e^{itH}Qe^{-itH}U\right] \end{equation} for density matrices $W$ for states and operators $Q$ for physical quantities, for any time $t$. In Section II, we will see that, conversely, if Eq.(\ref{mvbasic}) holds for all $W$ and $Q$, and the spectrum of $H$ has a lower bound, then $U$ commutes with $H$. We will see that a physically equivalent conclusion about the symmetry of the dynamics generated by $H$ is obtained without the assumption that the spectrum of $H$ has a lower bound. The mean values in Eq.(\ref{mvbasic}) are physically meaningful numbers. 
They include the mean values for projection operators, which are the probabilities the states assign to possible values of physical quantities. The statement about mean values made with Eq.(\ref{mvbasic}) describes the symmetry in physical terms. This is for a quantum system that is closed, which means there is no need to consider that it might interact with anything else. An open quantum system is a subsystem $S$ of a larger system and interacts with the subsystem $R$ that is the remainder, or rest of the larger system (which could be a reservoir). We will consider the open dynamics for $S$ that is the result of the dynamics generated by a Hamiltonian operator $H$ in the entire system of $S$ and $R$ combined. The mean values $\text{Tr} [WQ]$ for the operators $Q$ for the physical quantities of $S$ are changed to $\text{Tr} [We^{itH}Qe^{-itH}]$. The unitary operator $U$ describes a symmetry for the open dynamics of $S$ if Eq.(\ref{mvbasic}) holds just for the operators $Q$ for the physical quantities of $S$. We assume it holds for all the states of $S$. And the states of $R$? Correlations between $S$ and $R$? We will consider different possibilities. In Section II, we assume that Eq.(\ref{mvbasic}) holds for all the states of the entire system of $S$ and $R$. The symmetries of the open dynamics of $S$ apply to all the states of $S$ and do not depend on the states of $R$ or on correlations, or absence of correlations, between $S$ and $R$. We call these \textit{independent} symmetries. We find that, at least in simple examples, this often implies that $U$ commutes with $H$; then there are no independent symmetries for the open dynamics of $S$ beyond those that are symmetries for the entire dynamics of $S$ and $R$. In Section IV, we assume at first that there are no correlations between the states of $S$ and $R$ and we assume that Eq.(\ref{mvbasic}) holds for all the states of $S$ but only for particular states of $R$. 
Then, to complete Section IV, we admit correlations between $S$ and $R$ and assume that Eq.(\ref{mvbasic}) holds for all the states of $S$ but only for particular states of $R$ and particular correlations between the states of $S$ and $R$. In that section, the symmetries of the open dynamics of $S$ apply to all the states of $S$ but do depend on the state of $R$ and on correlations, or absence of correlations, between $S$ and $R$. We call these \textit{dependent} symmetries. This is a new kind of symmetry, different from that of the dynamics of an entire closed system. We consider only a few examples. Further results from collaboration are being reported separately.\cite{Seo} The symmetries are generally not related to constants of the motion for the open dynamics of the subsystem. This is discussed in Section III. We follow common physics practice and write a product of operators for separate systems, for example a product of Pauli matrices $\Sigma$ and $\Xi$ for the two qubits considered in Section II.A, simply as $\Sigma \Xi$, not $\Sigma \otimes \Xi$. Occasionally we insert a $\otimes$ for emphasis or clarity. \section{Independent symmetries}\label{two} Here we consider symmetries of the open dynamics of $S$ that do not depend on the states of $R$ or on correlations, or absence of correlations, between $S$ and $R$. Throughout this section, we assume that Eq.(\ref{mvbasic}) holds for all $W$ for all the states of the entire system of $S$ and $R$ combined, and for all the $Q$ for the physical quantities of $S$, which means that \begin{equation} \label{opbasic} e^{itH}U^{\dagger}QUe^{-itH} = U^{\dagger}e^{itH}Qe^{-itH}U \end{equation} for all the $Q$ for $S$, and for any time $t$. The expression of the symmetry is that the overall changes of the operators $Q$ for $S$ are the same whether the symmetry transformation is before or after the dynamics. 
Multiplying both sides of this Eq.(\ref{opbasic}) on the left by $U$ and on the right by $U^{\dagger}$ gives \begin{eqnarray} \label{opbasic2} e^{itH}Qe^{-itH} & = & Ue^{itH}U^{\dagger}QUe^{-itH}U^{\dagger} \nonumber \\ & = & e^{itUHU^{\dagger}}Qe^{-itUHU^{\dagger}}. \end{eqnarray} This expresses the symmetry in another way: the changes in time of the operators $Q$ for $S$ are the same for the dynamics generated by $UHU^{\dagger}$ as for the dynamics generated by $H$. The changes in time of the operators $Q$ for $S$ may be different in the dynamics generated by $U^{\dagger}HU$; this is shown by the example in Section II.E. The dynamics generated by $U^{\dagger}HU$ does give the same changes in time of the operators $Q$ for $S$ as the dynamics generated by $H$ if $U$ is an element of a group of unitary operators that represents a group of independent symmetries of the open dynamics, because then the requirement that the inverse of each element of the group also is an element of the group means that Eq.(\ref{opbasic}) holds when $U$ is replaced by $U^{\dagger}$. This is the case in the analog here of the familiar situation where a one-parameter group of symmetries is represented by a one-parameter group of unitary operators constructed from an Hermitian generator. Suppose $U$ is $e^{-i\theta J}$ with $J$ an Hermitian operator and $\theta $ a real parameter, and suppose that Eq.(\ref{opbasic}) holds for all real $\theta $. Then for a given $\theta $, it holds also for $-\theta $, so it holds for the given $\theta $ when $J$ is replaced by $-J$, which means that Eq.(\ref{opbasic}) holds with $U = e^{-i\theta J}$ replaced by $U^{\dagger} = e^{i\theta J}$. Suppose $U_1$ and $U_2$ are operators for $S$; they do not involve $R$. If $U_1$ and $U_2$ represent independent symmetries for the open dynamics generated by $H$, then $U_1U_2$ also does, because Eq.(\ref{opbasic}) for $U_1U_2$ is implied by its holding successively for $U_1$ and then $U_2$. 
Whether a set of independent symmetries represented by operators $U$ for $S$ generates a group depends on whether the $U^{\dagger}$ operators represent independent symmetries for $H$. The symmetries of the complete dynamics of an entire quantum system are simple in ways that the symmetries of the open dynamics of a subsystem are not. If $U$ describes a symmetry for the dynamics of an entire system, then Eq.(\ref{mvbasic}) and, equivalently, \begin{equation} \label{mvbasicS} \text{Tr} \left[Ue^{-itH}We^{itH}U^{\dagger}Q\right] = \text{Tr} \left[e^{-itH}UWU^{\dagger}e^{itH}Q\right] \end{equation} hold for all $Q$ and $W$ for the entire system, so Eq.(\ref{opbasic}) holds for all the $Q$ for the physical quantities of the entire system and \begin{equation} \label{opbasicS} Ue^{-itH}We^{itH}U^{\dagger} = e^{-itH}UWU^{\dagger}e^{itH} \end{equation} holds for all the density matrices $W$ for the states of the entire system. This shows that if $U$ describes a symmetry for an entire system, then $U^{\dagger}$ also does, whether $U$ commutes with $H$ or not, because the density matrices $W$ are linear combinations of projection operators that represent physical quantities, so Eq.(\ref{opbasic}) holds when $Q$ is replaced by a density matrix, and the operators $Q$ for the physical quantities are linear combinations of projection operators that are density matrices, so Eq.(\ref{opbasicS}) holds when $W$ is replaced by any operator that represents a physical quantity. Combining this with the observation just made that if $U_1$ and $U_2$ represent symmetries, then their product $U_1U_2$ also does, we see that the operators $U$ that describe symmetries for the dynamics of an entire system form a group. This is not generally true for the open dynamics of a subsystem. We have a Heisenberg picture of the symmetries in Eq.(\ref{opbasic}) and a Schr\"{o}dinger picture in Eq.(\ref{opbasicS}), for the dynamics of an entire system. 
The Schr\"{o}dinger picture is not so simple for the open dynamics of a subsystem. From Eq.(\ref{mvbasic}) or, equivalently, Eq.(\ref{mvbasicS}) holding for all $Q$ for $S$, we have \begin{equation} \label{mvopbasicS} \text{Tr}_R \left[Ue^{-itH}We^{itH}U^{\dagger}\right] = \text{Tr}_R \left[e^{-itH}UWU^{\dagger}e^{itH}\right]. \end{equation} Since the dynamics is for the entire system of $S$ and $R$ combined, a change in the density matrix for $S$ is generally obtained by calculating the change of the density matrix $W$ for the entire system and taking the trace for $R$ at the end. The density matrix for the entire system, not just the density matrix for $S$, is needed at the start. The only situation that gives our framework for symmetries a place in the Schr\"{o}dinger picture is when there are no initial correlations between $S$ and $R$ so the density matrix $W$ for the entire system is a product $\rho_S \rho_R$ of density matrices $\rho_S$ for $S$ and $\rho_R$ for $R$. Then the dynamics changes $\rho_S$ in time $t$ to \begin{equation} \label{Satt} \Phi (\rho_S) = \text{Tr}_R \left[e^{-itH}\rho_S \rho_Re^{itH}\right] \end{equation} with a map $\Phi $ that is completely positive. From Eq.(\ref{mvopbasicS}) for $U$ an operator $U_S$ that is only for $S$, we have \begin{eqnarray} \label{rhosym} U_S \Phi (\rho_S)U_S^{\dagger} & = & \text{Tr}_R \left[U_Se^{-itH}\rho_S \rho_Re^{itH}U_S^{\dagger}\right] \nonumber \\ & = & \text{Tr}_R \left[ e^{-itH}U_S \rho_S U_S^{\dagger} \rho_R e^{itH}\right] \nonumber \\ & = & \Phi (U_S\rho_S U_S^{\dagger}) \end{eqnarray} which expresses the symmetry in terms of the map of the density matrices for $S$. A different choice of $\rho_R$ in Eq.(\ref{Satt}) defines a different map $\Phi $, but it is not changed when $W$ is changed to $U_SWU_S^{\dagger}$. Other kinds of maps \cite{pechukas94,alicki95,pechukas95,stelmachovic01a,jordan04,jordan05b,jordan10} can be used when there are initial correlations between $S$ and $R$. 
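The symmetry relation in Eq.(\ref{rhosym}) is easy to check numerically. The following Python sketch is an illustrative example of ours, not drawn from the text: two qubits, with a Hamiltonian, a state of $R$, and an angle chosen so that $U_S = e^{-i\theta \Sigma_3}$ commutes with $H$; the map $\Phi$ of Eq.(\ref{Satt}) is computed by a partial trace over $R$.

```python
import numpy as np

# Illustrative choices (ours, not from the paper):
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expmh(A, t=1.0):
    """exp(-i t A) for Hermitian A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

def ptrace_R(rho):
    """Partial trace over the second qubit (the subsystem R)."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# H = Sigma_3 Xi_3 + 0.7 Xi_1 + 0.3 Sigma_3 commutes with U_S = e^{-i theta Sigma_3}
H = np.kron(sz, sz) + 0.7 * np.kron(I2, sx) + 0.3 * np.kron(sz, I2)
t, theta = 1.3, 0.8
Ut = expmh(H, t)                    # e^{-itH}
U_S = expmh(sz, theta)              # symmetry operator on S alone

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho_S = A @ A.conj().T
rho_S /= rho_S.trace()              # a random state of S
rho_R = np.diag([1.0, 0.0]).astype(complex)   # R starts in |0><0|

def Phi(r):
    """The reduced dynamics of S, as in Eq. (Satt)."""
    W = np.kron(r, rho_R)
    return ptrace_R(Ut @ W @ Ut.conj().T)

# Eq. (rhosym): U_S Phi(rho_S) U_S^dag == Phi(U_S rho_S U_S^dag)
lhs = U_S @ Phi(rho_S) @ U_S.conj().T
rhs = Phi(U_S @ rho_S @ U_S.conj().T)
print(np.allclose(lhs, rhs))   # True
```

Replacing $H$ by one that fails to commute with $U_S \otimes 1$ (for instance, adding a term $\Sigma_1 \Xi_1$) generally makes the two sides differ.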
Each map applies to a set of density matrices for $S$. Different maps are defined by different correlations between $S$ and $R$ as well as by different states of $R$. Changing $W$ to $U_SWU_S^{\dagger}$ can change the correlations and change the map, so Eq.(\ref{mvopbasicS}) is generally not an expression of the symmetry in terms of the map that describes the change in time of the density matrices for $S$. Symmetries that depend on correlations generally cannot readily be seen in the Schr\"{o}dinger picture. A symmetry of the open dynamics of $S$ can imply properties of the dynamics, for the entire system of $S$ and $R$, that are not implied by the symmetries of the dynamics of the entire system. This is shown by an example worked out in Section II.F. Let \vspace{-0.4cm} \begin{equation} \label{R(t)} R(t) = Ue^{-itH}U^{\dagger}e^{itH}. \end{equation} By multiplying both sides on the right by $e^{-itH}U$, we see that \begin{equation} \label{R(t)HU} Ue^{-itH} = R(t)e^{-itH}U \end{equation} for all $t$. Conversely, this Eq.(\ref{R(t)HU}) for all $t$ implies that Eq.(\ref{opbasic}) holds for all $t$ and all $Q$ for $S$ if the $R(t)$ commute with all the $Q$ for $S$. By multiplying both sides of Eq.(\ref{opbasic}) on the left by $Ue^{-itH}$ and on the right by $U^{\dagger}e^{itH}$ we see that the $R(t)$ for all $t$ commute with all the $Q$ for $S$. \\ \\ \textit{Theorem 1}. The $R(t)$ for all real $t$ commute with $e^{isH}Qe^{-isH}$ for all the $Q$ for $S$ and all real $s$. \\ \textit{Proof}. From Eq.(\ref{R(t)HU}) we get \begin{eqnarray} \label{prf1} R(s+t)e^{-i(s+t)H}U & = & Ue^{-i(s+t)H} = Ue^{-isH}e^{-itH} \nonumber \\ & = & R(s)e^{-isH}Ue^{-itH} = R(s)e^{-isH}R(t)e^{-itH}U \\ \label{prf2} R(s+t) & = & R(s)e^{-isH}R(t)e^{isH} \end{eqnarray} and we see that, since all the $Q$ for $S$ commute with both $R(s+t)$ and $R(s)$, they must commute with $e^{-isH}R(t)e^{isH}$ which means that every $R(t)$ commutes with $e^{isH}Qe^{-isH}$ for every $s$ for all the $Q$ for $S$. 
This completes the proof of Theorem 1. \\ \\ This theorem implies that the $R(t)$ commute with $[Q,H]$, $[[Q,H],H]$ and all the successive commutators $[[[Q,H],H] ...,H]$. The operators required to commute with the $R(t)$ as a result of Theorem 1 have to be worked out specifically case by case. In some cases, $H$ commutes with the $R(t)$. Then Eq.(\ref{prf2}) implies that \begin{equation} \label{Rst} R(s+t)=R(s)R(t). \end{equation} If the state-vector space is finite-dimensional, this brings us to the conclusion that $U$ commutes with $H$. \\ \\ \textit{Theorem 2}. For operators on a finite-dimensional space, if the $R(t)$ commute with $H$ then $U$ commutes with $H$. \\ \textit{Proof}. If the $R(t)$ commute with $H$, then Eq.(\ref{Rst}) holds and implies that $R(t)=e^{-itG}$ with $G$ a Hermitian operator that commutes with $H$, and Eq.(\ref{R(t)HU}) implies that \begin{equation} \label{UHU2} Ue^{-itH}U^{\dagger} = e^{-itG}e^{-itH} \end{equation} \vspace{-0.7cm} so \begin{equation} \label{H+G2} UHU^{\dagger} = H + G. \end{equation} Since the spectrum of $UHU^{\dagger}$ is the same as the spectrum of $H$, this says that the spectrum of $H$ is the same as the spectrum of $H + G$. For operators on a finite-dimensional space, this implies that $G$ is zero. This completes the proof of Theorem 2. \\ \\ In some cases, $R(t)$ commutes with all the operators for the entire system of $S$ and $R$. Then $R(t)$ must be a multiple of the identity operator for all $t$. If the spectrum of $H$ has a lower bound, this brings us to the conclusion, again, that $U$ commutes with $H$. \\ \\ \textit{Theorem 3}. If $R(t)$ is a multiple of the identity operator for all $t$, and the spectrum of $H$ has a lower bound, then $U$ commutes with $H$. \\ \textit{Proof}. 
Eq.(\ref{Rst}) implies that $R(t)=e^{-itr}$ with $r$ a real number, and then Eq.(\ref{R(t)HU}) implies that \begin{equation} \label{UHU} Ue^{-itH}U^{\dagger} = e^{-itr}e^{-itH} \end{equation} \vspace{-0.7cm} so \begin{equation} \label{H+r} UHU^{\dagger} = H + r. \end{equation} Since the spectrum of $UHU^{\dagger}$ is the same as the spectrum of $H$, this says that the spectrum of $H$ is the same as the spectrum of $H + r$, which implies that $r$ is zero if the spectrum of $H$ has a lower bound. This completes the proof of Theorem 3. \\ \\ An example worked out in Section II.G shows that the assumption that the $R(t)$ are multiples of the identity operator is necessary for this theorem. An example worked out in Section II.E shows that the assumption that the spectrum of $H$ has a lower bound also is necessary for this theorem. Now we can prove the statements that were left unproved in the Introduction. They are based on the assumption that Eq.(\ref{mvbasic}) holds for all $W$ and all $Q$ for the entire system of $S$ and $R$. This implies that Eq.(\ref{opbasic}) holds for all the $Q$ for the entire system of $S$ and $R$. Then the $R(t)$ commute with all the $Q$ for the entire system of $S$ and $R$, so the $R(t)$ must be multiples of the identity operator, and Theorem 3 implies that $U$ commutes with $H$ if the spectrum of $H$ has a lower bound. Without the assumption that the spectrum of $H$ has a lower bound, we still get \begin{equation} \label{rHU} Ue^{-itH} = e^{-itr}e^{-itH}U \end{equation} from Eq.(\ref{UHU}). This is a statement about the symmetry of the dynamics generated by $H$ that is physically equivalent to the statement that $U$ commutes with $H$; the phase factor $e^{-itr}$ makes no difference. This does not say, as the statement that $U$ commutes with $H$ does, that the values of the quantity represented by $H$ are the same as the values of the quantity represented by $UHU^{\dagger}$. 
Having completed the Introduction, we go back to independent symmetries of the open dynamics of $S$, and consider simple examples. \subsection{One and one qubits example} \label{qubit} Let $S$ be a qubit described by Pauli matrices $\Sigma_1$, $\Sigma_2$, $\Sigma_3$ and $R$ a qubit described by Pauli matrices $\Xi_1$, $\Xi_2$, $\Xi_3$. In this example, there are no independent symmetries of the open dynamics of $S$ that are not also symmetries of the entire dynamics of $S$ and $R$. To see this, we will work out the commutators $[\Sigma_j, H]$ and $[[\Sigma_j, H], H]$ for any $H$ and apply Theorems 1 and 2. We write the Hamiltonian as \begin{equation} \label{HPauli} H = \frac{1}{2}\sum_{j=1}^3 \alpha_j \Sigma_j + \frac{1}{2}\sum_{j=1}^3 \beta_j \Xi_j + \frac{1}{2}\sum_{j=1}^3 \gamma_j \Sigma_j \Xi_j \end{equation} with real numbers $\alpha_j $, $\beta_j $, $\gamma_j $. Any Hamiltonian can be put in this form \cite{zhang03a} by rotations of the $\Sigma_j$ and $\Xi_k $ that change $\sum_{j=1}^3 \sum_{k=1}^3 \gamma_{jk}\Sigma_j \Xi_k \; $ to $\; \sum_{j=1}^3 \gamma_j \Sigma_j \Xi_j $. The $R(t)$ commute with $\Sigma_1$, $\Sigma_2$, $\Sigma_3$. 
By calculating the three commutators $[\Sigma_j, H]$ and multiplying each by each of the two $\Sigma_k$ for $k \neq j$, we see that Theorem 1 implies that the $R(t)$ commute with \begin{equation} \gamma_2 \Sigma_3 \Xi_2 - \gamma_3 \Sigma_2 \Xi_3, \, \, \, \gamma_3 \Sigma_1 \Xi_3 - \gamma_1 \Sigma_3 \Xi_1, \, \, \, \gamma_1 \Sigma_2 \Xi_1 - \gamma_2 \Sigma_1 \Xi_2, \nonumber \end{equation} \vspace{-1.0cm} \begin{eqnarray} \label{qlist1} & \gamma_1 \Xi_1 - i\gamma_3 \Sigma_2 \Xi_3, \, \, \, & \gamma_1 \Xi_1 + i\gamma_2 \Sigma_3 \Xi_2, \nonumber \\ & \gamma_2 \Xi_2 - i\gamma_1 \Sigma_3 \Xi_1, \, \, \, & \gamma_2 \Xi_2 + i\gamma_3 \Sigma_1 \Xi_3, \nonumber \\ & \gamma_3 \Xi_3 - i\gamma_2 \Sigma_1 \Xi_2, \, \, \, & \gamma_3 \Xi_3 + i\gamma_1 \Sigma_2 \Xi_1, \end{eqnarray} which implies that the $R(t)$ commute with $\gamma_1 \Xi_1$, $\gamma_2 \Xi_2$, $\gamma_3 \Xi_3$. If any two of $\gamma_1 $, $\gamma_2 $, $\gamma_3 $ are not zero, the $R(t)$ must commute with $\Xi_1$, $\Xi_2$, $\Xi_3$ so, since they also commute with $\Sigma_1$, $\Sigma_2$, $\Sigma_3$, the $R(t)$ must be multiples of the identity operator and either Theorem 2 or Theorem 3 implies that $U$ commutes with $H$. If $\gamma_1 $, $\gamma_2 $, $\gamma_3 $ are all zero, there is no interaction between $S$ and $R$. Then the dynamics in $S$ is generated just by the Hamiltonian $\frac{1}{2}\sum_{j=1}^3 \alpha_j \Sigma_j$ for $S$ independently of the dynamics generated by $\frac{1}{2}\sum_{j=1}^3 \beta_j \Xi_j$ for $R$. We will not consider this case. We choose a representative of the three cases where just one of $\gamma_1 $, $\gamma_2 $, $\gamma_3 $ is not zero and consider the case where $\gamma_1 $ and $\gamma_2 $ are zero and $\gamma_3 $ is not zero. Then the $R(t)$ commute with $\Xi_3 $. 
Since they also commute with $\Sigma_1$, $\Sigma_2$, $\Sigma_3$, they must be functions of $\Xi_3 $, so \begin{equation} \label{a0a3} R(t) = a_0(t) + a_3(t)\Xi _3 \end{equation} with complex numbers $a_0(t)$ and $a_3(t)$ for each $t$. By calculating $[[\Sigma_1, H], H]$, for the $H$ with $\gamma_1 $ and $\gamma_2 $ zero, and multiplying by $\Sigma_2$, we see that Theorem 1 implies that each $R(t)$ commutes with $\beta _1 \Xi_2 - \beta _2 \Xi_1$. This implies that either the $a_3(t)$ in Eq.(\ref{a0a3}) is zero and $R(t)$ is a multiple of the identity operator, or $\beta_1 $ and $\beta_2 $ are zero. Either way, $R(t)$ commutes with $H$ for each $t$ and Theorem 2 implies that $U$ commutes with $H$. This completes the proof that in this example there are no independent symmetries of the open dynamics of $S$ that are not also symmetries of the entire dynamics of $S$ and $R$. \subsection{One and two qubits example} \label{qubit2} A minimal expansion of the preceding example will make room for different results. Let $S$ remain a single qubit described by Pauli matrices $\Sigma_1$, $\Sigma_2$, $\Sigma_3$ but now let $R$ be two qubits described by Pauli matrices $\Xi_1$, $\Xi_2$, $\Xi_3$ and $\Pi_1$, $\Pi_2$, $\Pi_3$. \subsubsection{No new symmetries} \label{none} We consider two different example Hamiltonians. The first is \begin{equation} \label{Hq3} H = \Sigma_3 \Xi_3 + \Sigma_3 \Pi_3. \end{equation} The $R(t)$ commute with $\Sigma_1$, $\Sigma_2$, $\Sigma_3$. By calculating the commutator $[\Sigma_2, H]$ and multiplying by $\Sigma_1$, we see that Theorem 1 implies that the $R(t)$ commute with $\Xi_3 + \Pi_3$, so the $R(t)$ commute with $H$ and Theorem 2 implies that $U$ commutes with $H$. \subsubsection{Many new symmetries} \label{many} Still looking for different results, we consider another Hamiltonian, \begin{equation} \label{Hq32} H = \Sigma_3 \Xi_3 + \Xi_3 \Pi_3. 
\end{equation} Now, for example, $U = \Pi_1$ describes one of many independent symmetries of the open dynamics of $S$ that are not also symmetries of the entire dynamics of $S$ and $R$. We can see very simply that Eq.(\ref{opbasic}) is satisfied for this $H$ and $U$ when $Q$ is a $\Sigma_j$. On the left side, one $\Pi_1$ commutes with $\Sigma_j$ and cancels out with the other $\Pi_1$, leaving \begin{equation} e^{it\Xi_3 \Pi_3}e^{it\Sigma_3 \Xi_3}\Sigma_j e^{-it\Sigma_3 \Xi_3}e^{-it\Xi_3 \Pi_3} = e^{it\Sigma_3 \Xi_3}\Sigma_j e^{-it\Sigma_3 \Xi_3} \end{equation} because the $\Xi_3 \Pi_3$ commutes with everything else. On the right side, one $\Pi_1$ anticommutes with the two $\Xi_3 \Pi_3$ and commutes with everything else as it moves through and cancels out with the other $\Pi_1$, leaving \begin{equation} e^{-it\Xi_3 \Pi_3}e^{it\Sigma_3 \Xi_3}\Sigma_j e^{-it\Sigma_3 \Xi_3}e^{it\Xi_3 \Pi_3} = e^{it\Sigma_3 \Xi_3}\Sigma_j e^{-it\Sigma_3 \Xi_3}. \end{equation} So, since $U = \Pi_1$ does not commute with $H$, it describes an independent symmetry of the open dynamics of $S$ that is not a symmetry of the entire dynamics of $S$ and $R$. \subsection{One and many angular momenta example} \label{JKLM} It can still happen that $U$ must commute with $H$ when $S$ and $R$ are both large systems and when the part of each that interacts with the other is small. Here is an example. Suppose there are operators $J_1$, $J_2$, $J_3$ for $S$ that have angular-momentum commutation relations \begin{equation} \label{JJcom} [J_j, J_k] = i\sum_{l=1}^3\epsilon_{jkl}J_l \, \, \, \text{for} \, \, \, j,k = 1, 2, 3. \end{equation} We do not assume that these operators involve all of $S$ or even a large part of $S$. We do assume that the state-vector space for $S$ is finite dimensional.
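The claim in Section II.B.2 that, for the Hamiltonian of Eq.(\ref{Hq32}), $U = \Pi_1$ satisfies Eq.(\ref{opbasic}) without commuting with $H$ can also be checked numerically. Here is a minimal sketch in plain Python (not part of the argument; the matrix conventions and the value of $t$ are arbitrary choices made for illustration):

```python
import math

# 2x2 building blocks: identity and the three Pauli matrices
I2 = [[1 + 0j, 0], [0, 1 + 0j]]
P1 = [[0, 1 + 0j], [1 + 0j, 0]]
P2 = [[0, -1j], [1j, 0]]
P3 = [[1 + 0j, 0], [0, -1 + 0j]]

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def kron3(A, B, C):
    return kron(kron(A, B), C)

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lin(a, A, b, B):
    n = len(A)
    return [[a * A[i][j] + b * B[i][j] for j in range(n)] for i in range(n)]

def close(A, B, tol=1e-12):
    n = len(A)
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(n) for j in range(n))

Sig = [kron3(P, I2, I2) for P in (P1, P2, P3)]   # the Sigma_j, operators for S
S3X3 = kron3(P3, P3, I2)                          # Sigma_3 Xi_3
X3P3 = kron3(I2, P3, P3)                          # Xi_3 Pi_3
H = lin(1, S3X3, 1, X3P3)                         # H of Eq. (Hq32)
U = kron3(I2, I2, P1)                             # U = Pi_1; Hermitian, so U^dagger = U
ID8 = kron3(I2, I2, I2)

def expitH(t):
    # S3X3 and X3P3 commute and square to the identity, so
    # e^{itH} = (cos t + i sin t S3X3)(cos t + i sin t X3P3)
    c, s = math.cos(t), 1j * math.sin(t)
    return mul(lin(c, ID8, s, S3X3), lin(c, ID8, s, X3P3))

t = 0.7
E, Em = expitH(t), expitH(-t)
for Q in Sig:
    lhs = mul(mul(E, mul(U, mul(Q, U))), Em)   # e^{itH} U^dagger Q U e^{-itH}
    rhs = mul(mul(U, mul(E, mul(Q, Em))), U)   # U^dagger e^{itH} Q e^{-itH} U
    assert close(lhs, rhs)                     # Eq. (opbasic) holds
assert not close(mul(U, H), mul(H, U))         # yet U does not commute with H
```

The factored form used for $e^{itH}$ relies on the two terms of $H$ commuting with each other and each squaring to the identity.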
Suppose $R$ has two sets of angular-momentum operators, $K_1$, $K_2$, $K_3$ and $L_1$, $L_2$, $L_3$, so each set has angular-momentum commutation relations the same as Eq.(\ref{JJcom}) for the $J$, and the $K$ commute with the $L$. We do not assume that the operators $K$ and $L$ describe all of $R$. We do assume that the state-vector space for $R$ is finite dimensional. We will see that the example can easily be extended by putting more angular-momentum operators in with the $K$ and $L$. We work with the operators \begin{equation} \label{Jpm} J_\pm = \frac{1}{\sqrt{2}}(J_1 \pm iJ_2), \end{equation} which have commutation relations \begin{equation} \label{JJpmcom} [J_+, J_-] = J_3, \, \, \, [J_3, J_\pm ] = \pm J_\pm , \end{equation} and with the same combinations and commutation relations for the $K$ and $L$. Let \begin{equation} \label{HJKL} H = J_+K_- + J_-K_+ + K_+L_- + K_-L_+. \end{equation} The $R(t)$ commute with $J_1$, $J_2$, $J_3$ and all the other operators for $S$. From the commutators \begin{equation} \label{HcomJKL} [J_\pm , H] = \pm J_3K_\pm , \end{equation} \begin{eqnarray} \label{HHcomJKL} [[J_\pm , H], H] & = & J_3J_\pm K_3 + J_3K_3L_\pm \nonumber \\ & & \pm J_+K_-K_\pm \mp J_-K_+K_\pm , \end{eqnarray} we see that Theorem 1 implies that the $R(t)$ commute with $K_+$, $K_-$, $K_3$ and $L_+$, $L_-$, $L_3$, so the $R(t)$ commute with $H$ and Theorem 2 implies that $U$ commutes with $H$. The same result may be obtained when more angular-momentum operators are added to the chain with the $K$ and $L$.
If $M_+$, $M_-$, $M_3$ are added and \begin{eqnarray} \label{HJKLM} H & = & J_+K_- + J_-K_+ + K_+L_- + K_-L_+ \nonumber \\ & & + \, L_+M_- + L_-M_+ , \end{eqnarray} then the commutators $[[[J_\pm , H], H], H]$ have terms $\pm J_3K_3L_3M_\pm $ and show that Theorem 1 implies that the $R(t)$ commute with $M_+$, $M_-$, $M_3$ as well as $K_+$, $K_-$, $K_3$ and $L_+$, $L_-$, $L_3$, so the $R(t)$ still commute with $H$ and again Theorem 2 implies that $U$ commutes with $H$. \subsection{One and one oscillators example with lower bound} \label{oscillatorw} Let $S$ be an oscillator described by raising and lowering operators $A$ and $A^{\dagger}$ and $R$ an oscillator described by raising and lowering operators $B$ and $B^{\dagger}$ so \begin{equation} \label{ABcom} [A,A^{\dagger}] = 1, \, \, \, \, [B,B^{\dagger}] = 1, \end{equation} and $A$ and $A^{\dagger}$ commute with $B$ and $B^{\dagger}$. The space of state vectors for $S$ and $R$ combined has orthonormal basis vectors $|m,n\rangle $ for $m=0,1,2,...$ and $n=0,1,2,...$ where \begin{eqnarray} \label{AmBn} A|m,n\rangle & = & m^\frac{1}{2} |m-1,n \rangle , \nonumber \\ A^{\dagger} |m,n\rangle & = & (m+1)^\frac{1}{2} |m+1,n \rangle , \nonumber \\ B|m,n\rangle & = & n^\frac{1}{2} |m,n-1 \rangle ,\nonumber \\ B^{\dagger} |m,n\rangle & = & (n+1)^\frac{1}{2} |m,n+1 \rangle . \end{eqnarray} Let \vspace{-0.7cm} \begin{eqnarray} \label{How} H & = & A^{\dagger}A + B^{\dagger}B + AB^{\dagger} + A^{\dagger}B \nonumber \\ & = & (A + B)^{\dagger}(A + B). \end{eqnarray} The $R(t)$ commute with $A$ and $A^{\dagger}$. From the commutators \begin{equation} \label{comos} [A,H] = A + B, \, \quad \, [H,A^{\dagger} ] = A^{\dagger} + B^{\dagger} , \end{equation} we see that Theorem 1 implies that the $R(t)$ commute with $B$ and $B^{\dagger}$. Then the $R(t)$ must be multiples of the identity operator and, since the spectrum of $H$ does have a lower bound, Theorem 3 implies that $U$ commutes with $H$. 
In this example, there are no independent symmetries of the open dynamics of $S$ that are not also symmetries of the entire dynamics of $S$ and $R$. \subsection{One and one oscillators example without lower bound} \label{oscillatorwo} If the terms without interactions are removed from the Hamiltonian, the spectrum of the Hamiltonian loses its lower bound. We get an example that shows that the assumption that the spectrum of $H$ has a lower bound is necessary for Theorem 3. Again, let $S$ be an oscillator described by $A$ and $A^{\dagger}$ and $R$ an oscillator described by $B$ and $B^{\dagger}$, with Eqs.(\ref{ABcom}) and (\ref{AmBn}), but now let \begin{equation} \label{Hoswo} H = AB^{\dagger} + A^{\dagger}B. \end{equation} From the commutators \begin{equation} \label{comoswo} [A,H] = B, \, \quad \, [H,A^{\dagger} ] = B^{\dagger} , \end{equation} we see that Theorem 1 implies that the $R(t)$ commute with $B$ and $B^{\dagger}$ and conclude that the $R(t)$ must be multiples of the identity operator, the same as in the preceding example. Then the $R(t)$ satisfy Eq.(\ref{Rst}), so $R(t)=e^{-itr}$ with $r$ a real number, and Eq.(\ref{R(t)HU}) implies that \begin{equation} \label{H+r2} UHU^{\dagger} = H + r. \end{equation} To find a $U$, let \begin{equation} \label{JK} J = \frac{1}{\sqrt{2}}(A + B), \, \quad \, K = \frac{1}{\sqrt{2}}(A - B). \end{equation} Then $J$ and $J^{\dagger}$ commute with $K$ and $K^{\dagger}$, and \begin{equation} \label{JKcom} [J,J^{\dagger}] = 1, \, \quad \, [K,K^{\dagger}] = 1, \end{equation} so $J$, $J^{\dagger}$ and $K$, $K^{\dagger}$ are oscillator raising and lowering operators like $A$, $A^{\dagger}$ and $B$, $B^{\dagger}$, and \begin{equation} \label{HJK} H = J^{\dagger}J - K^{\dagger}K.
\end{equation} There are orthonormal vectors $|j,k \rangle_{J,K} $ for $j=0,1,2,...$ and $k=0,1,2,...$ where \begin{equation} \label{JKbasis} J^{\dagger}J|j,k \rangle_{J,K} = j|j,k \rangle_{J,K} , \, \quad \, K^{\dagger}K|j,k \rangle_{J,K} = k|j,k \rangle_{J,K} . \end{equation} The space spanned by the vectors $|j,k \rangle_{J,K} $ is the same as the space spanned by the vectors $|m,n\rangle $ of Eqs.(\ref{AmBn}) because $A$ and $B$ are linear combinations of $J$ and $K$, so all the operators $A$, $A^{\dagger}$, $B$, $B^{\dagger}$, $J$, $J^{\dagger}$, $K$, $K^{\dagger}$ are defined on both spaces, and neither space has a proper subspace that is invariant for all the operators. The $|j,k \rangle_{J,K} $ are eigenvectors of $H$ with \begin{equation} \label{Heigen} H|j,k \rangle_{J,K} = (j - k)|j,k \rangle_{J,K} , \end{equation} so the spectrum of $H$ is all the integers, and the $r$ in Eq.(\ref{H+r2}) must be an integer. Let \begin{eqnarray} \label{Vjk} V^{\dagger}|j,k \rangle_{J,K} & = & |j,k-1 \rangle_{J,K} \, \, \, \text{for} \, \, \, k>j \nonumber \\ V^{\dagger}|j,k \rangle_{J,K} & = & |j+1,k \rangle_{J,K} \, \, \, \text{for} \, \, \, j\geq k. \end{eqnarray} This gives \begin{eqnarray} \label{HV} HV^{\dagger} & = & V^{\dagger}(H+1), \nonumber \\ VHV^{\dagger} & = & H+1, \end{eqnarray} and $U = V^r$ satisfies Eq.(\ref{H+r2}) for any positive integer $r$. This Eq.(\ref{H+r2}) shows that Eq.(\ref{opbasic2}) holds for all the $Q$ for $S$ and $R$ as well as for the $Q$ for $S$. Although $U$ does not commute with $H$, the symmetry described by $U$ holds for the entire system of $S$ and $R$ as well as for $S$. This example shows that the assumption that the spectrum of $H$ has a lower bound is necessary for Theorem 3. The $R(t)$ are multiples of the identity operator, but the spectrum of $H$ does not have a lower bound, and $U$ does not commute with $H$.
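The shift relation of Eq.(\ref{HV}) can be checked directly at the level of the labels $(j,k)$ of the basis vectors $|j,k \rangle_{J,K}$, on which $H$ has eigenvalue $j-k$. Here is a minimal check in plain Python (a sketch; the truncation of the labels to a finite range is an arbitrary choice made for illustration):

```python
def vdag(j, k):
    # action of V^dagger on the label (j, k), following Eq. (Vjk)
    return (j, k - 1) if k > j else (j + 1, k)

def energy(j, k):
    # eigenvalue of H = J^dagger J - K^dagger K on |j, k>
    return j - k

# H V^dagger = V^dagger (H + 1): V^dagger raises the eigenvalue by exactly 1
assert all(energy(*vdag(j, k)) == energy(j, k) + 1
           for j in range(20) for k in range(20))

# V^dagger maps distinct basis vectors to distinct basis vectors,
# as the isometry defined in Eq. (Vjk) must
images = {vdag(j, k) for j in range(20) for k in range(20)}
assert len(images) == 20 * 20
```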
\subsection{Two oscillators and anything example} \label{oscillator2any} We can expand the examples of Sections II.D and II.E to show that a symmetry of the open dynamics of $S$ can imply properties of the dynamics, for the entire system of $S$ and $R$, that are not implied by the symmetries of the dynamics of the entire system. Let $S$ be two oscillators described by $A$, $A^{\dagger}$ and $B$, $B^{\dagger}$, with Eqs.(\ref{ABcom}) and (\ref{AmBn}), and let \begin{eqnarray} \label{HJKM} H & = & (AB^{\dagger} + A^{\dagger}B)M \nonumber \\ & = & (J^{\dagger}J - K^{\dagger}K)M \end{eqnarray} as in Eqs.(\ref{Hoswo}) and (\ref{JK})-(\ref{HJK}). We do not assume that $R$ is any particular system. We only assume that $M$ is an Hermitian operator for $R$ that has a discrete spectrum of eigenvalues $m$ that label basis vectors in the space of state vectors for $R$. For the entire system of $S$ and $R$ combined, there are orthonormal basis vectors $|j,k,m \rangle $ for $j=0,1,2,...$ and $k=0,1,2,...$, similar to those of Eqs.(\ref{JKbasis}) and (\ref{Heigen}), now with $m$ ranging over the eigenvalues of $M$, and \begin{eqnarray} \label{JKMbasis} J^{\dagger}J|j,k,m \rangle & = & j|j,k,m \rangle \nonumber \\ K^{\dagger}K|j,k,m \rangle & = & k|j,k,m \rangle \nonumber \\ M|j,k,m \rangle & = & m|j,k,m \rangle \nonumber \\ H|j,k,m \rangle & = & (j - k)m|j,k,m \rangle . \end{eqnarray} As in Eq.(\ref{Vjk}), let \begin{eqnarray} \label{Vjkm} V^{\dagger}|j,k,m \rangle & = & |j,k-1,m \rangle \, \, \, \text{for} \, \, \, k>j \nonumber \\ V^{\dagger}|j,k,m \rangle & = & |j+1,k,m \rangle \, \, \, \text{for} \, \, \, j\geq k. \end{eqnarray} This is an operator for $S$; it does not depend on $R$. It gives \begin{eqnarray} \label{HVM} HV^{\dagger} & = & V^{\dagger}(H+M), \nonumber \\ VHV^{\dagger} & = & H+M.
\end{eqnarray} Since $M$ commutes with $H$ and with all the operators $Q$ for $S$, we have \begin{eqnarray} \label{Vbasic2} Ve^{itH}V^{\dagger}QVe^{-itH}V^{\dagger} & = & e^{it(H+M)}Qe^{-it(H+M)} \nonumber \\ & = & e^{itH}Qe^{-itH} \end{eqnarray} so Eq.(\ref{opbasic}) holds when $U$ is $V$, or when $U$ is $V^r$ for any positive integer $r$. This is a symmetry of the open dynamics of $S$ that is not a symmetry of the dynamics for the entire system of $S$ and $R$; the $V$ and $U$ here do not commute with $H$. Nevertheless, this symmetry implies a property of the dynamics, for the entire system of $S$ and $R$, that is not implied by the symmetries of the dynamics for the entire system that are described by operators for $S$. If $U$ is an operator for $S$ that does commute with $H$, then $U$ commutes with $J^{\dagger}J - K^{\dagger}K$ and with \begin{equation} \label{Hf} H_f = f(J^{\dagger}J - K^{\dagger}K)M \end{equation} for any function $f$ of $J^{\dagger}J - K^{\dagger}K$, so $U$ describes a symmetry for the dynamics generated by $H_f$ as well as for the dynamics generated by the original $H$ where $f(j-k)$ is $j-k$. Knowing all these symmetries for the dynamics of the entire system provides no knowledge of $f$. Knowing that the $V$ of Eq.(\ref{Vjkm}) describes a symmetry of the open dynamics of $S$ makes it clear that the open dynamics is generated by the original $H$ where $f(j-k)$ is $j-k$. 
We can see this because, from Eq.(\ref{Vjkm}), \begin{equation} \label{HVf} H_fV^{\dagger}|j,k,m \rangle = f(j-k+1)mV^{\dagger}|j,k,m \rangle \end{equation} so \begin{equation} \label{VHVf} VH_fV^{\dagger} = f(J^{\dagger}J - K^{\dagger}K+1)M \end{equation} and Eq.(\ref{opbasic2}) implies that if $V$ describes an independent symmetry for the open dynamics of $S$, then \begin{equation} \label{QbracH} [Q,H_f] = [Q,VH_fV^{\dagger}] \end{equation} and \begin{equation} \label{Qbracf} [Q, f(J^{\dagger}J - K^{\dagger}K+1) - f(J^{\dagger}J - K^{\dagger}K)] = 0 \end{equation} for all the $Q$ for $S$, so the difference between $f(J^{\dagger}J - K^{\dagger}K+1)$ and $f(J^{\dagger}J - K^{\dagger}K)$ is a multiple of the identity operator for $S$; the difference $f(n+1) - f(n)$ is the same number for every integer $n$. This means that $f$ can be taken to have a constant slope. Multiplying $M$ by this constant and dividing $f$ by it changes the slope of $f$ to $1$ and gives \begin{equation} \label{fwewant} f(J^{\dagger}J - K^{\dagger}K) = J^{\dagger}J - K^{\dagger}K + C \end{equation} with $C$ a constant, so $H_f$ differs from the original $H$ where $f(j-k)$ is $j-k$ only by the operator $CM$ which commutes with $H$ and with all the operators $Q$ for $S$ and does not change the open dynamics for $S$. \subsection{One and two oscillators example} \label{oscillator12} Expansion of the example of Section II.D to one oscillator for $S$ and two oscillators for $R$ will give an example of an independent symmetry of the open dynamics of $S$ that is not a symmetry of the entire dynamics of $S$ and $R$. It will show that the assumption that the $R(t)$ are multiples of the identity operator is necessary for Theorem 3. Let $S$ be an oscillator described by raising and lowering operators $A$ and $A^{\dagger}$ as in Section II.D and let $R$ be two oscillators described by raising and lowering operators $B$ and $B^{\dagger}$ and $C$ and $C^{\dagger}$ similar to those in Section II.D.
Let \begin{equation} \label{J} J = \frac{1}{\sqrt{3}}(A + B + C), \nonumber \end{equation} \begin{equation} \label{K} K = \frac{1}{\sqrt{2}}(B - C), \nonumber \end{equation} \begin{equation} \label{L} L = \frac{1}{\sqrt{6}}(2A - B - C), \end{equation} \begin{eqnarray} H & = & A^{\dagger}A + B^{\dagger}B + C^{\dagger}C + 3/2 \nonumber \\ & & + AB^{\dagger} + A^{\dagger}B + BC^{\dagger} + B^{\dagger}C + CA^{\dagger} + C^{\dagger}A \nonumber \\ & = & 3J^{\dagger}J + 3/2. \end{eqnarray} Then $J$ and $J^{\dagger}$ commute with $K$ and $K^{\dagger}$ and with $L$ and $L^{\dagger}$, and $K$ and $K^{\dagger}$ commute with $L$ and $L^{\dagger}$, and \begin{equation} \label{JKLcom} [J,J^{\dagger}] = 1, \, \quad \, [K,K^{\dagger}] = 1, \, \quad \, [L,L^{\dagger}] = 1. \end{equation} The $R(t)$ commute with $A$ and $A^{\dagger}$. From the commutators \begin{equation} \label{comos2} [A,H] = A + B + C, \, \quad \, [H,A^{\dagger} ] = A^{\dagger} + B^{\dagger} + C^{\dagger}, \end{equation} we see that Theorem 1 implies that the $R(t)$ commute with $B + C$ and $B^{\dagger} + C^{\dagger}$, so the $R(t)$ commute with $J$, $J^{\dagger}$, $L$, $L^{\dagger}$ and $H$. Then the $R(t)$ must be functions of $K$ and $K^{\dagger}$. Since the $R(t)$ commute with $H$, they satisfy Eq.(\ref{Rst}), so $R(t)=e^{-itG}$ with $G$ a function of $K$ and $K^{\dagger}$.
To find a $U$ that describes an independent symmetry, we work with the orthonormal basis vectors $|j, k, l \rangle $ for $j=0,1,2,...$, $k=0,1,2,...$ and $l=0,1,2,...$ where \begin{eqnarray} \label{JjKkLl} J|j,k,l\rangle & = & j^\frac{1}{2} |j-1,k,l \rangle , \nonumber \\ J^{\dagger} |j,k,l\rangle & = & (j+1)^\frac{1}{2} |j+1,k,l \rangle , \nonumber \\ K|j,k,l\rangle & = & k^\frac{1}{2} |j,k-1,l \rangle , \nonumber \\ K^{\dagger} |j,k,l\rangle & = & (k+1)^\frac{1}{2} |j,k+1,l \rangle , \nonumber \\ L|j,k,l\rangle & = & l^\frac{1}{2} |j,k,l-1 \rangle , \nonumber \\ L^{\dagger} |j,k,l\rangle & = & (l+1)^\frac{1}{2} |j,k,l+1 \rangle , \nonumber \\ J^{\dagger}J|j,k,l \rangle & = & j|j,k,l \rangle , \nonumber \\ K^{\dagger}K|j,k,l \rangle & = & k|j,k,l \rangle , \nonumber \\ L^{\dagger}L|j,k,l \rangle & = & l|j,k,l \rangle , \nonumber \\ H|j,k,l \rangle & = & (3j + 3/2)|j,k,l \rangle . \end{eqnarray} One choice for $U$ is to let \begin{eqnarray} \label{JjKkLlGU} G|j,k=0,l\rangle & = & 3|j,k=0,l \rangle , \nonumber \\ G|j,k,l\rangle & = & 0 \, \, \, \text{for}\, \, \, k \neq 0, \nonumber \\ U^{\dagger} |j,k=0,l\rangle & = & |j+1,k=1,l \rangle , \nonumber \\ U^{\dagger} |j,k=1,l\rangle & = & |j,k=0,l \rangle , \nonumber \\ U^{\dagger} |j=0,k,l\rangle & = & |j=0,k-1,l \rangle \, \, \, \text{for}\, \, \, k>1 , \nonumber \\ U^{\dagger} |j,k,l\rangle & = & |j,k,l \rangle \, \, \, \text{for}\, \, \, j>0, \, \, k>1. \end{eqnarray} This gives \begin{eqnarray} \label{HG} HU^{\dagger} & = & U^{\dagger}(H + G) \nonumber \\ UHU^{\dagger} & = & H + G \nonumber \\ Ue^{-itH}U^{\dagger} & = & e^{-itG}e^{-itH} \nonumber \\ Ue^{-itH} & = & e^{-itG}e^{-itH}U \end{eqnarray} which gives Eq.(\ref{R(t)HU}), which implies that Eq.(\ref{opbasic}) holds for all $t$ and all the $Q$ for $S$ because the $R(t)$ do commute with all the $Q$ for $S$. The spectrum of $H$ has a lower bound, and $H$ commutes with $G$ and the $R(t)$, but $U$ does not commute with $H$ or $G$.
This example shows that the assumption that the $R(t)$ are multiples of the identity operator is necessary for Theorem 3. The arbitrary and awkward character of this example shows that other choices of $U$ would work as well. We know that the changes in time of the operators $Q$ for $S$ are the same for the dynamics generated by $UHU^{\dagger}$ as for the dynamics generated by $H$. And for $U^{\dagger}HU\, $? In this example we have \begin{equation} \label{UdHU} U^{\dagger}HU = H - U^{\dagger}GU, \end{equation} \begin{eqnarray} \label{jkUGU} U^{\dagger} GU|j,k=1,l\rangle & = & 3|j,k=1,l \rangle \, \, \, \text{for}\, \, \, j \neq 0, \nonumber \\ U^{\dagger} GU|j,k,l\rangle & = & 0 \, \, \, \text{for}\, \, \, k \neq 1 \, \, \, \text{or}\, \, \, j = 0, \end{eqnarray} and $A = \sqrt{\frac{1}{3}}J + \sqrt{\frac{2}{3}}L$. We see that $U^{\dagger}GU$ is a function of $J^{\dagger}J$ and $K^{\dagger}K$ and commutes with $L$ and $L^{\dagger}$, and \begin{equation} \label{AcomUGU} [A, \; U^{\dagger} GU]|j=1,k=1,l\rangle = \sqrt{3}|j=0,k=1,l \rangle , \nonumber \end{equation} \begin{equation} \label{AdcomUGU} [A^{\dagger}, \; U^{\dagger} GU]|j=0,k=1,l\rangle = -\sqrt{3}|j=1,k=1,l \rangle , \end{equation} so $[A, \; U^{\dagger} GU]$ and $[A^{\dagger}, \; U^{\dagger} GU]$ are not zero. The changes in time of the operators $A$ and $A^{\dagger}$ for $S$ are not the same in the dynamics generated by $U^{\dagger}HU$ as in the dynamics generated by $H$. \section{Constants of the motion}\label{three} If we think about constants of the motion the same way we think about independent symmetries, and consider a statement that an operator $Q$ for $S$ represents a quantity that is a constant of the motion for the open dynamics of $S$, we could say that the statement should hold for all possible initial states of $S$ and for any state of $R$ and any correlations, or absence of correlations, between $S$ and $R$.
Just for the mean value to be constant we would have \begin{equation} \label{constant} \text{Tr} \left[We^{itH}Qe^{-itH}\right] = \text{Tr} \left[ WQ\right] \end{equation} for density matrices $W$ for all the states of the entire system of $S$ and $R$ combined, which implies that \begin{equation} \label{constan} e^{itH}Qe^{-itH} = Q. \end{equation} In particular, if $Q$ is a unitary symmetry operator, or an Hermitian operator that is a generator of a one-parameter group of symmetry operators, for independent symmetries, we would say that $Q$ can represent a constant of the motion for the open dynamics of $S$ only if $Q$ commutes with $H$, which means that it describes a symmetry for the dynamics of the entire system of $S$ and $R$ combined. On the other hand, if we think about constants of the motion the same way we think about dependent symmetries, we could say that an operator $Q$ for $S$ represents a quantity that is a constant of the motion for the open dynamics of $S$ if it is constant for all possible initial states of $S$ but only for particular states of $R$ or correlations, or absence of correlations, between $S$ and $R$. We could say it is a \textit{dependent constant of the motion}. We will see an example in Section IV.A of an Hermitian operator that is a generator of a one-parameter group of unitary operators that describe dependent symmetries but does not represent a dependent constant of the motion. More examples are being considered \cite{Seo}. \section{Dependent symmetries}\label{four} Now we consider dependent symmetries. At first, we assume there are no correlations between $S$ and $R$ and consider symmetries of the open dynamics of $S$ that depend on the state of $R$. We assume that the density matrix for $S$ and $R$ is a product $W=\rho_S\rho_R$ with $\rho_S$ a density matrix for $S$ and $\rho_R$ a density matrix for $R$.
The mean value for a product of operators $A$ for $S$ and $B$ for $R$ is \begin{equation} \label{mvAB} \langle AB \rangle = \text{Tr} \left[WAB\right] = \text{Tr}_S \left[\rho_S A\right]\text{Tr}_R \left[\rho_R B\right] = \langle A \rangle \langle B \rangle . \end{equation} We assume that Eq.(\ref{mvbasic}) holds for all $\rho_S$ but only for particular $\rho_R$. The symmetries of the open dynamics of $S$ apply to all the states of $S$ but depend on the state of $R$. Then \begin{equation} \label{opbasicdep} \text{Tr}_R \left[\rho_R e^{itH}U^{\dagger}QUe^{-itH}\right] = \text{Tr}_R \left[\rho_R U^{\dagger}e^{itH}Qe^{-itH}U\right] \end{equation} for all the $Q$ for $S$, and for any time $t$, but only for particular $\rho_R$. The changes for $S$ are the same whether the symmetry transformation is before or after the dynamics. Suppose $U$ is an operator $U_S$ that is just for $S$; it does not involve $R$. Then multiplying both sides of Eq.(\ref{opbasicdep}) on the left by $U$ and on the right by $U^{\dagger}$ gives \begin{eqnarray} \label{opbasicdep2} \text{Tr}_R \left[\rho_R e^{itH}Qe^{-itH}\right] & = & \text{Tr}_R \left[\rho_R U_Se^{itH}U_S^{\dagger}QU_Se^{-itH}U_S^{\dagger}\right] \nonumber \\ & = & \text{Tr}_R \left[\rho_R e^{itU_SHU_S^{\dagger}}Qe^{-itU_SHU_S^{\dagger}}\right]. \end{eqnarray} The changes in time for $S$ are the same for the dynamics generated by $U_SHU_S^{\dagger}$ as for the dynamics generated by $H$. We know, from looking at independent symmetries, that generally the changes in time for $S$ may be different in the dynamics generated by $U^{\dagger}HU$ than in the dynamics generated by $H$. They are not different when $U$ is an operator $U_R$ that is just for $R$. 
Then $U$ and $U^{\dagger}$ cancel out of the left side of Eq.(\ref{opbasicdep}) and can be inserted on the right side to give \begin{eqnarray} \label{opbasicdep3} \text{Tr}_R \left[\rho_R e^{itH}Qe^{-itH}\right] & = & \text{Tr}_R \left[\rho_R U_R^{\dagger}e^{itH}U_RQU_R^{\dagger}e^{-itH}U_R \right] \nonumber \\ & = & \text{Tr}_R \left[\rho_R e^{itU_R^{\dagger}HU_R}Qe^{-itU_R^{\dagger}HU_R}\right], \end{eqnarray} showing that the changes in time for $S$ are the same in the dynamics generated by $U_R^{\dagger}HU_R$ as in the dynamics generated by $H$. Also, when $U$ is an operator $U_R$ that is just for $R$, canceling $U$ and $U^{\dagger}$ out of the left side of Eq.(\ref{opbasicdep}) and moving $U$ to an equivalent position in the trace on the right side gives \begin{equation} \label{opbasicrho} \text{Tr}_R \left[\rho_R e^{itH}Qe^{-itH}\right] = \text{Tr}_R \left[U_R\rho_R U_R^{\dagger}e^{itH}Qe^{-itH}\right]. \end{equation} The changes in time for $S$ are the same for the state represented by $U_R\rho_R U_R^{\dagger}$ as for the state represented by $\rho_R $. Suppose $U_1$ and $U_2$ are operators for $S$; they do not involve $R$. If $U_1$ and $U_2$ represent dependent symmetries for the open dynamics generated by $H$ and for the state of $R$ represented by $\rho_R $, then so does $U_1U_2$, because Eq.(\ref{opbasicdep}) for $U_1U_2$ is implied by its holding successively for $U_1$ and then $U_2$. Whether a set of dependent symmetries represented by operators $U$ for $S$ generates a group depends on whether the $U^{\dagger}$ operators represent dependent symmetries for the same $H$ and the same state of $R$. Properties of the open dynamics often can be seen from a symmetry without working with the dynamics. Suppose $U$ is again an operator $U_S$ that is just for $S$; it does not involve $R$. 
If $U_S$ represents a dependent symmetry for the open dynamics generated by $H$ and for the state of $R$ represented by $\rho_R $, and if there are operators $Q$ and $Q_k$ for $S$ and numbers $d_k$ such that \begin{equation} \label{Qkcomb} U_S^{\dagger}QU_S = \sum_k d_k Q_k, \end{equation} then Eq.(\ref{opbasicdep}) implies that \begin{equation} \label{Qkcombt} U_S^{\dagger}\text{Tr}_R\left[\rho_R e^{itH}Qe^{-itH}\right]U_S = \sum_k d_k \text{Tr}_R \left[\rho_R e^{itH}Q_ke^{-itH}\right]. \end{equation} In particular, if $Q$ commutes with $U_S$ then $\text{Tr}_R[\rho_R e^{itH}Qe^{-itH}]$ commutes with $U_S$; and if $Q$ anticommutes with $U_S$ then $\text{Tr}_R[\rho_R e^{itH}Qe^{-itH}]$ anticommutes with $U_S$. An example in Section IV.A shows how this can reduce what needs to be done to work out the dynamics. The unitary symmetry operators, and the Hermitian operators that are generators for one-parameter groups of symmetry operators, generally do not represent constants of the motion for the open dynamics of $S$. This also is seen in the example of Section IV.A. To consider symmetries that also depend on correlations between $S$ and $R$, we just assume that Eq.(\ref{mvbasic}) holds for density matrices $W$ that describe all the states of $S$ but only particular correlations between $S$ and $R$ and particular states of $R$. The changes for $S$ made by $U$ and the dynamics generated by $H$ are seen in the changes of the mean values of basic operators $Q$ for $S$ calculated with those $W$. This is illustrated in the example that follows. \subsection{One and one qubits example} \label{qubitdep} Let $S$ be a qubit described by Pauli matrices $\Sigma_1$, $\Sigma_2$, $\Sigma_3$ and $R$ a qubit described by Pauli matrices $\Xi_1$, $\Xi_2$, $\Xi_3$, as in Section II.A. Let \begin{equation} \label{Hqd} H = \frac{1}{2} \left[ \gamma_1 \Sigma_1 \Xi_1 + \gamma_2 \Sigma_2 \Xi_2 + \gamma_3 \Sigma_3 \Xi_3 \right]. 
\end{equation} The three matrices $\Sigma_1 \Xi_1 $, $\Sigma_2 \Xi_2 $, $\Sigma_3 \Xi_3 $ commute with each other. (The different $\Sigma_j $ anticommute and the different $\Xi_j $ anticommute, so the different $\Sigma_j \Xi_j $ commute.) This allows us to easily compute \begin{eqnarray} \label{US1} e^{itH}\Sigma_1 e^{-itH} & = & \Sigma_1 e^{-it \gamma_2 \Sigma_2 \Xi_2 } e^{-it \gamma_3 \Sigma_3 \Xi_3 } \nonumber \\ &=& \Sigma_1 \cos \gamma_2 t\cos \gamma_3 t + \underline{\Xi_1 \sin \gamma_2 t\sin \gamma_3 t} \nonumber \\ && - \Sigma_2 \Xi_3 \cos \gamma_2 t\sin \gamma_3 t + \underline{\Sigma_3 \Xi_2 \sin \gamma_2 t\cos \gamma_3 t} \end{eqnarray} using the algebra of Pauli matrices, and similarly \begin{eqnarray} \label{US2} e^{itH}\Sigma_2 e^{-itH} &=& \Sigma_2 \cos \gamma_3 t\cos \gamma_1 t + \underline{\Xi_2 \sin \gamma_3 t\sin \gamma_1 t} \nonumber \\ && - \underline{\Sigma_3 \Xi_1 \cos \gamma_3 t\sin \gamma_1 t} + \Sigma_1 \Xi_3 \sin \gamma_3 t\cos \gamma_1 t, \end{eqnarray} \begin{eqnarray} \label{US3} e^{itH}\Sigma_3 e^{-itH} &=& \Sigma_3 \cos \gamma_1 t\cos \gamma_2 t + \Xi_3 \sin \gamma_1 t\sin \gamma_2 t \nonumber \\ && - \underline{\Sigma_1 \Xi_2 \cos \gamma_1 t\sin \gamma_2 t} + \underline{\Sigma_2 \Xi_1 \sin \gamma_1 t\cos \gamma_2 t}. \end{eqnarray} Let $U=\Sigma_3 $. Then Eq.(\ref{opbasicdep}) holds when $Q$ is $\Sigma_1 $, $\Sigma_2 $ or $\Sigma_3 $ if $\text{Tr}_R $ of each underlined term of Eqs.(\ref{US1})-(\ref{US3}) is zero, because $U$ and $U^{\dagger}$ cancel out of the left side of Eq.(\ref{opbasicdep}) after they change the sign of the whole left side when $Q$ is $\Sigma_1 $ or $\Sigma_2 $ and make no change when $Q$ is $\Sigma_3 $, and $U$ and $U^{\dagger}$ cancel out of the right side after they change the sign of each $\Sigma_1 $ and $\Sigma_2 $. 
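Eq.(\ref{US1}) is straightforward to check numerically. The sketch below (Python with NumPy; the couplings $\gamma_j$ and time $t$ are arbitrary example values, not taken from the text) represents $\Sigma_j$ as $\sigma_j \otimes I$ and $\Xi_j$ as $I \otimes \sigma_j$ and compares the two sides of Eq.(\ref{US1}):

```python
import numpy as np

# Pauli matrices; Sigma_j = sigma_j ⊗ I acts on qubit S, Xi_j = I ⊗ sigma_j on R.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
Sig = [np.kron(p, I2) for p in (s1, s2, s3)]
Xi = [np.kron(I2, p) for p in (s1, s2, s3)]

g = (0.3, 0.7, 1.1)                      # example couplings gamma_1, gamma_2, gamma_3
H = 0.5 * sum(gj * S @ X for gj, S, X in zip(g, Sig, Xi))

t = 0.9
w, V = np.linalg.eigh(H)                 # H is Hermitian: exponentiate its spectrum
U = (V * np.exp(1j * t * w)) @ V.conj().T

lhs = U @ Sig[0] @ U.conj().T            # e^{itH} Sigma_1 e^{-itH}
c2, s2t = np.cos(g[1] * t), np.sin(g[1] * t)
c3, s3t = np.cos(g[2] * t), np.sin(g[2] * t)
rhs = (Sig[0] * c2 * c3 + Xi[0] * s2t * s3t
       - Sig[1] @ Xi[2] * c2 * s3t + Sig[2] @ Xi[1] * s2t * c3)
print(np.allclose(lhs, rhs))             # True
```

The same comparison verifies Eqs.(\ref{US2}) and (\ref{US3}) after cyclically permuting the indices.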
The alternative Eq.(\ref{opbasicdep2}) also holds then, because changing $H$ to $UHU^{\dagger}$ just changes the signs of $\gamma_1$ and $\gamma_2$ and the underlined terms are the terms that change sign when this is done. Either way, we see that $U=\Sigma_3 $ describes a symmetry of the open dynamics of $S$ if each underlined term is zero when $\Xi_1$, $\Xi_2$, $\Xi_3$ are replaced by $\langle \Xi_1 \rangle $, $\langle \Xi_2 \rangle $, $\langle \Xi_3 \rangle $. There are various ways this can happen. Either \begin{equation} \label{ul1} \gamma_1 = 0 \, \, \, \text{and} \, \, \, \gamma_2 = 0, \end{equation} \begin{equation} \label{ul2} \text{or} \, \, \, \gamma_2 = 0, \, \, \gamma_3 = 0 \, \, \, \text{and} \, \, \, \langle \Xi_1 \rangle = 0, \end{equation} \begin{equation} \label{ul3} \text{or} \, \, \, \gamma_3 = 0, \, \, \gamma_1 = 0 \, \, \, \text{and} \, \, \, \langle \Xi_2 \rangle = 0, \end{equation} \begin{equation} \label{ul4} \text{or} \, \, \, \langle \Xi_1 \rangle = 0 \, \, \, \text{and} \, \, \, \langle \Xi_2 \rangle = 0. \end{equation} In the first case, $U=\Sigma_3 $ commutes with $H$. In the other three cases, the symmetry depends on the state of $R$. The quantity represented by $\Sigma_3 $ is a constant of the motion for the open dynamics of $S$ only in the first case where $\Sigma_3 $ commutes with $H$; we can see from Eq.(\ref{US3}) that $\langle \Sigma_3 \rangle $ changes in time for various states of $S$ if $\gamma_1 $ and $\gamma_2 $ are not both zero, so $\Sigma_3 $ can not represent a dependent constant of the motion for any state of $R$. This symmetry alone implies that the underlined terms of Eqs.(\ref{US1})-(\ref{US3}) are zero because, according to Eqs.(\ref{Qkcomb})-(\ref{Qkcombt}), it requires $\text{Tr}_R[\rho_R e^{itH}\Sigma_1e^{-itH}]$ and $\text{Tr}_R[\rho_R e^{itH}\Sigma_2e^{-itH}]$ to anticommute with $\Sigma_3$ and requires $\text{Tr}_R[\rho_R e^{itH}\Sigma_3e^{-itH}]$ to commute with $\Sigma_3$. 
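As a sanity check of the case \eqref{ul4}, which requires only $\langle \Xi_1 \rangle = \langle \Xi_2 \rangle = 0$, the following sketch (with assumed nonzero example couplings and a diagonal $\rho_R$) confirms the commutation properties just stated: the reduced operator obtained from $\Sigma_1$ anticommutes with $\Sigma_3$, while the one obtained from $\Sigma_3$ commutes with it.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
Sig = [np.kron(p, I2) for p in (s1, s2, s3)]
Xi = [np.kron(I2, p) for p in (s1, s2, s3)]

g = (0.4, 0.9, 1.3)                      # all couplings nonzero (example values)
H = 0.5 * sum(gj * S @ X for gj, S, X in zip(g, Sig, Xi))
rhoR = np.diag([0.8, 0.2])               # diagonal rho_R: <Xi_1> = <Xi_2> = 0

def tr_R(X):
    # Reduced operator on S: Tr_R[(I ⊗ rho_R) X], with ordering S ⊗ R.
    M = (np.kron(I2, rhoR) @ X).reshape(2, 2, 2, 2)
    return np.einsum('irjr->ij', M)      # trace out the R indices

t = 1.7
w, V = np.linalg.eigh(H)
U = (V * np.exp(1j * t * w)) @ V.conj().T
A1 = tr_R(U @ Sig[0] @ U.conj().T)       # Tr_R[rho_R e^{itH} Sigma_1 e^{-itH}]
A3 = tr_R(U @ Sig[2] @ U.conj().T)       # Tr_R[rho_R e^{itH} Sigma_3 e^{-itH}]
print(np.allclose(A1 @ s3 + s3 @ A1, 0),   # anticommutes with Sigma_3, like Sigma_1
      np.allclose(A3 @ s3 - s3 @ A3, 0))   # commutes with Sigma_3, like Sigma_3
```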
Stronger symmetry can imply more properties of the dynamics. The rotation operators \begin{equation} \label{rotR} U(u) = e^{-iu(1/2)\Sigma_3} \end{equation} for all real $u$ represent dependent symmetries in the case where $\gamma_1$ and $\gamma_2$ are equal and $\langle \Xi_1 \rangle $ and $\langle \Xi_2 \rangle $ are zero; it is easy to check, using Eqs.(\ref{US1})-(\ref{US3}), that Eq.(\ref{opbasicdep}) is satisfied with $\Sigma_1$, $\Sigma_2$ or $\Sigma_3$ for $Q$. According to Eqs.(\ref{Qkcomb})-(\ref{Qkcombt}), this symmetry requires that $\text{Tr}_R[\rho_R e^{itH}\Sigma_1e^{-itH}]$ and $\text{Tr}_R[\rho_R e^{itH}\Sigma_2e^{-itH}]$ rotate like $\Sigma_1$ and $\Sigma_2$ when put between $U(u)^{\dagger}$ and $U(u)$, which implies that \begin{eqnarray} \label{vec12rot} \text{Tr}_R\left[\rho_R e^{itH}\Sigma_1e^{-itH}\right] & = & A_{11}(t)\Sigma_1 - A_{12}(t)\Sigma_2 \nonumber \\ \text{Tr}_R\left[\rho_R e^{itH}\Sigma_2e^{-itH}\right] & = & A_{11}(t)\Sigma_2 + A_{12}(t)\Sigma_1 \end{eqnarray} with real functions $A_{11}(t)$ and $A_{12}(t)$. When this symmetry is assumed, only four of the twelve terms of Eqs.(\ref{US1})-(\ref{US3}) need to be calculated from the dynamics. The symmetry generator $\Sigma_3 $ can not represent a dependent constant of the motion for any state of $R$ when $\gamma_1$ and $\gamma_2$ are not zero, because then, again, Eq.(\ref{US3}) implies that $\langle \Sigma_3 \rangle $ changes in time for various states of $S$. 
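The rotated form \eqref{vec12rot} can also be checked numerically. The sketch below assumes example values with $\gamma_1 = \gamma_2$ and a diagonal $\rho_R$ (so $\langle \Xi_1 \rangle = \langle \Xi_2 \rangle = 0$), extracts $A_{11}(t)$ and $A_{12}(t)$ from the first reduced operator, and confirms that the same coefficients reproduce the second:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
Sig = [np.kron(p, I2) for p in (s1, s2, s3)]
Xi = [np.kron(I2, p) for p in (s1, s2, s3)]

g = (0.8, 0.8, 1.3)                      # gamma_1 = gamma_2, as the symmetry requires
H = 0.5 * sum(gj * S @ X for gj, S, X in zip(g, Sig, Xi))
rhoR = np.diag([0.7, 0.3])               # <Xi_1> = <Xi_2> = 0

def tr_R(X):
    # Reduced operator on S: Tr_R[(I ⊗ rho_R) X], with ordering S ⊗ R.
    M = (np.kron(I2, rhoR) @ X).reshape(2, 2, 2, 2)
    return np.einsum('irjr->ij', M)

t = 1.1
w, V = np.linalg.eigh(H)
U = (V * np.exp(1j * t * w)) @ V.conj().T
A1 = tr_R(U @ Sig[0] @ U.conj().T)
A2 = tr_R(U @ Sig[1] @ U.conj().T)

A11 = np.trace(A1 @ s1).real / 2         # coefficient of Sigma_1 in A1
A12 = -np.trace(A1 @ s2).real / 2        # minus the coefficient of Sigma_2 in A1
print(np.allclose(A1, A11 * s1 - A12 * s2),
      np.allclose(A2, A11 * s2 + A12 * s1))   # True True, matching Eq. (vec12rot)
```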
To admit correlations between $S$ and $R$, we ask whether Eq.(\ref{mvbasic}) holds when $Q$ is $\Sigma_1 $, $\Sigma_2 $, $\Sigma_3 $, for density matrices $W$ for all the states of $S$ but only for particular correlations \begin{equation} \label{corrjk} \Gamma_{jk} = \langle \Sigma_j \Xi_k \rangle - \langle \Sigma_j \rangle \langle \Xi_k \rangle \, \, \, \text{for}\, \, \, j,k=1,2,3 \end{equation} between $S$ and $R$ and particular states of $R$ described by $\langle \Xi_1 \rangle $, $\langle \Xi_2 \rangle $, $\langle \Xi_3 \rangle $. For $U=\Sigma_3 $ again, Eq.(\ref{mvbasic}) holds when $Q$ is $\Sigma_1 $, $\Sigma_2 $, $\Sigma_3 $ if the mean value of each underlined term of Eqs.(\ref{US1})-(\ref{US3}) is zero. With correlations included, the things that have to be zero for this to happen are now either \begin{equation} \label{ul1cor} \gamma_1 \, \, \, \text{and} \, \, \, \gamma_2 , \end{equation} \begin{equation} \label{ul2cor} \text{or} \, \, \, \gamma_2 , \, \, \gamma_3 , \, \, \, \Gamma_{21} , \, \, \Gamma_{31} \, \, \text{and} \, \, \, \langle \Xi_1 \rangle , \end{equation} \begin{equation} \label{ul3cor} \text{or} \, \, \, \gamma_3 , \, \, \gamma_1 , \, \, \, \Gamma_{32} , \, \, \Gamma_{12} \, \, \text{and} \, \, \, \langle \Xi_2 \rangle , \end{equation} \begin{equation} \label{ul4cor} \text{or} \, \, \, \Gamma_{12} , \, \, \Gamma_{21} , \, \, \Gamma_{31} , \, \, \Gamma_{32} , \, \, \langle \Xi_1 \rangle \, \, \text{and} \, \, \, \langle \Xi_2 \rangle . \end{equation} \section{Outlook}\label{five} There are many more symmetries for the open dynamics of a subsystem than for the complete dynamics of the closed system that contains it. We have seen this by looking at symmetries described by unitary operators. The unitary symmetry operators can be just for the subsystem $S$, or just for $R$, the rest of the closed system, or for the entire system of $S$ and $R$ combined.
There are symmetries of a new kind that depend on correlations, or absence of correlations, between $S$ and $R$ or on the state of $R$. We have seen that the symmetries can reveal properties of the dynamics and reduce what needs to be done to work out the dynamics. A symmetry of the open dynamics of a subsystem can imply properties of the dynamics for the entire system that are not implied by the symmetries of the dynamics of the entire system. These observations are a beginning. Further examples and applications should be explored with hope that some can be put to significant use. One step being reported separately is a collaboration looking at more examples of dependent symmetries.\cite{Seo}
\section{Introduction} We consider \emph{online convex optimization} (OCO) of a sequence of convex functions $f_1,\ldots,f_T$ over a given bounded convex domain, which become available one by one over the course of $T$ rounds \citep{ShalevShwartz2011,HazanOCOBook2016}. Typically $f_t(\w) = \textsc{loss}(\w,\x_t,y_t)$ represents the \emph{loss} of predicting with parameters $\w$ on the $t$-th data point $(\x_t,y_t)$ in a machine learning task. At the start of each round $t$, a learner has to predict the best parameters $\w_t$ for the function $f_t$ before finding out what $f_t$ is, and the goal is to minimize the \emph{regret}, which is the difference in the sum of function values between the learner's predictions $\w_1,\ldots,\w_T$ and the best fixed oracle parameters $\u$ that could have been chosen if all the functions had been given in advance. A special case of OCO is prediction with expert advice \citep{cesa06}, where the functions $f_t(\w) = \w^\top \vloss_t$ are convex combinations of the losses $\vloss_t = (\loss_{t,1},\ldots,\loss_{t,K})^\top$ of $K$ expert predictors and the domain is the probability simplex. Central results in these settings show that it is possible to control the regret with almost no prior knowledge at all about the functions. For instance, knowing only an upper bound $G$ on the $\ell_2$-norms of the gradients $\grad_t = \nabla f_t(\w_t)$, the online gradient descent (OGD) algorithm guarantees $O(G \sqrt{T})$ regret by tuning its learning rate hyperparameter $\eta_t$ proportional to $1/(G\sqrt{t})$ \citep{Zinkevich2003}, and in the case of prediction with expert advice the Hedge algorithm achieves regret $O(L\sqrt{T\ln K})$ knowing only an upper bound $L$ on the range $\max_k \ell_{t,k} - \min_k \ell_{t,k}$ of the expert losses \citep{FreundSchapire1997}. 
Here $G$ is the $\ell_2$-Lipschitz constant of the learning task\footnote{We slightly abuse terminology here, because the standard definition of a Lipschitz constant requires an upper bound on the gradient norms for any parameters $\w$, not just for $\w = \w_t$, and may therefore be larger.}, and $L/2$ is the $\ell_1$-Lipschitz constant over the probability simplex. The above guarantees are tight if we make no further assumptions about the functions $f_t$ \citep{HazanOCOBook2016,CesaBianchiEtAl1997}, but they can be significantly improved if the functions have additional special structure that makes the learning task easier. The literature on online learning explores multiple orthogonal dimensions in which tasks may be significantly easier in practice (see `related work' below). Here we focus on the following regret guarantees that are known to exploit multiple types of easiness at the same time: \begin{align} \text{OCO:}& &O\left(\sqrt{V_T^\u d \log T}\right) \text{ for all $\u$,} \quad \text{with $V_T^\u = \sum_{t=1}^T ((\w_t - \u)^\top \grad_t)^2$,}\label{eqn:ourmetagradbound}\\ \text{Experts:}& &O\left(\sqrt{\E_{\rho(k)}[V_T^k] \KL(\rho\|\pi)}\right) \text{ for all $\rho$,} \quad \text{with $V_T^k = \sum_{t=1}^T ((\w_t - \e_k)^\top \vloss_t)^2$,}\label{eqn:oursquintbound} \end{align} where $d$ is the number of parameters and $\KL(\rho\|\pi) = \sum_{k=1}^K \rho(k) \ln \rho(k)/\pi(k)$ is the Kullback-Leibler divergence of a data-dependent distribution $\rho$ over experts from a fixed prior distribution~$\pi$. The OCO guarantee is achieved by the MetaGrad algorithm \citep{Erven2016}, and implies regret that grows at most logarithmically in $T$ both when the losses are curved (exp-concave, strongly convex) and in the stochastic case whenever the losses are independent, identically distributed samples with variance controlled by the Bernstein condition \citep{Erven2016,koolen2016}.
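For concreteness, the baseline OGD guarantee mentioned above can be reproduced numerically. The sketch below is an illustration with synthetic linear losses, not part of the algorithms studied here; the dimension, horizon, and domain radius are arbitrary assumed values. It runs projected OGD with $\eta_t = D/(G\sqrt{t})$ and checks the standard $\tfrac{3}{2}GD\sqrt{T}$ regret bound against the best fixed action in hindsight:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 2, 2000
G, R = 1.0, 1.0                  # gradient bound and ball radius (assumed setup)
D = 2 * R                        # diameter of the domain

def project(w):                  # Euclidean projection onto the ball of radius R
    n = np.linalg.norm(w)
    return w if n <= R else (R / n) * w

w = np.zeros(d)
cum_loss, gsum = 0.0, np.zeros(d)
for t in range(1, T + 1):
    g = rng.normal(size=d)
    g *= G / np.linalg.norm(g)   # linear losses f_t(w) = <g_t, w> with ||g_t|| = G
    cum_loss += g @ w
    gsum += g
    w = project(w - (D / (G * np.sqrt(t))) * g)   # eta_t = D / (G sqrt(t))

u = -R * gsum / np.linalg.norm(gsum)              # best fixed action in hindsight
regret = cum_loss - gsum @ u
print(regret <= 1.5 * G * D * np.sqrt(T))         # True, per the standard analysis
```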
The guarantee for the expert case is achieved by the Squint algorithm \citep{koolen2015,squintPAC}. It also exploits special structure along two dimensions simultaneously, because the $V_T^k$ term is much smaller than $L^2 T$ in many cases \citep{GaillardStoltzVanErven2014,koolen2016} and the so-called \emph{quantile bound} $\KL(\rho\|\pi)$ is much smaller than the worst case $\ln K$ when multiple experts make good predictions \citep{ChaudhuriFreundHsu2009,ChernovVovk2010}. Squint and MetaGrad are both based on the same technique of tracking the empirical performance of \emph{multiple learning rates} in parallel over a quadratic approximation of the original loss. A computational difference though is that Squint is able to do this by a continuous integral that can be evaluated in closed form, whereas MetaGrad uses a discrete grid of learning rates. Unfortunately, to achieve \eqref{eqn:ourmetagradbound} and \eqref{eqn:oursquintbound}, both MetaGrad and Squint need knowledge of the Lipschitz constant ($G$ or $L$, respectively). Overestimating $G$ or $L$ by a factor of $c > 1$ has the effect of reducing the effective amount of available data by the same factor $c$, but underestimating the Lipschitz constant is even worse because it can make the methods fail completely. In fact, the ability to adapt to $G$ has been credited \citep{WardWuBottou2018} as one of the main reasons for the practical success of the AdaGrad algorithm \citep{DuchiHazanSinger2011,McMahanStreeter2010}. Thus getting the Lipschitz constant right makes the difference between having practical algorithms and having promising theoretical results. For OCO, an important first step towards combining Lipschitz adaptivity to $G$ with regret bounds of the form \eqref{eqn:ourmetagradbound} was taken by \citet{cutkosky2017}, who aimed for \eqref{eqn:ourmetagradbound} but had to settle for a weaker result with $G \sum_{t=1}^T \|\grad_t\|_2 \|\w_t - \u\|_2^2$ instead of $V_T^\u$. 
Although not sufficient to adapt to the Bernstein condition, they do provide a series of stochastic examples where their bound already leads to fast $O(\ln^4 T)$ rates. For the expert setting, \citet{Wintenberger2017} has made significant progress towards a version of \eqref{eqn:oursquintbound} without the quantile bound improvement, but he is left with having to specify an initial guess $L_\text{guess}$ for $L$ that enters as $O(\ln \ln (L/L_\text{guess}))$ in his bound, which may yet be arbitrarily large when the initial guess is on the wrong scale. \paragraph{Main Contributions} Our main contribution is to complete the process begun by \citet{cutkosky2017} and \citet{Wintenberger2017} by showing that it is indeed possible to achieve \eqref{eqn:ourmetagradbound} and \eqref{eqn:oursquintbound} without prior knowledge of $G$ or $L$. In fact, for the expert setting we are able to adapt to the tighter quantity $B \geq \max_k |(\w_t - \e_k)^\top \vloss_t|$. We achieve these results by dynamically updating the set of active learning rates in MetaGrad and Squint depending on the observed Lipschitz constants. In both cases we encounter a tuning issue similar to that of \citet{Wintenberger2017}, but we avoid the need to specify any initial guess using a new restarting scheme, which restarts the algorithm when the observed Lipschitz constant increases too much. In addition to these main results, we remove the need to specify the number of rounds $T$ in advance for MetaGrad by adding learning rates as $T$ gets larger, and we improve the computational efficiency of how it handles constraints on the domain of prediction: by a minor extension of the black-box reduction for projections of \citet{cutkosky2018}, we incur only the computational cost of projecting on the domain of interest in \emph{Euclidean} distance. This should be contrasted with the usual projections in time-varying Mahalanobis distance for second-order methods like MetaGrad.
\paragraph{Related Work} If adapting to the Lipschitz constant were our only goal, a well-known way to achieve it for OCO would be to change the learning rate in OGD to $\eta_t \propto 1/\sqrt{\sum_{s\leq t} \|\grad_s\|_2^2}$, which leads to $O(\sqrt{\sum_{t\leq T} \|\grad_t\|_2^2}) = O(G \sqrt{T})$ regret. This is the approach taken by AdaGrad (for each dimension separately) \citep{DuchiHazanSinger2011,McMahanStreeter2010}. In prediction with expert advice, Lipschitz-adaptive methods are sometimes called \emph{scale-free} and have previously been obtained by \citet{cbms07,rooij14}, with generalizations to OCO by \citet{OrabonaPal2015}. In addition, the first two of these works obtain a data-dependent variance term that is different from $V_T^k$ in \eqref{eqn:oursquintbound}, but no quantile bounds are known for the former. Results for the latter have previously been obtained by \citet{GaillardStoltzVanErven2014,Wintenberger2014Arxiv} without quantile bounds, and with a slightly weaker notion of variance by \citet{AdaNormalHedge}. Quantile bounds without variance adaptivity were introduced by \citet{ChaudhuriFreundHsu2009,ChernovVovk2010}. These may be interpreted as measures of the complexity of the comparator $\rho$. The corresponding notion in OCO is to adapt to the norm of $\u$, which has been achieved in various different ways, see for instance \citep{McMahanAbernethy2013,cutkosky2018}. For curved functions, existing results achieve fast rates assuming that the degree of curvature is known \citep{HazanAgarwalKale2007}, measured online \citep{BartlettHazanRakhlin2007,Do2009} or entirely unknown \citep{Erven2016,cutkosky2018}. Fast rates are also possible for slowly-varying linear functions and, more generally, optimistically predictable gradient sequences \citep{hazan2010extracting,GradualVariationInCosts2012,RakhlinSridharan2013}.
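The scale-free character of the norm-adaptive step size $\eta_t \propto 1/\sqrt{\sum_{s\leq t} \|\grad_s\|_2^2}$ discussed above is easy to see in code: rescaling every gradient by a constant leaves the iterates unchanged. A minimal sketch with synthetic gradients (all sizes assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, R = 3, 200, 1.0
grads = rng.normal(size=(T, d))          # synthetic gradient sequence

def adaptive_gd(gs, radius=R):
    # GD with the norm-adaptive step size eta_t = radius / sqrt(sum_{s<=t} ||g_s||^2),
    # projected onto the Euclidean ball of the given radius.
    w, sumsq, iterates = np.zeros(d), 0.0, []
    for g in gs:
        iterates.append(w.copy())
        sumsq += g @ g
        w = w - (radius / np.sqrt(sumsq)) * g
        n = np.linalg.norm(w)
        if n > radius:
            w *= radius / n
    return np.array(iterates)

# Rescaling all gradients by a constant does not change the trajectory:
print(np.allclose(adaptive_gd(grads), adaptive_gd(128.0 * grads)))   # True
```

The rescaling factor cancels exactly between the gradient and the step size, which is what makes the tuning scale-free.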
We view our results as a step towards developing algorithms that automatically adapt to multiple relevant measures of difficulty at the same time. It is not a given that such combinations are always possible. For example, \citet{CutkoskyBoahen2017Impossible} show that Lipschitz adaptivity and adapting to the comparator complexity in OCO, although both achievable independently, cannot both be realized at the same time (at least not without further assumptions). A general framework to study which notions of task difficulty do combine into achievable bounds is provided by \citet{FosterRakhlinSridharan2015}. \citet{FosterRakhlinSridharan2017} characterize the achievability of general data-dependent regret bounds for domains that are balls in general Banach spaces. \paragraph{Outline} We add Lipschitz adaptivity to Squint for the expert setting in Section~\ref{Squint2}. Then, in Section~\ref{MetaC}, we do the same for MetaGrad in the OCO setting. The developments are analogous at a high level but differ in the details for computational reasons. We highlight the differences along the way. Section~\ref{MetaC} further describes how to avoid specifying $T$ in advance for MetaGrad. Then, in Section~\ref{four}, we add efficient projections for MetaGrad, and finally Section~\ref{sec:conclusion} concludes with a discussion of directions for future work. \section{Problem Setting and Notation} In OCO, a learner repeatedly chooses actions $\w_t$ from a closed convex set $\mathcal{U} \subseteq \mathbb{R}^d$ during rounds $t=1,\ldots,T$, and suffers losses~$f_t(\w_t)$, where $f_t: \mathcal{U} \to \mathbb{R}$ is a convex function. The learner's goal is to achieve small \emph{regret} $R_T^\u = \sum_{t=1}^T f_t(\w_t) - \sum_{t=1}^T f_t(\u)$ with respect to any comparator action $\u \in \mathcal{U}$, which measures the difference between the cumulative loss of the learner and the cumulative loss they could have achieved by playing the oracle action~$\u$ from the start. A special case of OCO is prediction with expert advice, where $f_t(\w) = \w^\top \vloss_t$ for $\vloss_t \in \mathbb{R}^K$ and the domain $\mathcal{U}$ is the probability simplex $\simplex_K = \{(w_1,\ldots,w_K) : w_i \geq 0, \sum_i w_i = 1\}$.
In this context we will further write $\p$ instead of $\w$ for the parameters to emphasize that they represent a probability distribution. We further define $[K] = \{1,\ldots,K\}$. \section{Adaptive Second-order Quantile Method for Experts} \label{Squint2} In this section, we present an extension of the \textsc{Squint} algorithm that adapts automatically to the loss range in the setting of prediction with expert advice. Throughout this section, we denote $r^k_t \coloneqq \inner{\what{\bm{p}}_t- \bm{e}_k}{\bm{\ell}_t}$ and $v^k_t \coloneqq (r^k_t)^2$, where $\bh{p}_t \in \triangle_K$ is the weight vector played by the algorithm at round $t$ and $\bm{\ell}_t$ is the observed loss vector. The cumulative regret with respect to expert $k$ is given by $\tmp{R}^k_t\coloneqq \sum_{s=1}^t r^k_s$. We use $\tmp{V}^k_t \coloneqq \sum_{s=1}^t v^k_s$ to denote the cumulative squared excess loss (which can be regarded as a measure of variance) of expert $k$ at round $t$. In the next subsection, we review the \textsc{Squint} algorithm. \subsection{The \textsc{Squint} Algorithm} \label{AdaptiveSquint} We first describe the original \textsc{Squint} algorithm, as introduced by \cite{koolen2015}. Let $\pi$ and $\gamma$ be prior distributions with supports on $[K]$ and $\left]0, \frac{1}{2}\right]$, respectively. Then \textsc{Squint} outputs predictions \begin{gather} \label{Squintforcaster} \p_{t+1} \propto \underset{\pi(k)\gamma(\eta)}{\mathbb{E}}\left[ \eta e^{- \sum_{s=1}^t f_s(k,\eta)} \bm{e}_k \right], \shortintertext{where $f_t(k,\eta)$ are quadratic \emph{surrogate losses} defined by} \label{surrogatesquint0} f_t(k,\eta) \coloneqq - \eta \inner{\bh{p}_t-\bm{e}_k}{\bm{\ell}_t} + \eta^2 \inner{\bh{p}_t-\bm{e}_k}{\bm{\ell}_t}^2. 
\end{gather} \cite{koolen2015} propose to use the \emph{improper} prior $\gamma(\eta) = \frac{1}{\eta}$ which does not integrate to a finite value over its domain, but because of the weighting by $\eta$ in \eqref{Squintforcaster} the predictions $\p_{t+1}$ are still well-defined. The benefit of the improper prior is that it allows calculating $\p_{t+1}$ in closed form \citep{koolen2015}. For any distribution $\rho \in \simplex_K$, \textsc{Squint} achieves the following bound: \begin{align} \tmp{R}^{\rho}_T = O\left(\sqrt{\tmp{V}^{\rho}_T\left( \KL(\rho || \pi ) + \ln \ln T\right)}\right), \label{Squintbound} \end{align} where $R_T^{\rho} = \mathbb{E}_{\rho(k)}\left[R_T^{k} \right]$ and $V_T^{\rho} = \mathbb{E}_{\rho(k)}\left[V_T^{k} \right]$. This version of Squint assumes the loss range $\max_k \ell_{t,k} - \min_k \ell_{t,k}$ is at most $1$, and can fail otherwise. In the next subsection, we present an extension of \textsc{Squint} which does not need to know the Lipschitz constant. \subsection{Lipschitz Adaptive Squint} \let\scale\bar We first design a version of \textsc{Squint}, called \textsc{Squint+C}{}, that still requires an initial estimate $B > 0$ of the Lipschitz constant. The next section will be devoted to setting this parameter online. For now, we consider it fixed. In addition to this, the algorithm takes a prior distribution $\pi \in \triangle_K$. In a sequence of rounds $t = 1, 2, \ldots$ the algorithm predicts with $\hat\p_t \in \triangle_K$, and receives a loss vector $\vloss_t \in \mathbb{R}^K$. We denote the \emph{instantaneous regret of expert $k$ in round $t$} by $r_t^k \df \tuple{\hat \p_t - \e_k, \vloss_t}$. We denote the observed Lipschitz constant in round $t$ at point $\hat \p_t$ by $ b_t \df \max_k \abs{r_t^k}$, and we denote its running maximum by $B_t \df B \lub \max_{s \le t} b_s$, and we use the convention that $B_0=B$.
In addition, we will also require a clipped version of the loss vector $\scale{\vloss_t} = \frac{B_{t-1}}{B_t} \vloss_t$, and we denote by $\scale{r}_t^k = \tuple{\hat \p_t - \e_k, \scale \vloss_t}$ the rescaled instantaneous regret. We will use that $\abs{\scale r_t^k} \le B_{t-1}$. It suffices to control the regret for the clipped loss, because the cumulative difference is a negligible lower-order constant\footnote{We learned this technique from Ashok Cutkosky}: \begin{equation}\label{eq:ashok} R_T^k - \scale R_T^k ~\df~ \sum_{t=1}^T \del*{r_t^k - \scale r_t^k} ~=~ \sum_{t=1}^T \del*{B_t - B_{t-1}} \frac{r_t^k}{B_t} ~\le~ B_T - B_0 . \end{equation} This means we can focus on regret for $\scale \vloss_t$, for which the range bound $\abs{\scale r_t^k} \le B_{t-1}$ is available \emph{ahead} of each round $t$. To motivate \textsc{Squint+C}{}, we define the potential function after $T$ rounds by \begin{equation}\label{eq:sq.pot} \Phi_T \df \sum_k \pi_k \int_0^\frac{1}{2 B_{T-1}} \frac{e^{\eta \scale{R}_T^k - \eta^2 \scale{V}_T^k} -1}{\eta} \dif \eta \quad \text{where} \quad \scale R_T^k \df \sum_{t=1}^T \scale r_t^k ~~ \text{and} ~~ \scale V_T^k \df \sum_{t=1}^T (\scale r_t^k)^2 . \end{equation} We also define $\Phi_0 = 0$ (due to the integrand being zero), even though it involves the meaningless $B_{-1}$ in the upper limit. The algorithm is now derived from the desire to keep this potential under control. As we will see in the analysis, this requirement uniquely forces the choice of weights \begin{equation}\label{eq:sq.weights} \hat p_{T+1}^k ~\propto~ \pi_k \int_0^\frac{1}{2 B_T} e^{\eta \scale{R}_T^k - \eta^2 \scale{V}_T^k} \dif \eta . \end{equation} Like the original \textsc{Squint}, we see that the weights $\hat \p_{t+1}$ can be evaluated in closed form using Gaussian CDFs. The regret analysis consists of two parts. First, we show that the algorithm keeps the potential small. 
\begin{lemma}\label{lem:pot.is.small} Given parameter $B>0$, \textsc{Squint+C}{} ensures $\Phi_T \le \ln \frac{B_{T-1}}{B}$. \end{lemma} The next step of the argument is to show that small potential is useful. The argument here follows \cite{koolen2015}, specifically the version by \cite{squintPAC}. We have \begin{lemma}\label{lem:small.is.good} Definition \eqref{eq:sq.pot} implies that for any comparator distribution $\rho \in \triangle_K$ the regret is at most \begin{gather} \scale R_T^\rho ~\le~ \sqrt{2 \scale V_T^\rho} \del*{ 1+ \sqrt{2 C_T^{\rho}} } + 5 B_{T-1} \del*{C_T^{\rho}+ \ln 2}, \quad \text{where} \\ C^{\rho}_T ~\df~ \KL \delcc*{\rho}{\pi} + \ln \del*{ \Phi_T + \frac{1}{2} + \ln \left(2+ \sum_{t=1}^{T-1} \frac{b_t}{B_t} \right) } . \end{gather} \end{lemma} Keeping only the dominant terms, this reads \[ \scale R_T^\rho ~=~ O\del*{ \sqrt{\scale V_T^\rho \del*{\KL \delcc*{\rho}{\pi} + \ln \Phi_T + \ln \ln T}} } . \] The significance of \eqref{eq:ashok}, Lemmas~\ref{lem:pot.is.small} and~\ref{lem:small.is.good} is that we obtain a bound of the form \[ R_T^\rho ~=~ O \del*{ \sqrt{V_T^\rho \del*{\KL \delcc*{\rho}{\pi} + \ln \ln \frac{TB_{T-1}}{B}}} + 5 B_T \del*{\KL \delcc*{\rho}{\pi} + \ln \ln \frac{TB_{T-1}}{B} } } . \] However, there does not seem to be any safe a priori way to tune $B=B_0$. If we set it too small, the factor $\ln \ln \frac{B_{T-1}}{B}$ explodes. If we set it too large, the lower-order contribution $B_{T-1} \ge B$ blows up. It does not appear possible to bypass this tuning dilemma within the current construction. Fortunately, we are able to resolve it using restarts. Algorithm~\ref{bb1alg}, which applies to both \textsc{Squint+C}{} and \textsc{MetaGrad+C}{} (presented in the next section), monitors a condition of the sequences $(b_t)$ and $(B_t)$ to trigger restarts.
\begin{algorithm}[tbp] \caption{Restarts to make {\textsc{Squint+C}} or {\textsc{MetaGrad+C}} scale-free} \label{bb1alg} \begin{algorithmic}[1] \REQUIRE {\textsc{Alg}} is either {\textsc{Squint+C}} or {\textsc{MetaGrad+C}}, taking as input parameter an initial scale $B$ \STATE Play $\w_1$ until the first time $t=\tau_1$ that $b_t \neq 0$. \STATE \label{line:runmetagrad} Run {\textsc{Alg}} with input $B = B_{\tau_1}$ until the first time $t=\tau_2$ that $\displaystyle \frac{B_t}{B_{\tau_1}} > \sum_{s=1}^t \frac{b_s}{B_s}$.\\ \STATE Set $\tau_1 = \tau_2$ and goto line \ref{line:runmetagrad}. \end{algorithmic} \end{algorithm} \begin{theorem} \label{blackboxreduction0} Let \textsc{Squint+L}{} be the result of applying Algorithm~\ref{bb1alg} to \textsc{Squint+C}{} (as \textsc{Alg}). \textsc{Squint+L}{} guarantees, for any comparator $\rho\in \triangle_K$, \begin{align} R_T^\rho ~\le~ 2\sqrt{ V_T^\rho} \del*{ 1+ \sqrt{2 \Gamma_T^{\rho}} } + 10 B_{T} \del*{ \Gamma_T^{\rho} + \ln 2} + 4 B_T, \end{align} where $ \Gamma_T^{\rho} ~\df~\KL \delcc*{\rho}{\pi}+ \ln \del*{\ln \left(\sum_{t=1}^{T-1} \frac{b_{t}}{B_t}\right) + \frac{1}{2}+ \ln \left(2+\sum_{t=1}^{T-1} \frac{b_{t}}{B_t}\right)}$. \end{theorem} Theorem \ref{blackboxreduction0} shows that the bound on the regret of \textsc{Squint+L}{} has a term of order $O(\ln \ln \sum_{t=1}^{T-1} \frac{b_{t}}{B_t})=O(\ln \ln T)$, which does not depend on the initial guess $B_0$ anymore. \section{Adaptive Method for Online Convex Optimization} \label{MetaC} We consider the Online Convex Optimization (OCO) setting where at each round $t$, the learner predicts by playing $\what{\bm{u}}_t$ in a closed convex set $\mathcal{U} \subset \mathbb{R}^d$, then the environment announces a convex function $\ell_t : \mathcal{U}\rightarrow [0,+\infty[$ and the learner suffers loss $\ell_t(\what{\bm{u}}_t)$. 
The goal of the learner is to minimize the regret with respect to the single best action $\bm{u}\in \mathcal{U}$ in hindsight (after $T$ rounds); that is, minimizing $\tmp{R}^{\bm{u}}_T \coloneqq \sum_{t=1}^T \ell_t(\what{\bm{u}}_t) - \sum_{t=1}^T \ell_t(\bm{u})$ for the worst case $\bm{u}\in \mathcal{U}$. Since the losses are convex, it suffices to bound the sum of linearized losses $\tilde{R}^{\bm{u}}_T \df \sum_{t=1}^T \inner{\what{\bm{u}}_t - \bm{u}}{\bm{g}_t}$, where $\bm{g}_t \coloneqq \nabla \ell_t(\what{\bm{u}}_t)$. We will assume that the set $\mathcal{U}$ is bounded and let $D\in ]0,+\infty[$ be its diameter \begin{align} \label{rad}D \coloneqq \sup_{\bm{u}, \bm{v}\in \mathcal{U}} \norm{\bm{u} - \bm{v}}_2.\end{align} Our main contribution in this section is to devise a simple modification of \textsc{MetaGrad} --- \textsc{MetaGrad+C}{} --- which, without prior knowledge of the maximum value of the gradient range $G \coloneqq \max_{t \le T} \norm{\nabla \ell_t(\what{\bm u}_t)}$, guarantees the following regret bound \begin{align} \forall \bm{u}\in \mathcal{U}, \quad R_T^\u \leq \tilde{R}^{\bm{u}}_T ~=~ O \del*{\sqrt{\tmp{V}^{\bm{u}}_T d\ln T } + B_T d \ln T}, \label{mresult} \end{align} where $\tmp{V}^{\bm{u}}_T \coloneqq \sum_{t=1}^T \inner{\bh{u}_t - \bm{u}}{\bm{g}_t}^2$. Consequently, this algorithm inherits the fast convergence results of standard \textsc{MetaGrad} \citep{Erven2016}. In particular, it was shown that due to the form of the bound in \eqref{mresult}, \textsc{MetaGrad} achieves logarithmic regret when the losses are exp-concave \citep{Erven2016}.
Furthermore, when the loss functions $(\ell_t)$ are i.i.d.\ with common distribution $\mathbb{P}$ and satisfy the ($B, \beta$)-Bernstein condition for $B > 0$ and $\beta \in [0, 1]$ with respect to the risk minimizer $\bm{u}^* =\argmin_{\bm{u}\in \mathcal{U}}\mathbb{E}_{f \sim \mathbb{P}}[f(\bm{u})]$, \textsc{MetaGrad} (and thus \textsc{MetaGrad+C}{}) achieves the expected regret \begin{align*} \mathbb{E}\left[\tmp{R}^{\bm{u}^*}_T\right] ~=~ O \del*{ (d \ln T)^{\frac{1}{2-\beta}} T^{\frac{1-\beta}{2-\beta}} + d \ln T}. \end{align*} See \citep{koolen2016} for more details. \subsection{The \textsc{MetaGrad} Algorithm} The \textsc{MetaGrad} algorithm runs several sub-algorithms at each round; namely, a set of slave algorithms, which learn the best action in $\mathcal{U}$ given a learning rate $\eta$ in some predefined grid $\mathcal{G}$, and the master algorithm, which learns the best learning rate. The goal of \textsc{MetaGrad} is to minimize the sum of surrogate losses $\sum_{t=1}^T f_t(\bm{u},\eta)$ over all $\eta \in \mathcal{G}$ and $\bm{u}\in \mathcal{U}$ simultaneously, where \begin{align} \label{surrmeta} f_t(\bm{u},\eta) \coloneqq - \eta \inner{\bh{u}_t - \bm{u}}{\bm{g}_t} + \eta^2 \inner{\bh{u}_t - \bm{u}}{\bm{g}_t}^2,\quad t\in [T], \end{align} and $\bh{u}_t$ is the master prediction at round $t\geq 1$. Each slave algorithm takes as input a learning rate from a finite, exponentially-spaced grid $\mathcal{G}$ (with $\ceil{\log_2 \sqrt{T}}$ points) within the interval $\left[\frac{1}{5DG\sqrt{T}}, \frac{1}{5DG}\right]$, where $G$ is an upper bound on the norms of the gradients. In this case, the bound $G$ must be known in advance. In what follows, we let $\mathbf{M}_t \coloneqq \sum_{s=1}^t \bm{g}_s\bm{g}_s^{\T}$, for $ t \geq 0$. \paragraph{Slave predictions.} Every slave $\eta \in \mathcal G$ starts with $\bh{u}_1^\eta = \bm{0}$.
At the end of round $t \ge 1$, it receives the master prediction $\bh{u}_t$ and updates the prediction in two steps \begin{gather} {\bm{u}}^{\eta}_{{t+1}} \coloneqq \bh{u}^{\eta}_t - \eta \mathbf{\Sigma}^{\eta}_{{t+1}} \bm{g}_t \left(1 + 2 \eta \left( \bh{u}^{\eta}_t -\bh{u}_t \right)^{\T}\bm{g}_t\right) , \text{ where }\ \mathbf{\Sigma}^{\eta}_{{t+1}} \coloneqq \left( \tfrac{\mathbf{I}}{D^2} +2\eta^2\mathbf{{M}}_t \right)^{-1}, \label{quadprog0}\\ \label{gaussian} \text{ and } \ \ \bh{u}^{\eta}_{{t+1}} = \argmin_{\bm{u}\in \mathcal{U}} \left(\bm{u}_{{t+1}}^{\eta}- \bm{u} \right)^{\T}\left( \mathbf{\Sigma}^{\eta}_{{t+1}}\right)^{-1} \left(\bm{u}_{{t+1}}^{\eta}- \bm{u} \right) . \end{gather} \paragraph{Master predictions.} After receiving the slaves' predictions $\left(\bh{u}^{\eta}_t\right)_{\eta \in \mathcal{G}}$, the master algorithm aggregates them and outputs $\bh{u}_t\in \mathcal{U}$ according to: \begin{align} \bh{u}_{t} \coloneqq \frac{\sum_{\eta\in \mathcal{G}} \eta w^{\eta}_t \bh{u}^{\eta}_{t} }{\sum_{\eta \in \mathcal{G}}\eta w^{\eta}_t };\quad w^{\eta}_t \coloneqq e^{- \sum_{s=1}^{t-1} f_s(\bh{u}^{\eta}_s,\eta)}. \label{masterpred} \end{align} As mentioned earlier, the \textsc{MetaGrad} algorithm requires the knowledge of the maximum value of the gradient range $G$ and the horizon $T$ in advance. These are needed to define the grid of the slave algorithms. In the analysis of \textsc{MetaGrad}, it is crucial for the $\eta$'s to be in the right interval in order to invoke a Gaussian exp-concavity result for the surrogate losses in \eqref{surrmeta} (see e.g.\ \citep[Lemma 10]{Erven2016}). In the next subsection, we explore a natural extension of \textsc{MetaGrad} which does not require the knowledge of the gradient range or the horizon $T$.
\subsection{An Extension of \textsc{MetaGrad} for Unknown Gradient Range and Horizon} We present a natural extension of \textsc{MetaGrad}, called \textsc{MetaGrad+C}{}, which does not assume any knowledge of the gradient range or the horizon. Contrary to the original \textsc{MetaGrad}, which requires knowledge of the horizon $T$ to define the grid for the slaves, \textsc{MetaGrad+C}{} circumvents this by defining an infinite grid $\mathcal{G}$, in which, at any given round $t\geq1$, only a finite number of slaves (up to $\log_2 t$ many) output a prediction (see Remark \ref{numslaves}). Each slave $\eta$ in this grid receives a prior weight $\pi(\eta) \in[0,1]$, where $\sum_{\eta\in \mathcal{G}} \pi(\eta) =1$. The expressions of $\mathcal{G}$ and $\pi$ are given by \begin{align} \mathcal{G} \coloneqq \left\{ \eta_i \coloneqq \tfrac{2^{-i}}{5 B}: i \in \mathbb{N} \right\} \label{Ggrid} ;\quad \pi(\eta_i) \coloneqq \tfrac{1}{(i+1)(i+2)}, \ i\in \mathbb{N}, \end{align} where $B>0$ is the input to \textsc{MetaGrad+C}. \subsubsection{Algorithm Description} \paragraph{Preliminaries.} As in the previous subsection, we let $\bh{u}_t$ and $\bh{u}^{\eta}_{t}$ be the predictions of the master and slave $\eta$, respectively, at round $t\geq1$ (we give their explicit expressions further below). Let $(b_t)$ and $(B_t)$ be the sequences in $\mathbb{R}_{\geq0}$ defined by \begin{align} b_t \coloneqq D \norm{\bm{g}_t}_2; \quad \quad \quad B_t \coloneqq B \vee \max_{s\in [t]} b_s, \quad t\in[T], \label{littleb} \end{align} where $B$ is the input of \textsc{MetaGrad+C}, and we use the convention that $B_0 = B$.
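As a quick sanity check on the prior in \eqref{Ggrid} (an illustrative sketch in exact rational arithmetic, not part of the algorithm): $\pi(\eta_i) = \tfrac{1}{i+1} - \tfrac{1}{i+2}$ telescopes, so the partial sums over the first $N$ grid points equal $1 - \tfrac{1}{N+1}$, and $\pi$ indeed sums to $1$ over the infinite grid.

```python
from fractions import Fraction

def prior(i):
    # π(η_i) = 1/((i+1)(i+2)) = 1/(i+1) − 1/(i+2): a telescoping prior.
    return Fraction(1, (i + 1) * (i + 2))

partial = sum(prior(i) for i in range(100))  # first 100 grid points
print(partial)  # 100/101, i.e. 1 − 1/101
```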
Using the sequence $(B_t)$, we define the clipped gradients $\bar{\bm{g}}_t \coloneqq \frac{B_{t-1}}{B_t} \bm{g}_t$, and $\forall \bm{u}\in\mathcal{U},\forall t\geq 1, \forall \eta >0$, we let \begin{align} \bar{r}^{\bm{u}}_t \coloneqq \inner{\bh{u}_t-\bm{u}}{\bar{\bm{g}}_t},\quad \quad \bar{f}_t(\bm{u},\eta)\coloneqq - \eta \bar{r}^{\bm{u}}_t + \left(\eta \bar{r}^{\bm{u}}_t\right)^2,\quad \quad \bar{\mathbf{M}}_t\coloneqq \sum_{s=1}^t \bar{\bm{g}}_s \bar{\bm{g}}_s^{\T}. \label{clippedstuff} \end{align} For each slave $\eta\in \mathcal{G}$, we define the time $s_\eta$ to be \begin{align} \label{threshold} s_\eta ~\df~ \min\setc*{t \ge 0}{ \eta \geq \frac{1}{D \sum_{s=1}^t \norm{\bar{\bm{g}}_s}_2 + B_t} }, \end{align} and define the set $\mathcal{A}_t$ of ``active'' slaves by \begin{align} \mathcal{A}_t \coloneqq \{ \eta \in \mathcal{G}_t : s_\eta < t \}, \quad \text{where} \quad \mathcal{G}_t \coloneqq \mathcal{G} \cap \left[0, \tfrac{1}{5B_{t-1}}\right] , \quad t\geq 1. \end{align} \paragraph{Slaves' predictions.} A slave $\eta \in \mathcal{A}_t$ issues its first prediction $\bh{u}_t^\eta = \bm{0}$ in round $t=s_\eta+1$. From then on, it receives the master prediction $\bh{u}_t$ as input and updates in two steps as \begin{gather} \bm{u}^{\eta}_{{t+1}} \coloneqq \bh{u}^{\eta}_t - \eta \mathbf{\Sigma}^{\eta}_{{t+1}} \bar{\bm{g}}_t \left(1 + 2 \eta \left( \bh{u}^{\eta}_t -\bh{u}_t \right)^{\T}\bar{\bm{g}}_t\right), \text{ where }\ \mathbf{\Sigma}^{\eta}_{{t+1}} \coloneqq \left( \tfrac{\mathbf{I}}{D^2} +2\eta^2\left(\bar{\mathbf{M}}_t -\bar{\mathbf{M}}_{s_{\eta}}\right) \right)^{-1}, \nonumber \\ \text{ and }\ \ \bh{u}^{\eta}_{{t+1}} = \argmin_{\bm{u}\in \mathcal{U}} \left(\bm{u}_{{t+1}}^{\eta}- \bm{u} \right)^{\T}\left( \mathbf{\Sigma}^{\eta}_{{t+1}}\right)^{-1} \left(\bm{u}_{{t+1}}^{\eta}- \bm{u} \right). 
\label{quadprog} \end{gather} Slaves that are outside the set $\mathcal{A}_t$ at round $t$ are irrelevant to the algorithm\footnote{The predictions of the slaves outside $\mathcal{A}_t$ do not appear anywhere in the description or analysis of the algorithm. Alternatively, we may think of each slave $\eta$ as operating with $\eta_t=0$ in the first $s_\eta$ rounds and with $\eta_t=\eta$ afterwards. The presence of the factor $\eta$ in \eqref{masterpred} renders the master oblivious to inactive slaves. }. Note that restricting the slaves to the set $\mathcal{G}_t$ is similar to clipping the upper integral range in the \textsc{Squint+C}{} case. \paragraph{Master predictions.} At each round $t\geq1$, the master algorithm receives the slaves' predictions $(\bh{u}_t^{\eta})_{\eta\in \mathcal{A}_{t}}$ and outputs $\widehat{\bm{u}}_t$: \begin{align} \label{newmaster} \bh{u}_t = \frac{\sum_{\eta \in \mathcal{A}_{t}}\eta w^{\eta}_t \bh{u}_t^{\eta}}{\sum_{\eta \in \mathcal{A}_{t}} \eta w^{\eta}_t }; \quad w^{\eta}_t \coloneqq \pi(\eta) e^{- \sum_{s=s_{\eta}+1}^{t-1} \bar{f}_s(\bh{u}^{\eta}_s,\eta)}, \quad t\geq 1. \end{align} \begin{remark}[Number of active slaves] \label{numslaves} At any round $t\geq 1$, the number of active slaves is at most $\floor{\log_2 t}$. In fact, if $\eta \in \mathcal{A}_t$, then by definition $\eta \geq 1/(D\sum_{s=1}^{s_{\eta}}\norm{\bar{\bm{g}}_s}_2 + B_{s_{\eta}}) \geq 1/(t B_{t-1})$ (since $s_{\eta}\leq t-1$), and thus $\mathcal{A}_t \subset [1/(tB_{t-1}), 1/(5B_{t-1})]$. Since $\mathcal{A}_t$ is contained in an exponentially-spaced grid with base $2$, there are at most $\floor{\log_2 t}$ slaves in $\mathcal{A}_t$.
\end{remark} \subsubsection{Analysis} To analyse the performance of \textsc{MetaGrad+C}, we consider the potential function \begin{align} \label{masterpot} \Phi_t \coloneqq \pi(\mathcal{G}_t\setminus \mathcal{A}_t) + \sum_{\eta\in \mathcal{A}_t} \pi(\eta) e^{-\sum_{s=s_{\eta}+1}^t \bar{f}_s(\bh{u}^{\eta}_s,\eta)}, \quad t\geq 0.\end{align} For $\bm{u}\in \mathcal{U}$, we define the pseudo-regret $\tilde{R}^{\bm{u}}_T \coloneqq \sum^T_{t=1} \inner{\bh{u}_t - \bm{u}}{{\bm{g}}_t}$ and its clipped version $\cliplinregret^{\bm{u}}_T \coloneqq \sum^T_{t=1} \inner{\bh{u}_t - \bm{u}}{\bar{\bm{g}}_t}$. The following analogue to \eqref{eq:ashok} relates these two regrets. \begin{lemma} \label{relatingtheregret} Let $(b_t)$ and $(B_t)$ be as in \eqref{littleb}. Then for all $\bm{u}\in \mathcal{U}$, \begin{align} \label{clippedrel} \tilde{R}^{\bm{u}}_{T} \leq \cliplinregret^{\bm{u}}_{T} +B_T. \end{align} \end{lemma} Similarly to the \textsc{Squint} case, one can use the prod-bound to control the growth of this potential function as shown in the proof of the following lemma (see Appendix \ref{MetaGrad2proofs}): \begin{lemma} \label{lemmameta} \textsc{MetaGrad+C}{} guarantees that $\Phi_T \leq \dots \leq \Phi_0 = 1$, for all $T \in \mathbb{N}$. \end{lemma} We now give a bound on the clipped regret $\cliplinregret^{\u}_T$ in terms of the clipped variance $\clipvar^{\u}_T \coloneqq \sum_{t=1}^T (\bar{r}^{\u}_t)^2$: \begin{theorem} \label{naivemeta} Given input $B>0$, the clipped pseudo-regret for {\textsc{MetaGrad+C}} is bounded by \begin{equation} \cliplinregret_T^\u\leq 3\sqrt{\clipvar_T^\u C_T} + 15 B_T C_T \quad \text{for any $\u \in \mathcal{U}$,} \label{naivebound} \end{equation} where $C_T \coloneqq d\ln\left(1 + \frac{2 \sum_{t=1}^{T-1} b_t^2 + 2 B^2_{T-1}}{25 d B^2_{T-1}}\right) + 2 \ln \left( \log^+_2 \frac{\sqrt{\sum_{t=1}^Tb^2_t }}{B} +3 \right) + 2$ and $\log_2^+ = 0 \vee \log_2 $.
\end{theorem} \begin{remark} \label{truebound} We can relate the clipped pseudo-regret to the ordinary regret via $R_T^\u \leq \linregret_T^\u \leq \cliplinregret_T^\u + B_T$ (see \eqref{clippedrel}) and on the right-hand side we can also use that $\clipvar_T^\u \leq V_T^\u$. \end{remark} An important thing to note from the result of Theorem \ref{naivemeta} is that the ratio $\sqrt{\sum_{t=1}^Tb^2_t}/B$ could in principle be arbitrarily large if the input $B$ is too small compared to the actual regret range. To resolve this issue, one can use the same restart trick as in the \textsc{Squint} case: \begin{theorem} \label{blackboxreduction1} Let {\textsc{MetaGrad+L}} be the result of applying Algorithm~\ref{bb1alg} to {\textsc{MetaGrad+C}}. Then the regret for {\textsc{MetaGrad+L}} is bounded by \begin{align} \linregret_T^\u \leq 3\sqrt{V_T^\u \Gamma_T} + 15 B_T \Gamma_T + 4 B_T \quad \text{for all $\u \in \mathcal{U}$,}\label{bbbound} \end{align} where $\Gamma_T \coloneqq 2 d\ln\left(\frac{27}{25} + \frac{2}{25d} \sum_{t=1}^{T} \frac{b_t^2}{B^2_{t}}\right) + 4 \ln \left( \log^+_2 \sqrt{\sum_{t=1}^T (\sum_{s=1}^t \frac{b_s}{B_s})^2} +3 \right)+ 4 = O(d \ln T)$. \end{theorem} In Theorem \ref{blackboxreduction1}, we have replaced the ratio $\sqrt{\sum_{t=1}^Tb^2_t} /B$ appearing in the (clipped) pseudo-regret bound of \textsc{MetaGrad+C}{} by the term $\sigma_T \coloneqq \sqrt{\sum_{t=1}^T (\sum_{s=1}^t \frac{b_s}{B_s})^2}$, which is always smaller than $T^{\frac{3}{2}}$, but this is acceptable since $\sigma_T$ appears inside a $\ln \ln$. From the bound of Theorem \ref{blackboxreduction1} one can easily recover an ordinary regret bound, i.e.
a bound on $R^{\bm{u}}_T$ for $\bm{u}\in \mathcal{U}$ (see Remark \ref{truebound}). \section{Efficient Implementation Through a Reduction to the Sphere} \label{four} Using \textsc{MetaGrad+C}{} (or \textsc{MetaGrad}), the computation of each vector $\bh{u}^{\eta}_t$ requires a (Mahalanobis) projection step onto an arbitrary convex set $\mathcal{U}$. Numerically, this typically requires $O(d^p)$ floating point operations (flops), for some $p \in \mathbb{N}$ which depends on the geometry of the set $\mathcal{U}$. Since $p$ can be large in many applications, evaluating $\bh{u}^{\eta}_{t}$ at each grid point $\eta$ can become computationally prohibitive, especially when the number of grid points grows with $T$ --- in the case of \textsc{MetaGrad+C}{} there can be at most $\floor{\log_2 T}$ slaves at round $T\geq1$ (see Remark \ref{numslaves}). \subsection{An efficient implementation of \textsc{MetaGrad+C}{} on the ball} \label{ballefficient} In this subsection, we assume that $\mathcal{U}$ is the ball of diameter $D$, i.e.\ $\mathcal{U}=\mathcal{B}_{D} \coloneqq \left\{\bm{u} \in \mathbb{R}^d \colon \norm{\bm{u}}_2 \leq D/2 \right\}$. In order to compute the slave prediction $\bh{u}^{\eta}_{t+1}$, for $t\geq 1$ and $\eta \in \mathcal{A}_t$, the following quadratic program needs to be solved: \begin{align} \bh{u}^{\eta}_{t+1} = \argmin_{\bm{u}\in \mathcal{U}} \left(\bm{u}_{t+1}^{\eta}- \bm{u} \right)^{\T}\left( \mathbf{\Sigma}^{\eta}_{t+1}\right)^{-1} \left(\bm{u}_{t+1}^{\eta}- \bm{u} \right), \label{quadprog2} \end{align} where $\bm{u}^{\eta}_{t+1}$ (the unprojected prediction) and $\mathbf{\Sigma}^{\eta}_{t+1}$ (the covariance matrix) are defined in \eqref{quadprog}.
Since $\mathcal{U}$ is a ball, \eqref{quadprog2} can be solved efficiently using the result of the following lemma: \begin{lemma} \label{redquad} Let $t\geq 1$, $\eta \in \mathcal{A}_t$, and $\bm{v}^{\eta}_{t+1}\coloneqq \left(\tfrac{\mathbf{I}}{D^2} +2\eta^2\left(\bar{\mathbf{M}}_t -\bar{\mathbf{M}}_{s_{\eta}}\right)\right) \bm{u}^{\eta}_{t+1}$. Let $\mathbf{Q}_t$ be an orthogonal matrix which diagonalizes $\bar{\mathbf{M}}_{t}$, and $\mathbf{\Lambda}_t \coloneqq \left[\lambda^i_t\right]_{i=1}^{d}$ the diagonal matrix which satisfies $\mathbf{Q}_t \bar{\mathbf{M}}_t \mathbf{Q}^{\T}_t = \mathbf{\Lambda}_t$. The solution of \eqref{quadprog2} is given by $\bh{u}^{\eta}_{t+1}=\bm{u}^{\eta}_{t+1}$, if $\bm{u}^{\eta}_{t+1} \in \mathcal{U}$; and otherwise, $\bh{u}^{\eta}_{t+1} = \mathbf{Q}_t^{\T} (x_t^{\eta}\mathbf{I} +2 \eta^2 (\mathbf{\Lambda}_t- \mathbf{\Lambda}_{s_{\eta}} ))^{-1} \mathbf{Q}_t \bm{v}^{\eta}_{t+1}$, where $x_{t}^{\eta}$ is the unique solution of \begin{align} \rho_t^{\eta}(x) \coloneqq \sum_{i=1}^{d} \frac{\inner{\bm{e}_i}{\mathbf{Q}_t \bm{v}^{\eta}_{t+1}}^2}{(x+2\eta^2 (\lambda^i_t -\lambda^i_{s_{\eta}}))^2} =\frac{D^2}{4}. \label{proxyfun} \end{align} \end{lemma} The proof of the lemma is in Appendix \ref{fourproof}. Note that since the matrix $\bar{\mathbf{M}}_t$ is symmetric for all $t\geq 1$, the existence of the matrices $\mathbf{Q}_t$ and $\mathbf{\Lambda}_t$ in Lemma \ref{redquad} is always guaranteed. Since $\rho_t^{\eta}$ in \eqref{proxyfun} is strictly convex and decreasing, one can use Newton's method to efficiently solve $\rho_t^{\eta}(x)=D^2/4$. Thus, since the computation of $\mathbf{Q}_t \bm{v}^{\eta}_{t+1}$ only involves matrix-vector products, Lemma \ref{redquad} gives an efficient way of solving \eqref{quadprog2}.
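To make the Newton step concrete, here is a minimal sketch (illustrative values only; $c_i$ stands for $\inner{\bm{e}_i}{\mathbf{Q}_t \bm{v}^{\eta}_{t+1}}$ and $d_i \geq 0$ for $2\eta^2(\lambda^i_t - \lambda^i_{s_\eta})$). Since $\rho$ is strictly decreasing and convex on $x > 0$, Newton iterates started to the left of the root increase monotonically towards it.

```python
def solve_rho(c, d, target, x0=1e-9, iters=200, tol=1e-14):
    # Solve rho(x) = sum_i c_i^2 / (x + d_i)^2 = target for x > 0 by Newton's
    # method.  rho is strictly decreasing and convex for x > 0 (all d_i >= 0),
    # so starting left of the root gives monotone convergence.
    x = x0
    for _ in range(iters):
        rho = sum(ci * ci / (x + di) ** 2 for ci, di in zip(c, d))
        drho = -2.0 * sum(ci * ci / (x + di) ** 3 for ci, di in zip(c, d))
        x_new = x - (rho - target) / drho
        if abs(x_new - x) <= tol * max(1.0, abs(x)):
            return x_new
        x = x_new
    return x

# Illustrative instance: solve 1/(x+0.5)^2 + 4/(x+1)^2 = 1 (i.e. D^2/4 with D = 2).
x_star = solve_rho([1.0, 2.0], [0.5, 1.0], 1.0)
print(x_star)
```

In the algorithm, the $c_i$ are the coordinates of $\mathbf{Q}_t \bm{v}^{\eta}_{t+1}$, so assembling the instance costs only matrix-vector products per slave.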
\begin{algorithm}[tbp] \begin{algorithmic} \REQUIRE A bounded convex set $\mathcal{U}\subset \mathbb{R}^d$ with diameter $D$, and a fast implementation of \textsc{MetaGrad+C}{} on the ball $\mathcal{B}_{D}$, taking input $B$. \FOR{$t=1$ \TO $T$} \STATE Get $\bh{u}_t$ from \textsc{MetaGrad+C}{} \STATE Play $\bh{w}_t = \Pi_{\mathcal{U}}(\bh{u}_t)$, receive $\mathring{\bm{g}}_t = \nabla \ell_t(\bh{w}_t)$ \STATE Set $\bm{g}_t \in \tfrac{1}{2} \left( \mathring{\bm{g}}_t +\norm{\mathring{\bm{g}}_t} \partial \op{d}_{\mathcal{U}}(\bh{u}_t) \right)$ \STATE Send $\bm{g}_t$ to \textsc{MetaGrad+C}{} \ENDFOR \caption{Fast implementation of \textsc{MetaGrad+C}{} on any bounded convex set $\mathcal{U}$ via reduction to the ball.} \label{OCOGeneral} \end{algorithmic} \end{algorithm} \paragraph{Implementation on the ball.} At round $t\geq 1$, the implementation of \textsc{MetaGrad+C}{} on the ball $\mathcal{B}_{D}$ keeps in memory the orthogonal matrix $\mathbf{Q}_{t-1}$ which diagonalizes $\bar{\mathbf{M}}_{t-1}$. In this case, since $\bar{\mathbf{M}}_t = \bar{\mathbf{M}}_{t-1}+ \bar{\bm{g}}_t \bar{\bm{g}}_t^{\T}$, it is possible to compute the new matrices $\mathbf{Q}_t$ and $\mathbf{\Lambda}_t$ in $O(d^2)$ flops \citep{stor2015}. Note that this operation only needs to be performed once per round --- the diagonalization does not depend on $\eta$. Therefore, computing $\mathbf{Q}_t \bm{v}^{\eta}_{t+1}$ (and thus $\bh{u}^{\eta}_{t+1}$) can be performed in only $O(d^2)$ flops. Thus, aside from the matrix-vector products, the time complexity involved in computing $\bh{u}_{t+1}^{\eta}$ for a given $\eta\in \mathcal{A}_t$ is of the same order as that involved in solving $\rho_t^{\eta}(x)=D^2/4$.
\subsection{A Reduction to the ball} In this subsection, we make use of a recent technique by \cite{cutkosky2018}, which reduces constrained optimization problems to unconstrained ones, to reduce any OCO problem on an arbitrary bounded convex set $\mathcal{U}\subset\mathbb{R}^d$ to an OCO problem on a ball, where we can apply \textsc{MetaGrad+C}{} efficiently. Let $D$ be the diameter of $\mathcal{U}\subset \mathbb{R}^d$ as in \eqref{rad}, so that the ball $\mathcal{B}_D$ of radius $D/2$ encloses $\mathcal{U}$. For $\bm{u}\in \mathbb{R}^d$, we denote by $\op{d}_{\mathcal{U}}(\bm{u}) \coloneqq \min_{\bm{w} \in \mathcal U} \norm{\bm{u}-\bm{w}}_2$ the \emph{distance function} from the set $\mathcal{U}$, and we define $\Pi_{\mathcal{U}}(\u)\coloneqq \{\w\in \mathcal{U}: \norm{\bm{w}-\u}_2 = \op{d}_{\mathcal{U}}(\u) \}$ (a singleton, since $\mathcal{U}$ is convex). Algorithm \ref{OCOGeneral} reduces the OCO problem on the set $\mathcal{U}$ to one on the ball $\mathcal{B}_{D}$, where \textsc{MetaGrad+C}{} is used as a black-box to solve it efficiently. As a result, Algorithm \ref{OCOGeneral} (including its \textsc{MetaGrad+C}{} subroutine) only performs a single Euclidean projection per round (as opposed to the projection in Mahalanobis distance as in \eqref{quadprog2}) onto the set $\mathcal{U}$, which is applied to the output of \textsc{MetaGrad+C}{} --- the \textsc{MetaGrad+C}{} subroutine only performs projections onto the ball $\mathcal{B}_D$, which can be done efficiently as described in the previous subsection. Let $\mathring{\tmp{R}}^{\bm{u}}_T \coloneqq \sum_{t=1}^T \inner{\bh{w}_t - \bm{u}}{\mathring{\grad}_t}$ and $\mathring{\tmp{V}}^{\bm{u}}_T \coloneqq \sum_{t=1}^T \inner{\bh{w}_t - \bm{u}}{\mathring{\grad}_t}^2$ be the pseudo-regret and variance of Algorithm \ref{OCOGeneral}.
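For concreteness, a minimal sketch (illustrative only; it assumes access to a Euclidean projection oracle onto $\mathcal{U}$, here a hypothetical helper `proj`) of one valid choice of $\bm{g}_t$ in Algorithm \ref{OCOGeneral}, using the standard subgradient $(\bm{u} - \Pi_{\mathcal{U}}(\bm{u}))/\op{d}_{\mathcal{U}}(\bm{u}) \in \partial \op{d}_{\mathcal{U}}(\bm{u})$ when $\bm{u}\notin\mathcal{U}$, and $\bm{0} \in \partial \op{d}_{\mathcal{U}}(\bm{u})$ when $\bm{u}\in\mathcal{U}$:

```python
import math

def reduction_grad(g_ring, u_hat, proj):
    # One valid choice of g_t in (1/2)(g̊_t + ||g̊_t|| ∂d_U(û_t)), given a
    # Euclidean projection oracle `proj` onto U (hypothetical helper).
    w = proj(u_hat)
    diff = [ui - wi for ui, wi in zip(u_hat, w)]
    dist = math.sqrt(sum(x * x for x in diff))
    ng = math.sqrt(sum(x * x for x in g_ring))
    if dist == 0.0:  # û_t ∈ U: take the subgradient 0 of d_U
        sub = [0.0] * len(g_ring)
    else:            # û_t ∉ U: ∂d_U(û_t) ∋ (û_t − Π_U(û_t)) / d_U(û_t)
        sub = [x / dist for x in diff]
    return [0.5 * (gi + ng * si) for gi, si in zip(g_ring, sub)]

# Example with U the Euclidean unit ball:
def proj_unit_ball(u):
    n = math.sqrt(sum(x * x for x in u))
    return [x / max(n, 1.0) for x in u]

print(reduction_grad([0.0, 1.0], [2.0, 0.0], proj_unit_ball))  # [0.5, 0.5]
```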
The following theorem, whose proof is in Appendix \ref{fourproof}, shows how the regret guarantee of \textsc{MetaGrad+C}{} readily transfers to Algorithm \ref{OCOGeneral}: \begin{theorem} \label{reductionbound} Algorithm \ref{OCOGeneral}, which uses \textsc{MetaGrad+C}{} as a black-box, guarantees: \begin{align} \sum_{t=1}^T \left(\ell_t(\bh{w}_t) - \ell_t(\bm{u})\right) \leq \mathring{\tmp{R}}^{\bm{u}}_T \leq 3 \sqrt{\mathring{\tmp{V}}^{\bm{u}}_T {\Gamma}_T} + 24 B_T {\Gamma}_T +B_T, \ \text{ for }\u\in \mathcal{U}, \label{alg1bound} \end{align} where $ {\Gamma}_T \coloneqq d\ln\left(\frac{27}{25} + \frac{2 \sum_{t=1}^{T-1} {b}_t^2}{25 d{B}^2_{T-1}}\right) + 2 \ln \left( \log^+_2 \frac{\sqrt{\sum_{t=1}^T {b}_t^2}}{B} +3 \right) + 2= O(d \ln T)$, and \begin{align} {b}_t \coloneqq D \norm{{\bm{g}}_t}_2; \quad \quad \quad {B}_t \coloneqq B \vee \max_{s\in [t]} {b}_s, \quad t\in[T]. \label{littlebbre} \end{align} \end{theorem} Note that Algorithm \ref{OCOGeneral} guarantees the same type of regret as \textsc{MetaGrad+C}, and thus can also adapt to exp-concavity of the losses $(\ell_t)$ and the Bernstein condition. \section{Conclusion} \label{sec:conclusion} We present algorithms that adapt to the Lipschitz constant of the loss for OCO and experts. Stepping back, we see that an interesting combination of problem complexity dimensions can be adapted to, with hardly any overhead in either regret or computation. The main question for future work is to obtain a better understanding of the landscape of interactions between measures of problem complexity and their algorithmic reflection. One surprising conclusion from our work, which provides a curious contrast with the incompatibility of Lipschitz adaptivity with comparator complexity adaptivity in general OCO \citep{CutkoskyBoahen2017Impossible}, is the following observation. Our results for the expert setting, which we phrased for a finite set of $K$ experts, in fact generalise unmodified to priors with infinite support.
Considering a countable set of experts, we find a scenario where the comparator complexity $\KL(\rho\|\pi)$ is unbounded, yet our Squint strategy adapts to the Lipschitz constant of the loss without inflating the regret compared to an a-priori known complexity by more than a constant. A final very interesting question is when it is possible to exploit scenarios with large ranges that occur only very infrequently. An example of this is found in statistical learning with heavy-tailed loss distributions. Martingale methods for such scenarios that are related to our potential functions suggest that it may be necessary to replace the ``surrogate'' negative quadratic term $f_t(\u,\eta)$ that our algorithms include in the exponent by another function appropriate for the specific distribution \cite[Table~3]{linecrossing}. It is not currently clear what individual sequence analogues can be obtained. \DeclareRobustCommand{\VAN}[3]{#3}
\section{Introduction and statements of the theorems} In 1917, Ramanujan introduced a novel idea which enabled him to derive an elegant functional equation of the classical Riemann zeta function. He showed that for $\Re(s)>1$, the Riemann zeta function $\zeta(s):=\sum_{n \ge 1} \frac{1}{n^s}$ satisfies the following formula: \begin{equation}\label{rama} 1=\sum_{k\ge 0} (s-1)_k (\zeta(s+k)-1), \end{equation} where the right hand side of \eqref{rama} converges normally on any compact subset of $\Re(s) > 1$ and $$ (s)_k := \frac{s \cdots (s+k)}{(k+1)!} $$ for any $k \ge 0$ and $s \in {\mathbb C}$. An elementary proof of this formula, as suggested by Ecalle \cite{JE}, can be deduced from the identity $$ (n-1)^{1-s}-n^{1-s}= \sum _{k\ge 0} (s-1)_k \ n^{-s-k}, $$ which is valid for any natural number $n \ge 2$ and any $s \in {\mathbb C}$. In fact, Ecalle \cite{JE} also suggested how one can derive a formula similar to \eqref{rama} for the multiple zeta functions. Following Ecalle's indication, the last author along with Mehta and Viswanadham \cite{MSV} derived such a formula for the multiple zeta functions and studied their meromorphic continuations as well as their polar singularities (see \cite{MSV} and \cite{JO} for details). Meromorphic continuation of the multiple zeta functions was first proved by Zhao~\cite{JZ}. Around the same time, Akiyama, Egami and Tanigawa \cite{AET} gave an alternate proof of meromorphic continuation along with the exact set of polar hyperplanes for these functions. In~\cite{MSV}, the last author along with Mehta and Viswanadham introduced the method of matrix formulation to write down the residues of the multiple zeta functions in a computable form, and thereby reproved the theorem of Akiyama, Egami and Tanigawa. In this paper, we generalise the identity of Ramanujan to obtain meromorphic continuations as well as the set of possible singularities of the multiple Lerch zeta functions (defined below).
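The elementary identity above is easy to test numerically. The following sketch (illustrative only; plain Python, truncating the series at 60 terms) checks it for a sample choice of $s$ and $n$, with $(s)_k$ as defined above.

```python
import math

def pochhammer_ratio(s, k):
    # (s)_k := s (s+1) ··· (s+k) / (k+1)!  (k+1 factors), as defined above
    p = 1.0
    for j in range(k + 1):
        p *= s + j
    return p / math.factorial(k + 1)

def lhs(s, n):
    # (n-1)^{1-s} - n^{1-s}
    return (n - 1) ** (1 - s) - n ** (1 - s)

def rhs(s, n, terms=60):
    # sum_{k >= 0} (s-1)_k n^{-s-k}, truncated; terms decay roughly like n^{-k}
    return sum(pochhammer_ratio(s - 1, k) * n ** (-s - k) for k in range(terms))

s, n = 2.5, 4
print(abs(lhs(s, n) - rhs(s, n)))  # close to machine precision
```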
When $r=1$, this was done by the last author in \cite{BS}. Let $r > 0 $ be a natural number and $U_r$ be an open subset of ${\mathbb C}^r$ defined as follows: $$ U_r := \{ (s_1, \ldots, s_r) \in {\mathbb C}^r ~|~ \Re(s_1 + \cdots + s_i) > i ~\text{ for all }~ 1 \le i \le r \}. $$ Then for real numbers $ \lambda_1, \ldots, \lambda_r, \alpha_1, \ldots, \alpha_r \in [0,1)$ and complex $r$-tuples $(s_1, \ldots, s_r) \in U_r$, the multiple Lerch zeta function of depth $r$ is defined by \begin{equation}\label{lerch} L_r ( \lambda_1, \ldots, \lambda_r; \alpha_1, \ldots, \alpha_r ; s_1, \ldots, s_r ) := \sum_{n_1>\cdots>n_r>0} \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1)^{s_1} \cdots (n_r + \alpha_r)^{s_r}}, \end{equation} where $e(a) := e^{2 \pi \iota a}$ for $a \in {\mathbb R}$. The series on the right hand side of \eqref{lerch} is normally convergent on compact subsets of $U_r$ (see \propref{norm}) and hence defines a holomorphic function there. Before we state our theorems, let us introduce a few more notations. For integers $1 \le i \le r$ and $k \ge 0$, let \begin{equation*} H_{i,k} ~:=~ \{ (s_1, \ldots, s_r) \in {\mathbb C}^r ~|~ s_1 + \cdots + s_i = i - k \}. \end{equation*} Also for $1 \le i \le r$, let \begin{equation*} \mu_i ~:=~ \sum_{j=1}^i \lambda_j \end{equation*} and let ${\mathbb Z}_{\le j}$ denote the set of integers less than or equal to $j$. In this article we prove the following theorems. \begin{thm}\label{easy} Assume that $\mu_i \not\in {\mathbb Z}$ for all $1 \le i \le r$. Then $L_r(\lambda_1,\ldots,\lambda_r; \alpha_1,\ldots,\alpha_r; s_1,\ldots,s_r)$ can be extended analytically to the whole of ${\mathbb C}^r$. \end{thm} \begin{rmk}\label{easy-kn}\rm If $r=1$ and $\lambda_1 \notin {\mathbb Z}$, Lerch \cite{ML} showed that $L_1(\lambda_1;\alpha_1;s_1)$ can be extended to an entire function of ${\mathbb C}$.
\end{rmk} \begin{thm}\label{multiple-Lerch} With the notations as above, let $i_1 < \cdots < i_{m}$ be the only indices for which $\mu_{i_j} \in {\mathbb Z},~ 1 \le j \le m$. \begin{itemize} \item If $i_1 =1$, then $L_r(\lambda_1,\ldots,\lambda_r; \alpha_1,\ldots,\alpha_r; s_1,\ldots,s_r)$ can be meromorphically continued to ${\mathbb C}^r$ with possible simple poles along the hyperplanes $$ H_{1,0} \ \text{ and } \ H_{i_j, k} ~\text{ for }~ 2 \le j \le m \ \text{ with }~(i_j -k) \in {\mathbb Z}_{\le j}. $$ \item If $i_1 \ne 1$, then $L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots,\alpha_r; s_1,\ldots,s_r)$ can be meromorphically continued to ${\mathbb C}^r$ with possible simple poles along the hyperplanes $$ H_{i_j, k} ~\text{ for }~ 1 \le j \le m \ \text{ with }~(i_j -k) \in {\mathbb Z}_{\le j}. $$ \end{itemize} \end{thm} \begin{rmk}\label{multiple-Lerch-kn}\rm \thmref{multiple-Lerch} is well known in the special case when $r=1$. In this case, $L_1(\lambda_1; \alpha_1; s_1)$ where $\lambda_1 \in {\mathbb Z}$ is essentially the Hurwitz zeta function and hence can be extended analytically to ${\mathbb C}$, except at $s_1=1$, where it has a simple pole with residue $1$. \end{rmk} Komori \cite{YK} considered certain several-variable generalisations of the Lerch zeta function and derived meromorphic continuations of these functions through an integral representation. He also obtained a rough estimate of their possible singularities (see \cite{YK}, \S 3.6). Now if we choose $\lambda_i = 0$ for $1 \le i \le r$ in \eqref{lerch}, then we get $$ L_r (0, \ldots, 0; \alpha_1, \ldots, \alpha_r; s_1, \ldots, s_r) = \zeta_r(s_1, \ldots, s_r; \alpha_1, \ldots, \alpha_r), $$ the multiple Hurwitz zeta function of depth $r$, and further if $\alpha_i =0$ for $1 \le i \le r$, we get $$ L_r (0, \ldots, 0; 0, \ldots, 0; s_1, \ldots, s_r) = \zeta_r(s_1, \ldots, s_r), $$ the multiple zeta function of depth $r$.
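The defining series \eqref{lerch} and its specialisations can be explored numerically. The following hedged sketch (not from the paper; the truncation levels and the argument choices are illustrative assumptions) evaluates truncated versions of the nested sum. For $r=1$, $\lambda_1=1/2$, $\alpha_1=0$ one has $e(n/2)=(-1)^n$, so $L_1(1/2;0;2)=-\eta(2)=-\pi^2/12$; for $r=2$ with all $\lambda_i=\alpha_i=0$ and $(s_1,s_2)=(2,1)$, the truncated sum approaches Euler's classical value $\zeta_2(2,1)=\zeta(3)$, though only slowly (the tail decays like $(\log N)/N$).

```python
# Truncated evaluation (not from the paper) of the multiple Lerch zeta
# series (lerch); the O(N^2) recomputation is fine for illustration.

import cmath, math

def e(a):
    return cmath.exp(2j * math.pi * a)

def lerch_trunc(lams, alphas, ss, N):
    """Truncated sum over N >= n_1 > n_2 > ... > n_r > 0."""
    r = len(ss)
    def rec(i, upper):
        total = 0j
        for n in range(r - i, upper):  # n_i needs r-i-1 smaller positive integers below it
            term = e(lams[i] * n) / (n + alphas[i]) ** ss[i]
            total += term if i == r - 1 else term * rec(i + 1, n)
        return total
    return rec(0, N + 1)

# r = 1, lambda = 1/2: e(n/2) = (-1)^n, so L_1(1/2; 0; 2) = -eta(2) = -pi^2/12.
v1 = lerch_trunc([0.5], [0.0], [2.0], 20000)

# r = 2, all lambda = alpha = 0: zeta_2(2, 1), which equals zeta(3) by Euler.
v2 = lerch_trunc([0.0, 0.0], [0.0, 0.0], [2.0, 1.0], 800)
```

For `v1` the alternating-series error is below $1/N^2$, while `v2` is only accurate to about $10^{-2}$ at this truncation level.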
Akiyama and Ishikawa \cite{AI} obtained the meromorphic continuation of the multiple Hurwitz zeta functions together with their possible polar singularities. In the special case when $\alpha_i \in {\mathbb Q}$ for $1 \le i \le r$, they also derived the exact set of singularities. This has also been done in \cite{MV}. Using the Mellin--Barnes integral formula, Matsumoto \cite{KM} established the meromorphic continuation of the multiple Hurwitz zeta functions along with a possible set of singularities. Finally, we refer the interested reader to \cite{FKMT} and \cite{TO}, where similar themes are addressed. An expression for residues along these possible polar hyperplanes was obtained in \cite{KM,MV}. For the multiple Hurwitz zeta functions, we are now able to characterise the exact set of singularities. This complete characterisation is new. More precisely, we have the following theorem. \begin{thm}\label{multiple-Hurwitz} The multiple Hurwitz zeta function $\zeta_r(s_1, \ldots, s_r;\alpha_1, \ldots, \alpha_r)$ has meromorphic continuation to ${\mathbb C}^r$. Further, all its poles are simple and they are along the hyperplanes $$ H_{1,0} \phantom{m}\text{and}\phantom{m} ~H_{i,k}~ \text{ for }~ 2 \le i \le r, ~k \ge 0 $$ except when $i =2, ~k \in K$, where $$ K := \{ n \in {\mathbb N} ~|~ B_{n}(\alpha_2 - \alpha_1)=0 \} $$ and $B_{n} (t)$ denotes the $n$-th Bernoulli polynomial defined by the generating series $$ \frac{xe^{tx}}{e^x -1} = \sum_{ n \ge 0} B_{n}(t)\frac{x^{n}}{ n!}. $$ \end{thm} Before proceeding further, we indicate, compare and contrast some of the other existing works vis-{\`a}-vis our work. In \cite{MS}, the authors obtain the meromorphic continuation of the multiple Hurwitz zeta function of arbitrary depth $r$ using binomial expansion. In order to do so, they deduce a functional equation involving various multiple Hurwitz zeta functions of a fixed depth $r$ (see Theorem 5.2).
The novelty of our work is to deduce a functional equation involving multiple Hurwitz zeta functions of depth $r$ with multiple Hurwitz zeta functions of depth $r-1$ (see Theorem 4). This is the crucial ingredient which enables us to derive information about the poles and residues of such functions, which was not done in \cite{MS}. The use of binomial expansion has also been exploited in \cite{DE} for proving the meromorphic continuation of multiple Hurwitz zeta functions. More precisely, the author uses products of binomial expansions, which we avoid. Also, he deals only with the diagonal vectors in the $r$-dimensional complex space, while we allow arbitrary vectors in ${\mathbb C}^r$. Furthermore, the author does not deal with the poles and residues of these functions. The paper is organised as follows. In the next section, we prove some intermediate results and derive functional identities for the multiple Lerch zeta functions which generalise the identity of Ramanujan (see \thmref{gen-rama}). In Section 3, we derive the meromorphic continuation of the multiple Lerch zeta functions as well as their possible sets of singularities using these functional identities. In Section 4, we follow \cite{MSV} to write down the relevant functional identity for the multiple Hurwitz zeta functions in terms of infinite matrices, in order to obtain an expression for residues along the singular hyperplanes (see \thmref{residues-mhzf}). Finally in Section 5, we complete the proof of \thmref{multiple-Hurwitz}. For this we need to use some fundamental properties of the zeros of the Bernoulli polynomials. These results are discussed in \S5.1. \section{Intermediate results and generalised Ramanujan's identity} In this section, we derive an analogue of \eqref{rama} (see \eqref{trans} below) for the multiple Lerch zeta functions. In order to establish \eqref{trans} we need some intermediate results. Before we state our theorem, we start with the notion of normal convergence.
\begin{defn} Let $X$ be a set and $(f_i)_{i \in I}$ be a family of complex valued functions defined on $X$. We say that the family of functions $(f_i)_{i \in I}$ is normally summable on $X$ or the series $\sum_{i \in I} f_i$ converges normally on $X$ if $$ \|f_i\|_X := \sup_{x \in X} |f_i(x)| < \infty , ~\text{ for all }i \in I $$ and the family of real numbers $(\| f_i \|_X)_{i \in I}$ is summable. \end{defn} \begin{defn} Let $X$ be an open subset of ${\mathbb C}^r$ and $(f_i)_{i \in I}$ be a family of meromorphic functions on $X$. We say that $(f_i)_{i \in I}$ is normally summable or $\sum_{ i \in I} f_i$ is normally convergent on all compact subsets of $X$ if for any compact subset $K$ of $X$, there exists a finite set $J \subset I$ such that each $f_i$ for $i \in I \setminus J$ is holomorphic in an open neighbourhood of $K$ and the family $(f_i|K)_{ i \in {I \setminus J}}$ is normally summable on $K$. In this case, $\sum_{ i \in I} f_i$ is a well-defined meromorphic function on~$X$. \end{defn} We now have the following theorem. \begin{thm}\label{gen-rama} Let $r \ge 2$ be a natural number, $\lambda_1, \ldots, \lambda_r, \alpha_1, \ldots, \alpha_r \in [0,1)$. Then for any $(s_1, \ldots, s_r) \in U_r$, we have \begin{equation}\label{trans} \begin{split} &e(\lambda_1) \sum_{k \ge -1} (s_1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\mu_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k+1,s_3,\ldots,s_r)\\ &= (1- e(\lambda_1)) L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r)\\ &+ \sum _{k\ge 0} (s_1)_k L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k+1,s_2,\ldots,s_r), \end{split} \end{equation} where $(s)_{-1}:=1$ and for $k \ge 0$, $$(s)_k:=\frac{s\cdots(s+k)}{(k+1)!},$$ and the series on both sides of \eqref{trans} converge normally on every compact subset of $U_r$.
\end{thm} If $\lambda_1=0$, then $\mu_2 = \lambda_2$ and replacing $s_1$ by $s_1-1$ in \eqref{trans} yields \begin{equation}\label{trans2} \begin{split} &\sum_{k \ge -1} (s_1-1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\lambda_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k,s_3,\ldots,s_r)\\ &= \sum _{k\ge 0} (s_1-1)_k L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k,s_2,\ldots,s_r). \end{split} \end{equation} From now on, we will refer to the identities \eqref{trans} and \eqref{trans2} as the generalised Ramanujan's identities for the multiple Lerch zeta functions. In order to prove \thmref{gen-rama}, we introduce another notation and prove some intermediate results. For any $m \ge 0$, let $$ U_r(m):=\{(s_1, \ldots, s_r) \in {\mathbb C}^r ~|~ \Re(s_1+ \cdots + s_i) > i - m ~\text{ for all }~ 1\leq i \leq r \}. $$ Note that $U_r = U_r(0)$. We first observe that the series on the right hand side of \eqref{lerch} is normally convergent on compact subsets of $U_r$. For this we need the following lemma from \cite{MSV}. \begin{lem}\label{mzf} For an integer $r \ge 1$, the family of functions $$ \left ( \frac{1}{n_1^{s_1} \cdots n_r^{s_r}} \right )_{n_1>\cdots>n_r>0} $$ converges normally on any compact subset of $U_r$. \end{lem} \begin{prop}\label{norm} For an integer $r \ge 1$ and $\lambda_1, \ldots, \lambda_r, \alpha_1, \ldots, \alpha_r \in [0,1)$, the family of functions $$ \left ( \frac{e(\lambda_1 n_1)\cdots e(\lambda_r n_r)} {(n_1+\alpha_1)^{s_1} \cdots (n_r+\alpha_r)^{s_r}} \right )_{n_1>\cdots>n_r>0} $$ converges normally on any compact subset of $U_r$. \end{prop} \begin{proof} The proposition follows immediately from \lemref{mzf} since in $U_r$ $$ \left | \frac{e(\lambda_1 n_1)\cdots e(\lambda_r n_r)} {(n_1+\alpha_1)^{s_1} \cdots (n_r+\alpha_r)^{s_r}} \right | \le \left | \frac{1}{n_1^{s_1} \cdots n_r^{s_r}} \right |. $$ \end{proof} We further need the following propositions.
\begin{prop}\label{lerch-1} Let $m \ge 0, r \ge 2$ be natural numbers and $\lambda_1, \ldots, \lambda_r, \alpha_1, \ldots, \alpha_r \in [0,1)$. Then the family of functions $$ \left ( (s_1)_k \frac{e(\lambda_1n_1) e(\lambda_2n_2)\cdots e(\lambda_r n_r)} {(n_1+ \alpha_1)^{s_1+ k + 1} (n_2 + \alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}}\right )_{n_1> \cdots > n_r>0, \atop k\ge m -1} $$ is normally summable on compact subsets of $U_r(m)$. \end{prop} \begin{proof} Let $K$ be a compact subset of $U_r(m)$ and $ S:=\sup_{(s_1,\ldots,s_r) \in K} |s_1|$. Since $r \ge 2$, one has $n_1 \ge 2$ and hence for $k \ge m -1$ and $(s_1, \ldots, s_r) \in U_r(m)$, we have $$ \left\| (s_1)_k \frac{e(\lambda_1n_1) \cdots e(\lambda_r n_r)} {(n_1+ \alpha_1)^{s_1+ k + 1} (n_2 + \alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}} \right \|_K \le \frac{(S)_k}{2^{k - m + 1}} \left\| \frac{1} {n_1^{s_1+m} n_2^{s_2} \cdots n_r^{s_r}} \right \|_K. $$ Note that $(s_1, \ldots, s_r) \in U_r(m)$ if and only if $(s_1+ m, s_2, \ldots, s_r) \in U_r$. Now the proof of \propref{lerch-1} follows from \lemref{mzf} and the fact that the series $$ \sum_{k \ge m -1} \frac{(S)_k}{2^{k- m + 1}} $$ converges. \end{proof} \begin{prop}\label{lerch-2} Let $ m \ge 0, r \ge 2$ be natural numbers and $\lambda_1, \ldots, \lambda_r, \alpha_1, \ldots, \alpha_r \in [0,1)$. Then the family of functions $$ \left((s_1)_k (\alpha_2 - \alpha_1)^{k+1} \frac{e(\mu_2 n_2) e(\lambda_3 n_3) \cdots e(\lambda_r n_r)} {(n_2+\alpha_2)^{s_1 + s_2 + k + 1} ~(n_3+\alpha_3)^{s_3} \cdots (n_r + \alpha_r)^{s_r}} \right)_{n_2 > \cdots > n_r >0, \atop k \ge m - 1} $$ is normally summable on any compact subset of $U_r(m + 1)$ and hence on $U_r$. \end{prop} \begin{proof} As before, let $K$ be a compact subset of $U_r(m +1)$ and $$ S:=\sup_{(s_1,\ldots,s_r) \in K} |s_1|. 
$$ Then for $k \ge m-1$ and $r \ge 2$, one has \begin{align*} &\left \| (s_1)_k \frac{(\alpha_2 - \alpha_1)^{k+1} e(\mu_2 n_2) e(\lambda_3 n_3) \cdots e(\lambda_r n_r)} {(n_2+\alpha_2)^{s_1 + s_2 + k +1} (n_3+\alpha_3)^{s_3} \cdots (n_r + \alpha_r)^{s_r}} \right \|_K\\ &\le {(S)_k}{ (\alpha_2 - \alpha_1)^{k+1}} \left\| \frac{1}{n_2^{s_1 + s_2 + m } n_3^{s_3} \cdots n_r^{s_r}} \right \|_K. \end{align*} Note that \begin{eqnarray*} (s_1, \ldots,s_r) \in U_r(m + 1) &\implies& (s_1 + s_2, s_3, \ldots, s_r) \in U_{r-1}(m) \\ &\implies& (s_1+ s_2 + m, s_3, \ldots, s_r) \in U_{r-1}. \end{eqnarray*} The proof now follows from \lemref{mzf} (for depth $r-1$) and the fact that $$ \sum_{k \ge m -1} {(S)_k}{ (\alpha_2 - \alpha_1)^{k+1}} $$ converges, as $|\alpha_2 - \alpha_1| <1$. \end{proof} \begin{prop}\label{lerch-3} Let $r \ge 2$ be an integer and $\lambda_1, \ldots, \lambda_r, \alpha_1, \ldots, \alpha_r \in [0,1)$. The family of functions $$ \left(\frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1-1)^{s_1}(n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}} \right)_{n_1 > \cdots > n_r >0} $$ is normally summable on any compact subset of $U_r$. \end{prop} \begin{proof} Note that $$ \left | \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1-1)^{s_1} (n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}} \right | \le \left | \frac{1}{(n_1-1)^{s_1} n_2^{s_2} \cdots n_r^{s_r}} \right |. $$ Also note that \begin{align*} \left | \sum_{n_1 \ge n_2 +1} (n_1-1)^{-s_1} \right| & \le n_2^{-\Re(s_1)} + \sum_{n_1 \ge n_2 +1} n_1^{-\Re(s_1)}\\ & \le n_2^{-\Re(s_1)} + \int_{n_2}^\infty x^{-\Re(s_1)}~dx\\ & = n_2^{-\Re(s_1)} + \frac{1}{\Re(s_1)-1} n_2^{1-\Re(s_1)}. \end{align*} The proof follows from \lemref{mzf}.
\end{proof} \subsection{Proof of \thmref{gen-rama}} We begin with the following identity which is valid for any integer $n \ge 2$, any real number $\alpha \ge 0$ and any complex number $s$: \begin{equation}\label{trick} (n+\alpha-1)^{-s}=\sum_{k \ge -1} (s)_k (n+\alpha)^{-s-k-1}. \end{equation} This identity is easily obtained by writing the left hand side as $(n+\alpha)^{-s}(1-\frac{1}{n+\alpha})^{-s}$ and expanding $(1-\frac{1}{n+\alpha})^{-s}$ as a Taylor series in $\frac{1}{n+\alpha}$. In \eqref{trick} we replace $n,\alpha,s$ by $n_1,\alpha_1,s_1$ respectively and then multiply both sides by $$ \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}}, $$ and sum for $n_1>\cdots>n_r>0$. Using \propref{lerch-3}, we get that \begin{equation}\label{LHS-1} \begin{split} & \sum_{n_1 > \cdots > n_r >0} \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1-1)^{s_1} (n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}}\\ & = e(\lambda_1) \sum_{n_1 > \cdots > n_r >0} \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1)^{s_1} (n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}}\\ & + e(\lambda_1) \sum_{n_2 > \cdots > n_r >0} \frac{e(\mu_2 n_2) e(\lambda_3 n_3) \cdots e(\lambda_r n_r)} {(n_2+\alpha_1)^{s_1} (n_2+\alpha_2)^{s_2} (n_3+\alpha_3)^{s_3} \cdots (n_r + \alpha_r)^{s_r}}. \end{split} \end{equation} Now, $$ (n_2+\alpha_1)^{-s_1} = \sum_{k \ge -1} (s_1)_k (\alpha_2-\alpha_1)^{k+1} (n_2+\alpha_2)^{-s_1-k-1}. 
$$ Hence using \propref{lerch-2} (for $m=0$), we obtain that \begin{equation}\label{LHS-2} \begin{split} & \sum_{n_1 > \cdots > n_r >0} \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1-1)^{s_1} (n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}}\\ & = e(\lambda_1) L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r)\\ & + e(\lambda_1) \sum_{k \ge -1} (s_1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\mu_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k+1,s_3,\ldots,s_r). \end{split} \end{equation} On the other hand, using \eqref{trick} and \propref{lerch-1} (for $m=0$), we get that \begin{equation}\label{RHS} \begin{split} & \sum_{n_1 > \cdots > n_r >0} \frac{e(\lambda_1 n_1) \cdots e(\lambda_r n_r)} {(n_1+\alpha_1-1)^{s_1} (n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}}\\ & = \sum _{k\ge -1} (s_1)_k L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k+1,s_2,\ldots,s_r). \end{split} \end{equation} Now equating the right hand sides of \eqref{LHS-2} and \eqref{RHS}, we deduce \eqref{trans}. This together with \propref{lerch-1} and \propref{lerch-2} completes the proof. \qed \section{Proofs of \thmref{easy} and \thmref{multiple-Lerch}} In this section, we use the generalised Ramanujan's identities \eqref{trans} and \eqref{trans2} to prove \thmref{easy} and \thmref{multiple-Lerch}. We will prove these theorems by induction on the depth $r$. We assume that the multiple Lerch zeta function of depth $(r-1)$ has already been extended to ${\mathbb C}^{r-1}$ and then by induction on $m \ge 1$ we extend the multiple Lerch zeta function of depth $r$ to each of $U_r(m)$. Since $(U_r(m))_{m \ge 1}$ is an open covering of ${\mathbb C}^r$, we will get our desired result. \subsection{Proof of \thmref{easy}} When $r=1$, \thmref{easy} is true by \rmkref{easy-kn}. Now let $r \ge 2$ and $\mu_i \not\in {\mathbb Z}$ for $1 \le i \le r$.
For any $m \ge 1$, we rewrite \eqref{trans} as \begin{equation*} \begin{split} &e(\lambda_1) \sum_{k \ge m-2} (s_1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\mu_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r;s_1+s_2+k+1,s_3,\ldots,s_r)\\ &+e(\lambda_1) \sum_{-1 \le k \le m-3} (s_1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\mu_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r;s_1+s_2+k+1,s_3,\ldots,s_r)\\ &= (1- e(\lambda_1)) L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1,\ldots,s_r)\\ &+ \sum _{k\ge m-1} (s_1)_k L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1+k+1,s_2,\ldots,s_r)\\ &+ \sum _{0 \le k \le m-2} (s_1)_k L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1+k+1,s_2,\ldots,s_r). \end{split} \end{equation*} Now by virtue of \propref{lerch-1}, \propref{lerch-2} and the induction hypothesis for multiple Lerch zeta functions of depth $(r-1)$, we see that all the $k$-sums in \eqref{trans} are analytic in $U_r(1)$. Therefore \eqref{trans} defines an analytic continuation of $$ L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1,\ldots,s_r) $$ to $U_r(1)$, since $e(\lambda_1)\neq 1$. Now suppose that we have an analytic continuation of $$ L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1,\ldots,s_r) $$ to $U_r(m-1)$ which satisfies \eqref{trans} in $U_r(m-1)$. Thus we get that the sum $$ \sum _{0 \le k \le m-2} (s_1)_k L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1+k+1,s_2,\ldots,s_r) $$ is analytic in $U_r(m)$. Again we appeal to \propref{lerch-1}, \propref{lerch-2} and the induction hypothesis for multiple Lerch zeta functions of depth $(r-1)$ to deduce that all the $k$-sums in \eqref{trans} are analytic in $U_r(m)$. Hence we obtain an analytic continuation of $$ L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r;s_1,\ldots,s_r) $$ to $U_r(m)$. Since $(U_r(m))_{m \ge 1}$ is an open covering of ${\mathbb C}^r$, this completes the proof.
\qed \subsection{Proof of \thmref{multiple-Lerch}} When $r=1$, \thmref{multiple-Lerch} follows from \rmkref{easy-kn} if $\lambda_1 \not\in {\mathbb Z}$ and from \rmkref{multiple-Lerch-kn} if $\lambda_1 \in {\mathbb Z}$. Now suppose $r \ge 2$ and \thmref{multiple-Lerch} is true for the multiple Lerch zeta function of depth $(r-1)$. \subsection{Case 1 : $i_1 =1$} In this case we have $\lambda_1 =0$ and hence we use \eqref{trans2}, which we recall: \begin{equation}\tag{\ref{trans2}} \begin{split} &\sum_{k \ge -1} (s_1-1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\lambda_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k,s_3,\ldots,s_r)\\ &= \sum _{k\ge 0} (s_1-1)_k L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k,s_2,\ldots,s_r). \end{split} \end{equation} To prove this case, we establish the meromorphic continuation of $$ (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ to ${\mathbb C}^r$ using \eqref{trans2} and determine all its possible singularities. For any $m \ge 1$, we know by \propref{lerch-1} and \propref{lerch-2} that the families of functions $$ \left( (s_1-1)_k \frac{e(\lambda_2 n_2)\cdots e(\lambda_r n_r)} {(n_1+\alpha_1)^{s_1+k}(n_2+\alpha_2)^{s_2} \cdots (n_r + \alpha_r)^{s_r}} \right)_{n_1 > \cdots > n_r >0, \atop k \ge m} \\ $$ and $$ \left( (s_1-1)_k (\alpha_2-\alpha_1)^{k+1} \frac{e(\lambda_2 n_2)e(\lambda_3n_3)\cdots e(\lambda_r n_r)} {(n_2+\alpha_2)^{s_1+s_2+k} \ (n_3+\alpha_3)^{s_3} \cdots (n_r + \alpha_r)^{s_r}} \right)_{n_2 > \cdots > n_r >0, \atop k \ge m - 1} $$ are normally summable on every compact subset of $U_r(m)$.
Now for any $m \ge 1$, we rewrite \eqref{trans2} as \begin{equation*} \begin{split} &\sum_{k \ge m-1} (s_1-1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\lambda_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k,s_3,\ldots,s_r)\\ &+\sum_{-1 \le k \le m-2} (s_1-1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\lambda_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k,s_3,\ldots,s_r)\\ &= (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r)\\ &+ \sum _{k\ge m} (s_1-1)_k L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k,s_2,\ldots,s_r)\\ &+ \sum _{1\le k\le m-1} (s_1-1)_k L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k,s_2,\ldots,s_r). \end{split} \end{equation*} Using the above observation, we obtain that both the infinite $k$-sums in the above equation are analytic in $U_r(m)$. From the induction hypothesis we deduce that the sum $$ \sum_{-1 \le k \le m-2} (s_1-1)_k (\alpha_2-\alpha_1)^{k+1} L_{r-1}(\lambda_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k,s_3,\ldots,s_r) $$ has a meromorphic continuation to ${\mathbb C}^r$. Now if the function $$ (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ has a meromorphic continuation to $U_r(m-1)$, then we can deduce that the sum $$ \sum _{1\le k\le m-1} (s_1-1)_k L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1+k,s_2,\ldots,s_r) $$ has a meromorphic continuation to $U_r(m)$. Therefore we obtain a meromorphic continuation of $$ (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ to $U_r(m)$ by means of \eqref{trans2}. Since $(U_r(m))_{m \ge 1}$ is an open covering of ${\mathbb C}^r$, we obtain a meromorphic continuation of $$ (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ to ${\mathbb C}^r$.
Now for the set of singularities, we see from \eqref{trans2} that the singularities of $$ (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ can only come from those of $$ L_{r-1}(\lambda_2,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k,s_3,\ldots,s_r) $$ for all $k \ge -1$, and these singularities are known from the induction hypothesis. Finally we deduce that $$ (s_1-1) L_r(0,\lambda_2,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ can have polar singularities only along the hyperplanes $$ H_{i_j, k} ~\text{ for }~ 2 \le j \le m \ \text{ with }~(i_j -k) \in {\mathbb Z}_{\le j}. $$ This completes the proof of this case. \subsection{Case 2 : $i_1 \neq 1$} Since in this case the applicable generalised Ramanujan's identity is \eqref{trans}, the proof of this case follows exactly the line of argument of the proof of \thmref{easy}. The only difference would be that on each of $U_r(m)$ the depth $r$ multiple Lerch zeta function can only be extended as a meromorphic function. This is because of the induction hypothesis which implies that the depth $(r-1)$ multiple Lerch zeta functions $$ L_{r-1}(\mu_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k+1,s_3,\ldots,s_r) $$ for $k \ge -1$ can only be extended as meromorphic functions to ${\mathbb C}^r$. Now for the set of singularities, we see from \eqref{trans} that the singularities of $$ L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ can only come from those of $$ L_{r-1}(\mu_2,\lambda_3,\ldots,\lambda_r; \alpha_2, \ldots, \alpha_r; s_1+s_2+k+1,s_3,\ldots,s_r) $$ for $k \ge -1$. These singularities are known from the induction hypothesis and hence we deduce that $$ L_r(\lambda_1,\ldots,\lambda_r; \alpha_1, \ldots, \alpha_r; s_1,\ldots,s_r) $$ can have polar singularities only along the hyperplanes $$ H_{i_j, k} ~\text{ for }~ 1 \le j \le m \ \text{ with }~(i_j -k) \in {\mathbb Z}_{\le j}.
$$ \qed \section{Explicit computations of residues for multiple Hurwitz zeta functions} To get hold of the exact set of singularities we need to calculate the residues of the multiple Lerch zeta functions along their possible polar hyperplanes. For a hyperplane $H_{i,k}$, by the residue of $$ L_r ( \lambda_1, \ldots, \lambda_r; \alpha_1, \ldots, \alpha_r ; s_1, \ldots, s_r ) $$ along $H_{i,k}$ we mean the restriction to $H_{i,k}$ of the meromorphic function $$ (s_1+\cdots+s_i-i+k) L_r ( \lambda_1, \ldots, \lambda_r; \alpha_1, \ldots, \alpha_r ; s_1, \ldots, s_r ). $$ It turns out that to study non-vanishing of these residues one needs information about zero sets of a family of polynomials in two variables (see \rmkref{gen-Ber} below). But for the multiple Hurwitz zeta functions we only have to deal with the family of Bernoulli polynomials. As the zero sets of the Bernoulli polynomials are well studied, we have enough information to determine the exact set of singularities of the multiple Hurwitz zeta functions. In what follows, we obtain a computable expression for residues of the multiple Hurwitz zeta functions. Note that the applicable generalised Ramanujan's identity in this case is \eqref{trans2}. Following this process one can also obtain a similar expression for the residues of the multiple Lerch zeta functions. For brevity, we do not include this here. We begin this section with some elementary remarks about infinite triangular matrices. Let $R$ be a commutative ring with unity. By ${\bf T}(R)$ we denote the set of upper triangular matrices of type ${\mathbb N} \times {\mathbb N}$ with coefficients in $R$. Adding or multiplying such matrices involves only finite sums, hence ${\bf T}(R)$ is a ring, and even an $R$-algebra. The group of invertible elements of ${\bf T}(R)$ consists of the matrices whose diagonal elements are invertible.
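The finite-sum observation can be made concrete: for upper triangular matrices the $(i,j)$ entry of a product is $\sum_{i \le k \le j} X_{ik} Y_{kj}$, so the top-left $q \times q$ corner of a product of truncations is already exact, independently of the truncation size. The following small sketch (not from the paper; the entry formulas are arbitrary illustrative choices) checks this stability.

```python
# Sketch (not from the paper) of why T(R) is closed under multiplication:
# entry (i, j) of a product of upper triangular matrices is the finite sum
# sum_{i <= k <= j} X[i][k] * Y[k][j], so the top-left q x q corner of a
# product of N x N truncations is exact for every N >= q.

def upper(f, size):
    """size x size truncation of the infinite matrix with entries f(i, j)."""
    return [[f(i, j) if j >= i else 0 for j in range(size)] for i in range(size)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(i, j + 1)) if j >= i else 0
             for j in range(n)] for i in range(n)]

# two arbitrary upper triangular "infinite" matrices, given by entry formulas
X = lambda i, j: i + 2 * j + 1
Y = lambda i, j: (i + 1) * (j + 2)

q = 4
small = matmul(upper(X, q), upper(Y, q))
big = matmul(upper(X, q + 5), upper(Y, q + 5))
corner = [row[:q] for row in big[:q]]   # top-left q x q corner of the bigger product
```

The corner agrees entrywise with the $q \times q$ product, which is the truncation principle used repeatedly below.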
Now let ${\bf P}$ be a matrix in ${\bf T}(R)$ with all diagonal elements equal to $0$ and let $f = \sum_{n \ge 0} a_n x^n \in R[[x]]$ be a formal power series. Then the series $\sum_{n \ge 0} a_n {\bf P}^n$ converges in ${\bf T}(R)$ and we denote its sum by $f({\bf P})$. For our purpose, we take $R$ to be the field of rational fractions ${\mathbb C}(t)$ in one indeterminate $t$ over ${\mathbb C}$. Recall that from \thmref{gen-rama}, we get that the multiple Hurwitz zeta function of depth $r$ satisfies the following generalised Ramanujan's identity: \begin{equation}\label{tf-mhzf-2} \begin{split} & \sum_{k \ge -1} (s_1-1)_k \ (\alpha_2-\alpha_1)^{k+1} \ \zeta_{r-1}(s_1+s_2+k,s_3,\ldots,s_r;\alpha_2,\alpha_3,\ldots,\alpha_r)\\ &=\sum_{k\ge 0} (s_1-1)_k \ \zeta_r(s_1+k,s_2,\ldots,s_r;\alpha_1,\alpha_2,\ldots,\alpha_r), \end{split} \end{equation} where both the above series of meromorphic functions converge normally on all compact subsets of ${\mathbb C}^r$. Formula \eqref{tf-mhzf-2}, together with the set of relations obtained by successively applying the change of variable $s_1 \mapsto s_1+n$ for $n \ge 1$ to \eqref{tf-mhzf-2}, can be written as \begin{equation}\label{mat-tf-mhzf} \begin{split} &{\bf A_2}(\alpha_2-\alpha_1;s_1-1) {\bf V}_{r-1}( s_1+s_2-1,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)\\ &={\bf A_1}(s_1-1) {\bf V}_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r).
\end{split} \end{equation} Here for an indeterminate $t$, we have \begin{equation} \label{eqA1} {\bf A_1}(t) := \left( \begin{array}{c c c c} t & \frac{t(t+1)}{2!} & \frac{t(t+1)(t+2)}{3!} & \cdots\\ 0 & t+1 & \frac{(t+1)(t+2)}{2!} & \cdots\\ 0 & 0 & t+2 & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{array} \right), \end{equation} \begin{equation}\label{eqA2} {\bf A_2}(\alpha_2-\alpha_1;t) := \left( \begin{array}{c c c c} 1 & t(\alpha_2-\alpha_1) & \frac{t(t+1)}{2!}(\alpha_2-\alpha_1)^2 & \cdots \\ 0 & 1 & (t+1)(\alpha_2-\alpha_1) & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right) \end{equation} and \begin{equation}\label{eqV-mhzf} {\bf V}_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r) := \left( \begin{array}{c} \zeta_r(s_1,s_2,\ldots,s_r;\alpha_1, \ldots, \alpha_r)\\ \zeta_r(s_1+1,s_2,\ldots,s_r;\alpha_1, \ldots, \alpha_r)\\ \zeta_r(s_1+2,s_2,\ldots,s_r;\alpha_1, \ldots, \alpha_r)\\ \vdots \end{array} \right). \end{equation} Note that the matrix ${\bf A_1}(t)$ can be written as $$ {\bf A_1}(t) = {\bf \Delta}(t) f({\bf M}(t+1)), $$ where $f$ is the formal power series $$ f(x):= \frac{e^x-1}{x} = \sum_{n \ge 0} \frac{x^n}{(n+1)!}, $$ and ${\bf \Delta}(t), {\bf M}(t) $ are as follows: $$ {\bf \Delta}(t) := \left( \begin{array}{c c c c} t & 0 & 0 & \cdots \\ 0 & t+1 & 0 & \cdots \\ 0 & 0 & t+2 & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{array} \right) \ \text{ and } \ {\bf M}(t) := \left( \begin{array}{c c c c} 0 & t & 0 & \cdots\\ 0 & 0 & t+1 & \cdots\\ 0 & 0 & 0 & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{array} \right). $$ It is easy to see that ${\bf \Delta}(t), {\bf M}(t) $ satisfy the following commutation relation: \begin{equation}\label{delta-M} {\bf \Delta}(t) {\bf M}(t+1) = {\bf M}(t){\bf \Delta}(t). \end{equation} Thus using \eqref{delta-M}, we have $$ {\bf A_1}(t) = f({\bf M}(t)) {\bf \Delta}(t).
$$ Further, it is also possible to write that $$ {\bf A_2}(\alpha_2-\alpha_1; t)=h({\bf M}(t)), $$ where $h$ denotes the power series $$ e^{(\alpha_2-\alpha_1)x} = \sum_{n \ge 0} (\alpha_2-\alpha_1)^n \frac{x^n}{n!}. $$ Clearly the matrix ${\bf A_2}(\alpha_2-\alpha_1; t)$ is invertible and we see that $$ {\bf A_2}(\alpha_2-\alpha_1;t)^{-1} {\bf A_1}(t) = \frac{f}{h}({\bf M}(t)) \ {\bf \Delta}(t) = {\bf \Delta}(t) \ \frac{f}{h}({\bf M}(t+1)). $$ Hence the inverse of the matrix ${\bf A_2}(\alpha_2-\alpha_1;t)^{-1} {\bf A_1}(t)$ is given by $$ {\bf B}(\alpha_2-\alpha_1;t):= {\bf A_1}(t)^{-1} {\bf A_2}(\alpha_2-\alpha_1;t) =\frac{h}{f}({\bf M}(t+1)) \ {\bf \Delta}(t)^{-1} ={\bf \Delta}(t)^{-1} \ \frac{h}{f}({\bf M}(t)), $$ where $\frac{h}{f}$ is the exponential generating series of the Bernoulli polynomials evaluated at the point $(\alpha_2-\alpha_1)$, i.e. $$ \frac{h}{f}(x)=\frac{x e^{(\alpha_2-\alpha_1)x}}{e^x-1} = \sum_{n \ge 0} \frac{B_n(\alpha_2-\alpha_1)}{n!}x^n. $$ More precisely, we have \begin{equation} \label{eqB2} {\bf B}(\alpha_2-\alpha_1;t)= \left( \begin{array}{c c c c c} \frac{1}{t} & \frac{B_1(\alpha_2-\alpha_1)}{1!} & \frac{(t+1)B_2(\alpha_2-\alpha_1)}{2!} & \frac{(t+1)(t+2)B_3(\alpha_2-\alpha_1)}{3!} & \cdots \\ 0 & \frac{1}{t+1} & \frac{B_1(\alpha_2-\alpha_1)}{1!} & \frac{(t+2)B_2(\alpha_2-\alpha_1)}{2!} & \cdots \\ 0 & 0 & \frac{1}{t+2} & \frac{B_1(\alpha_2-\alpha_1)}{1!} & \cdots \\ 0 & 0 & 0 & \frac{1}{t+3} & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right). \end{equation} However, we can not express the column vector ${\bf V}_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$ as the product of the matrix ${\bf B}(\alpha_2-\alpha_1;s_1-1)$ and the column vector ${\bf V}_{r-1}( s_1+s_2-1,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)$. This is because the infinite series involved in this product are not convergent. To get around this difficulty we perform a truncation process. 
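The factorisation ${\bf A_1}(t) = {\bf \Delta}(t) f({\bf M}(t+1))$ and the inverse ${\bf B}(\alpha_2-\alpha_1;t) = {\bf A_1}(t)^{-1}{\bf A_2}(\alpha_2-\alpha_1;t)$ of \eqref{eqB2} can be verified numerically on a truncation, which is exact for upper triangular matrices. The following hedged sketch (not from the paper; the values $q=6$, $t=0.7$ and $a=\alpha_2-\alpha_1=0.3$ are arbitrary test choices) reads off the entries of ${\bf A_1}$, ${\bf A_2}$ and ${\bf B}$ from \eqref{eqA1}, \eqref{eqA2} and \eqref{eqB2}, computes the Bernoulli polynomials by the standard recurrence, and checks both identities.

```python
# Numerical check (not from the paper) of A_1(t) = Delta(t) f(M(t+1)) with
# f(x) = (e^x - 1)/x, and of A_1(t) B(a; t) = A_2(a; t), where a = alpha_2 - alpha_1.

from math import comb, factorial

q, t, a = 6, 0.7, 0.3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(q)) for j in range(q)]
            for i in range(q)]

def rising(u, m):          # u(u+1)...(u+m-1), empty product = 1
    p = 1.0
    for l in range(m):
        p *= u + l
    return p

# Bernoulli numbers (B_1 = -1/2 convention) and Bernoulli polynomials
bern = [1.0]
for m in range(1, q + 1):
    bern.append(-sum(comb(m + 1, k) * bern[k] for k in range(m)) / (m + 1))

def bern_poly(n, x):       # B_n(x) = sum_k C(n, k) B_k x^{n-k}
    return sum(comb(n, k) * bern[k] * x ** (n - k) for k in range(n + 1))

# entries of A_1, A_2 and B read off from (eqA1), (eqA2) and (eqB2)
A1 = [[rising(t + i, j - i + 1) / factorial(j - i + 1) if j >= i else 0.0
       for j in range(q)] for i in range(q)]
A2 = [[rising(t + i, j - i) * a ** (j - i) / factorial(j - i) if j >= i else 0.0
       for j in range(q)] for i in range(q)]
Bm = [[1.0 / (t + i) if j == i else
       (rising(t + i + 1, j - i - 1) * bern_poly(j - i, a) / factorial(j - i)
        if j > i else 0.0) for j in range(q)] for i in range(q)]

# f(M(t+1)) on a q x q truncation: M(t+1)^n vanishes there for n >= q
Delta = [[t + i if j == i else 0.0 for j in range(q)] for i in range(q)]
M = [[t + 1 + i if j == i + 1 else 0.0 for j in range(q)] for i in range(q)]
fM = [[0.0] * q for _ in range(q)]
P = [[float(i == j) for j in range(q)] for i in range(q)]  # M^0 = I
for n in range(q):
    for i in range(q):
        for j in range(q):
            fM[i][j] += P[i][j] / factorial(n + 1)
    P = matmul(P, M)

err_fact = max(abs(matmul(Delta, fM)[i][j] - A1[i][j])
               for i in range(q) for j in range(q))
err_inv = max(abs(matmul(A1, Bm)[i][j] - A2[i][j])
              for i in range(q) for j in range(q))
```

Both errors come out at the level of floating-point roundoff, reflecting that the identities hold exactly entry by entry.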
We first rewrite \eqref{mat-tf-mhzf} in the form \begin{equation}\label{mat-tf-mhzf-2} \begin{split} &{\bf \Delta}(s_1-1)^{-1} {\bf V}_{r-1}( s_1+s_2-1,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)\\ &=\frac{f}{h}({\bf M}(s_1)) {\bf V}_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r). \end{split} \end{equation} For notational convenience, let us denote $\frac{f}{h}({\bf M}(s_1))$ by ${\bf X}(s_1)$. We then choose an integer $q \ge 1$ and define $$ I := \{ k ~|~ 0 \le k \le q-1 \} \phantom{m}\text{and} \phantom{m} J := \{ k ~|~ k \ge q \}. $$ Then we write our matrices as block matrices, for example $$ {\bf X}(s_1) = \left( \begin{array}{c c} {\bf X}^{II}(s_1) & {\bf X}^{IJ}(s_1)\\ {\bf 0}^{JI} & {\bf X}^{JJ}(s_1) \end{array} \right). $$ Hence from \eqref{mat-tf-mhzf-2} we get that \begin{equation} \label{dvxvxv} \begin{split} & {\bf \Delta}^{II}(s_1-1)^{-1} {\bf V}_{r-1}^I(s_1+s_2-1,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)\\ =&\ {\bf X}^{II}(s_1) {\bf V}_r^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r) + {\bf X}^{IJ}(s_1) {\bf V}_r^J(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r). \end{split} \end{equation} Since ${\bf X}^{II}(s_1)$ is a finite invertible square matrix, we have $$ {\bf X}^{II}(s_1)^{-1} {\bf \Delta}^{II}(s_1-1)^{-1} = {\bf B}^{II}(\alpha_2-\alpha_1;s_1-1). $$ Therefore we deduce from \eqref{dvxvxv} that \begin{equation}\label{vbvy} \begin{split} &{\bf V}_r^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)\\ & = {\bf B}^{II}(\alpha_2-\alpha_1;s_1-1) {\bf V}_{r-1}^I(s_1+s_2-1,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)\\ &+{\bf Y}^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r) , \end{split} \end{equation} where \begin{equation} \label{yxxv} {\bf Y}^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r) = - {\bf X}^{II}(s_1)^{-1} {\bf X}^{IJ}(s_1) {\bf V}_r^J(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r). 
\end{equation} All the series of meromorphic functions involved in the products of matrices in formulas \eqref{vbvy} and \eqref{yxxv} converge normally on all compact subsets of ${\mathbb C}^r$. Moreover, all entries of the matrices on the right hand side of \eqref{yxxv} are holomorphic on the open set $U_r(q)$, the translate of $U_r$ by $(-q,0,\ldots,0)$. Therefore the entries of ${\bf Y}^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$ are also holomorphic in $U_r(q)$. Let us denote by $\xi_q(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$ the first entry of the column vector ${\bf Y}^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$. Then we get from \eqref{vbvy} that \begin{equation}\label{explicit-mhzf} \begin{split} &\zeta_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)\\ &=\ \frac{1}{s_1-1} \zeta_{r-1}(s_1+s_2-1,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)\\ &+ \sum_{k=0}^{q-2} \frac{s_1\cdots (s_1+k-1)}{(k+1)!} B_{k+1}(\alpha_2-\alpha_1) \ \zeta_{r-1}(s_1+s_2+k,s_3,\ldots,s_r; \alpha_2, \ldots, \alpha_r)\\ &+ \xi_q(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r), \end{split} \end{equation} and $\xi_q(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$ is holomorphic in the open set $U_r(q)$. In the above formula, whenever empty products and empty sums appear, they are assumed to be $1$ and $0$ respectively. Formula \eqref{explicit-mhzf} can also be obtained by using the Euler-Maclaurin summation formula, as was done in~\cite{AI}. \begin{rmk}\label{gen-Ber} \rm A matrix formulation of the generalised Ramanujan's identity \eqref{trans2} would be similar to the one above. If one wants to write down a matrix formulation for the identity \eqref{trans}, one encounters a family of polynomials $P_n(a,c)$ which are defined by the generating series $$ \frac{e^{ax}}{e^x-c}=\sum_{n \ge 0} P_n(a,c) \frac{x^n}{n!} $$ with $c \neq 1$. \end{rmk} We now observe that the following theorem can be deduced as an immediate consequence of \thmref{multiple-Lerch}.
\begin{thm}\label{poles-mhzf} The multiple Hurwitz zeta function of depth $r$ can be meromorphically continued to ${\mathbb C}^r$ with possible simple poles along the hyperplanes $H_{1,0}$ and $H_{i,k}$, where $2 \le i \le r$ and $k \ge 0$. It has at most simple poles along each of these hyperplanes. \end{thm} To check if each $H_{i,k}$ is indeed a polar hyperplane, we compute the residue of the multiple Hurwitz zeta function of depth $r$ along this hyperplane using \eqref{vbvy} and \eqref{explicit-mhzf}. Recall that it is defined as the restriction of the meromorphic function $(s_1+\cdots+s_i-i+k) ~\zeta_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$ to $H_{i,k}$. \begin{thm}\label{residues-mhzf} The residue of the multiple Hurwitz zeta function $\zeta_{r}(s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r)$ along the hyperplane $H_{1,0}$ is the restriction of $\zeta_{r-1}(s_2,\ldots,s_r;\alpha_2, \ldots, \alpha_r)$ to $H_{1,0}$ and its residue along the hyperplane $H_{i,k}$, where $2 \le i \le r$ and $k \ge 0$, is the restriction to $H_{i,k}$ of the product of $\zeta_{r-i}(s_{i+1},\ldots,s_r;\alpha_{i+1}, \ldots, \alpha_r)$ with the $(0,k)^{\text{th}}$ entry of the matrix $$ \prod\limits_{d=1}^{i-1} {\bf B}(\alpha_{d+1}-\alpha_d;s_1+\cdots+s_d-d). $$ \end{thm} \begin{proof} Let $q \ge 1$ be an integer. As in the proof of Theorem~\ref{poles-mhzf}, we know from \eqref{explicit-mhzf} that $$ \zeta_r(s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r) - \frac{1}{s_1-1} \ \zeta_{r-1}(s_1+s_2-1,s_3,\ldots,s_r;\alpha_2, \ldots, \alpha_r) $$ has no pole along $H_{1,0}$ inside the open set $U_r(q)$. These open sets cover ${\mathbb C}^r$. Hence the residue of $\zeta_r(s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r)$ along $H_{1,0}$ is the restriction to $H_{1,0}$ of the meromorphic function $\zeta_{r-1}(s_1+s_2-1,s_3,\ldots,s_r;\alpha_2, \ldots, \alpha_r)$ or equivalently of $\zeta_{r-1}(s_2,\ldots,s_r;\alpha_2, \ldots, \alpha_r)$. This proves the first part of \thmref{residues-mhzf}. 
Now let $i, k$ be integers with $2 \le i \le r$ and $0 \le k < q$. Also let $I$ and $J$ be as in \S4.4. Now if one iterates $(i-1)$ times the formula \eqref{vbvy}, one gets \begin{equation*} \begin{split} {\bf V}_r^I(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r) = & \ \left(\prod_{d=1}^{i-1} {\bf B}^{II}(\alpha_{d+1}-\alpha_d;s_1+\cdots+s_d-d)\right)\\ &\times {\bf V}_{r-i+1}^I (s_1+\cdots +s_i-i+1,s_{i+1},\ldots,s_r; \alpha_i, \ldots, \alpha_r)\\ &+ {\bf Y}^{i,I} (s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r), \end{split} \end{equation*} where ${\bf Y}^{i,I} (s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r)$ is a column matrix whose entries are finite sums of products of rational functions in $s_1,\ldots,s_{i-1}$ with meromorphic functions which are holomorphic in $U_r(q)$. These entries therefore have no pole along the hyperplane $H_{i,k}$ in $U_r(q)$. The entries of $$ \prod_{d=1}^{i-1} {\bf B}^{II}(\alpha_{d+1}-\alpha_d;s_1+\cdots+s_d-d) $$ are rational functions in $s_1,\ldots,s_{i-1}$ and hence have no poles along $H_{i,k}$. It now follows from the induction hypothesis that the only entry of ${\bf V}_{r-i+1}^I (s_1+\cdots +s_i-i+1,s_{i+1},\ldots,s_r; \alpha_i, \ldots, \alpha_r)$ that can possibly have a pole along $H_{i,k}$ in $U_r(q)$ is the one of index $k$, which is $$ \zeta_{r-i +1} (s_1 + \ldots + s_i - i + k +1, s_{i +1}, \ldots, s_r; \alpha_i, \ldots, \alpha_r). $$ Its residue is the restriction of $\zeta_{r-i}(s_{i+1},\ldots,s_r; \alpha_{i+1}, \ldots, \alpha_r)$ to $H_{i,k} \cap U_r(q)$, where $2 \le i \le r$ and $0~\le~k < q$. Since the open sets $U_r(q)$ for $q > k$ cover ${\mathbb C}^r$, the residue of $\zeta_r(s_1,\ldots,s_r; \alpha_1, \ldots, \alpha_r)$ along $H_{i,k}$ is the restriction to $H_{i,k}$ of the product of the $(0,k)^{\text{th}}$ entry of the matrix $$ \prod_{d=1}^{i-1} {\bf B}(\alpha_{d+1}-\alpha_d;s_1+\cdots+s_d-d) $$ with $\zeta_{r-i}(s_{i+1},\ldots,s_r; \alpha_{i+1}, \ldots, \alpha_r)$. This proves the last part of \thmref{residues-mhzf}. 
\end{proof} \section{Proof of \thmref{multiple-Hurwitz}} \subsection{Zeros of Bernoulli polynomials} The description of the exact set of poles of multiple Hurwitz zeta functions in \thmref{multiple-Hurwitz} requires knowledge about the zeros of the Bernoulli polynomials. In this section, we discuss those properties of the zeros of the Bernoulli polynomials which are relevant to our study. Recall that the Bernoulli polynomials $B_n(t)$ are defined by $$ \sum_{n \ge 0} B_n(t) \frac{x^n}{n!} = \frac{x e^{tx}}{e^x -1}. $$ We have the following theorem by Brillhart \cite{JB} and Dilcher \cite{KD} about the zeros of Bernoulli polynomials. \begin{thm}[Brillhart-Dilcher]\label{Brill-Dil} Bernoulli polynomials do not have multiple roots. \end{thm} This theorem was first proved for the odd Bernoulli polynomials by Brillhart \cite{JB} and later extended to the even Bernoulli polynomials by Dilcher \cite{KD}. \thmref{Brill-Dil} amounts to saying that the Bernoulli polynomials $B_{n+1}(t)$ and $B_n (t)$ are relatively prime, as they satisfy the relation $$ B_{n+1}'(t) = (n+1) B_n(t) \ \text{ for all } n \ge 1, $$ where $B'_{n+1}(t)$ denotes the derivative of the polynomial $B_{n+1}(t)$. With the theorem of Brillhart and Dilcher in place, we can now describe the exact set of singularities of the multiple zeta functions. For that it is convenient to have some intermediate lemmas in place. \subsection{Some intermediate lemmas} \begin{lem}\label{two-prod} Let $x,y$ be two indeterminates and let the matrix $\bf{B}$ be as in \eqref{eqB2}. Then all the entries in the first row of the matrix $$ {\bf B}(\beta - \alpha; x) \ {\bf B}(\gamma - \beta; y), $$ where $0 \le \alpha, \beta, \gamma< 1$, are non-zero rational functions in $x,y$ with coefficients in ${\mathbb R}$. \end{lem} \begin{proof} Since the entries of these matrices are indexed by ${\mathbb N} \times {\mathbb N}$, the entries of the first row are written as the $(0,k)^{\text{th}}$ entries for $k \ge 0$.
Let us denote the $(0,k)^{\text{th}}$ entry by $a_{0,k}$. Then we have the following formula: $$ x (y+k) \ a_{0,k} = \sum_{i=0}^{k} (x)_{i-1} (y+i+1)_{k-i-1} B_i(\beta-\alpha) \ B_{k-i}(\gamma - \beta) $$ for all $k \ge 0$. As the Bernoulli polynomial $B_0(t)$ is equal to $1$, we get $a_{0,0}= \frac{1}{xy}$, which is non-zero. For $k \ge 1$, we first note that the set of polynomials $$ P:=\{(x)_{i-1} (y+i+1)_{k-i-1} : 0 \le i \le k \} $$ is linearly independent over ${\mathbb R}$. Now suppose that $B_1(\beta-\alpha) \neq 0$. We know by \thmref{Brill-Dil} that at least one of $B_k(\gamma - \beta)$ and $B_{k-1}(\gamma - \beta)$ is non-zero. It now follows from the linear independence of the set of polynomials in $P$ that $a_{0,k} \neq 0$. Next suppose that $B_1(\beta-\alpha) = 0$, i.e. $\beta-\alpha = 1/2$. Then $\gamma-\beta \neq 1/2$ as $0 \le \alpha, \gamma< 1$. Hence $B_1(\gamma-\beta) \neq 0$. Again by \thmref{Brill-Dil}, we know that at least one of $B_k(\beta-\alpha)$ and $B_{k-1}(\beta-\alpha)$ is non-zero. Now by the linear independence of the set of polynomials in $P$, we get $a_{0,k} \neq 0$. This completes the proof of \lemref{two-prod}. \end{proof} \begin{lem}\label{any-prod} Let $n \ge 0$ be an integer and $x,x_1, \ldots, x_n$ be $(n+1)$ indeterminates. Let ${\bf D}$ be an infinite square matrix whose entries are indexed by ${\mathbb N} \times {\mathbb N}$ and lie in the ring ${\mathbb R}(x_1, \ldots, x_n)$. Further, suppose that all the entries in the first row of ${\bf D}$ are non-zero. Then for any $\alpha, \beta \in {\mathbb R}$, all the entries in the first row of the matrix ${\bf D}{\bf B}(\beta - \alpha; x)$ are non-zero, where the matrix $\bf{B}$ is as in \eqref{eqB2}. \end{lem} \begin{proof} We first note that each column of ${\bf B}(\beta - \alpha; x)$ has at least one non-zero entry and the non-zero entries of each of these columns are linearly independent over ${\mathbb R}$ as rational functions in $x$ with coefficients in ${\mathbb R}$.
Since all the entries in the first row of ${\bf D}$ are non-zero, the proof is complete by the above observation. \end{proof} We are now ready to prove \thmref{multiple-Hurwitz}. \subsection{Proof of \thmref{multiple-Hurwitz}} When $1 \le i \le r$ and $k \ge 0$, the restriction of $$ \zeta_{r-i}(s_{i+1},\ldots,s_r;\alpha_{i+1}, \ldots, \alpha_r) $$ to $H_{i,k}$ is a non-zero meromorphic function. Hence in order to prove \thmref{multiple-Hurwitz}, we need to show that when $2 \le i \le r$ and $k \ge 0$, the $(0,k)^{\text{th}}$ entry of the matrix $$ \prod\limits_{d=1}^{i-1} {\bf B}(\alpha_{d+1}-\alpha_d;s_1+\cdots+s_d-d) $$ is identically zero if and only if $i=2, \ k \in J$. By changing co-ordinates, the above statement is equivalent to saying that when $t_1,\ldots,t_{i-1}$ are indeterminates, the $(0,k)^{\text{th}}$ entry of the matrix $$ \prod\limits_{d=1}^{i-1}{\bf B}(\alpha_{d+1}-\alpha_d;t_d) $$ is non-zero in ${\mathbb R}(t_1,\ldots,t_{i-1})$ except when $i=2$ and $k \in J$. For $i=2$, our matrix is ${\bf B}(\alpha_2-\alpha_1;t_1)$ and hence our assertion follows immediately. Now assume that $i \ge 3$. By \lemref{two-prod}, we know that all the entries in the first row of the matrix $$ {\bf B}(\alpha_2-\alpha_1;t_1) {\bf B}(\alpha_3-\alpha_2;t_2) $$ are non-zero in ${\mathbb R}(t_1,t_2)$. Hence the theorem follows from \lemref{two-prod} if $i=3$ and from repeated application of \lemref{any-prod} if $i > 3$. \qed \subsection{A particular case} \thmref{multiple-Hurwitz} shows that precise knowledge about zeros of Bernoulli polynomials determines the exact set of singularities of the multiple Hurwitz zeta functions. We have precise knowledge about the rational zeros of the Bernoulli polynomials due to Inkeri \cite{KI}. \begin{thm}[Inkeri]\label{Inkeri} The rational zeros of a Bernoulli polynomial $B_n (t)$ can only be $0, 1/2$ and $1$.
This happens only when $n$ is odd and precisely in the following cases: \begin{enumerate} \item $B_n(0)=B_n(1)=0$ for all odd $n \ge 3$, \item $B_n(1/2)=0$ for all odd $n \ge 1$. \end{enumerate} \end{thm} Using \thmref{Inkeri}, we deduce the following corollary of \thmref{multiple-Hurwitz}. A particular case of this corollary, namely when $\alpha_i \in {\mathbb Q}$ for all $1 \le i \le r$, was proved in \cite{AI}. \begin{cor}\label{special-mhzf} If $\alpha_2-\alpha_1=0$, then the exact set of singularities of the multiple Hurwitz zeta function $\zeta_{r}(s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r)$ is given by the hyperplanes $$ H_{1,0}, H_{2,1}, H_{2,2k} \ \text{ and }\ H_{i,k} \ \text{ for all } \ k \ge 0\ \text{ and } \ 3 \le i \le r. $$ If $\alpha_2-\alpha_1=1/2$, then the exact set of singularities of the multiple Hurwitz zeta function $\zeta_{r}(s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r)$ is given by the hyperplanes $$ H_{1,0}, H_{2,2k} \ \text{ and }\ H_{i,k} \ \text{ for all } \ k \ge 0\ \text{ and } \ 3 \le i \le r. $$ If $\alpha_2-\alpha_1$ is a rational number $\neq 0, 1/2$, then the exact set of singularities of the multiple Hurwitz zeta function $\zeta_{r}(s_1,\ldots,s_r;\alpha_1, \ldots, \alpha_r)$ is given by the hyperplanes $$ H_{1,0} \ \text{ and }\ H_{i,k} \ \text{ for all } \ k \ge 0\ \text{ and } \ 2 \le i \le r. $$ \end{cor} \bigskip \noindent {\bf Acknowledgements.} Both the authors would like to thank The Institute of Mathematical Sciences (IMSc), where this work was done. Part of this work was supported by the Number Theory plan project of DAE and a SERB grant. The first author would also like to thank the ICTP, where part of the final draft was written. The second author would like to thank the organisers K. Soundararajan and J-M Deshouillers of a conference in number theory held at IMSc during December 14th-18th, 2015, where he was given the opportunity to present this work. We thank the referee for bringing a number of references to our notice.
\section{Introduction} \label{sec:Intro} Motivated by recent discoveries of interesting multiorbital superconductors, unconventional pairing mechanisms driven by the orbital degrees of freedom have attracted increasing attention. For example, in FeSe families and some heavy fermion superconductors, the superconductivity (SC) appears next to the non-magnetic orbital order phase. Such a phase diagram indicates a significant role of the orbital fluctuations on the pairing mechanism. From a theoretical point of view, it has been a big challenge to explain the emergence of the orbital order/fluctuations based on realistic multiorbital Hubbard models microscopically. In fact, only the spin fluctuations develop whereas the orbital fluctuations remain small within the conventional mean-field-level approximations, such as the random-phase-approximation (RPA) and the fluctuation-exchange (FLEX) approximation \cite{Bickers}. Thus, non-magnetic orbital order cannot be explained based on the mean-field-level approximations. The reason for this failure would be that the interplay between orbital and spin fluctuations, which is described by the vertex correction (VC), is totally neglected in the RPA and FLEX. Recently, the orbital order in Fe-based superconductors has been naturally explained by taking the Aslamazov-Larkin VC (AL-VC) into account \cite{Onari-SCVC,Onari-SCVCS,Yamakawa-FeSe}. In order to study the VCs, the functional-renormalization-group (fRG) is a very powerful and reliable theoretical method. Both the charge-channel and spin-channel VCs are calculated in an unbiased way by solving the RG equation, since the particle-particle and particle-hole channels are included on the same footing without violating the Pauli principle. Using the fRG theory, strong orbital fluctuation emerges in two-orbital Hubbard models in the presence of moderate spin fluctuations, as revealed in Refs. \cite{Tsuchiizu1,Tsuchiizu2}. 
These fRG studies confirmed the validity of the orbital fluctuation mechanism driven by the orbital-spin mode-coupling due to the AL-VC \cite{Onari-SCVC,Yamakawa-FeSe}. Theoretically, it is natural to expect that the developed orbital fluctuations mediate the pairing formation. The orbital fluctuations can induce not only the singlet SC (SSC), but also the triplet SC (TSC). By performing the fRG theory for the multiorbital models for Sr$_2$RuO$_4$, in which the TSC ($T_{\rm c}=1.5$ K) is expected to be realized \cite{Maeno,Maeno2,Sigrist-Rev,Ishida,Nomura,Wang,RG-Scaffidi,Kivelson}, orbital-fluctuation-mediated TSC has been proposed. In the frequently-used Migdal-Eliashberg (ME) approximation, the SSC pairing interaction is $\frac32{\hat U}^{0;s}{\hat \chi}^s(q){\hat U}^{0;s} -\frac12{\hat U}^{0;c}{\hat \chi}^c(q){\hat U}^{0;c}$, and the TSC pairing interaction is $-\frac12{\hat U}^{0;s}{\hat \chi}^s(q){\hat U}^{0;s} -\frac12{\hat U}^{0;c}{\hat \chi}^c(q){\hat U}^{0;c}$, where ${\hat U}^{0;c(s)}$ is the bare Coulomb interaction matrix for the charge (spin) channel \cite{Onari-SCVC}. Within the ME approximation, spin-fluctuation-mediated SSC is expected when ${\hat \chi}^s(q)$ and ${\hat \chi}^c(q)$ are comparable, because of the factor $\frac32$ for ${\hat \chi}^s(q)$ in the SSC pairing interaction. However, this expectation is never guaranteed beyond the ME approximation since ${\hat U}^{0;c}$ may be enlarged by the VC at low energies, which is actually realized as we explain in the present paper. In this paper, we analyze the two-orbital Hubbard model for the $(\a,\b)$-bands in Sr$_2$RuO$_4$ by using the fRG theory. The aim of the present study is to confirm the realization condition for the orbital-fluctuation-mediated SC by going beyond the ME approximation. For this purpose, we solve the gap equation by including the VC for the bare electron-boson coupling (EBC), which we call the $U$-VC. 
Due to the $U$-VC, the effective EBC for the charge (spin) channel, ${\hat U}^{c(s)}(k,k')$, deviates from the bare Coulomb interaction ${\hat U}^{0;c(s)}$. By applying the fRG theory, we find the relation $|{\hat U}^{c}(k,k')|\gg |{\hat U}^{0;c}|$ due to the charge-channel $U$-VC in the presence of moderate spin fluctuations. In contrast, ${\hat U}^{s}(k,k')$ is significantly suppressed by the spin-channel $U$-VC at low energies. For these reasons, orbital-fluctuation-mediated SC will be realized in various multiorbital systems, such as in Fe-based superconductors and Sr$_2$RuO$_4$. We stress that the phonon-mediated attractive pairing is also enlarged by the factor $({\hat U}^{c}(k,k')/{\hat U}^{0;c})^2$. The Fermi liquid theory tells us that the same $U$-VC causes (i) the enhancement of the orbital susceptibility and (ii) that of the orbital-fluctuation-mediated pairing interaction. This fact means that (i) and (ii) are realized simultaneously. This expectation will be confirmed by the present fRG study. \section{$U$-VC for the susceptibilities and gap equation} \label{sec:diagram} First, we introduce the dressed EBC due to the $U$-VC, and formulate the susceptibilities ${\hat \chi}^{c,s}(q)$ and the gap equation in the presence of the same $U$-VC. Figure \ref{fig:fig1} (a) shows the definition of the dressed EBC for the charge and spin channels, ${\hat U}^{c}(k,k')$ and ${\hat U}^{s}(k,k')$, which are irreducible with respect to the bare Coulomb interactions ${\hat U}^{0;c}$ and ${\hat U}^{0;s}$: The definitions of ${\hat U}^{0;c}$ and ${\hat U}^{0;s}$ in the orbital basis are given in a later section, and they were introduced in Refs. \cite{Takimoto,Onari-SCVC}. We put $k=(\k,\e_n)=(\k,(2n+1)\pi T)$ and $q=(\q,\w_l)=(\q,2l\pi T)$ hereafter. The solid and wavy lines represent the electron Green function ${\hat G}(k)$ and ${\hat \chi}^{x}(q)$ ($x=c,s$), respectively. The rectangle ($\Gamma^{I(U),x}$) is the VC for the bare EBC ${\hat U}^{0;x}$, which we call the $U$-VC.
$\Gamma^{I(U),x}$ is irreducible with respect to ${\hat U}^{0;x}$ to avoid the double counting of the RPA-type diagrams. In the present fRG study, the $U$-VC is automatically obtained in solving the RG equation. In a later section, we also calculate the $U$-VC due to the Aslamazov-Larkin term perturbatively, which consists of the second-order terms with respect to ${\hat \chi}^{x}(q)$. \begin{figure}[!htb] \includegraphics[width=.9\linewidth]{fig1.eps} \caption{ (a) The effective interaction ${\hat U}^{x}$ for $x=c$ ($+$) and $x=s$ ($-$), which we call the dressed EBC. The filled circle represents the Coulomb interaction ${\hat U}^{0;x}$, and the rectangle ($\Gamma^{I(U),x}$) gives the $U$-VC. $\Gamma^{I(U),x}$ is irreducible with respect to ${\hat U}^{0;x}$ to avoid the double counting of the RPA-type diagrams. (b) Beyond the RPA: The irreducible susceptibility with the VC, where ${\hat \Lambda}^{x}={\hat U}^{x}\{{\hat U}^{0;x}\}^{-1}$. (c) Beyond the ME approximation: The gap equation with the three-point VCs for the coupling constant ($U$-VC). Only the single fluctuation exchange term is shown. } \label{fig:fig1} \end{figure} In Fig. \ref{fig:fig1} (b), we explain the VC for the irreducible susceptibility: The bare susceptibility without the VC is $\chi_{l,l',m,m'}^0(q)= -T\sum_{n}G_{l,m}(k+q)G_{m',l'}(k)$, where $G_{l,m}(k)$ is the Green function in the orbital basis. Then, the RPA susceptibility is ${\hat \chi}^x_{\rm RPA}(q) ={\hat \chi}^0(q)[{\hat 1}-{\hat U}^{0;x}{\hat \chi}^0(q)]^{-1}$. By using the three-point vertex ${\hat \Lambda}^{x}={\hat U}^{x}\{{\hat U}^{0;x}\}^{-1}$, the dressed irreducible susceptibility is given as $\Phi^x(q)= -T\sum_{n}G(k+q)G(k)\Lambda^{x}(k+q,k)$, where the orbital indices are omitted for simplicity. Then, the susceptibility with full VCs is obtained as ${\hat \chi}^x_{\rm with \mbox{-} VC}(q) ={\hat \Phi}^x(q)[{\hat 1}-{\hat U}^{0;x}{\hat \Phi}^x(q)]^{-1}$.
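The RPA resummation and its VC-dressed counterpart above can be illustrated by a scalar caricature in Python (the matrix structure and orbital indices are dropped, `lam` plays the role of the three-point vertex ${\hat \Lambda}^x$, and all numbers are purely illustrative):

```python
def chi_resummed(chi0, U0, lam=1.0):
    """Scalar caricature of chi = Phi/(1 - U0*Phi) with the dressed
    irreducible part Phi = lam*chi0; lam = 1 recovers the plain RPA
    form chi0/(1 - U0*chi0)."""
    phi = lam * chi0
    denom = 1.0 - U0 * phi
    assert denom > 0.0, "beyond the Stoner-like instability"
    return phi / denom

# a modest vertex dressing of chi0 is strongly amplified near the
# instability: here lam = 1.05 nearly doubles the resummed chi
rpa = chi_resummed(0.3, 3.0)            # 0.3/(1 - 0.9) = 3.0
dressed = chi_resummed(0.3, 3.0, lam=1.05)
```

This is only a sketch of the geometric-series structure; in the actual calculation $\Lambda^x$ carries momentum, frequency, and orbital dependence.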
Figure \ref{fig:fig1} (c) shows the gap equation due to the single-fluctuation-exchange term in the presence of the $U$-VC for the EBC. Within the RPA and the ME approximation, the pairing interaction for the singlet state is ${\hat V}_{s,{\rm RPA}}(k,k')=\frac32 {\hat I}_{\rm RPA}^s(k-k') -\frac12 {\hat I}_{\rm RPA}^c(k-k')-{\hat U}^{0;s}$, where ${\hat I}_{\rm RPA}^x(q)= {\hat U}^{0;x} ({\hat \chi}^x_{\rm RPA}(q)+\{{\hat U}^{0;x}\}^{-1}){\hat U}^{0;x}$. By including the VCs for both ${\hat \chi}^x_{\rm RPA}$ and the coupling constant ${\hat U}^{0;x}$, the pairing interaction with full VCs is given as ${\hat V}_{s,{\rm with\mbox{-}VC}}(k,k')=\frac32 {\hat I}_{\rm with\mbox{-}VC}^s(k,k') -\frac12 {\hat I}_{\rm with\mbox{-}VC}^c(k,k')-{\hat U}^{0;s}$, where ${\hat I}_{\rm with\mbox{-}VC}^x(k,k')= {\hat U}^{x}(k,k') ({\hat \chi}^x_{\rm with\mbox{-}VC}(k-k')+\{{\hat U}^{0;x}\}^{-1}){\hat U}^{x}(-k,-k')$. Therefore, the enhancement of the pairing interaction due to the charge-channel $U$-VC is naturally expected when the orbital fluctuations are realized by the $U$-VC, in terms of the Fermi liquid theory. For the purpose of analyzing the $U$-VC, the fRG theory is very useful since the $U$-VC for ${\hat \chi}^{x}(q)$ ($x=s,c$) and that for the gap equation are generated on the same footing in terms of the parquet approximation. This is a great merit of the fRG theory \cite{RG-Review}. In the present study, we use the RG+cRPA method, which enables us to perform a very accurate numerical study \cite{Tsuchiizu1}. \section{RG+cRPA study for the two-orbital Hubbard model} \label{sec:RG-exp} In this section, we analyze the 2-orbital ($d_{xz}$, $d_{yz}$) Hubbard model, as a canonical simple multiorbital system. We apply the renormalization-group plus constrained-RPA (RG+cRPA) method, which was developed in Refs. \cite{Tsuchiizu1,Tsuchiizu2,Tsuchiizu3}.
By solving the RG differential equation, we obtain the renormalized 4-point vertex $\hat{\Gamma}^{x}_{{\rm RG}}$ ($x=s,c$) and susceptibilities $\chi ^{c(s)}(q)$ by taking account of the $U$-VC in a systematic and unbiased way. The superconducting state and the transition temperature ($T_{\rm c}$) are obtained by calculating the SSC and TSC susceptibilities, as formalized and performed in Ref. \cite{Tsuchiizu2}. \subsection{Model Hamiltonian and the four-point vertex given by the RG+cRPA} \label{sec:UVC1} First, we introduce the 2-orbital square lattice Hubbard model, which describes the ($d_{xz}$, $d_{yz}$)-orbital bandstructure in $\rm{Sr_{2}RuO_{4}}$. We set the kinetic term of the Hamiltonian as \begin{eqnarray} H_{0}=\sum_{k,\sigma}\sum_{l,m}\xi_{k}^{l,m} c^{\dagger}_{k,l,\sigma}c_{k,m,\sigma} , \label{eqn:H0} \end{eqnarray} where $l, m$ take the values $1$ or $2$, corresponding to $d_{xz}$ or $d_{yz}$. $\xi^{l,m}_{k}$ is defined as $\xi^{1,1}_{k}=-2t\cos k_{x} -2t^{''}\cos k_{y}$, $\xi^{2,2}_{k}=-2t\cos k_{y} -2t^{''}\cos k_{x}$, $\xi^{1,2}_{k}=\xi^{2,1}_{k}=-4t^{'}\sin k_{x} \sin k_{y}$. Hereafter, we set the hopping parameters ($t^{}$, $ t^{'}$, $ t^{''})=(1, 0.1, 0.1)$: The unit of energy in the present study is $t=1$. The number of electrons is fixed as $n=n_{xz}+n_{yz}=4\times (2/3)=2.67$. The obtained band dispersion and Fermi surfaces (FSs) are shown in Figs. \ref{fig:FS} (a) and (b), which reproduce FS{$\alpha$} and FS{$\beta$} in Sr$_2$RuO$_4$. This model has been analyzed as a canonical multiorbital model in various theoretical studies, such as studies of the anomalous Hall effect \cite{Kontani-AHE}. In the RG+cRPA method, each band is divided into the higher-energy part ($|\e_{u,\k}|>\Lambda_0$) and the lower-energy part ($|\e_{u,\k}|<\Lambda_0$). In order to perform the renormalization procedure, the lower-energy part is divided into $N_p/2$ patches.
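The two band energies of the kinetic term just defined follow from diagonalizing the $2\times2$ matrix $\xi_{\k}^{l,m}$ analytically. A small Python sketch (the function name is ours; fixing the chemical potential for $n=2.67$ is not included):

```python
from math import cos, sin, sqrt, pi

def bands(kx, ky, t=1.0, tp=0.1, tpp=0.1):
    """Eigenvalues of the 2x2 kinetic matrix xi(k) of the (d_xz, d_yz)
    model; the arguments (t, tp, tpp) correspond to (t, t', t'')."""
    xi11 = -2*t*cos(kx) - 2*tpp*cos(ky)
    xi22 = -2*t*cos(ky) - 2*tpp*cos(kx)
    xi12 = -4*tp*sin(kx)*sin(ky)
    avg, dif = 0.5*(xi11 + xi22), 0.5*(xi11 - xi22)
    r = sqrt(dif*dif + xi12*xi12)
    return avg - r, avg + r

# on the k_x axis xi12 vanishes, so the two orbitals decouple into
# pure d_xz and d_yz branches, approximately (-1.8, 1.8) at k = (pi, 0)
print(bands(pi, 0.0))
```

The inter-orbital element $\xi^{1,2}_{k}$ only mixes the orbitals away from the axes, which is why the FSs retain their dominant $d_{xz}$/$d_{yz}$ characters.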
Figure \ref{fig:FS} (c) shows the contours for $|\e_{u,\k}|=\Lambda_0=1$ and the center of patches $1\sim64$. In addition, we introduce the on-site Coulomb interaction term, which contains the intra-orbital and inter-orbital Coulomb interactions $U$ and $U'$, the Hund's coupling $J$, and the pair hopping interaction $J'$. The bare Coulomb interaction term is expressed as \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\! H_{int}=\frac{1}{4}\sum_{i}\sum_{l l' m m'} \sum_{\sigma \sigma' \rho \rho'}U_{ll'mm'}^{0;\sigma \sigma' \rho \rho'} c^{\dagger}_{i l \sigma} c_{i l' \sigma'} c_{i m \rho} c^{\dagger}_{i m' \rho'} , \label{eqn:HUU} \\ &&\!\!\!\!\!\!\!\!\!\!\!\! U_{ll'mm'}^{0;\sigma \sigma' \rho \rho'} =\frac{1}{2}U^{0;s}_{ll'mm'} \vec{\bf{\sigma}}_{\sigma \sigma'} \cdot \vec{\bf{\sigma}}_{\rho' \rho} +\frac{1}{2}U^{0;c}_{ll'mm'}\delta_{\sigma,\sigma'}\delta_{\rho',\rho} , \label{eqn:HU} \end{eqnarray} where $U^{0;c}_{ll'mm'}=(-U, U'-2J, -2U'+J, -J', 0)$ and $U^{0;s}_{ll'mm'}=(U, U', J, J', 0)$ in the cases of ($l=l'=m=m'$, $l=m\neq l'=m'$, $l=l'\neq m=m'$, $l=m'\neq l'=m$ and otherwise). Hereafter, we assume the relation $J=J'=(U-U')/2$. The antisymmetrized full four-point vertex ${\hat \Gamma}(\k+\q,\k;\k'+\q,\k')$, which is the dressed vertex of the bare vertex ${\hat U}^{0}$ in Eq. (\ref{eqn:HU}) in the microscopic Fermi liquid theory \cite{AGD}, is depicted in Fig. \ref{fig:FS} (d). 
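The case table for $U^{0;c}_{ll'mm'}$ and $U^{0;s}_{ll'mm'}$ can be enumerated directly; the following Python sketch (the tensor layout and function name are ours) uses the stated relation $J=J'=(U-U')/2$, i.e. $U'=U-2J$:

```python
def coulomb_vertices(U, J):
    """Bare charge/spin interaction tensors U^{0;c}, U^{0;s} of the
    2-orbital model, with U' = U - 2J and J' = J."""
    Up, Jp = U - 2*J, J
    Uc = [[[[0.0]*2 for _ in range(2)] for _ in range(2)] for _ in range(2)]
    Us = [[[[0.0]*2 for _ in range(2)] for _ in range(2)] for _ in range(2)]
    for l in range(2):
        for lp in range(2):
            for m in range(2):
                for mp in range(2):
                    if l == lp == m == mp:                      # l=l'=m=m'
                        Uc[l][lp][m][mp], Us[l][lp][m][mp] = -U, U
                    elif l == m and lp == mp and l != lp:       # l=m != l'=m'
                        Uc[l][lp][m][mp], Us[l][lp][m][mp] = Up - 2*J, Up
                    elif l == lp and m == mp and l != m:        # l=l' != m=m'
                        Uc[l][lp][m][mp], Us[l][lp][m][mp] = -2*Up + J, J
                    elif l == mp and lp == m and l != lp:       # l=m' != l'=m
                        Uc[l][lp][m][mp], Us[l][lp][m][mp] = -Jp, Jp
    return Uc, Us
```

With the parameter set used later in the text, $(U, J/U)=(3.10, 0.08)$, one has $J=0.248$ and $U'=2.604$.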
Reflecting the SU(2) symmetry of the present model, ${\hat \Gamma}$ is uniquely decomposed into the spin-channel and charge-channel four-point vertices by using the following relation: \begin{eqnarray} &&\Gamma_{ll'mm'}^{\sigma \sigma' \rho \rho'}(\k+\q,\k;\k'+\q,\k') \nonumber \\ && \ \ \ \ \ \ =\frac{1}{2}\Gamma_{ll'mm'}^{s}(\k+\q,\k;\k'+\q,\k') \vec{\bf{\sigma}}_{\sigma \sigma'} \cdot \vec{\bf{\sigma}}_{\rho' \rho} \nonumber \\ && \ \ \ \ \ \ \ +\frac{1}{2}\Gamma^{c}_{ll'mm'}(\k+\q,\k;\k'+\q,\k') \delta_{\sigma,\sigma'}\delta_{\rho',\rho} , \label{eqn:Gamma2} \end{eqnarray} where $\sigma, \sigma', \rho, \rho'$ are spin indices. We stress that ${\hat \Gamma}^{c,s}$ are fully antisymmetrized, so the requirement of the Pauli principle is satisfied. We note that ${\hat \Gamma}^{\uparrow\uparrow\uparrow\uparrow} =\frac12 {\hat \Gamma}^c+\frac12 {\hat \Gamma}^s$, ${\hat \Gamma}^{\uparrow\uparrow\downarrow\downarrow} =\frac12 {\hat \Gamma}^c-\frac12 {\hat \Gamma}^s$, and ${\hat \Gamma}^{\uparrow\downarrow\uparrow\downarrow} ={\hat \Gamma}^s$. \begin{figure}[htb] \includegraphics[width=.9\linewidth]{fig2.eps} \includegraphics[width=.9\linewidth]{fig2d.eps} \caption{(Color online) (a) Band dispersion of the 2-orbital Hubbard model and (b) FSs composed of the $d_{xz}$-orbital (green) and $d_{yz}$-orbital (red). (c) The center of the patches ($1\sim64$) on the FSs. The arrows represent the nesting vector. The tip and the tail of each arrow correspond to $(i_\a,i_\b)=(6,37),(8,38),(10,39)$. (d) Definition of the full four-point vertex $\Gamma_{ll'mm'}^{\sigma \sigma' \rho \rho'}(\k+\q,\k;\k'+\q,\k')$ in the microscopic Fermi liquid theory. } \label{fig:FS} \end{figure} \subsection{RG+cRPA Theory} \label{sec:RG+cRPA} We analyze the present model by using the RG+cRPA method, which was introduced in detail in our previous papers \cite{Tsuchiizu1,Tsuchiizu2,Tsuchiizu3}.
In this method, we introduce the original cutoff energy $\Lambda_{0}$ in order to divide each band into the higher and the lower energy regions. The higher-energy scattering processes are calculated by using the cRPA, while the lower-energy scattering processes are analyzed by solving the RG equation, in which the initial vertices in the differential equation are given by the cRPA. The lower energy region is divided into $N_p/2$ patches for each band as shown in Fig. \ref{fig:FS} (c). \begin{figure}[htb] \includegraphics[width=.9\linewidth]{fig3.eps} \caption{(Color online) The one-loop RG equation for the four-point vertex. The crossed lines represent the electron Green function with cutoff $\Lambda$. The slashed lines represent the electron propagations having the energy shell $\Lambda$. } \label{fig:FS2} \end{figure} In the RG formalism, the four-point vertex function is determined by solving the differential equations, called the RG equations. In the band representation basis, the explicit form of the RG equations is given by \begin{widetext} \begin{eqnarray} \frac{d}{d\Lambda} \Gamma_\mathrm{RG}(k_1,k_2;k_3,k_4) &=& -\frac{T}{N}\sum_{k,k'} \left[ \frac{d}{d\Lambda} G(k) \, G(k') \right] \Bigl[ \Gamma_\mathrm{RG}(k_1,k_2;k,k') \, \Gamma_\mathrm{RG}(k,k';k_3,k_4) \nonumber \\ && {} - \Gamma_\mathrm{RG}(k_1,k_3;k,k') \, \Gamma_\mathrm{RG}(k,k';k_2,k_4) - \frac{1}{2} \Gamma_\mathrm{RG}(k_1,k; k',k_4) \, \Gamma_\mathrm{RG}(k,k_2;k_3,k') \Bigr] , \end{eqnarray} \end{widetext} where $G(k)$ is the Green function multiplied by the Heaviside step function $\theta(|\e_{u,\k}|-\Lambda)$, and $k$ is the compact notation of the momentum, band, and spin indices: $k=(\k, \e_n, u, \sigma)$. The diagrammatic representation of the RG equations is shown in Fig.\ \ref{fig:FS2}. The first two contributions on the rhs represent the particle-hole channels and the last contribution is the particle-particle channel.
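The key structure of this one-loop equation, that the derivative of the vertex is quadratic in the vertex itself, already produces a divergence at a finite flow scale, which signals an instability. A one-channel toy caricature in Python (everything here is illustrative, not the actual multi-patch flow):

```python
def toy_one_loop_flow(g0, dl=1e-4, g_max=1e3):
    """Euler-integrate dg/dl = g^2, a one-channel caricature of the
    one-loop RG equation; returns the flow parameter l at which g
    exceeds g_max, approximating the divergence scale l* = 1/g0."""
    g, l = g0, 0.0
    while g < g_max:
        g += dl * g * g
        l += dl
    return l
```

The exact solution $g(l)=g_0/(1-g_0 l)$ blows up at $l^*=1/g_0$: a larger initial coupling diverges earlier, mirroring how a stronger bare interaction triggers the instability at a higher energy scale.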
The four-point vertex $\Gamma_\mathrm{RG}(k_1,k_2;k_3,k_4)$ is obtained by solving the above RG differential equation from $\Lambda_0$ to the lower cutoff energy $\w_c$. In a conventional fRG method, $\Lambda_0$ is set larger than the bandwidth $W_{\rm band}$, and the initial value is given by the bare Coulomb interaction in Eq. (\ref{eqn:HU}). In the RG+cRPA method, we set $\Lambda_0<W_{\rm band}$, and the initial value is given by the constrained RPA to include the higher-energy processes without over-counting of diagrams \cite{Tsuchiizu1}. The merits of the RG+cRPA method are as follows: (i) The higher-energy processes are accurately calculated within the cRPA by introducing fine (such as $128\times128$) $\k$-meshes. This method is justified since the VCs are less important at higher energies. In the conventional $N_p$-patch fRG method, numerical errors due to the violation of momentum conservation become serious for higher-energy processes. (ii) The scattering processes contributed by the valence bands (=Van-Vleck processes), which are important in multiorbital systems to derive the physical orbital susceptibility, are taken into account in the RG+cRPA method. In particular, the Van-Vleck processes are crucial to obtain the orbital susceptibilities without unphysical behaviors. The full four-point vertex in Fig. \ref{fig:FS} (d) is expressed in the band basis. On the other hand, we solve the four-point vertex in the orbital basis in the present RG+cRPA study, expressed as ${\Gamma}_{uu'vv'}^{\s\s'\rho\rho'}(\k_1,\k_2;\k_3,\k_4)$. These expressions are transformed into each other by using the unitary matrix $u_{l,u}(\k)=\langle l,\k|u,\k \rangle$. In the present RG+cRPA study, we assume that each $\k_i$ is on the FSs, so we are allowed to drop the four band indices $u,u',v,v'$.
In this paper, we set $\Lambda_{0}=1.0$ ($<$ band width) and $N_p=64$, and introduce the logarithmic energy scaling parameter $\Lambda_{l}=\Lambda_{0}e^{-l}$ ($l\ge0$) in solving the RG equation. We verified that reliable results are obtained by setting $\Lambda_{0}\sim W_{\rm band}/2$. \subsection{Phase diagram obtained by the RG+cRPA} \label{sec:RG} First, we calculate the spin/charge susceptibilities and SSC/TSC susceptibilities at $T=5\times 10^{-4}$ by performing the RG+cRPA analysis. The renormalization is performed until $\Lambda_{l}$ reaches $\Lambda_{l_c}=10^{-2}T$ (i.e., $l_c={\rm ln}(\Lambda_0/10^{-2}T)$). The charge (spin) susceptibility in the multiorbital model is \begin{eqnarray} \chi^{c(s)}_{l l' m m'}(q)=\int^{\beta}_{0} d\tau \frac{1}{2}\left\langle A^{c(s)}_{l l'}({\bm q},\tau)A^{c(s)}_{m' m}({\bm -\q},0)\right\rangle e^{i\w_l\tau}, \label{eq:suscep} \end{eqnarray} where \begin{eqnarray} A^{c(s)}_{l\, l'}({\bm q})=\sum_{\bm k} (c^{\dagger}_{{\k} l' \uparrow} c_{{\k+\q} l \uparrow}+(-) c^{\dagger}_{{\bm k} l' \downarrow} c_{{\k+\q} l \downarrow}) . \label{eqn:A} \end{eqnarray} The obtained susceptibilities are shown in Figs. \ref{fig:phase} (a) and (b): $\chi^c_{x^2-y^2}({\bm q})=\sum_{l , m}(-1)^{l+m}\chi^{c}_{l, l, m, m}({\bm q})$ is the orbital susceptibility with respect to the orbital polarization $n_{xz}-n_{yz}$, and $\chi^{s}({\bm q})=\sum_{l , m}\chi^{s}_{l, l, m, m}({\bm q})$ is the total spin susceptibility. We set the parameters $(U, J/U)=(3.10, 0.08)$ and $T=5\times10^{-4}$, which corresponds to the black circle in the phase diagram in Fig. \ref{fig:phase} (c). Both $\chi^{s}(\q)$ and $\chi^c_{x^2-y^2}(\q)$ have maxima around the nesting vector ${\bm Q}=(2\pi/3, 2\pi/3)$, and the relation $\chi^{s}(\Q)\approx \chi^c_{x^2-y^2}(\Q)$ is realized. The strong peak in $\chi^{s}(\Q)$ has been observed by the inelastic neutron scattering study for Sr$_2$RuO$_4$ \cite{neutron}. 
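For the parameters quoted above, the final value of the scaling parameter follows from simple arithmetic; the short sketch below checks $l_c=\ln(\Lambda_0/10^{-2}T)$ and that $\Lambda_l=\Lambda_0 e^{-l}$ recovers the final cutoff (an illustrative check only, not part of the actual calculation).

```python
import math

Lambda0 = 1.0           # initial cutoff Lambda_0 (< band width), as in the text
T = 5e-4                # temperature used in the RG+cRPA calculation
Lambda_lc = 1e-2 * T    # final cutoff Lambda_{l_c} = 10^{-2} T

# l_c = ln(Lambda_0 / 10^{-2} T)
l_c = math.log(Lambda0 / Lambda_lc)
print(l_c)              # ~12.2

# Lambda_l = Lambda_0 * exp(-l) reproduces the final cutoff at l = l_c
print(Lambda0 * math.exp(-l_c))
```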
In addition to this result, the STM study \cite{STM} indicates that the TSC in Sr$_2$RuO$_4$ mainly originates from the electronic correlation in the ($\a,\b$)-bands. We stress that the strong enhancement of $\chi^c_{x^2-y^2}$ cannot be obtained in the RPA. This fact means that the strong orbital fluctuations originate from the $U$-VC shown in Fig. \ref{fig:fig1} (b), which is appropriately captured by the RG method. \begin{figure}[htb] \includegraphics[width=.9\linewidth]{fig4.eps} \caption{(Color online) (a) $\q$-dependence of the obtained total spin susceptibility $\chi^{s}(\q)$ enlarged at ${\bm q} \approx (2\pi/3,2\pi/3)$. (b) Obtained quadrupole susceptibility $\chi^c_{x^2-y^2}(\q)$. (c) SC phase diagram obtained by the RG+cRPA method. } \label{fig:phase} \end{figure} Second, we calculate the TSC (SSC) susceptibilities $\chi^{\rm{SC}}_{t(s)}$ by the RG+cRPA method. It is defined as \begin{eqnarray} \chi^{\rm{SC}}_{t(s)}= \frac{1}{2}\int^{\beta}_{0} d\tau \left\langle B^{\dagger}_{t(s)}(\tau)B_{t(s)}(0)\right\rangle , \label{eqn:suscepSC} \end{eqnarray} where \begin{eqnarray} B_{t(s)}=\sum_{{\bm k}\in {\rm FS}}\Delta_{t(s)}({\bm k}) c_{{\bm k},\uparrow}c_{{\bm -\k},\uparrow(\downarrow)} . \label{eqn:B} \end{eqnarray} The gap function $\Delta_{t(s)}({\bm q})$ in Eq. (\ref{eqn:B}) is uniquely determined by maximizing the SC susceptibilities \cite{Tsuchiizu2}. The obtained numerical results for $T=5\times10^{-4}$ and $\Lambda_{l_c}=10^{-2}T$ are summarized as the phase diagram in Fig. \ref{fig:phase} (c). The boundaries of the orbital and magnetic orders are shown by the broken lines, and the relation $\chi^s(\Q)= \chi^c_{x^2-y^2}(\Q)$ holds on the dotted line. The boundaries for the TSC and SSC transitions are shown by the solid lines. Thus, the TSC and SSC states are respectively realized below the orbital and magnetic order boundaries, for a wide range of parameters. 
We stress that the strong orbital fluctuations and the TSC state are obtained for $J/U\lesssim O(0.1)$, which is comparable to the ratio $J/U=0.0945$ in FeSe derived from the first-principles study. The present result is substantially improved compared to the previous phase diagram for $\Lambda_0=1$ in Ref. \cite{Tsuchiizu2}, in which the strong orbital fluctuations appear only for $J/U<0.03$. The reason for this improvement is that the four-point vertex in Ref. \cite{Tsuchiizu2} was underestimated, since we included only the processes that rigorously satisfy the momentum conservation in solving the RG equation. In the present study, we allow the scattering processes if the momentum conservation is satisfied within the patch resolution, in a manner similar to that explained in Refs. \cite{Metzner,Honerkamp,RG-Review}. This improved method was utilized in the study of the charge-density-wave in cuprate superconductors \cite{Tsuchiizu3}. The obtained TSC gap function belongs to the $E_{u}$ representation, and approximately follows the $\k$-dependence ($\Delta_{t,x}({\bm k})$,$\Delta_{t,y}({\bm k})$) $\propto (\sin 3k_{x},\sin 3k_{y})$. The SSC gap function belongs to the $A_{1g}$ or $B_{1g}$ symmetry in the phase diagram in Fig. \ref{fig:phase} (c), similarly to our previous study in Ref. \cite{Tsuchiizu2}. Until now, many theoretical studies on the mechanism of the TSC in Sr$_2$RuO$_4$ have been performed. They are roughly classified into the following two scenarios. One of them is that the TSC is realized mainly in a two-dimensional (2D) FS$\gamma$ composed of the $d_{xy}$-orbital \cite{Nomura, Wang}. Nomura and Yamada explained the TSC state by using the higher-order perturbation theory \cite{Nomura}. In addition, Wang {\it et al}. performed the 2D RG and argued that the TSC is realized on the FS$\gamma$ in the presence of spin fluctuations at $\q =(0.19\pi,0.19\pi)$. 
On the other hand, the TSC originating from the q1D FSs has been discussed by applying the perturbation theory \cite{Kivelson, RG-Scaffidi} and the RPA \cite{Takimoto}. Takimoto proposed the orbital-fluctuation-mediated TSC in the RPA \cite{Takimoto}. However, under the realistic condition $U' < U$, the TSC could not overwhelm the SSC in the RPA. In contrast to the RPA, the present authors obtained the TSC state in a wide parameter range under the realistic condition $U' < U$ by using the RG+cRPA theory. As shown in the following section, these results originate from the important roles of the $U$-VC, which is neglected in the RPA. From the experimental aspect, many efforts have been devoted to revealing the electronic state and the gap structure in Sr$_2$RuO$_4$. For example, strong AFM fluctuations at $\Q$ due to the nesting of the $\alpha$ and $\beta$ bands were observed by neutron scattering spectroscopy \cite{neutron}. In addition, a large SC gap with 2$|\Delta| \approx 5T_{c}$ was observed by the scanning tunneling microscopy measurement \cite{STM}. The authors expected that the observed large gap appears on the q1D FSs, since the tunneling will be dominated by the ($d_{xz}$,$d_{yz}$) orbitals that extend along the $z$ axis. These experiments indicate that the active bands of the TSC in Sr$_2$RuO$_4$ are the q1D FSs. \section{Origin of orbital fluctuation mediated SC: Significant Role of the $U$-VC} In the previous section, we explained that the orbital-fluctuation-mediated TSC state is obtained for a realistic parameter range by using the improved RG+cRPA method. In this section, we reveal the microscopic reason why the orbital-fluctuation-mediated pairing interaction becomes superior to the spin-fluctuation-mediated one in the case that ${\hat \chi}^s(q)$ and ${\hat \chi}^c(q)$ are comparable. This is the main aim of the present paper. 
\subsection{Gap equation beyond the ME scheme } Here, we study the SC state by analyzing the linearized gap equation based on the pairing interaction obtained by the RG equation \cite{RG-gapeq}. The gap equation in the band basis is given as \begin{eqnarray} &&\lambda_{t(s)} \Delta_{t(s)}({\bm k}) =\nonumber \\ &&-\int_{\rm FS} \frac{d{\bm k'}}{v_{{\bm k'}}} {V}^{\w_c}_{t(s)}({\bm k},{\bm k'}) \Delta_{t(s)}({\bm k'}) \ln{\frac{1.13\omega_{c}}{T}} , \label{eqn:gap-eq} \end{eqnarray} where $\Delta_{t(s)}({\bm k})$ is the TSC (SSC) gap function on the FSs, which has odd (even) parity. In Eq. (\ref{eqn:gap-eq}), $\k$ and $\k'$ are the momenta on the FS$\a$ and FS$\beta$, $\lambda_{t(s)}$ is the eigenvalue of the gap equation, and ${V}^{\w_c}_{t(s)}$ is the pairing interaction given by the RG equation by setting the lower-energy cutoff as $\Lambda_{l_c}= \omega_c$ (i.e., $l_c= {\rm ln}(\Lambda_0/\omega_c)$). The expression of the pairing interaction is given below. We choose the cutoff $\omega_c$ so as to satisfy $\w_c \gg T$, and assume that the renormalization of the susceptibilities ${\hat \chi}^{s,c}(\q)$ saturates for $\Lambda_l<\w_c$. In deriving Eq. (\ref{eqn:gap-eq}), we used the relation $\int_{-\w_c}^{\w_c} d\e_{\k'} \frac1{2\e_{\k'}}{\rm th}(\e_{\k'}/2T) = {\rm ln} (1.13\w_c/T)$. In the present RG study, the pairing interaction in the band basis is directly given by solving the RG equation for the four-point vertex ${\Gamma}_{{\rm RG}}$, down to the lower-energy cutoff $\Lambda_{l_c}= \omega_c$. We set $\w_c=12T= 6\times 10^{-3}$. By using the four-point vertex given by the RG+cRPA in the band basis representation, the pairing interaction in Eq. 
(\ref{eqn:gap-eq}) with the $U$-VC is given as \begin{eqnarray}{V}^{}_{t,{\rm RG}}({\bm{k},\bm{k}'})&=& -\frac{1}{4}{\Gamma}^{s}_{{\rm RG}}(\k,\k';-\k',-\k) \nonumber \\ &&-\frac{1}{4}{\Gamma}^{c}_{{\rm RG}}(\k,\k';-\k',-\k) , \label{eqn:V1t} \\ {V}^{}_{s,\rm{RG}}(\k,\k')&=& \frac{3}{4}{\Gamma}^{s}_{{\rm RG}}(\k,\k';-\k',-\k) \nonumber \\ &&-\frac{1}{4}{\Gamma}^{c}_{{\rm RG}}(\k,\k';-\k',-\k) . \label{eqn:V1s} \end{eqnarray} In ${V}^{}_{t(s),{\rm RG}}(\k,\k')$, the $U$-VC for the pairing interaction shown in Fig. \ref{fig:fig1} (c) is automatically included. In Fig. \ref{fig:diagram}, we show the typical diagrams included in ${\Gamma}_{\rm RG}$: the bare Coulomb interaction term is given in Fig. \ref{fig:diagram} (a). The single- and crossing-fluctuation-exchange terms are shown in Figs. \ref{fig:diagram} (b) and (c), respectively. The particle-particle ladder term is shown in Fig. \ref{fig:diagram} (d), which is expected to be small when $\w_c\gg T_{\rm c}$. The typical diagrams for the $U$-VC are shown in Fig. \ref{fig:diagram} (e). \begin{figure}[htb] \includegraphics[width=.8\linewidth]{fig5.eps} \caption{ (a) The bare interaction, (b) single-fluctuation-exchange term, (c) crossing-fluctuation-exchange term, and (d) the lowest particle-particle term. (e) Typical diagrams for the $U$-VC. For the charge sector, the Maki-Thompson (MT) term is negligibly small compared with the AL term in the presence of moderate spin fluctuations. The $O(\{U^0\}^3)$-terms in the MT and AL terms are dropped to avoid the double counting. In (a)-(e), spin indices are not written explicitly.} \label{fig:diagram} \end{figure} In order to verify the importance of the $U$-VC, we also introduce the pairing interaction within the ME scheme: for this purpose, we solve the RG equation for ${\hat \chi}^{c(s)}_{\rm RG}$ down to the lower cutoff $\Lambda_{l_c}=\w_c$. We set $\w_c=12T= 6\times 10^{-3}$. 
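The logarithmic Cooper factor entering Eq. (\ref{eqn:gap-eq}) can be checked numerically. The sketch below evaluates $\int_{-\w_c}^{\w_c} d\e \, \tanh(\e/2T)/2\e$ by composite Simpson's rule and compares it with $\ln(1.13\,\w_c/T)$ for $\w_c=12T$; this is an illustrative check of the quoted relation, not part of the RG calculation.

```python
import math

def bcs_log(wc_over_T, n=4000):
    """Evaluate  int_{-wc}^{wc} dE tanh(E/2T)/(2E)
    = int_0^{wc/T} tanh(x/2)/x dx  (the integrand is even; x = E/T),
    using composite Simpson's rule with n (even) subintervals."""
    a, b = 0.0, wc_over_T
    h = (b - a) / n
    def f(x):
        # tanh(x/2)/x -> 1/2 as x -> 0
        return 0.5 if x == 0.0 else math.tanh(0.5 * x) / x
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

wc_over_T = 12.0                      # omega_c = 12 T, as in the text
numeric = bcs_log(wc_over_T)
approx = math.log(1.13 * wc_over_T)   # ln(1.13 * omega_c / T)
print(numeric, approx)                # agree to about 1e-2
```

The small residual difference reflects the rounding of the asymptotic BCS constant $2e^\gamma/\pi\approx1.134$ to $1.13$.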
Using the obtained ${\hat \chi}^{c(s)}_{\rm RG}$, the antisymmetrized four-point vertex in the single-fluctuation-exchange approximation is expressed in the orbital basis as follows: \begin{eqnarray} &&{\Gamma}^{s}_{\chi,12,34}= \hat{U}^{0;s}_{12,34} +(\hat{U}^{0;s}\hat{\chi}^{s}(1-2)\hat{U}^{0;s})_{12,34} \nonumber \\ &&\ \ \ \ \ \ \ \ \ -\frac{1}{2}(\hat{U}^{0;c}\hat{\chi}^{c}(1-3)\hat{U}^{0;c})_{13,24} \nonumber \\ &&\ \ \ \ \ \ \ \ \ +\frac{1}{2}(\hat{U}^{0;s}\hat{\chi}^{s}(1-3)\hat{U}^{0;s})_{13,24} , \label{eqn:V3s} \\ &&{\Gamma}^{c}_{\chi,12,34}=\hat{U}^{0;c}_{12,34} +(\hat{U}^{0;c}\hat{\chi}^{c}(1-2)\hat{U}^{0;c})_{12,34} \nonumber \\ &&\ \ \ \ \ \ \ \ \ -\frac{1}{2}(\hat{U}^{0;c}\hat{\chi}^{c}(1-3)\hat{U}^{0;c})_{13,24} \nonumber \\ &&\ \ \ \ \ \ \ \ \ -\frac{3}{2}(\hat{U}^{0;s}\hat{\chi}^{s}(1-3)\hat{U}^{0;s})_{13,24} . \label{eqn:V3c} \end{eqnarray} Here, $\hat{U}^{0;c(s)}$ is the bare Coulomb interaction in Eq. (\ref{eqn:HU}), and $\hat{\chi}^{c(s)}_{{\rm RG}}$ is the $(2\times2)\times(2\times2)$ matrix. The diagrammatic expression for $\hat{V}^{}_{t(s),\chi}$ is given by dropping the $U$-VC in Fig. \ref{fig:diagram} (b). The pairing interaction $V_{t,\chi}(\k,\k')$ [$V_{s,\chi}(\k,\k')$] in the absence of the $U$-VC is obtained by inserting Eqs. (\ref{eqn:V3s})-(\ref{eqn:V3c}) into Eq. (\ref{eqn:V1t}) [Eq. (\ref{eqn:V1s})], after performing the unitary transformation by using $u_{l,u}(\k)$. Then, ${\hat \chi}^{s,c}(1-2)$ [${\hat \chi}^{s,c}(1-3)$] in Eqs. (\ref{eqn:V3s}) and (\ref{eqn:V3c}) is replaced with ${\hat \chi}^{s,c}(\k-\k')$ [${\hat \chi}^{s,c}(\k+\k')$]. \subsection{Analysis of the $U$-VC based on the RG+cRPA method} Hereafter, we show the numerical results for the parameters ($U=3.10$, $J/U=0.08$, $\w_c=12T=6\times 10^{-3}$), which corresponds to the black circle in the phase diagram in Fig. \ref{fig:phase} (c). The renormalization of ${\hat \chi}^{s,c}(\q)$ saturates for $\Lambda_l<\w_c$. 
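The relative weights of the spin and charge channels in Eqs. (\ref{eqn:V1t}) and (\ref{eqn:V1s}) can be made explicit with toy numbers. The values below are illustrative only, not the computed vertices: they show that a dominant charge-channel vertex makes the triplet combination larger in magnitude, while a dominant spin-channel vertex makes the singlet combination larger.

```python
def pairing_interactions(gamma_s, gamma_c):
    """Channel combinations of Eqs. (V1t)/(V1s):
    V_t = -(Gamma^s + Gamma^c)/4,  V_s = (3*Gamma^s - Gamma^c)/4."""
    v_t = -0.25 * (gamma_s + gamma_c)
    v_s = 0.75 * gamma_s - 0.25 * gamma_c
    return v_t, v_s

# Charge channel dominant (toy analogue of the U-VC-enhanced case):
print(pairing_interactions(1.0, 5.0))   # (-1.5, -0.5): the triplet combination dominates
# Spin channel dominant (toy analogue of the RPA-like case):
print(pairing_interactions(5.0, 1.0))   # (-1.5, 3.5): the singlet combination dominates in magnitude
```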
First, we solve the gap equation (\ref{eqn:gap-eq}) using the pairing interactions ${\hat V}_{t,{\rm RG}}$ and ${\hat V}_{s,{\rm RG}}$ in Eqs. (\ref{eqn:V1t})-(\ref{eqn:V1s}). Figures \ref{fig:gap} (a) and (b) show the obtained gap functions for the TSC state $\Delta_{t,x}(\theta)$ and the SSC state $\Delta_{s}(\theta)$, respectively. The eigenvalues are $\lambda_t=0.47$ and $\lambda_s=0.26$, respectively. The obtained $E_{1u}$ TSC gap and $A_{1g}$ SSC gap are essentially equivalent to the gap structures derived from the SC susceptibilities in Eq. (\ref{eqn:suscepSC}) by the RG+cRPA: see Ref. \cite{Tsuchiizu2}. Thus, the present gap equation analysis is essentially equivalent to the RG study for the SC state, in which the SC gap function is uniquely obtained by maximizing the SC susceptibility. \begin{figure}[htb] \includegraphics[width=.9\linewidth]{fig6.eps} \caption{(Color online) (a) $E_{1u}$-type TSC gap function $\Delta_{t,x}(\theta)$ on the FS$\a$ and FS$\b$ as functions of $\theta$. (b) $A_{1g}$-type SSC gap function $\Delta_{s}(\theta)$. (c) $\bar{\lambda}_{t(s)}$ for ${\hat V}_{t(s),{\rm RG}}$ as functions of $\w_c$. (d) $\bar{\lambda}_{t(s)}$ for ${\hat V}_{t(s),{\chi}}$. } \label{fig:gap} \end{figure} Using the solution of the gap equation $\Delta_{t(s)}(\k)$, the averaged pairing interaction $\bar{\lambda}_{t(s)}={\lambda}_{t(s)}/{\rm ln}(1.13\w_c/T)$ is expressed as \begin{eqnarray} \bar{\lambda}_{t(s)} = \frac{\displaystyle \int_{{\rm FS}} \frac{d\bm{k}}{v_{\bm{k}}} \int_{{\rm FS}} \frac{d\bm{k}'}{v_{\bm{k'}}} V_{t(s)}^{\w_c}({\k,\k'}) \Delta_{t(s)}({\bm k}) \Delta_{t(s)}({\bm k'}) } {\displaystyle \int_{{\rm FS}} \frac{d{\bm k}}{v_{\bm k}} \Delta_{t(s)}({\bm k}) \Delta_{t(s)}({\bm k})} . \label{eqn:averaged} \end{eqnarray} Figure \ref{fig:gap} (c) shows the obtained $\bar{\lambda}_{t}$ and $\bar{\lambda}_{s}$ as functions of $\Lambda_l$, where $\Delta_{t}(\k)$ and $\Delta_{s}(\k)$ are fixed to the gap structures shown in Figs. 
\ref{fig:gap} (a) and (b), respectively. Note that the relation $T_{{\rm c},t(s)}=1.13\w_c\exp(-1/\bar{\lambda}_{t(s)})$ holds. The scaling curve of $\bar{\lambda}_{t,s}$ saturates to a constant when $\Lambda_l$ is smaller than $T$, as shown by the vertical dotted lines. We find the approximate relation $\bar{\lambda}_{t} \sim 3\bar{\lambda}_{s}$ in Fig. \ref{fig:gap} (c), irrespective of the relation $\chi^s(\Q)\sim\chi^c_{x^2-y^2}(\Q)$ shown in Figs. \ref{fig:phase} (a) and (b). In order to verify the importance of the $U$-VC, we solve the gap equation by using ${\hat V}_{x,\chi}$, in which the $U$-VC is absent. Figure \ref{fig:gap} (d) shows the obtained $\bar{\lambda}_{t}$ and $\bar{\lambda}_{s}$ as functions of $\Lambda_l$. Here, $\Delta_{t}(\k)$ and $\Delta_{s}(\k)$ are fixed to the gap structures in Figs. \ref{fig:gap} (a) and (b), respectively. (A similar result is obtained even if the solution of the gap equation for ${\hat V}_{t(s),\chi}$ is used.) Thus, the relation $\bar{\lambda}_{t} \sim \bar{\lambda}_{s}/3$ is obtained if the $U$-VC is dropped. Therefore, the relation $\bar{\lambda}_{t} \gg \bar{\lambda}_{s}$ is realized when $\hat{V}^{}_{t(s),{\rm RG}}$ is used, while the opposite relation $\bar{\lambda}_{t} \ll \bar{\lambda}_{s}$ is obtained for $\hat{V}^{}_{t(s),\chi}$. Thus, we can conclude that the TSC is realized by the enhancement of the orbital-fluctuation-mediated pairing interaction by the charge-channel $U$-VC, and/or the suppression of the spin-fluctuation-mediated pairing by the spin-channel $U$-VC. \begin{figure}[htb] \includegraphics[width=.9\linewidth]{fig7.eps} \caption{(Color online) Spin- and charge-channel pairing interactions obtained by using the RG+cRPA method: (a) Spin-channel interaction ${\tilde \Gamma}^s_\chi(\k,\k')$ and (b) charge-channel one ${\tilde \Gamma}^c_\chi(\k,\k')$ in the absence of the $U$-VC. (c) ${\tilde \Gamma}^s_{\rm RG}(\k,\k')$ and (d) ${\tilde \Gamma}^c_{\rm RG}(\k,\k')$ in the presence of the $U$-VC. 
Here, ($\k,\k'$) is the pair of momenta for ($i_\a,i_\b$). (e) The ratios ${\tilde \Gamma}^c_\chi(\k,\k')/{\tilde \Gamma}^s_\chi(\k,\k')$ and ${\tilde \Gamma}^c_{\rm RG}(\k,\k')/{\tilde \Gamma}^s_{\rm RG}(\k,\k')$ as functions of $U$. $\k$ and $\k'$ are set as the start and end positions of the nesting vector shown in Fig. \ref{fig:FS} (b). We take the average over the ellipsoidal area. } \label{fig:interaction} \end{figure} To understand the role of the $U$-VC in more detail, we directly examine the momentum dependence of the spin- (charge-) channel interaction without the $U$-VC, ${\tilde \Gamma}^{s(c)}_\chi(\k,\k') \equiv \Gamma^{s(c)}_\chi(\k,\k';-\k',-\k)$, in addition to those with the $U$-VC, ${\tilde \Gamma}^{s(c)}_{\rm RG}(\k,\k') \equiv \Gamma^{s(c)}_{\rm RG}(\k,\k';-\k',-\k)$. Figures \ref{fig:interaction} (a)-(d) show the obtained interactions for the parameters ($U=3.10$, $J/U=0.08$, $\w_c=12T=6\times 10^{-3}$). Here, $i_\a$ and $i_\b$ correspond to the patches on FS-$\a$ and FS-$\b$, respectively. In each panel, the pairs of patches inside the solid ellipsoidal area, $(i_\a,i_\b)=(6,37),(8,38),(10,39)$, correspond to the nesting vector $\k\rightarrow \k'$ depicted by the arrows in Fig. \ref{fig:FS} (c). As shown in Figs. \ref{fig:interaction} (a) and (b), both ${\tilde \Gamma}^{s}_\chi(\k,\k')$ and ${\tilde \Gamma}^{c}_\chi(\k,\k')$ take large positive values when ($i_\a,i_\b$) is inside the solid ellipsoidal area. Here, $\k-\k'\approx \Q \equiv (2\pi/3,2\pi/3)$. These large interactions originate from the peak structure of $\chi^{s}(\q)$ and $\chi^c_{x^2-y^2}(\q)$ at $\q\approx \Q$, as shown in Figs. \ref{fig:phase} (a) and (b). It is found that, in the absence of the $U$-VC, ${\tilde \Gamma}^{s}_\chi(\k,\k')$ becomes larger than ${\tilde \Gamma}^{c}_\chi(\k,\k')$ inside the ellipsoidal area [$(i_\a,i_\b) \approx (7,37)$] in Figs. \ref{fig:interaction} (a) and (b). 
For this reason, the relation ${\bar \lambda}_s \gg {\bar \lambda}_t$ is realized by neglecting the $U$-VC, as shown in Fig. \ref{fig:gap} (d). Figures \ref{fig:interaction} (c) and (d) show the spin- and charge-channel interactions ${\tilde \Gamma}^{s}_{\rm RG}(\k,\k')$ and ${\tilde \Gamma}^{c}_{\rm RG}(\k,\k')$ in the presence of the $U$-VC. Both ${\tilde \Gamma}^{s}_{\rm RG}(\k,\k')$ and ${\tilde \Gamma}^{c}_{\rm RG}(\k,\k')$ take large positive values when $\k-\k'\approx\Q$. In the presence of the $U$-VC, ${\tilde \Gamma}^{c}_{\rm RG}(\k,\k')$ becomes larger than ${\tilde \Gamma}^{s}_{\rm RG}(\k,\k')$ inside the ellipsoidal area. A comparison between Figs. \ref{fig:interaction} (a) and (c) [(b) and (d)] shows that the spin-channel [charge-channel] interaction is reduced [enlarged] by the $U$-VC. For this reason, ${\bar \lambda}_t \gg {\bar \lambda}_s$ is realized by taking the $U$-VC into account correctly, as shown in Fig. \ref{fig:gap} (c). We note that the large negative values in Figs. \ref{fig:interaction} (c) and (d) at $(i_\a,i_\b)=(6+16,37),(8+16,38),(10+16,39)$ originate from ${\hat \chi}^c(\k+\k')$ for $\k+\k'\approx\Q$, since its contribution is enlarged by the charge-channel $U$-VC in ${\tilde \Gamma}^{s,c}_{\rm RG}(\k,\k')$. Figure \ref{fig:interaction} (e) shows the ratios ${\tilde \Gamma}^c_\chi(\k,\k')/{\tilde \Gamma}^s_\chi(\k,\k')$ and ${\tilde \Gamma}^c_{\rm RG}(\k,\k')/{\tilde \Gamma}^s_{\rm RG}(\k,\k')$ at $(i_\a,i_\b) \approx (8,38)$ [$\k-\k'\approx\Q$] given by the RG+cRPA as functions of $U$. We set $\w_c=12T=6\times10^{-3}$ and $J/U=0.08$. $\k$ and $\k'$ are set as the start and end positions of the nesting vector shown in Fig. \ref{fig:FS} (c). For $U\rightarrow+0$, both ${\tilde \Gamma}^c_\chi/{\tilde \Gamma}^s_\chi$ and ${\tilde \Gamma}^c_{\rm RG}/{\tilde \Gamma}^s_{\rm RG}$ are equal to $-1$. They become positive for $U \gtrsim 1$ since ${\tilde \Gamma}^c_{\chi({\rm RG})}$ changes its sign to positive. 
For $U\gtrsim2$, ${\tilde \Gamma}^c_\chi/{\tilde \Gamma}^s_\chi \ll1$ whereas ${\tilde \Gamma}^c_{\rm RG}/{\tilde \Gamma}^s_{\rm RG}\gg1$. This result means that ${\tilde \Gamma}^{c(s)}_{\rm RG}$ is enlarged (suppressed) by the $U$-VC for a wide range of $U$. To summarize, a comparison between Figs. \ref{fig:interaction} (a) and (c) [(b) and (d)] shows that the spin-channel [charge-channel] interaction is drastically reduced [enlarged] by the $U$-VC. We stress that, except for the magnitude, the structure of ${\tilde \Gamma}^{x}_{\rm RG}(\k,\k')$ and that of ${\tilde \Gamma}^{x}_{\chi}(\k,\k')$ ($x=s,c$) are very similar. In addition, when $\k$ and $\k'$ are on the same FS, both ${\tilde \Gamma}^{x}_{\rm RG}$ and ${\tilde \Gamma}^{x}_\chi$ remain small. These facts reveal the importance of the single-fluctuation-exchange term in Fig. \ref{fig:diagram} (b), since the multi-fluctuation-exchange terms such as that in Fig. \ref{fig:diagram} (c) give a different momentum dependence. On the basis of the Fermi liquid theory, the same charge-channel $U$-VC enlarges the charge irreducible susceptibility ${\hat \Phi}^c(q)$ and the pairing interaction, as we show in Fig. \ref{fig:fig1}. Thus, the orbital-fluctuation-mediated pairing will be strongly magnified by the $U$-VC when the orbital fluctuations are driven by the VC. \subsection{Analysis of the $U$-VC based on the perturbation theory} \label{sec:UVC2} In the previous section, we found the significant role of the $U$-VC in the pairing interaction. The orbital-fluctuation-mediated pairing interaction is strongly magnified by the charge-channel $U$-VC. We also found the strong suppression of the spin-fluctuation-mediated interaction due to the spin-channel VC in multiorbital systems. In this section, we perform the diagrammatic calculation for the $U$-VC shown in Fig. \ref{fig:diagram} (e), and confirm that the charge-channel $U$-VC is strongly enlarged by the AL-VC. 
In addition, the suppression by the spin-channel $U$-VC is mainly given by the $(U^0)^3$-term. The charge- and spin-channel MT-terms in Fig. \ref{fig:diagram} (e) are expressed as \begin{eqnarray} U^{c, {\rm MT}}_{l'm'lm} (k,k') &=& \frac{T}{2} \sum_{q} \sum_{abcd} U^{0;c}_{l'm'bc} \big\{ I^{c}_{aldm}(q)+3 I^{s}_{aldm}(q) \big\} \nonumber \\ & &\times G_{ab}(k+q)G_{cd} (k'+q), \\ U^{s, {\rm MT}}_{l'm'lm} (k, k') &=& \frac{T}{2} \sum_{q} \sum_{abcd} U^{0;s}_{l'm'bc} \big\{ I^{c}_{aldm}(q) - I^{s}_{aldm}(q) \big\} \nonumber \\ & &\times G_{ab} (k+q) G_{cd} (k'+q), \end{eqnarray} where ${\hat I}^x(q)= {\hat U}^{0;x} ({\hat \chi}^x_{\rm RPA}(q)+\{{\hat U}^{0;x}\}^{-1}){\hat U}^{0;x}$. Also, the charge- and spin-channel AL-terms in Fig. \ref{fig:diagram} (e) are \begin{eqnarray} &&U^{c, {\rm AL}}_{l'm'lm} (k, k') = \frac{T}{2} \sum_{q} \sum_{abcdefgh} U^{0;c}_{l'm'af} \nonumber \\ && \times \big\{ \Lambda_{abcdef} (k - k', q) + \Lambda_{fcbeda} (k - k', - q - k + k') \big\} \nonumber \\ && \times \big\{ I^{c}_{bclg} (q + k - k') I^{c}_{mhed} (q) + 3 I^{s}_{bclg} (q + k - k') I^{s}_{mhed} (q) \big\} \nonumber \\ && \times G_{gh} (k' - q) , \label{eqn:ALc} \\ &&U^{s, {\rm AL}}_{l'm'lm} (k, k') = \frac{T}{2} \sum_{q} \sum_{abcdefgh} U^{0;s}_{l'm'af} \nonumber \\ && \times \big\{ \Lambda_{abcdef} (k - k', q) + \Lambda_{fcbeda} (k - k', - q - k + k') \big\} \nonumber \\ && \times \big\{ I^{s}_{bclg} (q + k - k') I^{c}_{mhed} (q) + I^{c}_{bclg} (q + k - k') I^{s}_{mhed} (q) \big\} \nonumber \\ && \times G_{gh} (k' - q) \nonumber \\ && + \delta U^{s, {\rm AL}}_{l'm'lm} (k, k'), \label{eqn:ALs} \end{eqnarray} where $a\sim h$ are orbital indices, and ${\hat \Lambda}(q,q')$ is the three-point vertex given as \begin{eqnarray} \Lambda_{abcdef} (q, q') = - T \sum_{p} G_{ab} (p + q) G_{cd} (p - q') G_{ef} (p) . \end{eqnarray} The last term in Eq. 
(\ref{eqn:ALs}) is given as $\delta U^{s, {\rm AL}}_{l'm'lm} (k, k') =\frac{T}{2} \sum_{q} \sum_{abcdefgh} U^{s, 0}_{l'm'af} \big\{ \Lambda_{abcdef} (k - k', q) - \Lambda_{fcbeda} (k - k', - q - k + k') \big\} 2 I^{s}_{bclg} (q + k - k') I^{s}_{mhed} (q) G_{gh} (k' - q)$, which is found to be very small. \begin{figure}[htb] \includegraphics[width=.9\linewidth]{fig8.eps} \caption{(Color online) (a) The ratios $(U^x_{\rm eff}/U^0)^2_{\rm diagram} \equiv (U^x_{\rm with\mbox{-}{\it U}VC}(\k,\k')/U^x_{\rm no\mbox{-}{\it U}VC}(\k,\k'))^2$ ($x=c,s$) given by the diagrammatic calculation as functions of the spin Stoner factor $\a_S$. For the $U$-VC, we perform the diagrammatic calculation for Fig. \ref{fig:diagram} (e). (b) Third-order term with respect to $U$ for the $U$-VC: We put $U=U'$ and $J=0$ for simplicity. This term is scaled as $\sim (2N_{\rm orb}-1)$, where $N_{\rm orb}$ is the number of $d$-orbitals. (c) $(U^x_{\rm eff}/U^0)^2_{\rm RG} \equiv {\tilde \Gamma}^x_{\rm RG}/{\tilde \Gamma}^x_{\chi}$ given by the RG+cRPA method for $2.0\le U \le 3.1$. Inset: $(U^s_{\rm eff}/U^0)^2_{\rm RG}$ for $0\le U\le 3.1$. } \label{fig:perturbation} \end{figure} Figure \ref{fig:perturbation} (a) shows the ratios $(U^x_{\rm eff}/U^0)^2_{\rm diagram} \equiv (U^x_{\rm with\mbox{-}{\it U}VC}(\k,\k')/U^x_{\rm no\mbox{-}{\it U}VC}(\k,\k'))^2$ ($x=s,c$) at $(i_\a,i_\b) \approx (8,38)$ [$\k-\k'\approx\Q$] given by the diagrammatic calculation as functions of the spin Stoner factor $\a_S$. For the $U$-VC, we perform the diagrammatic calculation for Fig. \ref{fig:diagram} (e). The double counting of the $O(\{U^0\}^3)$-terms is carefully eliminated. Note that $\a_S$ is the largest eigenvalue of ${\hat \Gamma}^s{\hat\chi}^0(\Q)$, and the relation $\chi^s(\Q)\propto (1-\a_S)^{-1}$ holds. We find that $(U^c_{\rm eff}/U^0)^2_{\rm diagram}$ gradually increases as the system approaches the magnetic quantum critical point ($\a_S\rightarrow1$). 
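The Stoner-factor dependence quoted above can be illustrated schematically. The sketch below is a toy single-number version of the RPA-type enhancement $\chi^s(\Q)\propto(1-\a_S)^{-1}$; the prefactor is arbitrary, and the comment about the AL term only paraphrases the scaling argument given in the text.

```python
def chi_s_peak(alpha_s, chi0=1.0):
    """Schematic Stoner enhancement chi^s(Q) = chi0 / (1 - alpha_S)."""
    return chi0 / (1.0 - alpha_s)

# The charge-channel AL term of Eq. (ALc) scales roughly like chi^s(Q) ~ (1 - alpha_S)^(-1),
# so it grows rapidly as the magnetic quantum critical point (alpha_S -> 1) is approached.
for a in (0.5, 0.9, 0.97):
    print(a, chi_s_peak(a))
```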
The relation $(U^c_{\rm eff}/U^0)_{\rm diagram}^2\gg1$ originates from the charge-channel AL-term, since Eq. (\ref{eqn:ALc}) is approximately proportional to $\sum_\q \chi^s(\q)\chi^s(\q+\Q) \sim (1-\a_S)^{-1}$. In contrast, $(U^s_{\rm eff}/U^0)_{\rm diagram}^2$ is suppressed by the $U$-VC, since the small spin-channel AL-term in Eq. (\ref{eqn:ALs}) is proportional to $\sum_\q \chi^s(\q)\chi^c(\q+\Q)$. We verified that the relation $(U^s_{\rm eff}/U^0)^2_{\rm diagram} \ll1$ mainly originates from the $O(\{U^0\}^3)$-term shown in Fig. \ref{fig:perturbation} (b): Its negative contribution is significant in multiorbital systems since the diagram in Fig. \ref{fig:perturbation} (b) is scaled as $\sim(2N_{\rm orb}-1)$, where $N_{\rm orb}$ is the number of $d$-orbitals. Figure \ref{fig:perturbation} (c) shows $(U^x_{\rm eff}/U^0)^2_{\rm RG} \equiv {\tilde \Gamma}^x_{\rm RG}(\k,\k')/{\tilde \Gamma}^x_{\chi}(\k,\k')$ ($x=s,c$) at $(i_\a,i_\b) \approx (8,38)$ [$\k-\k'\approx\Q$] obtained by the RG+cRPA study as functions of $U$. Here, $\w_c=12T=6\times10^{-3}$ and $J/U=0.08$. This ratio is expected to give the square of the $U$-VC when ${\hat \chi}^{s,c}(\q)$ develops strongly in the strong-coupling region ($U\gtrsim2.5$), in which the single-fluctuation-exchange term in Fig. \ref{fig:diagram} (b) becomes significant. The obtained relations $(U^c_{\rm eff}/U^0)_{\rm RG}^2\gg1$ and $(U^s_{\rm eff}/U^0)_{\rm RG}^2 \ll1$ in the strong-coupling region are consistent with the results given by the perturbation theory in Fig. \ref{fig:perturbation} (a). The inset shows $(U^s_{\rm eff}/U^0)^2_{\rm RG}$ for a wide range of $U$: The origin of its $U$-linear term for $U\sim0$ would be some $U^2$-diagrams dropped in ${\tilde \Gamma}^x_{\chi}$, which are less important in the strong-coupling region. (Note that $(U^c_{\rm eff}/U^0)^2_{\rm RG}$ diverges at $U\approx 1.5$ since ${\tilde \Gamma}^x_{\chi}(\k,\k')$ changes its sign with $U$; see Fig. \ref{fig:interaction} (e).) 
In summary, the significant role of the $U$-VC has been confirmed on the basis of the perturbation theory and the RG+cRPA theory. Due to the $U$-VC, the orbital- or charge-fluctuation-mediated pairing interaction is magnified by $(U^c_{\rm eff}/U^0)^2\gg1$ in the strong-coupling regime. In contrast, the spin-fluctuation-mediated pairing interaction is suppressed by $(U^s_{\rm eff}/U^0)^2\ll1$, and this suppression is prominent in multiorbital systems. In the strong-coupling regime, consistent results are obtained by the two different methods shown in Figs. \ref{fig:perturbation} (a) and (c). They do not coincide in the weak-coupling regime because of the different definitions of $(U^x_{\rm eff}/U^0)^2$ in Figs. \ref{fig:perturbation} (a) and (c). \section{Discussions} \label{sec:dis} In this paper, we analyzed the two-orbital Hubbard model by using the RG+cRPA theory in order to confirm the realization condition for the orbital-fluctuation-mediated SC. To go beyond the ME approximation, we solved the gap equation by including the VC for the EBC, which is called the $U$-VC. Due to the $U$-VC, the effective EBC for the charge (spin) channel, ${\hat U}^{c(s)}$, deviates from the bare Coulomb interaction ${\hat U}^{0;c(s)}$. We verified the relation $|{\hat U}^{c}|\gg |{\hat U}^{0;c}|$ due to the charge-channel $U$-VC in the presence of moderate spin fluctuations. In contrast, ${\hat U}^{s}$ is significantly suppressed by the spin-channel $U$-VC. For these reasons, orbital-fluctuation-mediated SC will be realized in various multiorbital systems, such as Fe-based superconductors and Sr$_2$RuO$_4$. On the basis of the Fermi liquid theory, the same charge-channel $U$-VC enlarges the charge irreducible susceptibility ${\hat \Phi}^c(q)$ and the pairing interaction, as we show in Fig. \ref{fig:fig1}. 
Thus, the orbital-fluctuation-mediated pairing interaction should be strongly enlarged by the square of the $U$-VC when the orbital fluctuations are driven by the VC in terms of the Fermi liquid theory. In fact, the importance of the single-fluctuation-exchange term in Fig. \ref{fig:diagram} (b) is supported by the very similar momentum dependence between ${\tilde \Gamma}^{x}_{\rm RG}(\k,\k')$ and ${\tilde \Gamma}^{x}_{\chi}(\k,\k')$ ($x=c,s$) in Figs. \ref{fig:interaction} (a)-(d), except for the magnitude. The drastic difference in magnitude between ${\tilde \Gamma}^{x}_{\rm RG}$ and ${\tilde \Gamma}^{x}_{\chi}$ demonstrates the significance of the $U$-VC. We verified that the crossing-fluctuation-exchange term in Fig. \ref{fig:diagram} (c), which should have a different momentum dependence, is small in magnitude on the basis of the perturbation method. \begin{figure}[htb] \includegraphics[width=.8\linewidth]{fig9.eps} \caption{The gap equation due to the $e$-ph interaction, where the dotted line represents the phonon propagator and $g$ is the $e$-ph coupling constant. Due to the charge-channel $U$-VC caused by spin fluctuations, the phonon-mediated attractive interaction is enlarged by the factor $(U^c_{\rm eff}/U^0)^2\gg1$. } \label{fig:phonon} \end{figure} We stress that the phonon-mediated attractive pairing is also enlarged by the factor $({U}_{\rm eff}^{c}/{U}^{0})^2\gg1$, as we explain in Fig. \ref{fig:phonon}. The $s_{++}$-wave state in the single-layer FeSe may be induced by the electron-phonon ($e$-ph) attractive interaction enhanced by the charge-channel $U$-VC. Note that the relation $({U}_{\rm eff}^{c}/{U}^{0})^2\gg1$ in the presence of moderate spin fluctuations is realized only in two- and three-dimensional systems. If we apply the local approximation, the charge-channel VC is proportional to the square of $\sum_q\chi^s(q)$, which is less singular even for $\a_S\approx 1$. 
In multiorbital models, the spin-fluctuation-mediated pairing interaction is strongly suppressed by the factor $({U}_{\rm eff}^{s}/{U}^{0})^2\ll1$. This result does not contradict the enhancement of the spin susceptibility $\chi^s(\q)$ shown in Fig. \ref{fig:diagram} (a), since the $U$-VC is effective only at low energies, whereas the irreducible susceptibility $\Phi^s$ in Fig. \ref{fig:fig1} (b) is given by an integration over a wide energy range. In the context of the fRG, $\chi^s(\q)$ starts to increase in the early stage of the renormalization, whereas the $U$-VC develops in the later stage. \acknowledgments We are grateful to W. Metzner and C. Honerkamp for useful comments and discussions. This study has been supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science, and Technology of Japan.
\section{Introduction} Let $\mathbb{F}_5$ denote the finite field of order $5$. An $[n,k]$ code $C$ over $\mathbb{F}_5$ is a $k$-dimensional vector subspace of $\mathbb{F}_5^n$, where $n$ is called the length of $C$. All codes in this note are codes over $\mathbb{F}_5$. An $[n,k,d]$ code is an $[n,k]$ code with minimum weight $d$. A code $C$ is said to be {\em self-dual} if $C=C^\perp$, where $C^\perp$ denotes the dual code of $C$ under the standard inner product. A self-dual code of length $n$ exists if and only if $n$ is even. As described in~\cite{RS-Handbook}, self-dual codes are an important class of linear codes for both theoretical and practical reasons. It is a fundamental problem to classify self-dual codes of modest length and determine the largest minimum weight among self-dual codes of that length. Self-dual codes over $\mathbb{F}_5$ were classified in \cite{LPS-GF5} for lengths up to $12$. The classification was extended to lengths $14$ and $16$ in \cite{HO-GF5}. The largest minimum weights among self-dual codes of lengths $18,20$ and $22$ were determined in \cite{HO-GF5}, \cite{LPS-GF5} and \cite{HanKim}, respectively. For length $24$, the largest minimum weight is either $9$ or $10$ \cite{LPS-GF5}. In this note, we prove the following theorem. \begin{thm}\label{thm} There exists no self-dual $[24,12,10]$ code over $\mathbb{F}_5$. \end{thm} Hence the largest minimum weight among self-dual codes of length $24$ is exactly $9$. The assertion of Theorem \ref{thm} was a question in \cite[p.~192]{LPS-GF5}. \section{Unimodular lattices and Construction A} An $n$-dimensional (Euclidean) lattice $L$ is {\em unimodular} if $L = L^{*}$, where the dual lattice $L^{*}$ is defined as $L^{*} = \{ x \in {\mathbb{R}}^n | \langle x,y\rangle \in \mathbb{Z} \text{ for all } y \in L\}$ under the standard inner product $\langle x, y\rangle$. The {\em norm} of a vector $x$ is $\langle x, x\rangle$. 
The {\em minimum norm} of $L$ is the smallest norm among all nonzero vectors of $L$. A unimodular lattice $L$ is {\em even} if all vectors of $L$ have even norms, and {\em odd} if some vector has an odd norm. The {\em kissing number} of $L$ is the number of vectors of minimum norm. If $C$ is a self-dual code of length $n$, then \[ A_{5}(C) = \frac{1}{\sqrt{5}} \{x \in \mathbb{Z}^n \:|\: (x \bmod 5)\in C\} \] is an odd unimodular lattice, where $(x \bmod 5)$ denotes $(x_1 \bmod 5,\ldots,x_n \bmod 5)$ for $x=(x_1,x_2,\ldots,x_n)$. This construction of lattices from codes is called Construction A\@. If $C$ is a self-dual $[24,12,10]$ code over $\mathbb{F}_5$, then $A_5(C)$ is a $24$-dimensional odd unimodular lattice with minimum norm $\ge 2$. The odd Leech lattice is the unique $24$-dimensional odd unimodular lattice with minimum norm $3$. There are $155$ non-isomorphic $24$-dimensional odd unimodular lattices with minimum norm $2$ \cite{Bor} (see also \cite[Table 2.2]{SPLAG}).
Let $e_i$ denote the unit vector $(\delta_{i,1},\delta_{i,2},\ldots,\delta_{i,24})$ ($i=1,2,\ldots,24$), where $\delta_{i,j}$ is Kronecker's delta symbol. We claim $\sqrt{5}e_i\in L_1$ or $\sqrt{5}e_i\in L_2$ for each $i\in \{1,\dots,24\}$. Indeed, it suffices to prove the claim for $i=1$. We may write $\sqrt{5}e_1=a+b$, where $a\in L_1$ and $b\in L_2$. Since the minimum norms of $L_1,L_2$ are both $2$, $a\ne0$ and $b\ne0$ would imply $\{\langle a,a\rangle,\langle b, b\rangle\}=\{2,3\}$. We may assume without loss of generality that $\langle a,a\rangle=2$, and write $a=\frac{1}{\sqrt{5}}(c_1,\dots,c_{24})$. Then $c_1=\langle a,\sqrt{5}e_1 \rangle =\langle a,a+b \rangle=2$ since $L_1=A_5(C)\cap L_2^\perp$, and hence $10=5\langle a,a \rangle =\sum_{i=1}^{24}c_i^2=4+\sum_{i=2}^{24}c_i^2$. This implies that the codeword $((c_1,\dots,c_{24}) \bmod{5})\in C$ has weight less than $10$, since at most seven of the $c_i$ are nonzero. This contradiction shows that either $a=0$ or $b=0$, proving the claim. Since the vectors $\sqrt{5}e_i$ ($i=1,2,\dots,24$) are linearly independent and $\dim L_1=\dim L_2=12$, we may assume without loss of generality that \begin{align*} &\sqrt{5}e_i\in L_1\quad\text{for }i=1,2,\dots,12,\text{ and}\\ &\sqrt{5}e_i\in L_2\quad\text{for }i=13,14,\dots,24. \end{align*} Then \begin{align*} L_1&=A_5(C)\cap L_2^\perp =A_5(C)\cap \bigoplus_{i=1}^{12}\frac{1}{\sqrt{5}}\mathbb{Z} e_i,\\ L_2&=A_5(C)\cap L_1^\perp =A_5(C)\cap \bigoplus_{i=13}^{24}\frac{1}{\sqrt{5}}\mathbb{Z} e_i. \end{align*} Define codes $C_1,C_2$ by \begin{align*} C_1&=\{((c_1,\dots,c_{12})\bmod{5})\mid \frac{1}{\sqrt{5}}(c_1,\dots,c_{12},0,\dots,0)\in L_1\},\\ C_2&=\{((c_{13},\dots,c_{24})\bmod{5})\mid \frac{1}{\sqrt{5}}(0,\dots,0,c_{13},\dots,c_{24})\in L_2\}.
\end{align*} Then for $c=(c_1,c_2,\dots,c_{24})\in\mathbb{Z}^{24}$, we have \begin{align*} &(c\bmod{5})\in C \\ &\iff \frac{1}{\sqrt{5}}c\in L_1\oplus L_2 \\ &\iff c=a_1+a_2\text{ for some }a_1,a_2\in\mathbb{Z}^{24}\text{ with } \frac{1}{\sqrt{5}}a_1\in L_1,\; \frac{1}{\sqrt{5}}a_2\in L_2 \\ &\iff \frac{1}{\sqrt{5}}(c_1,\dots,c_{12},0,\dots,0)\in L_1\text{ and } \frac{1}{\sqrt{5}}(0,\dots,0,c_{13},\dots,c_{24})\in L_2 \\ &\iff ((c_1,\dots,c_{12})\bmod{5})\in C_1\text{ and } ((c_{13},\dots,c_{24})\bmod{5})\in C_2. \end{align*} Hence $C$ is decomposable into the direct sum of the two codes $C_1,C_2$, each of which is of length $12$. Since $(C_1 \oplus C_2)^\perp = C_1^\perp \oplus C_2^\perp$ (see \cite[Exercise 30]{Huffman-Pless}) and $C$ is self-dual, both $C_1$ and $C_2$ are self-dual. However, no self-dual code of length $12$ has minimum weight $\ge 10$. This is a contradiction, and the proof is complete. \end{proof}
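To make Construction A concrete, here is a small numerical sketch on a toy example: the $[2,1]$ self-dual code over $\mathbb{F}_5$ generated by $(1,2)$ (this code is our own illustration, not one appearing in the note). The script checks self-duality and verifies that $A_5(C)$ is an integral lattice with a basis of determinant $1$, i.e. unimodular.

```python
import numpy as np

p = 5
# Toy self-dual [2,1] code over F_5 generated by (1,2): (1,2).(1,2) = 5 = 0 mod 5.
gen = np.array([1, 2])
C = {tuple((k * gen) % p) for k in range(p)}

# Self-duality: all pairs of codewords are orthogonal mod 5, and |C| = 5^{n/2} = 5.
assert all(sum(a * b for a, b in zip(u, v)) % p == 0 for u in C for v in C)
assert len(C) == p

# Construction A: a basis of A_5(C) = (1/sqrt(5)) {x in Z^2 : x mod 5 in C}.
B = np.array([[1., 2.],
              [0., 5.]]) / np.sqrt(p)
gram = B @ B.T
assert np.allclose(gram, np.round(gram))       # integral: all inner products in Z
assert np.isclose(abs(np.linalg.det(B)), 1.0)  # covolume 1, hence unimodular
```

For the $[24,12,10]$ code considered in the theorem, the same recipe would produce a $24$-dimensional odd unimodular lattice of minimum norm $2$, which is the starting point of the proof above.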
\section{Introduction} \subsection{Background and overview} Baxter first introduced his Q-operator in \cite{Ba72,Ba73} as an auxiliary tool in the derivation of Bethe equations for the eigenvalues of the 8-vertex model transfer matrix. The key characters in the story are the transfer matrix $\mc{T}(z)$ and the Q-operator $\mc Q(z)$. A detailed description of the essential properties of $\mc{T}(z)$ and $\mc Q(z)$ can be found in \cite{BLZ97} (also see \cite{VW20} and references therein); the key relation they satisfy, which leads directly to the Bethe equations, is of the form \eq{ \mc T(z) \mc Q(z)= \alpha_+(z) \mc Q(q z) + \alpha_-(z) \mc Q(q^{-1}z),\label{eq:TQ1} } where $\alpha_\pm(z)$ are meromorphic functions and $q\in \C^\times$ is not a root of unity. In the original papers of Baxter, the operator $\cQ(z)$ was constructed by a brilliant but ad hoc argument; the representation-theoretic construction of $\cQ(z)$ had to wait more than 20 years until the work of Bazhanov, Lukyanov and Zamolodchikov \cite{BLZ96,BLZ97,BLZ99}. The main idea of the latter approach is to construct both $\mc T(z)$ and $\mc Q(z)$ as partial traces over different representations of the universal R-matrix $\cR$ of $\uq$. The operator $\mc T(z)$ is a twisted trace over a two-dimensional $\uq$-representation $\Pi_z$, and $\mc Q(z)$ is a similarly twisted trace over an infinite-dimensional $U_q(\wh\mfb^+)$-representation $\rho_z$, where $\wh\mfb^+$ is the upper Borel subalgebra of $\wh\mfsl_2$ (the relevant representations are defined in Section \ref{sec:reps:plus} of the current paper). The relation \eqref{eq:TQ1} for closed spin chains then follows immediately by considering a short exact sequence (SES) of $\uqbp$-representations with $\Pi_z\ot \rho_z$ as its `middle' object (cf. \cite[Lem.~2 (2)]{FR99}). The extension of this approach to Q-operators for the open XXZ chain was carried out in \cite{VW20}, where details and references can be found.
For an arbitrary untwisted affine Lie algebra $\wh\mfg$ with upper Borel subalgebra $\wh\mfb^+$, the level-0 representation theory of $U_q(\wh\mfb^+)$ was studied in \cite{HJ12}; for the general connection with the theory of Baxter's Q-operators see \cite{FH15}. As well as this direct SES route to the equation, there is an alternative strategy which we refer to as the `factorization approach'; for closed chains see \cite{BS90,De05,DKK06,De07,BJMST09,BLMS10}. In fact, this approach was the one taken by Bazhanov, Lukyanov and Zamolodchikov. The work that developed this formalism in language most similar to the current paper, in particular the formulation of the intertwining property of the operator $\cO$ (defined in Section \ref{sec:O+} of the current paper), is \cite{KT14}. In this approach, a second operator $\wb{\mc Q}(z)$ with similar properties to $\mc Q(z)$ is introduced as a trace of $\cR$ over another infinite-dimensional representation $\brho_z$ of $U_q(\wh\mfb^+)$. The affinized version $\ups_z$ of the $U_q(\mfsl_2)$-Verma module is also considered, as well as another infinite-dimensional filtered $\uqbp$-module $\phi_z$; these two representations depend on a complex parameter $\mu$. The key connection between all representations is given by Theorem \ref{thm:O:plus}, which expresses the fact that particular pairwise tensor products are isomorphic as $U_q(\wh\mfb^+)$-modules by means of an explicit intertwiner $\cO$.
At the level of the L-operators this implies \eq{ \label{factorization:bulk:intro} \cO_{12} \cL_{\vrho}(q^\mu z)_{13} \cL_{\brho}(q^{-\mu} z)_{23} = \cL_{\ups}(z)_{13} \cL_{\phi}(z)_{23} \cO_{12}, } (see Theorem \ref{thm:fund} of the current paper), which is referred to as \emph{factorization} of the Verma module L-operator $\cL_\ups(z)$ in terms of the L-operators $\cL_{\vrho}(z)$ and $\cL_{\brho}(z)$, which are used to define $\mc Q(z)$, $\wb{\mc Q}(z)$ (the transfer matrix corresponding to the additional operator $\cL_{\phi}(z)$ is trivial). Defining $\mc T_{\mu}(z)$ to be the transfer matrix that is the trace over the $\mu$-dependent representation $\ups_{z}$ of $\cR$ in the first space, Theorem \ref{thm:fund} yields a relation of the following form: \eq{ \mc T_{\mu}(z) \: \propto \: \mc Q(zq^{-\mu/2}) \wb{\mc Q}(zq^{\mu/2}).\label{eq:TQQ} } The SES associated with $\ups_{z}$ in the case where $\mu$ is an integer then leads to the key relation \eqref{eq:TQ1}. \subsection{Present work} The main result of the current paper is the following boundary analogue of Theorem \ref{thm:fund}, which we call the \emph{boundary factorization identity}: \eq{ \label{factorization:boundary:intro} \cK_\ups(z)_1 \cR_{\ups\phi}(z^2) \cK_\phi(z)_2 \,\cO = \cO \cK_\vrho(q^{\mu}z)_1 \cR_{\vrho\brho}(z^2) \cK_\brho(q^{-\mu}z)_2 } where $z$ is a formal parameter (which can be specialized to generic complex numbers). The precise statement is given in Theorem \ref{thm:keyrelation:right}. This formula involves the actions of the universal R-matrix of $U_q(\wh\mfsl_2)$ in tensor products of the various infinite-dimensional representations introduced. In addition, the various K-operators are diagonal solutions of reflection equations (boundary Yang-Baxter equations) \cite{Ch84,Sk88}. They arise as actions of the universal K-matrix associated to the augmented q-Onsager algebra, a particular coideal subalgebra of $U_q(\wh\mfsl_2)$, which has also featured in e.g.~\cite{BB13,RSV15,BT18,VW20}.
More precisely, diagonal solutions of the reflection equation with a free parameter, considered by Sklyanin in his 2-boundary version of the algebraic Bethe ansatz in \cite{Sk88}, are intertwiners for this algebra. Equation \eqref{factorization:boundary:intro} has a natural diagrammatic formulation; see Section \ref{sec:boundaryfactorization}. In a subsequent paper the authors will explain how \eqref{factorization:boundary:intro} yields relations analogous to \eqref{eq:TQQ} and hence \eqref{eq:TQ1} for open chains.\\ The proof of \eqref{factorization:boundary:intro} and of the well-definedness of the various K-operators is an application of the universal K-matrix formalism developed in \cite{AV22a,AV22b}, which is built on the earlier works \cite{BW18,BK19}. More precisely, it relies on an extension of the theory of K-matrices for finite-dimensional representations of quantum affine algebras in \cite{AV22b} to level-0 representations of $U_q(\wh\mfb^+)$, which we discuss in Section \ref{sec:augmentedqOns}. The key point is that, for the special case of the augmented q-Onsager algebra, there exists a universal element $\cK$, centralizing the augmented q-Onsager algebra up to a twist, with three desirable properties. \begin{enumerate} \item The element $\cK$ lies in (a completion of) the Borel subalgebra $U_q(\wh\mfb^+)$, so that the resulting family of linear maps is itself compatible with $U_q(\wh \mfb^+)$-intertwiners (which play an essential role in the algebraic theory of Baxter Q-operators). \item The coproduct of $\cK$ is of a particularly simple form, which is relevant for the proof of the boundary factorization identity. \item The linear operators accomplishing the action of $\cK$ in level-0 representations satisfy the untwisted reflection equation. \end{enumerate} Thus we obtain the factorization identity \eqref{factorization:boundary:intro} as a natural consequence of the representation theory of $U_q(\wh\mfsl_2)$.
The main benefit of this universal approach is that laborious linear-algebraic computations are avoided; in particular, we do not even need explicit expressions for the various factors. Nevertheless, we do provide these explicit expressions, as we expect them to be useful in further work in this direction. We also give an alternative computational proof of \eqref{factorization:boundary:intro}, to illustrate the power of the universal approach. This is a `boundary counterpart' to the level-0 theory of the universal R-matrix, which we also include for reference. We do this in Section \ref{sec:Uqhatsl2}, staying close to the original work by Drinfeld and Jimbo \cite{Dr85,Dr86,Ji86a,Ji86b}. In particular, Theorem \ref{thm:R(z):action} states that the grading-shifted universal R-matrix has a well-defined action as a linear-operator-valued formal power series on any tensor product of level-0 representations of $U_q(\wh\mfb^+)$ and $U_q(\wh\mfb^-)$ (including finite-dimensional representations). Often this well-definedness is tacitly assumed; see e.g.~\cite[Sec.~2.3]{VW20}. It also follows from the Khoroshkin-Tolstoy factorization \cite{KT92} of the universal R-matrix, see \cite{BGKNR10,BGKNR13,BGKNR14}; however, we are unaware of such a factorization for the universal K-matrix. \subsection{Outline} In Section \ref{sec:Uqhatsl2} we study the action of the universal R-matrix of quantum affine $\mfsl_2$ on tensor products of level-0 representations of Borel subalgebras. Section \ref{sec:augmentedqOns} is a `boundary counterpart' to Section \ref{sec:Uqhatsl2}, where we consider the augmented q-Onsager algebra. We show that its \emph{(semi-)standard} universal K-matrix, see \cite{AV22a,AV22b}, has a well-defined action on level-0 representations of $U_q(\wh\mfb^+)$, see Theorem \ref{thm:K(z):action}, and, with a simple correction, satisfies the above three desirable properties.
In Section \ref{sec:Borelreps} we discuss the relevant representations of $U_q(\wh\mfb^+)$ in terms of (an extension of) the q-oscillator algebra, as well as the $U_q(\wh\mfb^+)$-intertwiner $\cO$. Various solutions of Yang-Baxter equations are obtained in Section \ref{sec:LandR} as actions of the universal R-matrix in tensor products of Borel representations. Similarly, in Section \ref{sec:K} we introduce solutions of the reflection equation as actions of the universal K-matrix in Borel representations. We revisit the SES approach to Baxter's Q-operators for the open XXZ spin chain in light of the universal K-matrix formalism in Section \ref{sec:fusionintw}. Next, in Section \ref{sec:boundaryfactorization} we give a diagrammatic motivation of the boundary factorization identity \eqref{factorization:boundary:intro} for the open XXZ spin chain, and provide a short proof using the level-0 theory developed in Section \ref{sec:augmentedqOns}. Finally, in Section \ref{sec:discussion} we summarize the main results and point out future work. Some supplementary material is given in the appendices. Appendix \ref{app:qexp} provides some background material on deformed Pochhammer symbols and exponentials. Appendix \ref{app:R-operators} contains derivations of the explicit expressions of the two R-operators appearing in \eqref{factorization:boundary:intro}. In Appendix \ref{app:altproof} we provide an alternative proof of the boundary factorization identity \eqref{factorization:boundary:intro}, relying on the explicit expressions of all involved factors. The key tool of this proof is provided by Lemma \ref{lem:qexp:auxeqns}, which consists of two product formulas involving deformed Pochhammer symbols and exponentials. \subsection*{Acknowledgments} B.V. would like to thank A. Appel, P. Baseilhac and N. Reshetikhin for useful discussions.
This research was supported in part by funding from EPSRC grant EP/R009465/1, from the Simons Foundation and the Centre de Recherches Mathématiques (CRM), through the Simons-CRM scholar-in-residence programme, and by the Galileo Galilei Institute (GGI) scientific programme on ‘Randomness, Integrability and Universality’. R.W. would like to acknowledge and thank CRM and the GGI for their hospitality and support. \subsection*{Data availability statement} Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. \section{Quantum affine $\mfsl_2$ and its universal R-matrix} \label{sec:Uqhatsl2} In this section we study the action of the universal R-matrix of the quasitriangular Hopf algebra quantum affine $\mfsl_2$ on tensor products of level-0 representations (including infinite-dimensional representations) of the Borel subalgebras. We give a basic survey of the algebras involved, the representations and the quasitriangular structure and show that the universal R-matrix has a well-defined action on tensor products of all level-0 representations of the Borel subalgebras. \subsection{General overview of finite-dimensional R-matrix theory} To formulate a quantum integrable system in terms of a transfer matrix built out of R-matrices, one needs finite-dimensional representations of a suitable quasitriangular Hopf algebra. To get trigonometric R-matrices, one can proceed as follows. Let $\mfg$ be a finite-dimensional simple Lie algebra and note that the untwisted loop algebra $L\mfg = \mfg \ot \C[z,z^{-1}]$ has a central extension $\wh{\mfg} = L\mfg \oplus \C c$. In turn, this can be extended to $\wt{\mfg} = \wh{\mfg} \oplus \C d$ where $d$ satisfies $[d,\cdot] = z \frac{\sf d}{{\sf d}z}$. For a fixed Cartan subalgebra $\mfh \subset \mfg$ we define \[ \wh{\mfh} := \mfh \oplus \C c, \qq \wt{\mfh} := \wh{\mfh} \oplus \C d. 
\] The Lie algebra $\wt\mfg$ is a Kac-Moody algebra and hence has a non-degenerate bilinear form $(\cdot,\cdot)$, which restricts to a non-degenerate bilinear form on $\wt\mfh$. See e.g.~\cite{Ka90} for more detail. The universal enveloping algebras $U(\wh\mfg)$ and $U(\wt\mfg)$ can be q-deformed, yielding non-cocommutative Hopf algebras (Drinfeld-Jimbo quantum groups) $U_q(\wh\mfg)$ and $U_q(\wt\mfg)$, see e.g.~\cite{Dr85,Dr86,Ji86a,KT92,Lu94}. The nondegenerate bilinear form $(\cdot,\cdot)$ lifts to $U_q(\wt{\mfg})$ inducing a pairing between the q-deformed Borel subalgebras and hence a quasitriangular structure. On the other hand, the subalgebra $U_q(\wh{\mfg})$ has a rich finite-dimensional representation theory, see e.g.~\cite{CP94,CP95,Ch02,HJ12}. The grading-shifted universal R-matrix has a well-defined action on tensor products of finite-dimensional representations of $U_q(\wh{\mfg})$ as a formal power series, see e.g.~\cite{Dr86,FR92,KS95,EM03,He19}). We now discuss the extension of this theory to level-0 representations of Borel subalgebras, including various infinite-dimensional representations. We will restrict to the case $\mfg = \mfsl_2$ (but the theory naturally generalizes to any quantum untwisted affine algebra). \subsection{Quantum affine $\mfsl_2$} Denoting the canonical Cartan generator of $\mfsl_2$ by $h_1$, $\wh \mfh$ is spanned by $h_0 = c-h_1$ and $h_1$. The bilinear form on $\wt\mfh$ is defined by \[ (h_0,h_0)=(h_1,h_1)=-(h_0,h_1)=2, \qq (h_0,d)=1, \qq (h_1,d)=(d,d)=0. \] Fix $\eps \in \C$ such that $q = \exp(\eps)$ is not a root of unity. For all $\mu \in \C$ we will denote $\exp(\eps \mu)$ by $q^\mu$. First, we define $U_q(\mfg)$ as the algebra generated over $\C$ by $e$, $f$ and invertible $k$ subject to the relations \eq{ k e = q^2 e k, \qq k f = q^{-2} f k, \qq [e,f] = \frac{k-k^{-1}}{q-q^{-1}}. 
} The following assignments determine a coproduct $\Del: U_q(\mfg) \to U_q(\mfg) \ot U_q(\mfg)$: \eq{ \label{Delta:def} \Del(e) = e \ot 1 + k \ot e, \qq \Del(f) = f \ot k^{-1} + 1 \ot f, \qq \Del(k^{\pm 1}) = k^{\pm 1} \ot k^{\pm 1}. } It uniquely extends to a Hopf algebra structure on $U_q(\mfg)$. Now the main algebra of interest, $U_q(\wh\mfg)$, arises as follows. \begin{defn}[Quantum affine $\mfsl_2$] \label{def:Uqhatsl2} We denote by $U_q(\wh\mfg)$ the Hopf algebra generated by two triples $\{ e_i, f_i, k_i \}$ ($i \in \{0,1\}$), such that: \begin{enumerate} \item the following assignments for $i \in \{0,1\}$ define Hopf algebra embeddings from $U_q(\mfg)$ to $U_q(\wh\mfg)$: \eq{ e \mapsto e_i, \qq f \mapsto f_i, \qq k \mapsto k_i; } \item the following cross relations are satisfied: \begin{gather} k_i k_j = k_j k_i, \qq k_i e_{j} = q^{-2} e_{j} k_i, \qq k_i f_{j} = q^2 f_{j} k_i, \qq [e_i,f_{j}] = 0, \\ [e_i,[e_i,[e_i,e_{j}]_{q^2}]_1]_{q^{-2}} = [f_i,[f_i,[f_i,f_{j}]_{q^2}]_1]_{q^{-2}} = 0, \end{gather} for $i \ne j$, where we have introduced the notation $[x,y]_p := xy-pyx$.\hfill\defnend \end{enumerate} \end{defn} Consider the affine Cartan subalgebra $\wh\mfh = \C h_0 \oplus \C h_1$. Note that its q-deformation $U_q(\wh\mfh) = \langle k_0^{\pm 1}, k_1^{\pm 1} \rangle$ is isomorphic to the group algebra of the affine co-root lattice \eq{ \wh Q^\vee = \Z h_0 + \Z h_1 \subset \wh\mfh. } The nontrivial diagram automorphism $\Phi$ of the affine Dynkin diagram, i.e. the nontrivial permutation of the index set $\{0,1\}$, lifts to a linear automorphism $\Phi$ of $\wh\mfh$ which preserves the lattice $\wh Q^\vee$. Accordingly, it also lifts to an involutive Hopf algebra automorphism of $U_q(\wh \mfg)$, also denoted $\Phi$, via the assignments \eq{ \label{Phi:def} \Phi(e_i) = e_{\Phi(i)}, \qq \Phi(f_i) = f_{\Phi(i)}, \qq \Phi(k_i^{\pm 1}) = k_{\Phi(i)}^{\mp 1} \qq \text{for } i \in \{0,1\}.
} \subsection{Quantized Kac-Moody algebra} To define the quantized Kac-Moody algebra $U_q(\wt\mfg)$, choose an extension $\wt Q^\vee$ of $\wh Q^\vee$ (a lattice of rank 3 contained in $\wt\mfh$) preserved by $\Phi$. \begin{rmk} The standard extension of the affine co-root lattice $\Z h_0 + \Z h_1 + \Z d$ is not so convenient for us, mainly in view of the construction of the universal K-matrix in Section \ref{sec:uniK}. Namely, extensions of $\Phi$ to $\wt\mfh$ which are compatible with the bilinear form on $\wt\mfh$ do not preserve this lattice, see also \cite[Sec.~2.6]{Ko14} and \cite[Sec.~3.14]{AV22a}. \hfill \rmkend \end{rmk} The most convenient choice is to use the \emph{principal grading} and set \eq{ d_{\sf pr} := -\frac{1}{8} h_0 + \frac{3}{8} h_1 + 2 d \in \mfh, } so that \[ (d_{\sf pr},h_0)=(d_{\sf pr},h_1)=1, \qq (d_{\sf pr},d_{\sf pr})=0. \] Now we set $\Phi(d_{\sf pr})=d_{\sf pr}$ and obtain a linear automorphism $\Phi$ of $\wt\mfh$ preserving the lattice \[ \wt Q^\vee := \Z h_0 + \Z h_1 + \Z d_{\sf pr}. \] The corresponding dual map on $\wt\mfh^*$, also denoted by $\Phi$, preserves the extended affine weight lattice \eq{ \wt P = \{ \la \in \wt\mfh^* \, | \, \la(\wt Q^\vee) \subseteq \Z \}. } Accordingly, we define $U_q(\wt\mfg)$ as the Hopf algebra obtained by extending $U_q(\wh\mfg)$ by a group-like element\footnote{ It is equal to $\exp(\eps d_{\sf pr})$ if we define $U_q(\wt\mfg)$ as a topological Hopf algebra over $\C[[\eps]]$. } $g$ satisfying \eq{ \label{g:relations} g e_i = q e_i g, \qq g f_i = q^{-1} f_i g, \qq g k_i = k_i g. } Hence, the assignment $\Phi(g)=g$ together with \eqref{Phi:def} defines an involutive Hopf algebra automorphism of $U_q(\wt\mfg)$. \subsection{Co-opposite Hopf algebra structure} For any $\C$-algebra $A$, denote by $\si$ the algebra automorphism of $A \ot A$ which sends $a \ot a'$ to $a' \ot a$ for all $a,a' \in A$. If $X \in A \ot A$ we will also write $X_{21}$ for $\si(X)$. 
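Returning briefly to the principal grading element introduced above: the stated pairings $(d_{\sf pr},h_0)=(d_{\sf pr},h_1)=1$ and $(d_{\sf pr},d_{\sf pr})=0$ can be checked directly against the Gram matrix of the bilinear form on $\wt\mfh$ given earlier. The following is a minimal sketch in exact rational arithmetic; the basis ordering $(h_0,h_1,d)$ is our bookkeeping choice.

```python
from fractions import Fraction as F

# Gram matrix in the basis (h0, h1, d):
# (h0,h0)=(h1,h1)=2, (h0,h1)=-2, (h0,d)=1, (h1,d)=(d,d)=0.
G = [[F(2), F(-2), F(1)],
     [F(-2), F(2), F(0)],
     [F(1),  F(0), F(0)]]

d_pr = [F(-1, 8), F(3, 8), F(2)]   # d_pr = -h0/8 + 3*h1/8 + 2*d
h0 = [F(1), F(0), F(0)]
h1 = [F(0), F(1), F(0)]

def pair(u, v):
    """Bilinear form (u, v) computed via the Gram matrix."""
    return sum(u[i] * G[i][j] * v[j] for i in range(3) for j in range(3))

assert pair(d_pr, h0) == 1
assert pair(d_pr, h1) == 1
assert pair(d_pr, d_pr) == 0
```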
If $A$ is a bialgebra with coproduct $\Del$, the \emph{co-opposite bialgebra}, denoted $A^{\sf cop}$, is the bialgebra with the same underlying algebra structure and counit as $A$ but with $\Del$ replaced by \eq{ \Del^{\sf op} := \sigma \circ \Del } (if $A$ is a Hopf algebra with invertible antipode $S$, then $A^{\sf cop}$ is also a Hopf algebra with antipode $S^{-1}$). The assignments \eq{ \label{om:def} \om(e_i) = f_i, \qq \om(f_i) = e_i, \qq \om(k_i^{\pm 1}) = k_i^{\mp 1} \qq \text{for } i \in \{0,1\}, \qq \qq \om(g) = g^{-1} } define a bialgebra isomorphism from $U_q(\wt\mfg)$ to $U_q(\wt\mfg)^{\sf cop}$ (in particular, $(\om \ot \om) \circ \Del = \Del^{\sf op} \circ \om$) which commutes with $\Phi$. \subsection{Weight modules} We review some basic representation-theoretic notions for $U_q(\wt\mfg)$ by means of which its universal R-matrix can be described. Consider the commutative subalgebra \eq{ U_q(\wt\mfh) = \langle k_0^{\pm 1}, k_1^{\pm 1}, g^{\pm 1} \rangle \subset U_q(\wt\mfg). } Call a $U_q(\wt\mfg)$-module $M$ a $U_q(\wt\mfh)$-weight module if \[ M = \bigoplus_{\la \in \wt P} M_\la, \qq M_\la = \{ m \in M \, | \, k_i \cdot m = q^{\la(h_i)} m \text{ for } i \in \{0,1\}, \, g \cdot m = q^{\la(d_{\sf pr})} m \}. \] Elements of $M_\la$ are said to have weight $\la$. The adjoint action of $U_q(\wt\mfh)$ (with its generators acting by conjugation) endows $U_q(\wt\mfg)$ itself with a $U_q(\wt\mfh)$-weight module structure, with elements of $U_q(\wt\mfh)$ of weight 0. More precisely, the weights of $U_q(\wt\mfg)$ are given by the affine root lattice \[ \wh Q := \Z \al_0 + \Z \al_1 \subset \wt P \] ($e_i$ has weight $\al_i$, $f_i$ has weight $-\al_i$). Furthermore, note that $U_q(\wt\mfg)$ is generated by $U_q(\wt\mfh)$ and the quantum analogues of the standard nilpotent subalgebras \eq{ U_q(\wh\mfn^+)= \langle e_0,e_1 \rangle, \qq U_q(\wh\mfn^-)= \langle f_0,f_1 \rangle. 
} The action of $U_q(\wt\mfh)$ preserves these subalgebras $U_q(\wh\mfn^\pm)$ and the corresponding weights are the monoids $\pm \wh Q^+$ respectively, where $\wh Q^+ := \Z_{\ge 0} \al_0 + \Z_{\ge 0} \al_1$. \subsection{Quasitriangularity} The universal R-matrix for $U_q(\wt\mfg)$ is an element of a completion of $U_q(\wt\mfg) \ot U_q(\wt\mfg)$ satisfying \begin{gather} \label{R:axiom1} \cR \Del(u) = \Del^{\rm op}(u) \cR \qq \text{for all } u \in U_q(\wt\mfg), \\ \label{R:axiom2} (\Del \ot \id)(\cR) = \cR_{13} \cR_{23}, \qq \qq (\id \ot \Del)(\cR) = \cR_{13} \cR_{12} \end{gather} and hence \eq{ \label{R:YBE} \cR_{12} \cR_{13} \cR_{23} = \cR_{23} \cR_{13} \cR_{12}. } Consider the Hopf subalgebras \[ U_q(\wt\mfb^\pm) = \langle U_q(\wt\mfh), U_q(\wh\mfn^\pm) \rangle. \] The element $\mc{R}$ arises as the canonical element of the bialgebra pairing between $U_q(\wt\mfb^+)$ and the algebra $U_q(\wt{\mfb}^-)^{\sf op}$ (the bialgebra isomorphic as a coalgebra to $U_q(\wt{\mfb}^-)$ but with the opposite multiplication), see \cite{Dr85,Lu94}. In particular, $\cR$ lies in a completion of $U_q(\wt\mfb^+) \otimes U_q(\wt\mfb^-)$. Further, invariance properties of the bialgebra pairing imply \begin{align} \label{omega:R} (\om \ot \om)(\cR) &= \cR_{21}, \\ \label{Phi:R} (\Phi \ot \Phi)(\cR) &= \cR. \end{align} Also, this pairing has a non-degenerate restriction to $U_q(\wh\mfn^+)_\la \times U_q(\wh\mfn^-)_{-\la}$ for all $\la \in \wh Q^+$; denote the canonical element of this restricted pairing by $\Theta_\la$. With our choice of the coproduct we have \eq{ \label{R:factorization} \cR = \Theta^{-1} \cdot \ka^{-1}, \qq \Theta = \sum_{\la \in \wh Q^+} \Theta_\la. } A priori, $\Theta$ acts naturally on $U_q(\wt\mfg)$-modules with a locally finite action of $U_q(\wh\mfn^+)$ or $U_q(\wh\mfn^-)$. We briefly explain one possible definition\footnote{ Note that in the topological Hopf algebra setting one simply has $\ka = q^{c \ot d + d \ot c + h_1 \ot h_1/2}$. } of the element $\ka$.
The non-degenerate bilinear form $(\cdot,\cdot)$ on $\wt\mfh$ induces one on $\wt\mfh^*$, which we denote by the same symbol. If $M,M'$ are $U_q(\wt\mfh)$-weight modules we define a linear map $\ka_M: M \ot M' \to M \ot M'$ by stipulating that it acts on $M_\la \ot M'_{\la'}$ ($\la,\la' \in \wt P$) as multiplication by $q^{(\la,\la')}$. The family of these maps $\ka_M$, where $M$ runs through all $U_q(\wt\mfh)$-weight modules, is compatible with $U_q(\wt\mfh)$-intertwiners. Hence it gives rise to a well-defined weight-0 element $\ka$ of the corresponding completion of $U_q(\wt\mfg) \ot U_q(\wt\mfg)$ which we call here \emph{weight completion}. Similarly, we will define weight-0 elements of the weight completion of $U_q(\wt\mfg)$ itself using functions from $\wt P$ to $\C$. See also \cite[Sec.~4.8]{AV22a} for more detail. \subsection{Level-0 representations} Consider the following subalgebras of $U_q(\wh\mfg)$: \eq{ U_q(\wh\mfb^\pm) = \langle U_q(\wh\mfh), U_q(\wh\mfn^\pm) \rangle. } Then $U_q(\wh\mfb^+)$ is isomorphic to the algebra with generators $e_i$, $k_i$ ($i \in \{0,1\}$) subject to those relations in Definition \ref{def:Uqhatsl2} which do not involve the $f_i$ (the proof of e.g.~\cite[Thm.~4.21]{Ja96} applies). We say that a $U_q(\wh\mfb^+)$-module $V$ is \emph{level-0} if it decomposes as \eq{ \label{V:decomposition} V = \bigoplus_{t \in \C^\times} V(t), \qq V(t) = \{ v \in V \, | \, k_0 \cdot v = t^{-1} v, \qu k_1 \cdot v = t v \} } with each $V(t)$ finite-dimensional. Note that the class of finitely generated level-0 modules (this is somewhat more general than \cite[Def.~3.8]{HJ12}) is closed under tensor products. By the $U_q(\wh\mfg)$-relations we have $e_0 \cdot V(t) \subseteq V(q^{-2}t)$, $e_1 \cdot V(t) \subseteq V(q^2t)$. It is convenient to call the subset $\{ t \in \C^\times \, | \, \dim(V(t)) \ne 0 \}$ the \emph{support} of $V$. 
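As a concrete sanity check of the level-0 condition and the weight shifts $e_1 \cdot V(t) \subseteq V(q^2 t)$, $e_0 \cdot V(t) \subseteq V(q^{-2}t)$, the following sketch uses a two-dimensional evaluation-type representation of the Borel subalgebra; the explicit matrices and conventions below are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

q = 1.7  # a generic value, standing in for q not a root of unity

# Assumed 2-dim evaluation-type representation of the Borel subalgebra:
k1 = np.diag([q, 1 / q])
k0 = np.diag([1 / q, q])               # level 0: k0 acts by t^{-1} on V(t)
e1 = np.array([[0., 1.], [0., 0.]])
e0 = np.array([[0., 0.], [1., 0.]])

# level-0 condition: k0 k1 acts as the identity
assert np.allclose(k0 @ k1, np.eye(2))

# Borel relations: k_i e_i = q^2 e_i k_i and k_i e_j = q^{-2} e_j k_i (i != j)
assert np.allclose(k1 @ e1, q**2 * e1 @ k1)
assert np.allclose(k1 @ e0, q**-2 * e0 @ k1)

# weight shift: here v lies in V(q^{-1}) and e1.v lands in V(q) = V(q^2 * q^{-1})
v = np.array([0., 1.])                 # k1 v = q^{-1} v
w = e1 @ v                             # w = (1, 0)
assert np.allclose(k1 @ w, q * w)
```

The support of this module is $\{q, q^{-1}\}$, matching the statement that finite-dimensional $U_q(\wh\mfg)$-modules have support contained in $\pm q^\Z$.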
If $V$ is a finite-dimensional $U_q(\wh\mfg)$-module then it is level-0 with support contained in $\pm q^\Z$; see e.g.~\cite[Prop.~12.2.3]{CP95}. \begin{rmk} Let $V$ be an irreducible level-0 $U_q(\wh\mfb^+)$-module\footnote{By \cite[Prop.~3.5]{HJ12} this includes all finite-dimensional irreducible $U_q(\wh\mfg)$-modules.}. If $\dim(V)>1$, the $U_q(\wh\mfb^+)$-action does not extend to a $U_q(\wt\mfb^+)$-action. To see this, for instance, one can choose distinct $t,t' \in \C^\times$ in the support of $V$. By irreducibility, for any nonzero $v \in V(t)$, $v' \in V(t')$ there exist $x,x' \in U_q(\wh\mfb^+)$ such that $x \cdot v = v'$, $x' \cdot v' = v$. Without loss of generality, we may assume both $x$ and $x'$ have no term in $U_q(\wh\mfh)$ and are hence non-invertible. If $v$ is an eigenvector of the action of $g$ on $V(t)$ then applying $g$ to $(x'x) \cdot v = v$ results in a contradiction with \eqref{g:relations}. \hfill \rmkend \end{rmk} Analogous definitions and comments can be made for $U_q(\wh\mfb^-)$-modules. \subsection{The action of $\cR$ on tensor products of level-0 modules} \label{sec:R:action} We wish to connect the quasitriangular structure of $U_q(\wt\mfg)$ with the level-0 representation theory of $U_q(\wh\mfg)$, i.e. let the universal R-matrix of $U_q(\wt\mfg)$ act on tensor products of level-0 modules. To do this, we follow the ideas from \cite[Sec.~13]{Dr86} (also see \cite[Sec.~4]{FR92}, \cite[Sec.~1]{He19}). If we write the action of $k_1$ on an arbitrary level-0 module $V$ as $\exp(\eps H_V)$, then note that the factor $\ka$ naturally acts on tensor products $V \ot V'$ of level-0 modules as $\exp(\eps H_{V} \ot H_{V'}/2)$. To let $\Theta$ act on such tensor products, we extend the field of scalars $\C$ over which we defined $U_q(\wt\mfg)$ to the Laurent polynomial ring $\C[z,z^{-1}]$, where $z$ is a formal parameter. The action of $\Theta$ is particularly well-behaved if we use the principal grading.
That is, we define a Hopf algebra automorphism $\Sigma_z$ of $U_q(\wt\mfg)[z,z^{-1}]$ such that \eq{ \Sigma_z(e_i) = z e_i, \qq \Sigma_z(f_i) = z^{-1} f_i, \qq \Sigma_z|_{U_q(\wt\mfh)} = \id. } Straightforwardly one sees that \begin{align} \label{omega:Sigma} \om \circ \Sigma_z &= \Sigma_{z^{-1}} \circ \om, \\ \label{Phi:Sigma} \Phi \circ \Sigma_z &= \Sigma_z \circ \Phi. \end{align} Let the height function $\h: \wh Q \to \Z$ be defined by $\h(m_0 \al_0 + m_1 \al_1)=m_0+m_1$ for all $m_0,m_1 \in \Z$ and note that the number of elements of $\wh Q^+$ of given height is finite. The key observation is that \eq{ \label{Theta:keyobservation} (\Sigma_z \ot \id)(\Theta) = (\id \ot \Sigma_{z^{-1}})(\Theta) = \sum_{r \ge 0} z^r \sum_{\la \in \wh Q^+, \, \h(\la)=r} \Theta_\la } is a formal power series in $z$ whose coefficients are finite sums and hence lie in $U_q(\wh\mfn^+) \ot U_q(\wh\mfn^-)$. Hence $(\Sigma_z \ot \id)(\Theta)=(\id \ot \Sigma_{z^{-1}})(\Theta)$ has a well-defined action as a linear-operator-valued formal power series on a tensor product of any $U_q(\wh\mfn^+)$-representation with any $U_q(\wh\mfn^-)$-representation. Consider now the \emph{grading-shifted universal R-matrix}: \eq{ \label{R(z)uni:def} \cR(z) := (\Sigma_z \ot \id)(\cR) = (\id \ot \Sigma_{z^{-1}})(\cR). } Note that by applying $\Sigma_z \ot \id$ to \eqref{R:axiom1} we deduce that $\cR(z)$ commutes with $\Del(k_1) = \Del^{\sf op}(k_1) = k_1 \ot k_1$. We collect the results obtained thus far. \begin{thrm} \label{thm:R(z):action} Consider a pair of level-0 representations $\pi^\pm: U_q(\wh\mfb^\pm) \to \End(V^\pm)$. Then \eq{ \label{R(z):def} \cR_{\pi^+\pi^-}(z) := (\pi^+ \ot \pi^-)(\cR(z)) \in \End(V^+ \ot V^-)[[z]] } is well-defined and commutes with $\pi^+(k_1) \ot \pi^-(k_1)$.
\end{thrm} From now on we will use the standard convention that if $\pi$ is any level-0 representation then the corresponding grading-shifted representation is denoted by a subscript $z$: \eq{ \label{reps:gradingshift} \pi_z := \pi \circ \Sigma_z. } Hence we may write \[ \cR_{\pi^+\pi^-}(z) = (\pi^+_z \ot \pi^-)(\cR) = (\pi^+ \ot \pi^-_{1/z})(\cR). \] Consider two indeterminates $z_1,z_2$. Applying, say, $\Sigma_{z_1} \ot \id \ot \Sigma_{1/z_2}$, to \eqref{R:YBE}, we obtain a $\C[[z_1,z_2]]$-version of the universal Yang-Baxter equation which can be evaluated on suitable triple tensor products. \begin{prop} \label{prop:R(z):YBE} If $\pi^+: U_q(\wh\mfb^+) \to \End(V^+)$, $\pi: U_q(\wh\mfg) \to \End(V)$ and $\pi^-: U_q(\wh\mfb^-) \to \End(V^-)$ are level-0 representations, then we have the following identity of linear-operator-valued formal power series in two indeterminates: \eq{ \label{R(z):YBE} \cR_{\pi^+\pi}(z_1)_{12} \; \cR_{\pi^+\pi^-}(z_1z_2)_{13} \; \cR_{\pi\pi^-}(z_2)_{23} = \cR_{\pi\pi^-}(z_2)_{23} \; \cR_{\pi^+\pi^-}(z_1z_2)_{13} \; \cR_{\pi^+\pi}(z_1)_{12}. } \end{prop} Given a pair of level-0 representations $\pi^\pm: U_q(\wh\mfb^\pm) \to \End(V^\pm)$ it is often convenient to have an explicit expression of $\cR_{\pi^+\pi^-}(z)$ which does not rely on computing the coefficients of the series $\cR(z)$. Essentially following Jimbo's approach from \cite{Ji86b}, we may try to solve a linear equation for $\cR_{\pi^+\pi^-}(z)$. To derive such a linear equation, it is convenient to assume that, say, $\pi^+$ extends to a representation of $U_q(\wh\mfg)$. In this case\footnote{ One can of course apply $\pi^+_z \ot \pi^-$ to \eqref{R:axiom1} for arbitrary $U_q(\wh\mfb^\pm)$-representations $\pi^\pm$, yielding \eqref{R(z):intw} for all $u \in U_q(\wh\mfg)$ such that $\Del(u)$ and $\Del^{\sf op}(u)$ both lie in $U_q(\wh\mfb^+) \ot U_q(\wh\mfb^-)$. However, by applying counits this subalgebra is seen to be equal to $U_q(\wh\mfb^+) \cap U_q(\wh\mfb^-) = U_q(\wh\mfh)$. 
Hence, one would just recover the second statement of Theorem \ref{thm:R(z):action}. }, one directly obtains the following result. \begin{prop} \label{prop:R(z):intw} If $\pi^+$ is a level-0 $U_q(\wh\mfg)$-representation and $\pi^-$ a level-0 $U_q(\wh\mfb^-)$-representation, then for all $u \in U_q(\wh\mfb^-)$ we have \eq{ \label{R(z):intw} \cR_{\pi^+\pi^-}(z) \cdot (\pi^+_z \ot \pi^-)(\Del(u)) = (\pi^+_z \ot \pi^-)(\Del^{\sf op}(u)) \cdot \cR_{\pi^+\pi^-}(z). } \end{prop} Obviously there is a counterpart of Proposition \ref{prop:R(z):intw} with the role of $U_q(\wh\mfb^-)$ replaced by $U_q(\wh\mfb^+)$. \begin{rmk} \label{rmk:R(z):YBE:define} If the solution space of the linear equation \eqref{R(z):intw} is 1-dimensional, Proposition \ref{prop:R(z):intw} implies that any solution must be a scalar multiple of $\cR_{\pi^+\pi^-}(z)$ and hence satisfy the Yang-Baxter equation. This is well-known if both $V^\pm$ extend to finite-dimensional $U_q(\wh\mfg)$-modules. In this case the existence of the universal R-matrix implies the existence of a solution of the intertwining condition \eqref{R(z):intw} depending rationally on $z$. If $\pi^+$ and $\pi^-$ are both irreducible then it is known, see e.g.~\cite[Sec.~4.2]{KS95} and \cite[Thm.~3]{Ch02}, that $V^+((z)) \ot V^-$ is irreducible as a representation of $U_q(\wh\mfg)((z))$ (extension of scalars to formal Laurent series); hence an application of Schur's lemma yields the 1-dimensionality of the solution space of \eqref{R(z):intw}. In this case, the rational intertwiner is called \emph{trigonometric R-matrix}. For more background and detail, see e.g.~\cite{He19} and \cite[Secs.~2.6 \& 2.7]{AV22b}. In the absence of a linear relation such as \eqref{R(z):intw}, one can use the Yang-Baxter equation \eqref{R(z):YBE} to determine an explicit expression for one of $\cR_{\pi^+\pi}(z)$, $\cR_{\pi^+\pi^-}(z)$, or $\cR_{\pi\pi^-}(z)$, provided the other two are known. 
\hfill\rmkend \end{rmk} \subsection{Adjusting the grading} \label{sec:grading} The use of the principal grading in Theorem \ref{thm:R(z):action} avoids imposing further constraints on the representations (e.g.~local finiteness conditions). For completeness we briefly explain how to extend the results of Section \ref{sec:R:action} to arbitrary grading. For nonnegative integers $s_0,s_1$ such that $s_0+s_1$ is nonzero, define a more general Hopf algebra automorphism $\Sigma^{s_0,s_1}_z$ of $U_q(\wt\mfg)[z,z^{-1}]$ by \eq{ \Sigma^{s_0,s_1}_z(e_i) = z^{s_i} e_i, \qq \Sigma^{s_0,s_1}_z(f_i) = z^{-s_i} f_i, \qq \Sigma^{s_0,s_1}_z|_{U_q(\wt\mfh)} = \id } (note that the choice $s_0=0$, $s_1=1$ is used in \cite[Eq.~(2.11)]{KT14}). Rather than giving generalized versions of the main results above and of various statements in the remainder of this work, we make an observation which will allow the reader to generate these statements, as required. Recalling the decomposition \eqref{V:decomposition} and the associated terminology, suppose the level-0 $U_q(\wh\mfb^+)$-module $V$ is generated by a nonzero element of $V(t_0)$ for some $t_0 \in \C^\times$ (which includes all modules considered in this paper and all finite-dimensional irreducible $U_q(\wh\mfg)$-modules). Then the support of $V$ is contained in $q^{2\Z} t_0$. Now for any indeterminate $y$ and any integer $m$, let $y^{mD}$ denote the linear map on $V$ which acts on $V(q^{-2m}t_0)[y,y^{-1}]$ as scalar multiplication by $y^m$. Writing the corresponding representation as $\pi: U_q(\wh\mfb^+) \to \End(V)$, the more general grading-shifted representation $\pi^{s_0,s_1}_z := \pi \circ \Sigma^{s_0,s_1}_z$ can be related to the representation shifted by the principal grading as follows. Adjoining to the ring $\C[z,z^{-1}]$ a square root $Z$ of $z$, we have \eq{ \pi^{s_0,s_1}_z = \Ad\big( Z^{(s_0-s_1)D} \big) \circ \pi_{Z^{s_0+s_1}}, } where on the right-hand side $\Ad$ stands for `conjugation by'.
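To verify this on generators, note that $\pi(e_1)$ maps $V(q^{-2m}t_0)$ to $V(q^{-2(m-1)}t_0)$, so conjugating it by $Z^{(s_0-s_1)D}$ multiplies it by $Z^{-(s_0-s_1)}$; hence \eq{ \Ad\big( Z^{(s_0-s_1)D} \big)\big( \pi_{Z^{s_0+s_1}}(e_1) \big) = Z^{-(s_0-s_1)} Z^{s_0+s_1} \, \pi(e_1) = z^{s_1} \pi(e_1) = \pi^{s_0,s_1}_z(e_1). } Similarly $\pi(e_0)$, which maps $V(q^{-2m}t_0)$ to $V(q^{-2(m+1)}t_0)$, acquires the factor $Z^{(s_0-s_1)} Z^{s_0+s_1} = z^{s_0}$, as required.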
See \cite[Sec.~5.2]{AV22b} for essentially the same point in the context of irreducible finite-dimensional $U_q(\wh\mfg)$-representations. \section{The augmented q-Onsager algebra, its twist and its universal K-matrix} \label{sec:augmentedqOns} In parallel with the previous section, we consider a particular subalgebra of $U_q(\wh\mfg)$ and extend some recent results on universal K-matrices \cite{AV22a,AV22b} in the context of (possibly infinite-dimensional) level-0 representations of Borel subalgebras of quantum affine $\mfsl_2$. For a related approach tailored to evaluation representations involving essentially the same subalgebra, see \cite{BT18}. \subsection{The twist map $\psi$} We consider the following algebra automorphism and coalgebra antiautomorphism of $U_q(\wt\mfg)$: \eq{ \label{psi:def} \psi := \om \circ \Phi. } From \eqrefs{omega:R}{Phi:R} and \eqrefs{omega:Sigma}{Phi:Sigma}, respectively, we immediately deduce \begin{align} \label{psi:R} (\psi \ot \psi)(\cR) &= \cR_{21}, \\ \label{psi:Sigma} \psi \circ \Sigma_z &= \Sigma_{z^{-1}} \circ \psi. \end{align} By the following result, P-symmetric R-matrices ($\cR(z)_{21} = \cR(z)$) naturally arise in tensor products of representations of the upper and lower Borel subalgebras on the same vector space, provided they are related through $\psi$ and the principal grading is used in the definition of grading-shifted universal R-matrix $\cR(z)$, see \eqref{R(z)uni:def}. \begin{lemma} \label{lem:R:Psymmetry} Consider two pairs of level-0 representations $\pi^\pm, \vrho^\pm: U_q(\wh\mfb^\pm) \to \End(V)$ such that \eq{ \label{pi:psi:relation} \vrho^\mp = \pi^\pm \circ \psi. } Then $\cR_{\pi^+\pi^-}(z)_{21} = \cR_{\vrho^+\vrho^-}(z)$. \end{lemma} \begin{proof} Unpacking the definitions \eqref{R(z):def} and \eqref{R(z)uni:def}, we have \[ \cR_{\pi^+\pi^-}(z)_{21} = \Big( \big( (\pi^+ \ot \pi^-) \circ (\Sigma_z \ot \id) \big)(\cR) \Big)_{21} = \big( (\pi^- \ot \pi^+) \circ (\id \ot \Sigma_z) \big)\big(\cR_{21}\big). 
\] Now using \eqrefs{psi:R}{psi:Sigma} we deduce \[ \cR_{\pi^+\pi^-}(z)_{21} = \big( (\pi^- \ot \pi^+) \circ (\psi \ot \psi) \circ (\id \ot \Sigma_{z^{-1}}) \big)(\cR). \] Applying \eqref{pi:psi:relation} and using \eqref{R(z):def} and \eqref{R(z)uni:def} once again, we obtain $\cR_{\vrho^+\vrho^-}(z)$ as required. \end{proof} \subsection{The augmented q-Onsager algebra} The map $\psi$ plays an important role in the theory of diagonal one-parameter matrix solutions of the reflection equation in $U_q(\wh\mfg)$-modules. Namely, fix a parameter $\xi \in \C^\times$ and consider the following subalgebra of $U_q(\wh\mfg)$, also called the \emph{(embedded) augmented q-Onsager algebra}: \eq{ \label{Uqk:def} U_q(\mfk) := \C\big\langle e_0 - q^{-1} \xi^{-1} k_0 f_1, \, e_1 - q^{-1} \xi k_1 f_0, \, k_0 k_1^{-1}, \, k_0^{-1} k_1 \big\rangle. } This is a left coideal: \eq{ \Del(U_q(\mfk)) \subseteq U_q(\wh\mfg) \ot U_q(\mfk). } The automorphism $\psi$ is the trivial q-deformation of a Lie algebra automorphism of $\wh\mfg$, also denoted $\psi$, and $U_q(\mfk)$ is the ($\xi$-dependent) coideal q-deformation of the universal enveloping algebra of the fixed-point subalgebra $\mfk = \wh\mfg^\psi$, in the style of \cite{Ko14} but with opposite conventions. \begin{rmk} See \cite[Rmk.~2.3]{VW20} for more background on this subalgebra. Note that the definition of $U_q(\mfk)$ in \emph{loc.~cit.}~has a misprint: $\xi$ should be replaced by $\xi^{-1}$.
\hfill \rmkend \end{rmk} To connect with the universal K-matrix formalism of \cite{AV22a,AV22b}, let $\wt S$ be the bialgebra isomorphism\footnote{In particular, $\wt S$, like the antipode $S$ itself, is an algebra antiautomorphism and a coalgebra antiautomorphism.} from $U_q(\wt\mfg)$ to $U_q(\wt\mfg)^{\sf op,cop}$ (also known as the \emph{unitary antipode}) defined by the assignments \eq{ \label{unitaryantipode} \wt S(e_i) = -q k_i^{-1} e_i, \qq \wt S(f_i) = -q^{-1} f_i k_i, \qq \wt S(k_i^{\pm 1}) = k_i^{\mp 1}, \qq \wt S(g^{\pm 1}) = g^{\mp 1}. } Note that $\wt S^2=\id$. Now consider\footnote{ In general, each element or map in the right coideal setting of \cite{Ko14,AV22a,AV22b} is denoted by a prime on the corresponding object in the current left coideal setting. } the right coideal subalgebra \[ U_q(\mfk)' = \wt S(U_q(\mfk)) = \C \langle f_0 - q \xi^{-1} e_1 k_0^{-1}, f_1 - q \xi e_0 k_1^{-1}, k_0 k_1^{-1}, k_0^{-1} k_1 \rangle \] which is the subalgebra considered in \cite[Sec.~9.7]{AV22a}, forming part of a more general family of right coideal subalgebras (quantum symmetric pair subalgebras) of quantum affine algebras as considered in \cite{Ko14,AV22a,AV22b}. \subsection{Universal K-matrix} \label{sec:uniK} By \cite[Thm.~8.5]{AV22a}, $U_q(\wt\mfg)$ is endowed with a so-called \emph{standard} universal K-matrix, which is an invertible element in a completion of $U_q(\wt\mfb^+)$ satisfying a twisted $U_q(\mfk)$-intertwining property and a twisted coproduct formula involving the universal R-matrix\footnote{ Note that our convention for the coproduct is as in \cite{AV22a}, but the ordering of the tensor product of the two Borel subalgebras is opposite. Hence the R-matrix in \cite{AV22a}, denoted here by $\cR'$, is equal to $\cR_{21}^{-1}$. } \eq{ \cR' = \cR_{21}^{-1}. } There is an action of invertible elements of a completion of $U_q(\wt\mfg)$, gauge-transforming the universal K-matrix and the twisting operator simultaneously, see \cite[Sec.~3.6]{AV22b}. 
For the case under consideration, there exists a gauge transformation (a `Cartan correction', see \cite[Sec.~8.8]{AV22a}) that brings both the intertwining property and the coproduct formula for the universal K-matrix into a particularly nice form. Moreover, the gauge-transformed universal K-matrix still resides in a completion of $U_q(\wt\mfb^+)$ and, when shifted by the principal grading, acts as a linear-operator-valued formal power series for all level-0 $U_q(\wh\mfb^+)$-modules. To make this more precise, let $\Om: \wt P \to \C^\times$ be any group homomorphism such that $\Om(\al_0)=-\xi$ and $\Om(\al_1)=-\xi^{-1}$. Now define a function $G': \wt P \to \C^\times$ by \eq{ \label{Gprime:def} G'(\la) = \Om(\la) q^{-(\Phi(\la),\la)/2}. } Note that this is not a group homomorphism. Define the corresponding linear operator acting on $U_q(\wt\mfh)$-weight modules as follows: \eq{ G' \cdot v = G'(\la) v, \qq v \in V_\la, \qq \la \in \wt P. } Analogously to our definition of the factor $\ka$ of the universal R-matrix, we thus obtain an invertible element $G'$ of the weight completion of $U_q(\wt\mfg)$. Finally, let $\del=\al_0+\al_1$ be the basic imaginary root of $\wh\mfg$. Then the following result is a special case of \cite[Sec.~9.7]{AV22a}, with the coproduct formula a direct consequence of \cite[(8.21)]{AV22a}. \begin{prop}\label{prop:uniK:rightcoideal} There exists an invertible element \eq{ \Upsilon' = \sum_{\la \in \Z_{\ge 0} \del} \Upsilon'_\la, \qq \Upsilon'_\la \in U_q(\wh{\mfn}^+)_\la, } such that the invertible element \eq{ \cK' := G' \cdot \Upsilon' } satisfies \begin{align} \label{K':axiom1} \cK' \cdot u &= \psi(u) \cdot \cK' \qq \text{for all } u \in U_q(\mfk)', \\ \label{K':axiom2} \Del(\cK') &= (1 \ot \cK') \cdot (\psi \ot \id)(\cR') \cdot (\cK' \ot 1). \end{align} \end{prop} \begin{rmk} This choice of $\cK'$ is also known as the \emph{semi-standard} universal K-matrix, see \cite[Sec.~8.10]{AV22a} and cf.\cite[Ex.~3.6.3 (2)]{AV22b}. 
It corresponds to the choice of a \emph{twist pair} $(\psi,J)$ where $\psi$ is a bialgebra automorphism (e.g.~a diagram automorphism) and $J$ is the trivial Drinfeld twist $1 \ot 1$, see \cite[Sec.~2.4 and 2.5]{AV22a}; this choice is associated with the simple 3-factor coproduct formula \eqref{K':axiom2}. The semi-standard K-matrix is always available; what is rather special in the case of the augmented q-Onsager algebra is that it still lies in a completion of $U_q(\wt\mfb^+)$. \hfill \rmkend \end{rmk} Now we transform this formalism \cite{AV22a} for the right coideal subalgebra $U_q(\mfk)'$, expressed in terms of the universal R-matrix $\cR'$, to a formalism for the left coideal subalgebra $U_q(\mfk) = \wt S(U_q(\mfk)')$, expressed in terms of the universal R-matrix $\cR$ as used in this paper. To do this, note that, when going from a $U_q(\wt\mfg)$-weight module to its dual, weights transform as $\la \mapsto -\la$. This defines the extension of $S$ and $\wt S$ to a map on the weight completion of $U_q(\wt\mfg)$. Therefore $\wt S(\Om) = \Om^{-1}$ but the non-group-like factor of $G'$ is fixed by $\wt S$. We define $G: \wt P \to \C^\times$ by \eq{ \label{G:def} G(\la) := \Om(\la) q^{(\Phi(\la),\la)/2} } so that $G = \wt S(G')^{-1}$. Also, we set \eq{\Upsilon := \wt S(\Upsilon')^{-1} = \sum_{\la \in \Z_{\ge 0} \del} \Upsilon_\la, \qq \Upsilon_\la \in \wt S(U_q(\wh\mfn^+)_\la) \subset U_q(\wh\mfh) \cdot U_q(\wh\mfn^+)_\la. } \begin{prop} \label{prop:uniK:leftcoideal} The element \eq{ \cK := \wt S(\cK')^{-1} = G \cdot \Upsilon } satisfies \begin{align}\label{K:axiom1} \cK \cdot u &= \psi(u) \cdot \cK \qq \text{for all } u \in U_q(\mfk), \\ \label{K:axiom2} \Del(\cK) &= (\cK \ot 1) \cdot (\id \ot \psi)(\cR) \cdot (1 \ot \cK). \end{align} \end{prop} \begin{proof} This follows straightforwardly from Proposition \ref{prop:uniK:rightcoideal}. 
Namely, we apply $\wt S$ to \eqref{K':axiom1} and $(\wt S \ot \wt S) \circ \sigma$ to \eqref{K':axiom2}, and use the fact that $\wt S \circ \psi = \psi \circ \wt S$ and $(\wt S \ot \wt S)(\cR) = \cR$. \end{proof} Note that $U_q(\wh\mfb^+)$ is a bialgebra and, as expected, the right-hand side of \eqref{K:axiom2} lies in a completion of $U_q(\wh\mfb^+) \ot U_q(\wh\mfb^+)$, since $\psi$ interchanges the two Borel subalgebras $U_q(\wh\mfb^\pm)$. The reflection equation satisfied by the universal element $\cK$ is as follows: \eq{ \label{K:RE} \cR \cdot (\cK \ot 1) \cdot (\id \ot \psi)(\cR) \cdot (1 \ot \cK) = (1 \ot \cK) \cdot (\id \ot \psi)(\cR) \cdot (\cK \ot 1) \cdot \cR. } This is a consequence of the linear relation \eqref{R:axiom1} for $\cR$ and the coproduct formula \eqref{K:axiom2} for $\cK$, alongside \eqref{psi:R} and $\psi^2=\id$. \subsection{The action of the universal K-matrix on level-0 representations} To deduce that $\cK$ has a well-defined action on level-0 representations of, say, $U_q(\wh\mfb^+)$, we can proceed in a similar way to the case of the R-matrix. This builds on the finite-dimensional theory for more general quantum symmetric pair subalgebras in \cite[Sec.~4]{AV22b}. First note that if $\pi$ is a level-0 representation, $\pi$ and the twisted representation $\pi \circ \psi$ coincide on $U_q(\wh\mfh)$. Now let $z$ once again be a formal variable. Note that by \eqref{G:def} the function $G$ sends the basic imaginary root $\del$ to 1. Hence the proof of \cite[Prop.~4.3.1 (3)]{AV22b} implies that the corresponding factor $G$ of the universal K-matrix descends to level-0 modules. Furthermore, the argument that shows $\Sigma_z(\Theta)$ is a $U_q(\wh\mfn^+) \ot U_q(\wh\mfn^-)$-valued formal power series can be easily adapted to $\Upsilon$; it yields a formal power series with coefficients in $\wt S(U_q(\wh\mfn^+)) \subset U_q(\wh\mfb^+)$: \[ \Sigma_z(\Upsilon) = \sum_{r \ge 0} z^r \sum_{\la \in \Z_{\ge 0} \del, \, \h(\la)=r} \Upsilon_\la. 
\]\ Now consider the grading-shifted universal K-matrix: \eq{ \label{K(z):def} \cK(z) = \Sigma_z(\cK). } Noting that the form of $\Upsilon$ implies that $\cK$ commutes with $k_1$, we arrive at the following main result, which is a boundary analogue of Theorem \ref{thm:R(z):action}. \begin{thrm} \label{thm:K(z):action} Consider a level-0 representation $\pi: U_q(\wh\mfb^+) \to \End(V)$. Then \eq{ \cK_{\pi}(z) := \pi(\cK(z)) \in \End(V) \ot \C[[z]] } is well-defined and commutes with $\pi(k_1)$.\end{thrm} We will also need boundary counterparts of Propositions \ref{prop:R(z):YBE} and \ref{prop:R(z):intw}. Consider two indeterminates $z_1,z_2$. Applying $\Sigma_{z_1} \ot \Sigma_{z_2}$ to \eqref{K:RE} and using \eqref{psi:Sigma}, we obtain the following reflection equation for the grading-shifted universal operators: \eq{ \label{K(z)univ:RE} \begin{aligned} & \cR(z_1/z_2) \cdot (\cK(z_1) \ot 1) \cdot (\id \ot \psi)(\cR(z_1z_2)) \cdot (1 \ot \cK(z_2)) = \qq \\ &\qq = (1 \ot \cK(z_2)) \cdot (\id \ot \psi)(\cR(z_1z_2)) \cdot (\cK(z_1) \ot 1) \cdot \cR(z_1/z_2). \end{aligned} } Recalling that the universal R-matrix $\cR$ lies in a completion of $U_q(\wh\mfb^+) \ot U_q(\wh\mfb^-)$ and applying a tensor product of suitable representations to \eqref{K(z)univ:RE}, one obtains the \emph{right reflection equation} with multiplicative spectral parameters for P-symmetric R-matrices, as we now state precisely. \begin{prop} \label{prop:K(z):RE} Consider level-0 representations $\pi^+: U_q(\wh\mfb^+) \to \End(V^+)$ and $\pi: U_q(\wh\mfg) \to \End(V)$ such that $\pi \circ \psi = \pi$. Then \eq{ \label{K(z):RE} \begin{aligned} & \cR_{\pi^+\pi}(z_1/z_2) (\cK_{\pi^+}(z_1) \ot \Id_V) \cR_{\pi^+\pi}(z_1z_2) (\Id_{V^+} \ot \cK_\pi(z_2)) = \qq \\ &\qq = (\Id_{V^+} \ot \cK_\pi(z_2)) \cR_{\pi^+\pi}(z_1z_2) (\cK_{\pi^+}(z_1) \ot \Id_V) \cR_{\pi^+\pi}(z_1/z_2). 
\end{aligned} } \end{prop} The use of linear relations to find explicit solutions of reflection equations was proposed in \cite{MN98,DG02,DM03}. As before, we assume that $\pi$ extends to a $U_q(\wh\mfg)$-representation\footnote{ Analogous to the case of the R-matrix, we can observe that the intersection of $U_q(\mfk)$ and $U_q(\wh\mfb^+)$ is contained in $U_q(\wh\mfh)$. Therefore, applying a level-0 representation $\pi$ to \eqref{K:axiom1} just recovers the second part of Theorem \ref{thm:K(z):action}. }, in which case it restricts to a $U_q(\mfk)$-representation and we obtain the following result as a consequence of \eqref{psi:Sigma}. \begin{prop} \label{prop:K(z):intw} If $\pi: U_q(\wh\mfg) \to \End(V)$ is a level-0 representation such that $\pi \circ \psi = \pi$, then \eq{ \label{K(z):intw} \cK_\pi(z) \cdot \pi_z(u) = \pi_{1/z}(u) \cdot \cK_\pi(z) \qq \text{for all } u \in U_q(\mfk). } \end{prop} We close this section with some comments parallel to Remark \ref{rmk:R(z):YBE:define}. \begin{rmk} If the solution space of \eqref{K(z):intw} is 1-dimensional, Proposition \ref{prop:K(z):intw} implies that any solution must be a scalar multiple of $\cK_\pi(z)$ and hence automatically satisfy the reflection equation \eqref{K(z):RE}. In the case that $\pi: U_q(\wh\mfb^+) \to \End(V)$ extends to a representation and $V$ is finite-dimensional, there is an analogue of Remark \ref{rmk:R(z):YBE:define}. Namely, the solution space of \eqref{K(z):intw} for irreducible representations is 1-dimensional and the existence of a solution of the intertwining condition \eqref{K(z):intw} depending rationally on $z$ leads to a \emph{trigonometric K-matrix}. See \cite[Secs.~5 and 6]{AV22b} for more detail. To explicitly determine $\cK_{\pi^+}(z)$ in the cases where $\pi^+: U_q(\wh\mfb^+) \to \End(V)$ does not extend to a $U_q(\wh\mfg)$-representation, we will use the reflection equation \eqref{K(z):RE}, with the other K-matrix $\cK_\pi(z)$ determined using Proposition \ref{prop:K(z):intw}.
\hfill \rmkend \end{rmk} \section{Borel representations in terms of the q-oscillator algebra} \label{sec:Borelreps} \subsection{The infinite-dimensional vector space $W$} A countably-infinite-dimensional vector space $W$ plays a central role in the theory of Baxter's Q-operators. We may define it as the free $\C$-module over a given set $\{ w_j \}_{j \in \Z_{\ge 0}}$: \eq{ W = \bigoplus_{j \ge 0} \C w_j. } Given this distinguished basis, elements of $\End(W)$ naturally identify with infinite-by-infinite matrices with the property that all but finitely many entries of each column are zero. It is convenient to work with a particular subalgebra of $\End(W)$ depending on the deformation parameter $q$. More precisely, consider the $\C$-linear maps $a$, $a^\dag$ on $W$ defined by \eq{ a \cdot w_{j+1} = w_j, \qq a \cdot w_0 = 0, \qq a^\dag \cdot w_j = \big( 1-q^{2(j+1)} \big) w_{j+1} } for all $j \in \Z_{\ge 0}$. These operators satisfy the relation $[a, a^\dag]_{q^2} = 1-q^2$. Note that each basis vector $w_j$ is an eigenvector of the compositions $a a^\dag$ and $a^\dag a$ with eigenvalues $1-q^{2(j+1)}$ and $1-q^{2j}$, respectively. For the description of L-operators associated to $U_q(\wh\mfg)$ acting on $W \otimes \C^2$ (particular solutions of the Yang-Baxter equation), it is convenient to consider a linear operator $q^D$ which is a square root of $1-a^\dag a$, i.e. $q^D \cdot w_j = q^j w_j$ for $j \in \Z_{\ge 0}$. Note that $q^D$ is invertible and we let $q^{-D}$ denote its inverse. \begin{rmk} Often the q-oscillator algebra is defined as the abstract algebra generated by $a$, $a^\dag$ and $q^{\pm D}$ subject to certain relations, which naturally embeds into $\End(W)$. This version of the q-oscillator algebra appeared in the guise of a topological algebra for instance in \cite[Sec. 2.3]{BGKNR10} and with slightly different conventions in \cite{KT14}\footnote{ The two vector spaces $W_1$ and $W_2$ introduced in \cite[Sec.
2.3]{KT14} are naturally isomorphic, so that the two algebras ${\rm Osc}_1$ and ${\rm Osc}_2$ can be identified with the same subalgebra of $\End(W_1) \cong \End(W_2)$.}. \hfill \rmkend \end{rmk} \subsection{Diagonal operators from functions and an extended q-oscillator algebra} \label{sec:extqosc} To accommodate the action of the universal R and K-matrices on certain level-0 modules, we will need an extension of the commutative subalgebra $\langle q^{\pm D} \rangle$ and work over the commutative ring $\C[[z]]$. Denote by $\cF$ the commutative algebra of functions from $\Z_{\ge 0}$ to $\C[[z]]$. For any $f \in \cF$ we define $f(D) \in \End(W)$ via \eq{ f(D) \cdot w_j = f(j) w_j. } Thus, we obtain an algebra embedding $\cF \to \End(W)[[z]]$, whose image $\cF(D)$ is the subalgebra of diagonal operators on $W$ with respect to the given basis. Now we combine this with the maps $a$, $a^\dag$ defined above (viewed as maps on $W[[z]] \cong W \ot \C[[z]]$, acting trivially on the second factor). \begin{defn} The \emph{(extended) q-oscillator algebra} is the subalgebra $\cA \subset \End(W)[[z]]$ generated by $a^\dag$, $a$ and $\cF(D)$. \hfill \defnend \end{defn} As can be verified on basis vectors, in $\cA$ one has the relations \eq{ \label{A:basicrels2} a a^\dag = 1-q^{2(D+1)}, \qq a^\dag a = 1-q^{2D}, \qq a f(D) = f(D+1) a, \qq f(D) a^\dag = a^\dag f(D+1). } One straightforwardly verifies that the subalgebras $\cF(D)$, $\langle a^\dag \rangle$ and $\langle a \rangle$ are self-centralizing. Note that the operator \eq{ \ba^\dag := -q^{-2D} a^\dag \in \End(W) } sends $w_j$ to $(1-q^{-2(j+1)})w_{j+1}$. Clearly, $\cA$ is also generated by $\ba^\dag$, $a$ and $\cF(D)$. The transformation $q \mapsto q^{-1}$ defines an algebra automorphism of $\cA$, preserving the subalgebra $\cF(D)$, fixing the generator $a$ and interchanging the generators $\adag$ and $\badag$. 
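As a numerical sanity check, outside the formal development, the relations \eqref{A:basicrels2} can be tested on a finite truncation $\bigoplus_{0 \le j < N} \C w_j$ of $W$; the truncation size $N$ and the numeric value of $q$ below are arbitrary illustrative choices, and the relations involving $a a^\dag$ hold exactly away from the truncation edge $j = N-1$.

```python
import numpy as np

N = 12    # truncation of W to span{w_0, ..., w_{N-1}} (illustrative choice)
q = 0.7   # generic numeric value of the deformation parameter (illustrative)

# a . w_{j+1} = w_j, a . w_0 = 0: superdiagonal matrix
a = np.diag(np.ones(N - 1), k=1)

# adag . w_j = (1 - q^{2(j+1)}) w_{j+1}: weighted subdiagonal matrix
adag = np.diag(1.0 - q ** (2.0 * np.arange(1, N)), k=-1)

I = np.eye(N)

def f_of_D(f):
    """Diagonal operator f(D), acting as multiplication by f(j) on w_j."""
    return np.diag([f(j) for j in range(N)])

# a adag = 1 - q^{2(D+1)} holds away from the edge; adag a = 1 - q^{2D} is exact
m = N - 1
assert np.allclose((a @ adag)[:m, :m],
                   (I - np.diag(q ** (2 * (np.arange(N) + 1))))[:m, :m])
assert np.allclose(adag @ a, I - np.diag(q ** (2 * np.arange(N))))

# a f(D) = f(D+1) a  and  f(D) adag = adag f(D+1), for a sample function f
f = lambda j: 1.0 / (1.0 + j) + j ** 2
assert np.allclose(a @ f_of_D(f), f_of_D(lambda j: f(j + 1)) @ a)
assert np.allclose(f_of_D(f) @ adag, adag @ f_of_D(lambda j: f(j + 1)))

# q^2-commutator [a, adag]_{q^2} = (1 - q^2) Id, away from the edge
lhs = a @ adag - q**2 * adag @ a
assert np.allclose(lhs[:m, :m], (1 - q**2) * I[:m, :m])
print("q-oscillator relations verified on the truncation")
```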
\subsection{Endomorphisms of $W \ot W$} The linear maps \[ a_1:=a \ot \Id_W, \qq a^\dag_1 := a^\dag \ot \Id_W, \qq a_2 := \Id_W \ot a, \qq a^\dag_2 := \Id_W \ot a^\dag \] together with $\cF(D_1) \cup \cF(D_2)$ generate $\cA \ot \cA$ over $\C[[z]]$. We will need a larger subalgebra of $\End(W \ot W)$: we will allow all functions of two nonnegative integers as well as formal power series in certain locally nilpotent endomorphisms. Denote by $\cF^{(2)}$ the commutative algebra of functions from $\Z_{\ge 0} \times \Z_{\ge 0}$ to $\C[[z]]$. Similarly, we denote by $D_1$ and $D_2$ the linear operators on the tensor product $W \ot W$ defined by \eq{ D_1 \cdot (w_j \ot w_k) = j w_j \ot w_k, \qq D_2 \cdot (w_j \ot w_k) = k w_j \ot w_k. } For any $f \in \cF^{(2)}$ we define $f(D_1,D_2) \in \End(W \ot W)[[z]]$ via \eq{ f(D_1,D_2) \cdot (w_j \ot w_k) = f(j,k) w_j \ot w_k, } yielding an algebra embedding $\cF^{(2)} \to \End(W \ot W)[[z]]$, whose image $\cF^{(2)}(D_1,D_2)$ is the subalgebra of diagonal operators on $W \ot W$. Now note that $a_1 a^\dag_2$ and $a^\dag_1 a_2$ are locally nilpotent endomorphisms of $W \ot W$. Hence, for any $g_{k,\ell}, h_{k,\ell} \in \cF^{(2)}$ series of the form \eq{ \label{series} \sum_{k,\ell \ge 0} (a^\dag_2)^\ell g_{k,\ell}(D_1,D_2) a_1^k, \qq \sum_{k,\ell \ge 0} (a^\dag_1)^k h_{k,\ell}(D_1,D_2) a_2^\ell } truncate when applied to any basis vector $w_j \ot w_{j'}$. We obtain a class of well-defined elements of $\End(W \ot W)[[z]]$. We denote by $\cA^{(2)}$ the $\C[[z]]$-span of the operator-valued formal series \eqref{series}, which is easily seen to be a subalgebra of $\End(W \ot W)[[z]]$. \subsection{The Borel representations} \label{sec:reps:plus} \mbox{} We introduce four level-0 representations of $U_q(\wh\mfb^+)$. First of all, let $\mu \in \C$ be a free parameter. 
It is straightforward to check that the following assignments define a representation $\ups$ of $U_q(\wh\mfg)$ on $W$: \eq{ \label{hom:sigma} \begin{aligned} \ups(e_0) &= \ups(f_1) = \frac{1}{1-q^2} a^\dag, && \ups(k_0) = q^{-\mu+1+2D}, \\ \ups(e_1) &= \ups(f_0) = \frac{q^2}{1-q^2} a (q^{-\mu} - q^{\mu-2D}), \qq && \ups(k_1) = q^{\mu-1-2D}, \end{aligned} } The module structure on $W$ defined by $\ups$ is the evaluation Verma module: affinizations of finite-dimensional irreducible $U_q(\mfsl_2)$-modules arise as quotients if $\mu \in \Z_{>0}$ (also see \cite[Sec.~2.2]{KT14}). We will in addition consider three $U_q(\wh{\mfb}^+)$-representations which do not extend to representations of $U_q(\wh{\mfg})$. A useful reducible representation $\phi: U_q(\wh{\mfb}^+) \to \End(W)$ is given by \eq{ \phi(e_0) = 0, \qq \phi(e_1) = \frac{q}{1-q^2} a, \qq \phi(k_0) = q^{\mu+1+2D}, \qq \phi(k_1) = q^{-\mu-1-2D} } which is closely related to the special evaluation homomorphism defined in \cite[Eq.~(4.6)]{KT14}. The following representations $\vrho, \brho: U_q(\wh{\mfb}^+) \to \End(W)$ play an essential role in the definition of Baxter Q-operators:\eq{ \label{homs:plus} \begin{aligned} \vrho(e_0) &= \frac{1}{1-q^2} a^\dag, & \vrho(e_1) &= \frac{q^2}{1-q^2} a, & \vrho(k_0) &= q^{2D}, & \vrho(k_1) &= q^{-2D}, \\ \brho(e_0) &= \frac{q^2}{1-q^2} \ba^\dag, \qu & \brho(e_1) &= \frac{1}{1-q^2} a, \qu & \brho(k_0) &= q^{2(D+1)}, \qu & \brho(k_1) &= q^{-2(D+1)}. \end{aligned} } They correspond to the representations $L^\pm_{1,a}$ introduced in \cite[Def.~3.7]{HJ12} for suitable $a \in \C^\times$ (called \emph{prefundamental} representations in \cite{FH15} which considers their role in the construction of Q-operators for closed chains). We will henceforth repeatedly denote grading-shifted representations by the notation \eqref{reps:gradingshift}. Note that the grading-shifted representations $\vrho_z$, $\brho_z$ correspond to the representations defined by \cite[Eq.~(3.5)]{KT14}. 
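As a numerical spot-check (again with illustrative truncation size and numeric value of $q$), one can confirm on a truncation of $W$ that the assignments \eqref{homs:plus} for $\vrho$ respect the standard relations $k_i e_i k_i^{-1} = q^2 e_i$ and $k_i e_j k_i^{-1} = q^{-2} e_j$ ($i \ne j$), and that $k_0 k_1$ acts as the identity, as befits a level-0 module.

```python
import numpy as np

N = 10   # truncation of W (illustrative choice)
q = 0.6  # generic numeric deformation parameter (illustrative choice)

a = np.diag(np.ones(N - 1), k=1)                          # a . w_{j+1} = w_j
adag = np.diag(1.0 - q ** (2.0 * np.arange(1, N)), k=-1)  # a^dag . w_j = (1-q^{2(j+1)}) w_{j+1}
D = np.arange(N)

# the representation vrho from the text, truncated to N basis vectors
e0 = adag / (1 - q**2)
e1 = q**2 / (1 - q**2) * a
k0 = np.diag(q ** (2.0 * D))
k1 = np.diag(q ** (-2.0 * D))

def conj(k, x):
    """Conjugation k x k^{-1} (k is diagonal and invertible)."""
    return k @ x @ np.linalg.inv(k)

# Cartan-type relations k_i e_j k_i^{-1} = q^{+/-2} e_j
assert np.allclose(conj(k1, e1), q**2 * e1)
assert np.allclose(conj(k0, e1), q**-2 * e1)
assert np.allclose(conj(k0, e0), q**2 * e0)
assert np.allclose(conj(k1, e0), q**-2 * e0)

# k_0 k_1 acts as the identity (level-0)
assert np.allclose(k0 @ k1, np.eye(N))
print("vrho satisfies the checked Borel relations on the truncation")
```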
\begin{rmk} \label{rmk:signdifference} Note that the grading-shifted representation in \cite[Eq.~(2.9)]{VW20} is related to $\vrho_z$ by a factor of $-1$ in the actions of $e_0$ and $e_1$: in other words it is equal to $\vrho_{-z}$. Since the Baxter Q-operators only depend on $z^2$, see \cite[Lem.~4.5]{VW20}, there are no serious discrepancies. The benefit of the current choice is its consistency across the relevant level-0 representations, with $\ups$ having the same sign convention as finite-dimensional representations such as $\Pi$, see Section \ref{sec:LandR}. \hfill \rmkend \end{rmk} \subsection{The $U_q(\wh\mfb^+)$-intertwiner $\cO$}\label{sec:O+} The tensor products $\vrho_{q^{-\mu/2} z} \ot \brho_{q^{\mu/2}z}$ and $\ups_{z} \ot \phi_{z}$ of shifted representations are closely related in the following sense: the two induced $U_q(\wh\mfb^+)$-actions on $W \ot W$ are conjugate by an element in $\cA^{(2)}$ which is independent of $z$. More precisely, consider the deformed exponential \eq{ \label{qexp:def} e_{q^2}(x) = \sum_{k=0}^\infty \frac{x^k}{(q^2;q^2)_k}. } We refer to Appendix \ref{app:qexp} for more detail on this formal power series. We now define the following invertible element of $\End(W \ot W)$: \eq{ \label{Oplus:def} \cO = e_{q^2}(q^2 a_1 \ba^\dag_2)^{-1} q^{\mu (D_1-D_2)/2}. } The following statement is \cite[Eq.~(4.4)]{KT14} and connects to \cite[Thm.~3.8]{FH15}; for completeness we provide a proof in the present conventions. \begin{thrm}\label{thm:O:plus} The $U_q(\wh\mfb^+)$-representations $\vrho_{q^{-\mu/2} z} \ot \brho_{q^{\mu/2} z}$ and $\ups_{z} \ot \phi_{z}$ are intertwined by $\cO$: \eq{ \label{O:plus:intw} \cO \, \big( \vrho_{q^{-\mu/2} z} \ot \brho_{q^{\mu/2} z} \big)(\Del(u)) = \big( \ups_{z} \ot \phi_{z} \big)(\Del(u)) \,\, \cO \qq \text{for all } u \in U_q(\wh\mfb^+). 
} \end{thrm} \begin{proof} The relations \eqrefs{qexp:commute}{qexp:Dba} can be evaluated at $y=q^2$, yielding \begin{align*} q^{\mu (D_2-D_1)/2} e_{q^2}(q^2a_1 \ba^\dag_2) \ba^\dag_2 &= \big( q^{-\mu/2} a^\dag_1 + q^{2(D_1+1)+\mu/2} \ba^\dag_2 \big) q^{\mu (D_2-D_1)/2} e_{q^2}(q^2a_1 \ba^\dag_2), \\ \multicolumn{2}{c}{$q^{\mu(D_2-D_1)/2} e_{q^2}(q^2a_1 \ba^\dag_2) \big( a_1 (q^{-2\mu} - q^{- 2D_1}) + q^{-2(D_1+1)} a_2 \big)= \qq \qq$} \\ \multicolumn{2}{c}{$\qq \qq = \big( a_1 q^{-3\mu/2} + q^{-\mu/2 - 2(D_1+1)} a_2 \big) q^{\mu (D_2-D_1)/2} e_{q^2}(q^2a_1 \ba^\dag_2)$,} \\ q^{\mu(D_2-D_1)/2} e_{q^2}(q^2a_1 \ba^\dag_2) q^{2(D_1+D_2+1)} &= q^{2(D_1+D_2+1)} q^{\mu(D_2-D_1)/2} e_{q^2}(q^2 a_1 \ba^\dag_2) ,\\ q^{\mu (D_2-D_1)/2} e_{q^2}(q^2 a_1 \ba^\dag_2) q^{-2(D_1+D_2+1)} &= q^{-2(D_1+D_2+1)} q^{\mu (D_2-D_1)/2} e_{q^2}(q^2 a_1 \ba^\dag_2). \end{align*} These directly imply \eqref{O:plus:intw} for $u \in \{ e_0,e_1,k_0,k_1\}$. \end{proof} \subsection{Formalism for $U_q(\wh\mfb^-)$} Recall the automorphism $\psi$ defined by \eqref{psi:def}, interchanging the two Borel subalgebras. Note that $\ups: U_q(\wh\mfg) \to \End(W)$ satisfies \eq{ \label{sigma:psi:symmetry} \ups = \ups \circ \psi. } Hence, it is natural to define representations of $U_q(\wh\mfb^-)$ corresponding to $\vrho$, $\brho$ and $\phi$, as follows: \eq{ \label{maps:minus} \vrho^- := \vrho\circ \psi, \qq \brho^- := \brho \circ \psi, \qq \phi^- := \phi \circ \psi. } Explicitly, we have \eq{ \label{homs:minus} \begin{aligned} \vrho^-(f_0) &= \frac{q^2}{1-q^2} a, & \vrho^-(f_1) &= \frac{1}{1-q^2} a^\dag, & \vrho^-(k_0) &= q^{2D}, & \vrho^-(k_1) &= q^{-2D}, \\ \brho^-(f_0) &= \frac{1}{1-q^2} a, & \brho^-(f_1) &= \frac{q^2}{1-q^2} \ba^\dag, & \brho^-(k_0) &= q^{2(D+1)}, & \brho^-(k_1) &= q^{-2(D+1)}, \\ \phi^-(f_0) &= \frac{q}{1-q^2} a, & \phi^-(f_1) &= 0, & \phi^-(k_0) &= q^{\mu+1+2D}, & \phi^-(k_1) &= q^{-\mu-1-2D}. 
\end{aligned} } By \eqref{psi:Sigma}, whereas the grading-shifted representations $\vrho_z$, $\brho_z$, $\phi_z$ take values in $\End(W) \ot \C[z]$, their negative counterparts $\vrho^-_z$, $\brho^-_z$, $\phi^-_z$ take values in $\End(W) \ot \C[z^{-1}]$. Since $\psi$ is a coalgebra antiautomorphism, using \eqref{psi:Sigma} we immediately deduce the following characterization of the tensorial opposite of $\cO$. \begin{crl}\label{crl:O:minus} The linear map \eq{ \label{Omin:def} \cO_{21} = e_{q^2}(q^2\ba^\dag_1 a_2)^{-1} q^{\mu (D_2-D_1)/2} \in \End(W \ot W). } intertwines the $U_q(\wh{\mfb}^-)$-representations $\brho^-_{q^{-\mu/2} z} \ot \vrho^-_{q^{\mu/2} z}$ and $\phi^-_z \ot \ups_z$, viz. \eq{ \label{O:min:intw} \cO_{21} \, \big( \brho^-_{q^{-\mu/2} z} \ot \vrho^-_{q^{\mu/2} z} \big)(\Del(u)) = \big( \phi^-_z \ot \ups_z \big)(\Del(u)) \, \, \cO_{21} \qq \text{for all } u \in U_q(\wh{\mfb}^-). } \end{crl} \section{L-operators and R-operators} \label{sec:LandR} In order to define L-operators, we recall the standard 2-dimensional representation $\Pi : U_q(\wh{\mfg}) \rightarrow \End(\C^2)$ determined by \eq{ \begin{aligned} \Pi(e_0) = \Pi(f_1) &= \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qu & \qu \Pi(k_0) &= \begin{pmatrix} q^{-1} & 0 \\ 0 & q \end{pmatrix}, \\ \Pi(e_1) = \Pi(f_0) &= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qu & \qu \Pi(k_1) &= \begin{pmatrix} q & 0 \\ 0 & q^{-1} \end{pmatrix}. \end{aligned} } In analogy with \eqref{sigma:psi:symmetry}, we have \eq{ \label{Pi:psi:symmetry} \Pi = \Pi \circ \psi. } \subsection{L-operators for $U_q(\wh\mfb^+)$-modules} \label{sec:L:plus} We will now obtain explicit formulas for certain scalar multiples of $\cR_{\vrho \Pi}(z)$, $\cR_{\brho \Pi}(z)$, $\cR_{\ups \Pi}(z)$ and $\cR_{\phi \Pi}(z)$. In this case both Theorem \ref{thm:R(z):action} and Proposition \ref{prop:R(z):intw} apply. It turns out that the relevant linear equations all have 1-dimensional solution spaces over $\C[[z]]$. 
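Since $\Pi$ enters all of these L-operators, it is worth confirming the conventions numerically. The following sketch checks a representative set of relations for the matrices above in exact rational arithmetic:

```python
from fractions import Fraction as F

q = F(3, 5)                             # generic rational q
e = {0: [[F(0), F(0)], [F(1), F(0)]],   # Pi(e_0) = Pi(f_1)
     1: [[F(0), F(1)], [F(0), F(0)]]}   # Pi(e_1) = Pi(f_0)
k = {0: [[1 / q, F(0)], [F(0), q]],     # Pi(k_0)
     1: [[q, F(0)], [F(0), 1 / q]]}     # Pi(k_1)

def mul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

# k_0 k_1 = 1 (level zero) and k_i e_i k_i^{-1} = q^2 e_i:
assert mul(k[0], k[1]) == [[F(1), F(0)], [F(0), F(1)]]
assert mul(mul(k[0], e[0]), k[1]) == [[q ** 2 * x for x in row] for row in e[0]]
assert mul(mul(k[1], e[1]), k[0]) == [[q ** 2 * x for x in row] for row in e[1]]

# [e_i, f_i] = (k_i - k_i^{-1})/(q - q^{-1}), where Pi(f_0) = Pi(e_1),
# Pi(f_1) = Pi(e_0) and k_i^{-1} = k_{1-i}:
for i in (0, 1):
    f = e[1 - i]
    EF, FE = mul(e[i], f), mul(f, e[i])
    target = [[(k[i][r][s] - k[1 - i][r][s]) / (q - 1 / q) for s in range(2)]
              for r in range(2)]
    assert [[EF[r][s] - FE[r][s] for s in range(2)] for r in range(2)] == target
print("Pi relations OK")
```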
The following linear operators are convenient scalar multiples. \begin{align} \cL_\vrho(z) &= \begin{pmatrix} q^D & a^\dag q^{-D-1} z \\ a q^{D+1} z & q^{-D}-q^{D+2} z^2 \end{pmatrix},\\[2mm] \label{Lbar:plus:def} \cL_\brho(z) &= \begin{pmatrix} q^{D+1} - q^{-D+1} z^2 & \ba^\dag q^{-D} z \\ a q^{D} z & q^{-D-1} \end{pmatrix},\\[2mm] \label{Lsigma:plus:def} \cL_\ups(z) &= \begin{pmatrix} q^D - q^{-D+\mu} z^2 & \adag q^{-D-2+\mu} z \\ a q \big( q^{D-\mu} - q^{-D+\mu} \big) z & q^{-D-1+\mu} - q^{D+1} z^2 \end{pmatrix},\\[2mm] \cL_\phi(z) &= \begin{pmatrix} q^{D+1} & 0 \\ a q^{D+1} z & q^{-D-\mu} \end{pmatrix}. \end{align} \begin{rmk} We have abused notation by representing linear operators on $\End(W \ot \C^2)$ as $2 \times 2$ matrices with coefficients in $\End(W)$ (as opposed to the conventional usage that realizes operators on $\End(\C^2\ot W)$ in this way). \hfill \rmkend \end{rmk} The following result is \cite[Cor.\ 4.2]{KT14}. \begin{thrm}\label{thm:fund} The above L-operators satisfy the following relation in $\End(W \ot W \ot \C^2)[[z]]$: \eq{ \label{fund:bulk:plus} \cO_{12} \cL_\vrho(q^{-\mu/2} z)_{13} \cL_\brho(q^{\mu/2} z)_{23} = \cL_\ups(z)_{13} \cL_\phi(z)_{23} \cO_{12} . } \end{thrm} \begin{proof} From \eqref{R:axiom2} one deduces \begin{align*} \cL_\vrho(q^{-\mu/2} z)_{13} \cL_\brho(q^{\mu/2} z)_{23} \; &\propto \; (\vrho_{q^{-\mu/2} z} \ot \brho_{q^{\mu/2} z} \ot \Pi)\big((\Delta \ot \id)(\cR)\big), \\ \cL_\ups(z)_{13} \cL_\phi(z)_{23} \; &\propto \; (\ups_{z} \ot \phi_{z} \ot \Pi)\big((\Delta \ot \id)(\cR)\big). \end{align*} Now Theorem \ref{thm:O:plus} implies \eqref{fund:bulk:plus} up to a scalar. By applying both sides to $w_0 \ot w_0 \ot ({1 \atop 0})$ one observes that the scalar is 1. 
\end{proof} Given the L-operators for the various $U_q(\wh\mfb^+)$-representations, Lemma \ref{lem:R:Psymmetry} provides us with L-operators for the corresponding $U_q(\wh\mfb^-)$-representations: $\cL^-_\pi(z) = \cL_\pi(z)_{21}$ for $\pi \in \{ \vrho, \brho, \ups, \phi \}$. These are scalar multiples of $\cR_{\Pi \vrho^-}(z)$, $\cR_{\Pi \brho^-}(z)$, $\cR_{\Pi \ups}(z)$ and $\cR_{\Pi \phi^-}(z)$, respectively. Theorem \ref{thm:fund} immediately yields the following result: \begin{crl}\label{thm:fund:minus} The following relation in $\End(\C^2 \ot W \ot W)[[z]]$ is satisfied: \eq{ \label{fund:bulk:minus} \cO_{32} \cL^-_{\vrho}(q^{-\mu/2} z)_{13} \cL^-_{\brho}(q^{\mu/2} z)_{12} = \cL^-_{\ups}(z)_{13} \cL^-_{\phi}(z)_{12} \cO_{32} . } \end{crl} \subsection{Actions of $\cR(z)$ on tensor products of infinite-dimensional Borel representations} By Theorem \ref{thm:R(z):action}, the grading-shifted universal R-matrix also acts on the tensor product of the level-0 modules $(W,\ups)$ and $(W,\phi^-)$ and on the tensor product of the level-0 modules $(W,\vrho)$ and $(W,\brho^-)$ as $\End(W \ot W)$-valued formal power series. It is convenient for us to use rescaled linear-operator-valued formal power series \eq{ \cR_{\vrho\brho}(z), \cR_{\ups\phi}(z) \in \End(W \ot W) \ot \C[[z]], } uniquely defined by the condition that they fix $w_0 \ot w_0$: \eq{ \label{Rrhobrho:Rsitau:def} \begin{aligned} \cR_{\vrho \brho}(z) \, &\propto \, (\vrho \ot \brho^-)(\cR(z)), & \cR_{\vrho \brho}(z) \cdot (w_0 \ot w_0) = w_0 \ot w_0, \\ \cR_{\ups \phi}(z) \, &\propto \, (\ups \ot \phi^-)(\cR(z)), & \qq \cR_{\ups \phi}(z) \cdot (w_0 \ot w_0) = w_0 \ot w_0. \end{aligned} } These power series will appear in the boundary factorization identity. 
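All of these operators are manipulated as formal power series; note that $e_{q^2}(x)$ has constant term $1$ and is therefore invertible in $\C[[x]]$, so that expressions such as $\cO$ in \eqref{Oplus:def} are well defined. A classical identity useful in such manipulations is the functional equation $(1-x)\,e_{q^2}(x) = e_{q^2}(q^2 x)$, easily confirmed on truncated series:

```python
from fractions import Fraction as F

q, M = F(2, 3), 12        # generic rational q; compare coefficients of x^0..x^{M-1}
q2 = q * q

# (q^2; q^2)_k = prod_{i=1}^{k} (1 - q^{2i})
poch = [F(1)]
for k in range(1, M):
    poch.append(poch[-1] * (1 - q2 ** k))

e = [1 / p for p in poch]                                    # e_{q^2}(x) coefficients
lhs = [e[k] - (e[k - 1] if k else F(0)) for k in range(M)]   # (1 - x) e_{q^2}(x)
rhs = [q2 ** k * e[k] for k in range(M)]                     # e_{q^2}(q^2 x)
print(lhs == rhs)                                            # True
```

At the level of coefficients the comparison verifies $1/(q^2;q^2)_k - 1/(q^2;q^2)_{k-1} = q^{2k}/(q^2;q^2)_k$.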
In Appendix \ref{app:R-operators} we obtain explicit expressions for $\cR_{\vrho \brho}(z)$ and $\cR_{\ups\phi}(z)$, although we will not need these for the proof of the boundary factorization identity using the universal K-matrix formalism of Section \ref{sec:augmentedqOns}. \section{K-matrices} \label{sec:K} In this section we consider solutions of reflection equations associated to the subalgebra $U_q(\mfk)$. \subsection{Right K-matrices} By Theorem \ref{thm:K(z):action}, applying any of the level-0 $U_q(\wh\mfb^+)$-representations $\vrho$, $\brho$, $\ups$, $\phi$ to the grading-shifted universal K-matrix associated to $U_q(\mfk)$ we obtain $\End(W)$-valued formal power series, satisfying the reflection equation of Proposition \ref{prop:K(z):RE}. Moreover, since these commute with the action of $k_1$ they act diagonally with respect to the basis $\{w_j\}_{j \ge 0}$. We will consider the scalar multiples of these linear operators which fix $w_0$: \eq{ \label{K:def} \cK_\pi(z) \, \propto \, \pi(\cK(z)), \qq \cK_\pi(z) \cdot w_0 = w_0, } for $\pi \in \{ \vrho, \brho, \ups, \phi \}$. It is convenient to obtain explicit expressions by applying Propositions \ref{prop:K(z):RE} and \ref{prop:K(z):intw}. These could be found independently of the universal K-matrix formalism, either by solving the reflection equations directly in all cases or by following the approach outlined in \cite{DG02,DM03,RV16} (this relies on the irreducibility of certain tensor products as $U_q(\mfk)((z))$-modules; otherwise the reflection equation must be verified directly). First of all, the linear operator \eq{ \label{eq:RightBoundary:V} K_{\Pi}(z) = \begin{pmatrix} \xi z^2 - 1 & 0 \\ 0 & \xi - z^2 \end{pmatrix} \in \End({\C^2}) [[z]] } is, up to a scalar, the unique solution of the $U_q(\mfk)$-intertwining condition \eq{ K_\Pi(z) \Pi_z(u) = \Pi_{1/z}(u) K_\Pi(z) \qq \text{for all } u \in U_q(\mfk).
} By Theorem \ref{thm:K(z):action}, it is proportional to the action of the grading-shifted universal K-matrix in the representation $(\C^2,\Pi)$. Recall that $\Pi \circ \psi = \Pi$; hence, motivated by Proposition \ref{prop:K(z):RE}, for $\pi \in \{\vrho, \brho, \ups, \phi \}$, we consider the right reflection equation \eq{ \label{eq:RightBoundary:W} \cL_\pi(\tfrac{y}{z}) \cK_{\pi}(y) \cL_{\pi}(yz) K_{\Pi}(z) = K_{\Pi}(z) \cL_\pi(y z) \cK_{\pi}(y) \cL_{\pi}(\tfrac{y}{z}) \in \End (W \ot {\C^2})[[y/z,z]]. } \begin{lemma} \label{lem:K:explicit} We have \eq{ \label{eq:KMatrices} \begin{aligned} \cK_\vrho(z) &= (-q^{-D} \xi)^{D} (q^{2}\xi^{-1}z^2;q^2)_{D}, & \cK_\brho(z) &= (q z^2)^{-D} (q^{2} \xi^{-1} z^{-2};q^2)_D^{-1}, \\ \cK_\ups(z) &= z^{-2D} \frac{(q^{2-\mu} \xi^{-1} z^2;q^2)_D}{(q^{2-\mu} \xi^{-1} z^{-2};q^2)_D}, & \qq \cK_\phi(z) &= (-q^{-\mu - D-1} \; \xi)^D. \end{aligned} } \end{lemma} \begin{proof} For $\cK_\ups(z)$ we make use of Proposition \ref{prop:K(z):intw}: the intertwining condition \eq{ \label{Ksigma:intertwine} \cK_\ups(z) \ups_{z}(u) = \ups_{1/z}(u) \cK_\ups(z) \qq \text{for all } u \in U_q(\mfk) } can be solved directly. Since $\cK(z)$ commutes with the action of $k_1$ it follows that $\cK_\ups(z) = f(D)$ for some $f \in \cF$. Now imposing \eqref{Ksigma:intertwine} for the generators $e_0-q^{-1}\xi^{-1}k_0f_1$ and $e_1-q^{-1}\xi k_1f_0$ yields the recurrence relation \[ \frac{f(D+1)}{f(D)} = \frac{1-q^{2(D+1)-\mu} \xi^{-1} z^2}{z^2-q^{2(D+1)-\mu} \xi^{-1}}. \] In particular, the linear relation \eqref{Ksigma:intertwine} has a 1-dimensional solution space. Together with the constraint $f(0)=1$ it yields the formula given in \eqref{eq:KMatrices}.
For $\pi \in \{ \vrho, \brho, \phi \}$, it is convenient to consider the linear space \eq{ \label{eq:REsolspace} {\sf RE}_\pi := \{ \cK_\pi(y) \in \cF(D) \, | \, \eqref{eq:RightBoundary:W} \text{ is satisfied} \} } and use Proposition \ref{prop:K(z):RE} to find the explicit expression. Indeed, the operator $\cK_{\vrho}(z)$ was obtained in \cite[Sec.~2.4]{VW20} as the unique element of the 1-dimensional linear space ${\sf RE}_{\vrho}$ which fixes $w_0$. In an analogous way we obtain the result for $\cK_{\brho}(z)$. Note that the representation $\phi$ is reducible. Indeed, the solution space of \eqref{eq:RightBoundary:W} in $\End(W)[[z]]$ is infinite-dimensional:~the general solution $\cK_\phi(z)$ is of the form $(- q^{-\mu - D -1} \; \xi)^D p$ with $p$ in the centralizer of $a$ in $\cA$ (i.e.~a polynomial in $a$ with coefficients in $\C[[z]]$). The second part of Theorem \ref{thm:K(z):action} implies that $\cK_\phi(z) \in \cF(D)$, so that $p$ is a scalar. Requiring that $w_0$ is fixed forces $p=1$. \end{proof} \subsection{Left K-matrices} \label{sec:leftK} We now obtain linear-operator-valued power series satisfying a reflection equation for the left boundary by using a well-established bijection, see \cite[Eq.~(15)]{Sk88}, between its solution set and the solution set of the right reflection equation. For fixed $\txi \in \C^\times$ we define \eq{ \label{TildeK:V} \wt K_{\Pi}(z) := (1-q^2 \txi^{-1} z^2) (1-q^2 \txi z^2) \big( K_{\Pi}(q z)^{-1}|_{\xi \mapsto \txi^{-1}} \big) = \begin{pmatrix} q^2 \txi z^2 -1 & 0\\ 0 & \txi - q^2z^2 \end{pmatrix}. } Also, for $\pi \in \{ \vrho, \brho, \ups, \phi \}$ we define \eq{ \label{TildeK:W} \tcK_{\pi}(z) := \cK_{\pi}(q z)^{-1}|_{\xi \mapsto \txi^{-1}}. } Similarly, note that $\cL_\pi(\ga z)$ is invertible in $\End(W \ot \C^2)[[z]]$ for all $\ga \in \C$. We define \eq{ \label{tildeL:def} \tcL_\pi(z) = \cL_\pi(q^2z)^{-1}.
} \begin{lemma} For all $\pi \in \{\vrho, \brho, \ups, \phi \}$ the \emph{left reflection equation} holds: \eq{ \label{LeftBoundary} \tcK_\pi(y) \tcL_\pi(yz) \wt K_{\Pi}(z) \cL_\pi(\tfrac{y}{z}) = \cL_\pi(\tfrac{y}{z}) \wt K_{\Pi}(z)\tcL_\pi(yz)\tcK_\pi(y) \qu \in \End (W \ot {\C^2})[[y/z,z]]. } \end{lemma} \begin{proof} The desired equation \eqref{LeftBoundary} can be rewritten as \[ \wt K_{\Pi}(z)^{-1}\tcL_\pi(yz)^{-1} \tcK_\pi(y)^{-1} \cL_\pi(\tfrac{y}{z}) = \cL_\pi(\tfrac{y}{z}) \tcK_\pi(y)^{-1}\tcL_\pi(yz)^{-1}\wt K_{\Pi}(z)^{-1}. \] By \eqrefs{TildeK:V}{tildeL:def}, this is equivalent to the right reflection equation \eqref{eq:RightBoundary:W} with $y \mapsto qy$, $z \mapsto qz$ and $\xi \mapsto \txi^{-1}$. \end{proof} Using the explicit formulas \eqref{eq:KMatrices} together with the definition \eqref{TildeK:W}, we obtain the following $\End(W)$-valued formal power series in $z$ solving the left reflection equation \eqref{LeftBoundary}: \begin{equation} \label{K-tilde} \begin{aligned} \tcK_\vrho(z) &= (-q^D \txi)^D (q^4 \txi z^2;q^2)_{D}^{-1}, & \tcK_\brho(z) &= (q^3 z^2)^D (\txi z^{-2} ;q^2)_{D}, \\ \tcK_{\ups}(z) &= (q z)^{2D} \frac{(q^{-\mu} \txi z^{-2};q^2)_{D}}{(q^{4-\mu} \txi z^2;q^2)_{D}}, & \qq \tcK_{\phi}(z) &= (-q^{\mu + D + 1} \txi)^D. \end{aligned} \end{equation} \section{Fusion intertwiners revisited} \label{sec:fusionintw} In this short intermezzo we explain how the universal K-matrix formalism naturally leads to relations involving K-matrices and $U_q(\wh\mfb^+)$-intertwiners called \emph{fusion intertwiners} which play a key role in the short exact sequence approach to the Q-operator. These intertwiners were discussed in \cite{VW20} and the relevant relations with K-matrices were shown by a linear-algebraic computation relying on the explicit expressions of the various constituent factors, see \cite[Lem.~3.2]{VW20}. In other words, the representation-theoretic origin of these relations was unclear, which we now remedy.
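The explicit formulas \eqref{K-tilde} can be cross-checked against the definition \eqref{TildeK:W}, and \eqref{eq:KMatrices} against the recurrence in the proof of Lemma \ref{lem:K:explicit}. Since all of these K-operators are functions of $D$, it suffices to compare scalar functions of the $D$-eigenvalue $n$. A minimal sketch at generic rational sample points, assuming the convention $(x;q^2)_n = \prod_{k=0}^{n-1}(1-xq^{2k})$:

```python
from fractions import Fraction as F

q, mu = F(2, 3), 3                 # generic rational q, integer mu for exactness
z, txi = F(5, 4), F(7, 2)          # generic sample points
xi_sub = 1 / txi                   # the substitution xi -> txi^{-1}

def poch(x, n):                    # (x; q^2)_n = prod_{k=0}^{n-1} (1 - x q^{2k})
    out = F(1)
    for k in range(n):
        out *= 1 - x * q ** (2 * k)
    return out

# Right K-matrices (eq:KMatrices) as functions of the D-eigenvalue n:
K = {
 'rho':  lambda n, z, xi: (-q ** (-n) * xi) ** n * poch(q ** 2 / xi * z ** 2, n),
 'brho': lambda n, z, xi: (q * z ** 2) ** (-n) / poch(q ** 2 / xi / z ** 2, n),
 'ups':  lambda n, z, xi: z ** (-2 * n) * poch(q ** (2 - mu) / xi * z ** 2, n)
                                          / poch(q ** (2 - mu) / xi / z ** 2, n),
 'phi':  lambda n, z, xi: (-q ** (-mu - n - 1) * xi) ** n,
}
# Left K-matrices (K-tilde):
Kt = {
 'rho':  lambda n, z: (-q ** n * txi) ** n / poch(q ** 4 * txi * z ** 2, n),
 'brho': lambda n, z: (q ** 3 * z ** 2) ** n * poch(txi / z ** 2, n),
 'ups':  lambda n, z: (q * z) ** (2 * n) * poch(q ** (-mu) * txi / z ** 2, n)
                                          / poch(q ** (4 - mu) * txi * z ** 2, n),
 'phi':  lambda n, z: (-q ** (mu + n + 1) * txi) ** n,
}
# Definition (TildeK:W):  Kt_pi(z) = K_pi(q z)^{-1} |_{xi -> txi^{-1}}
ok = all(Kt[p](n, z) == 1 / K[p](n, q * z, xi_sub) for p in K for n in range(7))

# Recurrence from the proof of Lemma lem:K:explicit, for K_ups:
xi = F(9, 8)
for n in range(6):
    ratio = K['ups'](n + 1, z, xi) / K['ups'](n, z, xi)
    assert ratio == (1 - q ** (2 * (n + 1) - mu) / xi * z ** 2) \
                    / (z ** 2 - q ** (2 * (n + 1) - mu) / xi)
print(ok)   # True
```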
Level-0 representations of $U_q(\wh\mfb^+)$ are amenable to scalar modifications of the action of $U_q(\mfh) = \langle k_1^{\pm 1} \rangle$, see also \cite[Rmk.~2.5]{HJ12}. In particular, for $r \in \C^\times$, define a modified Borel representation $\vrho_r$ as follows: \eq{ \vrho_r(e_i) = \vrho(e_i), \qq \vrho_r(k_0) = r \vrho(k_0), \qq \vrho_r(k_1) = r^{-1} \vrho(k_1) } and consider the grading-shifted representation $\vrho_{r,z} := (\vrho_r)_z$. There exist $U_q(\wh\mfb^+)$-intertwiners \begin{align*} \iota(r) &: (W,\vrho_{qr,qz}) \to (W \ot \C^2,\vrho_{r,z} \ot \Pi_z), \\ \tau(r) &: (W \ot \C^2,\vrho_{r,z} \ot \Pi_z) \to (W,\vrho_{q^{-1}r,q^{-1}z}), \end{align*} called \emph{fusion intertwiners}, which take part in the following short exact sequence: \begin{equation} \label{SES:plus} \begin{tikzcd} 0 \arrow[r] & (W,\vrho_{qr,qz}) \arrow[r,"\iota(r)"] & (W \ot \C^2,\vrho_{r,z} \ot \Pi_z) \arrow[r,"\tau(r)"] & (W,\vrho_{q^{-1}r,q^{-1}z}) \arrow[r] & 0 \end{tikzcd} \end{equation} Explicitly\footnote{The sign mismatch with \cite[Eq.~(3.1)]{VW20} is explained in Remark \ref{rmk:signdifference}.}, we have \eq{ \iota(r) = \begin{pmatrix} q^{-D} a^\dag \\ -q^{D+1} r \end{pmatrix}, \qq \tau(r) = \begin{pmatrix} q^D, & q^{-D} r^{-1} a^\dag \end{pmatrix}. } Analogously to Theorem \ref{thm:fund}, fusion relations for the L-operators $\cL(r,z)$, defined as suitable scalar multiples of $(\vrho_{r,z} \ot \Pi)(\cR)$, now follow from these intertwining properties and the coproduct formulas for $\cR$ \eqref{R:axiom2}, see \cite[Eqns.~(3.8) and (3.9)]{VW20}. Recalling the universal object $\cK$ and Theorem \ref{thm:K(z):action}, we define the corresponding K-operator $\cK_\vrho(r,z)$ as the unique scalar multiple of $\vrho_{r,z}(\cK)$ which fixes $w_0$ (cf.~\cite[Prop.~2.5]{VW20}). Then \eq{ (\vrho_{r,z} \ot \Pi_z)(\Del(\cK)) \qq \propto \qq \cK_\vrho(r,z)_1 \cL(r,z^2) K_\Pi(z)_2 } as a consequence of \eqref{K:axiom2}.
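Exactness of \eqref{SES:plus} at the middle term requires in particular that $\tau(r)\,\iota(r) = 0$; with the explicit formulas above this reduces to the operator identity $q^D q^{-D} a^\dag - q^{-D} a^\dag q^{D+1} = 0$. The following sketch confirms this on a truncated Fock space, assuming only that $D w_j = j w_j$ and that $a^\dag$ raises the grading by one (the composite is independent of the normalization of $a^\dag$):

```python
from fractions import Fraction as F

N, q, r = 8, F(2, 3), F(3, 7)      # truncation size and generic sample values
dim = N + 1

def diag(f):  return [[f(j) if i == j else F(0) for j in range(dim)] for i in range(dim)]
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

# Assumed conventions: D w_j = j w_j and adag w_j = w_{j+1} (the precise nonzero
# matrix entries of adag do not affect the conclusion).
adag = [[F(1) if i == j + 1 else F(0) for j in range(dim)] for i in range(dim)]

iota1 = mul(diag(lambda j: q ** (-j)), adag)      # q^{-D} adag
iota2 = diag(lambda j: -q ** (j + 1) * r)         # -q^{D+1} r
tau1  = diag(lambda j: q ** j)                    # q^{D}
tau2  = mul(diag(lambda j: q ** (-j) / r), adag)  # q^{-D} r^{-1} adag

# tau(r) iota(r) = tau_1 iota_1 + tau_2 iota_2 must vanish:
T1, T2 = mul(tau1, iota1), mul(tau2, iota2)
comp_zero = all(T1[i][j] + T2[i][j] == 0 for i in range(dim) for j in range(dim))
print(comp_zero)   # True
```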
Since $\cK$ lies in a completion of $U_q(\wh\mfb^+)$, the intertwining properties of $\iota(r)$ and $\tau(r)$ now directly yield the following fusion relations for the K-operator: \begin{align*} \cK_\vrho(r,z)_1 \cL(r,z^2) K_\Pi(z)_2 \iota(r) \qq &\propto \qq \iota(r) \cK_\vrho(qr,qz), \\ \tau(r) \cK_\vrho(r,z)_1 \cL(r,z^2) K_\Pi(z)_2 \qq &\propto \qq \cK_\vrho(q^{-1}r,q^{-1}z) \tau(r), \end{align*} with the scalar factors determined by applying both sides of each relation to $w_0$, say. We will be able to prove a boundary counterpart of the factorization identity \eqref{fund:bulk:plus} using similar ideas. We recover, with a much smaller computational burden, the key result \cite[Lem.~3.2]{VW20} (a similar relation for left K-operators can easily be deduced from this, as explained in the last sentence of \cite[Proof of Lemma 3.2]{VW20}). In the approach to Baxter's Q-operator using short exact sequences, the fusion relations for L and K-operators induce fusion relations for 2-boundary monodromy operators, see \cite[Lem.~4.2]{VW20}, from which Baxter's relation \eqref{eq:TQ1} follows by taking traces, see \cite[Sec.~5.2]{VW20}. \section{Boundary factorization identity} \label{sec:boundaryfactorization} In motivating and presenting the key boundary relations, it is very useful to introduce a graphical representation of spaces and operators.
Let us introduce the following pictures for the different representations introduced in Sections \ref{sec:Borelreps} and \ref{sec:LandR}:\\ \begin{center} \begin{tikzpicture}[scale=0.6] \draw(0,0) node[left]{$\vrho_z=\quad z$}; \draw[blueline=0.5](0,0) -- (2,0); \draw(7,0) node[left]{$\brho_z=\quad z $}; \draw[redline=0.5] (7,0) -- (9,0); \draw(14,0) node[left]{$\phi_{z}=\quad z$}; \draw[greenline=0.5] (14,0) -- (16,0); \draw(0,-2) node[left]{$\vrho^-_z=\quad z$}; \draw[dashblueline=0.5](0,-2) -- (2,-2); \draw(7,-2) node[left]{$\brho^-_z=\quad z$}; \draw[dashredline=0.5] (7,-2) -- (9,-2); \draw(14,-2) node[left]{$\phi^-_{z}=\quad z$}; \draw[dashgreenline=0.5] (14,-2) -- (16,-2); \draw(3,-4) node[left]{$\ups_{z}=\quad z$}; \draw[wavy=0.5](3,-4) -- (5,-4); \draw(10,-4) node[left]{$\Pi_z=\quad z $}; \draw[aline=0.5] (10,-4) -- (12,-4); \end{tikzpicture} \end{center} \vspace{5mm} For any vector spaces $V$, $V'$, denote by $\cP$ the linear map from $V \ot V'$ to $V' \ot V$ such that $\cP(v \ot v') = v' \ot v$ for all $v \in V$, $v' \in V'$. Also set $z = z_1/z_2$. 
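When implementing the diagrammatic identities below numerically, the flip $\cP$ has a simple matrix realization in the usual Kronecker-product conventions (a small utility sketch; the basis ordering is an implementation choice, not part of the text's formalism):

```python
import numpy as np

def flip(m, n):
    """Matrix of P : V (x) V' -> V' (x) V, v (x) v' |-> v' (x) v,
    for dim V = m, dim V' = n, in the standard Kronecker ordering."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[j * m + i, i * n + j] = 1.0   # v_i (x) v'_j  |->  v'_j (x) v_i
    return P

m, n = 3, 2
v, vp = np.arange(1.0, m + 1), np.array([5.0, -2.0])
P = flip(m, n)
print(np.allclose(P @ np.kron(v, vp), np.kron(vp, v)))   # True
```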
We then have the following pictures for L-operators and R-operators: \[ \begin{array}{ll} \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL_\vrho(z) =$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[blueline=0.3] (-1,0)--(1,0); \draw (0,1) node[above]{$z_2$}; \draw (-1,0) node[left]{$z_1$}; \end{tikzpicture} \qq & \qq \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL_\ups(z)=$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[wavy=0.3] (-1,0)--(1,0); \draw (0,1) node[above]{$z_2$}; \draw (-1,0) node[left]{$z_1$}; \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL_\brho(z)=$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[redline=0.3] (-1,0)--(1,0); \draw (0,1) node[above]{$z_2$}; \draw (-1,0) node[left]{$z_1$}; \end{tikzpicture} \qq & \qq \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL_\phi(z)=$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[greenline=0.3] (-1,0)--(1,0); \draw (0,1) node[above]{$z_2$}; \draw (-1,0) node[left]{$z_1$}; \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL_{\vrho}^-(z) =$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[dashblueline=0.3] (1,0)--(-1,0); \draw (0,1) node[above]{$z_1$}; \draw (1,0) node[right]{$z_2$}; \end{tikzpicture} \qq & \qq \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL^-_{\ups}(z)=$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[wavy=0.3] (1,0)--(-1,0); \draw (0,1) node[above]{$z_1$}; \draw (1,0) node[right]{$z_2$}; \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL^-_{\brho}(z)=$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[dashredline=0.3] (1,0)--(-1,0); \draw (0,1) node[above]{$z_1$}; \draw (1,0) node[right]{$z_2$}; \end{tikzpicture} \qq & \qq \begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cL^-_{\phi}(z)=$}; \draw[aline=0.3] (0,1)--(0,-1); \draw[dashgreenline=0.3] (1,0)--(-1,0); \draw (0,1) node[above]{$z_1$}; \draw (1,0) node[right]{$z_2$}; \end{tikzpicture} \\ \hspace{-4pt} 
\begin{tikzpicture}[scale=0.6] \draw(-1.7,0) node[left]{$\cP \cR_{\vrho\brho}(z) =$}; \draw[dashredline=0.3] (0,1)--(0,-1); \draw[blueline=0.3] (-1,0)--(1,0); \draw (0,1) node[above]{$z_2$}; \draw (-1,0) node[left]{$z_1$}; \end{tikzpicture} \qq & \qq \hspace{-4pt} \begin{tikzpicture}[scale=0.6] \draw(-1,0) node[left]{$\cP \cR_{\ups\phi}(z)=$}; \draw[dashgreenline=0.3] (0,1)--(0,-1); \draw[wavy=0.3] (-1,0)--(1,0); \draw (0,1) node[above]{$z_2$}; \draw (-1,0) node[left]{$z_1$}; \end{tikzpicture} \end{array} \] We now make the following definitions\footnote{These are the modified forms of the R-matrices that appear in the corresponding left reflection equations, see \cite[Eq.~(13)]{Sk88}.}: \eq{ \label{RsitauRrhobrho:tilde} \wt{\cR}_{\vrho\brho}(z) := \cR_{\vrho\brho}(q^2z)^{-1}, \qq \wt{\cR}_{\ups\phi}(z) := \cR_{\ups\phi}(q^2z)^{-1}, } and represent these modified R-matrices by the following pictures: \[ \begin{tikzpicture}[scale=0.6] \draw(-1,0) node[left]{$\wt{\cR}_{\vrho\brho}(z) \cP=$}; \draw[dashredline=0.3] (0,1)--(0,-1); \draw[blueline=0.3] (1,0)--(-1,0); \draw (0,0) node[bblob]{}; \draw (0,1) node[above]{$z_2$}; \draw (1,0) node[right]{$z_1$}; \end{tikzpicture} \qq \qq \qq \begin{tikzpicture}[scale=0.6] \draw(-1,0) node[left]{$\wt{\cR}_{\ups\phi}(z)\cP=$}; \draw[dashgreenline=0.3] (0,1)--(0,-1); \draw[wavy=0.3] (1,0)--(-1,0); \draw (0,0) node[bblob]{}; \draw (0,1) node[above]{$z_2$}; \draw (1,0) node[right]{$z_1$}; \end{tikzpicture} \] The various right-boundary K-matrices are represented as follows: \[ \begin{tikzpicture}[scale=0.6] \begin{scope} [rotate=-45] \draw (0,-2) node[left]{$\cK_{\vrho}(z)=$}; \draw[blueline=0.5](0,0) -- (2,0); \draw (-0.3,-0.3) node[left]{$z$}; \draw[dashblueline=0.5](2,0) -- (2,-2); \draw (2,-2) node[left]{$z^{-1}$}; \end{scope} \end{tikzpicture} \qq \begin{tikzpicture}[scale=0.6] \begin{scope} [rotate=-45,xshift=4cm,yshift=4cm] \draw (0,-2) node[left]{$\cK_{\brho}(z)=$}; \draw[redline=0.5](0,0) -- (2,0); \draw (-0.3,-0.3)
node[left]{$z$}; \draw[dashredline=0.5](2,0) -- (2,-2); \draw (2,-2) node[left]{$z^{-1}$}; \end{scope} \end{tikzpicture} \qq \begin{tikzpicture}[scale=0.6] \begin{scope} [rotate=-45,xshift=8cm,yshift=8cm] \draw (0,-2) node[left]{$\cK_{\ups}(z)=$}; \draw[wavy=0.5](0,0) -- (2,0); \draw (-0.3,-0.3) node[left]{$z$}; \draw[wavy=0.5](2,0) -- (2,-2); \draw (2,-2) node[left]{$z^{-1}$}; \end{scope} \end{tikzpicture} \qq \begin{tikzpicture}[scale=0.6] \begin{scope} [rotate=-45,xshift=12cm,yshift=12cm] \draw (0,-2) node[left]{$\cK_{\phi}(z)=$}; \draw[greenline=0.5](0,0) -- (2,0); \draw (-0.3,-0.3) node[left]{$z$}; \draw[dashgreenline=0.5](2,0) -- (2,-2); \draw (2,-2) node[left]{$z^{-1}$}; \end{scope} \end{tikzpicture} \] The left-boundary K-matrices defined in Section \ref{sec:leftK} are represented by the natural analogues of these pictures. For example: \begin{center} \begin{tikzpicture}[scale=0.6] \begin{scope} [rotate=-45,xshift=4cm,yshift=4cm] \draw (-.5,-2.5) node[left]{$\wt\cK_{\vrho}(z)=$}; \draw[blueline=0.5](0,-2) -- (0,0); \draw (0,0) node[right]{$z$}; \draw[dashblueline=0.5](2,-2) -- (0,-2); \draw (2,-2) node[right]{$z^{-1}$}; \end{scope} \end{tikzpicture} \end{center} Making use of these pictures, we see that Theorem \ref{thm:fund} and Corollary \ref{thm:fund:minus} are represented by \begin{align*} \begin{tikzpicture}[scale=0.5] \draw[redline=0.2](1,0) -- (4,0); \draw (1,0) node[left]{$q^{\mu/2}z_1$}; \draw[blueline=0.2](1,-2) -- (4,-2); \draw (1,-2) node[left]{$q^{-\mu/2}z_1$}; \draw[greenline=0.8](6,0) -- (9,0); \draw (9,0) node[right]{$z_1$}; \draw[wavy=0.8](6,-2) -- (9,-2); \draw (9,-2) node[right]{$z_1$}; \draw (5,-1) ellipse (1cm and 2cm); \draw(6,-1) node[left]{$\cO$}; \draw[aline=0.5](2.5,1) -- (2.5,-3); \draw (2.5,1) node[above]{$z_2$}; \draw(11,-1) node[]{$=$}; \draw[redline=0.2,xshift=15cm](1,0) -- (4,0); \draw[xshift=15cm](1,0) node[left]{$q^{\mu/2}z_1$}; \draw[blueline=0.2,xshift=15cm](1,-2) -- (4,-2); \draw[xshift=15cm] (1,-2)
node[left]{$q^{-\mu/2}z_1$}; \draw[greenline=0.8,xshift=15cm](6,0) -- (9,0); \draw[xshift=15cm] (9,0) node[right]{$z_1$}; \draw[wavy=0.8,xshift=15cm](6,-2) -- (9,-2); \draw[xshift=15cm] (9,-2) node[right]{$z_1$}; \draw[xshift=15cm] (5,-1) ellipse (1cm and 2cm); \draw[xshift=15cm](6,-1) node[left]{$\cO$}; \draw[aline=0.5,xshift=15cm](7.5,1) -- (7.5,-3); \draw[xshift=15cm] (7.5,1) node[above]{$z_2$}; \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.5] \draw[dashgreenline=0.8](4,0) -- (1,0); \draw (1,0) node[left]{$z_1$}; \draw[wavy=0.8](4,-2) -- (1,-2); \draw (1,-2) node[left]{$z_1$}; \draw[dashredline=0.2](9,0) -- (6,0); \draw (9,0) node[right]{$q^{-\mu/2} z_1$}; \draw[dashblueline=0.2](9,-2) -- (6,-2); \draw (9,-2) node[right]{$q^{\mu/2}z_1$}; \draw (5,-1) ellipse (1cm and 2cm); \draw(6,-1) node[left]{$ \cO_{21}$}; \draw[aline=0.5](7.5,1) -- (7.5,-3); \draw (7.5,1) node[above]{$z_2$}; \draw(13,-1) node[]{$=$}; \draw[dashgreenline=0.8,xshift=15cm](4,0) -- (1,0); \draw[xshift=15cm] (1,0) node[left]{$z_1$}; \draw[wavy=0.8,xshift=15cm](4,-2) -- (1,-2); \draw[xshift=15cm] (1,-2) node[left]{$z_1$}; \draw[dashredline=0.2,xshift=15cm](9,0) -- (6,0); \draw[xshift=15cm] (9,0) node[right]{$q^{-\mu/2} z_1$}; \draw[dashblueline=0.2,xshift=15cm](9,-2) -- (6,-2); \draw[xshift=15cm] (9,-2) node[right]{$q^{\mu/2}z_1$}; \draw[xshift=15cm] (5,-1) ellipse (1cm and 2cm); \draw[xshift=15cm](6,-1) node[left]{$\cO_{21}$}; \draw[aline=0.5,xshift=15cm](2.5,1) -- (2.5,-3); \draw[xshift=15cm] (2.5,1) node[above]{$z_2$}; \end{tikzpicture} \end{align*} For the compatibility with the right boundary we claim that \begin{center} \begin{tikzpicture}[scale=0.52] \begin{scope} [rotate=-45] \draw[redline=0.5](2,0) -- (4,0); \draw (2,1) node[left]{$q^{\mu/2}z$}; \draw[blueline=0.5](2,-2) -- (4,-2); \draw (2,-1) node[left]{$q^{-\mu/2}z$}; \draw[greenline=0.5](6,0) -- (7.5,0); \draw (6.5,-0.2) node[below]{$z$}; \draw[dashgreenline=0.8](7.5,0) -- (7.5,-4); \draw (7.5,-4) node[below]{$z^{-1}$}; 
\draw[wavy=0.8](6,-2) -- (9.5,-2.2); \draw (6.5,-2.2) node[below]{$z$}; \draw[wavy=0.8](9.5,-2) -- (9.5,-4); \draw (9.5,-4) node[below]{$z^{-1}$}; \draw (5,-1) ellipse (1cm and 2cm); \draw(5.5,-0.4) node[left]{$\cO$}; \end{scope} \draw(7,-4) node[]{=}; \begin{scope} [rotate=45,xshift=-1.5cm,yshift=-12cm] \draw[dashgreenline=0.8](4,0) -- (2,0); \draw (2,0) node[below]{$z^{-1}$}; \draw[wavy=0.8](4,-2) -- (2,-2); \draw (2,-2) node[below]{$z^{-1}$}; \draw[dashredline=0.9](9.5,0) -- (6,0); \draw (5.5,.7) node[above]{$q^{-\mu/2}z^{-1}$}; \draw[redline=0.5](9.5,2) -- (9.5,0); \draw (9.5,2) node[above]{$q^{\mu/2}z$}; \draw[blueline=0.25](7.5,2) -- (7.5,-2); \draw (7.5,2) node[above]{$q^{-\mu/2}z$}; \draw[dashblueline=0.7](7.5,-2) -- (6,-2); \draw (7.3,-2.6) node[below]{$q^{\mu/2}z^{-1}$}; \draw (5,-1) ellipse (1cm and 2cm); \draw(5.8,-1.5) node[left]{$\cO_{21}$}; \end{scope} \end{tikzpicture} \end{center} which corresponds to the following identity in $\cA^{(2)}$: \eq{ \label{keyrelation-init:right} \cK_\ups(z)_1 \cR_{\ups\phi}(z^2) \cK_\phi(z)_2 \,\cO = \cO \, \cK_\vrho(q^{-\mu/2}z)_1 \cR_{\vrho\brho}(z^2) \cK_\brho(q^{\mu/2}z)_2, } which we call the \emph{right boundary factorization identity}. The diagrams above serve as a motivation for the identity, which we now prove using results from Section \ref{sec:augmentedqOns} (an alternative computational proof of Theorem \ref{thm:keyrelation:right} is given in Appendix C). \begin{thrm} \label{thm:keyrelation:right} For all $\mu \in \C$, all $q \in \C^\times$ not a root of unity and all $\xi \in \C^\times$, relation \eqref{keyrelation-init:right} is satisfied. \end{thrm} \begin{proof} The proof is analogous to the proof of Theorem \ref{thm:fund}. 
We first note that \[ \begin{aligned} \big( \vrho_{q^{-\mu/2}z} \ot \brho_{q^{\mu/2}z} \big)\big( (\id \ot \psi)(\cR) \big) &= \big( \vrho_{q^{-\mu/2}z} \ot \brho^-_{q^{-\mu/2}z^{-1}} \big)(\cR) && \propto \; \cR_{\vrho\brho}(z^2), \\ \big( \ups_{z} \ot \phi_{z} \big)\big( (\id \ot \psi)(\cR) \big) &= \big( \ups_{z} \ot \phi^-_{z^{-1}} \big)(\cR) && \propto \; \cR_{\ups\phi}(z^2). \end{aligned} \] Noting the coproduct formula \eqref{K:axiom2}, we obtain \[ \begin{aligned} \cK_\vrho(q^{-\mu/2}z)_1 \cR_{\vrho\brho}(z^2) \cK_\brho(q^{\mu/2}z)_2 \qu &\propto \qu \big( \vrho_{q^{-\mu/2}z} \ot \brho_{q^{\mu/2}z} \big)(\Del(\cK)), \\ \cK_\ups(z)_1 \cR_{\ups\phi}(z^2) \cK_\phi(z)_2 \qu &\propto \qu \big( \ups_{z} \ot \phi_{z} \big)(\Del(\cK)). \end{aligned} \] Now Theorem \ref{thm:O:plus} implies \eqref{keyrelation-init:right} up to a scalar. The fact that all factors fix $w_0 \ot w_0$ shows that the scalar is 1. \end{proof} Compatibility with the left boundary requires that \begin{center} \begin{tikzpicture}[scale=0.52] \begin{scope}[rotate=-45] \draw[dashredline=0.3](4,0) -- (0.5,0); \draw[redline=0.8](0.5,0) -- (0.5,2); \draw[dashblueline=0.6](4,-2) -- (2.5,-2); \draw[blueline=0.8](2.5,-2) -- (2.5,2); \draw(2.5,0) node[bblob]{}; \draw[dashgreenline=0.2](8,0) -- (6,0); \draw (8,-0.5) node[right]{$z^{-1}$}; \draw[wavy=0.2](8,-2) -- (6,-2); \draw (8,-2.5) node[right]{$z^{-1}$}; \draw (5,-1) ellipse (1cm and 2cm); \draw(5.7,-0.3) node[left]{ \mbox{\footnotesize $\cO_{21}^{-1}$}}; % \draw (0.2,2.2) node[right]{$q^{\mu/2} z$}; \draw (2.2,2.2) node[right]{$q^{-\mu/2} z$}; \draw (2.7,-2.8) node[below]{$q^{\mu/2} z^{-1}$}; \draw (4.3,1.6) node{$ q^{-\mu/2} z^{-1}$}; \end{scope} \draw(7.5,-3) node[]{$=$}; \begin{scope}[rotate=45,xshift=2.5cm,yshift=-9cm] \draw[dashgreenline=0.2](2.5,-4) -- (2.5,0); \draw (2.5,-4.7) node{$z^{-1}$}; \draw(2.5,-2) node[bblob]{}; \draw[wavy=0.2](0.5,-4) -- (0.5,-2); \draw (0.5,-4.7) node{$z^{-1}$}; \draw[greenline=0.5](2.5,0) -- (4,0); \draw 
(3.4,0.7) node{$z$}; \draw[wavy=0.8](0.5,-2) -- (4,-2); \draw (3.5,-2.7) node{$z$}; \draw[redline=0.8](6,0) -- (8,0); \draw (8,0) node[right]{$q^{\mu/2} z $}; \draw[blueline=0.8](6,-2) -- (8,-2); \draw (8,-2) node[right]{$q^{-\mu/2} z $}; \draw(5,-1) ellipse (1cm and 2cm); \draw(5.7,-1.9) node[left]{ \mbox{\footnotesize $\cO^{-1}$}}; \end{scope} \end{tikzpicture} \end{center} The identity in $\cA^{(2)}$ corresponding to this is \eq{ \label{keyrelation:left} \tcK_\brho(q^{\mu/2}z,\txi)_2 \wt{\cR}_{\vrho\brho}(z^2) \tcK_\vrho(q^{-\mu/2}z,\txi)_1 \cO^{-1} = \cO^{-1} \tcK_\phi(z,\txi)_2 \wt{\cR}_{\ups\phi}(z^{2}) \tcK_\ups(z,\txi)_1. } \begin{thrm} \label{thm:keyrelation:left} Relation \eqref{keyrelation:left} is satisfied. \end{thrm} \begin{proof} Given the definitions \eqref{K-tilde} and \eqref{RsitauRrhobrho:tilde}, this follows straightforwardly by inverting \eqref{keyrelation-init:right} and replacing $(z,\xi) \mapsto (qz,\txi^{-1})$. \end{proof} \section{Discussion} \label{sec:discussion} The main result of this paper is Theorem \ref{thm:keyrelation:right}, which can be viewed as a boundary analogue of Theorem \ref{thm:fund}. To establish this result, we first needed to show that all R and K-operators involved in Equation \eqref{keyrelation-init:right} are well-defined actions of the universal elements $\cR$ and $\cK$ on the infinite-dimensional $U_q(\wh\mfb^+)$-modules involved. The key fact that allows for this is that $\cR$ and $\cK$ live in completions of $U_q(\wh\mfb^+)\ot U_q(\wh\mfb^-)$ and of $U_q(\wh\mfb^+)$, respectively. This is very familiar for $\cR$ but for $\cK$ relies on the recent works \cite{AV22a,AV22b}. Introducing the $U_q(\wh\mfb^+)$-intertwiner $\cO$ and the formula for $\Delta(\cK)$ given by \eqref{K:axiom2}, relation \eqref{keyrelation-init:right} follows immediately from the intertwining property of $\cO$.
The open Q-operator $\cQ(z)$ of \cite{VW20} is the trace of a product of R and K-operators over the $U_q(\wh\mfb^+)$-module $(W,\vrho_z)$, and there is a similar construction of an open Q-operator $\wb{\cQ}(z)$. In a future paper, the authors will present this construction and the use of Theorem \ref{thm:keyrelation:left} in deriving a boundary analogue of the factorization relation $\mc T_{\mu}(z) \: \propto \: \mc Q(zq^{-\mu/2}) \wb{\mc Q}(zq^{\mu/2})$. They will also develop the analogous theory for other coideal subalgebras, in particular those for which non-diagonal solutions of the reflection equation are intertwiners.
\section{Introduction}\label{sec:intro} Materials play a central role in the effort to produce cheaper and more efficient solar cells. The discovery of improved absorber materials has the potential to significantly increase the cost-effectiveness of photovoltaic devices, but experimental trial and error methods are often slow and expensive. Here, computational material modeling can provide valuable assistance to the material design process, by screening groups of materials for those with the best properties. The Shockley-Queisser limit~\cite{Shockley1961DetailedCells} is one of the best-known metrics for determining the maximum efficiency an absorber material can produce in a single-junction solar cell. It was proposed in 1961 and provides a direct relation between the band gap of a material and its maximum possible efficiency. More recently, Yu and Zunger expanded on the work of Shockley and Queisser by introducing the Spectroscopic Limited Maximum Efficiency~\cite{Yu2012IdentificationMaterials} (SLME), which takes the absorption coefficient and thickness into account in the calculation of the maximum efficiency. The SLME has since been used to investigate the potential of photovoltaic absorber materials such as perovskites~\cite{Meng2016AlloyingApplication}, direct band gap silicon crystals~\cite{Lee2014ComputationalCrystals}, chalcogenides, and other materials. In our recent work on CuAu-like~\cite{Bercx2016First-principlesSilicon} and Stannite~\cite{Sarmadian2016First-principlesChalcogenides} structures, we also used the SLME to study the efficiency of these materials in the context of thin film solar cells. Interestingly, we found several materials with an SLME above the Shockley-Queisser limit, and traced this to the lower recombination current obtained at lower thicknesses. 
Since its conception, numerous methods have been proposed to exceed the Shockley-Queisser limiting efficiency~\cite{Nelson2013ExceedingConversion}. Examples include multi-junction~\cite{Shah2004Thin-filmTechnology,Heremans2009StrategiesArchitecture} and hot carrier solar cells~\cite{Konig2010HotDesign}, as well as concepts that use multiple exciton generation~\cite{Hanna2006SolarAbsorbers}. None of these concepts, however, are implemented in the SLME. In this paper, we use a model approach to demonstrate that it is possible to exceed the Shockley-Queisser limit within the detailed balance framework. Simply by dropping the assumption of an infinite absorber layer, i.e. by replacing the Heaviside step function for the absorptivity by a sigmoid function, we obtain efficiencies above the Shockley-Queisser limit. Finally, we analyze for which band gap range a material's efficiency is more likely to exceed the Shockley-Queisser limit. \section{Shockley-Queisser limit}\label{sec:SQ} The maximum efficiency $\eta$ is defined as the maximum output power density $P_m$ divided by the total incoming power density from the solar spectrum $P_{in}$: \begin{equation} \eta = \frac{P_m}{P_{in}} \end{equation} To calculate $P_m$, the power density $P = JV$ is maximized versus the voltage $V$, where the current density\footnote{Note that these current densities are not defined in the conventional way. Rather, they are considered as currents per surface area of the solar cell. This allows us to ignore the surface area of the solar cell in our discussion.} $J$ is derived from the ideal $J-V$ characteristic of an illuminated solar cell: \begin{equation} J = J_{sc} - J_0 \left(e^{\frac{eV}{k_B T}} - 1\right), \end{equation} where $k_B$ is Boltzmann's constant, $e$ is the elementary charge and $T$ is the temperature of the solar cell. 
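To make the maximization of $P = JV$ concrete, the following minimal numerical sketch (ours, not part of the paper) scans the ideal $J$-$V$ characteristic for the maximum power point. The values of $J_{sc}$ and $J_0$ used here are illustrative placeholders, not values from the paper.

```python
import math

def max_power(J_sc, J_0, T=300.0):
    """Scan the ideal diode J-V curve, J = J_sc - J_0*(exp(eV/kT) - 1),
    for the maximum of P = J*V.  Current densities in A/m^2; the
    returned maximum power density P_m is in W/m^2."""
    kT_over_e = 8.617333e-5 * T   # k_B T / e in volts
    best = 0.0
    V = 0.0
    while True:
        J = J_sc - J_0 * (math.exp(V / kT_over_e) - 1.0)
        if J <= 0.0:              # past the open-circuit voltage
            break
        best = max(best, J * V)
        V += 1e-4                 # 0.1 mV voltage scan step
    return best

# Illustrative numbers (not from the paper): J_sc = 400 A/m^2 and
# J_0 = 1e-15 A/m^2 give V_oc of about 1.05 V.
P_m = max_power(400.0, 1e-15)
```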
The short-circuit current density $J_{sc}$, also known as the photogenerated current or the illuminated current, is calculated from the number of photons of the solar spectrum that are absorbed by the solar cell: \begin{equation} J_{sc} = e \int_0^{\infty} a(E) \Phi_s (E) dE, \label{eq:Ish} \end{equation} where $a(E)$ is the absorptivity and $\Phi_s(E)$ is the photon flux density of the solar spectrum. In their original paper, Shockley and Queisser used a blackbody spectrum of $T_s = 6000~\si{\kelvin}$, but the current convention is to use the AM1.5G solar spectrum~\cite{2012ASTMPA}. The reverse saturation current density $J_0$ is calculated by considering the principle of detailed balance, i.e. in equilibrium conditions the rate of photon emission from radiative recombination must be equal to the photon absorption from the surrounding medium. Because the cell is assumed to be attached to an ideal heat sink, the ambient temperature is assumed to be the same as that of the solar cell. Hence, the spectrum of the surrounding medium is that of a black body at cell temperature $T$: \begin{align} J_0 &= e \pi \int_0^{\infty} a(E) \Phi_{bb}(E) dE\nonumber \\ &= e \pi \int_0^{\infty} a(E) \frac{2E^2}{h^3 c^2} \frac{dE}{e^{\frac{E}{k_B T}}-1}, \label{eq:I0} \end{align} where $h$ is Planck's constant and $c$ is the speed of light. Because of its connection with the recombination of electron-hole pairs at equilibrium, $J_0$ is also referred to as the recombination current density~\cite{Cuevas2014TheJ0}. This is the convention we will use here. To obtain the Shockley-Queisser or detailed balance \textit{limit}, Shockley and Queisser made the assumption that the probability of a photon with an energy above the band gap being absorbed by the cell is equal to unity. This corresponds mathematically to setting $a(E)$ to the Heaviside step function, or, from a physical perspective, to considering an infinitely thick absorber layer. 
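The recombination current density of Eq.~\ref{eq:I0} with a Heaviside absorptivity can be evaluated by direct numerical integration. The sketch below is our illustration, not code from the paper; it uses a simple midpoint rule over the blackbody tail.

```python
import math

# Physical constants in SI units
e_charge = 1.602176634e-19   # elementary charge, C
h_planck = 6.62607015e-34    # Planck constant, J s
c_light  = 2.99792458e8      # speed of light, m/s
k_B      = 1.380649e-23      # Boltzmann constant, J/K

def J0_step_absorber(E_g_eV, T=300.0, n=4000):
    """Recombination current density (Eq. I0) for a Heaviside
    absorptivity a(E), i.e. an infinitely thick absorber, via
    midpoint-rule integration of the blackbody photon flux."""
    E_g = E_g_eV * e_charge
    kT = k_B * T
    E_max = E_g + 40.0 * kT          # the Bose tail is negligible beyond this
    dE = (E_max - E_g) / n
    total = 0.0
    for i in range(n):
        E = E_g + (i + 0.5) * dE
        flux = 2.0 * E * E / (h_planck**3 * c_light**2) / math.expm1(E / kT)
        total += flux * dE
    return e_charge * math.pi * total   # A/m^2

J0 = J0_step_absorber(1.3)   # roughly 1e-15 A/m^2 for a 1.3 eV gap
```

Note how strongly $J_0$ grows as the gap shrinks; this is the recombination penalty discussed later in the paper.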
Note that in the original expressions, Shockley and Queisser also included a geometrical factor. However, because we assume the solar cell to have a perfect antireflective coating, as well as a reflective back surface, the geometrical factor is equal to unity~\cite{Ruhle2016TabulatedCells}. \section{Spectroscopic Limited Maximum Efficiency}\label{sec:SLME} Shockley and Queisser's detailed balance limit is considered to be one of the most important results in photovoltaic research. However, as a metric for thin film solar cells, it is somewhat limited in its effectiveness, because it depends only on the band gap of the absorber material in the solar cell. In an attempt to find a more practical screening metric, Yu and Zunger introduced the Spectroscopic Limited Maximum Efficiency~\cite{Yu2012IdentificationMaterials} (SLME) in 2012. The SLME differs from the detailed balance limit in two ways. First, the absorptivity $a(E)$, taken as a Heaviside step function in the calculation of Shockley and Queisser, is replaced by the absorptivity $a(E) = 1 - e^{-2\alpha(E)L}$, where $L$ is the thickness and $\alpha(E)$ is the absorption coefficient, calculated from first principles. This allows the SLME to be used to study the thickness dependence of the efficiency, making it an important tool in the study of thin film solar cells. Second, the SLME also considers the non-radiative recombination in the solar cell by modeling the fraction\footnote{Actually, Shockley and Queisser also considered the fraction of radiative recombination in their approach. They did not, however, provide a model to calculate it, simply observing that the maximum efficiency is significantly reduced for small fractions $f_r$.} of radiative recombination as a Boltzmann factor, i.e. $f_r = e^{-\frac{\Delta}{kT}}$, with $\Delta = E_g^{da} - E_g$, where $E_g$ and $E_g^{da}$ are the fundamental and direct allowed band gap, respectively. 
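The two SLME ingredients just described, the thickness-dependent absorptivity and the Boltzmann model for the radiative fraction, can be sketched as follows (our illustration; the parameter values are arbitrary):

```python
import math

def absorptivity(alpha, L):
    """SLME absorptivity for absorption coefficient alpha (1/m) and
    film thickness L (m):  a(E) = 1 - exp(-2 alpha L)."""
    return 1.0 - math.exp(-2.0 * alpha * L)

def radiative_fraction(E_g_da_eV, E_g_eV, T=300.0):
    """Boltzmann model for the fraction of radiative recombination,
    f_r = exp(-(E_g^da - E_g) / kT), with energies in eV."""
    kT_eV = 8.617333e-5 * T
    return math.exp(-(E_g_da_eV - E_g_eV) / kT_eV)

# For a direct band gap material E_g^da = E_g, so f_r = 1; a thin film
# with alpha = 1e7 1/m (= 1e5 1/cm) at L = 0.5 micron absorbs almost fully.
a = absorptivity(1e7, 0.5e-6)
f = radiative_fraction(1.5, 1.5)
```

Even a small indirect offset $\Delta$ suppresses $f_r$ sharply: at room temperature, $\Delta = 0.1$~eV already gives $f_r \approx 0.02$.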
The total recombination current density is then calculated by dividing the radiative recombination current density (Eq.~\ref{eq:I0}) by the fraction of radiative recombination. In this work, we only study direct band gap materials (i.e. $E_g = E_g^{da}$), and hence only radiative recombination is considered ($f_r = 1$), just as in the standard calculation of the detailed balance limit. The SLME has been used to investigate the potential of several classes of photovoltaic absorber materials. In Fig.~\ref{fig:SLME}, we show a selection of calculated efficiencies of direct band gap materials from previous work~\cite{Yu2012IdentificationMaterials,Bercx2016First-principlesSilicon,Sarmadian2016First-principlesChalcogenides}, compared with the Shockley-Queisser limit. We can see that materials typically used in thin-film photovoltaic cells, e.g. chalcopyrite phase \ce{CuIn(S,Se)2}, have a high calculated efficiency. We also note less-studied materials with high efficiencies, such as CuAu-like phase \ce{CuInS2} and chalcopyrite phase \ce{CuInTe2}. Most importantly, however, we can see that a significant number of the presented materials have a calculated efficiency above the Shockley-Queisser limit. Since the calculation of the SLME does not introduce any of the concepts that would typically allow its value to exceed the Shockley-Queisser limit, these results show that for thin-film materials the Shockley-Queisser limit does not necessarily represent an upper limit for the efficiency. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{Fig1.png} \caption{\label{fig:SLME}Collection of calculated SLME values from Yu and Zunger~\cite{Yu2012IdentificationMaterials}, as well as our previous work on CuAu-like~\cite{Bercx2016First-principlesSilicon} and Stannite~\cite{Sarmadian2016First-principlesChalcogenides} structures. We have added the space group of the material structure as a superscript. 
The efficiency values were calculated for a thickness of 0.5~\si{\micro\meter}. The orange curve represents the maximum efficiencies obtained using the logistic model explained in Section~\ref{sec:logistic}.} \end{figure} In fact, Shockley and Queisser considered their metric as the detailed balance \textit{limit} because of the assumption that, since the step function represents the highest possible absorption spectrum for a material with a specific direct band gap, the resulting efficiency must be an upper limit. However, as we demonstrated in our previous work~\cite{Bercx2016First-principlesSilicon}, this also means that the recombination current density $J_0$ (Eq.~\ref{eq:I0}) will be maximal. Since electron-hole recombination results in a loss of electrons contributing to the external current, this has a negative effect on the photovoltaic conversion efficiency. Hence, it is possible that there is an absorptivity function that would result in a higher efficiency than the Shockley-Queisser limit. As we can see in Fig.~\ref{fig:SLME}, this is exactly what happens for the presented smaller band gap materials. \section{Logistic Function Model}\label{sec:logistic} The next questions are how far we can exceed the Shockley-Queisser limit, and at which band gaps a material is more likely to do so. Clearly, this will depend on the shape of the absorptivity function. In Fig.~\ref{fig:step}, we show the calculated absorptivity of \ce{Cu2ZnGeS4} for various thicknesses, derived from the absorption coefficient calculated from first principles (for computational details, we refer the reader to~\cite{Sarmadian2016First-principlesChalcogenides}). We can see that the absorptivity has a shape reminiscent of a sigmoid function. 
In order to analyze the maximum efficiency for materials with a direct band gap in the range 0.3-3~\si{\electronvolt}, we model $a(E)$ using a generalized logistic function: \begin{equation} a(E) = f(E) = \frac{1}{(1+e^{-\delta (E - E_g)})^{\beta}}, \end{equation} where $E_g$ is the band gap of the material, and $\beta$, $\delta$ are parameters that determine the shape of the function. In this model for the absorptivity, the parameter $\delta$ is related to the thickness of the material, as for $\delta\rightarrow\infty$, $f(E)$ approaches the Heaviside step function (Fig.~\ref{fig:step}). The second parameter ($\beta$) is important to make sure that the model function ``starts'' at the band gap, i.e. that its value for $E < E_g$ is suitably small, so that it can be approximated to zero. Since $f(E_g) = \frac{1}{2^\beta}$, and $f(E) < f(E_g)$ for $E < E_g$, increasing $\beta$ to a suitably large value gives us this desired function trait. Here, we choose $\beta = 10$ and set $f(E) = 0$ for $E \leq E_g$. As is clear from Fig.~\ref{fig:step}, this model function describes the shape of the calculated absorptivity spectra quite well. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{Fig2.png} \caption{\label{fig:step}Comparison of the model function with calculated absorptivity spectra for \ce{Cu2ZnGeS4} at different thicknesses $L$. We can see that the model function shape matches that of the calculated absorptivity quite well as $L,\delta \rightarrow \infty$.} \end{figure} To study the influence of the band gap on the likelihood of the efficiency exceeding the Shockley-Queisser limit, we calculate the efficiency for $\delta \in [1, 10^4]$ and over the band gap range $E_g~\in~[0.3, 3]~\si{\electronvolt}$. We show the $\delta$-dependency of the efficiency for a selection of band gap values in Fig.~\ref{fig:deltadep}. 
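The model absorptivity defined above, including the clipping to zero at and below the gap, can be sketched as follows (our illustration; the sample energies and $\delta$ values are arbitrary):

```python
import math

def model_absorptivity(E, E_g, delta, beta=10.0):
    """Generalized logistic model for the absorptivity,
    f(E) = (1 + exp(-delta*(E - E_g)))**(-beta), with beta = 10 and
    f set to zero for E <= E_g, as in the text."""
    if E <= E_g:
        return 0.0
    return (1.0 + math.exp(-delta * (E - E_g))) ** (-beta)

# As delta grows, the profile approaches the Heaviside step function:
a_soft = model_absorptivity(1.6, 1.5, delta=10.0)   # gentle onset
a_hard = model_absorptivity(1.6, 1.5, delta=1e4)    # near-step behavior
```

Scanning such profiles over $\delta \in [1, 10^4]$ and $E_g \in [0.3, 3]$~eV, as done in the text, only requires evaluating this function inside the $J_{sc}$ and $J_0$ integrals.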
We can see that for low band gaps, the calculated efficiency crosses the detailed balance limit of the corresponding band gap, before returning to the limit value as $\delta \rightarrow \infty$. Since $\delta$ can be related to the thickness of the material, this implies that for lower band gap materials, there is a thickness that is optimal for the efficiency. Moreover, a clear trend is visible, with the efficiency exceeding the Shockley-Queisser limit by a larger margin as the band gap is decreased. This is also what we observe in the plot of the maximum efficiency values in Fig.~\ref{fig:SLME}. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{Fig3.png} \caption{\label{fig:deltadep}Calculated efficiencies for a range of $\delta$ values and a selection of band gaps, compared with the corresponding Shockley-Queisser limit.} \end{figure} It is interesting to note that the SLME values of the materials that exceed the Shockley-Queisser limit are still below the maximum efficiency for the model absorptivity functions of the corresponding band gap in Fig.~\ref{fig:SLME}. However, this does not imply that the logistic function maxima curve represents a new upper limit. It is entirely possible that there is another function profile that would allow for higher efficiencies. Using the logistic function approach, we are simply able to observe for which band gap range the Shockley-Queisser limit does not provide a theoretical upper limit. \section{Conclusion} In their 1961 paper, Shockley and Queisser characterized their calculated efficiency as an upper limit, because of the assumption that if every photon with an energy above the band gap is absorbed, the obtained efficiency must be maximal. Although this assumption may seem entirely sensible at first glance, it does not consider the fact that it also maximizes the recombination current, which is calculated using the detailed balance principle. 
Because an increased recombination results in a lower efficiency, this means that lowering the absorptivity can produce higher efficiencies than the Shockley-Queisser limit under the right conditions. By using a model absorptivity function, which closely resembles absorptivity spectra calculated from first principles, we have shown that this can occur for low band gaps. This means that one must take care when dismissing low band gap materials based on their Shockley-Queisser limit, for their actual efficiency at certain thicknesses might still make them suitable for thin film photovoltaic applications.
\section{Introduction} In this note, we extend some classical theorems in the theory of total positivity to the case of an arbitrary semisimple complex Lie group. We begin by reviewing the results we are going to generalize. \medskip Let $G = GL_n (\mathbb{C})$ or $SL_n (\mathbb{C})$. The following theorem is due to C.~Loew\-ner~\cite{loewner} and A.~Whitney \cite{whitney} (cf.\ \cite[Lemma~9.1]{karlin}). \begin{theorem} \label{th:Whitney-Loewner classical} For a matrix $x \in G$, the following are equivalent: \begin{itemize} \item[(a)] all minors of $x$ are nonnegative real numbers; \item[(b)] $x$ lies in the closure of the set of matrices with positive minors; \item[(c)] $x$ belongs to the multiplicative monoid in $G$ generated by elementary Jacobi matrices with nonnegative matrix entries. \end{itemize} {\rm(Here in~{\rm(c)}, an ``elementary Jacobi matrix'' is a matrix that differs from the identity matrix in a single entry, located either on the main diagonal, or immediately above or below it.) } \end{theorem} A matrix $x \!\in\! G$ satisfying any of the equivalent conditions (a)--(c) above is called \emph{totally nonnegative}. Furthermore, $x$ is \emph{totally positive} if all its minors are positive. Totally positive matrices are distinguished among the totally nonnegative ones as follows. \begin{theorem} \label{th:TNN-TP classical} For a totally nonnegative matrix $x\!\in\! G$, the following are equivalent: \begin{itemize} \item[(d)] $x$ is totally positive; \item[(e)] all solid minors of $x$ involving either $x_{1n}$ or $x_{n1}$ are positive; \item[(f)] $x$ belongs to the intersection of opposite open Bruhat cells $Bw_\mathrm{o} B\cap B_-w_\mathrm{o} B_-\,$. \end{itemize} {\rm (Here in (e), a ``solid minor'' is a minor formed by several consecutive rows and as many consecutive columns. 
In (f), we denote by $B$ (resp.\ $B_-$) the subgroup of upper-triangular (resp.\ lower-triangular) matrices, and $w_\mathrm{o}$ is the permutation matrix with $1$'s on the main antidiagonal.) } \end{theorem} Part ${\rm (d)}\!\Longleftrightarrow\!{\rm (e)}$ of Theorem~\ref{th:TNN-TP classical} is a refinement of the classical Fekete criterion due to M.~Gasca and J.~M.~Pe\~na~\cite[Theorem~4.3]{gasca-pena}; the equivalence ${\rm (e)}\!\Longleftrightarrow\!{\rm (f)}$ is a well-known (and easy) linear-algebraic fact (cf., e.g., \cite[Theorem~II.4.1]{gantmacher}). \medskip In their pioneering study of total positivity undertaken in the 1930s, F.~R.~Gantmacher and M.~G.~Krein introduced and studied the intermediate class of \emph{oscillatory} matrices defined as follows: a matrix $x \in G$ is called oscillatory if $x$ is totally nonnegative while some power of $x$ is totally positive. The following characterization of this class was obtained in~\cite{GK} (see~\S II.7; cf.\ also \cite[Theorem~9.3]{karlin}). \begin{theorem} \label{th:GK classical} For a totally nonnegative matrix $x\!\in\! G$, the following are equivalent: \begin{itemize} \item[(g)] $x$ is oscillatory; \item[(h)] $x_{i,i+1} > 0$ and $x_{i+1,i} > 0$ for $i = 1, \dots, n\!-\!1$. \end{itemize} \end{theorem} Gantmacher and Krein \cite{GK} further showed that the definition of oscillatory matrices can be refined as follows. \begin{theorem} \label{th:GK bound} A totally nonnegative matrix $x\!\in\! G$ is oscillatory if and only if $x^{n-1}$ is totally positive. \end{theorem} In this paper, we extend Theorems~\ref{th:Whitney-Loewner classical}, \ref{th:GK classical}, and~\ref{th:GK bound} to an arbitrary semisimple complex Lie group~$G$, using the notion of \emph{generalized minors} introduced in~\cite{FZ}. A~generalization of Theorem~\ref{th:TNN-TP classical} follows from results in~\cite{lusztig} or~\cite{FZ}, and is presented below (see Theorem~\ref{th:TNN-TP general}) for the sake of completeness. 
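For small matrices, the classical statements above can be checked directly by brute-force enumeration of minors. The sketch below (our illustration, not from the paper) verifies, for one tridiagonal example in $SL_3$, that condition (a) holds, that criterion (h) of Theorem~\ref{th:GK classical} is satisfied, and that $x^{n-1}$ is then totally positive as in Theorem~\ref{th:GK bound}:

```python
from itertools import combinations

def det(M):
    """Laplace-expansion determinant (exact for small integer matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors(M):
    """All minors of a square matrix; the full determinant comes last."""
    n = len(M)
    return [det([[M[a][b] for b in cols] for a in rows])
            for k in range(1, n + 1)
            for rows in combinations(range(n), k)
            for cols in combinations(range(n), k)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# A tridiagonal totally nonnegative matrix in SL_3 with positive
# entries immediately above and below the main diagonal:
x = [[1, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
n = len(x)

assert all(m >= 0 for m in minors(x))       # condition (a): x is TNN
assert minors(x)[-1] == 1                   # det x = 1, so x lies in SL_3
# criterion (h): x_{i,i+1} > 0 and x_{i+1,i} > 0, so x is oscillatory
assert all(x[i][i + 1] > 0 and x[i + 1][i] > 0 for i in range(n - 1))

p = x                                       # compute x^{n-1}
for _ in range(n - 2):
    p = matmul(p, x)
assert all(m > 0 for m in minors(p))        # x^{n-1} is totally positive
```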
Even in the case of $SL_n$, our version of the criterion~(h) is more general than the one given above. (Earlier in~\cite{FZ}, we gave a family of total positivity criteria generalizing~(e).) It should also be noted that our proofs are quite different from the ones in~\cite{gasca-pena, GK, karlin, loewner, whitney}. Our main technical tools involve combinatorics of reduced words in Weyl groups, the subdivision of a semisimple group into double Bruhat cells, and the ``generalized determinantal calculus" developed in \cite{FZ}; in particular, the fundamental role is played by a generalized determinantal identity~\cite[Theorem~1.17]{FZ}. The study of total positivity in reductive groups other than $GL_n$ and $SL_n$ was initiated by G.~Lusztig~\cite{lusztig}, who suggested using the natural generalization of~(c) as the definition of a totally nonnegative element. Our extension of the equivalence ${\rm(a)}\Longleftrightarrow{\rm (c)}$ can be rephrased as saying that Lusztig's definition is equivalent to the one in terms of the generalized minors of~\cite{FZ}. \section{Terminology and notation} We will use the setup of~\cite{FZ}, which is briefly reviewed below in this section. Proofs and further details can be found in~\cite{FZ} (see Sections~1.1-1.4). Let $G$ be a simply connected semisimple complex Lie group of rank~$r$ with a fixed pair of opposite Borel subgroups $B_-$ and~$B$; thus $H=B_-\cap B$ is a maximal torus in~$G$. Let $N_-$ and $N$ be the unipotent radicals of $B_-$ and~$B$, respectively. Let $\alpha_1, \ldots, \alpha_r$ be the system of simple roots for which the corresponding root subgroups are contained in~$N$. For every $i\in [1,r]$, let $\varphi_i: SL_2 \to G$ be the canonical embedding corresponding to the simple root~$\alpha_i\,$. 
For any nonzero $t\in\mathbb{C}$ and any $i\in [1,r]$, we define \begin{equation*} \label{eq:x,y} x_{\overline i} (t) = \varphi_i \mat{1}{0}{t}{1}\, ,\qquad t^{h_i} = \varphi_i \mat{t}{0}{0}{t^{-1}}\, ,\qquad x_i (t) = \varphi_i \mat{1}{t}{0}{1} \, . \end{equation*} Thus $t^{h_i}\in H$, and $t \mapsto x_i (t)$ (resp.\ $t \mapsto x_{\overline i} (t)$) is a one-parameter subgroup in $N$ (resp.\ in $N_-$). The \emph{weight lattice} $P$ can be defined as the group of multiplicative characters of~$H$, here written in the exponential notation: a weight $\gamma\in P$ acts by $a \mapsto a^\gamma$. The lattice $P$ has a $\mathbb{Z}$-basis formed by the \emph{fundamental weights} $\omega_1, \ldots, \omega_r$ defined by $(t^{h_j})^{\omega_i} = t^{\delta_{ij}}$. The \emph{Weyl group} $W$ of $G$ is defined by $W = {\rm Norm}_G (H)/H$. The action of~$W$ on~$H$ by conjugation gives rise to the action of $W$ on the weight lattice $P$ given by $a^{w (\gamma)} = (w^{-1} a w)^\gamma$ for $w \in W$, $a \in H$, $\gamma \in P$. The group $W$ is a Coxeter group with simple reflections $s_1,\dots,s_r$ which can be defined by specifying their representatives in ${\rm Norm}_G (H)$: we set \[ \overline {s_i} = \varphi_i \mat{0}{-1}{1}{0} \in {\rm Norm}_G (H) \, . \] The family $\{\overline {s_i}\}$ satisfies the braid relations in~$W$; thus the representative $\overline w$ can be unambiguously defined for any $w \in W$ by requiring that $\overline {uv} = \overline {u} \cdot \overline {v}$ whenever $\ell (uv) = \ell (u) + \ell (v)$; here $\ell(w)$ denotes the length of~$w\in W$. A \emph{reduced word} for $w \in W$ is a sequence of indices $(i_1, \ldots, i_m)$ that satisfies $w = s_{i_1} \cdots s_{i_m}$ and has the shortest possible length~$m=\ell(w)$. The set of reduced words for~$w$ will be denoted by~$R(w)$. As customary, $w_\mathrm{o}$ denotes the unique element of maximal length in~$W$. 
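In the special case $G = SL_n$ with the standard choice of Borel subgroups, the embedding $\varphi_i$ simply places a $2\times 2$ block at rows and columns $i$, $i+1$, so the elements $x_i(t)$, $x_{\overline i}(t)$, $t^{h_i}$ become familiar elementary matrices. A short sketch (our illustration of this standard matrix picture):

```python
def phi(n, i, m):
    """Embed a 2x2 matrix m into SL_n at rows/columns i, i+1 (1-based):
    the matrix realization of the canonical embedding varphi_i attached
    to the i-th simple root, in the special case G = SL_n."""
    M = [[1.0 if a == b else 0.0 for b in range(n)] for a in range(n)]
    for a in range(2):
        for b in range(2):
            M[i - 1 + a][i - 1 + b] = m[a][b]
    return M

x1_plus  = phi(3, 1, [[1, 2], [0, 1]])      # x_1(2): one-parameter subgroup of N
x2_minus = phi(3, 2, [[1, 0], [3, 1]])      # x_2bar(3): subgroup of N_-
t_h1     = phi(3, 1, [[2, 0], [0, 0.5]])    # 2^{h_1}: an element of the torus H
```

In particular, the fundamental weight relation $(t^{h_j})^{\omega_i} = t^{\delta_{ij}}$ can be read off the diagonal of such torus elements.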
We denote by $G_0=N_-HN$ the open subset of elements $x\in G$ that have Gaussian decomposition; this decomposition will be written as $x = [x]_- [x]_0 [x]_+ \,$. For $u,v \in W$ and $i \in [1,r]$, the \emph{generalized minor} $\Delta_{u \omega_i, v \omega_i}$ is the regular function on $G$ whose restriction to the open set ${\overline {u}} G_0 {\overline {v}}^{-1}$ is given by \begin{equation*} \Delta_{u \omega_i, v \omega_i} (x) = \left[{\overline {u}}^{-1} x \overline v\right]_0^{\omega_i} \ . \end{equation*} It can be shown that $\Delta_{u \omega_i, v \omega_i}$ depends on the weights $u \omega_i$ and $v \omega_i$ alone, not on the particular choice of $u$ and~$v$. In the special case $G=SL_n\,$, the generalized minors are nothing but the ordinary minors of a matrix. \section{Main results} We generalize the Loewner-Whitney Theorem (Theorem~\ref{th:Whitney-Loewner classical}) as follows. \begin{theorem} \label{th:loewner general} For an element $x \in G$, the following are equivalent: \begin{itemize} \item[(a)] all generalized minors $\Delta_{\gamma, \delta}$ take nonnegative real values at~$x$; \item[(b)] $x$ lies in the closure of the set of elements with positive generalized minors; \item[(c)] $x$ lies in the multiplicative monoid generated by the elements of the form $t^{h_i}$, $x_i (t)$, and $x_{\overline i} (t)$, with positive~$t$. \end{itemize} \end{theorem} An element $x\in G$ satisfying any of the equivalent conditions (a)--(c) of Theorem~\ref{th:loewner general} is called \emph{totally nonnegative.} The set of all such elements is denoted by~$G_{\geq 0}\,$. The following generalization of Theorem~\ref{th:TNN-TP classical} is immediate from the results in Lusztig~\cite{lusztig}; a proof based on the results in~\cite{FZ} will be given in Section~\ref{sec:TNN-TP general} below. \begin{theorem} \label{th:TNN-TP general} For an element $x\!\in\! 
G_{\geq 0}\,$, the following are equivalent: \begin{itemize} \item[(d)] all generalized minors of $x$ are positive; \item[(e)] $\Delta_{\omega_i,w_\mathrm{o}\omega_i}(x)>0$ and $\Delta_{w_\mathrm{o}\omega_i,\omega_i}(x)>0$ for any $i\in [1,r]$; \item[(f)] $x$ belongs to the intersection of open Bruhat cells $Bw_\mathrm{o} B\cap B_-w_\mathrm{o} B_-\,$. \end{itemize} \end{theorem} An element $x\!\in\! G$ satisfying any of the equivalent conditions (d)--(f) of Theorem~\ref{th:TNN-TP general} is called \emph{totally positive.} The set of all such elements will be denoted by~$G_{>0}\,$. Let us call an element $x \in G_{\geq 0}$ \emph{oscillatory} if for some positive integer~$m$, the element $x^m$ is totally positive. We will give equivalent reformulations of this property which in particular generalize the criterion (h) in Theorem~\ref{th:GK classical}. In fact, our version of criterion~(h) will be more general even in the special case~$G=SL_n\,$. Let $i$ and $j$ be two indices lying in the same connected component of the Dynkin graph of $G$ (the case $j = i$ is not excluded). Let \[ i = i(1), i(2), \dots, i(l) = j \] be the unique path from $i$ to $j$ in the Dynkin graph. Thus $\{i(k), i(k+1)\}$ is an edge for $k = 1, \dots, l-1$, and all indices $i(k)$ are distinct. Let us denote $c(j \to i) = s_{i(2)} s_{i(3)} \cdots s_{j}$ (in particular, $c(i \to i) = e$), and set \begin{eqnarray*} \begin{array}{l} \Delta_{j \to i} = \Delta_{c(j \to i) \omega_j, s_i c(j \to i) \omega_j}\,, \\[.1in] \Delta_{j \to \overline i} = \Delta_{s_i c(j \to i) \omega_j, c(j \to i) \omega_j} \,. \end{array} \end{eqnarray*} For a given $i$, we say that each minor of the form $\Delta_{j \to i}$ (resp. $\Delta_{j \to \overline i}$) is an $i$-\emph{indicator} (resp. $\overline i$-indicator). \begin{theorem} \label{th:GK general} Let $C$ be a collection of $2r$ generalized minors that contains, for every $i \in [1,r]$, an $i$-indicator and an $\overline i$-indicator. 
Then, for an element $x \in G_{\geq 0}\,$, the following are equivalent: \begin{itemize} \item[(g)] $x$ is oscillatory; \item[(h)] $\Delta (x) > 0$ for any $\Delta \in C$; \item[(i)] $x$ does not belong to a proper parabolic subgroup of $G$ containing $B$ or~$B_-$. \end{itemize} \end{theorem} Note that the equivalence ${\rm(g)}\Longleftrightarrow{\rm(h)}$ in Theorem~\ref{th:GK general} generalizes Theorem~\ref{th:GK classical}. Indeed, for $G = SL_n$ and the standard numbering of fundamental weights, one checks that $x_{i,i+1} = \Delta_{1 \to i}$ and $x_{i+1,i} = \Delta_{1 \to \overline i}\,$. Thus the set $C$ consisting of these matrix entries satisfies the condition of Theorem~\ref{th:GK general}. Our last main result is a generalization of Theorem~\ref{th:GK bound} to all classical groups. \begin{theorem} \label{th:GK2 general} For any given $G$, there exists a positive integer $m$ with the following property: an element $x \in G_{\geq 0}$ is oscillatory if and only if $x^m \in G_{>0}\,$. A positive integer $m$ has this property if and only if for any permutation $\mathbf{i} = (i_1, \dots, i_r)$ of indices $1, \dots, r$, the concatenation of $m$ copies of $\mathbf{i}$ has a reduced word for $w_\mathrm{o}$ as a subword. \end{theorem} Let $m(G)$ denote the smallest positive integer~$m$ that has the property described in Theorem~\ref{th:GK2 general}. \begin{theorem} \label{th:GK2 general concrete} For a simple group~$G$, the value of $m(G)$ is given by the table \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Type & $A_r$ & $B_r$ or $C_r$ & $D_r\,$, $r$ even & $D_r\,$, $r$ odd & $E_6$ & $E_7$ & $E_8$ & $F_4$ & $G_2$ \\ \hline $m(G)$ & $r$ & $r$ & $r-1$ & $r$ & $8$ & $9$ & $15$ & $6$ & $3$ \\ \hline \end{tabular} \,. \end{center} \end{theorem} The remaining sections contain the proofs of Theorems~\ref{th:loewner general}--\ref{th:GK2 general concrete}. 
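The subword condition in Theorem~\ref{th:GK2 general} can be tested by brute force in type $A_r$. The sketch below (ours, not from the paper) uses the standard 0-Hecke (Demazure) product, under which a word contains a reduced word for $w_\mathrm{o}$ as a subword exactly when the product equals $w_\mathrm{o}$; it recovers the entry $m(A_r) = r$ of the table for small ranks.

```python
from itertools import permutations

def length(p):
    """Coxeter length in type A = number of inversions."""
    return sum(p[a] > p[b]
               for a in range(len(p)) for b in range(a + 1, len(p)))

def demazure(word, n):
    """0-Hecke (Demazure) product of a word in the simple
    transpositions s_1, ..., s_{n-1} of S_n: w * s_i = w s_i if that
    increases the length, and w otherwise."""
    w = list(range(1, n + 1))
    for i in word:
        w2 = w[:]
        w2[i - 1], w2[i] = w2[i], w2[i - 1]
        if length(w2) > length(w):
            w = w2
    return tuple(w)

def m_type_A(r):
    """Smallest m such that m copies of every permutation of
    (1, ..., r) contain a reduced word for w_o as a subword; the table
    gives m = r for type A_r."""
    n = r + 1
    w_o = tuple(range(n, 0, -1))
    m = 1
    while not all(demazure(p * m, n) == w_o
                  for p in permutations(range(1, r + 1))):
        m += 1
    return m

m3 = m_type_A(3)
```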
\section{Proof of Theorem~\ref{th:TNN-TP general}} \label{sec:TNN-TP general} The group $G$ has two \emph{Bruhat decompositions}, with respect to opposite Borel subgroups $B$ and $B_-\,$: $$G = \bigcup_{u \in W} B u B = \bigcup_{v \in W} B_- v B_- \ . $$ The \emph{double Bruhat cells}~$G^{u,v}$ are defined by $G^{u,v} = B u B \cap B_- v B_- \,$. Let $H_{>0}$ be the subgroup of~$H$ generated by the elements $t^{h_i}$ for any $t > 0$ and $i \in [1,r]$; equivalently, $H_{>0}$ consists of all $a\in H$ such that $a^\gamma > 0$ for any weight $\gamma \in P$. Following G.~Lusztig, let us define the set~$G_{\geq 0}$ as the multiplicative monoid in $G$ generated by $H_{>0}$ and the elements $x_i (t)$ and $x_{\overline i} (t)$, for $i\in [1,r]$ and $t > 0$. In other words, we use condition (c) of Theorem~\ref{th:loewner general} as the interim definition of~$G_{\geq 0}$. The set $G_{\geq 0}$ is the disjoint union of \emph{totally positive varieties} $G^{u,v}_{> 0}$ defined by $$G^{u,v}_{> 0} = G_{\geq 0} \cap G^{u,v} \, .$$ We denote $[\overline 1, \overline r] = \{\overline 1, \ldots, \overline r\}$. For any sequence $\mathbf{i}= (i_1, \ldots, i_m)$ of indices from the alphabet $[1,r] \cup [\overline 1, \overline r]$, let us define the map $x_\mathbf{i}: H \times \mathbb{C}^m \to G$ by \begin{equation} x_\mathbf{i} (a; t_1, \ldots, t_m) = a\, x_{i_1} (t_1) \cdots x_{i_m} (t_m) \, . \end{equation} By definition, an element $x\in G_{\geq 0}$ can be represented as $x=x_\mathbf{i} (a; t_1, \ldots, t_m)$, for some sequence $\mathbf{i}$, with all the $t_k$ positive and $a\in H_{>0}\,$. A \emph{double reduced word} for the elements $u,v\in W$ is a reduced word for an element $(u,v)$ of the Coxeter group $W \times W$. To avoid confusion, we will use the indices $\overline 1, \overline 2, \ldots, \overline r$ for the simple reflections in the first copy of $W$, and $1, 2, \ldots, r$ for the second copy. 
A double reduced word for $(u,v)$ is nothing but a shuffle of a reduced word for $u$ written in the alphabet $[\overline 1, \overline r]$ and a reduced word for $v$ written in the alphabet $[1,r]$. We denote the set of double reduced words for $(u,v)$ by $R(u,v)$. The \emph{weak order} is the partial order on~$W$ defined as follows: $u'\preceq u$ stands for $\ell(u)=\ell(u')+\ell({u'}^{-1}u)$. (In other words, a reduced word for $u'$ can be extended on the right to form a reduced word for~$u$.) We note that $w\preceq w_\mathrm{o}$ for any~$w \in W$. The following lemma provides alternative descriptions of the totally positive varieties~$G^{u,v}_{> 0}\,$. \begin{lemma} \label{lem:c'''} For an element $x\in G^{u,v}$, the following conditions are equivalent: \begin{itemize} \item[(${\rm c}'$)] $x \in G^{u,v}_{> 0}\,$; \item[(${\rm c}''$)] for some (equivalently, any) double reduced word $\mathbf{i} \in R(u,v)$, we have $x = x_\mathbf{i} (a;t_1, \dots, t_m)$ with $a \in H_{>0}$ and $t_1, \dots, t_m > 0$; \item[(${\rm c}'''$)] $\Delta_{u' \omega_i, v' \omega_i} (x) > 0$ for all $i\in [1,r]$ and all $u' \preceq u$, $v' \preceq v^{-1}$. \end{itemize} \end{lemma} \smallskip\noindent {\bf Proof. } See \cite[Theorems~1.3 and 1.11]{FZ}. (The equivalence $({\rm c}')\Longleftrightarrow ({\rm c}'')$ was essentially established in~\cite{lusztig}.) \hfill$\square$\medskip Now everything is ready for the proof of Theorem~\ref{th:TNN-TP general}. The implication $({\rm f})\Longrightarrow ({\rm d})$ is a special case of $({\rm c}')\Longrightarrow ({\rm c}''')$, while $({\rm d})\Longrightarrow ({\rm e})$ is trivial. Finally, to show that $({\rm e})\Longleftrightarrow ({\rm f})$, it suffices to note that \[ G^{w_\mathrm{o}, w_\mathrm{o}} = w_\mathrm{o} G_0 \,\cap \, G_0 w_\mathrm{o} = \{x \in G: \Delta_{w_\mathrm{o}\omega_i,\omega_i}(x)\neq 0, \, \Delta_{\omega_i,w_\mathrm{o}\omega_i}(x)\neq 0 \, \, (i \in [1,r])\} \] (cf.~\cite[Corollary~2.5]{FZ} or \cite[Proposition~4.1]{FZcells}). 
\qed \section{Proof of Theorem~\ref{th:loewner general}} \subsection{Proof of $({\rm b}) \Rightarrow ({\rm a})$} This is obvious since all generalized minors are continuous functions on~$G$. \subsection{Proof of $({\rm c}) \Rightarrow ({\rm b})$} In view of Lemma~\ref{lem:c'''}, it suffices to show that the closure of $G^{w_\mathrm{o}, w_\mathrm{o}}_{> 0}$ contains all totally positive varieties $G^{u, v}_{> 0}\,$. Suppose $x \in G^{u,v}_{> 0}$ for some $u$ and $v$. Take any $\mathbf{i} \in R(u,v)$ and write $x = x_\mathbf{i} (a;t_1, \dots, t_m)$ as in (${\rm c}''$). Choose a word $\mathbf{j} \in R(w_\mathrm{o}, w_\mathrm{o})$ that has $\mathbf{i}$ as an initial segment. Then \begin{equation} x = x_\mathbf{i} (a;t_1, \dots, t_m) = \lim_{t \to +0} x_\mathbf{j} (a;t_1, \dots, t_m, t, \dots, t) \,, \end{equation} and $({\rm b})$ follows. \subsection{Proof of $({\rm a}) \Rightarrow ({\rm c})$} Suppose that $x \in G^{u,v}$ satisfies $({\rm a})$. It suffices to check condition $({\rm c}''')$ in Lemma~\ref{lem:c'''}. Let $\Sigma (x)$ denote the set of all pairs $(u', v') \in W \times W$ such that $\Delta_{u' \omega_i, v' \omega_i} (x) > 0$ for all $i$. Our aim is to show that \begin{equation} \label{eq:aim} (u',v') \in \Sigma (x) \ \ \text{for} \ \ u' \preceq u, \, v' \preceq v^{-1} \ . \end{equation} As a first step we notice that \begin{equation} \label{eq:Schubert cell inequalities} (u, e), (e, v^{-1}) \in \Sigma (x) \ ; \end{equation} this follows from the well-known fact that $\Delta_{u \omega_i, \omega_i}$ vanishes nowhere on the Bruhat cell $B u B$; see, e.g., \cite[Lemma~3.4]{FZcells}. We shall write $u' \to u''$ if $u'' = u' s_i$ for some $i$, and $\ell (u'') = \ell (u') + 1$. 
In view of (\ref{eq:Schubert cell inequalities}), the desired inclusions (\ref{eq:aim}) are consequences of the following statements: \begin{eqnarray} \label{eq:stat1} \begin{array}{l} \text{if $u' \to u''$ and $(u'', e)\in\Sigma (x)$, then $(u', e)\in\Sigma (x)$;}\\[.1in] \text{if $u' \to u''$ and $(e, u'')\in\Sigma (x)$, then $(e, u')\in\Sigma (x)$;} \end{array} \end{eqnarray} \begin{equation} \label{eq:stat2} \text{if $u' \to u''$, $v' \to v''$, $(u', v'')\in\Sigma (x)$, $(u'', v')\in\Sigma (x)$, then $(u'', v'') \in \Sigma(x)$.} \end{equation} Our proof of both (\ref{eq:stat1}) and (\ref{eq:stat2}) relies on the following identity \cite[Theorem~1.17]{FZ}: \begin{equation} \label{eq:minors-Dodgson} \Delta_{u'\omega_i, v' \omega_i} \Delta_{u'' \omega_i, v'' \omega_i} = \Delta_{u' \omega_i, v'' \omega_i} \Delta_{u'' \omega_i, v' \omega_i} + \prod_{j \neq i} \Delta_{u' \omega_j, v' \omega_j}^{- a_{ji}} \,, \end{equation} whenever $u' \to u'' = u' s_i$ and $v' \to v'' = v' s_i$; here the numbers $a_{ji}$ are the entries of the Cartan matrix of $G$. To prove (\ref{eq:stat1}), suppose that $u' \to u'' = u' s_i$ and $(u'', e) \in \Sigma (x)$. Now specialize (\ref{eq:minors-Dodgson}) at $v' = e$ and evaluate both sides at~$x$. Using the fact that $u' \omega_j = u'' \omega_j$ for $j \neq i$, we see that the second summand on the right-hand side is strictly positive. Since all generalized minors of $x$ are nonnegative, we conclude that both factors on the left-hand side are positive. In particular, $\Delta_{u'\omega_i, \omega_i} (x) > 0$, i.e., $(u', e) \in \Sigma (x)$, as desired. The second part of (\ref{eq:stat1}) is proved in the same way. To prove (\ref{eq:stat2}), suppose that $u' \to u'' = u' s_i$ and $v' \to v'' = v' s_j$, and both $(u', v'')$ and $(u'', v')$ belong to $\Sigma (x)$. We need to show that $\Delta_{u'' \omega_k, v'' \omega_k} (x) > 0$ for all $k$. If $k \neq i$, then $u''\omega_k = u' \omega_k$ and we are done since $(u', v'') \in \Sigma (x)$. 
The case $k \neq j$ is treated in the same way. It thus remains to consider the case $k = j = i$. But then in (\ref{eq:minors-Dodgson}), the first summand on the right (evaluated at $x$) is positive, implying $\Delta_{u'' \omega_i, v'' \omega_i} (x) > 0$, as desired. This completes the proof of Theorem~\ref{th:loewner general}. \hfill$\square$\medskip \section{Proof of Theorem~\ref{th:GK general}} \subsection{Proof of $({\rm g}) \Rightarrow ({\rm i})$} Since total positivity is described by condition $({\rm f})$ in Theorem~\ref{th:TNN-TP general}, it suffices to show that every proper parabolic subgroup of $G$ containing $B$ or~$B_-$ has empty intersection with the open double Bruhat cell $G^{w_\mathrm{o}, w_\mathrm{o}}$. The latter follows at once from the well known description of maximal proper parabolic subgroups containing $B$ or~$B_-\,$: they are the subgroups $P_1, \dots, P_r$ and $P_{\overline 1}, \dots, P_{\overline r}$ given by \begin{equation} \label{eq:max parabolics} P_i = \bigcup_{i \notin {\rm Supp} (v)} B_- v B_-\ , \quad P_{\overline i} = \bigcup_{i \notin {\rm Supp} (u)} B u B \ , \end{equation} where ${\rm Supp} (w)$ denotes the set of indices that occur in some (equivalently, any) reduced word for~$w \in W$. \subsection{Proof of $({\rm i}) \Rightarrow ({\rm g})$} Consider the monoid $\mathcal{H}$ whose generators $T_1,\dots,T_r$ are subject to relations \[ \begin{array}{rcl} T_i^2 &=& T_i \,;\\[.1in] \underbrace{T_iT_jT_i\cdots}_{m_{ij}} & =& \underbrace{T_jT_iT_j\cdots}_{m_{ij}} \ \ (i \neq j) \, ; \end{array} \] here $m_{ij}$ is the order of $s_i s_j$ in $W$. A~well known theorem of Tits on reduced words (see, e.g., \cite[II,\S3C]{brown}) has the following implications. First, if $(i_1, \dots, i_m) \in R(w)$, then the product $T_{i_1} \cdots T_{i_m}$ only depends on $w$ and so can be unambiguously denoted by~$T_w\,$. Second, the correspondence $w \mapsto T_w$ is a bijection between $W$ and~$\mathcal{H}$. 
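Concretely, Tits' theorem makes $\mathcal{H}$ easy to compute in: $T_w T_{s_i} = T_{w s_i}$ if $\ell(w s_i) > \ell(w)$, and $T_w T_{s_i} = T_w$ otherwise. The following sketch (ours, purely illustrative; it encodes elements of $W = S_n$ as permutations of $\{0,\dots,n-1\}$ in one-line notation) carries this out for type $A_3$:

```python
def apply_hecke(w, i):
    """Right 0-Hecke action of T_i on w (a permutation of 0..n-1 in
    one-line notation): return w*s_i if that is longer, else w itself."""
    if w[i - 1] < w[i]:              # ell(w s_i) > ell(w)
        w = list(w)
        w[i - 1], w[i] = w[i], w[i - 1]
        return tuple(w)
    return w

def hecke_product(word, n):
    """Evaluate T_{i_1} ... T_{i_N} in the 0-Hecke monoid of S_n,
    returned as the permutation w with T_{i_1} ... T_{i_N} = T_w."""
    w = tuple(range(n))              # identity permutation <-> T_e
    for i in word:
        w = apply_hecke(w, i)
    return w

n = 4                                # type A_3: W = S_4, w_o = reversal
w0 = tuple(reversed(range(n)))
assert hecke_product([1, 1], n) == hecke_product([1], n)            # T_i^2 = T_i
assert hecke_product([1, 2, 1], n) == hecke_product([2, 1, 2], n)   # braid relation
# A word evaluates to T_{w_o} exactly when it contains a reduced word
# for w_o as a subword; e.g. (1,2,3)^3 does, while (1,2,3)^2 does not:
assert hecke_product([1, 2, 3] * 3, n) == w0
assert hecke_product([1, 2, 3] * 2, n) != w0
```

The last two assertions illustrate, in type $A_3$, both the subword criterion stated next and the value $m(G)=r$ computed later for type $A_r$.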
Finally, we have the following criterion for determining when a product of generators is equal to $T_{w_\mathrm{o}}$. \begin{lemma} \label{lem:hecke2} For a word $(i_1,\dots,i_N)$ in the alphabet $[1,r]$, we have $T_{i_1} \cdots T_{i_N}\!=\!T_{w_\mathrm{o}}$ if and only if this word has a reduced word for~$w_\mathrm{o}$ as a subword. \end{lemma} The relevance of $\mathcal{H}$ to our problem is clear from the following lemma. \begin{lemma} \label{lem:hecke} For any $x \in G^{u,v}_{>0}$ and $y \in G^{u',v'}_{>0}$, we have $xy \in G^{u'',v''}_{>0}$, where the elements $u''$ and $v''$ are given by $T_{u''}=T_u T_{u'}$ and $T_{v''}=T_v T_{v'}$. \end{lemma} \smallskip\noindent {\bf Proof. } Follows from condition $({\rm c}'')$ of Lemma~\ref{lem:c'''}, together with the commutation relations among the elementary factors $x_i(t)$ and $x_{\bar i}(t)$, as given in \cite[Theorem~3.1]{BZ} and \cite[Section~2.2]{FZ}. \hfill$\square$\medskip By Lemma~\ref{lem:hecke} and condition $({\rm f})$ of Theorem~\ref{th:TNN-TP general}, for any $x \in G^{u,v}_{>0}$ and any positive integer~$m$, we have \begin{equation} \label{eq:osc exponent} x^m \in G_{>0} \Leftrightarrow T_u^m = T_v^m = T_{w_\mathrm{o}} \ . \end{equation} Suppose that a totally nonnegative element $x$ satisfies condition~$({\rm i})$. By (\ref{eq:max parabolics}), $x \in G^{u,v}_{>0}$ for some elements $u, v \in W$ such that ${\rm Supp} (u) = {\rm Supp} (v) = [1,r]$. We need to show that $x$ is oscillatory. In view of (\ref{eq:osc exponent}), this means that $T_u^m = T_v^m = T_{w_\mathrm{o}}$ for sufficiently large~$m$. The latter is clear from Lemma~\ref{lem:hecke2}: just take~$m=\ell(w_\mathrm{o})$. \subsection{Proof of $({\rm h}) \Leftrightarrow ({\rm i})$} This equivalence can be restated as follows. \begin{lemma} \label{lem:parabolic indicators} Let $i \in [1,r]$, and let $\Delta$ be an $i$-indicator (resp.\ $\overline i$-indicator). 
Then $\Delta$ vanishes on $P_i$ (resp.\ $P_{\overline i}$), and $\Delta (x) > 0$ for any $x \in G_{\geq 0}$ outside $P_i$ (resp.~$P_{\overline i}$). \end{lemma} \smallskip\noindent {\bf Proof. } It is enough to consider $i$-indicators, the case of $\overline i$-indicators being totally similar. Changing if necessary the numeration of fundamental weights, we can assume without loss of generality that $i = 1$, and $$\Delta = \Delta_{j \to 1} = \Delta_{u \omega_j, s_1 u \omega_j} \, ,$$ where $u = c(j\to 1) = s_2 \cdots s_j\,$, with nonzero Cartan matrix entries $a_{k,k+1}$ for $k = 1, \dots, j-1$. First let us show that $\Delta (x) = 0$ for $x \in P_1$. We will denote by $x^T$ the ``transpose'' of~$x$; more precisely, $x\mapsto x^T$ is the anti-automorphism of~$G$ defined by \begin{equation*} \label{eq:T} a^T = a \quad (a \in H) \ , \quad x_i (t)^T = x_{\overline i} (t) \ , \quad x_{\overline i} (t)^T = x_i (t) \,. \end{equation*} As in~\cite{FZ}, we will use the notation $\Delta^{\omega_i}=\Delta_{\omega_i, \omega_i}$ for the $i$th ``principal minor.'' Using \cite[(1.10), (2.25)]{FZ}, we obtain: \[ \begin{array}{r} \Delta (x) = \Delta_{u\omega_j,s_1u\omega_j}(x) = \Delta_{s_1u\omega_j,u\omega_j}(x^T) = \Delta^{\omega_j}({\overline{s_1u}}^{-1}x^T\overline{u})\\[.1in] = \Delta^{\omega_j}({\overline{u^{-1}s_1u}}^{-1}\overline{u^{-1}}x^T\overline{u}) = \Delta_{u^{-1} s_1 u \omega_j, \omega_j} (\overline {u^{-1}} x^T \overline u) \, . \end{array} \] Observe that $\overline {u^{-1}} x^T \overline u \in P_{\overline 1}$ for any $x \in P_1$ (since all three factors belong to~$P_{\overline 1}$). It remains to prove that $\Delta_{u^{-1} s_1 u \omega_j, \omega_j}$ vanishes on $P_{\overline 1}$. 
To see this we use the following description of $P_{\overline 1}$ equivalent to (\ref{eq:max parabolics}): $P_{\overline 1} = \pi^{-1} (X_{w_\mathrm{o}'})$, where $\pi$ is the projection of $G$ onto the flag variety $G/B$, the element $w_\mathrm{o}' \in W$ is the longest element of the parabolic subgroup generated by $s_2, \dots, s_r$, and $X_w$ is the Schubert variety corresponding to $w$ (i.e., the closure of the Schubert cell $(B wB)/B$). Our claim that $\Delta_{u^{-1} s_1 u \omega_j, \omega_j}$ vanishes on $P_{\overline 1}$ now follows from the fact that $1 \in {\rm Supp}(u^{-1} s_1 u)$, which means that $u^{-1} s_1 u$ is \emph{not} smaller than or equal to $w_\mathrm{o}'$ in the Bruhat order (cf., e.g., \cite[Lemma 3.4]{FZcells}; in the notation of~\cite{FZcells}, $\Delta_{\gamma,\omega_j}(x)=p_\gamma(\pi(x))$). To complete the proof of Lemma~\ref{lem:parabolic indicators} and Theorem~\ref{th:GK general}, it remains to show that $\Delta_{j \to 1} (x) > 0$ for any element $x \in G_{\geq 0}$ not belonging to $P_1$. We proceed by induction on $j$. Let us first consider the case $j = 1$ when we need to show that $\Delta_{\omega_1, s_1 \omega_1} (x) > 0$. Since $\Delta_{\omega_1, s_1 \omega_1}(b_- x) = \Delta^{\omega_1} (b_-) \Delta_{\omega_1, s_1 \omega_1}(x)$ for any $b_- \in B_-$, we can assume without loss of generality that $x$ has the form $$x = x_{i_1} (t_1) \cdots x_{i_m}(t_m)$$ for some sequence of (unbarred) indices $i_1, \dots, i_m$ and some positive numbers $t_1, \dots, t_m$. The condition $x \notin P_1$ means that at least one of the indices $i_k$ is equal to $1$; let $k$ be the maximal index such that $i_k = 1$. 
Using the fact that $\overline {s_1}^{\ -1} x_i (t) \overline {s_1} \in N$ for any $i \neq 1$, and the commutation relation \cite[(2.13)]{FZ}, we conclude that \[ \begin{array}{r} \Delta_{\omega_1, s_1 \omega_1}(x) = \Delta^{\omega_1} (x_{i_1} (t_1) \cdots x_{i_k}(t_k) \overline {s_1} \cdot (\overline {s_1}^{\ -1} x_{i_{k+1}} (t_{k+1}) \cdots x_{i_m}(t_m) \overline {s_1})) \\[.1in] = \Delta^{\omega_1} (x_{i_1} (t_1) \cdots x_{i_k}(t_k) \overline {s_1}) = \Delta^{\omega_1} (x_{i_1} (t_1) \cdots x_{i_{k-1}}(t_{k-1}) x_{\overline 1}(t_k^{-1}) t_k^{h_1})\, . \end{array} \] Since the element $x' = x_{i_1} (t_1) \cdots x_{i_{k-1}}(t_{k-1}) x_{\overline 1}(t_k^{-1}) t_k^{h_1}$ is totally nonnegative, and any principal minor is positive on~$G_{\geq 0}$ (see \cite[Corollary~2.5 and Proposition~2.29]{FZ}), we conclude that $\Delta^{\omega_1} (x') > 0$, as desired. Now assume that $j \geq 2$, and that we already know that $\Delta_{j' \to 1}(x) > 0$ for $j' = 1, \dots, j-1$. Let us apply the identity (\ref{eq:minors-Dodgson}) for $i = j$, $u' = s_2 \cdots s_{j-1}$, and $v' = s_1 \cdots s_{j-1}$. In our present notation, it takes the following form: \begin{equation} \label{eq:Dodgson indicator} \Delta^{\omega_j} \Delta_{j \to 1} = \Delta_{u' \omega_j, v'' \omega_j} \Delta_{u'' \omega_j, v' \omega_j} + \prod_{j' > j} (\Delta^{\omega_{j'}})^{- a_{j'j}} \cdot \prod_{j'= 1}^{j-1} \Delta_{j' \to 1}^{- a_{j'j}} \,; \end{equation} here we used that $\Delta_{u\omega_j,v\omega_j}=\Delta^{\omega_j}$ whenever $u$ and $v$ belong to the parabolic subgroup of~$W$ generated by all simple reflections except~$s_j\,$. By the inductive assumption, the second summand in the right-hand side of~(\ref{eq:Dodgson indicator}) is positive at~$x$, while the first summand is nonnegative. It follows that $\Delta_{j \to 1}(x) > 0$, completing the proof. 
\hfill$\square$\medskip \section{Proof of Theorems~\ref{th:GK2 general} and \ref{th:GK2 general concrete}} \subsection{Proof of Theorem~\ref{th:GK2 general}} This is an immediate consequence of (\ref{eq:osc exponent}) and Lemma~\ref{lem:hecke2}. \subsection{Proof of Theorem~\ref{th:GK2 general concrete}} \label{sec:coxeter-elements} We will need some basic facts about Coxeter elements in Weyl groups (the proofs can be found in~\cite[Section~V.6]{bourbaki}). Recall that a \emph{Coxeter element} $c \in W$ is a product of simple reflections $s_1, \dots, s_r$ taken in any order. All such elements are conjugate to each other and thus have the same order; this order is called the \emph{Coxeter number} of $W$ and denoted by~$h$. Here are the statements we need: \begin{itemize} \item[(C1)] If $W$ is irreducible, then $h/2 = \ell(w_\mathrm{o})/r$. \item[(C2)] If $w_\mathrm{o} = -1$ (i.e., $w_\mathrm{o} (\lambda) = - \lambda$ for any weight $\lambda$), then $h$ is even, and $c^{h/2} = w_\mathrm{o}$ for any Coxeter element $c \in W$. \end{itemize} Now suppose that $G$ is simple, so the Weyl group $W$ is irreducible. Combining Theorem~\ref{th:GK2 general} with (C1)--(C2), we conclude that $m(G) = h/2 = \ell(w_\mathrm{o})/r$ whenever $w_\mathrm{o} = -1$. According to the tables in~\cite{bourbaki}, this gives the desired answer for $m(G)$ for all the types except $A_r$ ($r \geq 2$), $D_r$ ($r$ odd), and~$E_6\,$. Let us consider these remaining cases separately. Throughout, we denote by $\mathbf{i}=(i_1,\dots,i_r)$ a permutation of indices~$1,\dots,r$. It will be convenient to use the notation $\mathbf{i}^k$ for the concatenation of $k$ copies of~$\mathbf{i}$. \emph{Type $A_r\,$}. As usual, we identify $W$ with the symmetric group $S_{r+1}$; under this identification, $s_i$ becomes the transposition of adjacent indices $i$ and $i+1$, and $w_\mathrm{o} (i) = r + 2 - i$ for $i \in [1,r+1]$. 
If $\mathbf{i}=(1,\dots,r)$, then~$\mathbf{i}^{r-1}$ does not contain a reduced word for~$w_\mathrm{o}$, since any such reduced word must have a subword $r,r\!-\!1,\dots,2,1$ (because $w_\mathrm{o}$ switches 1 and~$r+1$). For an arbitrary permutation~$\mathbf{i}$ of $1,\dots,r$, let us now consider the sequence~$\mathbf{i}^r$. We will form a subsequence~$\mathbf{j}$ of~$\mathbf{i}^r$ as follows. First, $\mathbf{j}$ will include all $r$ entries of~$\mathbf{i}^r$ which are equal to~1. Between any two consecutive~1's, there is a~2; let $\mathbf{j}$ include all these~2's (there will be $r-1$ of them). We then include in~$\mathbf{j}$ the~3's that interlace these~2's ($r-2$ more entries), etc. It is straightforward to check that the subsequence~$\mathbf{j}$ thus obtained will be a reduced word for $w_\mathrm{o}\,$. Thus $m(G) = r$, as claimed. \emph{Type $D_r$ ($\,r$ odd).} In this case $h/2 = \ell(w_\mathrm{o})/r = r-1$. Using the standard combinatorial interpretation of~$D_r\,$, one checks that $(s_1\cdots s_r)^{r-1}\neq w_\mathrm{o}\,$, and so $m(G) \geq r$. To prove the reverse inequality, consider the standard embedding of $W$ into the Coxeter group $\widetilde W$ of type $D_{r+1}$. We know that the Coxeter number $\tilde h$ of $\widetilde W$ is equal to~$2r$, and the longest element $\tilde w_\mathrm{o} \in \widetilde W$ is equal to $-1$. Let $\mathbf{i}=(i_1,\dots,i_r)$ be a permutation of $1,\dots,r$, and denote $\tilde \mathbf{i}=(i_1,\dots,i_r,r+1)$. Then ${\tilde \mathbf{i}}^r$ is a reduced word for $\tilde w_\mathrm{o}$, and therefore it contains a reduced word for $w_\mathrm{o} \in W$ as a subword. We conclude that $m(G) = r$, as desired. \emph{Type $E_6$}. 
The upper bound $m(G)\leq 8$ can be proved using the fact that $(s_1s_4s_6s_2s_3s_5)^6=1$ (in the notation of Figure~\ref{fig:E6}), together with the following observation based on Lemma~\ref{lem:hecke2}: if $(T_c)^k=T_{w_\mathrm{o}}$ for a Coxeter element~$c\in W$, then $(T_{c'})^{k+1}=T_{w_\mathrm{o}}$ for any Coxeter element~$c'$ obtained by taking a cyclic permutation of any reduced word for~$c$. The lower bound is proved by exhibiting a Coxeter element (namely, $c=s_1s_2s_3s_4s_5s_6$) such that $(T_c)^7\neq T_{w_\mathrm{o}}\,$. (The latter verification is due to H.~Derksen.) \hfill$\square$\medskip \begin{figure}[ht] \setlength{\unitlength}{2pt} \begin{center} \begin{picture}(80,25)(0,-5) \thicklines \put(0,0){\line(1,0){80}} \put(40,0){\line(0,1){20}} \put(0,0){\circle*{2.5}} \put(20,0){\circle*{2.5}} \put(40,0){\circle*{2.5}} \put(60,0){\circle*{2.5}} \put(80,0){\circle*{2.5}} \put(40,20){\circle*{2.5}} \put(0,-5){\makebox(0,0){$s_1$}} \put(20,-5){\makebox(0,0){$s_2$}} \put(40,-5){\makebox(0,0){$s_4$}} \put(60,-5){\makebox(0,0){$s_5$}} \put(80,-5){\makebox(0,0){$s_6$}} \put(35,20){\makebox(0,0){$s_3$}} \end{picture} \end{center} \caption{Generators of the Weyl group of type~$E_6$} \label{fig:E6} \end{figure} \section*{Acknowledgements} Harm Derksen contributed to Section~\ref{sec:coxeter-elements} by first bringing the statements (C1)--(C2) to our attention, and then by verifying the type~$E_6$ case of Theorem~\ref{th:GK2 general concrete}. We are grateful to Harm for his input.
\section{Introduction} In the last 10 years, deep learning has transformed the technology industry, enabling computers to perform image classification and recognition, translation, path planning, and more \cite{AKINOSHO2020101827, MISHRA2020104000, KOTSIOPOULOS2021100341, gpt2_fewshot}. While these efforts have been fruitful in terms of providing the desired functionality, most of these implementations employ power-hungry hardware such as GPUs and TPUs \cite{googletpu} and are deployed in systems that are not constrained by power limitations. In recent years, many approaches have been proposed to alleviate these power constraints with methods such as quantization \cite{google_quantization, haq} and approximate computing \cite{2022_aptpu}. Alongside these approaches, many new edge-specific devices have been introduced, such as the Nvidia Jetson Nano, Intel Neural Compute Stick 2, and Google Coral Edge TPU. Many of these edge devices were created to take advantage of quantized networks that operate on lower-precision values rather than the standard single- or double-precision floating-point representations. As a result, the overall power consumption and architecture area are reduced, specifically benefiting applications where space and power are limited. Spiking neural networks (SNNs), considered the latest generation of artificial neural networks (ANNs), are a class of neural networks that focus on biological plausibility, energy efficiency, and event-based computing \cite{2014_diehl_cook}. Unlike their continuous floating-point-valued deep neural network (DNN) counterparts, SNNs operate on discrete temporal values, which represent the biological action potentials of neurons in the brain \cite{2014_diehl_cook}. SNNs have been shown in many works \cite{heartbeat_loihi, image_seg_loihi, 2022_iopnce_mohammadi} to achieve accuracies comparable to DNNs while significantly reducing power and energy consumption. 
While SNNs can be more energy- and power-efficient, training deep SNNs (DSNNs) has been a recurring challenge due to the lack of suitable training/learning algorithms that perform as well as the backpropagation algorithm used in DNNs \cite{2016_training_dsnn_backprop, 2018_rstdp, 2020_snn_training_dilemma}. Many SNN-specific learning algorithms have been proposed, such as spike-timing-dependent plasticity (STDP) and its variants \cite{2018_rstdp, 2020_coding_selection}. These learning approaches rely on the temporal patterns found in the time between spikes to adapt the weight values as the network sees more input \cite{neuronal_dynamics}. This approach to learning, while efficient and suitable for shallow SNNs, does not typically scale well to deeper networks due to the lack of feedback from subsequent layers during training \cite{2016_training_dsnn_backprop, 2020_snn_training_dilemma}. To address or even bypass the training and design challenges introduced in SNNs, many DNN-to-SNN conversion approaches have been proposed \cite{perez_conversion, cao_conversion, 2016_snntoolbox}. One such conversion approach, the SNN Conversion Toolbox \cite{2016_snntoolbox}, uses the parameters of pre-trained DNNs to create a similar SNN and deploy it on Loihi, providing energy-efficient and event-based computation to highly constrained environments in edge computing applications. In this work, we aim to generalize the process of converting pre-trained DNNs into SNNs and deploying the SNNs on neuromorphic hardware such as Loihi by contributing the following: \begin{itemize} \item We provide general guidelines for designing and training DNNs for conversion into SNNs. \item After the SNNs are created, we present analysis and optimization techniques to further optimize the SNNs with respect to power, latency, and energy. \item We compare the performance of SNNs on Loihi against the Intel Neural Compute Stick 2 in classifying static images. 
\end{itemize} The remainder of this work is organized as follows. In Section \ref{sec:edgehardware}, we provide an overview of the two hardware platforms used in this work, the Intel Neural Compute Stick 2 and Intel Loihi, along with their respective APIs. In Section \ref{sec:conversion}, we discuss the conversion methodology and network considerations for converting DNNs to SNNs using the SNN Conversion Toolbox. We then provide some insights and techniques, in Section \ref{sec:deployment}, for optimizing the SNNs in terms of latency and energy consumption. In Section \ref{sec:results}, we present our experimental results with respect to inference accuracy, power, latency, and energy on the three separate image classification tasks. Finally, in Section \ref{sec:conclusion}, we conclude our work with a discussion of the findings and future directions of research. \begin{figure} \centering \resizebox{.7\linewidth}{!}{ \includegraphics[page=1]{loihi_ncs2.pdf} } \caption{Intel Neural Compute Stick 2 (top) \cite{ncs2_pic} and Intel Loihi Kapoho Bay (bottom) \cite{loihi_pics}.} \label{fig:kapoho_ncs2} \end{figure} \section{Edge Hardware} \label{sec:edgehardware} To demonstrate the benefits of neuromorphic hardware for machine learning tasks we use two hardware platforms along with their respective software APIs to perform our experiments. Here, we briefly describe the architectures and APIs of an edge computing neural network accelerator, the Intel Neural Compute Stick 2, and a neuromorphic hardware platform, Intel Loihi. 
\begin{figure} \centering \resizebox{.6\linewidth}{!}{ \includegraphics[page=2]{loihi_ncs2.pdf} } \caption{Nahuku-32 Loihi server blade with 32 interconnected Loihi chips \cite{loihi_pics}.} \label{fig:nahuku32} \end{figure} \subsection{Intel Neural Compute Stick 2} In 2017, Intel launched the Movidius Neural Compute Stick, meant to be used in edge computing devices to accelerate neural networks, specifically in computer vision-based applications using convolutional neural networks (CNNs). Since then, Intel has released an improved version called the Neural Compute Stick 2 (NCS2), which we use herein. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{VGG.pdf} \caption{VGG-9 Network Architecture: \shapelabel{circle}{black!60!red}{R} ReLU, \shapelabel{circle}{blue!60!green}{M} MaxPooling2D, \shapelabel{circle}{blue!60!green}{D} Dropout, \shapelabel{circle}{black!60!red}{S} Softmax.} \label{fig:netarch} \end{figure*} The NCS2, shown in Fig. \ref{fig:kapoho_ncs2}, provides a plug-and-play USB interface for use with edge or small-computer devices like the Raspberry Pi. Specifically, the NCS2 is clocked at 700 MHz and includes 16 SHAVE cores, a neural engine, and 4 GB of memory, which combine to implement a Vision Processing Unit (VPU) \cite{ncs2_specs}. Compared to the single- or double-precision floating-point operations of conventional hardware, the Intel NCS2 performs only 16-bit floating-point operations, trading precision/accuracy for power and area savings. To deploy models on the NCS2, Intel provides an API called OpenVINO \cite{openvino}, which allows models to be compiled and scaled/quantized for deployment on the NCS2. Once the model is optimized for the NCS2, the network can be deployed on the device and input can be presented to perform inference. 
\subsection{Intel Loihi} Intel's neuromorphic platform, Loihi, was introduced in 2018 \cite{2018_loihi} with the goal of deploying SNNs on hardware to better establish neuromorphic computing's viability for accelerating tasks such as image classification and event-based or real-time computing problems. In the years since its launch, Loihi has been shown to achieve orders of magnitude lower energy consumption in machine learning applications while attaining comparable and, in some cases, even better accuracy than traditional DNNs deployed on GPUs and TPUs. In terms of scalability, Loihi's hardware architecture enables it to be scaled from small USB form-factor devices of one to four Loihi chips to much larger data center implementations with many Loihi chips contained within a single server blade, as seen in Figure \ref{fig:nahuku32} \cite{2018_loihi, 2021_loihi}. Each first-generation Loihi chip comprises 128 specialized event-driven neuro-cores, each capable of implementing up to 1,024 spiking neurons in an SNN. Additionally, each neuro-core in the Loihi chip contains 128 KB of state memory and allows the implementation of up to 4,096 fan-in or fan-out axons connecting to other neurons. In this work, we employ the SNN Conversion Toolbox \cite{2016_snntoolbox} along with its custom Loihi backend, NxTF \cite{2021_nxtf}, to convert DNNs into SNNs to be deployed on Intel's Loihi platform. We go into further detail about this conversion process and methodology in Section \ref{sec:conversion}. \section{DNN to SNN Conversion Methodology} \label{sec:conversion} Our experiments perform image classification on three distinct image datasets: the MNIST handwritten digit dataset \cite{mnist}, the fashion MNIST (FMNIST) clothing dataset \cite{fmnist}, and the American Sign Language (ASL) Alphabet \cite{asl_kaggle}. MNIST and FMNIST consist of 10 distinct classes each. MNIST contains 70,000 images in total of handwritten digits zero through nine. 
FMNIST, like MNIST, contains 70,000 static images, in this case of different pieces of clothing such as pullovers, trousers, and bags. Unlike MNIST and FMNIST, the ASL Alphabet dataset contains 24 classes representing static hand gestures corresponding to the English letters A through Y, excluding the non-static gestures for the letters J and Z. The ASL Alphabet dataset includes 34,627 static ASL gestures in total. Each of these datasets was split into the typical train, validation, and test subsets, which are made up of 60\%, 20\%, and 20\% of the total images, respectively. To eliminate any bias with respect to power, latency, and energy, we chose to use a modified version of the VGGNet network proposed in \cite{vggnet} as the representative DNN for our experiments. We use this network across the three datasets to ensure that the model performances discussed in Section \ref{sec:results} are not influenced by the network size or architecture. As seen in Figure \ref{fig:netarch}, our VGGNet implementation consists of six convolution layers, each using the ReLU activation function and followed by a max-pooling layer. After these six convolution layers, we flatten the output and then pass the resulting features into three dense layers consisting of 120, 84, and either 10 or 24 output neurons, depending on the input dataset. In total, this implementation of VGGNet, which we call VGG-9 for the remainder of this work, has 66,378 or 67,568 parameters for the MNIST/FMNIST and the ASL Alphabet datasets, respectively. After training these VGG-9 networks, we use the SNN Conversion Toolbox \cite{2016_snntoolbox} to convert our DNNs into more energy-efficient SNNs. In \cite{2016_snntoolbox}, Rueckauer et al. use the weights and activations of pre-trained DNNs to map DNN layers and neurons to the spiking domain in a one-to-one manner. By mapping these layers and neurons to construct the SNN, the challenges of training and designing SNNs can be somewhat avoided. 
The SNN Conversion Toolbox, compared to previous conversion works \cite{perez_conversion, cao_conversion}, implements most of the common layers used in DNNs, such as convolution layers, pooling layers, and activation functions. However, there are some limitations as to which types of pooling layers and activation functions can be converted into spiking equivalents. For example, while the SNN Conversion Toolbox does implement max-pooling layers for its built-in simulator, the NxTF backend \cite{2021_nxtf} for Loihi does not, since max pooling requires special implementation considerations at the neuron level. The SNN Conversion Toolbox also does not support the hyperbolic tangent (TanH) activation function. For these reasons, we constrain our implementation of the VGG-9 model to employ average-pooling layers as opposed to the classical VGGNet's max-pooling layers. We refer to these constrained networks in Section \ref{sec:results} as C-DNNs. As we will later see in Section \ref{sec:results}, this change does not significantly impact the network's accuracy and in some cases can even improve it. After training our constrained VGG-9 networks, we then use the SNN Conversion Toolbox to convert each network into an SNN to be deployed on Loihi. The first steps performed by the SNN Conversion Toolbox consist of normalizing the DNN parameters with respect to SNNs, mapping the neurons/layers to spiking equivalents, and converting the input data into sequences of spikes, or spike trains. To convert the input data into spike trains, the SNN Conversion Toolbox uses a rate-based coding approach that encodes an input image's pixel intensities into spike trains whose spike rate, called the firing rate, is proportional to the pixel intensity \cite{2016_snntoolbox}. That is, the higher the pixel intensity, the higher the spike frequency of a pixel's corresponding spike train. 
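A rough sketch of this rate-based encoding is given below. This is a simple deterministic variant of our own for illustration (the toolbox's actual encoder may differ in its details): a normalized pixel intensity drives an accumulator, which emits a spike each time it crosses threshold, so the spike count over a fixed duration is proportional to the intensity.

```python
def rate_encode(pixel, duration, max_rate=1.0, dt=1):
    """Deterministic rate coding: a pixel intensity in [0, 1] drives an
    accumulator; a spike is emitted each time it crosses 1, so the
    resulting firing rate is proportional to the intensity."""
    spikes, acc = [], 0.0
    for t in range(0, duration, dt):
        acc += pixel * max_rate * dt
        if acc >= 1.0:
            spikes.append(t)
            acc -= 1.0
    return spikes

# Brighter pixels produce proportionally more spikes over the same duration.
assert len(rate_encode(1.0, 20)) == 20   # spikes every timestep
assert len(rate_encode(0.25, 20)) == 5   # one spike every 4 timesteps
assert rate_encode(0.0, 20) == []        # a dark pixel stays silent
```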
These spike trains are then exposed to the SNN for a fixed, user-configurable time called the \textit{duration}, measured in milliseconds. In our experiments, we run each SNN for numerous iterations with different durations. This duration parameter directly affects the latency of the network, as seen in \cite{2022_iopnce_mohammadi}. Similar to \cite{2022_iopnce_mohammadi}, our analysis includes searching for a duration value that minimizes latency with little impact on accuracy. \begin{table} \centering \caption{Model Parameters and Loihi Core Partitioning} \resizebox{.8\linewidth}{!}{ \begin{tabular}{lrc} \hline \\[-8pt] Layer & Parameter & Loihi Cores \\ \hline \hline \\[-8pt] Conv1 & 60 & 12 \\ Conv2 & 880 & 7 \\ Conv3 & 4640 & 4 \\ Conv4 & 9248 & 7 \\ Conv5 & 13872 & 2 \\ Conv6 & 20784 & 3 \\ FC1 & 5880 & 1 \\ FC2 & 10164 & 1 \\ Output & 850\,\textbf{/}\,2040 & 1 \\ \hline \\[-8pt] Total & 66,378\,\textbf{/}\,67,568 & 47 \\ \hline \end{tabular} } \label{table:netparams} \end{table} \section{Deployment Methodology} \label{sec:deployment} While the SNN Conversion Toolbox did not originally support deployment on Loihi, a custom backend for the toolbox, called NxTF, was released in 2021 \cite{2021_nxtf}. This backend was built using Intel's Loihi API, NxSDK, and implements the spiking layers that the conversion toolbox is capable of converting. With this layer/neuron implementation, NxTF also includes a custom hardware partitioning/distribution optimization algorithm to efficiently distribute the SNN's layers and neurons across the neuro-cores, as well as across multiple Loihi chips if needed. In the case of our VGG-9 networks, only one Loihi chip was used by the deployed SNNs. In Table \ref{table:netparams}, we show the layer-by-layer parameters and the number of neuro-cores allocated to each layer. 
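As a sanity check, the totals in Table \ref{table:netparams} can be reproduced from the standard layer parameter-count formulas. Note that the $3{\times}3$ kernel size (as in VGGNet) and the per-layer channel widths used below are our inference from the table rather than values stated explicitly in the text:

```python
def conv2d_params(k, in_ch, filters):
    """A k x k convolution: (k*k*in_ch + 1) weights plus bias per filter."""
    return (k * k * in_ch + 1) * filters

def dense_params(n_in, n_out):
    """A fully connected layer: one weight per input plus a bias, per output."""
    return (n_in + 1) * n_out

# Channel widths chosen so each layer matches the table (assumed, not stated).
channels = [1, 6, 16, 32, 32, 48, 48]      # grayscale input + six conv layers
conv = [conv2d_params(3, channels[i], channels[i + 1]) for i in range(6)]
assert conv == [60, 880, 4640, 9248, 13872, 20784]

fc = [dense_params(48, 120), dense_params(120, 84)]   # flatten -> 120 -> 84
assert fc == [5880, 10164]

# 10 classes (MNIST/FMNIST) vs. 24 classes (ASL) reproduce both totals.
for n_classes, expected in [(10, 66378), (24, 67568)]:
    assert sum(conv) + sum(fc) + dense_params(84, n_classes) == expected
```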
Once we begin inference on the deployed SNN on Loihi, we perform two types of optimization: (1) reducing latency while maximizing accuracy and (2) increasing data sparseness to reduce overall SNN power and energy consumption. \begin{figure}[] \centering \resizebox{.9\linewidth}{!}{ \includegraphics[]{accuracy_vs_duration.pdf} } \caption{Accuracy vs. Duration with zoomed portion above inflection point.} \label{fig:accuracy_vs_duration} \end{figure} One challenge we encountered while performing our experiments was the polling rate of the power and energy consumption hardware sensors. In this first iteration of Loihi, these sensors operate on timescales of 30 to 40 ms \cite{ncl_models}. This hardware polling limitation constrains the range of durations for which power and energy metrics can be recorded. To alleviate this issue, we ran multiple durations above the 30-40 ms polling threshold and averaged the total power consumption. We then ran the remaining durations below the polling threshold without measuring power and energy consumption. Our experiments show a linear correlation between duration and latency. Using this fact, we can predict an upper limit on energy consumption from a predicted latency value together with the average power consumption measured above the polling threshold. \begin{figure}[h] \centering \resizebox{.6\linewidth}{!}{ \includegraphics{cvstep.png} } \caption{Test dataset subsets used in the proposed latency/duration cross-validation process.} \label{fig:crossvalidation} \end{figure} \subsection{Duration/Latency Cross-Validation} After all layers in the SNN are partitioned to their corresponding neuro-cores, the SNN is deployed using a user-configurable parameter file that includes the aforementioned duration parameter. Throughout our experiments, we vary this duration parameter to optimize the network's latency and accuracy.
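The energy extrapolation just described can be sketched as follows: fit latency versus duration linearly from runs above the polling threshold, predict latency for a short duration, and bound energy by average power times predicted latency. The numbers below are illustrative placeholders, not our measurements.

```python
import numpy as np

# Durations (ms) above the 30-40 ms sensor polling threshold, with measured
# inference latency (ms) and total power (mW); illustrative values only.
durations = np.array([50.0, 100.0, 150.0, 200.0])
latencies = np.array([5.2, 10.1, 15.3, 20.2])
powers = np.array([62.0, 63.5, 64.1, 63.0])

slope, intercept = np.polyfit(durations, latencies, 1)  # latency ~ a*d + b
avg_power_mw = powers.mean()

def energy_upper_bound_mj(duration_ms):
    """Bound energy as (average power) x (predicted latency); mW*ms = uJ."""
    predicted_latency_ms = slope * duration_ms + intercept
    return avg_power_mw * predicted_latency_ms / 1000.0  # uJ -> mJ

print(energy_upper_bound_mj(20.0))  # a duration below the polling threshold
```

As a sanity check on the units, 63.56 mW at 6.13 ms latency gives 0.39 mJ, consistent with the energies reported later in Table \ref{table:comp}.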
As shown in Figure \ref{fig:accuracy_vs_duration}, the longer the input spike trains are presented to the network, the more accurate inference becomes. However, as can also be seen in Figure \ref{fig:accuracy_vs_duration}, there is an inflection point beyond which the accuracy of all networks plateaus. Increasing the duration above this point causes the latency of the network to increase significantly with little to no gain in accuracy, as seen in the zoomed portion of Figure \ref{fig:accuracy_vs_duration}. Herein, we present a method to minimize the duration/latency while keeping the accuracy within 2.0\% of the maximum accuracy. To optimize the duration without overfitting to a specific test dataset during inference, we use an approach similar to the cross-validation typically employed in DNN training. We first performed inference on our networks for durations ranging from 10 ms to 200 ms in 10 ms intervals using a 2/3 subset of our test dataset. After collecting the durations along with their corresponding accuracies, we searched this accuracy-duration space for an optimal point that minimizes the duration while keeping the accuracy within 2.0\% of the network's maximum SNN accuracy. With this optimal point found, we used the remaining 1/3 subset of the test dataset to verify that reasonable accuracy was still attainable at that duration. We then repeated this process two additional times with two different dataset splitting configurations. Figure \ref{fig:crossvalidation} shows how we split the dataset into the subset used to find the optimal point and the subset used to test it.
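The search over the accuracy-duration space reduces to a one-liner once the sweep has been collected; the sketch below is our own illustration with made-up sweep values, not our measured data.

```python
def optimal_duration(results, tolerance=2.0):
    """results: (duration_ms, accuracy_pct) pairs from the 2/3 search split.
    Return the smallest duration whose accuracy lies within `tolerance`
    percentage points of the best accuracy observed."""
    best_acc = max(acc for _, acc in results)
    candidates = [d for d, acc in results if acc >= best_acc - tolerance]
    return min(candidates)

# Illustrative sweep (10-200 ms in 10 ms steps would be used in practice)
sweep = [(10, 61.2), (20, 90.5), (30, 94.0), (40, 97.9),
         (50, 98.3), (100, 98.5), (200, 98.6)]
print(optimal_duration(sweep))  # -> 40
```

The selected duration is then re-evaluated on the held-out 1/3 split before being accepted, exactly as in cross-validation.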
\subsection{Increasing Sparseness with Edge Detection} Many works \cite{2018_sparse_computation_asnn, 2021_backprop_sparse_reg, 2022_sparse_compressed_snn} have shown that introducing sparseness into SNNs can achieve further reductions in power, latency, and energy. In \cite{2021_igsc_chandarana}, edge detection was shown to significantly reduce the input data and should therefore reduce the power and energy consumption. Thus, to increase the sparseness of the input data, we first perform a preprocessing step that applies OpenCV's \cite{opencv_library} Canny edge detection to the inputs prior to inference. Applying Canny edge detection reduces the input data and thus lowers the spike rate/count, since only the neurons corresponding to edge pixels receive input. As shown in Figure \ref{fig:edgeconversion}, the edge-detected images are much sparser, and binary, compared to the original images. These edge-detected images are then input to the SNN, the same experiments are performed, and the power, latency, and energy are recorded. \begin{figure} \centering \resizebox{.8\linewidth}{!}{ \includegraphics{convert_to_edge.pdf} } \caption{Original images converted to edge images using Canny edge detection.} \label{fig:edgeconversion} \end{figure} \section{Results} \label{sec:results} Here we present our experimental results, comparing the DNN and SNN implementations of our VGG-9 networks on the MNIST, FMNIST, and ASL image classification tasks. We first compare the DNNs to the SNNs in terms of accuracy using both the original images and the edge-detected images. In the case of FMNIST, we have omitted the edge detection results, as reasonable accuracies were not attainable using the same VGG-9 network. We then compare the performance of NCS2 and Loihi in terms of optimal duration/latency, accuracy, power, and energy.
\subsection{Inference Accuracy} \begin{table} \centering \caption{Maximum Accuracy of models trained on regular images vs. edge detected images} \label{table:accuracy} \resizebox{\linewidth}{!}{ \begin{tabular}{clccclccc} \hline \\[-8pt] \multirow{2}{*}{Dataset} & & \multicolumn{3}{c}{Regular Images} & & \multicolumn{3}{c}{Edge Images} \\ \cline{3-5} \cline{7-9} \\[-8pt] & & DNN & C-DNN & SNN & & DNN & C-DNN & SNN \\ \hline \\[-8pt] ASL & & \textbf{99.83} & 99.50 & 99.54 & & \textbf{96.90} & 95.47 & 94.90 \\ MNIST & & 99.28 & 99.38 & \textbf{99.79} & & 98.84 & 98.92 & \textbf{99.04} \\ FMNIST & & \textbf{91.00} & 90.13 & 89.71 & & - & - & - \\ \hline \end{tabular} } \end{table} \begin{table*}[h] \centering \caption{NCS2 vs. Loihi Power and Latency for Original Images and Edge Images Before Duration/Latency Optimization} \label{table:power} \resizebox{\linewidth}{!}{ \begin{tabular}{lccccccccccc} \hline \multicolumn{1}{c}{\multirow{3}{*}{Dataset}} & \multicolumn{3}{c}{NCS2} & \multicolumn{1}{l}{} & \multicolumn{7}{c}{Loihi} \\ \cline{2-4} \cline{6-12} \multicolumn{1}{c}{} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Power \\ (mW)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Latency\\ (ms)\end{tabular}} & \multicolumn{1}{l}{} & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}x86 Cores Power \\ (mW)\end{tabular}} & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}Neuro-Cores Power\\ (mW)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Latency\\ (ms)\end{tabular}} \\ \cline{2-3} \cline{6-11} \multicolumn{1}{c}{} & Idle & Running & & \multicolumn{1}{l}{} & Static & Dynamic & Total & Static & Dynamic & Total & \\ \hline ASL & 635 & 1465 & 2.27 & & 0.136 & 19.05 & 19.19 & 21.54 & 14.00 & 35.54 & 11.56 \\ MNIST & 635 & 1470 & 2.21 & & 0.136 & 19.75 & 19.89 & 21.52 & 22.15 & 43.67 & 10.10 \\ FMNIST & 635 & 1472 & 2.20 & & 0.134 & 19.22 & 19.35 & 21.25 & 26.16 & 47.41 & 12.25 \\ \hline Edge-ASL & 635 & 1438 & 2.50 & & 0.136 & 19.00 & 19.14 & 21.57 & 
9.79 & 31.36 & 10.85 \\ Edge-MNIST & 635 & 1453 & 2.25 & & 0.136 & 19.40 & 19.54 & 21.60 & 12.31 & 33.91 & 11.24 \\ \hline \end{tabular} } \end{table*} In Table \ref{table:accuracy}, the DNNs, with the exception of FMNIST, performed rather well on both the original-image and edge-detected-image classification tasks, achieving accuracies as high as 99.83\% and 98.92\%, respectively. The constrained, or C-DNN, model for MNIST actually achieves better accuracy than the original DNN in both the original and edge-detected cases. The accuracies for FMNIST are the lowest of the three datasets running on our VGG-9 network, reaching 91.00\% on the DNN and 90.13\% on the C-DNN. When comparing the DNNs and C-DNNs to the converted SNNs, Table \ref{table:accuracy} shows that the MNIST SNNs actually outperform both the DNN and C-DNN for both the original images and the edge images, with respective increases in accuracy of 0.41\% and 0.12\%. With the exception of FMNIST, it appears that even with most of the information removed by edge detection, the MNIST and ASL models still achieve reasonable accuracies of 99.04\% and 96.90\%, respectively. \subsection{Power, Latency, and Energy} In Tables \ref{table:power} and \ref{table:comp} we provide the power, latency, and energy that yield the maximum attainable accuracy in our experiments. As seen in Table \ref{table:power}, the power metrics for both the NCS2 and Loihi remain fairly similar across the different dataset experiments. This behavior is expected, as the networks are very similar in architecture, differing only in the output layer. Comparing Loihi and NCS2 in terms of total running power, Table \ref{table:power} shows that Loihi consumes ${\sim}22\times$ less power than NCS2 under load.
While the original-image SNNs already realize significant power reductions, Table \ref{table:power} shows that performing edge detection can further reduce power consumption by as much as ${\sim}44.42\%$ compared to SNNs without edge detection. Thus, the SNNs on Loihi with edge detection are even more efficient, consuming approximately $27\times$ less total power than NCS2's best-case total power consumption. This improvement in power consumption results from Loihi's asynchronous, event-based computation, which consumes additional power only when non-zero stimuli are presented. This is in contrast to DNN accelerators, where MAC operations are performed regardless of input sparsity and thus consume more power. In Table \ref{table:comp} we provide the accuracy, inference power, latency, and inference energy metrics for the optimal durations/latencies. While comparing Tables \ref{table:accuracy} and \ref{table:comp} shows a small decrease in accuracy, $<2.0\%$, due to our latency/duration optimization, Tables \ref{table:power} and \ref{table:comp} show that the latency is reduced by at least ${\sim}9.77\%$ and by up to ${\sim}37.63\%$. Since power measurements are not possible on Loihi below the aforementioned 30-40 ms polling interval of the hardware sensors, we have calculated the energy for these optimized latencies by multiplying the expected total power from Table \ref{table:power} by the optimal latencies. According to Table \ref{table:comp}, Loihi has higher latency than NCS2. However, this may improve with future iterations of Loihi, as \cite{2021_loihi} describes a significant overhead introduced by the x86 and neuro-core communications. Even though the latency is higher on Loihi, Table \ref{table:comp} shows that the energy consumed by Loihi is ${\sim}3.08\times$ to ${\sim}5.00\times$ lower than that of NCS2.
\begin{table}[] \centering \caption{NCS2 vs Loihi - Post-Latency Optimization Comprehensive Analysis} \label{table:comp} \begin{tabular}{clcccc} \hline \multirow{2}{*}{Hardware} & \multicolumn{1}{c}{\multirow{2}{*}{Dataset}} & \multicolumn{4}{c}{Benchmarking Metrics} \\ \cline{3-6} & \multicolumn{1}{c}{} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Power\\ (mW)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Latency\\ (ms)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Energy\\ (mJ)\end{tabular} \\ \hline \multirow{5}{*}{NCS2} & ASL & 99.83 & 830.00 & 2.27 & 1.88 \\ & MNIST & 99.28 & 835.00 & 2.21 & 1.85 \\ & FMNIST & 91.00 & 837.00 & 2.20 & 1.85 \\ \cline{2-6} & Edge-ASL & 96.90 & 803.00 & 2.50 & 2.01 \\ & Edge-MNIST & 98.84 & 818.00 & 2.25 & 1.85 \\ \hline \multirow{5}{*}{Loihi} & ASL & 98.19 & 54.73 & 9.34 & 0.51 \\ & MNIST & 98.55 & 63.56 & 6.13 & 0.39 \\ & FMNIST & 88.25 & 66.76 & 9.03 & 0.60 \\ \cline{2-6} & Edge-ASL & 93.73 & 50.50 & 9.79 & 0.49 \\ & Edge-MNIST & 97.91 & 53.45 & 7.01 & 0.37 \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} Herein, we presented methods to constrain and train deep neural networks for conversion into spiking neural networks. In addition to providing general network design considerations, we proposed two techniques that aim to optimize the latency and power consumption of these SNNs when deployed on neuromorphic hardware. The first technique uses a cross-validation method during inference to decrease latency. The second uses a pre-processing step, edge detection, to add sparsity to the input of the SNN models and further decrease inference power. Our results exhibit that Intel's Loihi neuromorphic processor achieves similar, if not better, accuracy than Intel's Neural Compute Stick 2 DNN accelerator while reducing energy consumption by up to a factor of 5$\times$ when using our proposed energy-efficient deployment strategies.
\section*{Acknowledgment} \noindent This work is partially supported by an ASPIRE grant from the Office of the Vice President for Research at the University of South Carolina. Special thanks to the Intel Neuromorphic Research Community (INRC) for providing access to the Loihi chips for the experiments performed in this paper. \balance \bibliographystyle{IEEEtran}
\section{Introduction} In an earlier paper, we described calculations of the graviton-loop corrections to the energy-momentum tensor of a charged spinless or a spin 1/2 particle of mass $m$ and we focused on the nonanalytic component of such results\cite{gar}. This is because such nonanalytic pieces involve singularities at small momentum transfer $q$ which, when Fourier-transformed, yield---via the Einstein equations---large distance corrections to the metric tensor. In particular, for both a spinless field and for a spin 1/2 field the diagonal components of the metric were shown to be modified from their simple Schwarzschild or Kerr forms---in harmonic gauge \begin{eqnarray} g_{00}&=&1-{2Gm\over r}+{2G^2m^2\over r^2}+{7G^2m\hbar\over \pi r^3}+\ldots\nonumber\\ g_{ij}&=&-\delta_{ij}[1+{2Gm\over r}+{G^2m^2\over r^2} +{14G^2m\hbar\over 15\pi r^3}-{76\over 15}{G^2m\hbar\over \pi r^3 }(1-\log\mu r)]\nonumber\\ &-&{r_ir_j\over r^2}[{G^2m^2\over r^2}+{76G^2m\hbar\over 15\pi r^3} +{76\over 5}{G^2m\hbar\over \pi r^3}(1-\log\mu r)] \end{eqnarray} where $G$ is the gravitational constant. (Note that the dependence on the arbitrary scale factor $\mu$ can be removed by a coordinate transformation.) The classical---$\hbar$-independent---pieces of these modifications are well known and can be found by expanding the familiar Schwarzschild (Kerr) metric, which describes spacetime around a massive (spinning) object\cite{rnm}. On the other hand, the calculation also yields quantum mechanical---$\hbar$-dependent---pieces which are new and whose origin can be understood qualitatively as arising from zitterbewegung\cite{gar}. \begin{figure}[h] \begin{center} \epsfig{file=trialfig2.eps,height=4cm,width=7cm} \caption{Feynman diagrams having nonanalytic components. 
Here the doubly wiggly lines represent gravitons.} \end{center} \end{figure} In the case of a spin 1/2 system there exists, in addition to the above, a nonvanishing {\it off}-diagonal piece of the metric, whose one-loop corrected form, in harmonic gauge, was found to be \begin{equation} g_{0i}=(\vec{S}\times\vec{r})_i\left({2G\over r^3}-{2G^2m\over r^4}+{3G^2\hbar\over \pi r^5} +\ldots\right) \end{equation} Here the classical component of this modification can be found by expanding the Kerr metric\cite{knm}, describing spacetime around a spinning mass, and once again there exist quantum corrections due to zitterbewegung\cite{gar}. Based on the feature that the diagonal components were found to have an identical form for both spin 0 and 1/2, it is tempting to speculate that the leading diagonal piece of the metric about a charged particle has a universal form---independent of spin. Whether the same is true for the leading off-diagonal---spin-dependent---component cannot be determined from a single calculation, but it is reasonable to speculate that this is also the case. However, whether these assertions are generally valid can be found only by further calculation, which is the purpose of the present note, wherein we evaluate the nonanalytic piece of the graviton-loop-corrected energy-momentum tensor for a particle of spin 1 and assess the correctness of our proposal. In the next section, then, we briefly review the results of the previous paper, followed by a discussion wherein the calculations are extended to the spin 1 system. Results are summarized in a brief concluding section. \section{Lightning Review} Since it is important to the remainder of this note, we first present a brief review of the results obtained in our previous paper\cite{gar}.
In the case of spin 0 systems, the general form of the energy-momentum tensor is \begin{equation} <p_2|T_{\mu\nu}(x)|p_1>_{S=0}={e^{i(p_2-p_1)\cdot x}\over \sqrt{4E_2E_1}}\left[2P_\mu P_\nu F_1^{(S=0)}(q^2)+(q_\mu q_\nu-q^2\eta_{\mu\nu})F_2^{(S=0)}(q^2)\right] \end{equation} where $P={1\over 2}(p_1+p_2)$ is the average momentum while $q=p_1-p_2$ is the momentum transfer. The tree level values for these form factors are \begin{equation} F_{1,tree}^{(S=0)}=1\qquad F_{2,tree}^{(S=0)}=-{1\over 2} \end{equation} while the leading nonanalytic loop corrections from Figure 1a and Figure 1b were determined to be \begin{eqnarray} F_{1,loop}^{(S=0)}(q^2)&=& {Gq^2\over \pi}\left(-{3\over 4}L +{1\over 16}S\right) \nonumber\\ F_{2,loop}^{(S=0)}(q^2)&=&{Gm^2\over \pi}\left(-2L+{7\over 8}S\right) \end{eqnarray} where we have defined $$L=\log ({-q^2\over m^2})\quad {\rm and}\quad S=\pi^2\sqrt{m^2\over -q^2}.$$ Such pieces, which are singular in the small-q limit, come about due to the presence of two massless propagators in the Feynman diagrams\cite{doh} and can arise even in electromagnetic diagrams when this situation is present\cite{emf}. Upon Fourier-transforming, the component proportional to $S$ is found to yield classical ($\hbar$-independent) behavior while the term involving $L$ yields quantum mechanical ($\hbar$-dependent) corrections. The feature that the form factor $F_1^{(S=0)}(q^2=0)$ remains unity even when graviton loop corrections are included arises from the stricture of energy-momentum conservation\cite{gar}. There exists no restriction on $F_2^{(S=0)}(q^2=0)$.
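For reference, the coordinate-space behavior of these nonanalytic terms follows from the standard Fourier transforms (dropping purely analytic pieces, which transform to delta functions at the origin)
\begin{eqnarray}
\int{d^3q\over (2\pi)^3}e^{i\vec{q}\cdot\vec{r}}\,{1\over \vec{q}^2}&=&{1\over 4\pi r}\nonumber\\
\int{d^3q\over (2\pi)^3}e^{i\vec{q}\cdot\vec{r}}\,{1\over |\vec{q}|}&=&{1\over 2\pi^2r^2}\nonumber\\
\int{d^3q\over (2\pi)^3}e^{i\vec{q}\cdot\vec{r}}\,\log\vec{q}^2&=&-{1\over 2\pi r^3}
\end{eqnarray}
so that the $S$ terms, scaling as $1/|\vec{q}|$, generate the classical $1/r^2$ corrections to the metric, while the $L$ terms generate the quantum $1/r^3$ corrections.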
In the case of spin 1/2 there exists an additional form factor---$F_3^{(S={1\over 2})}(q^2)$---associated with the presence of spin--- \begin{eqnarray} <p_2|T_{\mu\nu}(x)|p_1>_{S={1\over 2}}&=&{e^{i(p_2-p_1)\cdot x}\over \sqrt{E_1E_2}} \bar{u}(p_2)\left[P_\mu P_\nu F_1^{(S={1\over 2})}(q^2)\right.\nonumber\\ &+&\left.{1\over 2}(q_\mu q_\nu -q^2\eta_{\mu\nu})F_2^{(S={1\over 2})}(q^2)\right.\nonumber\\ &-&\left.\left({i\over 4}\sigma_{\mu\lambda}q^\lambda P_\nu+{i\over 4} \sigma_{\nu\lambda}q^\lambda P_\mu\right)F_3^{(S={1\over 2})}(q^2)\right]u(p_1) \end{eqnarray} In this case, the tree level values for these form factors are \begin{equation} F_{1,tree}^{(S={1\over 2})}=F_{2,tree}^{(S={1\over 2})}=1\qquad F_{3,tree}^{(S={1\over 2})}=0 \end{equation} while the nonanalytic loop corrections from Figure 1a and Figure 1b were determined to be \begin{eqnarray} F_{1,loop}^{(S={1\over 2})}(q^2)&=&{Gq^2\over \pi}(-{3\over 4}L +{1\over 16}S) \nonumber\\ F_{2,loop}^{(S={1\over 2})}(q^2)&=&{Gm^2\over \pi}(-2L+{7\over 8}S)\nonumber\\ F_{3,loop}^{(S={1\over 2})}(q^2)&=&{Gq^2\over \pi}({1\over 4}L+{1\over 4}S) \end{eqnarray} In this case both $F_1^{(S={1\over 2})}(q^2=0)$ {\it and} $F_3^{(S={1\over 2})}(q^2=0)$ retain their value of unity even in the presence of graviton loop corrections. That this must be true for $F_1^{(S={1\over 2})}(q^2=0)$ follows from energy-momentum conservation, as before, while the nonrenormalization of $F_3^{(S={1\over 2})}(q^2=0)$ is required by angular-momentum conservation\cite{gar}. An interesting consequence is that there {\it cannot} exist an anomalous gravitomagnetic moment. The universality of these radiative corrections is suggested by the results \begin{equation} F_{1,loop}^{(S=0)}(q^2)=F_{1,loop}^{(S={1\over 2})}(q^2)\quad{\rm and} \quad F_{2,loop}^{(S=0)}(q^2)=F_{2,loop}^{(S={1\over 2})}(q^2) \end{equation} but, of course, the spin-dependent gravitomagnetic form factor $F_3^{(S={1\over 2})}(q^2)$ has no analog in the spin 0 sector.
The connection with the metric tensor described in the introduction arises when these results for the energy-momentum tensor are combined with the (linearized) Einstein equation\cite{eeq} \begin{equation} \Box h_{\mu\nu}=-16\pi G\left(T_{\mu\nu}-{1\over 2} \eta_{\mu\nu} T\right) \end{equation} where we have defined \begin{equation} g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu} \end{equation} and \begin{equation} T\equiv {\rm Tr}\,T_{\mu\nu} \end{equation} Taking Fourier transforms, we find---for both spin 0 and spin 1/2---the diagonal components\footnote{Here the r-dependent corrections proportional to $\hbar$ arise from the graviton vacuum polarization correction, while those independent of $\hbar$ arise from corrections to the linear Einstein equation\cite{gar}.} \begin{eqnarray} h_{00}(\vec{r})&=&-16\pi G\int{d^3k\over (2\pi)^3}e^{i\vec{k}\cdot\vec{r}}{1\over \vec{k}^2}\left({m\over 2}- {Gm^2\pi|\vec{k}|\over 4}+{7Gm\vec{k}^2\over 8\pi}\log{\vec{k}^2\over m^2}\right) -{43G^2m\hbar\over 15\pi r^3}\nonumber\\ h_{ij}(\vec{r})&=&-16\pi G\int{d^3k\over (2\pi)^3}e^{i\vec{k}\cdot\vec{r}}{1\over \vec{k}^2}\left[{m\over 2} \delta_{ij}-\delta_{ij}\left({Gm^2\pi|\vec{k}|\over 32} -{3Gm\vec{k}^2\over 8\pi}\log{\vec{k}^2\over m^2}\right)\right.\nonumber\\ &+&\left.\left(k_ik_j+{1\over 2}\vec{k}^2\delta_{ij}\right)\left({7Gm^2\pi\over 16|\vec{k}|}-{Gm\over \pi}\log\vec{k}^2\right)\right]\nonumber\\ &+&4G^2m^2\left({\delta_{ij}\over r^2}-2{r_ir_j\over r^4}\right) +{G^2m\hbar\over 15\pi r^3}(\delta_{ij}+44{r_ir_j\over r^2})\nonumber\\ &-& {44G^2m\hbar\over 15\pi r^3}(\delta_{ij}-3{r_ir_j\over r^2})(1-\log\mu r) \end{eqnarray} while in the case of the spin 1/2 gravitomagnetic form factor we find the off-diagonal term \begin{eqnarray} h_{0i}(\vec{r})&=&-16\pi G{i\over 2}\int{d^3k\over (2\pi)^3}e^{i\vec{k}\cdot\vec{r}}{1\over \vec{k}^2} \left(1-{Gm\pi|\vec{k}|\over 4}-{G\vec{k}^2\over 4\pi} \log{\vec{k}^2\over m^2}\right)(\vec{S}\times\vec{k})_i\nonumber\\ &+&{21G^2\hbar\over 5\pi r^5}(\vec{S}\times\vec{r})_i
\end{eqnarray} Evaluating the various Fourier transforms, we find the results quoted in the introduction\cite{fta}. The purpose of the present note is to study how these results generalize to the case of higher spin. Specifically, we shall below examine the graviton-loop corrections to the energy-momentum tensor of a massive spin 1 system. \section{Spin 1} A neutral spin 1 field $\phi_\mu(x)$ having mass $m$ is described by the Proca Lagrangian\cite{pro} \begin{equation} {\cal L}(x)=-{1\over 4}U_{\mu\nu}(x)U^{\mu\nu}(x)+{1\over 2}m^2\phi_\mu(x)\phi^\mu(x)\label{eqn:la} \end{equation} where \begin{equation} U_{\mu\nu}(x)=i\partial_\mu \phi_\nu(x)-i\partial_\nu\phi_\mu(x) \end{equation} is the spin 1 field tensor. Having the Lagrangian for the interactions of a spin-1 system, we can calculate the matrix elements which will be required for our calculation. Specifically, the general single graviton vertex for a transition involving an outgoing graviton with polarization indices $\mu\nu$ and four-momentum $q=p_1-p_2$, an incoming spin one particle with polarization index $\alpha$ and four-momentum $p_1$ together with an outgoing spin one particle with polarization index $\beta$ and four-momentum $p_2$ is \begin{eqnarray} V^{(1)}_{\beta,\alpha,\mu\nu}(p_1,p_2)&=&i{\kappa\over 2}\left\{(p_{1\mu}p_{2\nu}+p_{1\nu}p_{2\mu})\eta_{\alpha\beta}+\eta_{\mu\nu}p_{1\beta}p_{2\alpha} \right.\nonumber\\ &-&\left.p_{1\beta}(p_{2\mu}\eta_{\nu\alpha}+p_{2\nu}\eta_{\alpha\mu}) -p_{2\alpha}(p_{1\mu}\eta_{\nu\beta}+p_{1\nu}\eta_{\beta\mu})\right.\nonumber\\ &+&\left.(p_1\cdot p_2-m^2)(\eta_{\mu\alpha}\eta_{\nu\beta}+ \eta_{\mu\beta}\eta_{\nu\alpha}-\eta_{\mu\nu}\eta_{\alpha\beta})\right\}\label{eqn:to} \end{eqnarray} where $\kappa=\sqrt{32\pi G}$ is the gravitational coupling, while the two-graviton vertex with polarization indices $\mu\nu$ and $\rho\sigma$, an incoming spin one particle with polarization index $\alpha$ and four-momentum $p_1$ together with an outgoing spin one particle
with polarization index $\beta$ and four-momentum $p_2$ has the form \begin{eqnarray} V^{(2)}_{\beta,\alpha,\mu\nu,\rho\sigma}(p_1,p_2)&=&-i{\kappa^2\over 4}\left\{[p_{1\beta}p_{2\alpha} - \eta_{\alpha\beta}(p_1\cdot p_2 - m^2)] (\eta_{\mu\rho}\eta_{\nu\sigma}+ \eta_{\mu\sigma}\eta_{\nu\rho} - \eta_{\mu\nu}\eta_{\rho\sigma})\right.\nonumber\\ &+&\left. \eta_{\mu\rho}[\eta_{\alpha\beta}(p_{1\nu}p_{2\sigma} + p_{1\sigma}p_{2\nu}) - \eta_{\alpha\nu}p_{1\beta}p_{2\sigma}- \eta_{\beta\nu}p_{1\sigma}p_{2\alpha}\right.\nonumber\\ &-&\left. \eta_{\beta\sigma}p_{1\nu}p_{2\alpha} - \eta_{\alpha\sigma}p_{1\beta} p_{2\nu} + (p_1\cdot p_2 - m^2)(\eta_{\alpha\nu}\eta_{\beta\sigma} + \eta_{\alpha\sigma}\eta_{\beta\nu})]\right.\nonumber\\ &+&\left. \eta_{\mu\sigma}[\eta_{\alpha\beta}(p_{1\nu}p_{2\rho} + p_{1\rho}p_{2\nu}) - \eta_{\alpha\nu}p_{1\beta}p_{2\rho} - \eta_{\beta\nu}p_{1\rho}p_{2\alpha}\right.\nonumber\\ &-&\left. \eta_{\beta\rho}p_{1\nu}p_{2\alpha}- \eta_{\alpha\rho}p_{1\beta} p_{2\nu} + (p_1\cdot p_2 - m^2)(\eta_{\alpha\nu}\eta_{\beta\rho} + \eta_{\alpha\rho}\eta_{\beta\nu})]\right.\nonumber\\ &+&\left. \eta_{\nu\rho}[\eta_{\alpha\beta}(p_{1\mu}p_{2\sigma} + p_{1\sigma}p_{2\mu}) -\eta_{\alpha\mu}p_{1\beta}p_{2\sigma} - \eta_{\beta\mu}p_{1\sigma}p_{2\alpha}\right.\nonumber\\ &-&\left.\eta_{\beta\sigma}p_{1\mu}p_{2\alpha} -\eta_{\alpha\sigma}p_{1\beta} p_{2\mu} + (p_1\cdot p_2 - m^2)(\eta_{\alpha\mu}\eta_{\beta\sigma} + \eta_{\alpha\sigma}\eta_{\beta\mu})]\right.\nonumber\\ &+&\left. \eta_{\nu\sigma}[\eta_{\alpha\beta}(p_{1\mu}p_{2\rho} + p_{1\rho}p_{2\mu}) - \eta_{\alpha\mu}p_{1\beta}p_{2\rho} - \eta_{\beta\mu}p_{1\rho}p_{2\alpha}\right.\nonumber\\ &-&\left.\eta_{\beta\rho}p_{1\mu}p_{2\alpha}-\eta_{\alpha\rho}p_{1\beta} p_{2\mu} + (p_1\cdot p_2 - m^2)(\eta_{\alpha\mu}\eta_{\beta\rho} + \eta_{\alpha\rho}\eta_{\beta\mu})]\right.\nonumber\\ &-&\left.
\eta_{\mu\nu}[\eta_{\alpha\beta}(p_{1\rho}p_{2\sigma} + p_{1\sigma}p_{2\rho}) - \eta_{\alpha\rho}p_{1\beta}p_{2\sigma} - \eta_{\beta\rho}p_{1\sigma}p_{2\alpha}\right.\nonumber\\ &-&\left.\eta_{\beta\sigma}p_{1\rho}p_{2\alpha}- \eta_{\alpha\sigma}p_{1\beta}p_{2\rho} + (p_1\cdot p_2 - m^2)(\eta_{\alpha\rho}\eta_{\beta\sigma} + \eta_{\beta\rho}\eta_{\alpha\sigma})]\right.\nonumber\\ &-&\left. \eta_{\rho\sigma}[\eta_{\alpha\beta}(p_{1\mu}p_{2\nu} + p_{1\nu}p_{2\mu}) - \eta_{\alpha\mu}p_{1\beta}p_{2\nu} - \eta_{\beta\mu}p_{1\nu}p_{2\alpha}\right.\nonumber\\ &-&\left. \eta_{\beta\nu}p_{1\mu}p_{2\alpha} - \eta_{\alpha\nu}p_{1\beta} p_{2\mu} + (p_1\cdot p_2 - m^2)(\eta_{\alpha\mu}\eta_{\beta\nu} + \eta_{\beta\mu}\eta_{\alpha\nu})]\right.\nonumber\\ &+&\left. (\eta_{\alpha\rho}p_{1\mu} - \eta_{\alpha\mu}p_{1\rho})(\eta_{\beta\sigma} p_{2\nu} - \eta_{\beta\nu}p_{2\sigma})\right.\nonumber\\ &+&\left. (\eta_{\alpha\sigma}p_{1\nu} - \eta_{\alpha\nu}p_{1\sigma})(\eta_{\beta\rho} p_{2\mu} - \eta_{\beta\mu}p_{2\rho})\right.\nonumber\\ &+&\left. (\eta_{\alpha\sigma}p_{1\mu} - \eta_{\alpha\mu}p_{1\sigma})(\eta_{\beta\rho} p_{2\nu} - \eta_{\beta\nu}p_{2\rho})\right.\nonumber\\ &+&\left.
(\eta_{\alpha\rho}p_{1\nu} - \eta_{\alpha\nu}p_{1\rho})(\eta_{\beta\sigma} p_{2\mu} - \eta_{\beta\mu}p_{2\sigma})\right\} \end{eqnarray} The triple graviton vertex function is given by\cite{don} \begin{eqnarray} \tau^{\mu\nu}_{\alpha\beta,\gamma\delta}(k,q)&=&{i\kappa\over 2}\left\{ P_{\alpha\beta,\gamma\delta} \left[k^\mu k^\nu+(k-q)^\mu (k-q)^\nu+q^\mu q^\nu-{3\over 2}\eta^{\mu\nu}q^2\right]\right.\nonumber\\ &+&\left.2q_\lambda q_\sigma\left[I^{\lambda\sigma,}{}_{\alpha\beta}I^{\mu\nu,} {}_{\gamma\delta}+I^{\lambda\sigma,}{}_{\gamma\delta}I^{\mu\nu,} {}_{\alpha\beta}-I^{\lambda\mu,}{}_{\alpha\beta}I^{\sigma\nu,} {}_{\gamma\delta}-I^{\sigma\nu,}{}_{\alpha\beta}I^{\lambda\mu,} {}_{\gamma\delta}\right]\right.\nonumber\\ &+&\left.[q_\lambda q^\mu(\eta_{\alpha\beta}I^{\lambda\nu,}{}_{\gamma\delta} +\eta_{\gamma\delta}I^{\lambda\nu,}{}_{\alpha\beta})+ q_\lambda q^\nu(\eta_{\alpha\beta}I^{\lambda\mu,}{}_{\gamma\delta} +\eta_{\gamma\delta}I^{\lambda\mu,}{}_{\alpha\beta})\right.\nonumber\\ &-&\left.q^2(\eta_{\alpha\beta}I^{\mu\nu,}{}_{\gamma\delta}+\eta_{\gamma\delta} I^{\mu\nu,}{}_{\alpha\beta})-\eta^{\mu\nu}q^\lambda q^\sigma(\eta_{\alpha\beta} I_{\gamma\delta,\lambda\sigma}+\eta_{\gamma\delta} I_{\alpha\beta,\lambda\sigma})]\right.\nonumber\\ &+&\left.[2q^\lambda(I^{\sigma\nu,}{}_{\alpha\beta} I_{\gamma\delta,\lambda\sigma}(k-q)^\mu +I^{\sigma\mu,}{}_{\alpha\beta}I_{\gamma\delta,\lambda\sigma}(k-q)^\nu\right.\nonumber\\ &-&\left.I^{\sigma\nu,}{}_{\gamma\delta}I_{\alpha\beta,\lambda\sigma}k^\mu- I^{\sigma\mu,}{}_{\gamma\delta}I_{\alpha\beta,\lambda\sigma}k^\nu)\right.\nonumber\\ &+&\left.q^2(I^{\sigma\mu,}{}_{\alpha\beta}I_{\gamma\delta,\sigma}{}^\nu+ I_{\alpha\beta,\sigma}{}^\nu I^{\sigma\mu,}{}_{\gamma\delta})+\eta^{\mu\nu}q^\lambda q_\sigma (I_{\alpha\beta,\lambda\rho}I^{\rho\sigma,}{}_{\gamma\delta}+ I_{\gamma\delta,\lambda\rho}I^{\rho\sigma,}{}_{\alpha\beta})]\right.\nonumber\\ 
&+&\left.[(k^2+(k-q)^2)\left(I^{\sigma\mu,}{}_{\alpha\beta}I_{\gamma\delta,\sigma}{}^\nu +I^{\sigma\nu,}{}_{\alpha\beta}I_{\gamma\delta,\sigma}{}^\mu-{1\over 2}\eta^{\mu\nu}P_{\alpha\beta,\gamma\delta}\right)\right.\nonumber\\ &-&\left.(k^2\eta_{\gamma\delta}I^{\mu\nu,}{}_{\alpha\beta}+(k-q)^2\eta_{\alpha\beta} I^{\mu\nu,}{}_{\gamma\delta})]\right\} \end{eqnarray} where we have defined \begin{equation} I_{\alpha\beta,\mu\nu}={1\over 2}(\eta_{\alpha\mu}\eta_{\beta\nu}+\eta_{\alpha\nu}\eta_{\beta\mu}) \end{equation} and \begin{equation} P_{\alpha\beta,\mu\nu}=I_{\alpha\beta,\mu\nu}-{1\over 2}\eta_{\alpha\beta}\eta_{\mu\nu} \end{equation} The final ingredient which we need is the harmonic gauge graviton propagator \begin{equation} D_{\alpha\beta,\mu\nu}(q)={i\over q^2+i\epsilon}P_{\alpha\beta,\mu\nu} \end{equation} The leading component of the on-shell energy-momentum tensor between charged vector meson states is then found, from Eq. \ref{eqn:to}, to be \begin{eqnarray} <k_2,\epsilon_B|T_{\mu\nu}^{(0)}|k_1,\epsilon_A>&=&(k_{1\mu}k_{2\nu}+k_{1\nu}k_{2\mu}) \epsilon_B^*\cdot\epsilon_A\nonumber\\ &-&k_1\cdot\epsilon_B^*(k_{2\mu}\epsilon_{A\nu}+k_{2\nu}\epsilon_{A\mu})\nonumber\\ &-&k_2\cdot\epsilon_A(k_{1\nu}\epsilon_{B\mu}^*+k_{1\mu}\epsilon_{B\nu}^*)\nonumber\\ &+&(k_1\cdot k_2-m^2)(\epsilon_{B\mu}^*\epsilon_{A\nu}+\epsilon_{B\nu}^*\epsilon_{A\mu})\nonumber\\ &-&\eta_{\mu\nu}[(k_1\cdot k_2-m^2)\epsilon_B^*\cdot\epsilon_A-k_1\cdot\epsilon_B^*k_2\cdot\epsilon_A]\label{eqn:tm} \end{eqnarray} and the focus of our calculation is to evaluate the graviton loop corrections to Eq. \ref{eqn:tm}, via the diagrams shown in Figure 1 and keeping only the leading nonanalytic terms, details of which are described in the appendix.
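Since the leading-order matrix element $T^{(0)}_{\mu\nu}$ just given must be conserved on shell, its transversality can be checked numerically. The following sketch is our own illustrative check, not part of the paper's calculation: it builds random on-shell momenta and polarizations (with $\epsilon\cdot k=0$, taking real polarizations) and verifies $q^\mu T^{(0)}_{\mu\nu}=0$ to machine precision.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
rng = np.random.default_rng(1)
m = 1.0

def dot(u, v):
    return u @ eta @ v

def onshell(p3):
    # four-momentum (E, p) with E = sqrt(m^2 + |p|^2)
    return np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))

def polarization(k):
    # random vector projected so that eps.k = 0 (using k.k = m^2)
    v = rng.normal(size=4)
    return v - (dot(v, k) / m**2) * k

k1, k2 = onshell(rng.normal(size=3)), onshell(rng.normal(size=3))
eA, eB = polarization(k1), polarization(k2)
q = k1 - k2
D = dot(k1, k2) - m**2

# leading-order matrix element T^(0) (contravariant components)
T = ((np.outer(k1, k2) + np.outer(k2, k1)) * dot(eB, eA)
     - dot(k1, eB) * (np.outer(k2, eA) + np.outer(eA, k2))
     - dot(k2, eA) * (np.outer(k1, eB) + np.outer(eB, k1))
     + D * (np.outer(eB, eA) + np.outer(eA, eB))
     - eta * (D * dot(eB, eA) - dot(k1, eB) * dot(k2, eA)))

residual = (eta @ q) @ T                  # q_mu T^{mu nu}
print(np.abs(residual).max())             # zero to machine precision
```

The residual vanishes identically once the on-shell conditions $k_i^2=m^2$ and $\epsilon\cdot k=0$ are imposed, consistent with the gauge-invariance condition discussed next.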
Note that due to conservation of the energy-momentum tensor---$\partial^\mu T_{\mu\nu}=0$---the on-shell matrix element must satisfy the gauge invariance condition $$q^\nu<k_2,\epsilon_B|T_{\mu\nu}|k_1,\epsilon_A>=0$$ In our case, the leading order contribution satisfies this condition \begin{equation} q^\mu<k_2,\epsilon_B|T_{\mu\nu}^{(0)}|k_1,\epsilon_A>=0 \end{equation} and, in addition, the contributions of both diagrams 1a or 1b are independently gauge-invariant \begin{equation} q^\mu Amp[a]_{\mu\nu}=q^\mu Amp[b]_{\mu\nu}=0 \end{equation} and these strictures serve as an important check on our result. Because of this gauge invariance condition, the results of these calculations are most efficiently expressed in terms of spin 1 form factors. Indeed, due to covariance and gauge invariance the form of the matrix element of $T_{\mu\nu}$ between on-shell spin 1 states must be expressible in the form \begin{eqnarray} &&<p_2,\epsilon_B|T_{\mu\nu}(x)|p_1,\epsilon_A>=-{e^{i(p_2-p_1)\cdot x}\over \sqrt{4E_1E_2}}[2P_\mu P_\nu \epsilon_B^*\cdot \epsilon_AF_1^{(S=1)}(q^2)\nonumber\\ &+&(q_\mu q_\nu-\eta_{\mu\nu}q^2) \epsilon_B^*\cdot\epsilon_AF_2^{(S=1)}(q^2)\nonumber\\ &+&[P_\mu(\epsilon_{B\nu}^* \epsilon_A\cdot q-\epsilon_{A\nu} \epsilon_B^*\cdot q)+P_\nu(\epsilon_{B\mu}^* \epsilon_A\cdot q-\epsilon_{A\mu} \epsilon_B^*\cdot q)]F_3^{(S=1)}(q^2)\nonumber\\ &+&\left[(\epsilon_{A\mu} \epsilon_{B\nu}^*+\epsilon_{B\mu}^*\epsilon_{A\nu})q^2-(\epsilon_{B\mu}^* q_\nu+\epsilon_{B\nu}^* q_\mu)\epsilon_A\cdot q\right.\nonumber\\ &+&\left.(\epsilon_{A\mu} q_\nu+\epsilon_{A\nu} q_\mu)\epsilon_B^*\cdot q+2\eta_{\mu\nu}\epsilon_A\cdot q \epsilon_B^*\cdot q\right]F_4^{(S=1)}(q^2)\nonumber\\ &+&{2\over m^2}P_\mu P_\nu \epsilon_A\cdot q \epsilon_B^*\cdot q F_5^{(S=1)}(q^2)\nonumber\\ &+&{1\over m^2}(q_\mu q_\nu-\eta_{\mu\nu}q^2)\epsilon_B^*\cdot q\epsilon_A\cdot qF_6^{(S=1)}(q^2)] \end{eqnarray} Using the feature that in the Breit frame for a nonrelativistic particle the spin operator can 
be defined via \begin{equation} i(\hat{\epsilon}_B^*\times\hat{\epsilon}_A)_k=<1,m_f|S_k|1,m_i> \end{equation} we observe that $F_1^{(S=1)}(q^2),F_2^{(S=1)}(q^2),F_3^{(S=1)}(q^2)$ correspond exactly to their spin 1/2 counterparts while $F_4^{(S=1)}(q^2),F_5^{(S=1)}(q^2),F_6^{(S=1)}(q^2)$ represent new forms unique to spin 1. In terms of these definitions, the tree level predictions can be described as \begin{eqnarray} F_{1,tree}^{(S=1)}&=&F_{3,tree}^{(S=1)}=1\nonumber\\ F_{2,tree}^{(S=1)}&=&F_{4,tree}^{(S=1)}=-{1\over 2}\nonumber\\ F_{5,tree}^{(S=1)}&=&F_{6,tree}^{(S=1)}=0 \end{eqnarray} while the results of the one loop calculation can be expressed as \begin{itemize} \item [a)] Seagull loop diagram (Figure 1a) \begin{eqnarray} F_{1,loop\, a}^{(S=1)}(q^2)&=&{GLq^2\over \pi}(0+3-1-{1\over 2})={3\over 2}{GLq^2\over \pi}\nonumber\\ F_{2,loop\, a}^{(S=1)}(q^2)&=&{GLm^2\over \pi}(-5+2-2+4)=-{GLm^2\over \pi}\nonumber\\ F_{3,loop\, a}^{(S=1)}(q^2)&=&{GLq^2\over \pi}(0+{3\over 2}-1-{1\over 2})=0\nonumber\\ F_{4,loop\, a}^{(S=1)}(q^2)&=&{GLm^2\over \pi}(0+1-1+{3\over 2})={3\over 2}{GLm^2\over \pi}\nonumber\\ F_{5,loop\, a}^{(S=1)}(q^2)&=&{GLm^2\over \pi}(0-3+0+0)=-3{GLm^2\over \pi}\nonumber\\ F_{6,loop\, a}^{(S=1)}(q^2)&=&{GLm^2\over \pi}(-5-{1\over 2}+0+3)=-{5\over 2}{GLm^2\over \pi} \end{eqnarray} \item [b)] Born loop diagram (Figure 1b) \begin{eqnarray} F_{1,loop\, b}^{(S=1)}(q^2)&=&{Gq^2\over \pi}[L({1\over 4}-3+2-{3\over 2})+S({1\over 16}-1+1+0)]={Gq^2\over \pi}({1\over 16}S-{9\over 4}L)\nonumber\\ F_{2,loop\, b}^{(S=1)}(q^2)&=&{Gm^2\over \pi}[S({7\over 8}-1+2-1)+L(1-3+4-3)]={Gm^2\over \pi}({7\over 8}S-L)\nonumber\\ F_{3,loop\, b}^{(S=1)}(q^2)&=&{Gq^2\over \pi}[S(0-{1\over 2}+{1\over 2}+{1\over 4})+L({1\over 6}-{5\over 4}+{3\over 4}+{7\over 12})]= {Gq^2\over \pi}({1\over 4}S+{1\over 4}L)\nonumber\\ F_{4,loop\, b}^{(S=1)}(q^2)&=&{GLm^2\over \pi}(0-1+1-{3\over 2})+{Gq^2\over \pi}\left[L(-{17\over 8}+{3\over 8}-{1\over 2} +{7\over 8})\right.\nonumber\\
&+&\left.S(-{41\over 128}+{3\over 16}-{1\over 4}+{1\over 16})\right]= -{3\over 2}{GLm^2\over \pi}-{Gq^2\over \pi}({11\over 8}L+{41\over 128}S)\nonumber\\ F_{5,loop\, b}^{(S=1)}(q^2)&=&{GLm^2\over \pi}(0+3+0+0) +{Gq^2\over \pi} \left[S({5\over 128}+{3\over 16}+0-{3\over 16})\right.\nonumber\\ &+&\left.L(0+{3\over 4}+0-{1\over 2})\right]=3{GLm^2\over \pi}+{Gq^2\over \pi}({5\over 128}S+{1\over 4}L)\nonumber\\ F_{6,loop\, b}^{(S=1)}(q^2)&=&{Gm^2\over \pi}\left[S({43\over 64}-{1\over 8}+{1\over 4}-{1\over 8})+L({13\over 3}+{1\over 2}+{1\over 2} -{7\over 3})\right]\nonumber\\ &=&{Gm^2\over \pi}(3L+{43\over 64}S) \end{eqnarray} \end{itemize} where we have divided each contribution into the pieces which arise from the first four bracketed pieces of the triple graviton vertex above.\footnote{There exists no contribution to the nonanalytic terms from the pieces in the fifth bracket since the intermediate gravitons are required to be on-shell.} The full results of this calculation can then be described via: \begin{eqnarray} F_1^{(S=1)}(q^2)&=&1+{Gq^2\over \pi}(-{3\over 4}L+{1\over 16}S)+\ldots\nonumber\\ F_2^{(S=1)}(q^2)&=&-{1\over 2}+{Gm^2\over \pi}(-2L+{7\over 8}S)+\ldots\nonumber\\ F_3^{(S=1)}(q^2)&=&1+{Gq^2\over \pi}({1\over 4}L+{1\over 4}S)+\ldots\nonumber\\ F_4^{(S=1)}(q^2)&=&-{1\over 2}+{Gq^2\over \pi}({11\over 8}L+{41\over 128}S)+\ldots\nonumber\\ F_5^{(S=1)}(q^2)&=&{Gq^2\over \pi}({1\over 4}L+{5\over 128}S)+\ldots\nonumber\\ F_6^{(S=1)}(q^2)&=&{Gm^2\over \pi}({1\over 4}L+{43\over 128}S)+\ldots \end{eqnarray} and we note that $F_{1,2,3,loop}^{(S=1)}(q^2)$ as found for unit spin agree precisely with the forms $F_{1,2,3,loop}^{(S={1\over 2})}(q^2)$ determined previously for spin 1/2 and with $F_{1,2,loop}^{(S=0)}(q^2)$ in the spinless case.
It is also interesting that the loop contributions to the ``new'' form factors $F_{4,loop}^{(S=1)}(q^2),F_{5,loop}^{(S=1)}(q^2)$, which have no lower spin analog, vanish to order $q^0$ even though there exist nonzero contributions from both loop diagrams individually. Of course, the nonrenormalization of $F_1^{(S=1)}(q^2=0)$ and $F_3^{(S=1)}(q^2=0)$ required by energy-momentum and angular momentum conservation is obtained, meaning that, as noted above, there exists no anomalous gravitomagnetic moment. However, there is a new feature here that deserves notice. Working in the Breit frame and assuming nonrelativistic motion, we have the kinematic constraints \begin{eqnarray} \epsilon_A^0\simeq{1\over 2m}\hat{\epsilon}_A\cdot\vec{q},\qquad \epsilon_B^0\simeq -{1\over 2m}\hat{\epsilon}_B^*\cdot\vec{q}\nonumber\\ \epsilon_B^*\cdot\epsilon_A\simeq -\hat{\epsilon}_B^*\cdot\hat{\epsilon}_A-{1\over 2m^2}\hat{\epsilon}_B^*\cdot\vec{q}\hat{\epsilon}_A\cdot\vec{q} \end{eqnarray} from which we find that \begin{eqnarray} &&<p_2,\epsilon_B|T_{00}(0)|p_1,\epsilon_A>\simeq m\left\{\hat{\epsilon}_B^*\cdot\hat{\epsilon}_AF_1^{(S=1)}(q^2) +{1\over 2m^2}\hat{\epsilon}_B^*\cdot\vec{q}\hat{\epsilon}_A\cdot\vec{q}\right.\nonumber\\ &\times&\left.
[F_1^{(S=1)}(q^2)-F_2^{(S=1)}(q^2)-2(F_4^{(S=1)}(q^2)+F_5^{(S=1)}(q^2) -{q^2\over 2 m^2}F_6^{(S=1)}(q^2))] \right\}+\ldots\nonumber\\ &&<p_2,\epsilon_B|T_{0i}(0)|p_1,\epsilon_A>\simeq -{1\over 2}[(\hat{\epsilon}_B^*\times\hat{\epsilon}_A)\times\vec{q}]_iF_3^{(S=1)}(q^2)+\ldots \end{eqnarray} Then using the connections \begin{eqnarray} i\hat{\epsilon}_B^*\times\hat{\epsilon}_A&=&<1,m_f|\vec{S}|1,m_i>\nonumber\\ {1\over 2}(\epsilon_{Bi}^*\epsilon_{Aj}+\epsilon_{Ai}\epsilon_{Bj}^*)-{1\over 3}\delta_{ij}\hat{\epsilon}_B^*\cdot\hat{\epsilon}_A&=&<1,m_f|{1\over 2}(S_iS_j+S_jS_i)-{2\over 3}\delta_{ij}|1,m_i>\nonumber\\ \quad \end{eqnarray} between the Proca polarization vectors and the spin operator $\vec{S}$ we can identify values for the gravitoelectric monopole, gravitomagnetic dipole, and gravitoelectric quadrupole coupling constants \begin{eqnarray} K_{E0}&=&mF_1^{(S=1)}(q^2=0)\nonumber\\ K_{M1}&=&{1\over 2}F_3^{(S=1)}(q^2=0)\nonumber\\ K_{E2}&=&{1\over 2m}\left[F_1^{(S=1)}(q^2=0)-F_3^{(S=1)}(q^2=0)-2F_4^{(S=1)}(q^2=0)-2F_5^{(S=1)}(q^2=0)\right]\nonumber\\ \end{eqnarray} Taking $Q_g\equiv m$ as the gravitational ``charge,'' we observe that the tree level values--- \begin{equation} K_{E0}=Q_g\qquad K_{M1}={Q_g\over 2m}\qquad K_{E2}={Q_g\over m^2} \end{equation} are {\it unrenormalized} by loop corrections. That is to say, not only does there not exist any anomalous gravitomagnetic moment, as mentioned above, but also there is no anomalous gravitoelectric quadrupole moment. \section{Conclusion} Above we have calculated the graviton loop corrections to the energy-momentum tensor of a spin 1 system.
We have confirmed the universality conjectured in our previous work by verifying that \begin{eqnarray} F_{1,loop}^{(S=0)}(q^2)&=&F_{1,loop}^{(S={1\over 2})}(q^2)=F_{1,loop}^{(S=1)}(q^2)\nonumber\\ F_{2,loop}^{(S=0)}(q^2)&=&F_{2,loop}^{(S={1\over 2})}(q^2)=F_{2,loop}^{(S=1)}(q^2)\nonumber\\ F_{3,loop}^{(S={1\over 2})}(q^2)&=&F_{3,loop}^{(S=1)}(q^2) \end{eqnarray} The universality in the case of the classical (square root) nonanalyticities is not surprising and in fact is {\it required} by the connection to the metric tensor. In the case of the quantum (logarithmic) nonanalyticities it is not clear why these terms must be spin-independent. We also found additional form factors for the spin 1 system and have shown that in addition to the vanishing of the anomalous gravitomagnetic moment there cannot exist any anomalous gravitoelectric quadrupole moment. It is tempting to conclude that the graviton loop correction universality which we obtained holds for arbitrary spin. However, it is probably not possible to show this by generalizing the calculations above. Indeed, the spin 1 result involves {\it considerably} more computation than does its spin 1/2 counterpart, which was already much more tedious than that for spin 0. Perhaps a generalization such as that used in nuclear beta decay can be employed\cite{nbd}. Work is underway on such an extension and results will be reported in an upcoming communication. \begin{center} {\bf Acknowledgement} \end{center} This work was supported in part by the National Science Foundation under award PHY-02-42801. Useful conversations with John Donoghue and Andreas Ross are gratefully acknowledged, as is the hospitality of Prof. A. Faessler and the theoretical physics group from the University of T\"{u}bingen, where this paper was finished. \section{Appendix} In this section we sketch how our results were obtained. The basic idea is to calculate the Feynman diagrams shown in Figure 1.
Thus for Figure 1a we find\cite{ppr} \begin{equation} Amp[a]_{\mu\nu}={1\over 2!}\int{d^4k\over (2\pi)^4}{\epsilon_B^{*\beta}V^{(2)}_{\beta,\alpha,\lambda\kappa,\rho\sigma}(p_2,p_1) \epsilon_A^\alpha P[\alpha\beta;\lambda\kappa]P[\gamma\delta;\sigma\rho] \tau_{\mu\nu}^{\alpha\beta,\gamma\delta}(k,q)\over k^2(k-q)^2}\label{eqn:a} \end{equation} while for Figure 1b\cite{ppr} \begin{eqnarray} Amp[b]_{\mu\nu}&=&\int{d^4k\over (2\pi)^4}{1\over k^2(k-q)^2((k-p)^2-m^2)}\nonumber\\ &\times&\epsilon_B^{*\beta} V^{(1)}_{\beta,\delta,\lambda\kappa}(p_2,p_1-k)\left(-\eta^{\delta\zeta}+{(p_1-k)^\delta (p_1-k)^\zeta\over m^2}\right)\nonumber\\ &\times&V^{(1)}_{\zeta,\theta,\rho\sigma}(p_1-k,p_1)\epsilon_A^\theta P[\alpha\beta;\lambda\kappa]P[\gamma\delta;\sigma\rho] \tau_{\mu\nu}^{\alpha\beta,\gamma\delta}(k,q)\label{eqn:b} \end{eqnarray} Here the various vertex functions are listed in section 3, while for the integrals, all that is needed is the leading nonanalytic behavior. Thus we use \begin{eqnarray} I(q)&=&\int{d^4k\over (2\pi)^4}{1\over k^2(k-q)^2}={-i\over 32\pi^2}(2L+\ldots)\nonumber\\ I_\mu(q)&=&\int{d^4k\over (2\pi)^4}{k_\mu\over k^2(k-q)^2}={i\over 32\pi^2}(q_\mu L+\ldots)\nonumber\\ I_{\mu\nu}(q)&=&\int{d^4k\over (2\pi)^4}{k_\mu k_\nu\over k^2(k-q)^2}={-i\over 32\pi^2}(q_\mu q_\nu{2\over 3}L-q^2\eta_{\mu\nu}{1\over 6}L +\ldots)\nonumber\\ I_{\mu\nu\alpha}(q)&=&\int{d^4k\over (2\pi)^4}{k_\mu k_\nu k_\alpha\over k^2(k-q)^2}={i\over 32\pi^2}(-q_\mu q_\nu q_\alpha {L\over 2}\nonumber\\ &+&(\eta_{\mu\nu}q_\alpha+\eta_{\mu\alpha}q_\nu +\eta_{\nu\alpha}q_\mu){1\over 12}Lq^2 +\ldots)\nonumber\\ \quad \end{eqnarray} for the ``bubble'' integrals and \begin{eqnarray} J(p,q)&=&\int{d^4k\over (2\pi)^4}{1\over k^2(k-q)^2((k-p)^2-m^2)}={-i\over 32\pi^2m^2}(L+S)+\ldots\nonumber\\ J_\mu(p,q)&=&\int{d^4k\over (2\pi)^4}{k_\mu\over k^2(k-q)^2((k-p)^2-m^2)}={i\over 32\pi^2m^2}\nonumber\\ &\times&[p_\mu((1+{1\over 2}{q^2\over m^2})L-{1\over 4}{q^2\over m^2}S)-q_\mu(L+{1\over
2}S)+\ldots]\nonumber\\ J_{\mu\nu}(p,q)&=&\int{d^4k\over (2\pi)^4}{k_\mu k_\nu\over k^2(k-q)^2((k-p)^2-m^2)}={i\over 32\pi^2m^2}\nonumber\\ &\times&[-q_\mu q_\nu(L+{3\over 8}S)-p_\mu p_\nu{q^2\over m^2}({1\over 2}L+{1\over 8}S)\nonumber\\ &+&q^2\eta_{\mu\nu}({1\over 4}L+{1\over 8}S)+(q_\mu p_\nu+q_\nu p_\mu)(({1\over 2}+{1\over 2}{q^2\over m^2})L+{3\over 16}{q^2\over m^2}S)]+\ldots\nonumber\\ J_{\mu\nu\alpha}(p,q)&=&\displaystyle\int\frac{d^4k}{(2\pi)^4} \frac{k_\mu k_\nu k_\alpha}{k^2(k-q)^2((k-p)^2-m^2)} \nonumber\\ &=& \frac{-i}{32\pi^2m^2}\bigg[ q_\mu q_\nu q_\alpha\bigg(L+\frac5{16}S\bigg)+p_\mu p_\nu p_\alpha\bigg(-\frac16 \frac{q^2}{m^2}L\bigg) \nonumber\\ \nonumber&+&\big(q_\mu p_\nu p_\alpha + q_\nu p_\mu p_\alpha + q_\alpha p_\mu p_\nu\big)\bigg(\frac13\frac{q^2}{m^2}L+ \frac1{16}\frac{q^2}{m^2}S\bigg)\nonumber\\&+&\big(q_\mu q_\nu p_\alpha + q_\mu q_\alpha p_\nu + q_\nu q_\alpha p_\mu \big)\bigg(\Big(-\frac13 - \frac12\frac{q^2}{m^2}\Big)L -\frac{5}{32}\frac{q^2}{m^2}S\bigg)\nonumber\\ \nonumber &+&\big(\eta_{\mu\nu}p_\alpha + \eta_{\mu\alpha}p_\nu + \eta_{\nu\alpha}p_\mu\big)\Big(\frac1{12}q^2L\Big)\nonumber\\ \nonumber&+&\big(\eta_{\mu\nu}q_\alpha + \eta_{\mu\alpha}q_\nu + \eta_{\nu\alpha}q_\mu\big)\Big(-\frac16q^2L -\frac1{16}q^2S\Big) \bigg]+\ldots\nonumber\\ \quad \end{eqnarray} for their ``triangle'' counterparts. Similarly, higher order forms can be found either by direct calculation or by requiring various identities which must be satisfied when the integrals are contracted with $p^\mu,q^\mu$ or with $\eta^{\mu\nu}$. Using these integral forms and substituting into Eqs. \ref{eqn:a} and \ref{eqn:b}, one determines the results quoted in section 3.
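To illustrate how the nonanalytic pieces arise, the scalar bubble $I(q)$ can be evaluated with a Feynman parameter; the following is a standard dimensional regularization sketch (assuming the definition $L=\log(-q^2)$) which reproduces the form quoted above:
\begin{eqnarray}
I(q)&=&\int_0^1 dx\int{d^4\ell\over (2\pi)^4}{1\over [\ell^2+x(1-x)q^2+i\epsilon]^2},\qquad \ell=k-xq\nonumber\\
&=&{i\over 16\pi^2}\int_0^1 dx\left[{1\over \bar{\epsilon}}-\log\left(-x(1-x)q^2\right)\right]={-i\over 32\pi^2}\left(2L+\ldots\right)
\end{eqnarray}
where the ultraviolet pole ${1\over\bar{\epsilon}}$ and the $x$ integration of $\log x(1-x)$ yield only analytic contributions, which are absorbed into the ellipsis.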
\section{Introduction} Text summarization aims to produce an accurate text snippet that captures the key information. Existing methods are either extractive or abstractive. Extractive methods select sentences from the document as the summary, while abstractive methods generate new sentences based on the input document. With the advancement of natural language processing (NLP) research, especially in the area of large-scale pre-trained language models \cite{devlin2019bert, peters2018deep, radford2019language, liu2019roberta} in recent years, abstractive summarization has become a popular research topic and made significant progress. Most existing abstractive summarization models, such as BART \cite{lewis2020bart}, PEGASUS \cite{zhang2020PEGASUS} and ProphetNet \cite{qi2020prophetnet}, adopt a Transformer-based architecture \cite{vaswani2017attention}. They are usually first pre-trained in an unsupervised manner on a large corpus and then fine-tuned on a specific dataset for supervised downstream applications. These models have shown superiority on various text understanding tasks, especially for generating abstractive summaries. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{./img/figure1-example.png} \caption{An illustration example of ESACL.} \label{fig::figure1-framework} \end{figure} Despite impressive performance on standard benchmarks, these deep networks are often brittle when deployed in real-world systems \cite{goel2021robustness}. The primary reason is that they are not robust to various types of noise, such as data corruption \cite{belinkov2018synthetic}, distribution shift \cite{hendrycks2020many} or harmful data manipulation \cite{jia2017adversarial}. In addition, they may also heavily rely on spurious patterns for prediction \cite{mccoy2019right}. As demonstrated in prior studies, the seq2seq model plays a critical role in many downstream applications.
Thus, we expect to enable its denoising capability when developing such a seq2seq model in NLP tasks. Furthermore, many prior studies in language understanding find that global semantics may be significantly neglected by Transformer-based models \cite{fang2020cert}, because self-attention in these models is usually applied to learn and predict word-level characteristics during pre-training. The sentence embeddings aggregated from word embeddings learned by existing pre-trained language models have been found unable to effectively and sufficiently capture the semantics among sentences \cite{li2020sentence}. This can lead to poor performance on subsequent tasks, e.g., abstractive summarization, because summarization requires wide-coverage natural language understanding going beyond the meaning of individual words and sentences \cite{liu2019text}. Therefore, to build a denoising seq2seq model, state-of-the-art (SOTA) approaches like BART \cite{lewis2020bart} and MARGE \cite{lewis2020pre} developed new objectives for pre-training. BART is trained by first corrupting documents at the word level and then optimizing a reconstruction loss between the generated output and the original document. MARGE learns the model by self-supervising the reconstruction of target text: it first retrieves a set of related texts and then maximizes the likelihood of generating the original documents based on the selected texts. All these seq2seq-based approaches are inspirational and emphasize the importance of denoising and modeling global semantics. In this study, we propose a new framework \textbf{ESACL}, \underline{E}nhanced \underline{S}eq2Seq \underline{A}utoencoder via \underline{C}ontrastive \underline{L}earning, to improve the denoising ability of the seq2seq model and increase model flexibility by achieving this goal through fine-tuning.
Unlike most existing methods that design denoising objectives in pre-training, ESACL optimizes the model in the fine-tuning phase, which requires fewer computational resources and significantly reduces training time. Specifically, ESACL leverages self-supervised contrastive learning \cite{chen2020simple, he2020momentum} and integrates it into a standard seq2seq autoencoder framework. Overall, it involves two stages: (1) sentence-level document augmentation, and (2) a joint learning framework of a seq2seq autoencoder and contrastive learning with an overall objective based on a fine-tuning loss and a self-supervised contrastive loss. Regarding the seq2seq autoencoder, ESACL uses a similar architecture to BART, which is a standard Transformer-based model with a multi-layer bi-directional encoder and left-to-right decoder. As shown in Figure \ref{fig::figure1-framework}, ESACL performs document augmentation to create two instances, and designs a unique framework underlying the seq2seq model: it not only uses the output from the decoder for fine-tuning but also maximizes agreement between the encoder outputs of the two augmented instances. A key step in contrastive learning is data augmentation. Various augmentation strategies have been developed in many NLP tasks at the word level, such as inserting a new word or swapping two tokens. To capture high-level semantics and the structural information of the entire document, we perform data augmentation at the sentence level. In this study, we implement several combinations of data augmentation, and our experimental results show that (i) the model performance can be improved with sentence-level augmentation; (ii) the summarization performance with different data augmentation strategies does not vary much; (iii) augmentations that largely disrupt the structure of the document should be avoided. To sum up, ESACL proposes a new way of denoising a seq2seq model via fine-tuning for abstractive summarization.
It presents a new scheme for summarization which incorporates self-supervised contrastive learning into a seq2seq framework to improve model flexibility. The major contributions of this study are as follows: \begin{itemize}[leftmargin=*] \itemsep0em \item We propose ESACL, a new abstractive text summarization framework that jointly trains a seq2seq autoencoder with contrastive learning through fine-tuning. \item We evaluate ESACL on two summarization datasets through quantitative measurement, robustness checks, and human evaluation. ESACL achieves state-of-the-art performance and shows better flexibility in modeling potentially irrelevant noise. \item We introduce several sentence-level document augmentation strategies and conduct an ablation study to understand their impact on performance. \end{itemize} \section{Related Work} \label{section::related-work} Three lines of research are closely related to our paper: abstractive text summarization, contrastive learning, and data augmentation. \textbf{Abstractive text summarization} has achieved promising results with the rapid development of deep learning. Neural network-based models \cite{rush2015neural, nallapati-etal-2016-abstractive, chopra2016abstractive, nallapati2017summarunner, zhou2017selective,tan2017abstractive, gehrmann-etal-2018-bottom, zhu2019ncls} enable the generation of abstractive summaries. Recently, with the success of the attention mechanism and Transformer-based \cite{vaswani2017attention} language models, pre-training based methods \cite{devlin2019bert, radford2018improving, radford2019language} have attracted growing attention and achieved state-of-the-art performance in many NLP tasks, and pre-trained encoder-decoder Transformers \cite{song2019mass, lewis2020bart, zhang2020PEGASUS, qi2020prophetnet, lewis2020pre} show great success for summarization.
\textbf{Contrastive learning} has recently seen a resurgence in image analysis and language understanding \cite{khosla2020supervised, chen2020simple, fang2020cert, gunel2020supervised}. Researchers have developed many contrastive learning-based frameworks, including self-supervised \cite{fang2020cert} and supervised \cite{gunel2020supervised} frameworks, and applied them to different language understanding tasks, e.g., sentiment analysis \cite{li2020cross} and document clustering \cite{shi2020self}. They mainly use contrastive learning to help models deeply explore the unique characteristics of data while ignoring irrelevant noise, which also motivates the present study. \begin{figure*} \centering \includegraphics[width=1\textwidth]{./img/figure2-framework.png} \caption{The overall architecture of our proposed ESACL.} \label{fig::model-design} \end{figure*} \textbf{Data augmentation} is the key in contrastive learning and has been widely applied in image analysis \cite{wong2016understanding}. Textual data augmentation is different and can be mainly categorized into word-level transformation \cite{ kolomiyets2011model, wang2015s, zhang2015character, qiu2020easyaug} and neural text generation \cite{sennrich2016improving, yu2018qanet}. In our paper, to preserve the global semantics while filtering irrelevant noise for a document, we design several sentence-level augmentation strategies and show their effectiveness in summarization. Based on the experiment results, we believe that developing new alternative augmentations for text summarization has great merit. \section{Preliminary} Automatic text summarization aims at condensing a document to a shorter version while preserving the key information. Let $\textbf{d} = \left\{\textbf{x}_1, \textbf{x}_2, ..., \textbf{x}_N\right\}$ be an input document with $N$ tokens, where $\textbf{x}_i$ is the word embedding of the $i$-th token.
Given a document $\textbf{d}$, we expect to learn a function $f(\textbf{d})$ that maps $\textbf{d}$ to another sequence of tokens $\textbf{y}=\left\{\textbf{y}_1, \textbf{y}_2, \cdots, \textbf{y}_m\right\}$, where $\textbf{y}$ is the generated summary with $m$ tokens. $m$ is unknown a priori and depends on the input sequence and the task-specific requirement. Such a function $f(\cdot)$ is often implemented by a seq2seq model. The key idea is to represent an input sequence as a low-dimensional vector while preserving the contextual information in the sequence as much as possible, upon which a new task-specific sequence of arbitrary length can be automatically generated \cite{jurafsky2020speech}. A typical seq2seq model usually consists of three components: \begin{itemize}[leftmargin=*] \itemsep0em \item An \textbf{encoder}, denoted as $f_{\text{encoder}}$, that accepts an input sequence $\textbf{d}$ and generates a corresponding sequence of contextualized representations $\textbf{h}$. \item A \textbf{context vector}, $\textbf{c}$, that is a function of $\textbf{h}$ and conveys the essence of the input to the decoder. \item A \textbf{decoder}, $f_{\text{decoder}}$, that uses $\textbf{c}$ to generate a sequence $\textbf{y}$ of arbitrary length based on the task-specific requirement. \end{itemize} \section{Our Proposed Model} In this section, we present our proposed model ESACL, which leverages self-supervised contrastive learning to enhance the denoising ability of a seq2seq framework. Figure \ref{fig::model-design} illustrates the overall architecture of ESACL. For a given input document $\textbf{d}$, ESACL first creates a pair of augmented documents that are expected to associate with the same original target summary. ESACL then generates the latent representations of the augmented documents using the Transformer-based encoder and performs self-supervised contrastive learning to encourage the model to capture potential noise in the document $\textbf{d}$.
Finally, the optimized latent representation is sent to the Transformer-based decoder to generate the summary. In Section \ref{section::self-supervised-contrastive-learning}, we present our implementation of contrastive learning in ESACL. In Section \ref{section::document-augmentation}, we introduce several sentence-level document augmentation strategies, which are the key in contrastive learning. In Section \ref{section::sequence-to-sequence}, we describe the detailed seq2seq architecture of ESACL, in particular how the self-supervised contrastive learning is incorporated and how the components are jointly trained via fine-tuning. \subsection{Document Augmentation} \label{section::document-augmentation} Data augmentation has been used to increase the denoising capability of a model. As mentioned in Section \ref{section::related-work}, there exist many contrastive learning-based models and applications in NLP. However, most of these methods focus on augmentation at the word level, which might not be suitable for text summarization because the global semantics and even noise at a higher level (e.g., sentence or document) can be easily ignored. In this study, we perform document augmentation at the sentence level. Specifically, given an input document $\textbf{d}$ with a sequence of $k$ sentences, we augment the document via various transformations at the sentence level. By doing this, ESACL can generate another sequence $\hat{\textbf{d}}$ where the main semantics are preserved with some additional noise. Similar to \citet{qiu2020easyaug}, we design several document augmentation approaches at the sentence level, as follows: \begin{itemize}[leftmargin=*] \itemsep0em \item \textbf{Random Insertion (RI):} randomly pick an existing sentence and insert it into a random position in the input document. \item \textbf{Random Swap (RS):} randomly select two sentences and swap their positions.
\item \textbf{Random Deletion (RD):} randomly delete a sentence from the input document. \item \textbf{Document Rotation (DR):} randomly select a sentence and rotate the document using this selected sentence as the pivot. \end{itemize} \subsection{Self-Supervised Contrastive Learning} \label{section::self-supervised-contrastive-learning} We introduce self-supervised contrastive learning into ESACL during the fine-tuning process to enhance its denoising flexibility. ESACL performs document augmentation of the original input document to create positive training pairs. Along with negative pairs (two different documents), ESACL is able to encourage itself to identify whether two context vectors learned from the encoder represent the same original input document. By doing so, ESACL improves the quality of the context vector $\textbf{c}$ during fine-tuning, which can benefit the performance of downstream language generation. To form positive pairs during training, we perform document augmentation to create two augmented instances for each document in a batch of $K$ training instances $b=\left\{\textbf{d}_1, \textbf{d}_2, ..., \textbf{d}_K \right\}$. Suppose $\textbf{d}_i$ is the original input document; we generate the augmented documents $\hat{\textbf{d}}_{2i-1}=A_1(\textbf{d}_i)$ and $\hat{\textbf{d}}_{2i}=A_2(\textbf{d}_i)$, where $A$ refers to a specific augmentation strategy. Thus, we have $2K$ augmented instances in total for a batch, where $\hat{\textbf{d}}_{2i-1}$ and $\hat{\textbf{d}}_{2i}$ are augmented from the same input document $\textbf{d}_i$. A positive pair is defined if and only if the two instances are from the same original input document. Otherwise, they are considered a negative pair. We use the pre-trained encoder $f_{\text{encoder}}(\cdot)$ to obtain the latent representation of each augmented document $\hat{\textbf{d}}$ as $\textbf{h}=f_{\text{encoder}}(\hat{\textbf{d}})$.
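The four sentence-level augmentation strategies of Section \ref{section::document-augmentation} can be sketched as follows, operating on a document represented as a list of sentences (an illustrative sketch, not the authors' implementation):

```python
import random

def random_insertion(sents):
    # RI: insert a copy of an existing sentence at a random position
    s = list(sents)
    s.insert(random.randrange(len(s) + 1), random.choice(sents))
    return s

def random_swap(sents):
    # RS: swap the positions of two randomly chosen sentences
    s = list(sents)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def random_deletion(sents):
    # RD: delete one randomly chosen sentence
    s = list(sents)
    del s[random.randrange(len(s))]
    return s

def document_rotation(sents):
    # DR: rotate the document around a randomly chosen pivot sentence
    pivot = random.randrange(len(sents))
    return list(sents[pivot:]) + list(sents[:pivot])
```

Each transformation keeps most of the document's content while perturbing its sentence-level structure, so the two augmented views of a document can still be paired with the same target summary.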
In our work, we use the final hidden vector corresponding to the first input token as the aggregate representation of the document, following prior literature \cite{devlin2019bert}. ESACL also applies a non-linear projection head $g$ to further understand the deep semantics among latent dimensions. It projects the representation $\textbf{h}$ into another latent space $\textbf{z}=g(\textbf{h})$, which is used to calculate the contrastive loss $l(i,j)$ for the positive pair as in Equation \ref{eq::loss}. Here $\mathbbm{1}_{[k \neq i]}$ is 1 when $k \neq i$ and 0 otherwise, $\tau$ is a temperature parameter, and $\text{sim}(\cdot, \cdot)$ is a cosine similarity measure. \begin{equation} l(i,j) = -\log \frac{\exp(\text{sim}(\textbf{z}_i, \textbf{z}_j)/\tau)}{\sum_{k=1}^{2K} \mathbbm{1}_{[k \neq i]} \exp(\text{sim}(\textbf{z}_i, \textbf{z}_k)/\tau)} \label{eq::loss} \end{equation} The loss of contrastive learning in ESACL is: \begin{equation} \mathcal{L}_{\text{cl}} = \frac{1}{2K}\sum_{i=1}^{K}[l(2i-1, 2i)+l(2i, 2i-1)] \label{eq::cl-loss} \end{equation} \subsection{Sequence-to-sequence Architecture} \label{section::sequence-to-sequence} For abstractive text summarization, we follow the literature and adopt the Transformer-based seq2seq model, which has proven to be effective (see Section \ref{section::related-work}). A natural question arising here is how to leverage the denoising ability of contrastive learning in the seq2seq framework to improve summarization. To answer this question, we design a combined loss to jointly learn the model parameters. For each instance $\textbf{d}_i$, we obtain two augmented instances: $\hat{\textbf{d}}_{2i-1}$ and $\hat{\textbf{d}}_{2i}$ \footnote{Different augmentation strategies can be combined. For example, $\hat{\textbf{d}}_{2i-1}$ is augmented via \textbf{RI} while $\hat{\textbf{d}}_{2i}$ is via \textbf{RS}. }, which are considered as a positive pair for self-supervised contrastive learning.
They are also used to generate summaries $\hat{\textbf{y}}_{2i-1}$ and $\hat{\textbf{y}}_{2i}$. The generated summary is compared with the target summary of the original input document for calculating the fine-tuning loss, $\mathcal{L}_{\text{generate}}$, which measures the generation performance. In this study, we define $\mathcal{L}_{\text{generate}}$ as the cross-entropy loss. We also use the generated positive pair to calculate the contrastive learning loss introduced above, which measures the denoising flexibility of our model. Equation \ref{eq::loss-function} summarizes the overall loss of ESACL as the weighted sum of the two losses. A hyper-parameter $\alpha \in [0,1]$ is used to balance the importance of contrastive learning and summary generation. The overall process of ESACL is summarized in Algorithm \ref{alg::the_alg}. \begin{equation} \mathcal{L} = \alpha \mathcal{L}_{\text{cl}} + (1-\alpha)\mathcal{L}_{\text{generate}} \label{eq::loss-function} \end{equation} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm} \caption{ESACL training in one epoch} \label{alg::the_alg} \begin{algorithmic}[1] \REQUIRE batch size $K, f_{encoder}, f_{decoder}, g$ \STATE {Pick two augmentation strategies $A_1$, $A_2$} \FOR{each batch $b \in \left\{1,...,B\right\}$} \FOR{each document $i \in \left\{1,...,K\right\}$ in $b$} \STATE {\color{gray}\# using the first augmentation} \STATE $\hat{\textbf{d}}_{2i-1}=A_1(\textbf{d}_i)$; \hfill {\color{gray}\# augmented instance} \STATE $\textbf{h}_{2i-1}=f_{\text{encoder}}(\hat{\textbf{d}}_{2i-1})$; \STATE $\textbf{z}_{2i-1}=g(\textbf{h}_{2i-1})$; \hfill {\color{gray}\# projection} \STATE $\hat{\textbf{y}}_{2i-1}=f_{\text{decoder}}(\textbf{h}_{2i-1})$; \hfill {\color{gray}\# generation} \STATE {\color{gray}\# using the second augmentation} \STATE $\hat{\textbf{d}}_{2i}=A_2(\textbf{d}_i)$; \hfill {\color{gray}\# augmented instance} \STATE
$\textbf{h}_{2i}=f_{\text{encoder}}(\hat{\textbf{d}}_{2i})$; \STATE $\textbf{z}_{2i}=g(\textbf{h}_{2i})$; \hfill {\color{gray}\# projection} \STATE $\hat{\textbf{y}}_{2i}=f_{\text{decoder}}(\textbf{h}_{2i})$; \hfill {\color{gray}\# generation} \ENDFOR \STATE \textbf{calculate} $\mathcal{L}$ using Equation \ref{eq::loss-function}; \STATE $\theta\leftarrow \underset{\theta}{\arg\min}\mathcal{L}(f_{encoder}, f_{decoder}, g \mid \theta)$; \ENDFOR \STATE \textbf{return} the learned $f_{encoder}^*, f_{decoder}^*, g^*$. \end{algorithmic} \end{algorithm} \section{Experiments} \subsection{Experiment Setting} We evaluate our model using two popular summarization datasets: the CNN/Daily Mail dataset (CNN/DM) \cite{hermann2015teaching} and the extreme summarization dataset (XSUM) \cite{narayan2018don}. Our experiments are conducted with 3 NVIDIA V100 GPUs. We adopt a 12-layer encoder and a 6-layer decoder with 16 attention heads. We warm-start the model parameters with the distil-BART pre-trained model\footnote{We choose distil-BART provided by HuggingFace. For CNN/DM, we use "sshleifer/distilbart-cnn-12-6". For XSUM, we use "sshleifer/distilbart-xsum-12-6". Appendix \ref{appendix::implementation} records the detailed implementation.} and train for 5 epochs with a batch size of 16\footnote{It takes about 35 hours for 5 epochs on our machine.}. For the projection head in contrastive learning, we implement a 2-layer MLP to project the representation to a 128-dimensional latent space. We use the Adam optimizer with a learning rate of $5\times 10^{-7}$. Given the limited computing resources (e.g., the memory limitation), we need to freeze some layers of the encoder to reduce the number of parameters. The impact of freezing different layers of the encoder will be discussed in the following ablation study (see Section \ref{section::ablation-study}). All results reported below are based on freezing the first 6 layers of the encoder. For the loss calculation, we set $\alpha=0.2$ and $\tau=0.5$.
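The loss computation of Equations \ref{eq::loss}--\ref{eq::loss-function}, with the settings $\alpha=0.2$ and $\tau=0.5$ above, can be sketched as follows (an illustrative NumPy sketch, not the authors' implementation):

```python
import numpy as np

def contrastive_loss(z, tau=0.5):
    # z: array of shape (2K, d); rows 2i and 2i+1 hold the two augmented
    # views of document i.  Implements Eqs. (1)-(2).
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # drop the k = i term
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = np.arange(len(z)) ^ 1                        # index of the positive partner
    return -log_prob[np.arange(len(z)), pos].mean()

def esacl_loss(l_generate, z, alpha=0.2, tau=0.5):
    # Eq. (3): weighted sum of the contrastive and generation losses
    return alpha * contrastive_loss(z, tau) + (1 - alpha) * l_generate
```

For well-separated representations the contrastive term becomes small, so with a small $\alpha$ the objective is dominated by the usual cross-entropy fine-tuning loss.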
For data augmentation, we choose two augmentation operations; we discuss this hyper-parameter in Section \ref{section::ablation-study}. For reproducibility, all code is publicly available\footnote{https://github.com/chz816/esacl}. \subsection{Experimental Results} We compare our proposed model with the following cutting-edge summarization models. \begin{itemize}[leftmargin=*] \itemsep0em \item \textbf{Lead-N} uses the first $N$ sentences of the article as its summary. \item \textbf{BERTSUM} \cite{liu2019text} proposes a novel document-level encoder based on BERT to generate summaries. \item \textbf{MATCHSUM} \cite{zhong-etal-2020-extractive} is an extractive summarization approach that formulates the task as a semantic text matching problem. \item \textbf{PGNet} \cite{see2017get} is the pointer-generator network, which copies words from the source text while retaining the ability to produce novel words. \textbf{PGNet+Cov} adds the coverage mechanism. \item \textbf{BART} \cite{lewis2020bart} employs a bidirectional encoder to enhance sequence understanding and a left-to-right decoder to generate the summary. \item \textbf{PEGASUS} \cite{zhang2020PEGASUS} introduces a new pre-training objective that encourages the model to generate target sentences, enabling it to capture global information among sentences. \item \textbf{ProphetNet} \cite{qi2020prophetnet} predicts the next $n$ tokens simultaneously based on previous context tokens at each time step. \end{itemize} We adopt the ROUGE \cite{lin2004rouge} F1 score as the evaluation metric. We choose ROUGE-1, ROUGE-2, and ROUGE-L for performance measurement, which are the common choices in the literature. We report the performance of all baseline models using the numbers from the original papers. \textbf{Results on CNN/DM}: Table \ref{table::performance-cnndm} records the performance on CNN/DM. ESACL outperforms most of the baseline models and achieves the highest ROUGE-L score. 
Compared to the SOTA extractive system MATCHSUM, ESACL achieves higher ROUGE-2 and ROUGE-L scores. Compared to three SOTA abstractive systems, ESACL outperforms ProphetNet and improves over BART on ROUGE-L (41.20 vs. 40.90). Our model achieves performance comparable to PEGASUS, the best-performing SOTA model. \begin{table}[htbp] \centering \begin{threeparttable} \begin{tabular}{lccc} \toprule \multicolumn{1}{c}{\textbf{Model}} & \textbf{RG-1} & \textbf{RG-2} & \textbf{RG-L} \\ \hline Lead-3 & 40.07 & 17.68 & 36.33 \\ \hline BERTSUM & 42.13 & 19.60 & 39.18 \\ MATCHSUM & \textbf{44.41} & 20.86 & 40.55 \\ \hline PGNet & 36.44 & 15.66 & 33.42 \\ PGNet+Cov & 39.53 & 17.28 & 36.38 \\ BART & 44.16 & 21.28 & 40.90 \\ ProphetNet & 43.68 & 20.64 & 40.72 \\ PEGASUS & 44.17 & \textbf{21.47} & \textbf{41.11} \\ \hline ESACL & \textbf{44.24} & \textbf{21.06} & \textbf{41.20} \\ \toprule \end{tabular} \end{threeparttable} \caption{ROUGE (RG) evaluation on CNN/DM dataset} \label{table::performance-cnndm} \end{table} \textbf{Results on XSUM}: Table \ref{table::performance-xsum} records the ROUGE scores on XSUM. ESACL outperforms the simple Lead-1 baseline and both extractive systems. Our model achieves performance comparable to BART, though it falls short of the best-performing model, PEGASUS. \begin{table}[htbp] \centering \begin{threeparttable} \begin{tabular}{lccc} \toprule \multicolumn{1}{c}{\textbf{Model}} & \textbf{RG-1} & \textbf{RG-2} & \textbf{RG-L} \\ \hline Lead-1 & 16.30 & 1.60 & 11.95 \\ \hline BERTSUM & 38.81 & 16.50 & 31.27 \\ MATCHSUM & 24.86 & 4.66 & 18.41 \\ \hline PGNet & 29.70 & 9.21 & 23.24 \\ PGNet+Cov & 28.10 & 8.02 & 21.72 \\ BART & 45.14 & 22.27 & 37.25 \\ ProphetNet \tnote{*} & - & - & - \\ PEGASUS & \textbf{47.21} & \textbf{24.56} & \textbf{39.25} \\ \hline ESACL & \textbf{44.64} & \textbf{21.62} & \textbf{36.73} \\ \toprule \end{tabular} \begin{tablenotes} \footnotesize \item[*] ProphetNet does not report results on XSUM. 
\end{tablenotes} \end{threeparttable} \caption{ROUGE (RG) evaluation on XSUM dataset} \label{table::performance-xsum} \end{table} The experimental results on the two datasets show the effectiveness of the joint learning framework with contrastive learning: ESACL outperforms many baseline models and achieves performance comparable to the best-performing SOTA model with a much smaller architecture. Compared to BART and PEGASUS, our model has fewer trainable parameters: it uses a 12-layer encoder and a 6-layer decoder, which is much smaller than BART (12-layer encoder and 12-layer decoder) and PEGASUS (16-layer encoder and 16-layer decoder). \subsection{Human Evaluation} To further examine the quality of the summaries generated by ESACL, we conduct a human evaluation. Two indicators common in the literature, \textbf{informativeness} and \textbf{fluency}, are used to measure summary quality \cite{huang-etal-2020-knowledge, xu-etal-2020-self}. Informativeness measures whether the summary covers the important information from the input article, and fluency measures whether the generated summary is grammatically correct. We randomly select 100 articles from the XSUM test set and hire 7 fluent English speakers as annotators to rate summaries generated by distil-BART and ESACL. They are asked to compare the two generated summaries, which are presented anonymously. Table \ref{table::human-evaluation} reports the human evaluation results. Overall, we find that our model captures the key information of a document as well as its global semantics, which is further illustrated by the two example summaries generated by ESACL in Table \ref{table::example-summary}. 
\begin{table}[htbp] \centering \begin{tabular}{l|ccc} \toprule & \textbf{Win} & \textbf{Tie} & \textbf{Loss}\\ \hline Informativeness & 38.5\% & 24.7\% & 36.8\% \\ Fluency & 19.5\% & 61.0\% & 19.5\% \\ \hline \toprule \end{tabular} \caption{Human evaluation results on XSUM dataset.} \label{table::human-evaluation} \end{table} \begin{table*} \centering \begin{tabular}{ m{25em} | m{15em} } \toprule \multicolumn{1}{c|}{\textbf{Source article (abbreviated)}} & \multicolumn{1}{c}{\textbf{Summary by ESACL}} \\ \hline The London trio are up for best UK act and best album, as well as getting two nominations in the best song category. "We got told like this morning 'Oh I think you're nominated'", said Dappy. "And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!" ... & N-Dubz have revealed they were surprised to be nominated for four Mobo Awards. \\ \hline Since late November, Scotland's five mountain resorts have attracted 373,782 customers. The ski season is estimated to have attracted £37.5m into the local economy. With fresh snow on the slopes, CairnGorm Mountain expects skiing during the first weekend of June. Recent figures from Ski Scotland showed that this season's figures were better than the last bumper season of 2000-2001. ... & A record number of skiers and snowboarders have visited Scotland's five ski areas this winter. \\ \toprule \end{tabular} \caption{Two example summaries by ESACL on XSUM dataset.} \label{table::example-summary} \end{table*} \section{Discussion} \subsection{Impact of Contrastive Learning Component} Since our model is warmed up using distil-BART, one could assume that the original distil-BART may simply need to be fine-tuned longer to achieve the same experimental results. Inspired by \citet{peinelt2020tbert}, we perform an additional experiment to finetune distil-BART using the same experimental settings. 
By analyzing the results in Table \ref{table::finetune-model-analysis}, we can conclude that longer fine-tuning does not considerably boost distil-BART's performance. \begin{table}[htbp] \centering \begin{subtable}{1\linewidth}\centering {\begin{tabular}{lccc} \toprule \multicolumn{1}{c}{\textbf{Model}} & \textbf{RG-1} & \textbf{RG-2} & \textbf{RG-L} \\ \hline distil-BART & 41.23 & 19.38 & 38.11 \\ ESACL & \textbf{44.24} & \textbf{21.06} & \textbf{41.20}\\ \toprule \end{tabular}} \caption{Performance on CNN/DM} \label{tab:1a} \end{subtable} \begin{subtable}{1\linewidth} \centering {\begin{tabular}{lccc} \toprule \multicolumn{1}{c}{\textbf{Model}} & \textbf{RG-1} & \textbf{RG-2} & \textbf{RG-L} \\ \hline distil-BART & 44.41 & 21.40 & 36.50 \\ ESACL & \textbf{44.64} & \textbf{21.62} & \textbf{36.73}\\ \toprule \end{tabular}} \caption{Performance on XSUM} \label{tab:1b} \end{subtable} \caption{Finetune distil-BART under the same setting.} \label{table::finetune-model-analysis} \end{table} \subsection{Robustness Check} \begin{table}[htbp] \centering \begin{tabular}{llcc} \toprule \multicolumn{2}{c}{\textbf{Metric}} & \textbf{Baseline} & \textbf{ESACL}\\ \hline \multirow{2}{*}{Length} & Longest & 15.89 & 15.96 \\ & Shortest & 23.35 & 23.14\\ \hline \multirow{2}{*}{Abstractive} & Most & 19.44 & 19.77 \\ & Least & 24.21 & 24.13 \\ \hline \multirow{2}{*}{Distilled} & Most & 15.71 & 16.15 \\ & Least & 24.21 & 24.13 \\ \hline \multirow{2}{*}{Position} & Latest & 17.30 & 17.60 \\ & Earliest & 22.22 & 22.40 \\ \toprule \end{tabular} \caption{Robustness Check on sub-populations defined by metrics using ROUGE-2. Baseline refers to distil-BART.} \label{table::robustness} \end{table} We perform a robustness check for ESACL to better understand how different data characteristics affect contrastive learning performance. Following \citet{goel2021robustness}, we use several heuristics from the literature to identify sub-populations of the datasets. 
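As a concrete illustration, slicing a test set into extreme sub-populations by one such heuristic (article length, say) can be sketched as follows; this is a simplified illustration with hypothetical names, not the \citet{goel2021robustness} implementation:

```python
def subpopulations(examples, score, frac=0.10):
    """Split examples into bottom/top slices by a heuristic score.

    `score` maps an example to a number (e.g. article length in tokens);
    returns the bottom and top `frac` fractions as two sub-populations.
    """
    ranked = sorted(examples, key=score)
    k = max(1, round(frac * len(ranked)))
    return ranked[:k], ranked[-k:]
```

Each slice is then scored with ROUGE separately, so that a model's behavior on, e.g., the longest 10\% of articles can be compared against its behavior on the shortest 10\%.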
We first select the top 10\% and bottom 10\% of examples in the test set as two sub-populations, based on four metrics from \citet{goel2021robustness}: \textbf{length}, \textbf{abstractiveness}, \textbf{distillation}, and \textbf{position}. We then evaluate the performance on each sub-population using the ROUGE score. Table \ref{table::robustness} shows the performance of ESACL on each sub-population compared to distil-BART\footnote{We report the ROUGE-2 score in Table \ref{table::robustness}. Detailed results for the other ROUGE scores are included in Appendix \ref{appendix::robustness-check}.}. For \textbf{length}, we find that ESACL performs better than the baseline on the longest set, which is the hardest to summarize given the large amount of information. On the most \textbf{abstractive} set, ESACL is more capable of reconstructing the text and achieves higher performance. This is even more pronounced on the most \textbf{distilled} set, where ESACL improves over the baseline by 2.8\%. We also observe a smaller performance gap between the two sub-populations for ESACL, which indicates that ESACL is more robust than the baseline. \subsection{Ablation Study} \label{section::ablation-study} To better understand how the different modules of ESACL contribute to its performance, we conduct an ablation study on the XSUM dataset. \noindent \textbf{Document augmentation.} Section \ref{section::document-augmentation} illustrated the importance of data augmentation in contrastive learning and introduced several document augmentation strategies, but their impact on summarization performance has not yet been explored. Table \ref{table::ablation-rouge} shows the results of ESACL with different augmentation methods\footnote{We use ROUGE-2 as the evaluation metric; results using ROUGE-L are reported in Appendix \ref{appendix::document-augmentation}.}. 
We can clearly see that (1) performance does not vary much across different combinations of augmentations in abstractive text summarization, and (2) augmentation methods that disrupt the document structure, such as document rotation (\textbf{DR}), tend to hurt performance, since the structure of the input document plays an important role. \begin{table}[htbp] \centering \begin{tabular}{c|c|c|c|c} \toprule & \textbf{RI} & \textbf{RD} & \textbf{RS} & \textbf{DR}\\ \hline \textbf{RI} & 21.62 & 21.56 & 21.51 & 21.47 \\ \textbf{RD} & - & 21.59 & 21.58 & 21.38 \\ \textbf{RS} & - & - & 21.46 & 21.41 \\ \textbf{DR} & - & - & - & 21.11 \\ \toprule \end{tabular} \caption{Performance on XSUM dataset under different combinations of document augmentation.} \label{table::ablation-rouge} \end{table} \noindent \textbf{Number of augmentation operations.} One question we have not yet answered is the optimal number of augmentation operations. We expect this number to lie in a reasonable range: too many operations can completely change the document's structure, while too few do not add enough noise. We therefore vary the number of sentences modified during document augmentation; Table \ref{table::ablation-different-sentences} shows the performance of ESACL under Random Deletion (\textbf{RD}) and Random Swap (\textbf{RS}) with different numbers of augmentation operations $n$. Given that XSUM articles contain 19.77 sentences on average \cite{narayan2018don}, we choose $n$ from [1, 3, 5]. As expected, both $n=5$ and $n=1$ perform worse than $n=3$. A reasonable choice of $n$ should be based on the characteristics of the dataset, under the guiding principle that data augmentation should add some noise while preserving the critical information. 
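At the sentence level, the Random Deletion (\textbf{RD}) and Random Swap (\textbf{RS}) operations with $n$ edits can be sketched as follows; this is a minimal illustration assuming the document is already split into sentences, with helper names that are ours rather than from the released code:

```python
import random

def random_deletion(sentences, n=3, rng=random):
    """RD: drop n randomly chosen sentences (always keep at least one)."""
    n = min(n, len(sentences) - 1)
    drop = set(rng.sample(range(len(sentences)), n))
    return [s for i, s in enumerate(sentences) if i not in drop]

def random_swap(sentences, n=3, rng=random):
    """RS: swap two randomly chosen sentence positions, n times."""
    out = list(sentences)
    for _ in range(n):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out
```

Both operations leave most of the document's content intact while perturbing its surface form, which is exactly the property the ablation above suggests a good augmentation should have.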
\begin{table}[htbp] \centering \begin{tabular}{c|ccc} \toprule & \textbf{RG-1} & \textbf{RG-2} & \textbf{RG-L} \\ \hline $n=1$ & 44.48 & 21.54 & 36.64 \\ $n=3$ & 44.52 & 21.58 & 36.59 \\ $n=5$ & 44.36 & 21.48 & 36.52 \\ \toprule \end{tabular} \caption{Performance on XSUM dataset with different numbers of augmentation operations.} \label{table::ablation-different-sentences} \end{table} \noindent \textbf{Layer freezing in the encoder.} In all the above experiments, we freeze some layers of the encoder because of memory limitations. Contrastive learning in particular benefits from larger batch sizes than supervised learning does \cite{chen2020simple}, which creates a trade-off between the batch size and the number of fine-tuned layers. Previous studies find that higher-level layers capture context-dependent aspects of text meaning while lower-level layers model aspects of syntax \cite{peters2018deep, mou-etal-2016-transferable}. Thus, in our study, we freeze the first $l$ layers of the encoder in ESACL. Table \ref{table::ablation-different-layers} reports the performance for different values of $l$ under Random Deletion (\textbf{RD}) and Random Swap (\textbf{RS}). When $l=12$, the entire encoder is frozen and the model is fine-tuned only on the augmented documents through the decoder, which makes contrastive learning ineffective. Compared to $l=6$, we clearly see the benefit of incorporating contrastive learning into the seq2seq model during fine-tuning. 
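In a PyTorch-style implementation, freezing the first $l$ encoder layers amounts to disabling gradients on their parameters. The sketch below mimics that decision with plain flags; the \texttt{encoder.layers.<k>} naming is an assumption modeled on BART-style checkpoints, and in practice one would set \texttt{requires\_grad = False} on the matching parameters instead:

```python
def freeze_first_layers(param_names, l):
    """Return {name: trainable} with the first l encoder layers marked frozen.

    Assumes encoder parameters are named 'encoder.layers.<k>....';
    everything else (decoder, embeddings) stays trainable.
    """
    trainable = {}
    for name in param_names:
        frozen = False
        if name.startswith("encoder.layers."):
            layer_idx = int(name.split(".")[2])
            frozen = layer_idx < l
        trainable[name] = not frozen
    return trainable
```

With $l=6$ this leaves the upper half of the encoder and the full decoder trainable, which is the configuration used for all main results.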
\begin{table}[htbp] \centering \begin{tabular}{l|ccc} \toprule & \textbf{RG-1} & \textbf{RG-2} & \textbf{RG-L} \\ \hline $l=6$ & 44.52 & 21.58 & 36.59 \\ $l=9$ & 44.41 & 21.47 & 36.55 \\ $l=12$ & 44.27 & 21.41 & 36.46 \\ \toprule \end{tabular} \caption{Performance on XSUM dataset when freezing the first $l$ layers in the encoder.} \label{table::ablation-different-layers} \end{table} \section{Conclusion} In this paper, we propose ESACL, an enhanced sequence-to-sequence model that leverages contrastive learning to improve abstractive text summarization, in which the two critical components are jointly learned via fine-tuning. Using several proposed sentence-level document augmentation strategies, ESACL builds an autoencoder with denoising capability through fine-tuning. We empirically evaluate ESACL on two datasets, both quantitatively and qualitatively. The results demonstrate that ESACL outperforms several cutting-edge baselines. We also examine the impact of different augmentation strategies on performance and explore the robustness of ESACL. \clearpage